404: Page not found
Sorry, we've misplaced that URL or it's pointing to something that doesn't exist.
I’m Timothy Stewart (Techno Tim), a full stack software engineer, content creator, and HomeLab enthusiast. I create fun and easy-to-follow tech content on YouTube, host a community live stream on Twitch, and share tech related content on all social platforms. I even host a community wiki that is open for anyone to contribute to from our Discord Community. I also create and contribute to many open source projects. Even my documentation site for all my videos is open source! I really enjoy building open source software, creating and contributing to communities, teaching through video content, and helping out anywhere on the web.
If you’d like to connect with me please see my list of social links here!
45Drives is De-Microsoft-ifying and leading the charge by replacing Windows with Linux desktops and replacing proprietary solutions with open source. This topic of “demicrosoftification” was discus...
Say goodbye to all of the other Home Lab Dashboards that you end up not using, it’s time to use something smarter, Home Assistant. 📺 Watch Video Disclosures Nothing in this video was sponso...
This simple but powerful little adapter lets you build your own Zigbee network and easily add and manage it in Home Assistant, no hub required! 📺 Watch Video Disclosures Nothing in this vid...
There’s building a MINI SERVER RACK and then there’s beating Raid Owl in the mini server rack build challenge. Let’s see if I can do both. 📺 Watch Video Disclosures This video no longer has...
In this tutorial we’ll walk through my local, private, self-hosted AI stack so that you can run it too. 📺 Watch Video Disclosures Nothing in this video was sponsored Info If you’re looki...
I built a private, local, and self-hosted AI stack to help me out with daily tasks. 📺 Watch Video Disclosures Thanks to Surfshark Sponsoring this Video! Secure your privacy with Surfshark!...
Tracking things on the web just got a whole lot easier with ChangeDetection, the free and open source Docker container! Track website changes, price changes of products, and even track out of stoc...
Which is the best NAS operating system to use at home and in your HomeLab? Is it Unraid for maximizing storage efficiency? Or is it TrueNAS for bringing enterprise ZFS to home? Let’s find out. ...
Proxmox Helper Scripts is a collection of scripts to help you easily make changes to your Proxmox VE server along with installing many LXC containers. This makes installing, configuring, and main...
I knew nothing about Unraid until today. I finally installed Unraid in my HomeLab on one of my servers. Is it any good? Does it live up to the hype? Let’s find out in my candid walkthrough of Un...
After showing off my Home Lab hardware in my late 2021 tour, many of you asked what services are self-hosted in this stack. This is always a moving target so I decided it was time to share which se...
I’ve been on a quest to find a new logging system. I’ve used quite a few in the past, some open source, some proprietary, and some home grown, but recently I’ve decided to switch. I’ve switched to Gra...
In my previous video (Meet Grafana LOKI, a log aggregation system for everything) and post, I promised that I would also explain how to install Grafana Loki on Kubernetes using helm. If you’re looki...
Well, here it is! My Late 2021 Server Rack and HomeLab tour! This is a special one because I just revamped and remodeled a spot in the basement for my new data center / server room (still picking...
Windows 11 is here and with it comes new hardware requirements. These requirements affect not only physical hardware but also virtual hardware. The TPM 2.0 requirement for Windows 11 is shaking t...
You’ve spun up lots of self-hosted services in your HomeLab but you haven’t set up monitoring and alerting yet. Well, be glad you waited because today we’ll set up Uptime Kuma to do just that. Uptime ...
Meet NUT Server, or Network UPS Tools. It’s an open source UPS network monitoring tool that runs on many different operating systems and processors. This means you can run the server on Linux, MacOS, or ...
Meet File Browser, an open source, self-hosted alternative to services like Dropbox and other web based file browsers. Today we’ll configure a containerized version of File Browser and have you up a...
This guide will walk you through how to install Docker Engine, containerd, and Docker Compose on Ubuntu. If you have an existing version of Docker installed, it is best to remove it first. See the Cl...
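As a rough sketch of the repository step from that kind of guide (the codename fallback and file paths here are assumptions; the actual install also imports Docker’s GPG key first, so follow the official Docker docs for the full sequence):

```shell
# Build the apt repository line Docker's Ubuntu install docs use.
# Nothing below installs anything; it only derives and prints the line.
. /etc/os-release 2>/dev/null || true
codename="${VERSION_CODENAME:-jammy}"                  # fallback is a placeholder
arch="$(dpkg --print-architecture 2>/dev/null || echo amd64)"
repo="deb [arch=${arch} signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/ubuntu ${codename} stable"
echo "$repo"
# Next steps (not run here): write $repo to /etc/apt/sources.list.d/docker.list,
# then: sudo apt-get update && sudo apt-get install \
#   docker-ce docker-ce-cli containerd.io docker-compose-plugin
```

The point of deriving the codename and architecture instead of hardcoding them is that the same snippet works across Ubuntu releases.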
Meet LittleLink &amp; LittleLink-Server - a DIY, self hosted, and open source alternative to the popular service Linktree. This website, which runs inside of a container, allows you to create and host your own...
People have asked how I’ve been able to create and grow a Tech YouTube channel and what my process is when planning, filming, editing, and producing content. Today we talk about just that. All my sec...
As you may know, Proxmox is my current choice for a hypervisor. Proxmox 7 is here and comes with a host of new features! In this video we’ll cover all of the new features in Proxmox 7 as well as h...
Have you ever thought about running a Linux desktop inside of a container? Me neither until I found this awesome project from LinuxServer called Webtops. A webtop is a technology stack that allows ...
Authelia is an open source Single Sign On and 2FA companion for reverse proxies. It helps you secure your endpoints with single factor and 2 factor auth. It works with Nginx, Traefik, and HAProxy. To...
In some of my previous Pi-Hole videos many of you spotted my blocklist with over a million sites added and you wondered how you can do the same. Well, today I show you how to block more ads, block ...
Today, we’re going to use SSL for everything. No more self-signed certs. No more HTTP. No more hosting things on odd ports. We’re going all in with SSL for our internal services and our external services...
Pi-Hole is a wonderful ad blocking DNS server for your network, but did you know you can also use it for a Local DNS server? In this fast, simple, and easy guide we’ll walk through how to create DNS...
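For reference, Pi-Hole stores its local DNS records in plain hosts-file format, one "IP hostname" pair per line (on Pi-Hole v5+ the file is /etc/pihole/custom.list). A sketch using a temp file and made-up hostnames:

```shell
# Pi-Hole local DNS records are just hosts-file entries.
# Writing to a temp file here; on a real install edit
# /etc/pihole/custom.list instead (hostnames below are examples).
list="$(mktemp)"
cat > "$list" <<'EOF'
192.168.1.10 proxmox.home.arpa
192.168.1.20 truenas.home.arpa
EOF
cat "$list"
# After editing the real file, apply it with: pihole restartdns reload
```

Using a reserved suffix like `.home.arpa` avoids colliding with real public domains.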
Today in this step by step guide, we’ll set up Grafana, Prometheus, and Alertmanager to monitor your Kubernetes cluster. This can be set up really quickly using helm or the Rancher UI. We’ll install ...
This guide is for installing Traefik 2 on k3s. If you’re not using Rancher, that’s fine, just skip to Reconfiguring k3s. Note: There is an updated tutorial on installing Traefik + cert-manager on...
Today we’re going to talk about the new Cluster Explorer in Rancher. The Cluster Explorer is the new fancy user interface that will replace the old Cluster Manager. The new UI contains lots of new ar...
Building a Multi-architecture CPU Kubernetes cluster is easier than you think with k3s. In this video we’ll build a Raspberry Pi 4 with an ARM CPU and add it to our existing x86 x64 amd64 CPU Kubern...
Rancher vs. Portainer, which one is better? Which one should I choose? Can Portainer manage Kubernetes? Can Rancher manage Kubernetes? We answer all these questions and more in this quick, no f...
Lots of people ask which terminal I use on Windows and how I configure it. It’s pretty simple, I use the Microsoft Windows Terminal and it’s a fantastic terminal on Windows. It is free and open sourc...
What is a Home Lab and how do you get started? It’s easy. You can get started today in a few different ways. You can virtualize your entire home lab or build it on an old PC, a Raspberry Pi, or ...
Updating Portainer is easy, if you know how. In this quick no fluff video, I will show you how to update any version of Portainer. This guide can be used for installing it too. Portainer is a containe...
Handbrake is a fantastic open source transcoder. It allows you to transcode, or convert, your video files into different formats. It has a nice UI that’s easy to use and helps you transcode videos v...
In this quick no fluff video, I will show you how to create a multi-bootable USB drive with Ventoy that can boot all of your ISO, WIM, IMG, VHD, and EFI files. It supports both MBR and GPT partition...
Dual booting Windows and Ubuntu Linux can be a pain, however there are many benefits to doing this if you do it right. In this tutorial we’ll discuss how to dual boot Windows and Ubuntu on your PC or...
My life, ran against a neural network and detected by Deep Learning. If you’d like to see how this video was generated using ML and Deep Learning, check out the video here: How this video was gener...
The NVIDIA RTX 3090 is a beast. We all know it can beat the benchmarks in gaming, but how about machine learning and neural networks? Today we walk through the RTX 3090 and then compile and run Dar...
I am a huge fan of self hosted home security and I’ve been doing it for years. I love the idea of being able to check on my home when I am away. Also, I’ve always kept my video footage on premise (o...
Internet speed tests are full of junk, ads, tracking, and some even contain deprecated plug-ins. Who needs this when we can self-host an open source one? LibreSpeed is a lightweight speedtest imple...
Storage in Kubernetes is hard, complicated, and messy. Configuring volumes, mounts, and persistent volume claims and getting it right can be a challenge. It’s also challenging to manage that storage...
Ansible. Need I say more? Well, maybe, if you’ve never heard of it. Ansible is a simple IT / DevOps automation tool that anyone can use. You can automate anything with an SSH connection and WITHOUT insta...
Are you running Kubernetes in your homelab or in the enterprise? Do you want an easy way to manage and create Kubernetes clusters? Do you want high availability Rancher? Join me as we walk through...
Are you running Kubernetes in your homelab or in the enterprise? Do you want an easy way to manage and create Kubernetes clusters? Join me as we walk through installing Rancher on an existing high ...
Dear Pi-Hole, We love your product. It keeps our network safe from malware and other unwanted domains. While we love what is there so far, please add a feature to your core product to keep multiple...
After setting up my Proxmox servers, there are a few things I do before I use them for their intended purpose. This ranges from updates, to storage, to networking and VLANs, to uploading ISOs, to cl...
I’ve been making great use of some older, bigger servers but I decided to try and build, upgrade, and migrate to some 1U servers.Join me as we unbox and build my 2 new virtualization servers! 📺 ...
Self hosting a VPN has traditionally been hard to set up and we’ve had very few options. That is until WireGuard came about. WireGuard is an extremely simple yet fast and modern VPN that utilizes st...
Are you thinking about ditching Google apps or looking for a Dropbox replacement? Are you ready to self host your own productivity platform? Well, Nextcloud may be for you! In today’s tutorial w...
After setting up my Linux servers, there are a few things I do before I use them for their intended purpose. This ranges from security, to tools, to config. Join me as we set up our first Linux serve...
I decided to give my Home Lab a proper upgrade for 2020 and into 2021! I finally took the plunge and went all in with a UniFi UDM Pro and a UniFi Switch PRO 24 PoE switch and they are awesome! ...
Have you been putting off migrating your database to Docker and Kubernetes like I have? Well wait no longer. It’s simple using this step-by-step tutorial. Today, we’ll move a database that’s on a vi...
We’ve already figured out how to pass through a GPU to a Windows machine but why let Windows have all the fun? Today, we do it on an Ubuntu headless server that’s virtualized, run some AI and Deep L...
I am betting you have at least 3 infrared remote controls in your house. I am also willing to bet you would love to automate some of these from time to time. Well don’t worry, I have the solution for y...
Do you have a lot of virtual machines? Are you running Windows, Linux, and Mac and need remote access from a single UI? Well, Apache Guacamole is for you! Apache Guacamole is a clientless remote...
Do you have some places where you can’t run ethernet? Do you want to extend your ethernet without pulling more cable? Well this is the guide for you. In this step-by-step tutorial we’ll use a Ubiquiti...
So you’re a software engineer or a developer who wants to self-host your own code in your own homelab? Well this is the tutorial for you! In this step-by-step guide we’ll walk through setting up ...
Do you want to self host your Rancher UI securely in your homelab? Have you thought about putting your Rancher UI behind Traefik and your reverse proxy to get free SSL certificates using Let’s Encr...
What’s new in Portainer 2.0? Well, a ton. With the release of Portainer 2 you now have the option to install Kubernetes. This makes installing, managing, and deploying Kubernetes really easy. In this ...
Are you trying to access your self-hosted services outside of your firewall? Are you tired of trying to remember your IP when away, or worse yet, having your ISP change your IP address? Have you ...
Are you self-hosting lots of services at home in your homelab? Have you been port forwarding or using VPN to access your self-hosted services wishing you had certificates so that you can access th...
Have you ever wanted to run VS Code in your browser? What if you had access to your terminal and could pull and commit code as well as push it up to GitHub all from a browser or tablet? That’s ex...
Want to migrate FreeNAS to TrueNAS today? It’s simple using this step by step tutorial. We’ll walk through how to upgrade FreeNAS to TrueNAS CORE. We’ll cover upgrading FreeNAS to TrueNAS on a physic...
Proxmox Backup Server is an enterprise-class client-server backup software that backs up virtual machines, containers, and physical hosts. In this step by step tutorial, we install and configure Pro...
In my homelab tour, I showed you my hardware and network setup that powers all the infrastructure at home. Then, many of you asked which services I am hosting on this hardware. Well, here it is. This...
You asked for a tour of my homelab, well here it is. In this tour I will take you through my home server rack and network setup. This includes all of my home networking equipment, my servers, dis...
Slack is a great chat and communication tool used by small and large businesses as well as for personal use. Slack has a great API and great official Node.js clients that help you automate many features...
It used to be hard to back up Rancher, but with Rancher 2 it’s super simple. Upgrading, backing up, and restoring your Rancher server should be part of your regular routine. Join me in this tutorial a...
Tired of bookmarking all of your self-hosted services only to lose them? Want access to all your sites from anywhere in the world? Well, Heimdall can help with a clean, responsive, and beautiful d...
Are you ready to start automating your smart home with the power of open source? Do you already have Home Assistant running but need a little more power than a Raspberry Pi? If so, join me in thi...
Should I virtualize this? Should I containerize this? These are great questions to ask yourself when spinning up self-hosted services in your Homelab environment. We’ll review my previous video (2...
We know you’ve heard of Pihole and we know you are probably aware of how to install it but… have you tried running it on Docker and Kubernetes using Rancher? Have you configured it for pfSense? D...
I’m a huge fan of virtualization and containerization (if you couldn’t tell already)! Today, we’ll walk through the various ways to install Plex step-by-step. We also see how easy it is to get Plex ...
It’s time to say goodbye to your home router and start virtualizing it using Proxmox and pfSense. pfSense Community Edition Download: https://www.pfsense.org/download/ Get started with Proxmox tod...
Streamlabs OBS for MacOS is here! In this video we’ll walk through setting up Streamlabs step by step. We’ll install Streamlabs OBS, set up desktop audio with iShowU Audio Capture so you can captur...
Let’s build a bot! Not a bad bot like a view bot, but a bot for good. Let’s build a Twitch moderator bot using tmi.js! The Twitch API is powerful and already has lots of great bots, however no bo...
Looking for new ideas on how to use your virtual machines? Well, here’s the top 20 ways to use your virtual machines in your homelab. 📺 Watch Video Links 🛍️ Check out the new Merch Shop at ht...
I decided to tear apart our office and convert my old Ikea hack table tops into a standing desk. Oh, and I also clamped on three 27” 1440p gaming monitors while I was at it 😉 📺 Watch Video Links ...
If you want to set up Kubernetes at home using Rancher to run Docker containers, this is the guide for you. This is a step by step tutorial of how to install and configure Rancher, Docker, and Kube...
Have you been thinking about updating your Proxmox VE server? Well, what are you waiting for? Upgrade your Proxmox server in your home lab in just a few minutes with this step-by-step tutorial! ...
Are you looking to build a remote gaming machine and passthrough your GPU to a virtual machine? Do you want to use GPU acceleration for transcoding Plex or Adobe Media Encoder? Do you dream of se...
Do you need to virtualize Ubuntu Server with Proxmox? Join me as we install and configure Ubuntu Server LTS on Proxmox VE step-by-step using the best performance options. 📺 Watch Video Links 🛍...
Do you need to virtualize Windows 10 with Proxmox? Join me as we install and configure Windows 10 on Proxmox VE step-by-step using the best performance options. 📺 Watch Video Links 🛍️ Check ou...
Do you need to virtualize something at home? Thinking of building your own Homelab? (The answer is YES.) Join me as we install and configure Proxmox VE step-by-step. 📺 Watch Video Links 🛍️ Che...
You want to get started developing JavaScript with NodeJS, ReactJS, or AngularJS but you’re not sure how to get started? This is a complete, step by step guide on how to configure your Windows mac...
Setting up iSCSI with TrueNAS and Windows 10 is super simple. This is an easy way to have a hard drive installed on your machine that isn’t really attached; it lives on the network. ...
Do you want a DIY NAS? Do you want to set up TrueNAS? Have you considered virtualizing TrueNAS with Proxmox? In this video we’ll walk through installing and setting up TrueNAS and configure a sa...
Let’s build a bot! Not a bad bot like a view bot, but a bot for good. Let’s build a Discord moderator bot using discord.js! Discord is a powerful chat + video client and already has lots of great bots...
Let’s compare Touch Portal to Stream Deck. We’ll walk through some of the similarities and differences between the free software Touch Portal and the Stream Deck hardware/software combination. We’ll ...
There are so many upgrades out there for streaming, what do I start with? Video card? Microphone? Audio? CPU? RAM? Lights? I started with one that is overlooked by many streamers, and it’s the roo...
Do you want the best settings for OBS in 2020? This is the ultimate OBS settings guide with the BEST OBS settings for streaming Fortnite, Just Chatting, APEX Legends, PUBG, or really ANY game. This v...
Connect any wireless headset to a GoXLR or GoXLR mini. In this video, I show you how you can connect any pair of wireless bluetooth headphones to a GoXLR or GoXLR mini. They can be AirPods, Beats, B...
Today I got rid of the slow and pesky microSD card in my Pi and replaced it with something MUCH faster in my Pi LED Panel. Don’t know what my Pi LED Panel is? Check it out! This is my first video...
Today I built the ultimate, all in one, HomeLab Home Server to handle everything. 📺 Watch Video Disclosures: Sliger did send this case to me however asked for nothing in return. Other 4u ...
In today’s Traefik tutorial we’ll get FREE Wildcard certificates to use in our HomeLab and with all of our internal self-hosted services. We’re going to set up Traefik 3 in Docker and get Let’s En...
The UDM Pro Max is here and it’s packed with upgrades like a faster CPU, more RAM, an internal SSD, more eMMC, dual drive bays and more! Today we check out the new UniFi Dream Machine Pro Max, conf...
I just discovered Multus and it fixed Kubernetes networking! In this video we cover a lot of Kubernetes networking topics from beginner topics like CNIs, to advanced topics like adding Multus for ...
After moving some of my HomeLab servers into the new colocation I have so many choices to make when it comes to services and architecture! From networking, to VPN, to security, to hypervisors, to b...
After a few months of planning and building, I colocated some of my homelab servers in a data center! There were so many unknowns like, how much does colocating a server cost? Do you need to bring y...
LocalSend is an open source application that securely transfers files between devices without the internet. It’s cross platform meaning that it’s available for Windows, Mac, Linux, iOS (iPhone, i...
Meet Gatus, a self-hosted, open source health dashboard that lets you monitor all of your services and systems! This dashboard not only tracks your uptime, but also measures the results, plotting the...
Today marks a very special day for me. It’s something I’ve been working on for quite some time, and it’s finally ready for everyone to see! 🎉 Today, I launched the Techno Tim Shop with its first d...
After setting up your TrueNAS server there are lots of things to configure when it comes to tuning ZFS. From pools, to disk configuration, to cache, to networking, backups, and more. This guide wil...
After many hours of testing, swapping, resetting, and EDID training, all of my PiKVM and TESmart issues were solved with a simple, cheap dongle. If you aren’t aware of the struggles I faced ...
Meet Homepage, your new HomeLab services dashboard homepage! Homepage is an open source, highly customizable homepage (or startpage) dashboard that runs on Docker and is integrated with over 100 ...
What a year of self-hosting! Join me as we walk through my entire infrastructure and services that I have running in my HomeLab! This time I also include network diagrams and dive deep into which ...
Introducing the UniFi Pro Max 24 PoE and UniFi Etherlighting™ Patch Cables from Ubiquiti! We’ll discuss what makes this switch unique, how they are different from the existing pro line, and even t...
Well, here it is! My Late 2023 Server Rack and HomeLab tour! I’ve upgraded, replaced, added, and consolidated quite a bit since my last tour! New servers, new networking, UPS, cabling, power man...
The UniFi Express from Ubiquiti is here and it’s going to shake up how we connect small and home networks. It’s a gateway that has WiFi 6 that runs the UniFi network application and can transform ...
The HL15 from 45Drives is here. It brings a lot of unique features and was built and designed with the HomeLab community in mind. In this in-depth review we’ll cover everything you want to know a...
Imagine all of your favorite operating systems in one place, available anywhere on your network, and you’ll never need to use your flash drive again. That’s the promise of netboot.xyz, a network b...
Introducing the ZimaBlade, an affordable, low power, single board computer that’s great for a home server, homelabs, tinkering, NAS, retro gaming, or even a dual boot desktop system like me. 📺 W...
Ever wonder what my home office and studio looks like and which tools I use? Check out my NEW ultimate desk & setup Tour for 2023! (My setup, my desk, my workbench, and even my studio rack for ...
I debated buying a new Mac due to its limited options for expandability. This all changed when I found a way to not only rackmount my Mac, but add PCIe slots to add additional components like NVMe ...
Cut the cord and get free over the air TV with Plex! Today we’ll dive deep into selecting a TV tuner, an antenna, dialing in your TV signal, and configuring Plex to help you get the most out of Li...
I took a trip with 3 other Tech YouTubers to 45 Drives Headquarters to see the new 45 HomeLab HL15 and other devices during their first ever Creator Summit to discuss storage! We take a look at lo...
45Drives HQ, located in Sydney, Nova Scotia, Canada. I was invited to 45Drives Headquarters for a Creator Summit on Data Storage. 45Drives invited Tech YouTubers like ...
Today we’re going to maximize your Productivity on Windows with Microsoft PowerToys. I’ll show you step-by-step how you can use, customize, and be more efficient when using Microsoft PowerToys. ...
I was unsatisfied with the huge wall adapter that many products ship with, so I replaced it! Want to power a mini PC or a smaller device with Power Over Ethernet (POE)? No problem! 📺 Watch Vide...
I decided to go with another rack in my home but this time much smaller! Thanks to Rackstuds for sending a few packs of Rackstuds! 📺 Watch Video Where to Buy Products in this video: SYSRAC...
I’ve had a ton of fun setting up and configuring a ZimaBoard and CasaOS over the last few weeks! While CasaOS is a great fit for your Home Server projects, I also decided to walk through over 20 o...
This week I finally decided to automate the watering of my lawn and garden without irrigation, here’s how… 📺 Watch Video Automation Without Irrigation Since I don’t have irrigation, I have to ...
Keeping track of container image updates is hard. I started using Renovate Bot to track these for me and I now get pull requests from a bot for my Docker and Kubernetes container images. It’s a ...
Proxmox 8.0 has been released (June 22, 2023) and includes several new features and a Debian version upgrade. Among the changes are: Debian 12 “Bookworm”, but using a newer Linux kernel 6.2 ...
A year ago I started a challenge that encouraged everyone to join the #100DaysOfHomeLab challenge, a challenge designed to help improve your skills in IT. This is similar to any of the “100 Days” c...
This has been months in the making, my new Mobile HomeLab! It’s a device that I can take with me to provide secure internet access for all of my devices. Not only can it provide secure access, but ...
Meet Scrypted an Open Source app that will let you connect almost any camera to any home hub, certified or not! You can connect popular devices from UniFi, Amcrest, Hikvision, Nest & Google, T...
What is Reflector? Reflector is a Kubernetes addon designed to monitor changes to resources (secrets and configmaps) and reflect changes to mirror resources in the same or other namespaces. Since s...
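As a sketch of how that mirroring is switched on, a source Secret carries Reflector's annotations (the secret name, namespaces, and data below are made-up examples; the annotation keys are Reflector's own):

```yaml
# Hypothetical Secret that Reflector will mirror into other namespaces.
apiVersion: v1
kind: Secret
metadata:
  name: shared-credentials        # placeholder name
  namespace: default
  annotations:
    reflector.v1.k8s.emberstack.com/reflection-allowed: "true"
    reflector.v1.k8s.emberstack.com/reflection-allowed-namespaces: "media,monitoring"
    reflector.v1.k8s.emberstack.com/reflection-auto-enabled: "true"
type: Opaque
stringData:
  password: "example"             # placeholder value
```

With auto-enabled reflection, Reflector creates and keeps the mirror copies in sync whenever the source changes.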
I’ve been running a few clusters in my HomeLab over the past few years but they have always been virtualized inside of Proxmox. That all changed today when I decided to run my Kubernetes cluster on ...
What if I told you that this little machine is the perfect Proxmox Virtualization server? And what if I told you I crammed an Intel Core i5, 64 GB of RAM, a 1 TB NVMe SSD, and another 1 TB SSD all in t...
Today, we’re going to set up and configure Terraform on your machine so we can start using Terraform. Then we’ll configure cf-terraforming to import our Cloudflare state and configuration into Terra...
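Before cf-terraforming can import anything, Terraform needs the Cloudflare provider configured. A minimal sketch under assumptions (the variable name is ours, and in practice you'd pin a provider version you've tested):

```hcl
# Minimal Terraform setup for the Cloudflare provider (sketch).
terraform {
  required_providers {
    cloudflare = {
      source = "cloudflare/cloudflare"
    }
  }
}

# Keep the API token out of source control; pass it via
# TF_VAR_cloudflare_api_token or a variables file.
variable "cloudflare_api_token" {
  type      = string
  sensitive = true
}

provider "cloudflare" {
  api_token = var.cloudflare_api_token
}
```

With this in place, `terraform init` downloads the provider and cf-terraforming can generate resource blocks against the same account.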
If you’ve been encrypting your secrets with SOPS and Age you know how useful it is to keep your secrets safe from prying eyes. If you’re not familiar with encrypting your secrets with SOPS and Age,...
What is a VLAN and How Does It Help? Today we’re going to cover setting up VLANs using UniFi’s network controller. We’ll set up a VLAN, from start to finish, which includes creating a new network, ...
What is Wake on LAN and why is it so hard? After releasing my video on the PiKVM I realized that there was so much confusion about Wake on LAN (and rightfully so) that I decided to put together th...
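Part of the confusion is that the "magic packet" itself is trivial: 6 bytes of 0xFF followed by the target's MAC address repeated 16 times, usually sent as a UDP broadcast (often to port 9). A sketch that just assembles the 102-byte payload as hex (the MAC below is a placeholder):

```shell
# Assemble a Wake on LAN magic packet as a hex string:
# 6 x FF, then the MAC repeated 16 times = 102 bytes total.
mac="00:11:22:33:44:55"            # placeholder MAC address
machex="$(printf '%s' "$mac" | tr -d ':')"
payload="ffffffffffff"
i=0
while [ "$i" -lt 16 ]; do
  payload="${payload}${machex}"
  i=$((i + 1))
done
echo "${#payload} hex chars"       # 204 hex chars = 102 bytes
# Actually sending it (not done here) needs a UDP broadcast tool,
# e.g. wakeonlan or etherwake, since pure shell has no raw sockets.
```

The hard parts in practice are not the packet but the NIC/BIOS settings and getting the broadcast onto the right subnet or VLAN.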
If you’re looking to configure the TESmart switch with PiKVM, I finally figured it out and you can read all about it here. What is the PiKVM? If you don’t know what a KVM switch is, it’s a de...
What is FAST.com? FAST.com is a speed test that gives you an estimate of your current Internet speed. It was created by Netflix to bring transparency to your upload / download speeds and to see if your ...
What is Flux? Flux is a tool for keeping Kubernetes clusters in sync with sources of configuration (like Git repositories), and automating updates to configuration when there is new code to deploy...
What is Mozilla SOPS? SOPS is an editor of encrypted files that supports YAML, JSON, ENV, INI and BINARY formats and encrypts with AWS KMS, GCP KMS, Azure Key Vault, age, and PGP. It’s open source ...
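For example, a repository-level `.sops.yaml` tells SOPS which files to match, which fields to encrypt, and which key to encrypt to. A hedged sketch; the path regex, field regex, and age recipient below are placeholders you'd replace with your own:

```yaml
# .sops.yaml (sketch): encrypt only the data/stringData values of
# files named *.secret.yaml with an age recipient.
creation_rules:
  - path_regex: .*\.secret\.ya?ml$
    encrypted_regex: ^(data|stringData)$
    # Placeholder recipient; use the public key from your own
    # age-keygen output (it starts with "age1").
    age: age1qqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqq
```

Restricting encryption to `data`/`stringData` keeps the rest of a Kubernetes manifest readable in Git while the actual secret values stay ciphertext.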
What is Age? age is a simple, modern and secure file encryption tool, format, and Go library. It features small explicit keys, no config options, and UNIX-style composability. It is commonly used i...
MaaS, or Metal as a Service, from Canonical is a great way to provision bare metal machines as well as virtual machines. MaaS allows you to deploy Windows, Linux, ESXi, and many other operating system...
What Is Nested Virtualization? Nested Virtualization is a feature that allows you to run a virtual machine within a virtual machine while still using hardware acceleration from the host machine. Pu...
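On a Linux/KVM host you can check whether nesting is on by reading a kernel module parameter. A small sketch, demonstrated against a stand-in file since the real sysfs path only exists on a KVM host:

```shell
# Nested virtualization is exposed as a kernel module flag. On Intel
# hosts the file is /sys/module/kvm_intel/parameters/nested (kvm_amd
# for AMD); it reads Y or 1 when enabled.
nested_enabled() {
  # $1: path to the "nested" parameter file
  case "$(cat "$1" 2>/dev/null)" in
    Y|y|1) echo enabled ;;
    *)     echo disabled ;;
  esac
}

# Demo against a temp file standing in for the sysfs path:
flag="$(mktemp)"; printf 'Y\n' > "$flag"
nested_enabled "$flag"     # prints "enabled"
# On a real host: nested_enabled /sys/module/kvm_intel/parameters/nested
```

The guest VM's CPU type also has to pass the virtualization flags through (e.g. CPU type "host" in Proxmox) for nesting to work end to end.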
ZFS is a great file system that comes with TrueNAS and can meet all of your storage needs. But with it comes some complexity on how to manage and expand your ZFS storage pools. Over the last week I l...
Wow, what a year of self-hosting! After showing off my Home Lab hardware in my late 2022 tour, many of you asked what services are self-hosted in this stack. This is always a moving target so I dec...
Well, here it is! My Late 2022 Server Rack and HomeLab tour! This is a special one because I just revamped my entire rack. I’ve upgraded, replaced, added, and consolidated quite a bit since my las...
Setting up alerts in Proxmox is important and critical to making sure you are notified if something goes wrong with your servers. It’s so easy, I should have done this years ago! In this tutorial, ...
Here’s a quick way to automate your battery backups and UPSes with an open source service called NUT server and a Raspberry Pi. 📺 Watch Video NUT Server Install script Be sure to check out (a...
Today I look at 2 (or 3 depending on how you count them) UPS systems from Tripp Lite and Eaton. These UPS devices couldn’t be any more different but they are awesome nonetheless. Each has its own unique ...
I’ve been on a quest looking for a new server rack for my HomeLab in my home. I’ve outgrown my current 18u open frame rack and decided to give a 32u Sysracks Enclosed Rack a try! Join me as we put ...
My Storinator server from 45Drives is great, except for one thing. It’s a little loud for my home. It would be fine if it were in a data center or a real network closet, however this is in my basement....
Committing secrets to your Git Repo can expose information like passwords, access tokens, and other types of sensitive information. Some might think that committing secrets to a private Git Repo is ...
Check out my new server! It’s a Storinator AV15 from 45Drives loaded with lots of great upgrades! Will it be my new high performance storage server and replace TrueNAS? Will it be my new hype...
Every Home Labber and IT person has their go-to set of tools and accessories to help them accomplish tasks for technical projects in their HomeLab. This ranges from the very specialized, to the comm...
Traefik, cert-manager, Cloudflare, and Let’s Encrypt are a winning combination when it comes to securing your services with certificates in Kubernetes. Today, we’ll install and configure Traefik, th...
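The glue between cert-manager, Let's Encrypt, and Cloudflare is a ClusterIssuer that solves DNS-01 challenges through the Cloudflare API. A sketch under assumptions: the email, the token Secret name, and its key are placeholders, and Let's Encrypt's staging endpoint is used for safe testing before switching to production:

```yaml
# Hypothetical ClusterIssuer for DNS-01 via Cloudflare (sketch).
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-staging
spec:
  acme:
    server: https://acme-staging-v02.api.letsencrypt.org/directory
    email: you@example.com              # placeholder contact email
    privateKeySecretRef:
      name: letsencrypt-staging         # ACME account key storage
    solvers:
      - dns01:
          cloudflare:
            apiTokenSecretRef:
              name: cloudflare-api-token   # placeholder Secret holding the token
              key: api-token
```

DNS-01 is what makes wildcard certificates possible, since no inbound HTTP reachability is required for the challenge.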
YouTube sent a package.I have a feeling I know what it is, but we’ll all find out live! 📺 Watch Video Find all of my server gear here! https://kit.co/TechnoTim/techno-tim-homelab-and-server-roo...
Grafana and Prometheus are a powerful monitoring solution.It allows you to visualize, query, and alert metrics no matter where they are stored.Today, we’ll install and configure Prometheus and Graf...
After deciding to upgrade my “old” 24 port PoE switch to a new 48 port PoE switch with 4 SFP+ ports, I decided to check to see if my old house with old Cat5e network wiring will work at 10 gigabit speed...
If I could start my HomeLab all over, what would I choose? Would I choose the same servers, rack, networking, gateway, switch, firewall, my pc conversion, and even my disk shelf NAS? Did I make a...
It’s here. The #100DaysOfHomeLab challenge! This challenge is meant to accelerate your knowledge in servers, networking, infrastructure, automation, storage, containerization, orchestration, virtu...
Jekyll is a static site generator that transforms your plain text into beautiful static web sites and blogs. It can be used for a documentation site, a blog, an event site, or really any web site you...
I think I found the perfect GitOps and DevOps toolkit with FluxCD and Kubernetes. Flux is an open source GitOps solution that helps you deploy apps and infrastructure with automation. It can monitor ...
Pterodactyl is a free and open source dedicated game server. It comes with both a panel to configure and deploy your game servers as well as game server nodes to run your games. It runs games in Docke...
Tdarr is a distributed transcoding system that runs on Windows, Mac, Linux, Arm, Docker, and even Unraid. It uses a server with one or more nodes to transcode videos into any format you like. Toda...
Setting up k3s is hard. That’s why we made it easy. Today we’ll set up a High Availability K3s cluster using etcd, MetalLB, kube-vip, and Ansible. We’ll automate the entire process, giving you an easy,...
Using Cloud Images and Cloud Init with Proxmox is easy, fast, efficient, and fun! Cloud Images are small images that are certified cloud ready that have Cloud Init preinstalled and ready to accept...
Rancher released a next generation open source HCI software hypervisor built on Kubernetes that helps you run virtual machines. With Harvester you can create Linux, Windows, or any virtual machine t...
TrueNAS SCALE is here, and with it comes a new way of installing and managing applications. You can install official apps, unofficial and community apps using TrueCharts, and also any Docker image or K...
We spin up all types of containers on my channel in my tutorials, but we have yet to build our own custom Docker container image. Today we’ll start from scratch with an empty Dockerfile and create, ...
CrowdSec is a free, open-source and collaborative IPS. Analyze behaviors, respond to attacks & share signals across the community. With CrowdSec, you can set up your own intrusion detection syst...
When most people think about self-hosting services in their HomeLab, they often think of the last mile. By last mile I mean the very last hop before a user accesses your services. This last hop, wh...
Have you been thinking about building a low power, efficient, small form factor but performant Proxmox server? This is the perfect home server build for anyone who wants to virtualize some machi...
The Turing Pi 2 is a compact ARM cluster that provides scalable computing on the edge. The Turing Pi 2 comes with many improvements over the Turing Pi 1. This model ships with 32GB of RAM, SATA I...
In my quest to make my services highly available, I decided to use keepalived. keepalived is a framework for both load balancing and high availability that implements VRRP. This is a protocol that you...
A year ago I started the #100DaysOfHomeLab challenge, a challenge designed to help improve your skills in IT. This is similar to any of the “100 Days” challenges - pick a topic, stick with it for 100 days, and form a habit. Some of you might be asking what a “HomeLab” is, and I think in its simplest terms it’s a “lab environment”, mostly at home. Think of this as a test environment to learn about technology without the fear of breaking anything. If you’d like to learn more about HomeLabs and how to get started, I summarized it in a video.
After creating the challenge, I had lots of folks join in on Twitter, Mastodon, YouTube, Instagram, and many other social networks using the hashtag #100DaysOfHomeLab, and it’s still going today!
I ended up sticking with it, posting on socials (when I remembered), and took it all the way to 1 year! While some posts seemed redundant and repetitive, I kept on building, breaking, learning, and posting. Once I hit 100 Days, I decided to see how long I could go. 100 days turned into 200 days, and 200 days turned into 300… and today I hit 365 days. Looking back at my very first tweet, it seems I missed a few days of sharing, or I am really bad at math. If you can spot where I missed or messed up, let me know in the comments below! 😀
OK, my turn 😀
— Techno Tim (@TechnoTimLive) June 12, 2022
Day 1 #100DaysOfHomeLab
Since Day 0 was spent launching the video, planning, connecting with people, and celebrating 100k - today will be spent:
Operationalizing the self-hosted website & Bot pic.twitter.com/5EObleZiAV
Over the last year I learned so much about HomeLabbing, but specifically Docker, Kubernetes, networking, ZFS, GitOps, and many other related technologies. You can see all of my 100 Days of HomeLab tweets here, but I will summarize some of the topics.
I started out by creating a Twitter bot that would retweet everyone who was joining the challenge. I felt like this was important to build and grow a community around HomeLab, and a simple way to bring people together. This is a self-hosted bot that I wrote myself, and I even open sourced the code!
I also decided to create a 100 Days of HomeLab website so people could learn more about the challenge and even showcase some of the creators I worked with to make this possible. Huge thanks to all creators, featured on this page or not, who joined in on the fun!
Another bucket of learnings was what not to do. These can be seen as mistakes, but I looked at them as opportunities. These were things like:
It wasn’t all bad; I also picked up some good habits and learned what I should continue doing in the future:
I made lots of changes to my HomeLab over the past year, from a pile of machines on a shelf, to an open post rack, to a fully enclosed server rack in a room I converted to a server room. With this came new challenges like networking, power, and even RGB. I was sent a Storinator from 45 Drives and really expanded my storage while deprecating my old Disk Shelf. I also picked up a handful of low power devices and built a small low power cluster of Intel NUCs and rack mounted them in my server rack!
I also got to dive into Ansible deeper than ever before! Ansible is a powerful tool for automating things, especially infrastructure. I automated things like updates, configuration of my machines, password changes, and even building a fully HA Kubernetes cluster with k3s. The time spent learning this tool has already paid dividends compared to the time I would have put into doing these tasks manually, or even worse, piling up tech debt because I would skip them.
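To give a flavor of what that automation looks like, here is a minimal Ansible playbook sketch for the update task. The file name, the `homelab` group, and the inventory path are hypothetical examples, not my actual setup:

```yaml
# update.yml - hypothetical playbook: patch every host in the "homelab" group
- hosts: homelab
  become: true
  tasks:
    - name: Update the apt cache and apply all pending upgrades
      ansible.builtin.apt:
        update_cache: yes
        upgrade: dist
```

Run it with something like `ansible-playbook -i inventory.ini update.yml`, and every host in the group gets patched in one pass.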
I also picked up Terraform! Terraform is one of those things you may not ever learn until you need to. It’s definitely been eye opening building up new infrastructure with Terraform. Every time I see a form or a UI to create some sort of infra, I automatically think about how I can automate it with Terraform… but thinking and doing are two different things, and I need to start doing this more often. I’ve already figured out how to apply Terraform to Cloudflare DNS and will be applying it to more systems in the future.
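As a rough sketch of what that Cloudflare DNS automation can look like (the record name, IP, and zone variable here are placeholders for illustration, not my actual config):

```hcl
terraform {
  required_providers {
    cloudflare = {
      source = "cloudflare/cloudflare"
    }
  }
}

# Hypothetical A record managed as code; zone_id and the IP are placeholders.
resource "cloudflare_record" "homelab" {
  zone_id = var.cloudflare_zone_id
  name    = "lab"
  type    = "A"
  value   = "203.0.113.10"
  ttl     = 300
}
```

The nice part is that `terraform plan` shows exactly what would change before you `terraform apply` it.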
One of the biggest changes to my networking wasn’t new hardware or network speed, but VLANs. I implemented VLANs to keep all of my network traffic segmented according to the roles these devices fill. For instance, I created an IoT VLAN for all of my IoT devices, a Camera VLAN just for secure video devices, and a Server VLAN for my servers that are used for public facing services. This helps ensure that not only am I not mixing traffic, but also that I’m minimizing the blast radius if one of my devices were to become compromised. I talked about this and more security recommendations in a video here. Highly recommended if you are going to self-host anything.
The next big theme is Kubernetes, which has been a theme on my channel almost since the beginning. I now run 3 HA Kubernetes clusters at home. That might sound crazy, but it’s true. It’s taught me so much about how to build, support, and maintain one of the most popular technologies in the world. It’s been challenging but rewarding at the same time. I ended up going all in and migrating all of my Docker-only hosts to Kubernetes. I no longer have single Docker hosts (pets); I now have Kubernetes nodes (cattle). Once I moved everything to Kubernetes, I quickly learned that I needed a better way to manage it than just a UI or applying manifests from the CLI.
(GitOps has entered the chat)
GitOps is such a huge term, and people have varying opinions on where it starts and where it ends, but it’s the idea that Git is the source of truth to deliver infrastructure as code. What does that mean for me and my HomeLab? For me it means that my Kubernetes clusters (and custom code) are source controlled in Git, and the only way to get those changes applied is through CI. This was one of the most rewarding things I learned about during my 100 Days of HomeLab. All 3 of my Kubernetes clusters are defined in code (YAML) in a Git repository, and when I need to make changes I just commit them to my repo, push them up, and Flux takes care of the rest. It has not only taught me how to deliver infrastructure as code but also taught me about secret management with SOPS, which is such a valuable lesson, Kubernetes or not. I will be looking to expand into more IaC this year and beyond because this is truly the future of infrastructure.
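In practice, the Flux side of this is just a couple of small YAML objects. A minimal sketch (the repo URL, names, and path are placeholders, not my actual cluster repo):

```yaml
# Hypothetical: tell Flux which Git repo is the source of truth...
apiVersion: source.toolkit.fluxcd.io/v1
kind: GitRepository
metadata:
  name: homelab
  namespace: flux-system
spec:
  interval: 1m
  url: https://github.com/example/homelab-cluster
  ref:
    branch: main
---
# ...and reconcile the manifests under ./clusters/home against the cluster.
apiVersion: kustomize.toolkit.fluxcd.io/v1
kind: Kustomization
metadata:
  name: homelab
  namespace: flux-system
spec:
  interval: 10m
  sourceRef:
    kind: GitRepository
    name: homelab
  path: ./clusters/home
  prune: true
```

Commit a change to the repo and Flux applies it; delete a manifest and `prune: true` removes it from the cluster.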
Last but definitely not least is community. Doing this challenge has taught me that there are so many other people out there just like me, trying to build/break/fix/learn with a lab environment at home. There are countless times where I have been inspired by others, or even found better, more efficient ways to accomplish things by interacting with the HomeLab community. I have even picked up new tech, all thanks to you. I have met lots of people on socials and will continue to follow your journeys!
So, what are you waiting for? Want to join the 100 Days of HomeLab Challenge? You’re just one click away!
Day 365 #100daysofhomelab
— Techno Tim (@TechnoTimLive) June 20, 2023
One year. It's been a year since I have started the 100 days of #homelab and I've learned a lot. Here are my learnings in a thread 🧵
🛍️ Check out the new Merch Shop at https://l.technotim.live/shop
⚙️ See all the hardware I recommend at https://l.technotim.live/gear
🚀 Don’t forget to check out the 🚀Launchpad repo with all of the quick start source files
It’s here. The #100DaysOfHomeLab challenge! This challenge is meant to accelerate your knowledge in servers, networking, infrastructure, automation, storage, containerization, orchestration, virtualization, Windows, Linux, and more. It can even possibly accelerate your IT career! So, commit to the 100 Days of HomeLab challenge, share your progress, and encourage others along the way!
So, to celebrate my 100k subs, I brought in some of the biggest names in the HomeLab community and some new faces too! A huge thanks to everyone that took part in this video. I can’t thank you enough!
Take the challenge! https://100daysofhomelab.com/
Day 100! #100DaysOfHomeLab
— Techno Tim (@TechnoTimLive) September 19, 2022
🎉 We did it!
It's hard to believe that this challenge started 100 days ago. Still forever grateful for those in this video and all those who joined. This isn't the end, for some it's just the beginning.
🪞 Reflecting on my journey
Thank you! pic.twitter.com/XpNiTjAnCv
After deciding to upgrade my “old” 24 port PoE switch to a new 48 port PoE switch with 4 SFP+ ports, I decided to check to see if my old house with old Cat5e network wiring will work at 10 gigabit speeds! If this works, I will have a 10GbE network connection from my PCs to my HomeLab server rack!
A HUGE thank you to Micro Center for sponsoring today’s video!
New Customer Exclusive, Receive a FREE 256GB SSD in Store: https://micro.center/6af2da
Check Out Micro Center’s PC Builder: https://micro.center/f65221
Visit the Micro Center Community: https://micro.center/e64c4c
Intel Server Adapter X540-T1 - https://ebay.us/mQCVfl
USW-PRO-48-POE - https://amzn.to/3PbuFYf
Patch Panel - https://amzn.to/3yuBduk
Slim Patch Cables - https://amzn.to/3yvdmdO
10GBase-T SFP+ Transceiver - https://amzn.to/3atGdHB
Server Rack - https://amzn.to/3AKfj8S
Cat5e Spool (you should buy cat 6) - https://amzn.to/3PwNeX9
Cat6 Spool - https://amzn.to/3Pgk6TC
RJ45 Keystone Jacks - https://amzn.to/3IxwMDG
SFP+ DAC - https://amzn.to/3Pg96py
Install iperf:

```bash
sudo apt update
sudo apt install iperf
```

On the remote machine:

```bash
iperf -s
```

Then on another machine:

```bash
iperf -c 192.168.0.104 # IP of the remote machine
```
I’ve been making great use of some older, bigger servers, but I decided to try and build, upgrade, and migrate to some 1U servers. Join me as we unbox and build my 2 new virtualization servers!
Looking for new ideas on how to use your virtual machines? Well, here’s the top 20 ways to use your virtual machines in your homelab.
The NVIDIA RTX 3090 is a beast. We all know it can beat the benchmarks in gaming, but how about machine learning and neural networks? Today we walk through the RTX 3090, then compile and run Darknet, an open source neural network framework, on Windows and Ubuntu Linux, and run object detection on pictures, images, and real-time video. You will be amazed at how much more you can get out of your video card than just gaming!
Check out my new server! It’s a Storinator AV15 from 45 Drives loaded with lots of great upgrades! Will it be my new high performance storage server and replace TrueNAS? Will it be my new hypervisor and replace one of my Proxmox servers? Or will I cluster this server and do something else? Let’s see what this server is made of first!
A HUGE thank you to Micro Center for sponsoring this video!
New Customers Exclusive – FREE Redragon GS500 Gaming Stereo Speakers: https://micro.center/mkp
Check out Micro Center’s PC Builder: https://micro.center/njw
Submit your build to Micro Center’s Build Showcase: https://micro.center/gov
Check out 45Drives Storinators and other servers - https://www.45drives.com/
Seagate Exos X16 14TB Drives and more - https://kit.co/TechnoTim/best-ssd-hard-drive-flash-storage
NEW SERVER! This week I built, configured, and (kind of) racked a new server! It's a customized Storinator server from the folks over at @45Drives!
— Techno Tim (@TechnoTimLive) September 10, 2022
What do you think of the design??? 💅https://t.co/r8i1fqYETj pic.twitter.com/EvxgilZb27
45Drives HQ Located in Sydney, Nova Scotia, Canada
I was invited to 45Drives Headquarters in Sydney, Nova Scotia, Canada for a Creator Summit on data storage. 45Drives invited tech YouTubers like Jeff Geerling, Wendell Wilson, Jeff from Craft Computing, Tom Lawrence, and myself for a 3 day event to meet the 45Drives team and other experts in the storage space and discuss the future of storage.
If you want to see the video of this tour, you can check it out here!
We got a tour of the labs where they are cooking up some new storage systems
Day 1 started out with a tour of the 45Drives HQ. The office is new, full of natural light, and very modern. One of the last stops was the labs area, where they were testing hardware and cluster configurations, and even had some prototype devices.
A Storinator that includes both 2.5” drives and 3.5” drives
The first stop was the Storinator Hybrid F8 x1. This is an interesting configuration because it combines both 3.5” slots for high density drives and 2.5” slots for SSDs. This is something I have been particularly interested in because I’ve modified my Storinator to do just that. This one solution would allow me to keep my workloads that need fast storage, like virtual machines, on up to 8 SSDs, while still giving me 12 bays for HDDs that could store all of my “slow” data like videos, images, and documents.
The next stop was the Stornado Gen II 2U SATA. This machine is specifically designed for speed and offers 32 slots for all flash storage. This is also something that I am interested in, not only because it would allow me to move to an all flash solution, but also because it comes in a 2U form factor and would help reduce energy costs. This does mean, however, that I would have to spend more on SSDs, so I’m not sure if there’s a real cost benefit for my specific application. (Still want all flash storage though! 😂)
The Storinator Homelab H15 (prototype)
The next stop was one of the most anticipated devices for me (being part of the HomeLab community): the 45 HomeLab HL15. This device is meant to meet the needs of HomeLab enthusiasts. It has the same build quality you would expect from a 45Drives system, with very similar parts and storage as their enterprise versions. One of the cool things about this version is that it can be rack mounted or stand alone as a desktop. You’ll also notice that it uses a standard ATX power supply, a welcome addition for something that will primarily be used by consumers, since it allows them to use commodity hardware. 45Drives have said that they will sell this device in a few configurations, per their website:
The current server under development is a 15-bay, 4U chassis that will be offered in three options:
If you’re interested, I would highly recommend signing up for their newsletter.
The Storinator Jr., a Raspberry Pi based storage server (prototype)
The next stop was the Storinator Jr. This device is only a prototype and was inspired by Jeff Geerling’s Petabyte Pi project. The idea is simple: create a “mini” Storinator backed by a Raspberry Pi, but the technical limitations of the Pi were almost insurmountable. You’ll have to check out Jeff’s video for all of the challenges he had to overcome. To be clear, this was only an R&D project and this product may never come to market; cool nonetheless.
Mitch Hall discussing Ceph at 45 Drives
With the tour out of the way, it was time to hop into our tech discussions. 45Drives kicked it off with talks from their co-founder, architects, sales team, and more. The topics ranged from the history of 45Drives, Ceph clustering and “Cluster for Everyone”, to ransomware protection using Snapshield, which can quickly terminate connections from a suspected infected device.
Me (Timothy Stewart) discussing 45Drives at Home(Lab)
After that, it was my turn. For context, all creators were asked to share a topic related to storage, and I felt the most valuable thing I could talk about was my experience with a 45Drives Storinator at home. Also, in case you didn’t know, all current Storinators target enterprise customers and so I thought it was a rare opportunity to give them feedback about using and converting one of their storage servers for home use. This might be something that they would incorporate into their Homelab product. And yes, I did throw in the RGB idea. We’ll see how that goes…
Jeff Geerling discussing the history of 45Drives & Content Creators
Up next was a discussion led by Jeff Geerling. This was an interesting talk about the history of 45Drives’ involvement with content creators, ranging from Linus, to MKBHD, to iJustine, and even their recent round of creators (all of us). Jeff also analyzed each of the videos and broke down why some were successful vs ones that weren’t particularly successful (from a YouTube views perspective). He talked about his process, script writing, and even storytelling, and offered feedback on how they could improve the reach of their own content if they focused more on storytelling with a hook. This is something that I definitely need to work on too, so I will be trying to apply it more to my future content.
Alan Nagl giving a deep dive on hard drive technology
Next was Alan Nagl from HDStor, who did a deep dive on hard drives, how they work, and why the “SSD will never replace the HDD”. I learned quite a bit about hard drives, how they work, and HAMR hard drives, and as a bonus learned all about helium filled drives at dinner. Alan is a wealth of information when it comes to storage!
Tom Lawrence discussing ZFS and Ceph solutions
Last but not least was Tom Lawrence from Lawrence Systems. He joined virtually because he couldn’t attend in person; his topic was a discussion on how to position clustering in the market vs single server solutions. Tom had a lot of the same questions I had about Ceph: when to use it, and at what point it makes sense over a large ZFS pool.
Outside of lunch, dinner, and lots of snacks, that pretty much summed up the day. Lots of great talks and more to come tomorrow!
If you're curios what we've been up to here at at 45 Drives here's my recap of Day 1https://t.co/qWvzC4XdHd
— Techno Tim (@TechnoTimLive) August 25, 2023
I took a trip with 3 other Tech YouTubers to 45 Drives Headquarters to see the new 45 HomeLab HL15 and other devices during their first ever Creator Summit to discuss storage! We take a look at lots of Storinators, the HL15 HomeLab, all flash Stornados, and even the Storinator Jr.!
If you’re looking for details on the Creator Summit, you can read all about it in a previous post!
Thank you so much to 45Drives for paying for this trip to the Creator Summit!
Thank you to Jeff Geerling, Wendell, Jeff from Craft Computing, Tom Lawrence, Alan Nagl, and Dave Dickerson for teaching me so much during this trip!
Pre-sales for the 45HomeLab here: https://presale.45homelab.com
You can check out the 45HomeLab here: https://45homelab.com
I took a trip with other Tech YouTubers to 45 Drives Headquarters to see the new 45 HomeLab HL15 and other devices during their first ever Creator Summit to discuss storage!@45Drives @Level1Techs @geerlingguy @CraftComputing @TomLawrenceTechhttps://t.co/0oOd22mHB7 pic.twitter.com/U1VO6IlQb0
— Techno Tim (@TechnoTimLive) September 14, 2023
Meet NUT Server, or Network UPS Tools. It’s an open source UPS network monitoring tool that runs on many different operating systems and processors. This means you can run the server on Linux, macOS, or BSD and run the client on Windows, macOS, Linux, and more. It’s perfect for your Pi, server, or desktop. It works with hundreds of UPS devices, PDUs, and many other power management systems.
This is the ultimate guide to configuring Network UPS Tools (NUT). We cover everything from installing and configuring the server on a Raspberry Pi, to configuring the client on Windows and Linux, configuring a charting and graphing website to visualize NUT data, spinning up an additional website using Docker, and finally setting up monitoring and alerting to automate shutdowns of your machines.
Also, note to self, don’t eat a salad before you record a video….
Plug in the UPS, then list USB devices:

```bash
lsusb
```

You should see something like:

```
Bus 002 Device 001: ID 1d6b:0003 Linux Foundation 3.0 root hub
Bus 001 Device 019: ID 09ae:2012 Tripp Lite
Bus 001 Device 002: ID 2109:3431 VIA Labs, Inc. Hub
Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
```
Install NUT:

```bash
sudo apt update
sudo apt install nut nut-client nut-server
```

Scan for connected UPS devices:

```bash
sudo nut-scanner -U
```
You should see something like the following.

Tripp Lite:

```
[nutdev1]
  driver = "usbhid-ups"
  port = "auto"
  vendorid = "09AE"
  productid = "2012"
  product = "Tripp Lite UPS"
  vendor = "Tripp Lite"
  bus = "001"
```
APC 1500:

```
[nutdev1]
  driver = "usbhid-ups"
  port = "auto"
  vendorid = "051D"
  productid = "0002"
  product = "Back-UPS XS 1500M FW:947.d10 .D USB FW:d10"
  serial = "3xxxxxxxxxxx"
  vendor = "American Power Conversion"
  bus = "001"
```
APC 850:

```
[nutdev3]
  driver = "usbhid-ups"
  port = "auto"
  vendorid = "051D"
  productid = "0002"
  product = "Back-UPS ES 850G2 FW:931.a10.D USB FW:a"
  serial = "3xxxxxxxxxxx"
  vendor = "American Power Conversion"
  bus = "001"
```
Edit the UPS configuration:

```bash
sudo nano /etc/nut/ups.conf
```

```
pollinterval = 1
maxretry = 3

[tripplite]
  driver = usbhid-ups
  port = auto
  desc = "Tripp Lite 1500VA SmartUPS"
  vendorid = 09ae
  productid = 2012

[apc-network]
  driver = usbhid-ups
  port = auto
  desc = "APC Back-UPS XS 1500"
  vendorid = 051d
  productid = 0002
  serial = 3xxxxxxxxx

[apc-modem]
  driver = usbhid-ups
  port = auto
  desc = "APC 850 VA"
  vendorid = 051d
  productid = 0002
  serial = 3xxxxxxxxx
```
Edit the monitor configuration:

```bash
sudo nano /etc/nut/upsmon.conf
```

```
MONITOR tripplite@localhost 1 admin secret master
MONITOR apc-modem@localhost 1 admin secret master
MONITOR apc-network@localhost 1 admin secret master
```
Edit the daemon configuration:

```bash
sudo nano /etc/nut/upsd.conf
```

Change

```
LISTEN 127.0.0.1 3493
```

to listen on all interfaces:

```
LISTEN 0.0.0.0 3493
```
Set the server mode:

```bash
sudo nano /etc/nut/nut.conf
```

```
MODE=netserver
```
Add a NUT user:

```bash
sudo nano /etc/nut/upsd.users
```

```
[monuser]
  password = secret
  admin master
```
Add a udev rule so the driver restarts when the UPS reconnects:

```bash
sudo nano /etc/udev/rules.d/99-nut-ups.rules
```

```
SUBSYSTEM!="usb", GOTO="nut-usbups_rules_end"

# TrippLite
# e.g. TrippLite SMART1500LCD - usbhid-ups
ACTION=="add|change", SUBSYSTEM=="usb|usb_device", SUBSYSTEMS=="usb|usb_device", ATTR{idVendor}=="09ae", ATTR{idProduct}=="2012", MODE="664", GROUP="nut", RUN+="/sbin/upsdrvctl stop; /sbin/upsdrvctl start"

LABEL="nut-usbups_rules_end"
```
Reboot (because it’s easy), or restart the services:

```bash
sudo service nut-server restart
sudo service nut-client restart
sudo systemctl restart nut-monitor
sudo upsdrvctl stop
sudo upsdrvctl start
```
APC UPS 950 VA

List all USB devices:

```bash
lsusb
```

Query the device by USB bus (replace the bus/device numbers with the ones from the previous command):

```bash
lsusb -D /dev/bus/usb/001/057
```
You should see something like:

```
Device Descriptor:
  bLength                18
  bDescriptorType         1
  bcdUSB               2.00
  bDeviceClass            0
  bDeviceSubClass         0
  bDeviceProtocol         0
  bMaxPacketSize0        64
  idVendor           0x051d American Power Conversion
  idProduct          0x0002 Uninterruptible Power Supply
  bcdDevice            0.90
  iManufacturer           1
  iProduct                2
  iSerial                 3
  bNumConfigurations      1
  Configuration Descriptor:
    bLength                 9
    bDescriptorType         2
    wTotalLength       0x0022
    bNumInterfaces          1
    bConfigurationValue     1
    iConfiguration          0
    bmAttributes         0xe0
      Self Powered
      Remote Wakeup
    MaxPower              2mA
    Interface Descriptor:
      bLength                 9
      bDescriptorType         4
      bInterfaceNumber        0
      bAlternateSetting       0
      bNumEndpoints           1
      bInterfaceClass         3 Human Interface Device
      bInterfaceSubClass      0
      bInterfaceProtocol      0
      iInterface              0
      HID Device Descriptor:
        bLength                 9
        bDescriptorType        33
        bcdHID               1.00
        bCountryCode           33 US
        bNumDescriptors         1
        bDescriptorType        34 Report
        wDescriptorLength    1049
        Report Descriptors:
          ** UNAVAILABLE **
      Endpoint Descriptor:
        bLength                 7
        bDescriptorType         5
        bEndpointAddress     0x81 EP 1 IN
        bmAttributes            3
          Transfer Type       Interrupt
          Synch Type          None
          Usage Type          Data
        wMaxPacketSize     0x0008 1x 8 bytes
        bInterval             100
```
Install Apache and the NUT CGI:

```bash
sudo apt install apache2 nut-cgi
```

```bash
sudo nano /etc/nut/hosts.conf
```

```
MONITOR tripplite@localhost "Tripp Lite 1500VA SmartUPS - Rack"
MONITOR apc-modem@localhost "APC 850 VA - Wall"
MONITOR apc-network@localhost "APC Back-UPS XS 1500 - Rack"
```
Enable CGI and restart Apache:

```bash
sudo a2enmod cgi
sudo systemctl restart apache2
```

```bash
sudo nano /etc/nut/upsset.conf
```

Uncomment:

```
I_HAVE_SECURED_MY_CGI_DIRECTORY
```

Then visit:

http://your.ip.address/cgi-bin/nut/upsstats.cgi
```bash
mkdir webnut
cd webnut
nano docker-compose.yml
```

Paste the contents and save:

```yaml
version: "3.1"
services:
  nut:
    image: teknologist/webnut
    container_name: webnut
    environment:
      - UPS_HOST=ip.address.of.nut.server
      - UPS_PORT=3493
      - UPS_USER=admin
      - UPS_PASSWORD=secret
    restart: unless-stopped
    security_opt:
      - no-new-privileges:true
    networks:
      - proxy
    ports:
      - 6543:6543
networks:
  proxy:
    external: true
```
```bash
docker-compose up -d --force-recreate
```
On each client machine, install the NUT client:

```bash
sudo apt install nut-client
```

Then run `upsc` to verify you can connect:

```bash
upsc tripplite@ip.address.of.server
```
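Everything `upsc` prints is plain `variable: value` text, so it’s easy to script against. Here’s a small sketch that pulls `battery.charge` out of a captured `upsc` dump with awk; the sample output and host name are assumptions for illustration:

```shell
# Sample of what `upsc tripplite@ip.address.of.server` might print
# (values are made up for illustration):
sample='battery.charge: 87
battery.runtime: 1200
ups.status: OB'

# Extract just the battery charge percentage.
charge=$(printf '%s\n' "$sample" | awk -F': ' '$1 == "battery.charge" {print $2}')
echo "$charge"   # → 87

# In a real script you can also ask upsc for a single variable directly:
#   charge=$(upsc tripplite@ip.address.of.server battery.charge)
```

This kind of one-liner is handy for feeding UPS data into dashboards or cron jobs without any extra tooling.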
```bash
sudo nano /etc/nut/upsmon.conf
```

```
RUN_AS_USER root

MONITOR apc-modem@ip.address.of.nut.server 1 admin secret slave

MINSUPPLIES 1
SHUTDOWNCMD "/sbin/shutdown -h"
NOTIFYCMD /usr/sbin/upssched
POLLFREQ 2
POLLFREQALERT 1
HOSTSYNC 15
DEADTIME 15
POWERDOWNFLAG /etc/killpower

NOTIFYMSG ONLINE "UPS %s on line power"
NOTIFYMSG ONBATT "UPS %s on battery"
NOTIFYMSG LOWBATT "UPS %s battery is low"
NOTIFYMSG FSD "UPS %s: forced shutdown in progress"
NOTIFYMSG COMMOK "Communications with UPS %s established"
NOTIFYMSG COMMBAD "Communications with UPS %s lost"
NOTIFYMSG SHUTDOWN "Auto logout and shutdown proceeding"
NOTIFYMSG REPLBATT "UPS %s battery needs to be replaced"
NOTIFYMSG NOCOMM "UPS %s is unavailable"
NOTIFYMSG NOPARENT "upsmon parent process died - shutdown impossible"

NOTIFYFLAG ONLINE SYSLOG+WALL+EXEC
NOTIFYFLAG ONBATT SYSLOG+WALL+EXEC
NOTIFYFLAG LOWBATT SYSLOG+WALL
NOTIFYFLAG FSD SYSLOG+WALL+EXEC
NOTIFYFLAG COMMOK SYSLOG+WALL+EXEC
NOTIFYFLAG COMMBAD SYSLOG+WALL+EXEC
NOTIFYFLAG SHUTDOWN SYSLOG+WALL+EXEC
NOTIFYFLAG REPLBATT SYSLOG+WALL
NOTIFYFLAG NOCOMM SYSLOG+WALL+EXEC
NOTIFYFLAG NOPARENT SYSLOG+WALL

RBWARNTIME 43200

NOCOMMWARNTIME 600

FINALDELAY 5
```
Set NUT to net client mode
sudo nano /etc/nut/nut.conf
MODE=netclient
restart service
sudo systemctl restart nut-client
check status
sudo systemctl status nut-client
https://github.com/gawindx/WinNUT-Client/releases
scheduling on the remote system
sudo nano /etc/nut/upssched.conf
CMDSCRIPT /etc/nut/upssched-cmd
PIPEFN /etc/nut/upssched.pipe
LOCKFN /etc/nut/upssched.lock

AT ONBATT * START-TIMER onbatt 30
AT ONLINE * CANCEL-TIMER onbatt online
AT ONBATT * START-TIMER earlyshutdown 30
AT LOWBATT * EXECUTE onbatt
AT COMMBAD * START-TIMER commbad 30
AT COMMOK * CANCEL-TIMER commbad commok
AT NOCOMM * EXECUTE commbad
AT SHUTDOWN * EXECUTE powerdown
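The AT rules above implement debounce logic: a timer started by one event is cancelled if the opposite event arrives before the delay expires, and only timers that survive the delay invoke CMDSCRIPT. Here is a toy Python model of that semantics (the class and method names are illustrative, not part of NUT):

```python
class UpsSched:
    """Toy model of upssched's START-TIMER / CANCEL-TIMER semantics."""

    def __init__(self):
        self.timers = {}    # timer name -> seconds remaining
        self.executed = []  # arguments passed to CMDSCRIPT

    def start_timer(self, name, delay):
        self.timers[name] = delay

    def cancel_timer(self, name, cmd=None):
        # If the timer is no longer active, upssched calls CMDSCRIPT with cmd
        if self.timers.pop(name, None) is None and cmd is not None:
            self.executed.append(cmd)

    def execute(self, cmd):
        self.executed.append(cmd)

    def tick(self, seconds):
        # Advance time; any timer whose delay elapses fires CMDSCRIPT
        for name in list(self.timers):
            self.timers[name] -= seconds
            if self.timers[name] <= 0:
                del self.timers[name]
                self.executed.append(name)

# A 10-second power blip: ONBATT starts both timers, ONLINE cancels "onbatt"
sched = UpsSched()
sched.start_timer("onbatt", 30)         # AT ONBATT * START-TIMER onbatt 30
sched.start_timer("earlyshutdown", 30)  # AT ONBATT * START-TIMER earlyshutdown 30
sched.tick(10)
sched.cancel_timer("onbatt", "online")  # AT ONLINE * CANCEL-TIMER onbatt online
sched.tick(25)
print(sched.executed)  # ['earlyshutdown']
```

One thing the model makes visible: the sample config starts the earlyshutdown timer on ONBATT but never cancels it on ONLINE, so even a brief outage triggers it 30 seconds later. If that is not what you want, add an `AT ONLINE * CANCEL-TIMER earlyshutdown` line as well.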
sudo nano /etc/nut/upssched-cmd

#!/bin/sh
case $1 in
  onbatt)
    logger -t upssched-cmd "UPS running on battery"
    ;;
  earlyshutdown)
    logger -t upssched-cmd "UPS on battery too long, early shutdown"
    /usr/sbin/upsmon -c fsd
    ;;
  shutdowncritical)
    logger -t upssched-cmd "UPS on battery critical, forced shutdown"
    /usr/sbin/upsmon -c fsd
    ;;
  upsgone)
    logger -t upssched-cmd "UPS has been gone too long, can't reach"
    ;;
  *)
    logger -t upssched-cmd "Unrecognized command: $1"
    ;;
esac
make it executable (should already be)
sudo chmod +x /etc/nut/upssched-cmd
Be sure PIPEFN and LOCKFN point to a folder that exists. I've seen them point to /etc/nut/upssched/ instead of /etc/nut. If they do, create the folder or update these variables.
sudo mkdir /etc/nut/upssched/
test
sudo systemctl restart nut-client
Then pull the plug on the UPS connected to the master and check the syslog:
tail /var/log/syslog
You should see the log entries, and the machine should shut down.
🛍️ Check out the new Merch Shop at https://l.technotim.live/shop
⚙️ See all the hardware I recommend at https://l.technotim.live/gear
🚀 Don’t forget to check out the 🚀Launchpad repo with all of the quick start source files
I just discovered Multus and it fixed Kubernetes networking! In this video we cover a lot of Kubernetes networking topics from beginner topics like CNIs, to advanced topics like adding Multus for more traditional networking within Kubernetes - which fixes a lot of problems you see with Kubernetes networking. Also, I had to turn the nerd up to 11 on this one.
Disclosures:
Multus can be installed a few different ways. The best thing to do is check whether your Kubernetes distribution supports enabling it through configuration. If it does, this is much easier than installing it yourself.
Be sure to apply any additional config mentioned in the above links. This will most likely include configuration for your CNI to allow multus to plug into it.
Since I was using RKE2, I needed to apply this HelmChartConfig to configure Cilium to work with Multus. Do not apply this unless you are also using Cilium and RKE2.
# /var/lib/rancher/rke2/server/manifests/rke2-cilium-config.yaml
---
apiVersion: helm.cattle.io/v1
kind: HelmChartConfig
metadata:
  name: rke2-cilium
  namespace: kube-system
spec:
  valuesContent: |-
    cni:
      exclusive: false
First check to see that it’s installed
kubectl get pods --all-namespaces | grep -i multus
You should see something similar to the output below. This will vary depending on how you installed Multus.
kube-system   rke2-multus-4kbbv   1/1   Running   0   30h
kube-system   rke2-multus-qbhrb   1/1   Running   0   30h
kube-system   rke2-multus-rmh9l   1/1   Running   0   30h
kube-system   rke2-multus-vbpr5   1/1   Running   0   30h
kube-system   rke2-multus-x4bpg   1/1   Running   0   30h
kube-system   rke2-multus-z22sq   1/1   Running   0   30h
We will need to create a NetworkAttachmentDefinition
# network-attachment-definition.yaml
---
apiVersion: "k8s.cni.cncf.io/v1"
kind: NetworkAttachmentDefinition
metadata:
  name: multus-iot
  namespace: default
spec:
  config: |-
    {
      "cniVersion": "0.3.1",
      "type": "ipvlan",
      "master": "eth1",
      "ipam": {
        "type": "static",
        "routes": [
          { "dst": "192.168.0.0/16", "gw": "192.168.20.1" }
        ]
      }
    }
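The static route in the ipam block sends everything destined for 192.168.0.0/16 through the 192.168.20.1 gateway on the secondary interface. A quick Python check (addresses are the article's examples) of which destinations that route covers:

```python
import ipaddress

# Route from the NetworkAttachmentDefinition's ipam config
route_dst = ipaddress.ip_network("192.168.0.0/16")
gateway = ipaddress.ip_address("192.168.20.1")

# The gateway must be reachable on the subnet the interface attaches to
iot_subnet = ipaddress.ip_network("192.168.20.0/24")
print(gateway in iot_subnet)  # True

# Destinations matched by the route use net1; everything else stays
# on the pod's default CNI network
print(ipaddress.ip_address("192.168.60.53") in route_dst)  # True
print(ipaddress.ip_address("10.42.4.89") in route_dst)     # False
```

Only destinations inside 192.168.0.0/16 use the secondary interface; internet traffic still leaves through the pod's default network.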
Then apply this NetworkAttachmentDefinition
kubectl apply -f network-attachment-definition.yaml
Then check to see if it was created
kubectl get network-attachment-definitions.k8s.cni.cncf.io multus-iot
You should see something like:
NAME         AGE
multus-iot   30h
You can also describe it to see its contents

kubectl describe network-attachment-definitions.k8s.cni.cncf.io multus-iot
You should see something like:
Name:         multus-iot
Namespace:    default
Labels:       <none>
Annotations:  <none>
API Version:  k8s.cni.cncf.io/v1
Kind:         NetworkAttachmentDefinition
Metadata:
  Creation Timestamp:  2024-04-14T04:56:02Z
  Generation:          1
  Resource Version:    3215172
  UID:                 89b7f3d0-c094-4831-9b94-5ecdf6b38232
Spec:
  Config:  {
    "cniVersion": "0.3.1",
    "type": "ipvlan",
    "master": "eth1",
    "ipam": {
      "type": "static",
      "routes": [
        { "dst": "192.168.0.0/16", "gw": "192.168.20.1" }
      ]
    }
}
Events:  <none>
Let’s create a sample pod and see if it gets our IP
# sample-pod.yaml

apiVersion: v1
kind: Pod
metadata:
  name: sample-pod
  namespace: default
  annotations:
    k8s.v1.cni.cncf.io/networks: |
      [{
        "name": "multus-iot",
        "namespace": "default",
        "mac": "c6:5e:a4:8e:7a:59",
        "ips": ["192.168.20.202/24"]
      }]
spec:
  containers:
    - name: sample-pod
      command: ["/bin/ash", "-c", "trap : TERM INT; sleep infinity & wait"]
      image: alpine
Check to see if it’s running
kubectl get pod sample-pod
You should see something like:
NAME         READY   STATUS    RESTARTS   AGE
sample-pod   1/1     Running   0          30h
Now let’s describe the pod to see if it got our IP
kubectl describe pod sample-pod
You should see something like:
➜ home-ops git:(master) k describe pod sample-pod
Name:             sample-pod
Namespace:        default
Priority:         0
Service Account:  default
Node:             k8s-home-worker-01/192.168.20.70
Start Time:       Sun, 14 Apr 2024 00:01:28 -0500
Labels:           <none>
Annotations:      k8s.v1.cni.cncf.io/network-status:
                    [{
                        "name": "portmap",
                        "interface": "eth0",
                        "ips": [
                            "10.42.4.89"
                        ],
                        "mac": "1a:af:f2:3f:32:f8",
                        "default": true,
                        "dns": {},
                        "gateway": [
                            "10.42.4.163"
                        ]
                    },{
                        "name": "default/multus-iot",
                        "interface": "net1",
                        "ips": [
                            "192.168.20.202"
                        ],
                        "mac": "bc:24:11:a0:4b:35",
                        "dns": {}
                    }]
                  k8s.v1.cni.cncf.io/networks:
                    [{
                      "name": "multus-iot",
                      "namespace": "default",
                      "mac": "c6:5e:a4:8e:7a:59",
                      "ips": ["192.168.20.202/24"]
                    }]
Status:           Running
IP:               10.42.4.89
IPs:
  IP:  10.42.4.89
Containers:
  sample-pod:
    Container ID:  containerd://fdd56e2fcdb3d587d792878285ef0fe50d076167d2b283dbf42aeb1b210d36cf
    Image:         alpine
    Image ID:      docker.io/library/alpine@sha256:c5b1261d6d3e43071626931fc004f70149baeba2c8ec672bd4f27761f8e1ad6b
    Port:          <none>
    Host Port:     <none>
    Command:
      /bin/ash
      -c
      trap : TERM INT; sleep infinity & wait
    State:          Running
      Started:      Sun, 14 Apr 2024 00:01:29 -0500
    Ready:          True
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-lggfv (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             True
  ContainersReady   True
  PodScheduled      True
Volumes:
  kube-api-access-lggfv:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type    Reason          Age   From               Message
  ----    ------          ----  ----               -------
  Normal  Scheduled       8s    default-scheduler  Successfully assigned default/sample-pod to k8s-home-worker-01
  Normal  AddedInterface  7s    multus             Add eth0 [10.42.4.89/32] from portmap
  Normal  AddedInterface  7s    multus             Add net1 [192.168.20.202/24] from default/multus-iot
  Normal  Pulling         7s    kubelet            Pulling image "alpine"
  Normal  Pulled          7s    kubelet            Successfully pulled image "alpine" in 388.090289ms (388.099785ms including waiting)
  Normal  Created         7s    kubelet            Created container sample-pod
  Normal  Started         7s    kubelet            Started container sample-pod
You should see an adapter added to the pod as well as an IP:
...
Normal  AddedInterface  7s    multus             Add net1 [192.168.20.202/24] from default/multus-iot
...
Be sure you can ping that new IP
➜ home-ops git:(master) ping 192.168.20.202
PING 192.168.20.202 (192.168.20.202): 56 data bytes
64 bytes from 192.168.20.202: icmp_seq=0 ttl=63 time=0.839 ms
64 bytes from 192.168.20.202: icmp_seq=1 ttl=63 time=0.876 ms
64 bytes from 192.168.20.202: icmp_seq=2 ttl=63 time=0.991 ms
64 bytes from 192.168.20.202: icmp_seq=3 ttl=63 time=0.812 ms
Exec into the pod and test connectivity and DNS
kubectl exec -it pods/sample-pod -- /bin/sh
Once in, ping your gateway, ping another device on the network, and ping something on the internet
/ # ping 192.168.20.1
PING 192.168.20.1 (192.168.20.1): 56 data bytes
64 bytes from 192.168.20.1: seq=0 ttl=64 time=0.318 ms
64 bytes from 192.168.20.1: seq=1 ttl=64 time=0.230 ms
64 bytes from 192.168.20.1: seq=2 ttl=64 time=0.531 ms
^C
--- 192.168.20.1 ping statistics ---
3 packets transmitted, 3 packets received, 0% packet loss
round-trip min/avg/max = 0.230/0.359/0.531 ms
/ # ping 192.168.20.52
PING 192.168.20.52 (192.168.20.52): 56 data bytes
64 bytes from 192.168.20.52: seq=0 ttl=255 time=88.498 ms
64 bytes from 192.168.20.52: seq=1 ttl=255 time=3.375 ms
64 bytes from 192.168.20.52: seq=2 ttl=255 time=25.688 ms
^C
--- 192.168.20.52 ping statistics ---
3 packets transmitted, 3 packets received, 0% packet loss
round-trip min/avg/max = 3.375/39.187/88.498 ms
/ # ping google.com
PING google.com (142.250.191.238): 56 data bytes
64 bytes from 142.250.191.238: seq=0 ttl=111 time=8.229 ms
64 bytes from 142.250.191.238: seq=1 ttl=111 time=8.578 ms
64 bytes from 142.250.191.238: seq=2 ttl=111 time=8.579 ms
^C
--- google.com ping statistics ---
3 packets transmitted, 3 packets received, 0% packet loss
round-trip min/avg/max = 8.229/8.462/8.579 ms
/ #
Now test DNS by looking up something on the internet, something on your local network, and something inside of your cluster
/ # nslookup google.com
Server:    10.43.0.10
Address:   10.43.0.10:53

Non-authoritative answer:
Name:      google.com
Address:   142.250.191.238

Non-authoritative answer:
Name:      google.com
Address:   2607:f8b0:4009:81b::200e

/ # nslookup k8s-home-worker-01.local.techtronic.us
Server:    10.43.0.10
Address:   10.43.0.10:53

Name:      k8s-home-worker-01.local.techtronic.us
Address:   192.168.60.53

Non-authoritative answer:

/ # nslookup homepage
Server:    10.43.0.10
Address:   10.43.0.10:53

** server can't find homepage.cluster.local: NXDOMAIN

** server can't find homepage.svc.cluster.local: NXDOMAIN

** server can't find homepage.cluster.local: NXDOMAIN

** server can't find homepage.svc.cluster.local: NXDOMAIN


Name:      homepage.default.svc.cluster.local
Address:   10.43.143.7
If all of the tests passed, you should be good!
You can now do the same thing for your other workloads that need to use Multus!
If you’re using RKE2 and you notice that your worker nodes are using the wrong IP address after adding an additional NIC, you can override the Node IP with config:
# /etc/rancher/rke2/config.yaml
node-ip: 192.168.60.53          # the node's primary IP used for kubernetes
node-external-ip: 192.168.60.53 # the node's external IP
You will need to restart the RKE2 service or reboot.
Check with
kubectl get nodes -o wide
You should then see the IP on the node (note my k8s-home-worker-01 has the fix, but k8s-home-worker-02 and k8s-home-worker-03 don't)
NAME                 STATUS   ROLES                       AGE   VERSION          INTERNAL-IP     EXTERNAL-IP     OS-IMAGE             KERNEL-VERSION       CONTAINER-RUNTIME
k8s-home-01          Ready    control-plane,etcd,master   5d    v1.28.8+rke2r1   192.168.60.50   <none>          Ubuntu 22.04.4 LTS   5.15.0-102-generic   containerd://1.7.11-k3s2
k8s-home-02          Ready    control-plane,etcd,master   5d    v1.28.8+rke2r1   192.168.60.51   <none>          Ubuntu 22.04.4 LTS   5.15.0-102-generic   containerd://1.7.11-k3s2
k8s-home-03          Ready    control-plane,etcd,master   5d    v1.28.8+rke2r1   192.168.60.52   <none>          Ubuntu 22.04.4 LTS   5.15.0-102-generic   containerd://1.7.11-k3s2
k8s-home-worker-01   Ready    worker                      5d    v1.28.8+rke2r1   192.168.60.53   192.168.60.53   Ubuntu 22.04.4 LTS   5.15.0-102-generic   containerd://1.7.11-k3s2
k8s-home-worker-02   Ready    worker                      5d    v1.28.8+rke2r1   192.168.20.71   <none>          Ubuntu 22.04.4 LTS   5.15.0-102-generic   containerd://1.7.11-k3s2
k8s-home-worker-03   Ready    worker                      5d    v1.28.8+rke2r1   192.168.20.72   <none>          Ubuntu 22.04.4 LTS   5.15.0-102-generic   containerd://1.7.11-k3s2
You can see more flags on the RKE2 documentation page
I have also seen odd issues with routing when using cloud-init. I've had to override some settings using netplan.
You can see the misplaced routes in the table:
➜ ~ ip route
192.168.20.0/24 dev eth1 proto kernel scope link src 192.168.20.72 metric 100
192.168.20.1 dev eth1 proto dhcp scope link src 192.168.20.72 metric 100
192.168.60.0/24 dev eth0 proto kernel scope link src 192.168.60.55 metric 100
192.168.60.1 dev eth0 proto dhcp scope link src 192.168.60.55 metric 100
192.168.60.10 via 192.168.20.1 dev eth1 proto dhcp src 192.168.20.72 metric 100 # wrong
192.168.60.10 dev eth0 proto dhcp scope link src 192.168.60.55 metric 100
192.168.60.22 via 192.168.20.1 dev eth1 proto dhcp src 192.168.20.72 metric 100 # wrong
192.168.60.22 dev eth0 proto dhcp scope link src 192.168.60.55 metric 100
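Why those routes are a problem: a route lookup picks the most specific matching prefix first, then the lowest metric, and the duplicate host routes above are equally specific with equal metrics, so the stray cloud-init route can shadow the good one. A minimal longest-prefix-match sketch (routes abbreviated from the table above):

```python
import ipaddress

# (destination, device, metric) rows, abbreviated from the `ip route` output
routes = [
    ("192.168.20.0/24", "eth1", 100),
    ("192.168.60.0/24", "eth0", 100),
    ("192.168.60.10/32", "eth1", 100),  # the misplaced cloud-init route
    ("192.168.60.10/32", "eth0", 100),
]

def lookup(dst):
    """Most specific prefix wins, then lowest metric; ties fall to table order."""
    dst = ipaddress.ip_address(dst)
    matches = [(ipaddress.ip_network(n), dev, m) for n, dev, m in routes
               if dst in ipaddress.ip_network(n)]
    net, dev, metric = max(matches, key=lambda r: (r[0].prefixlen, -r[2]))
    return dev

print(lookup("192.168.60.10"))  # eth1 -- the duplicate /32 shadows the good route
print(lookup("192.168.60.53"))  # eth0 -- only the /24 matches, so it's correct
```

This is a simplification of the kernel's actual FIB lookup, but it shows why deleting or overriding the stray host routes fixes traffic to the DNS servers.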
To fix this, we need to override the routes with netplan
sudo nano /etc/netplan/50-cloud-init.yaml
# This file is generated from information provided by the datasource. Changes
# to it will not persist across an instance reboot. To disable cloud-init's
# network configuration capabilities, write a file
# /etc/cloud/cloud.cfg.d/99-disable-network-config.cfg with the following:
# network: {config: disabled}
network:
  version: 2
  ethernets:
    eth0:
      addresses:
        - 192.168.60.55/24
      match:
        macaddress: bc:24:11:f1:2a:e7
      nameservers:
        addresses:
          - 192.168.60.10
          - 192.168.60.22
      routes:
        - to: default
          via: 192.168.60.1
      set-name: eth0
    eth1:
      addresses:
        - 192.168.20.65/24
      match:
        macaddress: bc:29:71:9a:01:29
      nameservers:
        addresses:
          - 192.168.60.10
          - 192.168.60.22
      routes:
        - to: 192.168.20.0/24
          via: 192.168.20.1
      set-name: eth1
    eth2:
      addresses:
        - 192.168.40.52/24
      match:
        macaddress: bc:24:11:3d:c9:f7
      nameservers:
        addresses:
          - 192.168.60.10
          - 192.168.60.22
      routes:
        - to: 192.168.40.0/24
          via: 192.168.40.1
      set-name: eth2
If you know of a better way to do this, please let me know in the comments.
You will have to make some changes for this to work with k3s. Thanks ThePCGeek!
# k3s multus install
apiVersion: helm.cattle.io/v1
kind: HelmChart
metadata:
  name: multus
  namespace: kube-system
spec:
  repo: https://rke2-charts.rancher.io
  chart: rke2-multus
  targetNamespace: kube-system
  # createNamespace: true
  valuesContent: |-
    config:
      cni_conf:
        confDir: /var/lib/rancher/k3s/agent/etc/cni/net.d
        clusterNetwork: /var/lib/rancher/k3s/agent/etc/cni/net.d/10-flannel.conflist
        binDir: /var/lib/rancher/k3s/data/current/bin/
        kubeconfig: /var/lib/rancher/k3s/agent/etc/cni/net.d/multus.d/multus.kubeconfig
I have also used this macvlan config below successfully
---
apiVersion: "k8s.cni.cncf.io/v1"
kind: NetworkAttachmentDefinition
metadata:
  name: multus-iot
  namespace: default
spec:
  config: |-
    {
      "cniVersion": "0.3.1",
      "name": "multus-iot",
      "plugins": [
        {
          "type": "macvlan",
          "master": "eth1",
          "mode": "bridge",
          "capabilities": {
            "ips": true
          },
          "ipam": {
            "type": "static",
            "routes": [{
              "dst": "192.168.0.0/16",
              "gw": "192.168.20.1"
            }]
          }
        }
      ]
    }
Sample Pods
apiVersion: v1
kind: Pod
metadata:
  name: sample-pod
  namespace: default
  annotations:
    k8s.v1.cni.cncf.io/networks: |
      [{
        "name": "multus-iot",
        "namespace": "default",
        "mac": "c6:5e:a4:8e:7a:59",
        "ips": ["192.168.20.210/24"],
        "gateway": [ "192.168.20.1" ]
      }]
spec:
  containers:
    - name: sample-pod
      command: ["/bin/ash", "-c", "trap : TERM INT; sleep infinity & wait"]
      image: alpine
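The networks annotation is just a JSON array embedded in a string, which makes it easy to typo. A quick Python sketch to build and round-trip it before pasting into a manifest (values are the article's examples):

```python
import json

# Attachment request, mirroring the pod annotation above
attachment = {
    "name": "multus-iot",
    "namespace": "default",
    "mac": "c6:5e:a4:8e:7a:59",
    "ips": ["192.168.20.210/24"],
    "gateway": ["192.168.20.1"],
}

# The annotation value is a JSON array serialized to a string
annotation = json.dumps([attachment])

# Round-trip: multus parses this back out of the annotation at pod creation
parsed = json.loads(annotation)
print(parsed[0]["ips"])  # ['192.168.20.210/24']
```

If `json.loads` raises here, the same string would also fail inside the cluster, typically surfacing as a pod stuck in ContainerCreating.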
🤝 Support me and help keep this site ad-free!
In this tutorial we’ll walk through my local, private, self-hosted AI stack so that you can run it too.
If you’re looking for the overview of this stack, you can check out the video here: Self-Hosted AI That’s Actually Useful
Here are some cards that are good for local AI. Keep in mind that it’s always better to get a newer card for better CUDA support and more VRAM. 8GB of VRAM should be good for small models like the ones in this stack; however, 12 to 24 GB is probably best.
You’ll want a modern CPU, if you are going desktop class here are a few I would choose
For flash storage, I always go with these SSDs
I am running Ubuntu Server 24.04 LTS
Installing NVIDIA Drivers
If you need help, you can check out this article but here are the commands I ran.
Install the best desktop graphics card for your machine.
sudo ubuntu-drivers install
Install NVIDIA tools
Be sure you install the version that matches your driver from above
sudo apt install nvidia-utils-535
Then reboot your machine
sudo reboot
Once the machine is back up, check to be sure your drivers are functioning properly
nvidia-smi
Here are the packages and repos we’ll be using
I am using Traefik as the only entry point into this stack. No ports are exposed on the host. If you don’t want to use Traefik, just comment out the labels (and optionally rename the network named traefik). You will then need to expose ports for open-webui, stable-diffusion-webui, and whisper in your Docker compose file.
If you need help installing Traefik, see this post on installing traefik 3 on Docker
Note: If using traefik (or any reverse proxy), remember that all of your internal DNS records will point to this machine! e.g. if the IP of the machine running this stack is 192.168.0.100, you’ll need a DNS record like chat.local.example.com that points to 192.168.0.100.
This stack contains middleware for basic auth so that the Ollama API is secured with a username and password. This is optional; if you don’t want to use basic auth, just remove the auth middleware labels from the ollama service in your compose.
Otherwise, here’s how you create the credential:
Hashing your password for traefik middleware:
echo $(htpasswd -nB ollamauser) | sed -e s/\\$/\\$\\$/g
You’ll then want to place this in your .env using the OLLAMA_API_CREDENTIALS variable. This is then used in the ollama service in your compose file.
If you want to create a hash value for Basic Auth (used for the Continue extension), you’ll need to use the credential from above.
echo 'ollamauser:ollamapassword!' | base64
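Both commands above are simple string transforms, sketched here in Python (the username, password, and hash fragment are placeholders): the base64 value becomes an `Authorization: Basic …` header, and the `sed` doubles every `$` so Docker Compose doesn’t treat the bcrypt hash as a variable reference.

```python
import base64

# Basic Auth header value, equivalent to: echo -n 'user:pass' | base64
# (plain `echo` appends a newline, so its output differs slightly)
creds = "ollamauser:ollamapassword!"
token = base64.b64encode(creds.encode()).decode()
print(f"Authorization: Basic {token}")

# Compose-file escaping, equivalent to: sed -e 's/\$/\$\$/g'
# (a hypothetical bcrypt hash fragment for illustration)
htpasswd_line = "ollamauser:$2y$05$abc123"
escaped = htpasswd_line.replace("$", "$$")
print(escaped)  # ollamauser:$$2y$$05$$abc123
```

Compose un-doubles `$$` back to `$` when it reads the file, so Traefik still sees the original bcrypt hash.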
If you run into issues, you can always visit the NVIDIA Container Toolkit documentation.
Configure the production repository
curl -fsSL https://nvidia.github.io/libnvidia-container/gpgkey | sudo gpg --dearmor -o /usr/share/keyrings/nvidia-container-toolkit-keyring.gpg \
  && curl -s -L https://nvidia.github.io/libnvidia-container/stable/deb/nvidia-container-toolkit.list | \
    sed 's#deb https://#deb [signed-by=/usr/share/keyrings/nvidia-container-toolkit-keyring.gpg] https://#g' | \
    sudo tee /etc/apt/sources.list.d/nvidia-container-toolkit.list
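The `sed` in that pipeline rewrites each `deb` line in the repo list to pin NVIDIA’s keyring, which is just a string substitution. A sketch of the same transform (the input line is illustrative, not the exact list contents):

```python
# Equivalent of:
# sed 's#deb https://#deb [signed-by=/usr/share/keyrings/nvidia-container-toolkit-keyring.gpg] https://#g'
keyring = "/usr/share/keyrings/nvidia-container-toolkit-keyring.gpg"
line = "deb https://nvidia.github.io/libnvidia-container/stable/deb/amd64 /"
fixed = line.replace("deb https://", f"deb [signed-by={keyring}] https://")
print(fixed)
```

The `[signed-by=…]` option tells apt to trust only that keyring for this repository, rather than the global trusted keys.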
Update the packages list from the repository:
sudo apt-get update
Install the NVIDIA Container Toolkit packages
sudo apt-get install -y nvidia-container-toolkit
Configure the container runtime by using the nvidia-ctk command and restart Docker
sudo nvidia-ctk runtime configure --runtime=docker
sudo systemctl restart docker
This will fail if you don’t have Docker installed yet.
If you need to install Docker, see this post on how to install docker and docker compose.
sudo nvidia-ctk runtime configure --runtime=docker
sudo systemctl restart docker
This will test to make sure that the NVIDIA container toolkit can access the NVidia driver.
sudo docker run --rm --runtime=nvidia --gpus all ubuntu nvidia-smi
You should see the same output as running nvidia-smi without Docker.
Stacks live in /opt/stacks
Here is the folder structure. Most subfolders are created when binding volumes.
├── ai-stack
│   ├── .env
│   ├── compose.yaml
│   ├── ollama
│   ├── open-webui
│   ├── searxng
│   ├── stable-diffusion-webui-docker
│   └── whisper
├── home-assistant-stack
│   ├── compose.yaml
│   ├── faster-whisper
│   ├── home-assistant
│   └── wyoming-piper
If you run into any folder permission errors while running any of this, you can simply change the owner to yourself using the following command. Please replace the user and group with your own.
sudo chown serveradmin:serveradmin -R /opt/stacks
My ai-stack .env is pretty minimal
OLLAMA_API_CREDENTIALS=
DB_USER=
DB_PASS=
WHISHPER_HOST=https://whisper.local.example.com
WHISPER_MODELS=tiny,small
PUID=
PGID=
Here is my compose.yaml
You’ll want to create this in the root of your stack folder (see folder structure above)
The command I use to start, build, and remove orphans is:
docker compose up -d --build --force-recreate --remove-orphans
otherwise you can use
docker compose up -d --build
There are additional steps you’ll need to do before starting this stack. Please continue on to the end.
Here are 2 Docker compose files that you can use on your system.
The stack is the one I use in the video as well as at home. If you want to use the general stack without traefik and macvlan, see the general Docker compose stack
Before running this, you will need to create the network for Docker to use.
This might already exist if you are using traefik. If so skip this step.
docker network create traefik
This will create the macvlan network. Adjust accordingly.
docker network create -d macvlan \
  --subnet=192.168.20.0/24 \
  --gateway=192.168.20.1 \
  -o parent=eth1 \
  iot_macvlan
compose.yaml
services:

# Ollama

  ollama:
    image: ollama/ollama:latest
    container_name: ollama
    restart: unless-stopped
    environment:
      - PUID=${PUID:-1000}
      - PGID=${PGID:-1000}
      - OLLAMA_KEEP_ALIVE=24h
      - ENABLE_IMAGE_GENERATION=True
      - COMFYUI_BASE_URL=http://stable-diffusion-webui:7860
    networks:
      - traefik
    volumes:
      - /etc/localtime:/etc/localtime:ro
      - /etc/timezone:/etc/timezone:ro
      - ./ollama:/root/.ollama
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.ollama.rule=Host(`ollama.local.example.com`)"
      - "traefik.http.routers.ollama.entrypoints=https"
      - "traefik.http.routers.ollama.tls=true"
      - "traefik.http.routers.ollama.tls.certresolver=cloudflare"
      - "traefik.http.routers.ollama.middlewares=default-headers@file,auth"
      - "traefik.http.services.ollama.loadbalancer.server.port=11434"
      - "traefik.http.middlewares.auth.basicauth.users=${OLLAMA_API_CREDENTIALS}"
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: 1
              capabilities: [gpu]

# open web ui
  open-webui:
    image: ghcr.io/open-webui/open-webui:latest
    container_name: open-webui
    restart: unless-stopped
    networks:
      - traefik
    environment:
      - PUID=${PUID:-1000}
      - PGID=${PGID:-1000}
      - 'OLLAMA_BASE_URL=http://ollama:11434'
      - ENABLE_RAG_WEB_SEARCH=True
      - RAG_WEB_SEARCH_ENGINE=searxng
      - RAG_WEB_SEARCH_RESULT_COUNT=3
      - RAG_WEB_SEARCH_CONCURRENT_REQUESTS=10
      - SEARXNG_QUERY_URL=http://searxng:8080/search?q=<query>
    volumes:
      - /etc/localtime:/etc/localtime:ro
      - /etc/timezone:/etc/timezone:ro
      - ./open-webui:/app/backend/data
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.open-webui.rule=Host(`chat.local.example.com`)"
      - "traefik.http.routers.open-webui.entrypoints=https"
      - "traefik.http.routers.open-webui.tls=true"
      - "traefik.http.routers.open-webui.tls.certresolver=cloudflare"
      - "traefik.http.routers.open-webui.middlewares=default-headers@file"
      - "traefik.http.services.open-webui.loadbalancer.server.port=8080"
    depends_on:
      - ollama
    extra_hosts:
      - host.docker.internal:host-gateway

  searxng:
    image: searxng/searxng:latest
    container_name: searxng
    networks:
      - traefik
    environment:
      - PUID=${PUID:-1000}
      - PGID=${PGID:-1000}
    volumes:
      - /etc/localtime:/etc/localtime:ro
      - /etc/timezone:/etc/timezone:ro
      - ./searxng:/etc/searxng
    depends_on:
      - ollama
      - open-webui
    restart: unless-stopped

# stable diffusion

  stable-diffusion-download:
    build: ./stable-diffusion-webui-docker/services/download/
    image: comfy-download
    environment:
      - PUID=${PUID:-1000}
      - PGID=${PGID:-1000}
    volumes:
      - /etc/localtime:/etc/localtime:ro
      - /etc/timezone:/etc/timezone:ro
      - ./stable-diffusion-webui-docker/data:/data

  stable-diffusion-webui:
    build: ./stable-diffusion-webui-docker/services/comfy/
    image: comfy-ui
    environment:
      - PUID=${PUID:-1000}
      - PGID=${PGID:-1000}
      - CLI_ARGS=
    volumes:
      - /etc/localtime:/etc/localtime:ro
      - /etc/timezone:/etc/timezone:ro
      - ./stable-diffusion-webui-docker/data:/data
      - ./stable-diffusion-webui-docker/output:/output
    stop_signal: SIGKILL
    tty: true
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              device_ids: ['0']
              capabilities: [compute, utility]
    restart: unless-stopped
    networks:
      - traefik
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.stable-diffusion.rule=Host(`stable-diffusion.local.example.com`)"
      - "traefik.http.routers.stable-diffusion.entrypoints=https"
      - "traefik.http.routers.stable-diffusion.tls=true"
      - "traefik.http.routers.stable-diffusion.tls.certresolver=cloudflare"
      - "traefik.http.services.stable-diffusion.loadbalancer.server.port=7860"
      - "traefik.http.routers.stable-diffusion.middlewares=default-headers@file"

# whisper
  mongo:
    image: mongo
    env_file:
      - .env
    networks:
      - traefik
    restart: unless-stopped
    volumes:
      - /etc/localtime:/etc/localtime:ro
      - /etc/timezone:/etc/timezone:ro
      - ./whisper/db_data:/data/db
      - ./whisper/db_data/logs/:/var/log/mongodb/
    environment:
      - PUID=${PUID:-1000}
      - PGID=${PGID:-1000}
      - MONGO_INITDB_ROOT_USERNAME=${DB_USER:-whishper}
      - MONGO_INITDB_ROOT_PASSWORD=${DB_PASS:-whishper}
    command: ['--logpath', '/var/log/mongodb/mongod.log']

  translate:
    container_name: whisper-libretranslate
    image: libretranslate/libretranslate:latest-cuda
    env_file:
      - .env
    networks:
      - traefik
    restart: unless-stopped
    volumes:
      - /etc/localtime:/etc/localtime:ro
      - /etc/timezone:/etc/timezone:ro
      - ./whisper/libretranslate/data:/home/libretranslate/.local/share
      - ./whisper/libretranslate/cache:/home/libretranslate/.local/cache
    user: root
    tty: true
    environment:
      - PUID=${PUID:-1000}
      - PGID=${PGID:-1000}
      - LT_DISABLE_WEB_UI=True
      - LT_LOAD_ONLY=${LT_LOAD_ONLY:-en,fr,es}
      - LT_UPDATE_MODELS=True
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: all
              capabilities: [gpu]

  whisper:
    container_name: whisper
    pull_policy: always
    image: pluja/whishper:latest-gpu
    env_file:
      - .env
    networks:
      - traefik
    volumes:
      - /etc/localtime:/etc/localtime:ro
      - /etc/timezone:/etc/timezone:ro
      - ./whisper/uploads:/app/uploads
      - ./whisper/logs:/var/log/whishper
      - ./whisper/models:/app/models
    restart: unless-stopped
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.whisper.rule=Host(`whisper.local.example.com`)"
      - "traefik.http.routers.whisper.entrypoints=https"
      - "traefik.http.routers.whisper.tls=true"
      - "traefik.http.routers.whisper.tls.certresolver=cloudflare"
      - "traefik.http.services.whisper.loadbalancer.server.port=80"
      - "traefik.http.routers.whisper.middlewares=default-headers@file"
    depends_on:
      - mongo
      - translate
    environment:
      - PUID=${PUID:-1000}
      - PGID=${PGID:-1000}
      - PUBLIC_INTERNAL_API_HOST=${WHISHPER_HOST}
      - PUBLIC_TRANSLATION_API_HOST=${WHISHPER_HOST}
      - PUBLIC_API_HOST=${WHISHPER_HOST:-}
      - PUBLIC_WHISHPER_PROFILE=gpu
      - WHISPER_MODELS_DIR=/app/models
      - UPLOAD_DIR=/app/uploads
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: all
              capabilities: [gpu]
networks:
  traefik:
    external: true
This Docker compose stack does not use traefik and also exposes the port on the host for each service. If you don’t want to expose the port, comment that section out. If you want to use the stack with traefik and macvlan, see the stack I used in the video
Before running this, you will need to create the network for Docker to use.
docker network create ai-stack
services:
+
+ ollama:
+ image: ollama/ollama:latest
+ container_name: ollama
+ restart: unless-stopped
+ environment:
+ - PUID=${PUID:-1000}
+ - PGID=${PGID:-1000}
+ - OLLAMA_KEEP_ALIVE=24h
+ - ENABLE_IMAGE_GENERATION=True
+ - COMFYUI_BASE_URL=http://stable-diffusion-webui:7860
+ networks:
+ - ai-stack
+ volumes:
+ - /etc/localtime:/etc/localtime:ro
+ - /etc/timezone:/etc/timezone:ro
+ - ./ollama:/root/.ollama
+ ports:
+ - "11434:11434" # Add this line to expose the port
+ deploy:
+ resources:
+ reservations:
+ devices:
+ - driver: nvidia
+ count: 1
+ capabilities: [gpu]
+
+ open-webui:
+ image: ghcr.io/open-webui/open-webui:latest
+ container_name: open-webui
+ restart: unless-stopped
+ networks:
+ - ai-stack
+ environment:
+ - PUID=${PUID:-1000}
+ - PGID=${PGID:-1000}
+ - 'OLLAMA_BASE_URL=http://ollama:11434'
+ - ENABLE_RAG_WEB_SEARCH=True
+ - RAG_WEB_SEARCH_ENGINE=searxng
+ - RAG_WEB_SEARCH_RESULT_COUNT=3
+ - RAG_WEB_SEARCH_CONCURRENT_REQUESTS=10
+ - SEARXNG_QUERY_URL=http://searxng:8080/search?q=<query>
+ volumes:
+ - /etc/localtime:/etc/localtime:ro
+ - /etc/timezone:/etc/timezone:ro
+ - ./open-webui:/app/backend/data
+ depends_on:
+ - ollama
+ extra_hosts:
+ - host.docker.internal:host-gateway
+ ports:
+ - "8080:8080" # Add this line to expose the port
+
+ searxng:
+ image: searxng/searxng:latest
+ container_name: searxng
+ networks:
+ - ai-stack
+ environment:
+ - PUID=${PUID:-1000}
+ - PGID=${PGID:-1000}
+ volumes:
+ - /etc/localtime:/etc/localtime:ro
+ - /etc/timezone:/etc/timezone:ro
+ - ./searxng:/etc/searxng
+ depends_on:
+ - ollama
+ - open-webui
+ restart: unless-stopped
+ ports:
+ - "8081:8080" # Add this line to expose the port
+
+ stable-diffusion-download:
+ build: ./stable-diffusion-webui-docker/services/download/
+ image: comfy-download
+ environment:
+ - PUID=${PUID:-1000}
+ - PGID=${PGID:-1000}
+ volumes:
+ - /etc/localtime:/etc/localtime:ro
+ - /etc/timezone:/etc/timezone:ro
+ - ./stable-diffusion-webui-docker/data:/data
+
+ stable-diffusion-webui:
+ build: ./stable-diffusion-webui-docker/services/comfy/
+ image: comfy-ui
+ environment:
+ - PUID=${PUID:-1000}
+ - PGID=${PGID:-1000}
+ - CLI_ARGS=
+ volumes:
+ - /etc/localtime:/etc/localtime:ro
+ - /etc/timezone:/etc/timezone:ro
+ - ./stable-diffusion-webui-docker/data:/data
+ - ./stable-diffusion-webui-docker/output:/output
+ stop_signal: SIGKILL
+ tty: true
+ deploy:
+ resources:
+ reservations:
+ devices:
+ - driver: nvidia
+ device_ids: ['0']
+ capabilities: [compute, utility]
+ restart: unless-stopped
+ networks:
+ - ai-stack
+ ports:
+ - "7860:7860" # Add this line to expose the port
+
+ mongo:
+ image: mongo
+ env_file:
+ - .env
+ networks:
+ - ai-stack
+ restart: unless-stopped
+ volumes:
+ - /etc/localtime:/etc/localtime:ro
+ - /etc/timezone:/etc/timezone:ro
+ - ./whisper/db_data:/data/db
+ - ./whisper/db_data/logs/:/var/log/mongodb/
+ environment:
+ - PUID=${PUID:-1000}
+ - PGID=${PGID:-1000}
+ - MONGO_INITDB_ROOT_USERNAME=${DB_USER:-whishper}
+ - MONGO_INITDB_ROOT_PASSWORD=${DB_PASS:-whishper}
+ command: ['--logpath', '/var/log/mongodb/mongod.log']
+ ports:
+ - "27017:27017" # Add this line to expose the port
+
+ translate:
+ container_name: whisper-libretranslate
+ image: libretranslate/libretranslate:latest-cuda
+ env_file:
+ - .env
+ networks:
+ - ai-stack
+ restart: unless-stopped
+ volumes:
+ - /etc/localtime:/etc/localtime:ro
+ - /etc/timezone:/etc/timezone:ro
+ - ./whisper/libretranslate/data:/home/libretranslate/.local/share
+ - ./whisper/libretranslate/cache:/home/libretranslate/.local/cache
+ user: root
+ tty: true
+ environment:
+ - PUID=${PUID:-1000}
+ - PGID=${PGID:-1000}
+ - LT_DISABLE_WEB_UI=True
+ - LT_LOAD_ONLY=${LT_LOAD_ONLY:-en,fr,es}
+ - LT_UPDATE_MODELS=True
+ deploy:
+ resources:
+ reservations:
+ devices:
+ - driver: nvidia
+ count: all
+ capabilities: [gpu]
+ ports:
+ - "5000:5000" # Add this line to expose the port
+
+ whisper:
+ container_name: whisper
+ pull_policy: always
+ image: pluja/whishper:latest-gpu
+ env_file:
+ - .env
+ networks:
+ - ai-stack
+ volumes:
+ - /etc/localtime:/etc/localtime:ro
+ - /etc/timezone:/etc/timezone:ro
+ - ./whisper/uploads:/app/uploads
+ - ./whisper/logs:/var/log/whishper
+ - ./whisper/models:/app/models
+ restart: unless-stopped
+ depends_on:
+ - mongo
+ - translate
+ environment:
+ - PUID=${PUID:-1000}
+ - PGID=${PGID:-1000}
+ - PUBLIC_INTERNAL_API_HOST=${WHISHPER_HOST}
+ - PUBLIC_TRANSLATION_API_HOST=${WHISHPER_HOST}
+ - PUBLIC_API_HOST=${WHISHPER_HOST:-}
+ - PUBLIC_WHISHPER_PROFILE=gpu
+ - WHISPER_MODELS_DIR=/app/models
+ - UPLOAD_DIR=/app/uploads
+ deploy:
+ resources:
+ reservations:
+ devices:
+ - driver: nvidia
+ count: all
+ capabilities: [gpu]
+ ports:
+ - "8000:80" # Add this line to expose the port
+
+networks:
+ ai-stack:
+ external: true
+
Before starting the stack, in the ai-stack folder, you'll want to clone the repo (or just copy the necessary files).
(This will create the folder for you.)
git clone https://github.com/AbdBarho/stable-diffusion-webui-docker.git
After cloning, you'll want to make a change to the Dockerfile
nano stable-diffusion-webui-docker/services/comfy/Dockerfile
I commented out the pinning to a commit hash and just grabbed the latest ComfyUI.
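If you'd rather script that edit than open an editor, a sed one-liner can comment out the pin. A sketch, shown against a stand-in file at a hypothetical /tmp path; in practice, point the sed at stable-diffusion-webui-docker/services/comfy/Dockerfile (on macOS, use `sed -i ''`):

```shell
# Stand-in for services/comfy/Dockerfile -- substitute the real path in practice
printf '    git checkout master && \\\n    git reset --hard 276f8fce && \\\n' > /tmp/Dockerfile.demo

# Comment out the "git reset --hard <hash>" line so the build tracks the latest ComfyUI
sed -i 's|^\([[:space:]]*\)git reset --hard|\1# git reset --hard|' /tmp/Dockerfile.demo

cat /tmp/Dockerfile.demo
```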
FROM pytorch/pytorch:2.3.0-cuda12.1-cudnn8-runtime

ENV DEBIAN_FRONTEND=noninteractive PIP_PREFER_BINARY=1

RUN apt-get update && apt-get install -y git && apt-get clean

ENV ROOT=/stable-diffusion
RUN --mount=type=cache,target=/root/.cache/pip \
  git clone https://github.com/comfyanonymous/ComfyUI.git ${ROOT} && \
  cd ${ROOT} && \
  git checkout master && \
# git reset --hard 276f8fce9f5a80b500947fb5745a4dde9e84622d && \
  pip install -r requirements.txt

WORKDIR ${ROOT}
COPY . /docker/
RUN chmod u+x /docker/entrypoint.sh && cp /docker/extra_model_paths.yaml ${ROOT}

ENV NVIDIA_VISIBLE_DEVICES=all PYTHONPATH="${PYTHONPATH}:${PWD}" CLI_ARGS=""
EXPOSE 7860
ENTRYPOINT ["/docker/entrypoint.sh"]
CMD python -u main.py --listen --port 7860 ${CLI_ARGS}
If you cloned the repo and want to verify your changes, you can do so with:
git diff
You should see something like
diff --git a/services/comfy/Dockerfile b/services/comfy/Dockerfile
index 2de504d..a84c8ce 100644
--- a/services/comfy/Dockerfile
+++ b/services/comfy/Dockerfile
@@ -9,7 +9,7 @@ RUN --mount=type=cache,target=/root/.cache/pip \
   git clone https://github.com/comfyanonymous/ComfyUI.git ${ROOT} && \
   cd ${ROOT} && \
   git checkout master && \
-  git reset --hard 276f8fce9f5a80b500947fb5745a4dde9e84622d && \
+# git reset --hard 276f8fce9f5a80b500947fb5745a4dde9e84622d && \
   pip install -r requirements.txt
 
 WORKDIR ${ROOT}
You'll want to grab any models you like from HuggingFace. I am using stabilityai/stable-diffusion-3-medium.
You'll want to download all of the models and then transfer them to your server and put them in the appropriate folders.
Models will need to be placed in the Stable-diffusion folder.
stable-diffusion-webui-docker/data/models/Stable-diffusion
Models are any files in the root of stable-diffusion-3-medium that have the extension *.safetensors.
For CLIP models, you'll need to create this folder (because it doesn't exist):
mkdir stable-diffusion-webui-docker/data/models/CLIPEncoder
In there you'll place your CLIP models, from the text_encoders folder of stable-diffusion-3-medium.
You'll need to download the same workflows to the machine that accesses ComfyUI so you can import them into the browser.
Example workflows are also available on HuggingFace in the Stable Diffusion 3 Medium repo.
If you're going to spend all of that time downloading these model files, you should also spend a few minutes verifying them. I typically do this once they are on the server running the AI Stack.
shasum -a 256 ./sd3_medium.safetensors
This should output something like:
cc236278d28c8c3eccb8e21ee0a67ebed7dd6e9ce40aa9de914fa34e8282f191 ./sd3_medium.safetensors
You'll want to be sure the checksum matches the one published by the source (HuggingFace, etc.).
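The comparison itself can be scripted so you don't have to eyeball two long hashes. A sketch using sha256sum's check mode, shown here against a stand-in file at a hypothetical /tmp path; in practice, use ./sd3_medium.safetensors and the hash published on the model card (on macOS, `shasum -a 256 -c` works the same way):

```shell
# Stand-in file; in practice this is ./sd3_medium.safetensors
printf 'demo-model-bytes' > /tmp/sd3_demo.safetensors

# Normally EXPECTED is copied from the model card, not computed locally
EXPECTED=$(sha256sum /tmp/sd3_demo.safetensors | awk '{print $1}')

# sha256sum -c prints "OK" and exits 0 on a match, non-zero on a mismatch
echo "${EXPECTED}  /tmp/sd3_demo.safetensors" | sha256sum -c -
```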
Please see the folder structure above.
Before running this, you will need to create the network for Docker to use.
This might already exist if you are using traefik. If so, skip this step.
docker network create traefik
This will create the macvlan
network. Adjust accordingly.
docker network create -d macvlan \
  --subnet=192.168.20.0/24 \
  --gateway=192.168.20.1 \
  -o parent=eth1 \
  iot_macvlan
---
services:
  homeassistant:
    container_name: homeassistant
    networks:
      iot_macvlan:
        ipv4_address: 192.168.20.202 # optional, I am using macvlan; if you don't want to, remove iot_macvlan and don't create the network above
      traefik:
    image: ghcr.io/home-assistant/home-assistant:stable
    depends_on:
      - faster-whisper-gpu
      - wyoming-piper
    volumes:
      - /etc/localtime:/etc/localtime:ro
      - /etc/timezone:/etc/timezone:ro
      - ./home-assistant/config:/config
    environment:
      - PUID=${PUID:-1000}
      - PGID=${PGID:-1000}
    restart: unless-stopped
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.homeassistant.rule=Host(`homeassistant.local.example.com`)"
      - "traefik.http.routers.homeassistant.entrypoints=https"
      - "traefik.http.routers.homeassistant.tls=true"
      - "traefik.http.routers.homeassistant.tls.certresolver=cloudflare"
      - "traefik.http.routers.homeassistant.middlewares=default-headers@file"
      - "traefik.http.services.homeassistant.loadbalancer.server.port=8123"

  faster-whisper-gpu:
    image: lscr.io/linuxserver/faster-whisper:gpu
    container_name: faster-whisper-gpu
    networks:
      - traefik
    environment:
      - PUID=${PUID:-1000}
      - PGID=${PGID:-1000}
      - WHISPER_MODEL=tiny-int8
      - WHISPER_BEAM=1 # optional
      - WHISPER_LANG=en # optional
    volumes:
      - /etc/localtime:/etc/localtime:ro
      - /etc/timezone:/etc/timezone:ro
      - ./faster-whisper/data:/config
    restart: unless-stopped
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: 1
              capabilities: [gpu]

  wyoming-piper:
    container_name: wyoming-piper
    networks:
      - traefik
    image: rhasspy/wyoming-piper # no gpu
    volumes:
      - /etc/localtime:/etc/localtime:ro
      - /etc/timezone:/etc/timezone:ro
      - ./wyoming-piper/data:/data
    environment:
      - PUID=${PUID:-1000}
      - PGID=${PGID:-1000}
    restart: unless-stopped
    command: --voice en_US-lessac-medium
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: 1
              capabilities: [gpu]

networks:
  traefik:
    external: true
  iot_macvlan:
    external: true
I am using Basic Auth Middleware with traefik. Please see the traefik section for details on how to set this up.
I am using Continue for code completion and integrated chat.
Example config.
If you aren't going to use auth, remove the requestOptions key.
If you are going to use auth, please replace the xxx with the value from above.
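That xxx placeholder is the standard HTTP Basic auth value: the base64 encoding of user:password for the user you configured in the traefik basic-auth middleware. A quick way to generate it (hypothetical credentials shown):

```shell
# The Basic auth header value is base64("user:password");
# echo -n avoids encoding a trailing newline
echo -n 'techno:tim' | base64
# → dGVjaG5vOnRpbQ==
```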
1
+2
+3
+4
+5
+6
+7
+8
+9
+10
+11
+12
+13
+14
+15
+16
+17
+18
+19
+20
+21
+22
+23
+24
+25
+26
+27
+
{
  "models": [
    {
      "title": "Ollama (Self-Hosted)",
      "provider": "ollama",
      "model": "AUTODETECT",
      "completionOptions": {},
      "apiBase": "https://ollama.local.example.com",
      "requestOptions": {
        "headers": {
          "Authorization": "Basic xxx"
        }
      }
    }
  ],
  "tabAutocompleteModel": {
    "title": "Starcoder 3b",
    "provider": "ollama",
    "model": "starcoder2:3b",
    "apiBase": "https://ollama.local.example.com",
    "requestOptions": {
      "headers": {
        "Authorization": "Basic xxx"
      }
    }
  }
}
🛍️ Check out the new Merch Shop at https://l.technotim.live/shop
⚙️ See all the hardware I recommend at https://l.technotim.live/gear
🚀 Don’t forget to check out the 🚀Launchpad repo with all of the quick start source files
🤝 Support me and help keep this site ad-free!
Ansible. Need I say more? Well, maybe, if you've never heard of it. Ansible is a simple IT / DevOps automation tool that anyone can use. You can automate anything with an SSH connection and WITHOUT installing any agents or clients. Join me as we set up, configure, and start automating with Ansible!
sudo apt update
sudo apt install ansible
sudo apt install sshpass
Note: Most distributions include an "older" version of Ansible. If you want to install the latest version of Ansible, see installing the latest version of Ansible.
hosts
[ubuntu]
server-01
server-02
192.168.0.100
192.168.0.101
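Inventory entries can also carry per-host connection variables so you don't have to pass --user on every run. A hypothetical sketch (ansible_host, ansible_user, and ansible_port are standard inventory variables):

```ini
[ubuntu]
; alias on the left, connection details as host variables
server-01 ansible_host=192.168.0.100 ansible_user=someuser ansible_port=22
```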
check ansible version
ansible --version
command with module
ansible -i ./inventory/hosts ubuntu -m ping --user someuser --ask-pass
command with playbook
ansible-playbook ./playbooks/apt.yml --user someuser --ask-pass --ask-become-pass -i ./inventory/hosts
apt.yml
- hosts: "*"
  become: yes
  tasks:
    - name: apt
      apt:
        update_cache: yes
        upgrade: 'yes'
qemu-guest-agent.yml
- name: install latest qemu-guest-agent
  hosts: "*"
  tasks:
    - name: install qemu-guest-agent
      apt:
        name: qemu-guest-agent
        state: present
        update_cache: true
      become: true
zsh.yml
- name: install latest zsh on all hosts
  hosts: "*"
  tasks:
    - name: install zsh
      apt:
        name: zsh
        state: present
        update_cache: true
      become: true
timezone.yml
- name: Set timezone and configure timesyncd
  hosts: "*"
  become: yes
  tasks:
    - name: set timezone
      shell: timedatectl set-timezone America/Chicago

    - name: Make sure timesyncd is stopped
      systemd:
        name: systemd-timesyncd.service
        state: stopped

    - name: Copy over the timesyncd config
      template: src=../templates/timesyncd.conf dest=/etc/systemd/timesyncd.conf

    - name: Make sure timesyncd is started
      systemd:
        name: systemd-timesyncd.service
        state: started
timesyncd.conf
# This file is part of systemd.
#
# systemd is free software; you can redistribute it and/or modify it
# under the terms of the GNU Lesser General Public License as published by
# the Free Software Foundation; either version 2.1 of the License, or
# (at your option) any later version.
#
# Entries in this file show the compile time defaults.
# You can change settings by editing this file.
# Defaults can be restored by simply deleting this file.
#
# See timesyncd.conf(5) for details.

[Time]
NTP=192.168.0.4
FallbackNTP=time.cloudflare.com
#RootDistanceMaxSec=5
#PollIntervalMinSec=32
#PollIntervalMaxSec=2048
Most distributions have an older version of Ansible installed. This is usually fine, except sometimes you may need features from the latest Ansible. Use the following commands to update Ansible to the latest version.
Check version
ansible --version
If it’s not the version you are looking for, check to see where it is installed
which ansible
If it lives somewhere like
/usr/bin/ansible
this is most likely due to your distribution installing it there.
Remove previous version
sudo apt remove ansible
Check to be sure it is removed
which ansible
You should see
ansible not found
Check to see that you have python3 and pip
python3 -m pip -V
You should see something like
pip 22.3.1 from /home/user/.local/lib/python3.8/site-packages/pip (python 3.8)
Install pip if the previous command couldn't find the pip module. The command below downloads get-pip.py; run it afterwards with python3 get-pip.py --user.
curl https://bootstrap.pypa.io/get-pip.py -o get-pip.py
Install ansible
python3 -m pip install --user ansible
Confirm your version with
ansible --version
Authelia is an open source Single Sign-On and 2FA companion for reverse proxies. It helps you secure your endpoints with single-factor and two-factor auth. It works with Nginx, Traefik, and HAProxy. Today, we'll configure Authelia with Portainer and Traefik and have two-factor up and running with brute force protection!
Authelia will work with other reverse proxies, but I used Traefik. If you want to configure Traefik as your reverse proxy, see this guide.
See this post on how to install docker and docker-compose.
configuration.yml, users_database.yml, and docker-compose.yml can be found here.
An example heimdall can be found here.
Traefik configuration changes can be found here.
$ docker run authelia/authelia:latest authelia hash-password 'yourpassword'
Password hash: $argon2id$v=19$m=65536$3oc26byQuSkQqksq$zM1QiTvVPrMfV6BVLs2t4gM+af5IN7euO0VB6+Q8ZFs
mkdir authelia
cd authelia
mkdir config
cd config
nano configuration.yml
nano users_database.yml
cd ..
nano docker-compose.yml
docker-compose up -d
This week I finally decided to automate the watering of my lawn and garden without irrigation, here’s how…
Since I don't have irrigation, I have to use hoses, but that's OK because I picked up these hose faucet timers. They're great because you can hook them up to any hose faucet. I picked up this manifold and connected 4 of these faucet timers, one for each zone. As you can see I also split one zone into 2, we'll talk about that in a sec. I can program each zone in the b-hyve app to turn on individually, and these sprinklers have a ton of watering options. The app also takes the rainfall into consideration so you're not wasting water. And it's not just for lawns, I also automated watering my garden, giving it just the right amount of water each day. These soakers, along with a pressure reducer, help deliver water exactly where I want it. The soakers also work great for flower boxes that don't receive any rain. If you're a geek like me, you can even connect this to Home Assistant or even HomeKit.
If you're looking for the Home Assistant plugin I used to manage these timers, you can find it here. Don't forget to ⭐ the repo!
Here’s the 4 port manifold I used to create 4 zones
Here are the 4 faucet timers I used to create 4 separate zones!
Here are the sprinklers in action! Highly recommend these because they have a ton of watering options and they are silent
See the whole kit here! - https://kit.co/TechnoTim/automated-lawn-and-garden-care
(Affiliate links are included in this description. I may receive a small commission at no cost to you.)
I am betting you have at least 3 infrared remote controls in your house. I am also willing to bet you would love to automate some of these from time to time. Well don't worry, I have the solution for you! In this video we walk through setting up a self-hosted, local-only Broadlink WiFi Smart Home Hub that you can use within your own home without connecting to the cloud. Added bonus: I built a Docker container you can pull down and add to your Rancher, Portainer, Synology, QNAP, or any server running Docker or Kubernetes. This includes a Python backend and API as well as a ReactJS frontend so that you can discover, learn, and send commands from the web UI or even from the web API. I hope you enjoy it!
Tracking things on the web just got a whole lot easier with ChangeDetection, the free and open source Docker container! Track website changes, price changes of products, and even track out of stock products with notifications all from a container you host yourself!
⭐ ChangeDetection on GitHub: https://github.com/dgtlmoon/changedetection.io
See this post on how to install docker and docker compose.
Create a folder for your compose file and mounts
mkdir changedetection
cd changedetection
Then we’ll create a folder to hold our data and our datastore
mkdir data
cd data
mkdir datastore
cd datastore
cd ../.. # go back to the root of changedetection/
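The same folder structure can be created in one step; mkdir's -p flag creates any missing parent directories:

```shell
# Equivalent to the mkdir/cd sequence above, in a single command
mkdir -p changedetection/data/datastore
```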
Create docker compose file and add contents
touch compose.yaml
nano compose.yaml
Your folder structure should look like this
./changedetection
├── data
│   └── datastore
└── compose.yaml
Simple version of change detection
---
services:
  changedetection:
    image: ghcr.io/dgtlmoon/changedetection.io:latest
    container_name: changedetection
    hostname: changedetection
    # environment:
    #   - BASE_URL=https://mysite.com # configure this for your own domain
    volumes:
      - ./data/datastore:/datastore
    ports:
      - 5000:5000
Advanced version of change detection
If you want to use Selenium + Webdriver, uncomment the WEBDRIVER_URL variable and the browser-chrome service, and then comment out the PLAYWRIGHT_DRIVER_URL variable and the playwright-chrome service.
To see all supported configurations, see the Docker compose file on GitHub
---
services:
  changedetection:
    image: ghcr.io/dgtlmoon/changedetection.io:latest
    container_name: changedetection
    hostname: changedetection
    volumes:
      - ./data/datastore:/datastore
    ports:
      - 5000:5000
    environment:
      # - WEBDRIVER_URL=http://playwright-chrome:4444/wd/hub
      - PLAYWRIGHT_DRIVER_URL=ws://playwright-chrome:3000
      # - BASE_URL=https://mysite.com # configure this for your own domain
    depends_on:
      playwright-chrome:
        condition: service_started
    restart: unless-stopped

  # browser-chrome:
  #   hostname: browser-chrome
  #   image: selenium/standalone-chrome:125.0
  #   shm_size: '2gb'
  #   # volumes:
  #   #   # Workaround to avoid the browser crashing inside a docker container
  #   #   # See https://github.com/SeleniumHQ/docker-selenium#quick-start
  #   #   - /dev/shm:/dev/shm
  #   restart: unless-stopped

  playwright-chrome:
    hostname: playwright-chrome
    image: browserless/chrome
    restart: unless-stopped
    environment:
      - SCREEN_WIDTH=1920
      - SCREEN_HEIGHT=1024
      - SCREEN_DEPTH=16
      - ENABLE_DEBUGGER=false
      - PREBOOT_CHROME=true
      - CONNECTION_TIMEOUT=300000
      - MAX_CONCURRENT_SESSIONS=10
      - CHROME_REFRESH_TIME=600000
      - DEFAULT_BLOCK_ADS=true
      - DEFAULT_STEALTH=true
      # Ignore HTTPS errors, like for self-signed certs
      - DEFAULT_IGNORE_HTTPS_ERRORS=true
If you want to install the Chrome Extension, you can by adding it here.
Then all you need to do to configure it is visit your ChangeDetection site and click Settings, and it will automatically configure it for you!
Then when visiting a site, all you need to do is click the extension and click add!
Using Cloud Images and Cloud Init with Proxmox is easy, fast, efficient, and fun! Cloud Images are small images that are certified cloud-ready and have Cloud Init preinstalled, ready to accept a Cloud Config. Cloud Images and Cloud Init also work with Proxmox, and if you combine the two you have a perfect, small, efficient, optimized clone template to provision machines with your SSH keys and network settings. So join me as we discuss, set up, and configure Proxmox with Cloud Images and Cloud Init.
Choose your Ubuntu Cloud Image
Download Ubuntu (replace with the url of the one you chose from above)
wget https://cloud-images.ubuntu.com/noble/current/noble-server-cloudimg-amd64.img
Create a new virtual machine
qm create 8000 --memory 2048 --cores 2 --name ubuntu-cloud --net0 virtio,bridge=vmbr0
Import the downloaded Ubuntu disk to local storage (Change local to the storage of your choice)
qm disk import 8000 noble-server-cloudimg-amd64.img local
Attach the new disk to the vm as a scsi drive on the scsi controller (Change local to the storage of your choice)
qm set 8000 --scsihw virtio-scsi-pci --scsi0 local:vm-8000-disk-0
Add cloud init drive (Change local to the storage of your choice)
qm set 8000 --ide2 local:cloudinit
Make the cloud init drive bootable and restrict BIOS to boot from disk only
qm set 8000 --boot c --bootdisk scsi0
Add serial console
qm set 8000 --serial0 socket --vga serial0
DO NOT START YOUR VM
Now, configure hardware and cloud init, then create a template and clone. If you want to expand your hard drive, you can do it on this base image before creating a template, or after you clone a new machine. I prefer to expand the hard drive after I clone a new machine, based on need.
Create template.
qm template 8000
Clone template.
qm clone 8000 135 --name yoshi --full
If you need to reset your machine-id
sudo rm -f /etc/machine-id
sudo rm -f /var/lib/dbus/machine-id
Then shut it down and do not boot it up. A new id will be generated the next time it boots. If it does not, you can run:
sudo systemd-machine-id-setup
Have you ever wanted to run VS Code in your browser? What if you had access to your terminal and could pull and commit code as well as push it up to GitHub, all from a browser or tablet? That's exactly what code-server does! In this tutorial we'll walk through, step by step, how to install and configure code-server to get it self-hosted in your homelab. We'll start with bare metal and virtualization and then work our way up to Docker, Kubernetes, and Rancher. Then you don't have to carry around your laptop anymore! You can preserve battery life on the go and leave the intensive tasks to your homelab server.
This simple but powerful little adapter lets you build your own Zigbee network and easily add and manage it in Home Assistant, no hub required!
A simplified Home Assistant Stack with Zigbee2MQTT Support. Don’t forget to update your Zigbee2MQTT configuration!
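Zigbee2MQTT reads its settings from the mounted data folder (./zigbee2mqtt/data/configuration.yaml in the stack below). A minimal sketch — the serial port and MQTT address here are assumptions, so point them at your own adapter (a USB stick path, or a tcp:// address for a network coordinator) and at the mqtt service from this stack:

```yaml
# Minimal Zigbee2MQTT configuration sketch -- adjust for your adapter
homeassistant: true          # enable Home Assistant MQTT discovery
permit_join: false           # turn on only while pairing devices
mqtt:
  base_topic: zigbee2mqtt
  server: mqtt://mqtt:1883   # the mosquitto container from this stack
serial:
  port: /dev/ttyUSB0         # or tcp://<adapter-ip>:<port> for a network coordinator
frontend:
  port: 8080                 # web UI, matches the port exposed below
```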
---
services:
  homeassistant:
    container_name: homeassistant
    image: ghcr.io/home-assistant/home-assistant:stable
    depends_on:
      - mqtt
      - zigbee2mqtt
    volumes:
      - /etc/localtime:/etc/localtime:ro
      - /etc/timezone:/etc/timezone:ro
      - ./home-assistant/config:/config
    environment:
      - PUID=${PUID:-1000}
      - PGID=${PGID:-1000}
    restart: unless-stopped
    ports:
      - 8123:8123

  mqtt:
    container_name: mqtt
    environment:
      - PUID=${PUID:-1000}
      - PGID=${PGID:-1000}
    image: eclipse-mosquitto:latest
    restart: unless-stopped
    volumes:
      - /etc/localtime:/etc/localtime:ro
      - /etc/timezone:/etc/timezone:ro
      - ./mqtt/data:/mosquitto
    ports:
      - 1883:1883
      - 9001:9001
    command: mosquitto -c /mosquitto-no-auth.conf

  zigbee2mqtt:
    container_name: zigbee2mqtt
    environment:
      - PUID=${PUID:-1000}
      - PGID=${PGID:-1000}
    restart: unless-stopped
    image: koenkk/zigbee2mqtt:latest
    volumes:
      - /etc/localtime:/etc/localtime:ro
      - /etc/timezone:/etc/timezone:ro
      - ./zigbee2mqtt/data:/app/data
    ports:
      - 8080:8080
My Home Assistant Stack:
1
+2
+3
+4
+5
+6
+7
+8
+9
+10
+11
+12
+13
---
services:
  homeassistant:
    container_name: homeassistant
    networks:
      iot_macvlan:
        ipv4_address: 192.168.20.202
      traefik:
    image: ghcr.io/home-assistant/home-assistant:stable
    depends_on:
      - faster-whisper-gpu
      - wyoming-piper
      - mqtt
      - zigbee2mqtt
    volumes:
      - /etc/localtime:/etc/localtime:ro
      - /etc/timezone:/etc/timezone:ro
      - ./home-assistant/config:/config
    environment:
      - PUID=${PUID:-1000}
      - PGID=${PGID:-1000}
    restart: unless-stopped
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.homeassistant.rule=Host(`homeassistant.local.techtronic.us`)"
      - "traefik.http.routers.homeassistant.entrypoints=https"
      - "traefik.http.routers.homeassistant.tls=true"
      - "traefik.http.routers.homeassistant.tls.certresolver=cloudflare"
      - "traefik.http.routers.homeassistant.middlewares=default-headers@file"
      - "traefik.http.services.homeassistant.loadbalancer.server.port=8123"
      - "com.centurylinklabs.watchtower.enable=true"
  faster-whisper-gpu:
    image: lscr.io/linuxserver/faster-whisper:gpu
    container_name: faster-whisper-gpu
    networks:
      - traefik
    environment:
      - PUID=${PUID:-1000}
      - PGID=${PGID:-1000}
      - WHISPER_MODEL=tiny-int8
      - WHISPER_BEAM=1 # optional
      - WHISPER_LANG=en # optional
    volumes:
      - /etc/localtime:/etc/localtime:ro
      - /etc/timezone:/etc/timezone:ro
      - ./faster-whisper/data:/config
    restart: unless-stopped
    labels:
      - "com.centurylinklabs.watchtower.enable=true"
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: 1
              capabilities: [gpu]
  wyoming-piper:
    container_name: wyoming-piper
    networks:
      - traefik
    image: rhasspy/wyoming-piper # no gpu
    volumes:
      - /etc/localtime:/etc/localtime:ro
      - /etc/timezone:/etc/timezone:ro
      - ./wyoming-piper/data:/data
    environment:
      - PUID=${PUID:-1000}
      - PGID=${PGID:-1000}
    restart: unless-stopped
    labels:
      - "com.centurylinklabs.watchtower.enable=true"
    command: --voice en_US-lessac-medium
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: 1
              capabilities: [gpu]
  mqtt:
    container_name: mqtt
    networks:
      - traefik
    environment:
      - PUID=${PUID:-1000}
      - PGID=${PGID:-1000}
    image: eclipse-mosquitto:latest
    restart: unless-stopped
    labels:
      - "com.centurylinklabs.watchtower.enable=true"
    volumes:
      - /etc/localtime:/etc/localtime:ro
      - /etc/timezone:/etc/timezone:ro
      - ./mqtt/data:/mosquitto
    command: mosquitto -c /mosquitto-no-auth.conf

  zigbee2mqtt:
    container_name: zigbee2mqtt
    networks:
      iot_macvlan:
        ipv4_address: 192.168.20.204
      traefik:
    environment:
      - PUID=${PUID:-1000}
      - PGID=${PGID:-1000}
    restart: unless-stopped
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.zigbee2mqtt.rule=Host(`zigbee2mqtt.local.techtronic.us`)"
      - "traefik.http.routers.zigbee2mqtt.entrypoints=https"
      - "traefik.http.routers.zigbee2mqtt.tls=true"
      - "traefik.http.routers.zigbee2mqtt.tls.certresolver=cloudflare"
      - "traefik.http.routers.zigbee2mqtt.middlewares=default-headers@file"
      - "traefik.http.services.zigbee2mqtt.loadbalancer.server.port=8080"
      - "com.centurylinklabs.watchtower.enable=true"
    image: koenkk/zigbee2mqtt:latest
    volumes:
      - /etc/localtime:/etc/localtime:ro
      - /etc/timezone:/etc/timezone:ro
      - ./zigbee2mqtt/data:/app/data
networks:
  traefik:
    external: true
  iot_macvlan:
    external: true
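Note that the mqtt service above starts Mosquitto with its bundled no-auth config (`mosquitto -c /mosquitto-no-auth.conf`), so any client on the network can connect. Outside a trusted VLAN you'd likely want credentials; here's a minimal sketch of an authenticated config (the filename and paths are my assumptions, not from the video):

```shell
# mosquitto.conf sketch: require a username/password instead of anonymous access
cat > mosquitto.conf.sample <<'EOF'
listener 1883
allow_anonymous false
password_file /mosquitto/config/passwords
EOF
# on a real broker you would then create the password file with:
#   mosquitto_passwd -c /mosquitto/config/passwords someuser
grep 'allow_anonymous' mosquitto.conf.sample
```

You'd mount this file into the container and change the command to point at it instead of the no-auth config.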
🛍️ Check out the new Merch Shop at https://l.technotim.live/shop
⚙️ See all the hardware I recommend at https://l.technotim.live/gear
🚀 Don’t forget to check out the 🚀Launchpad repo with all of the quick start source files
🤝 Support me and help keep this site ad-free!
45Drives is De-Microsoft-ifying and leading the charge by replacing Windows with Linux desktops and replacing proprietary solutions with open source. This topic of “demicrosoftification” was discussed at the Creator Summit 2024 at 45Drives headquarters. I attended along with many other tech YouTubers in this space. They also gave us a sneak peek at some new hardware and an early look at some unique prototypes.
Thanks to 45Drives for inviting me to this event! If you’d like to learn more about them you can here: https://www.45drives.com/techno-tim
Here are some helpful links:
CrowdSec is a free, open-source and collaborative IPS. Analyze behaviors, respond to attacks & share signals across the community. With CrowdSec, you can set up your own intrusion detection system that parses logs, detects and blocks threats, and shares bad actors with the larger CrowdSec community. It works great with a reverse proxy like traefik to help keep hackers at bay. Could this be a viable alternative to fail2ban?
A HUGE THANK YOU to Micro Center for sponsoring this video!
New Customers Exclusive – Get a Free 240gb SSD at Micro Center: https://micro.center/1fbb85
If you need to set up traefik, you can follow this post here on configuring traefik
If you need a high level overview of HomeLab and Self-Hosting Security, check out this video that will help you keep your network safe.
traefik bouncer repo https://github.com/fbonalair/traefik-crowdsec-bouncer
mkdir crowdsec
cd crowdsec
touch docker-compose.yml
nano docker-compose.yml
version: '3.8'
services:
  crowdsec:
    image: crowdsecurity/crowdsec:latest
    container_name: crowdsec
    environment:
      GID: "${GID-1000}"
      COLLECTIONS: "crowdsecurity/linux crowdsecurity/traefik"
    # depends_on: # uncomment if running traefik in the same compose file
    #   - 'traefik'
    volumes:
      - ./config/acquis.yaml:/etc/crowdsec/acquis.yaml
      - crowdsec-db:/var/lib/crowdsec/data/
      - crowdsec-config:/etc/crowdsec/
      - traefik_traefik-logs:/var/log/traefik/:ro
    networks:
      - proxy
    restart: unless-stopped

  bouncer-traefik:
    image: docker.io/fbonalair/traefik-crowdsec-bouncer:latest
    container_name: bouncer-traefik
    environment:
      CROWDSEC_BOUNCER_API_KEY: some-api-key
      CROWDSEC_AGENT_HOST: crowdsec:8080
    networks:
      - proxy # same network as traefik + crowdsec
    depends_on:
      - crowdsec
    restart: unless-stopped
networks:
  proxy:
    external: true
volumes:
  crowdsec-db:
  crowdsec-config:
  traefik_traefik-logs: # this is the name of the volume for the traefik logs
    external: true # remove if traefik is running in the same stack
cd config
touch acquis.yaml
nano acquis.yaml
docker-compose up -d --force-recreate
filenames:
  - /var/log/traefik/*
labels:
  type: traefik
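If you'd rather skip the interactive editor, the same acquis.yaml can be written in one shot with a heredoc (assuming you're in the crowdsec directory):

```shell
# write the acquisition config that tells CrowdSec which log files to parse
mkdir -p config
cat > config/acquis.yaml <<'EOF'
filenames:
  - /var/log/traefik/*
labels:
  type: traefik
EOF
```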
cd traefik
cd data
nano traefik.yml
api:
  dashboard: true
  debug: true
entryPoints:
  http:
    address: ":80"
    http:
      middlewares:
        - crowdsec-bouncer@file
  https:
    address: ":443"
    http:
      middlewares:
        - crowdsec-bouncer@file
serversTransport:
  insecureSkipVerify: true
providers:
  docker:
    endpoint: "unix:///var/run/docker.sock"
    exposedByDefault: false
  file:
    filename: /config.yml
certificatesResolvers:
  cloudflare:
    acme:
      email: someone@example.com
      storage: acme.json
      dnsChallenge:
        provider: cloudflare
        resolvers:
          - "1.1.1.1:53"
log:
  level: "INFO"
  filePath: "/var/log/traefik/traefik.log"
accessLog:
  filePath: "/var/log/traefik/access.log"
nano docker-compose.yml
version: '3'

services:
  traefik:
    image: traefik:latest
    container_name: traefik
    restart: unless-stopped
    security_opt:
      - no-new-privileges:true
    networks:
      - proxy
    ports:
      - 80:80
      - 443:443
    environment:
      - CF_API_EMAIL=user@example.com
      - CF_DNS_API_TOKEN=YOUR_API_TOKEN
      # - CF_API_KEY=YOUR_API_KEY
      # be sure to use the correct one depending on if you are using a token or key
    volumes:
      - /etc/localtime:/etc/localtime:ro
      - /var/run/docker.sock:/var/run/docker.sock:ro
      - /home/username/traefik/data/traefik.yml:/traefik.yml:ro
      - /home/username/traefik/data/acme.json:/acme.json
      - /home/username/traefik/data/config.yml:/config.yml:ro
      - traefik-logs:/var/log/traefik
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.traefik.entrypoints=http"
      - "traefik.http.routers.traefik.rule=Host(`traefik-dashboard.local.example.com`)"
      - "traefik.http.middlewares.traefik-auth.basicauth.users=USER:BASIC_AUTH_PASSWORD"
      - "traefik.http.middlewares.traefik-https-redirect.redirectscheme.scheme=https"
      - "traefik.http.middlewares.sslheader.headers.customrequestheaders.X-Forwarded-Proto=https"
      - "traefik.http.routers.traefik.middlewares=traefik-https-redirect"
      - "traefik.http.routers.traefik-secure.entrypoints=https"
      - "traefik.http.routers.traefik-secure.rule=Host(`traefik-dashboard.local.example.com`)"
      - "traefik.http.routers.traefik-secure.middlewares=traefik-auth"
      - "traefik.http.routers.traefik-secure.tls=true"
      - "traefik.http.routers.traefik-secure.tls.certresolver=cloudflare"
      - "traefik.http.routers.traefik-secure.tls.domains[0].main=local.example.com"
      - "traefik.http.routers.traefik-secure.tls.domains[0].sans=*.local.example.com"
      - "traefik.http.routers.traefik-secure.service=api@internal"

networks:
  proxy:
    external: true
volumes:
  traefik-logs:
docker-compose up -d --force-recreate
cd config/data
nano config.yml
Add the following:
crowdsec-bouncer:
  forwardauth:
    address: http://bouncer-traefik:8080/api/v1/forwardAuth
    trustForwardHeader: true
nano traefik.yml
# check to be sure you have your middleware set for both
entryPoints:
  http:
    address: ":80"
    http:
      middlewares:
        - crowdsec-bouncer@file
  https:
    address: ":443"
    http:
      middlewares:
        - crowdsec-bouncer@file
To add a self-hosted dashboard, update your docker-compose.yml
cd crowdsec
touch Dockerfile
FROM metabase/metabase
RUN mkdir /data/ && wget https://crowdsec-statics-assets.s3-eu-west-1.amazonaws.com/metabase_sqlite.zip && unzip metabase_sqlite.zip -d /data/
nano docker-compose.yml
dashboard:
  # we're using a custom Dockerfile so that metabase pops up with pre-configured dashboards
  build: ./dashboard
  restart: always
  ports:
    - 3000:3000
  environment:
    MB_DB_FILE: /data/metabase.db
    MGID: "${GID-1000}"
  depends_on:
    - 'crowdsec'
  volumes:
    - crowdsec-db:/metabase-data/
  networks:
    crowdsec_test:
      ipv4_address: 172.20.0.5
restart the container
docker-compose up -d --force-recreate
The default credentials for Metabase are crowdsec@crowdsec.net / !!Cr0wdS3c_M3t4b4s3??
Be sure to change these.
see metrics
docker exec crowdsec cscli metrics
see bans
docker exec crowdsec cscli decisions list
manually install collections
docker exec crowdsec cscli collections install crowdsecurity/traefik
update hubs
docker exec crowdsec cscli hub update
upgrade hubs
docker exec crowdsec cscli hub upgrade
add bouncer
(save api key somewhere)
docker exec crowdsec cscli bouncers add bouncer-traefik
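The compose file above hard-codes CROWDSEC_BOUNCER_API_KEY as some-api-key. One way to avoid committing the real key (my suggestion, not from the video) is to keep it in an .env file next to docker-compose.yml and reference it as ${CROWDSEC_BOUNCER_API_KEY}:

```shell
# store the bouncer API key outside the compose file
# (the value below is a placeholder -- paste the key that cscli printed)
cat > .env <<'EOF'
CROWDSEC_BOUNCER_API_KEY=paste-key-from-cscli-here
EOF
chmod 600 .env
```

Compose reads .env automatically, so in docker-compose.yml the environment entry becomes CROWDSEC_BOUNCER_API_KEY: ${CROWDSEC_BOUNCER_API_KEY}.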
ban ip
docker exec crowdsec cscli decisions add --ip 192.168.0.101
unban ip
docker exec crowdsec cscli decisions delete --ip 192.168.0.101
We spin up all types of containers on my channel in my tutorials, but we have yet to build our own custom Docker container image. Today we'll start from scratch with an empty Dockerfile and create, build, and run our very own custom Docker image! We'll learn all the commands that everyone should know when building and maintaining images with Docker. This tutorial is a great way to get started with Docker!
To install docker, see this post
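The video builds its own Dockerfile from scratch; as a stand-in so the commands below have something to work on, here's a minimal example that serves a static page with nginx (the file contents are my sketch, not the video's):

```shell
# create a build context with an index page and a two-line Dockerfile
mkdir -p hello-internet
cat > hello-internet/index.html <<'EOF'
<h1>Hello, Internet!</h1>
EOF
cat > hello-internet/Dockerfile <<'EOF'
# start from the official nginx image and drop our page into the web root
FROM nginx:alpine
COPY index.html /usr/share/nginx/html/index.html
EOF
```

From inside hello-internet/ you can then follow along with the build and run commands below.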
build image
docker build .
build image with tag
docker build -t hello-internet .
list docker images
docker images
list docker containers
docker ps
list docker containers including stopped
docker ps -a
create container from image
docker run -d -p 80:80 <image id>
exec into running container
docker exec -it <container id> /bin/sh
stop running container
docker stop <container id>
start a stopped container
docker start <container id>
remove a container
docker rm <container id>
remove an image
docker rmi <image id>
My life, run against a neural network and detected by Deep Learning. If you'd like to see how this video was generated using ML and Deep Learning, check out the video here:
Ever wonder what my home office and studio looks like and which tools I use? Check out my NEW ultimate desk & setup Tour for 2023! (My setup, my desk, my workbench, and even my studio rack for 2023!)
Disclosures:
Thanks to Grovemade for helping me organize my desk! Use code TECHNOTIM
for 10% off! https://l.technotim.live/grovemade
Huge shout out to Elgato for helping me get my stream in check! https://www.elgato.com
Here are my notes from the video in case you wanted a little more context than I was able to provide in the video! This was such a fun project, but I am glad I am done! Let me know if you have any questions in the comments below! All parts are linked in the Where to Buy section below!
Samsung 57” Ultrawide and my Dell 4k monitor!
Having a desk system like this one from Grovemade freed up a lot of space on my desk
Streaming has never been easier with my Elgato gear!
You might be noticing all of these things attached to my desk, up here I have a pair of Elgato Key Lights that help control my lighting when streaming and recording, then a bunch of arms. These arms help move my gear into the best place possible
This consists of velcro, the right power strips, an under desk basket, and a few systems underneath to hold cables and cords. USB Hubs and switchers help tidy this up too and allow me to switch between Windows and Mac.
Although my workbench is small, organizing it is key to getting projects done on time!
Products in this video:
Here are the items in the video, let me know if I missed anything! (some are affiliate links)
On my desk:
Cable management:
Studio Rack:
Workbench:
Mobile Workbench / Storage:
🛍️ See the whole kit https://kit.co/TechnoTim/techno-tim-desk-studio-2023
(Affiliate links are included in this description. I may receive a small commission at no cost to you.)
Let’s build a bot! Not a bad bot like a view bot, but a bot for good. Let’s build a Discord moderator bot using discord.js! Discord is a powerful chat + video client and already has lots of great bots, however no bot has the flexibility of creating your own! In this video I will show you how to build a Discord bot using DiscordJS from start to finish. You’ll see how to use the developer portal, create a bot using JavaScript, NodeJS, and NPM, invite the bot to your Discord server, and have it moderate some of your channels. We have made this bot open source and will continue to contribute to this bot.
This guide will walk you through how to Install Docker Engine, containerd, and Docker Compose on Ubuntu.
If you have an existing version of Docker installed, it is best to remove it first. See the Cleaning Up section.
If you’re installing this on Debian, see Docker’s Debian Install Guide
Set up Docker’s apt repository.
# Add Docker's official GPG key:
sudo apt-get update
sudo apt-get install ca-certificates curl gnupg
sudo install -m 0755 -d /etc/apt/keyrings
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /etc/apt/keyrings/docker.gpg
sudo chmod a+r /etc/apt/keyrings/docker.gpg

# Add the repository to Apt sources:
echo \
  "deb [arch="$(dpkg --print-architecture)" signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/ubuntu \
  "$(. /etc/os-release && echo "$VERSION_CODENAME")" stable" | \
  sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
sudo apt-get update
If you use an Ubuntu derivative distro, such as Linux Mint, you may need to use UBUNTU_CODENAME instead of VERSION_CODENAME.
Install the latest version
sudo apt-get install docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin
Check Installed version
docker -v
Check docker compose
docker compose
Check runtime
sudo docker run hello-world
sudo usermod -aG docker $USER
You’ll need to log out then back in to apply this
If you need to uninstall Docker, run the following
sudo apt-get purge docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin docker-ce-rootless-extras
🛍️ Check out the new Merch Shop at https://l.technotim.live/shop
⚙️ See all the hardware I recommend at https://l.technotim.live/gear
🚀 Don’t forget to check out the 🚀Launchpad repo with all of the quick start source files
If you want to set up Kubernetes at home using Rancher to run Docker containers, this is the guide for you. This is a step by step tutorial of how to install and configure Rancher, Docker, and Kubernetes for your homelab. In this video we set up and configure a Minecraft server in just a matter of minutes with some enterprise-like features. You can use this same process to spin up other Docker containers at home on your server or desktop.
To install docker, see this post
The two paths in the workload configuration need to be reversed:
Path on the Node should be mc
Mount Point should be /data
You’ll want to use a command similar to this so that there aren’t any port conflicts with other services or kubernetes itself.
Also, you may want to consider pinning your docker tag to a version other than latest
to make backing up and upgrading easier. See here for the latest version.
docker run -d --restart=unless-stopped -p 9090:80 -p 9091:443 --privileged -v /opt/rancher:/var/lib/rancher --name=rancher_docker_server rancher/rancher:latest
The local cluster is a management cluster for Rancher. You should create a new cluster for your workload, just like in this video.
Dual booting Windows and Ubuntu Linux can be a pain, however there are many benefits of doing this if you do it right. In this tutorial we’ll discuss how to dual boot Windows and Ubuntu on your PC or laptop in a few simple steps so that you can take advantage of all the hardware in your “best” machine with full access to your GPU.
Are you trying to access your self-hosted services outside of your firewall? Are you tired of trying to remember your IP when away, or worse yet, having your ISP change your IP address? Have you not purchased a domain yet but want to access your own personal VPN? If you answered “YES” to any of these, join me as we walk through this step-by-step tutorial and set up DuckDNS, the free dynamic DNS service, using Docker and then move on to use Rancher and Kubernetes.
FAST.com is a speed test that gives you an estimate of your current internet speed. It was created by Netflix to bring transparency to your upload / download speeds and to see if your ISP may be prioritizing traffic. I’ve run this quite a bit in a browser to do a quick spot check of my speeds, but I’ve never had a great tool to check this from some of my Linux machines. Let me clarify: some of my Linux servers that do not have a browser. That’s until I found this utility, fast. fast is an open source utility to run internet speed checks from machines that don’t have a browser, from the terminal, all in a small, zero dependency binary. You can read more about it on the GitHub repo.
We’re going to use curl, so you’ll want to be sure you have it installed
curl -V
This should return something similar to the following
curl 7.68.0 (x86_64-pc-linux-gnu) libcurl/7.68.0 OpenSSL/1.1.1f zlib/1.2.11 brotli/1.0.7 libidn2/2.2.0 libpsl/0.21.0 (+libidn2/2.2.0) libssh/0.9.3/openssl/zlib nghttp2/1.40.0 librtmp/2.3
Release-Date: 2020-01-08
Protocols: dict file ftp ftps gopher http https imap imaps ldap ldaps pop3 pop3s rtmp rtsp scp sftp smb smbs smtp smtps telnet tftp
Features: AsynchDNS brotli GSS-API HTTP2 HTTPS-proxy IDN IPv6 Kerberos Largefile libz NTLM NTLM_WB PSL SPNEGO SSL TLS-SRP UnixSockets
Then we’ll want to download the latest fast binary by running
LATEST_VERSION=$(curl -s "https://api.github.com/repos/ddo/fast/releases/latest" | grep -Po '"tag_name": "v\K[0-9.]+')

curl -L https://github.com/ddo/fast/releases/download/v${LATEST_VERSION}/fast_linux_$(dpkg --print-architecture) -o fast
If you want to use wget instead of curl, you can run the following
LATEST_VERSION=$(curl -s "https://api.github.com/repos/ddo/fast/releases/latest" | grep -Po '"tag_name": "v\K[0-9.]+')

wget https://github.com/ddo/fast/releases/download/v${LATEST_VERSION}/fast_linux_$(dpkg --print-architecture) -O fast
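As an aside, the grep -Po pattern in both variants is what extracts the bare version number from the GitHub release JSON; here it is run against a canned response so you can see exactly what it yields (the JSON sample is illustrative):

```shell
# \K discards everything matched so far, leaving only the digits and dots after "v"
sample='{"tag_name": "v1.0.0", "name": "v1.0.0 release"}'
version=$(printf '%s' "$sample" | grep -Po '"tag_name": "v\K[0-9.]+')
echo "$version"   # -> 1.0.0
```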
Then we’ll want to make it executable by running
chmod +x fast
Then we can run a speed test by running
./fast
This should return something similar to the following
➜ ~ ./fast
 -> 477.72 Mbps
That’s it! You can now run an internet speed test from the Linux CLI without a browser! What’s your download speed?
After setting up my Proxmox servers, there are a few things I do before I use them for their intended purpose. This ranges from updates, to storage, to networking and VLANs, to uploading ISOs, to clustering, and more. Join me as we pick up where the rest of the Proxmox tutorials stop, and that’s everything you need to do to make these production ready (and maybe a bonus item too).
Edit /etc/apt/sources.list
deb http://ftp.us.debian.org/debian buster main contrib

deb http://ftp.us.debian.org/debian buster-updates main contrib

# security updates
deb http://security.debian.org buster/updates main contrib

# not for production use
deb http://download.proxmox.com/debian buster pve-no-subscription
(for a full guide on Proxmox 7, please see this link)
deb http://ftp.debian.org/debian bullseye main contrib

deb http://ftp.debian.org/debian bullseye-updates main contrib

# security updates
deb http://security.debian.org/debian-security bullseye-security main contrib

# PVE pve-no-subscription repository provided by proxmox.com,
# NOT recommended for production use
deb http://download.proxmox.com/debian/pve bullseye pve-no-subscription
Edit /etc/apt/sources.list.d/pve-enterprise.list
# deb https://enterprise.proxmox.com/debian/pve buster pve-enterprise
Create a file at /etc/apt/sources.list.d/pve-no-enterprise.list with the following contents:
# not for production use
deb http://download.proxmox.com/debian/pve bookworm pve-no-subscription
If you are using Ceph, create a file at /etc/apt/sources.list.d/ceph.list with the following contents:
# not for production use
deb http://download.proxmox.com/debian/ceph-quincy bookworm no-subscription
If you’re looking to upgrade to Proxmox 8, see this post
Run
apt-get update
apt dist-upgrade
reboot
BE CAREFUL. This will wipe your disks.
fdisk /dev/sda
Then p to print the partition table, d to delete a partition, and w to write the changes.
smartctl -a /dev/sda
You’ll first want to be sure that VT-d / IOMMU is enabled in your BIOS before continuing.
If you see “No IOMMU detected, please activate it. See Documentation for further information.”, it means that IOMMU is not enabled in your BIOS or has not been enabled in Proxmox yet. If you’re seeing this and you’ve enabled it in your BIOS, you can enable it in Proxmox below.
Enabling PCI passthrough depends on your boot manager. You can check to see which one you are using by running
efibootmgr -v
If it returns an error, it’s running in Legacy/BIOS mode with GRUB; skip to the GRUB section.
If it returns something like this, it’s using GRUB:
Boot0002* proxmox HD(2,GPT,b0f10348-020c-4bd6-b002-dc80edcf1899,0x800,0x100000)/File(\EFI\proxmox\shimx64.efi)
If it returns something like this, it’s using systemd-boot; skip to the systemd-boot section:
Boot0006 * Linux Boot Manager [...] File(EFI\systemd\systemd-bootx64.efi)
If you’re using GRUB, use the following commands:
nano /etc/default/grub
add iommu=pt to GRUB_CMDLINE_LINUX_DEFAULT like so:
GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on iommu=pt"
If you aren’t using an Intel processor, remove intel_iommu=on. Then run update-grub and reboot.
If you’re using systemd-boot, use the following commands.
nano /etc/kernel/cmdline
add intel_iommu=on iommu=pt to the end of this line without line breaks
root=ZFS=rpool/ROOT/pve-1 boot=zfs intel_iommu=on iommu=pt
If you aren’t using an Intel processor, remove intel_iommu=on
run
pve-efiboot-tool refresh
then reboot
reboot
Edit /etc/modules
vfio
vfio_iommu_type1
vfio_pci
vfio_virqfd
run
update-initramfs -u -k all
then reboot
reboot
If you’re planning on using an NVIDIA card, I’ve found this helps prevent some apps like GPUz from crashing on the VM.
echo "options kvm ignore_msrs=1 report_ignored_msrs=0" > /etc/modprobe.d/kvm.conf
If you want to restrict your VLANs
nano /etc/network/interfaces
Set your VLAN here
bridge-vlan-aware yes
bridge-vids 20
nano /etc/network/interfaces
auto eno1
iface eno1 inet manual

auto eno2
iface eno2 inet manual

auto bond0
iface bond0 inet manual
    bond-slaves eno1 eno2
    bond-miimon 100
    bond-mode 802.3ad
    bond-xmit-hash-policy layer2+3

auto vmbr0
iface vmbr0 inet static
    address 192.168.0.11/24
    gateway 192.168.0.1
    bridge-ports bond0
    bridge-stp off
    bridge-fd 0
    bridge-vlan-aware yes
    bridge-vids 2-4094
#lacp nic team
If you’re running Proxmox 7, see the modified config here for LAGG / LACP
These are the commands I run after cloning a Linux machine so that it resets all information for the machine it was cloned from.
(Note: If you use cloud-init-aware OS images as described under Cloud-Init Support on https://pve.proxmox.com/pve-docs/chapter-qm.html, these steps won’t be necessary!)
change hostname
sudo nano /etc/hostname
change hosts file
sudo nano /etc/hosts
reset machine ID
rm -f /etc/machine-id /var/lib/dbus/machine-id
dbus-uuidgen --ensure=/etc/machine-id
dbus-uuidgen --ensure
regenerate ssh keys
# regen ssh keys
sudo rm /etc/ssh/ssh_host_*
sudo dpkg-reconfigure openssh-server
reboot
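If you clone machines often, the steps above can be collected into one script to run (as root) on each fresh clone -- a sketch assuming a Debian/Ubuntu guest:

```shell
# write the reset script; review it before running on a real clone
cat > reset-clone.sh <<'EOF'
#!/usr/bin/env bash
set -euo pipefail
# reset the machine ID so DHCP and DBus see a unique machine
rm -f /etc/machine-id /var/lib/dbus/machine-id
dbus-uuidgen --ensure=/etc/machine-id
dbus-uuidgen --ensure
# regenerate the ssh host keys
rm -f /etc/ssh/ssh_host_*
dpkg-reconfigure openssh-server
echo "Done -- update /etc/hostname and /etc/hosts, then reboot."
EOF
chmod +x reset-clone.sh
```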
I’ve added yet another item to my list when setting up a new Proxmox server, and that’s setting up alerts!
🛍️ Check out the new Merch Shop at https://l.technotim.live/shop
⚙️ See all the hardware I recommend at https://l.technotim.live/gear
🚀 Don’t forget to check out the 🚀Launchpad repo with all of the quick start source files
After setting up my Linux servers, there are a few things I do before I use them for their intended purpose. This ranges from security, to tools, to config. Join me as we set up our first Linux server in this tutorial and walk through setting it up properly (and maybe some bonus items sprinkled in).
sudo apt-get update

sudo apt-get upgrade
Reconfigure unattended-upgrades
sudo dpkg-reconfigure --priority=low unattended-upgrades
Verify unattended upgrades configuration file in your text editor of choice
/etc/apt/apt.conf.d/20auto-upgrades
To disable automatic reboots by the automatic upgrades configuration, edit the following file:
/etc/apt/apt.conf.d/50unattended-upgrades
and uncomment the following line by removing the leading slashes:
//Unattended-Upgrade::Automatic-Reboot "false";
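If you'd rather not open an editor, a sed one-liner can do the uncomment. It's demonstrated below on a throwaway copy so nothing system-wide is touched; point it at /etc/apt/apt.conf.d/50unattended-upgrades (with sudo) on a real server:

```shell
# sample of the line as it ships, commented out with leading slashes
printf '%s\n' '//Unattended-Upgrade::Automatic-Reboot "false";' > 50unattended-upgrades.sample
# strip the leading // from that one directive
sed -i 's|^//Unattended-Upgrade::Automatic-Reboot|Unattended-Upgrade::Automatic-Reboot|' 50unattended-upgrades.sample
cat 50unattended-upgrades.sample   # -> Unattended-Upgrade::Automatic-Reboot "false";
```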
add user
sudo adduser someuser
add to sudoers
sudo usermod -aG sudo someuser
install
sudo apt-get install openssh-server
copy key from client to server
ssh-copy-id someuser@192.168.0.100
switch to key based auth
sudo nano /etc/ssh/sshd_config
Add these attributes
PasswordAuthentication no
ChallengeResponseAuthentication no
static IP
sudo nano /etc/netplan/01-netcfg.yaml
network:
  version: 2
  renderer: networkd
  ethernets:
    ens18:
      dhcp4: no
      addresses:
        - 192.168.0.222/24
      gateway4: 192.168.0.1
      nameservers:
        addresses: [192.168.0.4]
oh-my-zsh
sudo apt-get update
sudo apt-get install zsh
sudo apt-get install powerline fonts-powerline

sh -c "$(curl -fsSL https://raw.githubusercontent.com/ohmyzsh/ohmyzsh/master/tools/install.sh)"
sudo lvm
lvscan
You should see your logical volumes
lvm> lvscan
  ACTIVE '/dev/vgubuntu-server/root' [<168.54 GiB] inherit
  ACTIVE '/dev/vgubuntu-server/swap_1' [980.00 MiB] inherit
resize the logical volume (usually the first one in the list, but check to be sure!)
lvextend -l +100%FREE /dev/vgubuntu-server/root
You should see:
  Size of logical volume vgubuntu-server/root changed from <138.54 GiB (35466 extents) to <168.54 GiB (43146 extents).
  Logical volume vgubuntu-server/root successfully resized
exit
resize the file system
sudo resize2fs /dev/vgubuntu-server/root
Check to see file system size
df -h
You should see:
Filesystem                         Size  Used Avail Use% Mounted on
tmpfs                              1.6G  3.9M  1.6G   1% /run
/dev/mapper/vgubuntu--server-root  166G   89G   70G  56% /
tmpfs                              7.9G     0  7.9G   0% /dev/shm
tmpfs                              5.0M     0  5.0M   0% /run/lock
/dev/sda1                          511M  4.0K  511M   1% /boot/efi
tmpfs                              1.6G     0  1.6G   0% /run/user/1000
And from the earlier resize2fs command, you should see:
resize2fs 1.46.5 (30-Dec-2021)
Filesystem at /dev/vgubuntu-server/root is mounted on /; on-line resizing required
old_desc_blocks = 18, new_desc_blocks = 22
The filesystem on /dev/vgubuntu-server/root is now 44181504 (4k) blocks long.
sudo hostnamectl set-hostname
sudo nano /etc/hosts
Check time zone:
timedatectl
Change time zone:
sudo timedatectl set-timezone
You can also use the following if you want a menu.
sudo dpkg-reconfigure tzdata
sudo nano /etc/systemd/timesyncd.conf
NTP=192.168.0.4
sudo timedatectl set-ntp off
sudo timedatectl set-ntp on
sudo apt-get install qemu-guest-agent
sudo ufw default deny incoming
sudo ufw default allow outgoing
sudo ufw allow ssh
sudo ufw enable
sudo apt-get install fail2ban
sudo cp /etc/fail2ban/fail2ban.{conf,local}
sudo cp /etc/fail2ban/jail.{conf,local}
sudo nano /etc/fail2ban/jail.local
1
+
backend = systemd
+
Check status:

sudo fail2ban-client status

sudo fail2ban-client status sshd
🛍️ Check out the new Merch Shop at https://l.technotim.live/shop
⚙️ See all the hardware I recommend at https://l.technotim.live/gear
🚀 Don’t forget to check out the 🚀Launchpad repo with all of the quick start source files
I think I found the perfect GitOps and DevOps toolkit with FluxCD and Kubernetes. Flux is an open source GitOps solution that helps you deploy apps and infrastructure with automation. It can monitor Git repositories, source control, container image repositories, Helm repositories, and more. It can install apps using Kustomize, Helm, or plain Kubernetes manifests, so it’s designed to fit into your existing workflow. It can even push alerts to your chat system, letting you know when deployments happen. In this tutorial we’ll cover all of this and more.
Be sure to ⭐ the Flux GitHub repo
If you’re looking to install your own Kubernetes cluster, be sure to check out this video that creates a cluster with Ansible
If you’re looking for the repo I created in this video, you can find it here /demos/flux-demo
curl -s https://fluxcd.io/install.sh | sudo bash
You’ll need to grab a personal access token from here
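The flux CLI reads the token from the GITHUB_TOKEN environment variable when bootstrapping, so export it first. The value below is a placeholder; substitute the personal access token you generated (with repo scope):

```shell
# Placeholder value -- substitute your real GitHub personal access token
export GITHUB_TOKEN=ghp_YourPersonalAccessTokenHere
```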
flux bootstrap github \
  --components-extra=image-reflector-controller,image-automation-controller \
  --owner=YourGitHubUserName \
  --repository=flux \
  --branch=main \
  --path=clusters/home \
  --personal \
  --token-auth
Check the Flux pods:

kubectl get pods -n flux-system
See reference repo for files, located in /demos/flux-demo
First create a workload (see redis deployment file)
Deploy the redis workload (deployment.yml)
git add -A && \
git commit -m "add redis deployment" && \
git push origin main
Create an ImageRepository in the corresponding cluster, namespace, and chart.
flux create image repository redis \
--image=redis \
--interval=1m \
--export > ./clusters/home/default/redis/redis-registry.yaml
Create an ImagePolicy in the corresponding cluster, namespace, and chart.
flux create image policy redis \
--image-ref=redis \
--select-semver=6.0.x \
--export > ./clusters/home/default/redis/redis-policy.yaml
Then deploy the ImageRepository and ImagePolicy:
git add -A && \
git commit -m "add redis image scan" && \
git push origin main
Tell Flux to apply the changes:

flux reconcile kustomization flux-system --with-source
Now edit your deployment.yml and add a comment:

spec:
  containers:
  - name: redis
    image: redis:6.0.0 # {"$imagepolicy": "flux-system:redis"}
Create an ImageUpdateAutomation:

flux create image update flux-system \
--git-repo-ref=flux-system \
--git-repo-path="./clusters/home" \
--checkout-branch=main \
--push-branch=main \
--author-name=fluxcdbot \
--author-email=fluxcdbot@users.noreply.github.com \
--commit-template="" \
--export > ./clusters/home/flux-system-automation.yaml
Commit and deploy:

git add -A && \
git commit -m "add image updates automation" && \
git push origin main
Tell Flux to apply the changes:

flux reconcile kustomization flux-system --with-source
Now do a git pull to see that Flux has updated the image tags:

git pull
Your deployment.yml should be updated and deployed to your cluster!

spec:
  containers:
  - name: redis
    image: redis:6.0.16 # {"$imagepolicy": "flux-system:redis"}
Create a secret:

kubectl -n flux-system create secret generic discord-url \
--from-literal=address=https://discord.com/api/webhooks/YOUR/WEBHOOK/URL
Create a notification provider:

apiVersion: notification.toolkit.fluxcd.io/v1beta1
kind: Provider
metadata:
  name: discord
  namespace: flux-system
spec:
  type: discord
  channel: general
  secretRef:
    name: discord-url
Define an Alert:

apiVersion: notification.toolkit.fluxcd.io/v1beta1
kind: Alert
metadata:
  name: on-call-webapp
  namespace: flux-system
spec:
  providerRef:
    name: discord
  eventSeverity: info
  eventSources:
    - kind: GitRepository
      name: '*'
    - kind: Kustomization
      name: '*'
Get alerts:

kubectl -n flux-system get alerts

NAME             READY   STATUS        AGE
on-call-webapp   True    Initialized   1m
If you need to update flux, check out Updating Flux Installation Using the Latest Binary from CLI
Meet Gatus, a self-hosted, open source health dashboard that lets you monitor all of your services and systems! This dashboard not only tracks your uptime, but also measures response times, plotting the results on a chart over time. It also hooks into systems like Slack, Teams, Discord, Twilio, and more! Join me as we configure and deploy Gatus into our own environment to measure and monitor all the things!
Disclosures:
Don’t forget to ⭐ Gatus on GitHub!
SSH into your server.

Make a directory and cd into it:

mkdir gatus_uptime
cd gatus_uptime
In here we’re going to create a Docker Compose file:

touch docker-compose.yaml
nano docker-compose.yaml
Basic Docker Compose:

version: "3.9"
services:
  gatus:
    image: twinproduction/gatus:latest
    restart: always
    ports:
      - "8080:8080"
    environment:
      - POSTGRES_USER=gatus_uptime_user # postgres user with access to the database
      - POSTGRES_PASSWORD=gatuspassword # postgres user password
      - POSTGRES_DB=gatus_uptime # this should be the name of your postgres database
    volumes:
      - ./config:/config
I am going to use Postgres to hold our data. Postgres is an open source object-relational database system that uses and extends the SQL language, combined with many features that safely store and scale the most complicated data workloads.

If you need to create a Postgres database, here’s the official Docker image. If you want to include Postgres in the same stack, you can see an example here. Gatus also supports SQLite if you don’t want to use Postgres.
Using pgAdmin (Windows/macOS/Linux) or a similar tool, create a database named gatus_uptime and a user named gatus_uptime_user with access to it.

Once you have that, you should test your connection before proceeding.
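If you’d rather do this from the command line than pgAdmin, here’s a hedged sketch using psql (the host, superuser, and password are assumptions; match them to your environment):

```shell
# Write the SQL to a file so you can review it before running it
cat > create_gatus_db.sql <<'SQL'
CREATE USER gatus_uptime_user WITH PASSWORD 'gatuspassword';
CREATE DATABASE gatus_uptime OWNER gatus_uptime_user;
SQL
# Then run it against your Postgres server, e.g.:
# psql -U postgres -h 192.168.30.240 -f create_gatus_db.sql
```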
Make a config folder to house our config and create a config file:

mkdir config
cd config
touch config.yaml
Now we need to create a config file for the sites we want to monitor.

Place the following contents inside of the config.yaml:

storage:
  type: postgres
  path: "postgres://${POSTGRES_USER}:${POSTGRES_PASSWORD}@postgres:5432/${POSTGRES_DB}?sslmode=disable"

endpoints:
  - name: back-end
    group: core
    url: "https://example.org/"
    interval: 5m
    conditions:
      - "[STATUS] == 200"
      - "[CERTIFICATE_EXPIRATION] > 48h"

  - name: monitoring
    group: internal
    url: "https://example.org/"
    interval: 5m
    conditions:
      - "[STATUS] == 200"

  - name: nas
    group: internal
    url: "https://example.org/"
    interval: 5m
    conditions:
      - "[STATUS] == 200"

  - name: example-dns-query
    url: "8.8.8.8" # Address of the DNS server to use
    interval: 5m
    dns:
      query-name: "example.com"
      query-type: "A"
    conditions:
      - "[BODY] == 93.184.216.34"
      - "[DNS_RCODE] == NOERROR"

  - name: icmp-ping
    url: "icmp://example.org"
    interval: 1m
    conditions:
      - "[CONNECTED] == true"
Be sure to update the DB hostname with your IP if you’re using an external database:

path: "postgres://${POSTGRES_USER}:${POSTGRES_PASSWORD}@192.168.30.240:5432/${POSTGRES_DB}?sslmode=disable"

You can also use DNS here if you like, e.g. @database.example.com:5432
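To see exactly what connection string Gatus will end up with, you can expand the variables yourself (values taken from the compose file above):

```shell
POSTGRES_USER=gatus_uptime_user
POSTGRES_PASSWORD=gatuspassword
POSTGRES_DB=gatus_uptime
echo "postgres://${POSTGRES_USER}:${POSTGRES_PASSWORD}@192.168.30.240:5432/${POSTGRES_DB}?sslmode=disable"
# prints: postgres://gatus_uptime_user:gatuspassword@192.168.30.240:5432/gatus_uptime?sslmode=disable
```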
Now we can start our container:

docker compose up -d
Check to be sure the container is running without errors:

docker logs gatus-gatus-1
Before we check out the UI, let’s look at Postgres and verify that tables were created in our database. You should see them listed here: gatus_uptime > Schemas > public > Tables

Once we can see that the tables were created, let’s check out the UI.

Gatus is hosted on the default port of 8080. Visit the IP with port in your browser:

http://192.168.10.125:8080/
Here’s how I monitor my sites:

storage:
  type: postgres
  path: "postgres://${POSTGRES_USER}:${POSTGRES_PASSWORD}@192.168.30.240:5432/${POSTGRES_DB}?sslmode=disable"

endpoint-defaults: &defaults
  group: External
  interval: 30s
  client:
    timeout: 10s
  conditions:
    - "[STATUS] == 200"
    - "[CERTIFICATE_EXPIRATION] > 48h"

endpoints:
  - name: shop.technotim.live - HTTP
    <<: *defaults
    url: "https://shop.technotim.live"

  - name: technotim.live - HTTP
    <<: *defaults
    url: "https://technotim.live"

  - name: links.technotim.live - HTTP
    <<: *defaults
    url: "https://links.technotim.live"

  - name: l.technotim.live - HTTP
    <<: *defaults
    url: "https://l.technotim.live"

  - name: shop.technotim.live - DNS
    group: External
    url: "8.8.8.8" # Address of the DNS server to use
    interval: 5m
    dns:
      query-name: "shop.technotim.live"
      query-type: "A"
    conditions:
      - "[BODY] == 23.227.38.74"
      - "[DNS_RCODE] == NOERROR"

  - name: shop.technotim.live - Ping
    group: External
    url: "icmp://shop.technotim.live"
    interval: 1m
    conditions:
      - "[CONNECTED] == true"

  - name: Postgres
    group: Internal
    url: "tcp://192.168.30.240:5432"
    interval: 30s
    conditions:
      - "[CONNECTED] == true"
Gatus supports many systems for alerts. To keep it simple, we’re going to create a Discord alert.

We’ll configure some defaults too so we can keep our endpoints tidy, like so:

alerting:
  discord:
    webhook-url: "https://discord.com/api/webhooks/**********/**********"
    default-alert:
      description: "Health Check Failed"
      send-on-resolved: true
      failure-threshold: 2
      success-threshold: 2
Then you’ll need to update each endpoint with the alert:

alerts:
  - type: discord
Or, you can just add it to your anchor, which will add it to all endpoints:

endpoint-defaults: &defaults
  group: External
  interval: 30s
  client:
    timeout: 10s
  conditions:
    - "[STATUS] == 200"
    - "[CERTIFICATE_EXPIRATION] > 48h"
  alerts:
    - type: discord
Now let’s create some chaos.

Now let’s test recovery.
Full config example:

storage:
  type: postgres
  path: "postgres://${POSTGRES_USER}:${POSTGRES_PASSWORD}@192.168.30.240:5432/${POSTGRES_DB}?sslmode=disable"

alerting:
  discord:
    webhook-url: "https://discord.com/api/webhooks/**********/**********"
    default-alert:
      description: "Health Check Failed"
      send-on-resolved: true
      failure-threshold: 2
      success-threshold: 2

endpoint-defaults: &defaults
  group: External
  interval: 15s
  client:
    timeout: 10s
  conditions:
    - "[STATUS] == 200"
    - "[CERTIFICATE_EXPIRATION] > 48h"
  alerts:
    - type: discord

endpoints:
  - name: shop.technotim.live - HTTP
    <<: *defaults
    url: "https://shop.technotim.live"

  - name: technotim.live - HTTP
    <<: *defaults
    url: "https://technotim.live"

  - name: links.technotim.live - HTTP
    <<: *defaults
    url: "https://links.technotim.live"

  - name: l.technotim.live - HTTP
    <<: *defaults
    url: "https://l.technotim.live"

  - name: shop.technotim.live - DNS
    group: External
    url: "8.8.8.8" # Address of the DNS server to use
    interval: 5m
    dns:
      query-name: "shop.technotim.live"
      query-type: "A"
    conditions:
      - "[BODY] == 23.227.38.74"
      - "[DNS_RCODE] == NOERROR"

  - name: shop.technotim.live - Ping
    group: External
    url: "icmp://shop.technotim.live"
    interval: 1m
    conditions:
      - "[CONNECTED] == true"

  - name: Postgres
    group: Internal
    url: "tcp://192.168.30.240:5432"
    interval: 30s
    conditions:
      - "[CONNECTED] == true"
Over the last few weeks I have been looking for a more advanced self-hosted monitoring system. One that gives me more than just a simple up and down status and one that is config based. I think I found it!
— Techno Tim (@TechnoTimLive) February 26, 2024
Connect any wireless headset to a GoXLR or GoXLR mini. In this video, I show you how you can connect any pair of wireless Bluetooth headphones to a GoXLR or GoXLR mini. They can be AirPods, Beats, Beats Wireless Pro, Bose, or any other wireless Bluetooth headset. You can use this Bluetooth adapter transmitter to stream while using the GoXLR or GoXLR mini.

I bought these products with my own money because I thought they were cool. Nothing in this video was sponsored.
We’ve already figured out how to pass through a GPU to Windows machine but why let Windows have all the fun? Today, we do it on an Ubuntu headless server that’s virtualized, run some AI and Deep Learning workloads, then turn up the transcoding on Plex to 11.
If you need to pass through a GPU, follow this guide but install Ubuntu instead.

Shut down your VM in Proxmox and edit its config file, which should be here (note: change the path to match your VM’s ID):

/etc/pve/qemu-server/100.conf

Add cpu: host,hidden=1,flags=+pcid to that file, then start the server.
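If you prefer to script the edit rather than open an editor, here’s a hedged sketch (VM ID 100 assumed; the demo writes to a local file so you can inspect it first — on a real host, point conf at the path above):

```shell
conf=100.conf                      # in practice: /etc/pve/qemu-server/100.conf
touch "$conf"
# Only append the line if no cpu: entry exists yet
grep -q '^cpu:' "$conf" || echo 'cpu: host,hidden=1,flags=+pcid' >> "$conf"
cat "$conf"
```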
sudo apt-get update

sudo apt-get upgrade

sudo apt-get install qemu-guest-agent # this is optional if you are virtualizing this machine

sudo apt-get install build-essential # build-essential is required for nvidia drivers to compile

sudo apt install --no-install-recommends nvidia-cuda-toolkit nvidia-headless-450 nvidia-utils-450 libnvidia-encode-450
Then reboot.

Then install nvtop:

sudo apt-get install nvtop
nvidia-docker run --rm -ti tensorflow/tensorflow:r0.9-devel-gpu
In your Rancher server (or Kubernetes host):

distribution=$(. /etc/os-release;echo $ID$VERSION_ID)

curl -s -L https://nvidia.github.io/nvidia-docker/gpgkey | sudo apt-key add -

curl -s -L https://nvidia.github.io/nvidia-docker/$distribution/nvidia-docker.list | sudo tee /etc/apt/sources.list.d/nvidia-docker.list

sudo apt-get update && sudo apt-get install -y nvidia-container-toolkit

sudo apt-get install nvidia-container-runtime
Update daemon.json:

sudo nano /etc/docker/daemon.json

Replace with:

{
  "default-runtime": "nvidia",
  "runtimes": {
    "nvidia": {
      "path": "/usr/bin/nvidia-container-runtime",
      "runtimeArgs": []
    }
  }
}
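A malformed daemon.json will keep Docker from starting, so it’s worth validating the JSON before restarting the daemon. The sketch below writes a local copy just to demonstrate the check; on a real host, point the validator at /etc/docker/daemon.json:

```shell
cat > daemon.json <<'EOF'
{
  "default-runtime": "nvidia",
  "runtimes": {
    "nvidia": {
      "path": "/usr/bin/nvidia-container-runtime",
      "runtimeArgs": []
    }
  }
}
EOF
# python3 -m json.tool exits non-zero on invalid JSON
python3 -m json.tool daemon.json > /dev/null && echo "daemon.json OK"
```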
Install one more util for Nvidia:

sudo apt-get install -y nvidia-docker2
Reboot.

Then, using kubectl on your Kubernetes / Rancher host:

kubectl create -f https://raw.githubusercontent.com/NVIDIA/k8s-device-plugin/master/nvidia-device-plugin.yml
Are you looking to build a remote gaming machine and pass your GPU through to a virtual machine? Do you want to use GPU acceleration for transcoding in Plex or Adobe Media Encoder? Do you dream of setting up Steam Link or Remote Play In-Home Streaming and streaming games to any screen in your house? If so, here is a complete step-by-step guide on how to pass your Nvidia or AMD video card through to a guest VM using Proxmox VE! If not, well, please watch this anyway.
Edit grub: /etc/default/grub

Change this line:

GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on pcie_acs_override=downstream,multifunction video=efifb:off"
Run:

update-grub

Reboot:

reboot
Edit /etc/modules:

vfio
vfio_iommu_type1
vfio_pci
vfio_virqfd

Reboot:

reboot
Edit /etc/pve/qemu-server/qm.conf (will be something like 100.conf):

agent: 1
balloon: 4096
bios: ovmf
boot: cdn
bootdisk: virtio0
cores: 8
cpu: host,hidden=1,flags=+pcid
efidisk0: fast1:vm-100-disk-1,size=128K
hostpci0: 02:00,pcie=1,x-vga=1
hostpci1: 04:00.0,rombar=0
ide0: none,media=cdrom
machine: q35
memory: 14336
name: beam
numa: 0
ostype: win10
scsihw: virtio-scsi-pci
smbios1: uuid=d6febb0d-4242-4bdb-8aea-7c03e7b5df0e
sockets: 1
unused0: storage1:vm-100-disk-0
unused1: slow1:vm-100-disk-0
virtio0: fast1:vm-100-disk-0,size=80G
vmgenid: 524a58dd-7e3e-44f4-abf4-9de0f490d936
Add your PCI device.

Edit /etc/modprobe.d/pve-blacklist.conf:

blacklist nvidiafb
blacklist nvidia
blacklist radeon
blacklist nouveau
If your Windows machine fails to boot, you may want to create a new Windows VM using UEFI rather than BIOS.
If your motherboard has an onboard GPU, set the BIOS to use it primarily or exclusively, to free up the PCIe GPU for passthrough.
In my previous video (Meet Grafana LOKI, a log aggregation system for everything) and post, I promised that I would also explain how to install Grafana Loki on Kubernetes using helm. If you’re looking to set this up in docker-compose, be sure to check out this video.

Think of helm as a package manager for Kubernetes. It’s an easy way to bundle and deploy config to Kubernetes with versioning. If you need to install helm, visit helm.sh
First add Loki’s chart repository to helm:

helm repo add grafana https://grafana.github.io/helm-charts
Then update the chart repository:

helm repo update
This command will install the loki-stack chart with Grafana and Prometheus enabled, persistent volumes disabled for Prometheus, and a 5Gi persistent volume for Loki:

helm upgrade --install loki grafana/loki-stack --set grafana.enabled=true,prometheus.enabled=true,prometheus.alertmanager.persistentVolume.enabled=false,prometheus.server.persistentVolume.enabled=false,loki.persistence.enabled=true,loki.persistence.storageClassName=nfs-client,loki.persistence.size=5Gi
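That one-liner gets unwieldy; the same --set flags can live in a values file instead (a hedged sketch mirroring the flags above):

```yaml
# values.yaml -- equivalent of the --set flags in the helm command above
grafana:
  enabled: true
prometheus:
  enabled: true
  alertmanager:
    persistentVolume:
      enabled: false
  server:
    persistentVolume:
      enabled: false
loki:
  persistence:
    enabled: true
    storageClassName: nfs-client
    size: 5Gi
```

Then install with: helm upgrade --install loki grafana/loki-stack -f values.yaml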
You’ll want to set loki.persistence.storageClassName=nfs-client to your StorageClass. In this example, I am using nfs-client, which is the Kubernetes NFS Subdir External Provisioner.
To access your Grafana dashboard you can run:

kubectl port-forward --namespace <YOUR-NAMESPACE> service/loki-grafana 3000:80
To get the password for the admin user, run:

kubectl get secret --namespace <YOUR-NAMESPACE> loki-grafana -o jsonpath="{.data.admin-password}" | base64 --decode ; echo

This should print out your password.

You can now access your dashboard on http://localhost:3000
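What that pipeline is doing: Kubernetes stores secret values base64-encoded, and the jsonpath expression pulls out just the admin-password field before decoding it. Illustrated locally with a stand-in value:

```shell
# "hunter2" stands in for the real secret value stored in the cluster
encoded=$(printf 'hunter2' | base64)
printf '%s' "$encoded" | base64 --decode ; echo
# prints: hunter2
```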
If you want to create an IngressRoute and you are using Traefik, you can apply the following ingress.yml:
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
  name: loki-grafana-ingress
  annotations:
    kubernetes.io/ingress.class: traefik-internal # change with your value
spec:
  entryPoints:
    - websecure
  routes:
    - match: Host(`grafana.example.com`) # change with your value
      kind: Rule
      services:
        - name: loki-grafana
          port: 80
kubectl apply -f ingress.yml
You should now be able to access your dashboard on https://grafana.example.com
Query all logs from the container label:

{container="uptime-kuma"}

Query all logs from the container stream and filter on error:

{container="uptime-kuma"} |= "error"

Query all logs from the pod label of uptime-kuma-8d45g32fd-lk8rl:

{pod="uptime-kuma-8d45g32fd-lk8rl"}

Read more about LogQL here
To upgrade, you run the same command you used to install it, with an updated chart:

helm repo update

helm upgrade --install loki grafana/loki-stack --set grafana.enabled=true,prometheus.enabled=true,prometheus.alertmanager.persistentVolume.enabled=false,prometheus.server.persistentVolume.enabled=false,loki.persistence.enabled=true,loki.persistence.storageClassName=nfs-client,loki.persistence.size=5Gi
I’ve been on a quest to find a new logging system. I’ve used quite a few in the past, some open source, some proprietary, and some home grown, but recently I’ve decided to switch. I’ve switched to Grafana Loki for all of my logs for all of my systems - this includes machines, devices, Docker systems and hosts, and all of my Kubernetes clusters. If you’re thinking of using Grafana and are also looking for a fast way to log all of your systems, join me as we discuss and configure Grafana Loki.
Don’t want to host it yourself? Check out Grafana Cloud and sign up for a free account https://l.technotim.live/grafana-labs
See this post on how to install docker and docker-compose.

If you’re using Docker Compose:
mkdir grafana
mkdir loki
mkdir promtail
touch docker-compose.yml
nano docker-compose.yml # copy the contents from below
ls
docker-compose up -d --force-recreate # be sure you've created promtail-config.yml and loki-config.yml before running this
docker-compose.yml

version: "3"
networks:
  loki:
services:
  loki:
    image: grafana/loki:2.4.0
    volumes:
      - /home/serveradmin/docker_volumes/loki:/etc/loki
    ports:
      - "3100:3100"
    restart: unless-stopped
    command: -config.file=/etc/loki/loki-config.yml
    networks:
      - loki
  promtail:
    image: grafana/promtail:2.4.0
    volumes:
      - /var/log:/var/log
      - /home/serveradmin/docker_volumes/promtail:/etc/promtail
    # ports:
    #   - "1514:1514" # this is only needed if you are going to send syslogs
    restart: unless-stopped
    command: -config.file=/etc/promtail/promtail-config.yml
    networks:
      - loki
  grafana:
    image: grafana/grafana:latest
    user: "1000"
    volumes:
      - /home/serveradmin/docker_volumes/grafana:/var/lib/grafana
    ports:
      - "3000:3000"
    restart: unless-stopped
    networks:
      - loki
nano loki/loki-config.yml
loki-config.yml

auth_enabled: false

server:
  http_listen_port: 3100
  grpc_listen_port: 9096

common:
  path_prefix: /tmp/loki
  storage:
    filesystem:
      chunks_directory: /tmp/loki/chunks
      rules_directory: /tmp/loki/rules
  replication_factor: 1
  ring:
    instance_addr: 127.0.0.1
    kvstore:
      store: inmemory

schema_config:
  configs:
    - from: 2020-10-24
      store: boltdb-shipper
      object_store: filesystem
      schema: v11
      index:
        prefix: index_
        period: 24h

ruler:
  alertmanager_url: http://localhost:9093
nano promtail/promtail-config.yml
promtail-config.yml

server:
  http_listen_port: 9080
  grpc_listen_port: 0

positions:
  filename: /tmp/positions.yaml

clients:
  - url: http://loki:3100/loki/api/v1/push

scrape_configs:

# local machine logs

- job_name: local
  static_configs:
    - targets:
        - localhost
      labels:
        job: varlogs
        __path__: /var/log/*log

## docker logs

#- job_name: docker
#  pipeline_stages:
#    - docker: {}
#  static_configs:
#    - labels:
#        job: docker
#        __path__: /var/lib/docker/containers/*/*-json.log

# syslog target

#- job_name: syslog
#  syslog:
#    listen_address: 0.0.0.0:1514 # make sure you also expose this port on the container
#    idle_timeout: 60s
#    label_structured_data: yes
#    labels:
#      job: "syslog"
#  relabel_configs:
#    - source_labels: ['__syslog_message_hostname']
#      target_label: 'host'
Install the Docker plugin:

docker plugin install grafana/loki-docker-driver:latest --alias loki --grant-all-permissions
Edit the Docker daemon config:

sudo nano /etc/docker/daemon.json
daemon.json

{
  "log-driver": "loki",
  "log-opts": {
    "loki-url": "http://localhost:3100/loki/api/v1/push",
    "loki-batch-size": "400"
  }
}
Restart the Docker daemon:

sudo systemctl restart docker

Note: you will also need to recreate your containers after applying this setting.
Query all logs from the varlogs stream:

{job="varlogs"}

Query all logs from the varlogs stream and filter on docker:

{job="varlogs"} |= "docker"

Query all logs from the container_name label of uptime-kuma and filter on host of juno:

{container_name="uptime-kuma", host="juno"}

Read more about LogQL here
There is a workaround for using this with ARM CPUs. Credit to AndreiTelteu for finding this in this discussion.

Delete /etc/docker/daemon.json

Add the vector service to the docker-compose.yml file:
  vector:
    image: timberio/vector:0.18.1-debian
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - /home/serveradmin/docker_volumes/vector/vector-config.toml:/etc/vector/vector.toml:ro
    ports:
      - "8383:8383"
    restart: unless-stopped
    networks:
      - loki
Run these commands:

mkdir vector
cd vector
nano vector-config.toml
Paste this config in the file:

[sources.docker-local]
  type = "docker_logs"
  docker_host = "/var/run/docker.sock"
  exclude_containers = []

  # Identify zero-width space as first line of a multiline block.
  multiline.condition_pattern = '^\x{200B}' # required
  multiline.mode = "halt_before" # required
  multiline.start_pattern = '^\x{200B}' # required
  multiline.timeout_ms = 1000 # required, milliseconds

[sinks.loki]
  # General
  type = "loki" # required
  inputs = ["docker*"] # required
  endpoint = "http://loki:3100" # required

  # Auth
  auth.strategy = "bearer" # required
  auth.token = "none" # required

  # Encoding
  encoding.codec = "json" # required

  # Healthcheck
  healthcheck.enabled = false # optional, default

  # Loki Labels
  labels.forwarder = 'vector'
  labels.host = ''
  labels.container_name = ''
  labels.compose_service = ''
  labels.compose_project = ''
  labels.source = ''
  labels.category = 'dockerlogs'
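The multiline rules in that config group continuation lines into one log event: a line that begins with a zero-width space (U+200B) starts a new block, and everything until the next marker is folded into it. Emitting the marker from a script looks like this:

```shell
# U+200B is three bytes in UTF-8 (octal 342 200 213); prepend it to the
# first line of a multi-line log entry
printf '\342\200\213%s\n' "start of a multi-line log entry"
printf '%s\n' "continuation line (no marker, grouped with the entry above)"
```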
Credits to this post for the config file: grafana/loki#2361 (comment)
If you’re looking to set this up in kubernetes, see this post
Do you have a lot of virtual machines? Are you running Windows, Linux, and Mac and need remote access from a single UI? Well, Apache Guacamole is for you! Apache Guacamole is a clientless remote access gateway that gives you a web portal to access any of your clients over standard protocols like VNC, RDP, SSH, TELNET, and more. Join me in this step by step tutorial as we set up a self-hosted version of Guacamole in your homelab. As an added bonus, we’ll set up 2FA (multifactor authentication) to help secure Guacamole. Oh, yeah, and we’ll do this all in Docker and/or Kubernetes, it’s up to you! :)
Dear Pi-Hole, We love your product. It keeps our network safe from malware and other unwanted domains. While we love what is there so far, please add a feature to your core product to keep multiple servers in sync and provide high availability DNS to our entire network. Then, we won’t have people asking us “Is the internet down?” every time we reboot our Pi-Hole server.
Until then, we will use Gravity Sync.
Sincerely,
Techno Tim (and probably thousands of other lovers of Pi-Hole).
P.S. Keep up the good work!
Thank you Gravity Sync!
(don’t forget to star the repo!)
https://github.com/vmstan/gravity-sync
Great Raspberry Pi - Pi-Hole Servers!
► Raspberry Pi Zero W Kit - https://amzn.to/3qOl9yS
► Raspberry Pi 4 Kit - https://amzn.to/3nophDm
If you’re looking to have your PiHole instances failover automatically, be sure to check out the documentation on keepalived
Meet keepalived - High Availability and Load Balancing in One
Handbrake is a fantastic open source transcoder. It allows you to transcode, or convert, your video files into different formats. It has a nice UI that’s easy to use and helps you transcode videos very easily. It supports profiles that are optimized for your target devices. And because it is open source and cross compiled, you can run it on Windows, macOS, or Linux… but did you also know you can self-host a containerized version of it with Docker and Kubernetes?
Tired of bookmarking all of your self-hosted services only to lose them? Want access to all your sites from anywhere in the world? Well, Heimdall can help with a clean, responsive, and beautiful dashboard for all of your Homelab services. So join me in this tutorial as we install and configure Heimdall on Docker and Kubernetes and build a dashboard with live icons.
The HL15 from 45Drives is here. It brings a lot of unique features and was built and designed with the HomeLab community in mind. In this in-depth review we’ll cover everything you want to know about this new storage server.
Disclosures:
This is the newly released HL15 from the 45HomeLab division of 45Drives. It’s a server meant to meet the needs of the HomeLab community while bringing the build quality and design of their enterprise offerings. It’s a 15-bay server that can be used for just about whatever you want, but it goes without saying that you’re most likely thinking about buying this to be your next storage server.

There’s a lot to unpack with this new server, so this will be an in-depth look at the HL15 HomeLab Storage Server. We’ll cover everything from the chassis, the backplane, the motherboard, the CPU, the power supply and power consumption, cooling, software selection, and even the price and value proposition, because at the end of the day, if you don’t think it’s worth it, you’re probably not going to buy it. So, is it any good? Let’s find out.
When purchasing this machine you have a few options. You might have sticker shock when seeing these prices, but we'll talk about that later in the video. Your options are:
You have your choice of color, power cable, and additional add-ons if you like. They sent me a white one, which is what I was hoping for.
First, let's dive into the chassis.
Steel chassis with screws. I really dig the white and blue
This chassis is made of steel and it's solid. Like really solid. If you've only had aluminum cases in the past, you'll notice the difference right away. It has a powder coat finish that comes in white or black; mine is obviously white. The design is really nice, with sort of a Star Wars Hoth vibe to it, which I am a fan of. Looking at this case you can see that it's almost entirely metal and screws, which I think is a good thing. Why does that matter? Well, if you've ever had a rivet pop off of one of your cases, it's near impossible to fix, at least for me; I've never used rivets and wouldn't know where to start.
You can open this case up using these thumb screws and the first thing you see is the top loading drive cage.
It holds 15 drives that easily slide in and out, serial numbers are easy to see (no more labels), and there are nice little springs to keep the drives in place. You'll also notice there are no caddies, which is something 45Drives does not like, and after loading quite a few drives in other systems, I think it's starting to rub off on me. Caddies just add more parts, more screws, and ultimately more time and complexity when servicing drives. It's something I can now appreciate after dealing with the status quo for so many years.
The back of the chassis is pretty basic, I do wish it used PCIe blanks vs breakaways
One of the downsides is that the PCIe slots have breakaways. Great if you never need to add a card (fewer parts, fewer screws, fewer things banging around), but awkward if you do shuffle hardware around. This is a carryover from their enterprise servers, something I also have in my AV15.
The other nice thing about this case is that it can lay flat or even stack up like a desktop with the included feet. If you’re choosing to rack mount this, you’ll just need to attach the included rack ears and then also pick up a pair of rack rails, which do not come with the server. If you don’t want the official rack rails, one of the universal rack shelves will work just fine.
Lots of connectivity options on this motherboard
We'll go a little more in-depth on the backplane and some of its features later, but let's first focus on the motherboard and connectivity to help you understand how it all works.
This motherboard is the Supermicro X11SPH-nCTPF. It comes in two flavors: one with SFP+ networking (nCTPF) and the other with 10GBase-T networking (nCTF).
X11SPH-nCTF https://www.supermicro.com/en/products/motherboard/x11sph-nctpf
This is an entry-level Xeon, great for its PCIe lanes, storage services, and a few containers or VMs, but not much beyond that
I opted for 32 GB (2x16 GB) and picked up more RAM from eBay; I now have 128 GB. It's priced pretty fairly there if you're thinking about buying some.
I made sure that I tested it and it all passed
If you do decide to do this, be sure to populate your DIMMs in dual-rank mode according to the motherboard manual.
Custom backplane created by 45Drives.
The HL15 has a modest power supply. Great for 15 drives but might need some additional wattage if you’re going to add more components
The ATX, fully modular power supply means you only use the cables you need, reducing clutter in the case. This is welcome, because when swapping out the power supply in my AV15 there were lots of cables, and it was a specialized power supply. It's great to see they are using a standard ATX now.
There are lots of additional headers on the board if you need, like TPM, USB, and even additional serial.
Overall, I think this is a solid server board for this configuration.
Intel(R) Xeon(R) Bronze 3204 CPU @ 1.90GHz https://ark.intel.com/content/www/us/en/ark/products/193381/intel-xeon-bronze-3204-processor-8-25m-cache-1-90-ghz.html
Real world, is the CPU enough? It is for me; I will be doing very little compute on this server. If you plan on using this as a hypervisor like Proxmox, you might want to look into another CPU. I checked other CPU options on the used market and it's actually not too bad if you want to upgrade this CPU later down the road.
These fans move a lot of air, and you don't need to worry about your components ever overheating. They do this at the cost of noise, though. How loud are the fans? I am not sure how to put this, but they are quiet for an enterprise server and loud for a home server. They are much quieter than their enterprise counterpart, the AV15, but still not something you want in your living room. I am thinking about replacing them with Zigbee-controlled RGB fans like I did in my AV15 customization video, but I haven't had the time to rip it apart yet.
If you do decide to replace the fans, just be sure that you’re getting something on par with these and specifically ones with enough static pressure.
The CPU cooler is passive, and I think that has a lot to do with why they're using these fans. On my AV15 I swapped this out for a Noctua CPU cooler that kept it just as cool, if not cooler, which then allowed me to replace the fans with quieter ones. You can find the fans I used in the AV15 in the Where to Buy section.
The HL15 ships with Rocky Linux, but I assume that's just there to run their QA tests. Since this is an open x86 system, you can install anything you like on it, from Proxmox to TrueNAS to VMware, or any other operating system. If you're buying this as a storage server, it's most likely going to be something that handles storage nicely.
I took a slightly different approach with this system and decided not to install a hypervisor, but Ubuntu LTS instead. This time I am going bare metal to see how manageable Ubuntu is with services like SMB, NFS, ZFS, some Docker containers, and possibly some KVM if I want virtualization.
I am also going to give Cockpit a shot, or Houston UI as 45Drives calls it. It gives me a friendly UI to manage some of these services, which are otherwise pretty complex.
That's when I found out that they don't yet support 22.04, only 20.04, the previous LTS from three years ago. With all of that being said, I am going to install Cockpit without the 45Drives special sauce.
This was super simple with a command…
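For reference, that command amounts to pulling Cockpit from Ubuntu's backports pocket, a sketch following the upstream Cockpit install docs for Ubuntu:

```shell
# Cockpit ships in Ubuntu's backports pocket on LTS releases
. /etc/os-release
sudo apt update
sudo apt install -t ${VERSION_CODENAME}-backports cockpit
```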
Or so I thought…
And that had its own issues, so for the sake of showing off what Cockpit / Houston can do, I decided to install Ubuntu 20.04, which is still supported. Hopefully 22.04 will be supported soon, or even 24.04 in the spring.
Installing is as easy as downloading a script and then executing it. It will install many different modules to help you manage your server. After setup is complete you’ll reboot.
Once installed, you can reach Houston (Cockpit) at https://&lt;your-server-ip&gt;:9090
At the time of testing Houston for this video, some of these modules were not working properly with the HL15. For example, the 45Drives Disks and 45Drives Motherboard modules do not load, the 45Drives System area only partially loads, and some services like ZnapZend fail to load. These few things aside, the rest of the UI seems to be working.
I can create ZFS pools, create SMB and NFS shares, and even create some virtual machines if I like, although the UI is pretty basic for that. Creating users, charts, metrics, benchmarks, and even accessing the terminal all seem fine. You can also install additional applications by using the CLI and finding an application on their project page, though most of them are already installed. For example, I installed an OpenLDAP server on my machine and that worked just fine; however, beta applications like Tailscale and Cloudflare Tunnels aren't available on this version of Cockpit, so you won't be able to install them. Arguably, that's better suited for something like Docker and containers anyway, and if you're going to run containers, you'll want to install Portainer to manage them.
If I do end up going this route, I would leave Cockpit for managing hardware-type services and configuration and leave the rest to Docker, which I will manage with Portainer since it's only a few clicks away. All that being said, I know this is a new line for 45Drives, but it would be nice if this were all working, so that I wouldn't be forced to use Proxmox (I don't need virtualization) or TrueNAS (I don't want to run their apps). I might just go bare metal without any GUI, but managing configs for SMB, NFS, ZFS, etc. by hand is such a pain.
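For a sense of what Houston wraps in a UI, here's what the same tasks look like by hand; a sketch with hypothetical pool, disk, and dataset names:

```shell
# Create a RAIDZ2 pool from four disks (hypothetical device names; this destroys their contents)
sudo zpool create tank raidz2 sda sdb sdc sdd
# Create a dataset with lz4 compression for file shares
sudo zfs create -o compression=lz4 tank/media
# Publish it over NFS straight from ZFS
sudo zfs set sharenfs=on tank/media
```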
So more to come on the OS for this server, I still have time to make my decision and the nice thing is it’s flexible!
The HL15 with a standard config can easily push 20 Gb/s
You’ll have to see the video for this section. Spoiler alert, I can push 20 Gb/s with this configuration, no problem at all.
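If you want to reproduce this kind of throughput test yourself, iperf3 with parallel streams is the usual tool (the IP address below is a placeholder for the HL15):

```shell
# On the HL15 (server side)
iperf3 -s
# On a client: 4 parallel TCP streams for 30 seconds
iperf3 -c 192.168.1.100 -P 4 -t 30
```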
One of the first products that is targeting the HomeLab community
So let’s talk about some of the configuration options you saw earlier along with the value you’re getting.
You can choose:
Let's not beat around the bush: all of these options are priced pretty high. No matter which way you look at it, $800, $900, or $2,000 is a lot of money no matter what you are spending it on, and I think you have to determine whether there's enough value here for you. Is a steel, repairable case with a backplane that can hold 15 caddie-less drives and give you NVMe speeds over the network important to you? If it is, you have few options out there. This isn't a chassis that comes with a proprietary motherboard and components; it's a chassis that is open enough to accommodate any motherboard / CPU combination you throw at it, now and in the future. If you're comparing it to other storage vendors, like Synology, it's on par for price, but if you're comparing it to old used gear, you're going to think it's expensive.
That being said, I would love to see cheaper options for those in the market for a homelab chassis like this one. And if not, maybe they will show up on the second-hand market later in life.
If you can get past the price, I think you'll find a case that has everything you're going to want for a storage server, now and in the future. It looks like 45Drives addressed everything I mentioned about the AV15 with the HL15, well, except RGB, unless you're counting the power switch. Also, I have to give them credit, because they took a huge risk bringing something to market as niche as HomeLab; to my knowledge, they are one of the first to do it and brand it with the "HomeLab" title.
Well, I learned a lot about the HL15, storage servers, and network throughput testing, and I hope you learned something too. And remember, if you found anything in this video helpful, don't forget to share this post!
After a week of using and testing the HL15, I finally released my in-depth review of this new storage server.
— Techno Tim (@TechnoTimLive) November 22, 2023
Does it meet the HomeLab community needs?
👉https://t.co/qCutlEbYYs pic.twitter.com/xGGOHodgaq
HL15:
HL15 Accessories:
(Affiliate links may be included in this description. I may receive a small commission at no cost to you.)
🛍️ Check out the new Merch Shop at https://l.technotim.live/shop
⚙️ See all the hardware I recommend at https://l.technotim.live/gear
🚀 Don’t forget to check out the 🚀Launchpad repo with all of the quick start source files
Are you ready to start automating your smart home with the power of open source? Do you already have Home Assistant running but need a little more power than a Raspberry Pi? If so, join me in this easy to follow, step by step tutorial on installing Home Assistant on Docker, Kubernetes, and Rancher. We'll set it up, walk through and configure the UI, and then move on to configure some Wemo smart switches, Philips Hue bulbs, Google Home / Chromecast devices, and even create a Dark Mode / Light Mode automation script using Philips Hue Scenes!
configuration.yaml

```yaml
# Configure a default setup of Home Assistant (frontend, api, etc)
default_config:

# Text to speech
tts:
  - platform: google_translate

group: !include groups.yaml
automation: !include automations.yaml
script: !include scripts.yaml
scene: !include scenes.yaml

wemo:
  discovery: true
```
scripts.yaml

```yaml
'1591564249617':
  alias: Dark Mode
  sequence:
    - data:
        group_name: Office
        scene_name: Gaming
      service: hue.hue_activate_scene
    - device_id: f41ccf86433148dcbd8e932d1412f12a
      domain: switch
      entity_id: switch.gaming_lights
      type: turn_on
'1591564322588':
  alias: Light Mode
  sequence:
    - data:
        group_name: Office
        scene_name: Energize
      service: hue.hue_activate_scene
    - device_id: f41ccf86433148dcbd8e932d1412f12a
      domain: switch
      entity_id: switch.gaming_lights
      type: turn_off
```
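To trigger the Dark Mode / Light Mode scripts automatically, here's a sketch of what matching `automations.yaml` entries could look like, assuming sun-based triggers (the numeric script IDs are the ones from `scripts.yaml`; this is an illustration, not the exact automation from the video):

```yaml
- alias: Dark Mode at sunset
  trigger:
    - platform: sun
      event: sunset
  action:
    - service: script.turn_on
      entity_id: script.1591564249617
- alias: Light Mode at sunrise
  trigger:
    - platform: sun
      event: sunrise
  action:
    - service: script.turn_on
      entity_id: script.1591564322588
```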
🛍️ Check out the new Merch Shop at https://l.technotim.live/shop
⚙️ See all the hardware I recommend at https://l.technotim.live/gear
🚀 Don’t forget to check out the 🚀Launchpad repo with all of the quick start source files
I decided to give my Home Lab a proper upgrade for 2020 and in to 2021! I finally took the plunge and went all in with a UniFi UDM Pro and a UniFi Switch PRO 24 PoE switch and they are awesome!
🛍️ Check out the new Merch Shop at https://l.technotim.live/shop
⚙️ See all the hardware I recommend at https://l.technotim.live/gear
🚀 Don’t forget to check out the 🚀Launchpad repo with all of the quick start source files
I am a huge fan of self-hosted home security and I've been doing it for years. I love the idea of being able to check on my home when I am away. Also, I've always kept my video footage on premises (on-prem) and never sent it to the cloud. It started way back with a laptop and a webcam, and it evolved into self-hosting my own DVR software on a virtual machine with many PoE and wireless cameras… but this became way too much to manage. Well, this is the next evolution of my home security: integrating it into my recently upgraded UniFi network. I wanted to simplify my home security, just like my network, so I decided to pick up some UniFi Protect G3 FLEX cameras and some new UniFi Protect G3 Instant cameras to help secure my home. I also picked up the UniFi Smart Power Plug, which will monitor my internet connection and reboot my modem if I lose connectivity. This is going to be awesome! I hope you enjoy this complete guide to setting up your new UniFi Protect system!
🛍️ Check out the new Merch Shop at https://l.technotim.live/shop
⚙️ See all the hardware I recommend at https://l.technotim.live/gear
🚀 Don’t forget to check out the 🚀Launchpad repo with all of the quick start source files
Say goodbye to all of the other Home Lab Dashboards that you end up not using, it’s time to use something smarter, Home Assistant.
You can check out my current Home Assistant Docker Compose Stack
🛍️ Check out the new Merch Shop at https://l.technotim.live/shop
⚙️ See all the hardware I recommend at https://l.technotim.live/gear
🚀 Don’t forget to check out the 🚀Launchpad repo with all of the quick start source files
🤝 Support me and help keep this site ad-free!
After moving some of my HomeLab servers into the new colocation I have so many choices to make when it comes to services and architecture! From networking, to VPN, to security, to hypervisors, to backups, and even DNS! I NEED YOUR HELP! Help me decide if I have created a solid foundation for my new HomeLab in a Colo!
Network diagram created with Figma https://l.technotim.live/figma (affiliate link but they have a free option)
Disclosures:
Find your UniFi cloud gateway here:
(Affiliate links are included in this description. I may receive a small commission at no cost to you.)
After moving some of my HomeLab servers into the new colocation I have so many choices to make when it comes to services and architecture! https://t.co/J998yIGsaY
— Techno Tim (@TechnoTimLive) April 5, 2024
🛍️ Check out the new Merch Shop at https://l.technotim.live/shop
⚙️ See all the hardware I recommend at https://l.technotim.live/gear
🚀 Don’t forget to check out the 🚀Launchpad repo with all of the quick start source files
🤝 Support me and help keep this site ad-free!
After a few months of planning and building, I colocated some of my homelab servers in a data center! There were so many unknowns like, how much does colocating server cost? Do you need to bring your own networking? How do you even prepare for this? Join me as we figure this all out!
And don’t worry, I still have servers at home too!
Disclosures:
Find your UniFi cloud gateway here:
(Affiliate links are included in this description. I may receive a small commission at no cost to you.)
What happens when you colocate some of your HomeLab servers into a Data Center?
— Techno Tim (@TechnoTimLive) March 21, 2024
And a better question, is it still called a HomeLab?https://t.co/mfxl6elVEL pic.twitter.com/ee8iMH3R5Q
🛍️ Check out the new Merch Shop at https://l.technotim.live/shop
⚙️ See all the hardware I recommend at https://l.technotim.live/gear
🚀 Don’t forget to check out the 🚀Launchpad repo with all of the quick start source files
🤝 Support me and help keep this site ad-free!
Well, here it is! My Late 2023 Server Rack and HomeLab tour! I’ve upgraded, replaced, added, and consolidated quite a bit since my last tour! New servers, new networking, UPS, cabling, power management, and more new tech on the wall!
In case you missed it, check out my HomeLab Services Tour (2024)!
It's time of year again! Time for my Server Rack and HomeLab tour! If you've ever wondered what servers, networking, and even low power PCs I am running in my setup, check it out!
— Techno Tim (@TechnoTimLive) December 8, 2023
👉https://t.co/IolyDljMuq pic.twitter.com/jo1qzchS8S
Rack & Accessories
Network
Servers & Accessories
Accessories
Over the Air TV Gear (Plex or JellyFin)
Intel NUC Mini Cluster
(Affiliate links may be included in this description. I may receive a small commission at no cost to you.)
🛍️ Check out the new Merch Shop at https://l.technotim.live/shop
⚙️ See all the hardware I recommend at https://l.technotim.live/gear
🚀 Don’t forget to check out the 🚀Launchpad repo with all of the quick start source files
🤝 Support me and help keep this site ad-free!
You asked for a tour of my homelab, well here it is. In this tour I will take you through my home server rack and network setup. This includes all of my home networking equipment, my servers, my disk array, and everything else in my server rack.
🛍️ Check out the new Merch Shop at https://l.technotim.live/shop
⚙️ See all the hardware I recommend at https://l.technotim.live/gear
🚀 Don’t forget to check out the 🚀Launchpad repo with all of the quick start source files
In my homelab tour, I showed you the hardware and network setup that powers all the infrastructure at home. Then, many of you asked which services I am hosting on this hardware. Well, here it is. This is a tour of all the self-hosted services I have running in my HomeLab.
🛍️ Check out the new Merch Shop at https://l.technotim.live/shop
⚙️ See all the hardware I recommend at https://l.technotim.live/gear
🚀 Don’t forget to check out the 🚀Launchpad repo with all of the quick start source files
After showing off my Home Lab hardware in my late 2021 tour, many of you asked what services are self-hosted in this stack. This is always a moving target, so I decided it was time to share which services I am running here at home. Today, we walk through everything I am hosting including: Dashboard, Hypervisor, Virtualization, Containerization, Network Attached Storage (NAS), DNS, Network Management, Home Security, Kubernetes, Kubernetes Storage, Docker, Reverse Proxy, Certificates, Monitoring, Logging, Syncing Data, File Sharing, Self-Promotion (Contact Page), Link Shortening, Home Entertainment, Home Automation, Battery / UPS Monitoring, CMS, Static Site Generators, Dynamic DNS, CI/CD, and many, many others. Enjoy the virtual tour!
Worth mentioning, I have videos on almost every service mentioned in this video!
🛍️ Check out the new Merch Shop at https://l.technotim.live/shop
⚙️ See all the hardware I recommend at https://l.technotim.live/gear
🚀 Don’t forget to check out the 🚀Launchpad repo with all of the quick start source files
Wow, what a year of self-hosting! After showing off my Home Lab hardware in my late 2022 tour, many of you asked what services are self-hosted in this stack. This is always a moving target, so I decided it was time to share which services I am running here at home. Today, we walk through everything I am hosting including: Dashboard, Hypervisor, Virtualization, Containerization, Network Attached Storage (NAS), DNS, Network Management, Home Security, Kubernetes, Kubernetes Storage, Docker, Reverse Proxy, Certificates, Monitoring, Logging, Syncing Data, File Sharing, Link Page, Link Shortening, Home Entertainment, Home Automation, Battery / UPS Monitoring, CMS, Static Site Generators, Dynamic DNS, CI/CD, Git Ops, Dev Ops, and many, many others. Enjoy the virtual tour!
Worth mentioning, I have videos on almost every service mentioned in this video!
Here are most of the videos mentioned in this video:
(Affiliate links are included in this description. I may receive a small commission at no cost to you.)
Want to see all the gear in this video?
Check out the kit here: https://kit.co/TechnoTim/techno-tim-homelab-tour-late-2022
00:00 - What is Techno Tim Self-Hosting?
01:05 - Dashboard
01:36 - Hypervisor
07:09 - Network Attached Storage
09:37 - DNS
11:48 - Network Management
13:05 - Home Security
13:42 - Containers (Kubernetes & Docker)
17:59 - Kubernetes Storage
21:04 - Git Ops
22:35 - Reverse Proxy (Internal, External, and Ingress Controller)
25:26 - Monitoring
26:10 - Metrics & Data Visualization
27:02 - Logging
28:28 - Home Automation
30:08 - Data Synchronization
30:55 - Link Page (Contact Page)
31:41 - Link Shortener
32:24 - Home Entertainment
33:00 - UPS Battery Monitoring
33:37 - CMS (Content Management System)
34:25 - Websites (Static Sites & Custom Code)
34:46 - Dynamic DNS (External DNS)
35:16 - CI/CD (Continuous Integration & Continuous Delivery)
37:04 - Everything Else
37:41 - How do I get started self-hosting?
38:30 - Thanks for Watching!
Wow, what a year of self-hosting! After showing off my HomeLab hardware in my late 2022 tour, many of you asked what services are self-hosted in this stack, so I decided it was time to share which services I am running here at home.https://t.co/Z1yKrwKOaP#homelab #selfhosted pic.twitter.com/JW2WdvuIQM
— Techno Tim (@TechnoTimLive) December 31, 2022
🛍️ Check out the new Merch Shop at https://l.technotim.live/shop
⚙️ See all the hardware I recommend at https://l.technotim.live/gear
🚀 Don’t forget to check out the 🚀Launchpad repo with all of the quick start source files
What a year of self-hosting! Join me as we walk through my entire infrastructure and the services that I have running in my HomeLab! This time I also include network diagrams and dive deep into which services I have running, where they are running, and why I chose them!
In case you missed it, check out my HomeLab Hardware Tour (late 2023)!
Here is the diagram for my network!
A logical Network Diagram of my HomeLab including VLANs and servers
Since many have asked, I use Figma to design my network diagrams. (affiliate link but they have a free plan)
Here’s a breakdown of all the services I use
Sites:
Tutorials:
Sites:
Tutorials:
Sites:
Tutorials:
Sites:
Tutorials:
Sites:
Tutorials:
Sites:
Tutorials:
Sites:
Tutorials:
Sites:
Tutorials:
Sites:
Tutorials:
Sites:
Tutorials:
Sites:
Tutorials:
Sites:
Tutorials:
Sites:
Sites:
Tutorials:
Sites:
Sites:
Tutorials:
Sites:
Tutorials:
Sites:
Sites:
Tutorials:
Sites:
Tutorials:
Sites:
Tutorials:
What a year of self-hosting! Join me as we walk though my entire infrastructure and services that I have running in my HomeLab! https://t.co/9b2hGFzoPz pic.twitter.com/zqEVKy8rhy
— Techno Tim (@TechnoTimLive) January 4, 2024
(Affiliate links may be included in this description. I may receive a small commission at no cost to you.)
🛍️ Check out the new Merch Shop at https://l.technotim.live/shop
⚙️ See all the hardware I recommend at https://l.technotim.live/gear
🚀 Don’t forget to check out the 🚀Launchpad repo with all of the quick start source files
🤝 Support me and help keep this site ad-free!
Every home labber and IT person has their go-to set of tools and accessories to help them accomplish tasks for technical projects in their HomeLab. This ranges from the very specialized to the common. I do all kinds of projects at home, from building and racking servers, to building mini and full-size PCs, to upgrading and troubleshooting hardware, to home office upgrades, to installing wireless access points and cameras, down to building Raspberry Pi projects. I've gathered up some of my most essential tools and accessories to assist you in your projects!
A HUGE thanks to Micro Center for sponsoring this video!
New Customers Exclusive – FREE Redragon GS500 Gaming Stereo Speakers: https://micro.center/gom Check out Micro Center’s PC Builder: https://micro.center/7hj Submit your build to Micro Center’s Build Showcase: https://micro.center/vo6
Here are all of the items that were in the video, plus a few more.
📦 See the entire kit here: https://kit.co/TechnoTim/essential-homelab-tools-accessories-for-home-labbers-and-it-pros
🛍️ Check out the new Merch Shop at https://l.technotim.live/shop
⚙️ See all the hardware I recommend at https://l.technotim.live/gear
🚀 Don’t forget to check out the 🚀Launchpad repo with all of the quick start source files
Well, here it is! My Late 2021 Server Rack and HomeLab tour! This is a special one because I just revamped and remodeled a spot in the basement for my new data center / server room (still picking out a name for it). I've upgraded, replaced, added, and consolidated quite a bit since my last tour! New servers, new networking, new UPS, a new Raspberry Pi, and even an entire wall of tech gear. I also added lots of automation and IoT devices! Join me as we walk through my server room upgrade!
(Affiliate links are included in this description. I may receive a small commission at no cost to you.)
2u Rack Shelf - https://amzn.to/2ZVSJKN
APC 1500VA UPS - https://amzn.to/3GXLJh6
Nest Protect - https://amzn.to/3BLhc21
Hue Iris Light - https://amzn.to/3ET5Gn8
Hue Motion & Temp https://amzn.to/3qb1FXf
Axxtra Power Strip - https://amzn.to/3qbzIhT
Amazon Power Strip - https://amzn.to/3mMN16w
Wall Control Galvanized Steel Pegboard - https://amzn.to/3bJ8R4s
Hue Dimmer Switch - https://amzn.to/3wj9Sts
Hue Light Strips - https://amzn.to/3wkkLLD
Hue Smart Bulb Starter Kit - https://amzn.to/31renqs
Hue Motion & Temp Detection - https://amzn.to/3o7HOFR
Cloud Lamp - https://amzn.to/3GZji24
Pi 4 B - https://amzn.to/3BTPKzc
PoE Pi Hat - https://amzn.to/3GUqY5O
Pi Zero - https://amzn.to/3o4LGap
HD Homerun - https://amzn.to/2ZXxmYS
Intel NUC - https://amzn.to/3BKE3uR
24 Port Patch Panel - https://amzn.to/3GYA4yo
Wall Mount Patch Panel - https://amzn.to/3o2Axad
Slim Network Cables - https://amzn.to/3kbYV85
UniFi Flex Mini - https://l.technotim.live/ubiquiti
UniFi UDM Pro - https://l.technotim.live/ubiquiti
UniFi 24 Port PoE Gen 2 Switch Pro - https://l.technotim.live/ubiquiti
PC Conversion Case - https://amzn.to/3qgkFDJ
18u Server Rack - https://amzn.to/3kbZdvH
1u Rails - https://amzn.to/3GSd701
APC 600 VA UPS - https://amzn.to/3mMxsM1
NetApp DD4246 Disk Shelf - https://amzn.to/3o2AOKh
SuperMicro 1u Servers - https://amzn.to/3q9M7TJ
8 TB IronWolf NAS Drives - https://amzn.to/3EQXXGw
Rackmount Servers - https://kit.co/technotim/rackmount-home-lab-servers
HomeLab Racks - https://kit.co/technotim/server-rack-homelab
1u Servers - https://kit.co/technotim/techno-tim-1u-server
Networking Stack - https://kit.co/technotim/techno-tim-network-stack
Raspberry Pi with PoE - https://kit.co/technotim/best-raspberry-pi-with-poe
Home Security - https://kit.co/technotim/techno-tim-home-security
Storage and Hard Drives - https://kit.co/technotim/best-ssd-hard-drive-flash-storage
HomeLab and Server Room Upgrade 2021 - https://kit.co/technotim/techno-tim-homelab-and-server-room-upgrade-2021
🛍️ Check out the new Merch Shop at https://l.technotim.live/shop
⚙️ See all the hardware I recommend at https://l.technotim.live/gear
🚀 Don’t forget to check out the 🚀Launchpad repo with all of the quick start source files
Well, here it is! My Late 2022 Server Rack and HomeLab tour! This is a special one because I just revamped my entire rack. I've upgraded, replaced, added, and consolidated quite a bit since my last tour! New servers, new networking, new UPS, new power management, and even an entire wall of tech gear. I also added lots of automation and IoT devices! Join me as we walk through my server rack transfer and upgrade!
(Affiliate links are included in this description. I may receive a small commission at no cost to you.)
After this, check out the 2022 (late) HomeLab Services Tour!
See the entire kit here! https://kit.co/TechnoTim/techno-tim-homelab-tour-late-2022
00:00 - What does Techno Tim’s HomeLab Look Like?
00:50 - HomeLab Music Video
02:13 - What’s all that stuff on the wall?
04:05 - New Server Rack
05:06 - Networking
07:13 - Smart PDU (Power in the front???)
10:36 - TBD Gear / Room for Growth
11:14 - 1u Servers
12:21 - Storinator
13:42 - PC Conversion Server
14:25 - Disk Shelf
15:27 - UPS
16:08 - Back of the Rack
17:08 - Power Channels
17:57 - Non Critical Power Devices
18:25 - Practical RGB Lighting (it has utility)
19:35 - Cable Management
20:51 - UPS Battery Extender
21:24 - Don’t be discouraged, Home Labs come in all shapes and sizes
23:05 - HomeLab Music Video Outro
Well, here it is! My Late 2022 Server Rack and HomeLab tour! This is a special one because I just revamped my entire rack. I've upgraded, replaced, added, and consolidated quite a bit since my last tour! https://t.co/2N04ZDS04c#homelab pic.twitter.com/9BivCrrQgg
— Techno Tim (@TechnoTimLive) December 24, 2022
Now that I have everything moved to the new rack, "Light Mode" or "Dark Mode"? pic.twitter.com/EKD45Wwl3f
— Techno Tim (@TechnoTimLive) December 25, 2022
🛍️ Check out the new Merch Shop at https://l.technotim.live/shop
⚙️ See all the hardware I recommend at https://l.technotim.live/gear
🚀 Don’t forget to check out the 🚀Launchpad repo with all of the quick start source files
Meet Homepage, your new HomeLab services dashboard homepage! Homepage is an open source, highly customizable homepage (or startpage) dashboard that runs on Docker and is integrated with over 100 API services. It’s easy to set up, looks good by default, and helps you keep track of everything you are running in your HomeLab and more. Today we’ll set up Homepage and get it running in Docker in no time.
Disclosures:
Don’t forget to ⭐ homepage on GitHub!
See this post on how to install docker
and docker compose
Make a directory:

```shell
mkdir homepage
```
`cd` into it:

```shell
cd homepage
```
Create a `docker-compose.yaml` file:

```shell
touch docker-compose.yaml
```
Edit it:

```shell
nano docker-compose.yaml
```
Place the contents:

```yaml
version: "3.3"
services:
  homepage:
    image: ghcr.io/gethomepage/homepage:latest
    container_name: homepage
    ports:
      - 3000:3000
    env_file: .env # use .env
    volumes:
      - /path/to/config:/app/config # Make sure your local config directory exists
      - /var/run/docker.sock:/var/run/docker.sock # (optional) For docker integrations, see alternative methods
    environment:
      PUID: $PUID # read them from .env
      PGID: $PGID # read them from .env
```
Create an .env file for variables:

touch .env

Edit it:

nano .env

Add to the file:

PUID=1000
PGID=1000
Save and exit
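PUID and PGID should match the user that owns your config directory so the container can read and write it; 1000 is just the common default for the first user on many Linux systems. If you're not sure what yours are, you can look them up:

```shell
# Print the current user's UID and GID; these are the values to put in .env
id -u   # PUID
id -g   # PGID
```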
Start the container:

docker compose up -d
Note: The container can take up to 60 seconds to start the first time. It’s a good idea to check the container to see if it is passing health checks before browsing to your site.
Check to be sure you see that the container is healthy.
You can check by running:

docker ps
You should see something like:

CONTAINER ID   IMAGE                                 COMMAND                  CREATED         STATUS                   PORTS                                       NAMES
8d841cf77e6f   ghcr.io/gethomepage/homepage:latest   "docker-entrypoint.s…"   2 minutes ago   Up 2 minutes (healthy)   0.0.0.0:3000->3000/tcp, :::3000->3000/tcp   homepage
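If you'd rather script this check than eyeball docker ps, a small polling helper (my own sketch, not part of the original post) works for any health check; the container name homepage matches the compose file above:

```shell
# Retry a command until it succeeds (exit code 0) or we run out of attempts.
# Usage for the container above:
#   wait_for "docker inspect -f '{{.State.Health.Status}}' homepage | grep -q healthy"
wait_for() {
  cmd=$1
  tries=${2:-30}
  i=0
  while [ "$i" -lt "$tries" ]; do
    if eval "$cmd" >/dev/null 2>&1; then
      return 0
    fi
    i=$((i + 1))
    sleep 1
  done
  return 1
}
```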
Once it’s healthy, visit http://<IP-ADDRESS-DOCKER-MACHINE>:3000
You should see your new homepage!
On the Docker machine, cd into the config directory:

cd config

You'll see several YAML files; these are the configurations we can edit to customize our homepage.
Edit settings.yaml:

nano settings.yaml
---
# For configuration options and examples, please see:
# https://gethomepage.dev/latest/configs/settings

title: My Awesome Homepage # <----- add this line

providers:
  openweathermap: openweathermapapikey
  weatherapi: weatherapiapikey
Save, exit, and revisit your homepage.
The page should refresh automatically; if not, click the refresh button in the lower right-hand corner.
The title of the document should now be My Awesome Homepage.
If we want, we can also customize the background by updating this file.
Edit settings.yaml:

nano settings.yaml
---
# For configuration options and examples, please see:
# https://gethomepage.dev/latest/configs/settings

title: My Awesome Homepage

background: https://images.unsplash.com/photo-1502790671504-542ad42d5189?auto=format&fit=crop&w=2560&q=80 # <----- add this line

providers:
  openweathermap: openweathermapapikey
  weatherapi: weatherapiapikey
Save and exit again, and the background should be updated.
You can also mount your own image rather than reference one on the web; however, I am going to stick with one from the web so that I don't have to worry about additional mounts and configuration in the future.
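If you do want a local image, the rough shape (an assumption on my part; check the Homepage docs for the exact mount point) is to mount an images directory into the container and reference it by path:

```yaml
# docker-compose.yaml (sketch): mount a host folder containing your wallpaper
services:
  homepage:
    volumes:
      - ./images:/app/public/images

# settings.yaml would then reference it like:
# background: /images/background.png
```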
If we want to add some additional filters to the image using Tailwind CSS, we can do so like this:
---
# For configuration options and examples, please see:
# https://gethomepage.dev/latest/configs/settings

title: My Awesome Homepage

background:
  image: https://images.unsplash.com/photo-1502790671504-542ad42d5189?auto=format&fit=crop&w=2560&q=80
  blur: sm # sm, "", md, xl... see https://tailwindcss.com/docs/backdrop-blur
  saturate: 50 # 0, 50, 100... see https://tailwindcss.com/docs/backdrop-saturate
  brightness: 50 # 0, 50, 75... see https://tailwindcss.com/docs/backdrop-brightness
  opacity: 50 # 0-100
# ^^^^ add the above block

providers:
  openweathermap: openweathermapapikey
  weatherapi: weatherapiapikey
If we want to set our homepage to dark mode and the color to slate, we can do so like this:
---
# For configuration options and examples, please see:
# https://gethomepage.dev/latest/configs/settings

title: My Awesome Homepage
theme: dark # <----- add this line
color: slate # <----- add this line
background:
  image: https://images.unsplash.com/photo-1502790671504-542ad42d5189?auto=format&fit=crop&w=2560&q=80
  blur: sm # sm, "", md, xl... see https://tailwindcss.com/docs/backdrop-blur
  saturate: 50 # 0, 50, 100... see https://tailwindcss.com/docs/backdrop-saturate
  brightness: 50 # 0, 50, 75... see https://tailwindcss.com/docs/backdrop-brightness
  opacity: 50 # 0-100

providers:
  openweathermap: openweathermapapikey
  weatherapi: weatherapiapikey
Why do this? Isn't this a lot of work?
One word: it's "repeatable". We can back up our YAML files and even share them if we want. This also works great with Kubernetes, since you can pass a ConfigMap to your deployment and skip the volume entirely.
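As a minimal sketch of that Kubernetes idea (names here are illustrative; my full working config appears further down this post), the same settings.yaml content can live in a ConfigMap:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: homepage
data:
  settings.yaml: |
    title: My Awesome Homepage
    theme: dark
    color: slate
```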
Services are configured in services.yaml and really are buttons for accessing some of your services.
Edit services.yaml:

nano config/services.yaml
---
# For configuration options and examples, please see:
# https://gethomepage.dev/latest/configs/services

- My First Group:
    - My First Service:
        href: http://localhost/
        description: Homepage is awesome

- My Second Group:
    - My Second Service:
        href: http://localhost/
        description: Homepage is the best

- My Third Group:
    - My Third Service:
        href: http://localhost/
        description: Homepage is 😎

- Top Services:
    - Proxmox:
        icon: proxmox.png # icons found here https://github.com/walkxcode/dashboard-icons
        href: https://192.168.0.15:8006
        description: Proxmox VE
    - PiHole:
        icon: pi-hole.svg # icons found here https://github.com/walkxcode/dashboard-icons
        href: http://192.168.60.10/admin
        description: Server 1
    - Cowboy:
        icon: mdi-account-cowboy-hat-#FF0000 # icons found here https://pictogrammers.com/library/mdi/
        href: https://localhost/
        description: giddyup service
    - McDonald’s:
        icon: si-mcdonalds-#FFD700 # icons found here https://simpleicons.org/
        href: https://www.mcdonalds.com/
        description: homepage
# ^^^ add this block
As you can see, we configured four services under the Top Services group.
Note: If you're using Material Design Icons or Simple Icons, you can change the color of an icon by appending a hex value to the icon name, as shown above.
Service widgets extend the functionality of service buttons. Optional, but cool.
Edit services.yaml:

nano config/services.yaml
---
# For configuration options and examples, please see:
# https://gethomepage.dev/latest/configs/services

- My First Group:
    - My First Service:
        href: http://localhost/
        description: Homepage is awesome

- My Second Group:
    - My Second Service:
        href: http://localhost/
        description: Homepage is the best

- My Third Group:
    - My Third Service:
        href: http://localhost/
        description: Homepage is 😎

- Top Services:
    - Proxmox:
        icon: proxmox.png # icons found here https://github.com/walkxcode/dashboard-icons
        href: https://192.168.0.15:8006
        description: Proxmox VE
    - PiHole:
        icon: pi-hole.svg # icons found here https://github.com/walkxcode/dashboard-icons
        href: http://192.168.60.10/admin
        description: Server 1
        widget:
            type: pihole
            url: http://192.168.60.10
            key: "{{HOMEPAGE_VAR_PIHOLE_API_KEY}}" # <--- updated with API key from PiHole
    - Cowboy:
        icon: mdi-account-cowboy-hat-#FF0000 # icons found here https://pictogrammers.com/library/mdi/
        href: https://localhost/
        description: giddyup service
    - McDonald’s:
        icon: si-mcdonalds-#FFD700 # icons found here https://simpleicons.org/
        href: https://www.mcdonalds.com/
        description: homepage
Stop the Docker container:

docker stop homepage

Start the Docker container:

docker start homepage
Note: I have noticed that sometimes you need to recreate the container in order for the variables from your .env file to be replaced. I'm not sure if this is a feature or a bug, but docker compose up -d --force-recreate will stop the old container, remove it, and create a new one.
We should now see Pi-hole statistics.
Widgets are standalone items, like the resources and search widgets at the top of the page.
If you want to edit these items:
nano config/widgets.yaml
---
# For configuration options and examples, please see:
# https://gethomepage.dev/latest/configs/service-widgets

- resources:
    cpu: true
    memory: true
    disk: /

- search:
    provider: google # <--- updated with google
    target: _blank

- datetime:
    text_size: xl
    format:
      timeStyle: short
# ^^^ add this block
Now we can see that search has been changed to Google and we’ve added a date widget.
Here’s a fully working example of my own Homepage dashboard that I use!
As promised, here is both the config for Docker and even Kubernetes!
docker-compose.yaml
version: "3.3"
services:
  homepage:
    image: ghcr.io/gethomepage/homepage:latest
    container_name: homepage
    restart: unless-stopped
    ports:
      - 3000:3000
    env_file: .env
    volumes:
      - ./config:/app/config # Make sure your local config directory exists
      - /var/run/docker.sock:/var/run/docker.sock # (optional) For docker integrations, see alternative methods
    environment:
      PUID: $PUID
      PGID: $PGID
config/bookmarks.yaml
---
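Mine is empty, but if you do want bookmarks, the shape (based on the Homepage docs; the entries here are just examples) looks like this:

```yaml
- Developer:
    - GitHub:
        - abbr: GH
          href: https://github.com/
- Social:
    - YouTube:
        - abbr: YT
          href: https://youtube.com/
```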
config/services.yaml
---
# For configuration options and examples, please see:
# https://gethomepage.dev/latest/configs/services
# icons found here https://github.com/walkxcode/dashboard-icons

- Hypervisor:
    - Proxmox:
        icon: proxmox.svg
        href: "{{HOMEPAGE_VAR_PROXMOX_URL}}"
        description: pve1
        widget:
            type: proxmox
            url: "{{HOMEPAGE_VAR_PROXMOX_URL}}"
            username: "{{HOMEPAGE_VAR_PROXMOX_USER}}"
            password: "{{HOMEPAGE_VAR_PROXMOX_API_KEY}}"
            node: xing-01
    - Proxmox:
        icon: proxmox.svg
        href: "{{HOMEPAGE_VAR_PROXMOX_URL}}"
        description: pve2
        widget:
            type: proxmox
            url: "{{HOMEPAGE_VAR_PROXMOX_URL}}"
            username: "{{HOMEPAGE_VAR_PROXMOX_USER}}"
            password: "{{HOMEPAGE_VAR_PROXMOX_API_KEY}}"
            node: xing-02
    - Proxmox:
        icon: proxmox.svg
        href: "{{HOMEPAGE_VAR_PROXMOX_URL}}"
        description: pve3
        widget:
            type: proxmox
            url: "{{HOMEPAGE_VAR_PROXMOX_URL}}"
            username: "{{HOMEPAGE_VAR_PROXMOX_USER}}"
            password: "{{HOMEPAGE_VAR_PROXMOX_API_KEY}}"
            node: xing-03
    - Proxmox:
        icon: proxmox.svg
        href: "{{HOMEPAGE_VAR_PROXMOX_URL}}"
        description: pve4
        widget:
            type: proxmox
            url: "{{HOMEPAGE_VAR_PROXMOX_URL}}"
            username: "{{HOMEPAGE_VAR_PROXMOX_USER}}"
            password: "{{HOMEPAGE_VAR_PROXMOX_API_KEY}}"
            node: storinator
- Containers:
    - Rancher:
        icon: rancher.svg
        href: "{{HOMEPAGE_VAR_RACNHER_URL}}"
        description: k8s
    - Longhorn:
        icon: longhorn.svg
        href: "{{HOMEPAGE_VAR_LONGHORN_URL}}"
        description: k8s storage
    - Portainer:
        icon: portainer.svg
        href: "{{HOMEPAGE_VAR_PORTAINER_URL}}"
        description: docker
        widget:
            type: portainer
            url: "{{HOMEPAGE_VAR_PORTAINER_URL}}"
            env: 2
            key: "{{HOMEPAGE_VAR_PORTAINER_API_KEY}}"
- DNS:
    - Pi-Hole1:
        icon: pi-hole.svg
        href: "{{HOMEPAGE_VAR_PIHOLE_URL_1}}"
        description: quasar
        widget:
            type: pihole
            url: "{{HOMEPAGE_VAR_PIHOLE_URL_1}}"
            key: "{{HOMEPAGE_VAR_PIHOLE_API_KEY_1}}"
    - Pi-Hole2:
        icon: pi-hole.svg
        href: "{{HOMEPAGE_VAR_PIHOLE_URL_2}}"
        description: blazar
        widget:
            type: pihole
            url: "{{HOMEPAGE_VAR_PIHOLE_URL_2}}"
            key: "{{HOMEPAGE_VAR_PIHOLE_API_KEY_2}}"
    - Pi-Hole3:
        icon: pi-hole.svg
        href: "{{HOMEPAGE_VAR_PIHOLE_URL_3}}"
        description: electron
        widget:
            type: pihole
            url: "{{HOMEPAGE_VAR_PIHOLE_URL_3}}"
            key: "{{HOMEPAGE_VAR_PIHOLE_API_KEY_3}}"
- Network:
    - UniFi:
        icon: unifi.svg
        href: "{{HOMEPAGE_VAR_UNIFI_NETWORK_URL}}"
        description: network
        widget:
            type: unifi
            url: "{{HOMEPAGE_VAR_UNIFI_NETWORK_URL}}"
            username: "{{HOMEPAGE_VAR_UNIFI_NETWORK_USERNAME}}"
            password: "{{HOMEPAGE_VAR_UNIFI_NETWORK_PASSWORD}}"
    - Uptime Kuma:
        icon: uptime-kuma.svg
        href: "{{HOMEPAGE_VAR_UPTIME_KUMA_URL}}"
        description: internal
        widget:
            type: uptimekuma
            url: "{{HOMEPAGE_VAR_UPTIME_KUMA_URL}}"
            slug: home
    - Uptime Robot:
        icon: https://play-lh.googleusercontent.com/cUrv0t00FYQ1GKLuOTvv8qjo1lSDjqZC16IOp3Fb6ijew6Br5m4o16HhDp0GBu_Bw8Y=w240-h480-rw
        href: https://uptimerobot.com/dashboard
        description: external
        widget:
            type: uptimerobot
            url: https://api.uptimerobot.com
            key: "{{HOMEPAGE_VAR_UPTIME_ROBOT_API_KEY}}"
- Storage:
    - TrueNAS:
        icon: truenas.svg
        href: "{{HOMEPAGE_VAR_TRUENAS_URL}}"
        description: scale
        widget:
            type: truenas
            url: "{{HOMEPAGE_VAR_TRUENAS_URL}}"
            key: "{{HOMEPAGE_VAR_TRUENAS_API_KEY}}"
    - MinIO:
        icon: minio.svg
        href: "{{HOMEPAGE_VAR_MINIO_URL}}"
        description: object storage
- Media:
    - Plex:
        icon: plex.svg
        href: "{{HOMEPAGE_VAR_PLEX_URL}}"
        description: media server
        widget:
            type: plex
            url: "{{HOMEPAGE_VAR_PLEX_URL}}"
            key: "{{HOMEPAGE_VAR_PLEX_API_TOKEN}}"
    - Tautulli:
        icon: tautulli.svg
        href: "{{HOMEPAGE_VAR_TAUTULLI_URL}}"
        description: plex stats
        widget:
            type: tautulli
            url: "{{HOMEPAGE_VAR_TAUTULLI_URL}}"
            key: "{{HOMEPAGE_VAR_TAUTULLI_API_KEY}}"
    - HDHomerun:
        icon: hdhomerun.png
        href: "{{HOMEPAGE_VAR_HDHOMERUN_URL}}"
        description: flex 4k
        widget:
            type: hdhomerun
            url: "{{HOMEPAGE_VAR_HDHOMERUN_URL}}"
- Remote Access:
    - PiKVM:
        icon: https://avatars.githubusercontent.com/u/41749659?s=200&v=4
        href: "{{HOMEPAGE_VAR_PIKVM_URL}}"
        description: remote kvm
    - IPMI:
        icon: https://upload.wikimedia.org/wikipedia/commons/1/1d/Super_Micro_Computer_Logo.svg
        href: "{{HOMEPAGE_VAR_IPMI_1_URL}}"
        description: storinator
    - IPMI:
        icon: https://upload.wikimedia.org/wikipedia/commons/1/1d/Super_Micro_Computer_Logo.svg
        href: "{{HOMEPAGE_VAR_IPMI_2_URL}}"
        description: hl15
    - Netboot:
        icon: https://netboot.xyz/img/nbxyz-laptop.gif
        href: "{{HOMEPAGE_VAR_NETBOOT_URL}}"
        description: network boot utility
    - Tripp Lite:
        icon: https://upload.wikimedia.org/wikipedia/commons/f/f9/Tripp_Lite_logo.svg
        href: "{{HOMEPAGE_VAR_UPS_1_URL}}"
        description: 1500
    - Eaton:
        icon: https://cdn11.bigcommerce.com/s-fg272t4iw0/images/stencil/1280x1280/products/2549/2802/C-12556__63907.1557814942.jpg?c=2
        href: "{{HOMEPAGE_VAR_UPS_2_URL}}"
        description: 5p
- Home Automation:
    - Home Assistant:
        icon: home-assistant.svg
        href: "{{HOMEPAGE_VAR_HOME_ASSISTANT_URL}}"
        description: home
        widget:
            type: homeassistant
            url: "{{HOMEPAGE_VAR_HOME_ASSISTANT_URL}}"
            key: "{{HOMEPAGE_VAR_HOME_ASSISTANT_API_KEY}}"
    - UniFi:
        icon: https://play-lh.googleusercontent.com/DmgQvSdocOrGr0D0rxSBE9sqh23Fw3ck3BgKRN788cZnOKgcZlcEAFRYwmUbp6vMTVI
        href: "{{HOMEPAGE_VAR_UNIFI_PROTECT_URL}}"
        description: protect
    - Scrypted:
        icon: https://www.scrypted.app/images/web_hi_res_512.png
        href: "{{HOMEPAGE_VAR_SCRYPTED_URL}}"
        description: mgmt console
    - Broadlink Control:
        icon: https://nwzimg.wezhan.net/contents/sitefiles3606/18030899/images/5430245.png
        href: "{{HOMEPAGE_VAR_BROADLINK_CONTROL_URL}}"
        description: light control
- Other:
    - GitLab:
        icon: gitlab.svg
        href: https://gitlab.com
        description: source code
    - GitHub:
        icon: github.svg
        href: https://github.com
        description: source code
    - Shlink:
        icon: https://shlink.io/images/shlink-logo-blue.svg
        href: "{{HOMEPAGE_VAR_SHLINK_URL}}"
        description: dashboard
config/settings.yaml
---
# For configuration options and examples, please see:
# https://gethomepage.dev/latest/configs/settings

title: Techno Tim Homepage

background:
  image: https://cdnb.artstation.com/p/assets/images/images/006/897/659/large/mikael-gustafsson-wallpaper-mikael-gustafsson.jpg
  blur: sm # sm, md, xl... see https://tailwindcss.com/docs/backdrop-blur
  saturate: 100 # 0, 50, 100... see https://tailwindcss.com/docs/backdrop-saturate
  brightness: 50 # 0, 50, 75... see https://tailwindcss.com/docs/backdrop-brightness
  opacity: 100 # 0-100

theme: dark
color: slate

useEqualHeights: true

layout:
  Hypervisor:
    header: true
    style: row
    columns: 4
  Containers:
    header: true
    style: row
    columns: 4
  DNS:
    header: true
    style: row
    columns: 4
  Network:
    header: true
    style: row
    columns: 4
  Remote Access:
    header: true
    style: row
    columns: 4
  Storage:
    header: true
    style: row
    columns: 4
  Media:
    header: true
    style: row
    columns: 4
  Home Automation:
    header: true
    style: row
    columns: 4
  Other:
    header: true
    style: row
    columns: 4
config/widgets.yaml
---
# For configuration options and examples, please see:
# https://gethomepage.dev/latest/configs/service-widgets

- resources:
    cpu: true
    memory: true
    disk: /

- datetime:
    text_size: xl
    format:
      timeStyle: short
.env
Note: These variables will be replaced in the configuration above. You will need to supply your own values in your file.
PUID=1000
PGID=1000

HOMEPAGE_VAR_PIHOLE_API_KEY_1=
HOMEPAGE_VAR_PIHOLE_API_KEY_2=
HOMEPAGE_VAR_PIHOLE_API_KEY_3=

HOMEPAGE_VAR_PIHOLE_URL_1=
HOMEPAGE_VAR_PIHOLE_URL_2=
HOMEPAGE_VAR_PIHOLE_URL_3=

HOMEPAGE_VAR_PLEX_URL=
HOMEPAGE_VAR_PLEX_API_TOKEN=

HOMEPAGE_VAR_TAUTULLI_URL=
HOMEPAGE_VAR_TAUTULLI_API_KEY=

HOMEPAGE_VAR_HDHOMERUN_URL=

HOMEPAGE_VAR_HOME_ASSISTANT_URL=
HOMEPAGE_VAR_HOME_ASSISTANT_API_KEY=

HOMEPAGE_VAR_TRUENAS_URL=
HOMEPAGE_VAR_TRUENAS_API_KEY=

HOMEPAGE_VAR_UNIFI_NETWORK_URL=
HOMEPAGE_VAR_UNIFI_NETWORK_USERNAME=
HOMEPAGE_VAR_UNIFI_NETWORK_PASSWORD=

HOMEPAGE_VAR_UNIFI_PROTECT_URL=

HOMEPAGE_VAR_UPTIME_KUMA_URL=

HOMEPAGE_VAR_MINIO_URL=

HOMEPAGE_VAR_RACNHER_URL=

HOMEPAGE_VAR_LONGHORN_URL=

HOMEPAGE_VAR_PORTAINER_URL=
HOMEPAGE_VAR_PORTAINER_API_KEY=

HOMEPAGE_VAR_PROXMOX_URL=
HOMEPAGE_VAR_PROXMOX_USER=
HOMEPAGE_VAR_PROXMOX_API_KEY=

HOMEPAGE_VAR_UPTIME_ROBOT_API_KEY=

HOMEPAGE_VAR_SCRYPTED_URL=

HOMEPAGE_VAR_PIKVM_URL=

HOMEPAGE_VAR_NETBOOT_URL=

HOMEPAGE_VAR_BROADLINK_CONTROL_URL=

HOMEPAGE_VAR_IPMI_1_URL=
HOMEPAGE_VAR_IPMI_2_URL=

HOMEPAGE_VAR_UPS_1_URL=
HOMEPAGE_VAR_UPS_2_URL=

HOMEPAGE_VAR_SHLINK_URL=
Here’s my Kubernetes config!
deployment.yaml
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: homepage
  namespace: default
  labels:
    app: homepage
  annotations:
    reloader.stakater.com/auto: "true"
spec:
  selector:
    matchLabels:
      app: homepage
  replicas: 3
  progressDeadlineSeconds: 600
  revisionHistoryLimit: 1
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 25%
      maxSurge: 1
  template:
    metadata:
      labels:
        app: homepage
      annotations:
        deploy-date: "deploy-date-value"
    spec:
      containers:
        - name: homepage
          image: ghcr.io/gethomepage/homepage:v0.8.4
          resources:
            requests:
              memory: 128Mi
              cpu: 200m
          envFrom:
            - secretRef:
                name: homepage-secret
          ports:
            - containerPort: 3000
              name: http
          readinessProbe:
            httpGet:
              path: /
              port: http
            initialDelaySeconds: 60
            periodSeconds: 10
            failureThreshold: 5
            timeoutSeconds: 5
          livenessProbe:
            httpGet:
              path: /
              port: http
            initialDelaySeconds: 10
            periodSeconds: 10
            timeoutSeconds: 5
          volumeMounts:
            - mountPath: /app/config/custom.js
              name: homepage-config
              subPath: custom.js
            - mountPath: /app/config/custom.css
              name: homepage-config
              subPath: custom.css
            - mountPath: /app/config/bookmarks.yaml
              name: homepage-config
              subPath: bookmarks.yaml
            - mountPath: /app/config/docker.yaml
              name: homepage-config
              subPath: docker.yaml
            - mountPath: /app/config/kubernetes.yaml
              name: homepage-config
              subPath: kubernetes.yaml
            - mountPath: /app/config/services.yaml
              name: homepage-config
              subPath: services.yaml
            - mountPath: /app/config/settings.yaml
              name: homepage-config
              subPath: settings.yaml
            - mountPath: /app/config/widgets.yaml
              name: homepage-config
              subPath: widgets.yaml
            - mountPath: /app/config/logs
              name: logs
      volumes:
        - name: homepage-config
          configMap:
            name: homepage
        - name: logs
          emptyDir: {}
      topologySpreadConstraints:
        - maxSkew: 1
          topologyKey: topology.kubernetes.io/zone
          whenUnsatisfiable: DoNotSchedule
          labelSelector:
            matchLabels:
              app: homepage
config.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: homepage
  namespace: default
  labels:
    app: homepage
data:
  kubernetes.yaml: |
    mode: cluster
  settings.yaml: |
    title: Techno Tim Homepage

    background:
      image: https://cdnb.artstation.com/p/assets/images/images/006/897/659/large/mikael-gustafsson-wallpaper-mikael-gustafsson.jpg
      blur: sm # sm, md, xl... see https://tailwindcss.com/docs/backdrop-blur
      saturate: 100 # 0, 50, 100... see https://tailwindcss.com/docs/backdrop-saturate
      brightness: 50 # 0, 50, 75... see https://tailwindcss.com/docs/backdrop-brightness
      opacity: 100 # 0-100

    theme: dark
    color: slate

    useEqualHeights: true

    layout:
      Hypervisor:
        header: true
        style: row
        columns: 4
      Containers:
        header: true
        style: row
        columns: 4
      DNS:
        header: true
        style: row
        columns: 4
      Network:
        header: true
        style: row
        columns: 4
      Remote Access:
        header: true
        style: row
        columns: 4
      Storage:
        header: true
        style: row
        columns: 4
      Media:
        header: true
        style: row
        columns: 4
      Home Automation:
        header: true
        style: row
        columns: 4
      Other:
        header: true
        style: row
        columns: 4
  custom.css: ""
  custom.js: ""
  bookmarks.yaml: ""
  services.yaml: |
    - Hypervisor:
        - Proxmox:
            icon: proxmox.svg
            href: "{{HOMEPAGE_VAR_PROXMOX_URL}}"
            description: pve1
            widget:
                type: proxmox
                url: "{{HOMEPAGE_VAR_PROXMOX_URL}}"
                username: "{{HOMEPAGE_VAR_PROXMOX_USER}}"
                password: "{{HOMEPAGE_VAR_PROXMOX_API_KEY}}"
                node: xing-01
        - Proxmox:
            icon: proxmox.svg
            href: "{{HOMEPAGE_VAR_PROXMOX_URL}}"
            description: pve2
            widget:
                type: proxmox
                url: "{{HOMEPAGE_VAR_PROXMOX_URL}}"
                username: "{{HOMEPAGE_VAR_PROXMOX_USER}}"
                password: "{{HOMEPAGE_VAR_PROXMOX_API_KEY}}"
                node: xing-02
        - Proxmox:
            icon: proxmox.svg
            href: "{{HOMEPAGE_VAR_PROXMOX_URL}}"
            description: pve3
            widget:
                type: proxmox
                url: "{{HOMEPAGE_VAR_PROXMOX_URL}}"
                username: "{{HOMEPAGE_VAR_PROXMOX_USER}}"
                password: "{{HOMEPAGE_VAR_PROXMOX_API_KEY}}"
                node: xing-03
        - Proxmox:
            icon: proxmox.svg
            href: "{{HOMEPAGE_VAR_PROXMOX_URL}}"
            description: pve4
            widget:
                type: proxmox
                url: "{{HOMEPAGE_VAR_PROXMOX_URL}}"
                username: "{{HOMEPAGE_VAR_PROXMOX_USER}}"
                password: "{{HOMEPAGE_VAR_PROXMOX_API_KEY}}"
                node: storinator
    - Containers:
        - Rancher:
            icon: rancher.svg
            href: "{{HOMEPAGE_VAR_RACNHER_URL}}"
            description: k8s
        - Longhorn:
            icon: longhorn.svg
            href: "{{HOMEPAGE_VAR_LONGHORN_URL}}"
            description: k8s storage
        - Portainer:
            icon: portainer.svg
            href: "{{HOMEPAGE_VAR_PORTAINER_URL}}"
            description: docker
            widget:
                type: portainer
                url: "{{HOMEPAGE_VAR_PORTAINER_URL}}"
                env: 2
                key: "{{HOMEPAGE_VAR_PORTAINER_API_KEY}}"
    - DNS:
        - Pi-Hole1:
            icon: pi-hole.svg
            href: "{{HOMEPAGE_VAR_PIHOLE_URL_1}}"
            description: quasar
            widget:
                type: pihole
                url: "{{HOMEPAGE_VAR_PIHOLE_URL_1}}"
                key: "{{HOMEPAGE_VAR_PIHOLE_API_KEY_1}}"
        - Pi-Hole2:
            icon: pi-hole.svg
            href: "{{HOMEPAGE_VAR_PIHOLE_URL_2}}"
            description: blazar
            widget:
                type: pihole
                url: "{{HOMEPAGE_VAR_PIHOLE_URL_2}}"
                key: "{{HOMEPAGE_VAR_PIHOLE_API_KEY_2}}"
        - Pi-Hole3:
            icon: pi-hole.svg
            href: "{{HOMEPAGE_VAR_PIHOLE_URL_3}}"
            description: electron
            widget:
                type: pihole
                url: "{{HOMEPAGE_VAR_PIHOLE_URL_3}}"
                key: "{{HOMEPAGE_VAR_PIHOLE_API_KEY_3}}"
    - Network:
        - UniFi:
            icon: unifi.svg
            href: "{{HOMEPAGE_VAR_UNIFI_NETWORK_URL}}"
            description: network
            widget:
                type: unifi
                url: "{{HOMEPAGE_VAR_UNIFI_NETWORK_URL}}"
                username: "{{HOMEPAGE_VAR_UNIFI_NETWORK_USERNAME}}"
                password: "{{HOMEPAGE_VAR_UNIFI_NETWORK_PASSWORD}}"
        - Uptime Kuma:
            icon: uptime-kuma.svg
            href: "{{HOMEPAGE_VAR_UPTIME_KUMA_URL}}"
            description: internal
            widget:
                type: uptimekuma
                url: "{{HOMEPAGE_VAR_UPTIME_KUMA_URL}}"
                slug: home
        - Uptime Robot:
            icon: https://play-lh.googleusercontent.com/cUrv0t00FYQ1GKLuOTvv8qjo1lSDjqZC16IOp3Fb6ijew6Br5m4o16HhDp0GBu_Bw8Y=w240-h480-rw
            href: https://uptimerobot.com/dashboard
            description: external
            widget:
                type: uptimerobot
                url: https://api.uptimerobot.com
                key: "{{HOMEPAGE_VAR_UPTIME_ROBOT_API_KEY}}"
    - Storage:
        - TrueNAS:
            icon: truenas.svg
            href: "{{HOMEPAGE_VAR_TRUENAS_URL}}"
            description: scale
            widget:
                type: truenas
                url: "{{HOMEPAGE_VAR_TRUENAS_URL}}"
                key: "{{HOMEPAGE_VAR_TRUENAS_API_KEY}}"
        - MinIO:
            icon: minio.svg
            href: "{{HOMEPAGE_VAR_MINIO_URL}}"
            description: object storage
    - Media:
        - Plex:
            icon: plex.svg
            href: "{{HOMEPAGE_VAR_PLEX_URL}}"
            description: media server
            widget:
                type: plex
                url: "{{HOMEPAGE_VAR_PLEX_URL}}"
                key: "{{HOMEPAGE_VAR_PLEX_API_TOKEN}}"
        - Tautulli:
            icon: tautulli.svg
            href: "{{HOMEPAGE_VAR_TAUTULLI_URL}}"
            description: plex stats
            widget:
                type: tautulli
                url: "{{HOMEPAGE_VAR_TAUTULLI_URL}}"
                key: "{{HOMEPAGE_VAR_TAUTULLI_API_KEY}}"
        - HDHomerun:
            icon: hdhomerun.png
            href: "{{HOMEPAGE_VAR_HDHOMERUN_URL}}"
            description: flex 4k
            widget:
                type: hdhomerun
                url: "{{HOMEPAGE_VAR_HDHOMERUN_URL}}"
    - Remote Access:
        - PiKVM:
            icon: https://avatars.githubusercontent.com/u/41749659?s=200&v=4
            href: "{{HOMEPAGE_VAR_PIKVM_URL}}"
            description: remote kvm
        - IPMI:
            icon: https://upload.wikimedia.org/wikipedia/commons/1/1d/Super_Micro_Computer_Logo.svg
            href: "{{HOMEPAGE_VAR_IPMI_1_URL}}"
            description: storinator
        - IPMI:
            icon: https://upload.wikimedia.org/wikipedia/commons/1/1d/Super_Micro_Computer_Logo.svg
            href: "{{HOMEPAGE_VAR_IPMI_2_URL}}"
            description: hl15
        - Netboot:
            icon: https://netboot.xyz/img/nbxyz-laptop.gif
            href: "{{HOMEPAGE_VAR_NETBOOT_URL}}"
            description: network boot utility
        - Tripp Lite:
            icon: https://upload.wikimedia.org/wikipedia/commons/f/f9/Tripp_Lite_logo.svg
            href: "{{HOMEPAGE_VAR_UPS_1_URL}}"
            description: 1500
        - Eaton:
            icon: https://cdn11.bigcommerce.com/s-fg272t4iw0/images/stencil/1280x1280/products/2549/2802/C-12556__63907.1557814942.jpg?c=2
            href: "{{HOMEPAGE_VAR_UPS_2_URL}}"
            description: 5p
    - Home Automation:
        - Home Assistant:
            icon: home-assistant.svg
            href: "{{HOMEPAGE_VAR_HOME_ASSISTANT_URL}}"
            description: home
            widget:
                type: homeassistant
                url: "{{HOMEPAGE_VAR_HOME_ASSISTANT_URL}}"
                key: "{{HOMEPAGE_VAR_HOME_ASSISTANT_API_KEY}}"
        - UniFi:
            icon: https://play-lh.googleusercontent.com/DmgQvSdocOrGr0D0rxSBE9sqh23Fw3ck3BgKRN788cZnOKgcZlcEAFRYwmUbp6vMTVI
            href: "{{HOMEPAGE_VAR_UNIFI_PROTECT_URL}}"
            description: protect
        - Scrypted:
            icon: https://www.scrypted.app/images/web_hi_res_512.png
            href: "{{HOMEPAGE_VAR_SCRYPTED_URL}}"
            description: mgmt console
        - Broadlink Control:
            icon: https://nwzimg.wezhan.net/contents/sitefiles3606/18030899/images/5430245.png
            href: "{{HOMEPAGE_VAR_BROADLINK_CONTROL_URL}}"
            description: light control
    - Other:
        - GitLab:
            icon: gitlab.svg
            href: https://gitlab.com
            description: source code
        - GitHub:
            icon: github.svg
            href: https://github.com
            description: source code
        - Shlink:
            icon: https://shlink.io/images/shlink-logo-blue.svg
            href: "{{HOMEPAGE_VAR_SHLINK_URL}}"
            description: dashboard
  widgets.yaml: |
    - resources:
        cpu: true
        memory: true
        disk: /

    - datetime:
        text_size: xl
        format:
          timeStyle: short
  docker.yaml: ""
secret.yaml
kind: Secret
apiVersion: v1
type: Opaque
metadata:
  name: homepage-secret
  namespace: default
stringData:
  HOMEPAGE_VAR_PIHOLE_API_KEY_1: ""
  HOMEPAGE_VAR_PIHOLE_API_KEY_2: ""
  HOMEPAGE_VAR_PIHOLE_API_KEY_3: ""
  HOMEPAGE_VAR_PIHOLE_URL_1: ""
  HOMEPAGE_VAR_PIHOLE_URL_2: ""
  HOMEPAGE_VAR_PIHOLE_URL_3: ""
  HOMEPAGE_VAR_PLEX_URL: ""
  HOMEPAGE_VAR_PLEX_API_TOKEN: ""
  HOMEPAGE_VAR_TAUTULLI_URL: ""
  HOMEPAGE_VAR_TAUTULLI_API_KEY: ""
  HOMEPAGE_VAR_HDHOMERUN_URL: ""
  HOMEPAGE_VAR_HOME_ASSISTANT_URL: ""
  HOMEPAGE_VAR_HOME_ASSISTANT_API_KEY: ""
  HOMEPAGE_VAR_TRUENAS_URL: ""
  HOMEPAGE_VAR_TRUENAS_API_KEY: ""
  HOMEPAGE_VAR_UNIFI_NETWORK_URL: ""
  HOMEPAGE_VAR_UNIFI_NETWORK_USERNAME: ""
  HOMEPAGE_VAR_UNIFI_NETWORK_PASSWORD: ""
  HOMEPAGE_VAR_UNIFI_PROTECT_URL: ""
  HOMEPAGE_VAR_UPTIME_KUMA_URL: ""
  HOMEPAGE_VAR_MINIO_URL: ""
  HOMEPAGE_VAR_RACNHER_URL: ""
  HOMEPAGE_VAR_LONGHORN_URL: ""
  HOMEPAGE_VAR_PORTAINER_URL: ""
  HOMEPAGE_VAR_PORTAINER_API_KEY: ""
  HOMEPAGE_VAR_PROXMOX_URL: ""
  HOMEPAGE_VAR_PROXMOX_USER: ""
  HOMEPAGE_VAR_PROXMOX_API_KEY: ""
  HOMEPAGE_VAR_UPTIME_ROBOT_API_KEY: ""
  HOMEPAGE_VAR_SCRYPTED_URL: ""
  HOMEPAGE_VAR_PIKVM_URL: ""
  HOMEPAGE_VAR_NETBOOT_URL: ""
  HOMEPAGE_VAR_BROADLINK_CONTROL_URL: ""
  HOMEPAGE_VAR_IPMI_1_URL: ""
  HOMEPAGE_VAR_IPMI_2_URL: ""
  HOMEPAGE_VAR_UPS_1_URL: ""
  HOMEPAGE_VAR_UPS_2_URL: ""
  HOMEPAGE_VAR_SHLINK_URL: ""
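One note on the Secret: stringData lets you write values in plaintext and Kubernetes base64-encodes them for you when storing them. If you ever need the equivalent data: entry by hand, it's just base64 (the value here is illustrative, not a real key):

```shell
# Base64-encode a value the way Kubernetes stores Secret data
echo -n "my-api-key" | base64   # prints bXktYXBpLWtleQ==
```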
I finally replaced my homepage Dashboard!https://t.co/e571uMSQ89 pic.twitter.com/eN5sFrsVyN
— Techno Tim (@TechnoTimLive) January 15, 2024
🤝 Support me and help keep this site ad-free!
Do you want to self host your Rancher UI securely in your homelab? Have you thought about putting your Rancher UI behind Traefik and your reverse proxy to get free SSL certificates using Let’s Encrypt? Do you want to make your Rancher UI available publicly and secure it using 3rd party OAuth providers like Google, GitHub, Keycloak, Okta, Shibboleth, and more? Well, this is the guide for you. In this step-by-step tutorial we’ll walk through setting up the Rancher UI to use the Traefik reverse proxy, get SSL certificates using Let’s Encrypt, host our UI publicly, and then add 3rd party OAuth providers so that we can use two-factor authentication (2FA) and all of the other security features auth providers give us.
People have asked how I’ve been able to create and grow a tech YouTube channel and what my process is when planning, filming, editing, and producing content. Today we talk about just that. All my secrets unveiled as we celebrate 50,000 subscribers in this behind-the-scenes look. Thank you so much!
age is a simple, modern and secure file encryption tool, format, and Go library. It features small explicit keys, no config options, and UNIX-style composability. It is commonly used in tandem with Mozilla SOPS. It’s open source and you can read more about it on the GitHub repo. Looking for a tutorial on how to use this? Check out this video on how to use SOPS and Age for your Git Repos!
We want to get the latest release of age, so we need to look at their GitHub repo for the latest version.
AGE_LATEST_VERSION=$(curl -s "https://api.github.com/repos/FiloSottile/age/releases/latest" | grep -Po '"tag_name": "v\K[0-9.]+')
Then we’ll use curl to download the latest .tar.gz
curl -Lo age.tar.gz "https://github.com/FiloSottile/age/releases/latest/download/age-v${AGE_LATEST_VERSION}-linux-amd64.tar.gz"
Then we’ll want to extract age.tar.gz
tar xf age.tar.gz
Then we’ll move both binaries (age and age-keygen) to /usr/local/bin so we can use them
sudo mv age/age /usr/local/bin
sudo mv age/age-keygen /usr/local/bin
Then we’ll clean up our downloads and extractions
rm -rf age.tar.gz
rm -rf age
Then we can test to make sure age and age-keygen are working by running
age -version
age-keygen -version
To uninstall, it’s as simple as removing the binaries
sudo rm -rf /usr/local/bin/age
sudo rm -rf /usr/local/bin/age-keygen
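Once installed, a quick round trip shows the tool in action. This is just an illustrative sketch using age’s documented flags; the filenames and the age1... recipient string are placeholders you’d replace with your own:

```shell
# Generate a new identity; the public key (recipient) is printed to stderr
age-keygen -o key.txt

# Encrypt a file to a recipient (replace age1... with the public key
# printed by age-keygen above)
age -r age1examplerecipient -o secrets.txt.age secrets.txt

# Decrypt using the matching identity file
age -d -i key.txt -o secrets.txt secrets.txt.age
```

Keep key.txt somewhere safe; anyone with the identity file can decrypt.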
SOPS is an editor of encrypted files that supports YAML, JSON, ENV, INI and BINARY formats and encrypts with AWS KMS, GCP KMS, Azure Key Vault, age, and PGP. It’s open source and you can read more about it on the GitHub repo. Looking for a tutorial on how to use this? Check out this video on how to use SOPS and Age for your Git Repos!
We want to get the latest release of SOPS, so we need to look at their GitHub repo for the latest version.
SOPS_LATEST_VERSION=$(curl -s "https://api.github.com/repos/getsops/sops/releases/latest" | grep -Po '"tag_name": "v\K[0-9.]+')
Then we’ll use curl to download the latest .deb
curl -Lo sops.deb "https://github.com/getsops/sops/releases/download/v${SOPS_LATEST_VERSION}/sops_${SOPS_LATEST_VERSION}_amd64.deb"
Then we’ll want to install sops.deb along with any missing dependencies
sudo apt --fix-broken install ./sops.deb
Then we’ll clean up our download
rm -rf sops.deb
Then we can test to make sure sops is working by running:
sops -version
To uninstall, it’s as simple as using apt to remove it
sudo apt remove sops
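With SOPS and age installed, encrypting a secrets file looks roughly like this. A hedged sketch: the filenames are placeholders, and age1... stands in for your actual age public key:

```shell
# Encrypt a YAML file for an age recipient (replace age1... with your
# public key from age-keygen)
sops --encrypt --age age1examplerecipient secrets.yaml > secrets.enc.yaml

# Point sops at your age identity file, then decrypt
export SOPS_AGE_KEY_FILE="$HOME/key.txt"
sops --decrypt secrets.enc.yaml
```

SOPS encrypts the values but leaves the YAML keys readable, which keeps diffs in Git reviewable.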
Setting up iSCSI with TrueNAS and Windows 10 is super simple. This is an easy way to have a hard drive installed on your machine that isn’t really attached; it lives on the network.
Jekyll is a static site generator that transforms your plain text into beautiful static web sites and blogs. It can be used for a documentation site, a blog, an event site, or really any web site you like. It’s fast, secure, easy, and open source. It’s also the same site generator I use to maintain my open source documentation. Today, we’ll be installing and configuring Jekyll using the Chirpy theme. We’ll configure the site, create some pages with markdown, automatically build it with a GitHub Action, and even host it for FREE on GitHub Pages. If you don’t want to host in the cloud, I show how to host it on your own server or even in Docker.
A HUGE THANK YOU to Micro Center for Sponsoring this video!
New Customers Exclusive – Get a Free 256 GB SSD at Micro Center
Browse Micro Center’s 30,000 products in stock
Be sure to ⭐ the jekyll repo and the Chirpy theme repo
If you need help setting up your terminal on Windows, check out these two posts which will help you configure your terminal with WSL like mine
sudo apt update
sudo apt install ruby-full build-essential zlib1g-dev git
To avoid installing RubyGems packages as the root user:
If you are using bash (usually the default for most)
echo '# Install Ruby Gems to ~/gems' >> ~/.bashrc
echo 'export GEM_HOME="$HOME/gems"' >> ~/.bashrc
echo 'export PATH="$HOME/gems/bin:$PATH"' >> ~/.bashrc
source ~/.bashrc
If you are using zsh (you know if you are)
echo '# Install Ruby Gems to ~/gems' >> ~/.zshrc
echo 'export GEM_HOME="$HOME/gems"' >> ~/.zshrc
echo 'export PATH="$HOME/gems/bin:$PATH"' >> ~/.zshrc
source ~/.zshrc
Install Jekyll and bundler
gem install jekyll bundler
Visit https://github.com/cotes2020/jekyll-theme-chirpy#quick-start
After creating a site based on the template, clone your repo
git clone git@github.com:<YOUR-USER-NAME>/<YOUR-REPO-NAME>.git
then install your dependencies
cd repo-name
bundle
After making changes to your site, commit and push them up to git
git add .
git commit -m "made some changes"
git push
Serving your site
bundle exec jekyll s
Building your site in production mode
JEKYLL_ENV=production bundle exec jekyll b
This will output the production site to _site
This site already works with GitHub Actions, just push it up and check the Actions tab.
For GitLab you can see the pipeline I built for my own docs site here
Create a Dockerfile with the following
FROM nginx:stable-alpine
COPY _site /usr/share/nginx/html
Build site in production mode
JEKYLL_ENV=production bundle exec jekyll b
Then build your image:
docker build .
Jekyll uses a naming convention for pages and posts
Create a file in _posts with the format YEAR-MONTH-DAY-title.md
For example:
2022-05-23-homelab-docs.md
2022-05-24-hardware-specs.md
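The naming convention above is easy to check mechanically. Here's a small illustrative sketch (not part of Jekyll itself) that validates a filename against the YEAR-MONTH-DAY-title.md pattern:

```python
import re

# Jekyll post filenames must look like YEAR-MONTH-DAY-title.md
POST_NAME = re.compile(r"^\d{4}-\d{2}-\d{2}-[\w-]+\.(md|markdown)$")

def is_valid_post_name(filename: str) -> bool:
    """Return True if the filename follows Jekyll's _posts convention."""
    return POST_NAME.match(filename) is not None

print(is_valid_post_name("2022-05-23-homelab-docs.md"))  # True
print(is_valid_post_name("homelab-docs.md"))             # False
```

A quick check like this can catch a post that silently fails to build because its date prefix is malformed.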
Jekyll can delay posts which have the date/time set for a point in the future determined by the “front matter” section at the top of your post file. Check the date & time as well as time zone if you don’t see a post appear shortly after re-build.
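As a hedged example of that front matter, a scheduled post might start like this (field names follow common Jekyll/Chirpy conventions; the values are placeholders):

```yaml
---
title: Homelab Docs
# A future date/time here delays publishing; note the explicit time zone
date: 2022-05-23 09:00:00 -0500
categories: [homelab, documentation]
tags: [jekyll]
---
```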
Image from asset:
... which is shown in the screenshot below:
![A screenshot](/assets/screenshot.webp)
Linking to a file
... you can [download the PDF](/assets/diagram.pdf) here.
See more post formatting rules on the Jekyll site
If you need some help with markdown, check out the markdown cheat sheet
I have lots of examples in my documentation site repo. Just click on the Raw button to see the code behind the page.
For more neat syntax for the Chirpy theme check their demo page on making posts https://chirpy.cotes.page/posts/write-a-new-post/
See reference repo for files
Setting up k3s is hard. That’s why we made it easy. Today we’ll set up a High Availability K3s cluster using etcd, MetalLB, kube-vip, and Ansible. We’ll automate the entire process, giving you an easy, repeatable way to create a k3s cluster that you can run in a few minutes.
A HUGE THANKS to our sponsor, Micro Center!
New Customers Exclusive – Get a Free 240gb SSD at Micro Center: https://micro.center/1043bc
You’ll need to be sure you have Ansible installed on your machine and that it is at least version 2.11. If you don’t, you can use the install Ansible post on how to install and update it.
Second, you’ll need to provision the VMs. Here’s an easy way to create perfect Proxmox templates with cloud image and cloud init, and a video if you need it.
Next, you’ll need to fork and clone the repo. While you’re at it, give it a ⭐ too :).
git clone https://github.com/techno-tim/k3s-ansible
Next you’ll want to create a local copy of ansible.example.cfg.
cp ansible.example.cfg ansible.cfg
You’ll want to adapt this to suit your needs; however, the defaults should work without issue. If you’re looking for the old defaults, you can see them in the PR that removed the file.
Next you’ll need to install some requirements for Ansible
ansible-galaxy install -r ./collections/requirements.yml
Next, you’ll want to cd into the repo and copy the sample directory within the inventory directory. (Be sure you’re using the latest template.)
cp -R inventory/sample inventory/my-cluster
Next, edit inventory/my-cluster/hosts.ini to match your systems. DNS works here too.
[master]
192.168.30.38
192.168.30.39
192.168.30.40

[node]
192.168.30.41
192.168.30.42

[k3s_cluster:children]
master
node
Edit inventory/my-cluster/group_vars/all.yml to your liking. See comments inline.
It’s best to start with the default values in the repo. Optionally include traefik in these args if you want it installed with k3s; however, I would recommend installing it later with helm.
# change these to your liking, the only required are: --disable servicelb, --tls-san
extra_server_args: >-
  --node-taint node-role.kubernetes.io/master=true:NoSchedule
  --tls-san
  --disable servicelb
  --disable traefik
extra_agent_args: >-
I would not change these values unless you know what you are doing. It will most likely not work for you, but I’m listing it for posterity.
Note: These are for an advanced use case. There isn’t a one-size-fits-all setting for everyone and their needs; I would try using k3s with the above values before changing them. This could have undesired effects like nodes going offline, pods jumping or being removed, etc. Using these args might come at the cost of stability. Also, these will not work anymore without some modifications.
extra_server_args: "--disable servicelb --disable traefik --write-kubeconfig-mode 644 --kube-apiserver-arg default-not-ready-toleration-seconds=30 --kube-apiserver-arg default-unreachable-toleration-seconds=30 --kube-controller-arg node-monitor-period=20s --kube-controller-arg node-monitor-grace-period=20s --kubelet-arg node-status-update-frequency=5s"
extra_agent_args: "--kubelet-arg node-status-update-frequency=5s"
Start provisioning of the cluster using the following command:
ansible-playbook ./site.yml -i ./inventory/my-cluster/hosts.ini
Note: add --ask-pass --ask-become-pass if you are using password SSH login.
After deployment, the control plane will be accessible via the virtual IP address defined in inventory/my-cluster/group_vars/all.yml as apiserver_endpoint.
To get access to your Kubernetes cluster and copy your kube config locally run:
scp ansibleuser@192.168.30.38:~/.kube/config ~/.kube/config
Be sure you can ping your VIP defined in inventory/my-cluster/group_vars/all.yml as apiserver_endpoint
ping 192.168.30.222
Getting nodes
kubectl get nodes
Deploying a sample nginx workload
kubectl apply -f example/deployment.yml
Check to be sure it was deployed
kubectl describe deployment nginx
Deploying a sample nginx service with a LoadBalancer
kubectl apply -f example/service.yml
Check the service and be sure it has an IP from MetalLB as defined in inventory/my-cluster/group_vars/all.yml
kubectl describe service nginx
Visit that URL or curl it
curl http://192.168.30.80
You should see the nginx welcome page.
You can clean this up by running
kubectl delete -f example/deployment.yml
kubectl delete -f example/service.yml
This will remove k3s from all nodes. These nodes should be rebooted afterwards.
ansible-playbook ./reset.yml -i ./inventory/my-cluster/hosts.ini
See here to get the steps for installing traefik + let’s encrypt
See here for steps to deploy rancher
Be sure to see this post on how to troubleshoot common problems
Are you running Kubernetes in your homelab or in the enterprise? Do you want an easy way to manage and create Kubernetes clusters? Do you want high availability Rancher? Join me as we walk through installing Rancher on an existing high availability k3s cluster in this step-by-step tutorial. We install Rancher, configure a load balancer, install and configure helm, install cert-manager, configure Rancher, walk through the GUI, scale up our cluster, and set up a health check and liveness check! Join me, it’s easy in this straightforward guide.
Create a load balancer using nginx
nginx.conf
#uncomment this next line if you are NOT running nginx in docker
#load_module /usr/lib/nginx/modules/ngx_stream_module.so;

events {}

stream {
  upstream k3s_servers {
    server 192.168.60.20:6443;
    server 192.168.60.21:6443;
  }

  server {
    listen 6443;
    proxy_pass k3s_servers;
  }
}
On your k3s servers
export K3S_DATASTORE_ENDPOINT='mysql://username:password@tcp(database_ip_or_hostname:port)/database'
Note: It’s advised you consult the Rancher Support Matrix to get the recommended version for all Rancher dependencies.
then
curl -sfL https://get.k3s.io | sh -s - server --node-taint CriticalAddonsOnly=true:NoExecute --tls-san load_balancer_ip_or_hostname
test with
sudo k3s kubectl get nodes
To add additional servers, get the token from the first server
sudo cat /var/lib/rancher/k3s/server/node-token
then run the same command but add the token (replace SECRET with token from previous command)
curl -sfL https://get.k3s.io | sh -s - server --token=SECRET --node-taint CriticalAddonsOnly=true:NoExecute --tls-san load_balancer_ip_or_hostname
on agents / workers
To run without sudo (on the servers)
sudo chmod 644 /etc/rancher/k3s/k3s.yaml
get token
sudo cat /var/lib/rancher/k3s/server/node-token
curl -sfL https://get.k3s.io | K3S_URL=https://load_balancer_ip_or_hostname:6443 K3S_TOKEN=mynodetoken sh -
To install kubectl see this link. The kubeconfig location on the server is /etc/rancher/k3s/k3s.yaml
sudo cat /etc/rancher/k3s/k3s.yaml
Copy the contents to ~/.kube/config on your dev machine. Be sure to update server: to your load balancer IP or hostname.
Check releases for the command to use. At time of filming it’s:
kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.4/aio/deploy/recommended.yaml
dashboard.admin-user.yml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kubernetes-dashboard
dashboard.admin-user-role.yml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kubernetes-dashboard
Deploy the admin-user configuration (if you’re doing this from your dev machine, remove sudo k3s and just use kubectl):
sudo k3s kubectl create -f dashboard.admin-user.yml -f dashboard.admin-user-role.yml
get bearer token
sudo k3s kubectl -n kubernetes-dashboard create token admin-user
start dashboard locally
sudo k3s kubectl proxy
Then you can sign in at this URL using the token we got in the previous step:
http://localhost:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/
Here’s a testdeploy.yml you can use
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mysite
  labels:
    app: mysite
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mysite
  template:
    metadata:
      labels:
        app: mysite
    spec:
      containers:
      - name: mysite
        image: nginx
        ports:
        - containerPort: 80
This guide is for installing traefik 2 on k3s. If you’re not using Rancher, that’s fine, just skip to Reconfiguring k3s.
Note: There is an updated tutorial on installing traefik + cert-manager on Kubernetes here. However, if you want to store your certificates on disk, this tutorial here is perfectly fine.
It assumes you have followed:
There is a little bit of “undoing” we’ll have to do, since k3s ships with traefik and Rancher doesn’t play well with the service load balancer. So, we’ll pick up after installing these two.
Make note of your version of Rancher
Remove Rancher
helm uninstall rancher
Install Rancher (replace with the version above)
helm install rancher rancher-stable/rancher \
  --namespace cattle-system \
  --set hostname=rancher.example.com \
  --version 2.5.6
Get the version of k3s that’s currently running
k3s --version
export INSTALL_K3S_VERSION=v1.20.5+k3s1
Run the same command you ran initially to install k3s on your servers, but add --disable traefik --disable servicelb and be sure to set your version.
example (be sure you are using the right version)
export INSTALL_K3S_VERSION=v1.20.5+k3s1
curl -sfL https://get.k3s.io | sh -s - server --node-taint CriticalAddonsOnly=true:NoExecute --tls-san your.load.balancer.ip --write-kubeconfig-mode 644 --disable traefik --disable servicelb
This should reconfigure your servers. Just run it on all server nodes, not agent nodes.
You can follow Self-Hosting Your Homelab Services with SSL to get the idea of Metal LB. It’s recommended to:
It’s a good idea to do this until traefik is configured; otherwise you won’t have access to the Rancher UI.
kubectl expose deployment rancher -n cattle-system --type=LoadBalancer --name=rancher-lb --port=443
Then, you can access the Rancher UI after getting the external IP
kubectl get service/rancher-lb -n cattle-system
You can choose between creating an Ingress in Rancher or an IngressRoute with traefik. If you choose IngressRoute, see the IngressRoute section; otherwise continue on.
acme.json certificate: We will be installing this into the kube-system namespace, which already exists. If you are going to use another namespace, you will need to change it everywhere.
The dynamic configuration for Traefik is stored in a persistent volume. If you want to persist the certificate, it’s better to create one now to claim later.
To create a persistent volume, it’s better to check out Cloud Native Distributed Storage in Kubernetes with Longhorn.
If not, just create one from Rancher UI > Clusters (Choose your cluster) > Storage > Persistent Volume > Add volume
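If you prefer yaml over the UI, a minimal PersistentVolumeClaim sketch follows. This assumes the acme-json-certs claim name that appears later in this guide and your cluster’s default storage class; adjust the size and namespace to your setup:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: acme-json-certs
  namespace: kube-system
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 128Mi
```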
Add the traefik helm repo and update it
helm repo add traefik https://helm.traefik.io/traefik
helm repo update
Create traefik-config.yaml with the contents of /config/traefik-config.yaml from /config, then apply it
kubectl apply -f traefik-config.yaml
Create traefik-chart-values.yaml with the contents of /config/traefik-chart-values.yaml from /config. Update loadBalancerIP in traefik-chart-values.yaml with your MetalLB IP. Before running this, be sure you only have one default storage class set. If you are using Rancher it is Cluster > Storage > Storage Classes. Make sure only one is default.
helm install traefik traefik/traefik --namespace=kube-system --values=traefik-chart-values.yaml
More configuration values can be added from the default values.yaml in the Traefik GitHub repo.
If all went well, you should now have traefik 2 installed and configured.
To check if the Traefik instance is running correctly, see the logs:
kubectl -n kube-system logs $(kubectl -n kube-system get pods --selector "app.kubernetes.io/name=traefik" --output=name)
It should log level=info msg="Configuration loaded from flags."
To see all routers in Traefik, we can install and expose the Traefik dashboard.
First you will need htpasswd to generate a password for your dashboard.
sudo apt-get update
sudo apt-get install apache2-utils
You can then generate one using this, be sure to swap your username and password.
htpasswd -nb techno password | openssl base64
It should output:
dGVjaG5vOiRhcHIxJFRnVVJ0N2E1JFpoTFFGeDRLMk8uYVNaVWNueG41eTAKCg==
Save this in a secure place, it will be the password you use to access the traefik dashboard.
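To sanity-check what you saved, note the string is just the user:hash htpasswd entry, base64-encoded. An illustrative Python sketch (not part of the setup itself):

```python
import base64

# The output from `htpasswd -nb techno password | openssl base64`
encoded = "dGVjaG5vOiRhcHIxJFRnVVJ0N2E1JFpoTFFGeDRLMk8uYVNaVWNueG41eTAKCg=="

# Decoding recovers the htpasswd entry: "<user>:<apr1 hash>"
decoded = base64.b64decode(encoded).decode().strip()
user, hashed = decoded.split(":", 1)
print(user)                          # techno
print(hashed.startswith("$apr1$"))   # True
```

The hash itself is an Apache MD5 (apr1) digest, so the password never appears in the secret in plain text.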
Copy traefik-dashboard-secret.yaml locally and update it with your credentials.
Copy traefik-dashboard-ingressroute.yaml and update it with your hostname, then apply:
kubectl apply -f traefik-dashboard-secret.yaml
kubectl apply -f traefik-dashboard-ingressroute.yaml
This should create: traefik-dashboard-auth, traefik-dashboard-basicauth, and dashboard.
Check out the Traefik dashboard at the URL you specified earlier.
In Rancher, go to Load Balancing and add the annotation kubernetes.io/ingress.class = traefik-external. traefik-external comes from --providers.kubernetesingress.ingressclass=traefik-external in traefik-chart-values.yml. If you used something else, you will need to set your label properly. When you visit your service (e.g. https://service.example.com) you should now see a certificate issued. If it’s a staging cert, see the note about switching to production in traefik-chart-values.yaml. After changing, you will need to delete your certs in storage and reapply that file:
kubectl delete -n kube-system persistentvolumeclaims acme-json-certs
kubectl apply -f traefik-config.yaml
Copy the contents of config-ingress-route/kubernetes to your local machine, then run
kubectl apply -f kubernetes
This will create the deployment, service, and ingress.
Reflector is a Kubernetes addon designed to monitor changes to resources (secrets and configmaps) and reflect those changes to mirror resources in the same or other namespaces. Since secrets and configs are scoped to a single namespace, this helps you create and change resources in one namespace and “reflect” them to resources in other namespaces. This is especially helpful for things like certificates and configs that are needed in multiple namespaces. You can find the GitHub repo here!
This might go without saying, but you’ll want to be sure you have a working Kubernetes cluster! If you need help setting one up, check out my Ansible Playbook!
You’ll also want to be sure you have helm installed.
Then we’ll run:
helm repo add emberstack https://emberstack.github.io/helm-charts
helm repo update
helm upgrade --install reflector emberstack/reflector
This command will add the helm repo locally, then update the repo, then install reflector in your cluster.
Now that it’s installed, all we need to do is add some annotations to “reflect” our resources to other namespaces.
Let’s say you create the following Secret with the annotation below:
apiVersion: v1
kind: Secret
metadata:
  name: some-secret
  annotations:
    reflector.v1.k8s.emberstack.com/reflection-allowed: "true"
    reflector.v1.k8s.emberstack.com/reflection-allowed-namespaces: "namespace-1,namespace-2,namespace-[0-9]*"
data:
  ...
This will allow the Secret to be reflected and mirror it to namespace-1, namespace-2, and all other namespaces that match the pattern namespace-[0-9]*.
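The allowed-namespaces annotation accepts both literal names and regular-expression patterns. A quick illustrative sketch of how the namespace-[0-9]* pattern behaves (this is plain regex matching for demonstration, not reflector's actual matching code):

```python
import re

pattern = "namespace-[0-9]*"

def matches(namespace: str) -> bool:
    # fullmatch: the entire namespace name must match the pattern
    return re.fullmatch(pattern, namespace) is not None

print(matches("namespace-7"))    # True
print(matches("namespace-42"))   # True
print(matches("kube-system"))    # False
```

Note that because * means "zero or more", this pattern also matches a bare "namespace-", so keep your patterns as specific as you can.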
ConfigMaps are just as easy! Let’s say you have a ConfigMap with the following contents:
apiVersion: v1
kind: ConfigMap
metadata:
  name: source-config-map
  annotations:
    reflector.v1.k8s.emberstack.com/reflection-allowed: "true"
    reflector.v1.k8s.emberstack.com/reflection-allowed-namespaces: "namespace-1,namespace-2,namespace-[0-9]*"
data:
  ...
This will allow the ConfigMap to be reflected and mirror the ConfigMap to namespace-1, namespace-2, and all other namespaces that match the pattern namespace-[0-9]*.
The real reason I brought this chart into my cluster was support for cert-manager certificates. There are many cases where I need to create the same certificate in multiple namespaces, and rather than create them manually, I have reflector create them for me.
apiVersion: cert-manager.io/v1
kind: Certificate
...
spec:
  secretTemplate:
    annotations:
      reflector.v1.k8s.emberstack.com/reflection-allowed: "true"
      reflector.v1.k8s.emberstack.com/reflection-allowed-namespaces: "namespace-1,namespace-2,namespace-[0-9]*"
  ...
This will allow the Certificate’s secret to be reflected and mirror it to namespace-1, namespace-2, and all other namespaces that match the pattern namespace-[0-9]*.
The benefit of doing it this way with cert-manager is that when your certificates are updated with something like Let’s Encrypt, all certificates you reflect are also updated! Of course, you will only want to limit your reflections to other namespaces you trust. If you’d like to check out cert-manager, see my post on how to install traefik and cert-manager!
Ok, I think I made it just in time! A post on reflector for Kubernetes! https://t.co/IOYIhTk6g5 #homelab
— Techno Tim (@TechnoTimLive) April 27, 2023
In my quest to make my services highly available, I decided to use keepalived. keepalived is a framework for both load balancing and high availability that implements VRRP. This is a protocol that you see on some routers, and it has been implemented in keepalived. It creates a Virtual IP (or VIP, or floating IP) that acts as a gateway to route traffic to all participating hosts. This VIP can provide a high availability setup and fail over to another host in the event that one is down. In this video, we’ll set up and configure keepalived, we’ll test our configuration to make sure it’s working, and we’ll also talk about some advanced use cases like load balancing.
sudo apt update
sudo apt install keepalived
sudo apt install libipset13
Find your IP
ip a
edit your config
sudo nano /etc/keepalived/keepalived.conf
First node
vrrp_instance VI_1 {
    state MASTER
    interface ens18
    virtual_router_id 55
    priority 150
    advert_int 1
    unicast_src_ip 192.168.30.31
    unicast_peer {
        192.168.30.32
    }

    authentication {
        auth_type PASS
        auth_pass C3P9K9gc
    }

    virtual_ipaddress {
        192.168.30.100/24
    }
}
Second node
vrrp_instance VI_1 {
    state BACKUP
    interface ens18
    virtual_router_id 55
    priority 100
    advert_int 1
    unicast_src_ip 192.168.30.32
    unicast_peer {
        192.168.30.31
    }

    authentication {
        auth_type PASS
        auth_pass C3P9K9gc
    }

    virtual_ipaddress {
        192.168.30.100/24
    }
}
Start and enable the service
sudo systemctl enable --now keepalived.service
stopping the service
sudo systemctl stop keepalived.service
get the status
sudo systemctl status keepalived.service
Create an index.html to mount
nano /home/user/docker_volumes/nginx/index.html
<!DOCTYPE html>
<html lang="en">
<head>
  <meta charset="UTF-8">
  <meta name="viewport" content="width=device-width, initial-scale=1.0">
  <meta http-equiv="X-UA-Compatible" content="ie=edge">
  <title>Hello From Primary Node</title>
  <style>
    h1 {
      font-weight: lighter;
      font-family: Arial, Helvetica, sans-serif;
    }
  </style>
</head>
<body>

  <h1>
    Hello World 1
  </h1>

</body>
</html>
install nginx via docker
docker run --name some-nginx -v /home/user/docker_volumes/nginx:/usr/share/nginx/html:ro -d -p 8080:80 nginx
visit your VIP on port 8080
In this video we covered the PiHole use case. After setting this up, be sure to check out the tutorial on Gravity Sync.
Grafana and Prometheus are a powerful monitoring solution. They allow you to visualize, query, and alert on metrics no matter where they are stored. Today, we’ll install and configure Prometheus and Grafana in Kubernetes using kube-prometheus-stack. By the end of this tutorial you’ll be able to observe and visualize your entire Kubernetes cluster with Grafana and Prometheus.
A HUGE thanks to Datree for sponsoring this video!
Combat misconfigurations. Empower engineers.
If you need to install a new kubernetes cluster you can use my Ansible Playbook to install one.
If you want to get metrics from your k3s servers, you will need to provide some additional flags to k3s.
Additional k3s flags used in the video:
extra_server_args: "--no-deploy servicelb --no-deploy traefik --kube-controller-manager-arg bind-address=0.0.0.0 --kube-proxy-arg metrics-bind-address=0.0.0.0 --kube-scheduler-arg bind-address=0.0.0.0 --etcd-expose-metrics true --kubelet-arg containerd=/run/k3s/containerd/containerd.sock"
curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3
chmod 700 get_helm.sh
./get_helm.sh
Install helm
The helm chart we will be using to install Grafana, Prometheus, and Alert Manager is kube-prometheus-stack
Verify you can communicate with your cluster
kubectl get nodes
NAME     STATUS   ROLES                       AGE   VERSION
k3s-01   Ready    control-plane,etcd,master   10h   v1.23.4+k3s1
k3s-02   Ready    control-plane,etcd,master   10h   v1.23.4+k3s1
k3s-03   Ready    control-plane,etcd,master   10h   v1.23.4+k3s1
k3s-04   Ready    <none>                      10h   v1.23.4+k3s1
k3s-05   Ready    <none>                      10h   v1.23.4+k3s1
Verify helm is installed
helm version
version.BuildInfo{Version:"v3.8.0", GitCommit:"d14138609b01886f544b2025f5000351c9eb092e", GitTreeState:"clean", GoVersion:"go1.17.5"}
Add helm repo
helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
Update repo
helm repo update
Create a Kubernetes Namespace
kubectl create namespace monitoring
Echo username and password to a file
echo -n 'adminuser' > ./admin-user # change your username
echo -n 'p@ssword!' > ./admin-password # change your password
Create a Kubernetes Secret
kubectl create secret generic grafana-admin-credentials --from-file=./admin-user --from-file=admin-password -n monitoring
You should see
secret/grafana-admin-credentials created
Verify your secret
kubectl describe secret -n monitoring grafana-admin-credentials
You should see
Name:         grafana-admin-credentials
Namespace:    monitoring
Labels:       <none>
Annotations:  <none>

Type:  Opaque

Data
====
admin-password:  9 bytes
admin-user:      9 bytes
Verify the username
kubectl get secret -n monitoring grafana-admin-credentials -o jsonpath="{.data.admin-user}" | base64 --decode
You should see
adminuser%
Verify password
kubectl get secret -n monitoring grafana-admin-credentials -o jsonpath="{.data.admin-password}" | base64 --decode

p@ssword!%
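The trailing % is just zsh marking output that lacks a final newline; it is not part of the password. That’s also why the files were written with echo -n. A quick sketch of the difference (the value here is just the example username from above):

```shell
# 'printf' (like 'echo -n') keeps the trailing newline out of the encoded value;
# plain 'echo' bakes a newline into it, which would break the stored secret.
printf '%s' 'adminuser' | base64   # YWRtaW51c2Vy
echo 'adminuser' | base64          # YWRtaW51c2VyCg== (newline encoded too)
```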
Remove username and password file from filesystem
rm admin-user && rm admin-password
Create a values file to hold our helm values
nano values.yaml
Paste in the values from here.
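If you’re assembling your own values.yaml, the part that wires up the secret we created above looks roughly like this (a sketch based on the grafana subchart’s admin options — check the chart’s own values file for the authoritative keys):

```yaml
grafana:
  admin:
    existingSecret: "grafana-admin-credentials"
    userKey: admin-user
    passwordKey: admin-password
```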
Create our kube-prometheus-stack
helm install -n monitoring prometheus prometheus-community/kube-prometheus-stack -f values.yaml
Port Forwarding Grafana UI
(be sure to change the pod name to one that matches yours)
kubectl port-forward -n monitoring grafana-fcc55c57f-fhjfr 52222:3000
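Since the pod name is generated, you can look it up with a label selector instead of copying it by hand (a sketch; the label below is the default set by the Grafana subchart, so verify it against your deployment):

```shell
# find the Grafana pod by label, then forward local port 52222 to it
POD=$(kubectl get pods -n monitoring -l app.kubernetes.io/name=grafana -o jsonpath='{.items[0].metadata.name}')
kubectl port-forward -n monitoring "$POD" 52222:3000
```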
Visit Grafana
If you make changes to your values.yaml, you can deploy those changes by running
helm upgrade -n monitoring prometheus prometheus-community/kube-prometheus-stack -f values.yaml
🛍️ Check out the new Merch Shop at https://l.technotim.live/shop
⚙️ See all the hardware I recommend at https://l.technotim.live/gear
🚀 Don’t forget to check out the 🚀Launchpad repo with all of the quick start source files
Traefik, cert-manager, Cloudflare, and Let’s Encrypt are a winning combination when it comes to securing your services with certificates in Kubernetes. Today, we’ll install and configure Traefik, the cloud native proxy and load balancer, as our Kubernetes Ingress Controller. We’ll then install and configure cert-manager to manage certificates for our cluster. We’ll set up Let’s Encrypt as our ClusterIssuer so that cert-manager can automatically provision TLS certificates, and even wildcard certificates using the Cloudflare DNS challenge, absolutely free. We’ll walk through all of this, step by step, so you can help secure your cluster today.
A HUGE thanks to Datree for sponsoring this video!
Combat misconfigurations. Empower engineers.
If you need to install a new Kubernetes cluster, you can use my Ansible Playbook to install one.
You can find all of the resources for this tutorial here
curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3
chmod 700 get_helm.sh
./get_helm.sh
For other ways to install Helm see the installation docs here
Verify you can communicate with your cluster
kubectl get nodes
You should see
NAME     STATUS   ROLES                       AGE   VERSION
k3s-01   Ready    control-plane,etcd,master   10h   v1.23.4+k3s1
k3s-02   Ready    control-plane,etcd,master   10h   v1.23.4+k3s1
k3s-03   Ready    control-plane,etcd,master   10h   v1.23.4+k3s1
k3s-04   Ready    <none>                      10h   v1.23.4+k3s1
k3s-05   Ready    <none>                      10h   v1.23.4+k3s1
Verify helm is installed
helm version
You should see
version.BuildInfo{Version:"v3.8.0", GitCommit:"d14138609b01886f544b2025f5000351c9eb092e", GitTreeState:"clean", GoVersion:"go1.17.5"}
These resources are in the launchpad/kubernetes/traefik-cert-manager/traefik/ folder.
Add repo
helm repo add traefik https://helm.traefik.io/traefik
Update repo
helm repo update
Create our namespace
kubectl create namespace traefik
Get all namespaces
kubectl get namespaces
We should see
NAME              STATUS   AGE
default           Active   21h
kube-node-lease   Active   21h
kube-public       Active   21h
kube-system       Active   21h
metallb-system    Active   21h
traefik           Active   12s
Install traefik
helm install --namespace=traefik traefik traefik/traefik --values=values.yaml
Check the status of the traefik ingress controller service
kubectl get svc --all-namespaces -o wide
We should see traefik with the specified IP
NAMESPACE        NAME              TYPE           CLUSTER-IP      EXTERNAL-IP     PORT(S)                      AGE   SELECTOR
default          kubernetes        ClusterIP      10.43.0.1       <none>          443/TCP                      16h   <none>
kube-system      kube-dns          ClusterIP      10.43.0.10      <none>          53/UDP,53/TCP,9153/TCP       16h   k8s-app=kube-dns
kube-system      metrics-server    ClusterIP      10.43.182.24    <none>          443/TCP                      16h   k8s-app=metrics-server
metallb-system   webhook-service   ClusterIP      10.43.205.142   <none>          443/TCP                      16h   component=controller
traefik          traefik           LoadBalancer   10.43.156.161   192.168.30.80   80:30358/TCP,443:31265/TCP   22s   app.kubernetes.io/instance=traefik,app.kubernetes.io/name=traefik
Get all pods in the traefik namespace
kubectl get pods --namespace traefik
We should see pods in the traefik namespace
NAME                       READY   STATUS    RESTARTS   AGE
traefik-76474c4d47-l5z74   1/1     Running   0          11m
traefik-76474c4d47-xb282   1/1     Running   0          11m
traefik-76474c4d47-xx5lw   1/1     Running   0          11m
Apply middleware
kubectl apply -f default-headers.yaml
Get middleware
kubectl get middleware
We should see our headers
NAME              AGE
default-headers   25s
Install htpasswd
sudo apt-get update
sudo apt-get install apache2-utils
Generate a credential / password that’s base64 encoded
htpasswd -nb techno password | openssl base64
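For reference, secret-dashboard.yaml holds that base64 output under a users key, roughly like this (a sketch — the real file is in the launchpad repo; the secret name here is a placeholder and must match whatever your middleware.yaml references):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: traefik-dashboard-auth   # placeholder name; match the basicAuth middleware reference
  namespace: traefik
type: Opaque
data:
  users: <paste the base64 output of the htpasswd command here>
```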
Apply secret
kubectl apply -f secret-dashboard.yaml
Get secret
kubectl get secrets --namespace traefik
Apply middleware
kubectl apply -f middleware.yaml
Apply dashboard
kubectl apply -f ingress.yaml
Visit https://traefik.local.example.com
These resources are in the launchpad/kubernetes/traefik-cert-manager/nginx/ folder.
kubectl apply -f deployment.yaml
kubectl apply -f service.yaml
kubectl apply -f ingress.yaml
Or you can apply an entire folder at once!
kubectl apply -f nginx
These resources are in the launchpad/kubernetes/traefik-cert-manager/cert-manager/ folder.
Add repo
helm repo add jetstack https://charts.jetstack.io
Update it
helm repo update
Create our namespace
kubectl create namespace cert-manager
Get all namespaces
kubectl get namespaces
We should see
NAME              STATUS   AGE
cert-manager      Active   12s
default           Active   21h
kube-node-lease   Active   21h
kube-public       Active   21h
kube-system       Active   21h
metallb-system    Active   21h
traefik           Active   4h35m
Apply CRDs (Note: be sure to change this to the latest version of cert-manager)
kubectl apply -f https://github.com/cert-manager/cert-manager/releases/download/v1.9.1/cert-manager.crds.yaml
Install with helm
helm install cert-manager jetstack/cert-manager --namespace cert-manager --values=values.yaml --version v1.9.1
Apply secrets
Be sure to generate the correct token if using Cloudflare. This is an API Token, not a Global API Key.
From the issuers folder:

kubectl apply -f secret-cf-token.yaml
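For reference, secret-cf-token.yaml is just a Secret holding the Cloudflare API Token. A sketch (the secret name and key are placeholders — they must match what your ClusterIssuer references, and the real file lives in the launchpad repo):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: cloudflare-token-secret   # placeholder; match the ClusterIssuer's apiTokenSecretRef
  namespace: cert-manager
type: Opaque
stringData:
  cloudflare-token: <your-cloudflare-api-token>
```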
Apply staging ClusterIssuer
From the issuers folder:

kubectl apply -f letsencrypt-staging.yaml
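A staging ClusterIssuer with a Cloudflare DNS01 solver looks roughly like this (a sketch — the email, secret name, and key are placeholders; the real file is in the launchpad repo):

```yaml
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-staging
spec:
  acme:
    # Let's Encrypt staging endpoint -- issues untrusted test certificates
    server: https://acme-staging-v02.api.letsencrypt.org/directory
    email: you@example.com          # placeholder
    privateKeySecretRef:
      name: letsencrypt-staging
    solvers:
      - dns01:
          cloudflare:
            apiTokenSecretRef:
              name: cloudflare-token-secret   # the Secret created above
              key: cloudflare-token
```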
Create certs
From the certificates/staging folder:

kubectl apply -f local-example-com.yaml
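The Certificate resource itself is short. A sketch (domain names and the target secret name are placeholders — the real file is in the launchpad repo):

```yaml
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: local-example-com
  namespace: default
spec:
  secretName: local-example-com-tls   # where cert-manager stores the issued cert
  issuerRef:
    name: letsencrypt-staging
    kind: ClusterIssuer
  dnsNames:
    - "local.example.com"
    - "*.local.example.com"           # wildcard works because we use DNS01
```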
Check the logs
(be sure to change the pod name to one that matches yours)
kubectl logs -n cert-manager -f cert-manager-877fd747c-fjwhp
Get challenges
kubectl get challenges
Get more details
kubectl describe order local-technotim-live-frm2z-1836084675
Apply production ClusterIssuer
From the issuers folder:

kubectl apply -f letsencrypt-production.yaml
From the certificates/production folder:

kubectl apply -f local-example-com.yaml
If you’re using cert-manager
to manage certificates, you might want to check out this post on how to mirror your Kubernetes configs, secrets, and resources to other namespaces. This is helpful when you need to share you secrets / certificates across namespaces!
Internet speed tests are full of junk, ads, tracking, and some even contain deprecated plug-ins. Who needs this when we can self-host an open source one? LibreSpeed is a lightweight speed test implemented in JavaScript using XHR requests and web workers. It’s fast, feature rich, and supports every modern browser. Say goodbye to those other speed tests and host your own, containerized in Docker or Kubernetes, today!
LocalSend is an open source application that securely transfers files between devices without the internet. It’s cross platform meaning that it’s available for Windows, Mac, Linux, iOS (iPhone, iPad), and Android devices. This is a great alternative to AirDrop or QuickSend and can send and receive files to other devices without a 3rd party service like Google Drive.
Disclosures:
Don’t forget to ⭐ localsend on GitHub!
From a terminal:
Using Winget
winget install localsend
Using Chocolatey
choco install localsend
Install from GitHub https://github.com/localsend/localsend/releases
App Store recommended for most users.
App Store recommended for most users.
Install using an APK https://github.com/localsend/localsend/releases
Package Manager:
Install with terminal.
Ubuntu / Debian
Download deb file https://github.com/localsend/localsend/releases
cd ~/Downloads # change to download folder
sudo dpkg -i LocalSend-1.14.0-linux-x86-64.deb # change version to match download
sudo apt install -f # install missing dependencies
sudo dpkg -i LocalSend-1.14.0-linux-x86-64.deb # change version to match download
Flathub
flatpak install flathub org.localsend.localsend_app
flatpak run org.localsend.localsend_app
AUR
yay -S localsend-bin
Nix
nix-shell -p localsend
pkgs.localsend # Config
Package Managers:
Install with terminal.
Homebrew
brew tap localsend/localsend
brew install localsend
Nix
nix-shell -p localsend
pkgs.localsend # Config
Binaries:
Download for offline usage. https://github.com/localsend/localsend/releases
App Store recommended for most users.
See all releases https://github.com/localsend/localsend/releases
I found a free and open source alternative to AirDrop called LocalSend! It works with Windows, macOS, Android, and even Linux! Join me as I test it on every platform and see if I can transfer a file to every platform using this app!
— Techno Tim (@TechnoTimLive) March 9, 2024
🤝 Support me and help keep this site ad-free!
Storage in Kubernetes is hard, complicated, and messy. Configuring volumes, mounts, and persistent volume claims and getting it right can be a challenge. It’s also challenging to manage that storage and replicate it across all your Kubernetes clusters, and it’s been very challenging to do this on bare metal, outside of a cloud provider. That’s where Longhorn comes in. Longhorn is an open source, CNCF distributed block storage system for Kubernetes. It comes with a UI, backups, snapshots, and cluster disaster recovery, and it does all this with or without Rancher. Rancher is NOT a requirement.
There are some additional dependencies you might want to install on target nodes prior to configuring Longhorn:
sudo apt update
sudo apt install nfs-common open-iscsi
# start the service now and on reboot
sudo systemctl enable open-iscsi --now
See the app catalog within Rancher
kubectl apply -f https://raw.githubusercontent.com/longhorn/longhorn/master/deploy/longhorn.yaml
kubectl get pods \
--namespace longhorn-system \
--watch
See more at https://longhorn.io/docs/1.0.0/deploy/install/install-with-kubectl
helm3
kubectl create namespace longhorn-system
helm install longhorn ./longhorn/chart/ --namespace longhorn-system
kubectl -n longhorn-system get pod
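Once the pods are running, workloads request Longhorn-backed volumes through the longhorn StorageClass the install creates. A minimal sketch of a PersistentVolumeClaim (the claim name and size here are just illustrative):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: example-pvc          # illustrative name
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: longhorn  # the StorageClass installed by Longhorn
  resources:
    requests:
      storage: 2Gi
```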
This is not required, nor do I taint nodes anymore. I allow Longhorn storage to use any available space on any node that is not running etcd / control plane. You can simply skip this step and it will work like this. If you’re still convinced you need dedicated nodes, it’s much easier doing it in the Longhorn UI after a node joins the cluster than with taints.
I ended up tainting my storage nodes using this command
kubectl taint nodes luna-01 luna-02 luna-03 luna-04 CriticalAddonsOnly=true:NoExecute
kubectl taint nodes luna-01 luna-02 luna-03 luna-04 StorageOnly=true:NoExecute
Then apply that toleration to Longhorn in the settings:
StorageOnly=true:NoExecute;CriticalAddonsOnly=true:NoExecute
This ensures that the storage nodes won’t take on any general workloads and still allows Longhorn to use them as storage.
I’ve been running a few clusters in my HomeLab over the past few years, but they have always been virtualized inside of Proxmox. That all changed today when I decided to run my Kubernetes cluster on these 3 low power, small, and efficient Intel NUCs.
I built a lower power, efficient, and near silent server cluster! Although this cluster is small and efficient, it’s still powerful enough to run a high availability Kubernetes cluster with many services running in High Availability mode! There are so many options with running a small cluster like this, the possibilities are endless!
A HUGE thanks to Datree for sponsoring this video!
Combat misconfigurations. Empower engineers. https://www.datree.io
See the whole kit here - https://kit.co/TechnoTim/efficient-low-power-powerful-virtualization-server
(Affiliate links are included in this description. I may receive a small commission at no cost to you.)
These Intel NUCs are probably my favorite small form factor devices. They are only 4x4 inches and pack quite a punch. That’s because these NUCs have anywhere from a Core i3, to a Core i5, to a Core i7 processor in them. This one has a Core i7 with 4 cores and 8 threads, a base clock speed of 2.8 GHz, and can turbo boost up to 4.70 GHz. It even has Quick Sync on this chip so I can offload encoding if I need to. You can check out the specs here.
I maxed out the RAM on each machine, giving it 64 GB of DDR4. That should be just enough to run some of my workloads, and it’s another reason I chose not to run a hypervisor on these machines: I wanted to conserve resources. I added a 1 TB Samsung NVMe drive for the OS and to run all of my workloads, and then a second SSD for additional Kubernetes storage that will be replicated across all 3 devices. I may expand this in the future; however, this was one of many SSDs I had laying around.
So once I had all of the hardware buttoned up, I had to decide where exactly I was going to put these devices. Now I could have put them on my workbench or my desk, but I have a server rack in my basement that I wanted to take advantage of. I have a few general purpose shelves, but I thought these NUCs deserved a little better home than that. I wanted a rack mount system that would hold 3 NUCs, hold them securely in place, and even give me some cable management, and that’s when I found a small company that makes all kinds of small form factor rack mount systems. Mk1 Manufacturing makes rack mount kits for small form factor devices like Mac Studios, Lenovo ThinkStations, Mac Minis, and of course Intel NUCs. The nice part about these racks is that they are made here in the US. I purchased one for my Intel NUCs and quickly rack mounted all 3. It was super easy to mount these, and they even thought about the cable management for both power and networking.
Intel NUC, 1U Rack Mount System
I bet you’re wondering how I remote control these devices, because I wondered that too. Well, if you remember from a previous video, I picked up a PiKVM and I was able to attach multiple devices to it using an HDMI switch. The current switch lets me connect up to 4 devices, but I am going to try to expand to 8 later on. From the PiKVM I can even power on these devices using Wake-on-LAN, which sends a magic packet to wake them up. And in case Wake-on-LAN doesn’t work, I can use my UniFi Smart PDU Pro to toggle the power off then on to force them to wake up.
Intel NUC, 1U Rack Mount System
After getting this all hooked up and on my network, I had to figure out how I was going to get an operating system on them. I ended up using MAAS, or Metal as a Service, to boot and provision these machines. I chose to go with Ubuntu Server because I like Ubuntu and the rest of my infrastructure runs it, which makes it really easy to manage. I was sure to reserve a static IP for these devices as well as create a DNS entry for them.
Now, for the most difficult part of this all: installing Kubernetes. I bet you’re asking, why Kubernetes?
Because.
To install Kubernetes I could do it one of a million different ways, and on top of that I have many distributions to choose from. I ended up going with k3s because I like how lightweight it is, as well as the active community behind it.
As far as installation goes, I could spend 20+ hours doing it manually, but I’ve already created an Ansible playbook that can do this all for me. It does everything I need to give me a high availability Kubernetes cluster, with both an HA Kubernetes API as well as an HA service load balancer. With three nodes I can lose 1 node and everything will still function normally.
After setting my IP address it was off to the races. 🚀
I sat back and watched the automation run for about 3 minutes, and shortly after that I had a highly available Kubernetes cluster to run my workloads. If you’d like to do the same thing, I will leave a link to the documentation and the video where I walk you through all of this.
Running the k3s-ansible playbook
So once I had Kubernetes installed, I copied my kube config file locally so I could communicate with the cluster. I was able to ping the Kubernetes API, which is really a VIP, and it responded. I then asked Kubernetes to show me all of my nodes and there they were, all three of them. So what was there to do next? Next I wanted to install some workloads to test out HA. Now typically I would install Traefik as my reverse proxy, cert-manager to manage my certificates, and Loki, Grafana, and Prometheus for logging, monitoring, and visualization; however, I just wanted to test out a few things before I go all in.
So I decided to install a simple web server that runs nginx. This web site is just a tiny nginx web server that serves a single page showing its hostname, IP address, port, and a few other things. This was going to be a good way to test high availability. My plan was to create this workload with 3 replicas and then pull the plug on one of the nodes and make sure that both Kubernetes and this web page stayed up.
So, that’s what I did.
I created a Kubernetes deployment for this container and set the replicas to 3. This ensures that 3 are running, but I wanted to be sure that they were spread out across all three nodes. I did this by setting a topology spread constraint on hostname. This makes sure that no more than one pod is ever scheduled on the same node, so that I can ensure HA.
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  selector:
    matchLabels:
      app: nginx
  replicas: 3
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginxdemos/hello
          ports:
            - containerPort: 80
      topologySpreadConstraints:
        - maxSkew: 1
          topologyKey: kubernetes.io/hostname
          whenUnsatisfiable: DoNotSchedule
          labelSelector:
            matchLabels:
              app: nginx
So once this was set, I then deployed the Kubernetes deployment and could see that I had 3 pods, all spread across 3 nodes, awesome.
But how do I actually get to this web page? Well, remember how I mentioned that I typically use Traefik as my reverse proxy? That’s where it would come in handy. It would allow me to expose multiple services on the same IP, but since I don’t have it installed, I will just expose it on the MetalLB load balancer that comes with my playbook.
To open up an IP on the virtual load balancer, all I have to do is create a service with a type of LoadBalancer. This will expose the service on one of the MetalLB IP addresses so that we can see our web page.
---
apiVersion: v1
kind: Service
metadata:
  name: nginx
spec:
  ipFamilyPolicy: PreferDualStack
  selector:
    app: nginx
  ports:
    - port: 80
      targetPort: 80
  type: LoadBalancer
After deploying that service, we can check it to see which IP address it was assigned. Once we have that IP address, we can get to the web page.
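That lookup is a one-liner; the assigned address shows up in the EXTERNAL-IP column:

```shell
# the EXTERNAL-IP column shows the MetalLB address assigned to the service
kubectl get svc nginx
```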
After that, we see our web page, so we know it’s working. And it should be HA because we have one of these pods running on each of the servers!
Now we need to introduce some chaos.
No, no no no, not that much chaos, just simply removing one of the nodes.
So, before doing that, let’s ping our Kubernetes API to be sure that it’s up, and you can see it’s up and responding. Next let’s open the web page and keep refreshing it.
Now we can introduce chaos by shutting down one of the nodes. I can pick any one that I like, but let’s go with node 2. Let’s also ping node 2 so you can see it go down.
So we shut down node 2 and wait. We can see that it’s down, but our Kubernetes API is still up. We can run kubectl get nodes and see all of our nodes, and if we refresh our web page we can see that the web site never went down. Now if we shut down one more node, we will lose access to our Kubernetes API and web page, so let’s shut down node 1. And as you can see we can’t get to it anymore, but if we bring up node 2 and leave node 1 down, we can.
Testing my HA NGINX install, you can see that node 2 is down, but the Kubernetes API still responds and the web page is still up!
Awesome, so now we have an HA cluster, but what can we do with it? Well, I mentioned a few things, but you can do some awesome home Kubernetes stuff like installing Home Assistant, game servers, web sites, or many other workloads. Just remember that not all workloads can be HA out of the box; they have to be stateless like my nginx container, meaning they have no state like storage mounts or state in memory, and instead get their state from outside of the container, like an external database.
This diagram explains how stateless Kubernetes apps should be architected
I bet you’re wondering how much power these three devices use. Well, I wondered the same thing, and I checked my UniFi PDU to be sure. I let all three NUCs run a few workloads and kept them on for a few hours, and each of them uses about 20 watts of power. Keep in mind that my PDU only shows average power over time, so I think they are using anywhere from 15-25 watts. Is that as good as a Raspberry Pi? Well, no, but what I do get is an x86 processor with 8 threads, lots of high speed storage, 2.5 Gigabit networking, AES instructions, and even a GPU with Quick Sync if I wanted to do any kind of transcoding. Also, it has enough compute to run anything I can throw at it, because remember, it’s a Core i7.
Each Intel NUC only uses around 9 watts of power on average, with an idle k3s cluster!
So, what do I think of these low power, small, yet powerful devices? Well, I think they are pretty awesome, if you couldn’t tell by the fact that I bought 3. You can find these devices relatively cheap if you go with a model from a previous year. Is it as cheap as picking up older small form factor desktops? It’s not, but that might be a perfectly fine option for you if you want to save some money. I wanted 3 devices that I could keep around for years to come; not to mention that I still have my first NUC from almost 9 years ago. These little devices are great for servers, especially if you are considering clustering them. And rack mounting them is a great solution if you’re thinking about picking up a few.
Well, I learned a lot today about low power servers, Intel NUCs, and clustered Kubernetes, and I hope you learned something too. And remember, if you found anything in this video helpful, don’t forget to share, like, and subscribe. Thanks for reading and watching!
What a week! I built a lower power, efficient, and near silent server cluster! Although this cluster is small and efficient, it's still powerful enough to run many services running in High Availability mode!
— Techno Tim (@TechnoTimLive) April 22, 2023
Have you been thinking about building a low power, efficient, small form factor, but performant Proxmox server? This is the perfect home server build for anyone who wants to virtualize some machines while still staying green. This tiny, silent, and efficient build is one that won’t drive up your electricity bill either.
A HUGE thanks to Micro Center for sponsoring this video!
New Customers Exclusive – Get a Free 240gb SSD at Micro Center: https://micro.center/4e48d4
See the kit here: https://kit.co/technotim/efficient-low-power-powerful-virtualization-server
What if I told you that this little machine is the perfect Proxmox virtualization server? And what if I told you I crammed an Intel Core i5, 64 GB of RAM, a 1 TB NVMe SSD, and another 1 TB SSD all in this tiny little box that’s dead silent without any fans? And what if I told you it can run Proxmox, host a pfSense router, run TrueNAS with a TB of storage, run Ubuntu Server with Portainer running a few Docker containers, run Windows 10 or Windows 11, and run Ubuntu Desktop and pass through all of the hardware so I can use this server as a desktop, all while running a Plex server and doing hardware transcoding with 3 4K streams?
No Way!
Yeah, I thought you’d say that.
You might have heard of Protectli before. They’re known for making really great appliances for many open source software distributions. Most people think of Protectli devices as the perfect device for a router like pfSense, and that makes sense considering that their devices come with anywhere from 2 to 6 network ports. But their devices can be used to run almost any software imaginable, from Linux, to Windows, to a dedicated firewall, to even a virtualization host or hypervisor.
How is that possible?
A HUGE thanks to Protectli for sending this device for me to test!
See the whole kit here! - https://kit.co/TechnoTim/building-a-low-power-all-in-one-silent-server
That’s possible because of the hardware these devices ship with. Protectli sent me a VP4650 to help with some of my HomeLab projects, and this device is a beast. The VP4650 comes with an Intel Core i5 quad core CPU with hyper-threading that is rated at 1.6 GHz and can turbo boost up to 4.2 GHz. The nice thing about this CPU is that it supports VT-x for virtualization and VT-d for IOMMU, so I can pass devices through to the guest. And because it’s an Intel x86 processor, it comes with AES-NI support, which is super nice for encryption / decryption for VPN or TLS.
It supports up to 64GB of RAM which is plenty for what I will be using it for, but you can scale back if you need, all the way down to 4 GB.
It comes with 6 intel 2.5 gigabit NIC ports giving you enough throughput for most of your networking needs.
Protectli VP4650 has 6 - 2.5 Gb/s network ports!
It also ships with a 16GB eMMC module on board and many options for storage. Also, because this machine has an NVMe slot and a SATA port, you can mix and match your storage to fit your needs. I opted for a 1 TB Samsung M.2 NVMe and a 1 TB Samsung EVO drive.
You also get the choice of adding WiFi modules and 4G LTE modems; however, I decided not to on this device because they also sent a lower powered VP2420 that has 4 network ports along with WiFi and LTE modules to help me build the ultimate router, which you’ll be seeing in a future video.
Protectli Ultimate Router (Coming soon!)
If you noticed, in everything I listed you didn’t hear anything about fans. That’s not something that’s obvious from the specs on paper, but taking one look at this device you can see the huge heatsink that passively cools the entire device. It definitely looks like a grill, but I promise you can’t cook anything on here.
Protectli devices come with plenty of options to connect all of your other devices
As far as connectivity goes, you have plenty of options for connecting devices, from USB 2.0, 3.0, and USB C, to HDMI, to DisplayPort, and even a micro USB port for console access.
You can choose between AMI BIOS and an open source BIOS called “coreboot”
One thing that I like about these devices is that you have a choice in which firmware to use. You can use a standard AMI BIOS that works great, or you can use the coreboot BIOS, which is a bare bones open source BIOS that lets you customize some cool features. For instance, if you flash their devices with coreboot, you can boot to the network and download and install many different operating systems from the network. This is a neat feature that I welcome, and it saves you the hassle of loading up that Ventoy USB disk with a new ISO. I did, however, opt for the AMI BIOS because I had a few issues with coreboot related to using my Ventoy USB disk. But that’s the nice thing about this BIOS being open source: it will get better and more secure over time with more eyes looking at it and more engineers contributing to it.
So what did I do with all of this hardware? The better question is what didn’t I do?
I knew that I wanted this build to be a completely silent hypervisor, and I knew that it was going to run Proxmox.
The first decision I had to make was where I was going to install Proxmox. Remember, I have the choice between the NVMe drive, the SATA drive, and the eMMC module. Turns out the eMMC module isn’t really an option because Proxmox won’t let you install it there without some hacks, and I didn’t want to hack this device, so I decided to install it on the NVMe drive and use the rest of the partition for virtual machines. Typically I would have installed the OS on the slower drive and saved the NVMe for VMs, but I have other plans for that. Installing Proxmox was straightforward, just like any other Proxmox installation. After it was installed, I configured IOMMU so I can pass devices through to guest machines, plus everything else I cover in my First 11 Things on Proxmox video. After that was all set, it was time to install some VMs.
I knew that I wanted to install a router on this machine. This gives me the flexibility to run a network firewall for all of these devices and the option to protect any device I use when I travel, but more on that later.
So I installed pfSense and passed through 2 NICs from the host down to the guest. The first NIC is the WAN port, for an upstream provider like an ISP or even some network I don’t trust, and then one port is for LAN if I want to connect all of these devices to the local network. Passing these through and configuring them was pretty simple, and if I forget which port is which, they even included some stickers for me to label the ports. I also added another network port that’s used as a network bridge in case I want these VMs to use an internal network.
Protectli Network interface stickers
Now that the router was done, I wanted to configure a NAS on this device. This NAS could be any open source NAS, but I chose to go with TrueNAS SCALE. I went with TrueNAS because, well, it's TrueNAS, and I went with SCALE because I wanted a Linux-based OS that plays better with Proxmox. After installing TrueNAS I created a 1 TB virtual disk in Proxmox and assigned it to TrueNAS so that I could have 1 TB of storage on my NAS. I know it's not ZFS and I don't have redundant drives, so if you do the same you'll want to be sure that you have your data backed up to another machine. Once I had TrueNAS up and running I could set up NFS and Samba shares just like I would with a physical install. I can also pass through one of the NICs to my NAS so that it can have a dedicated 2.5 Gb/s NIC if I like.
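Attaching that 1 TB virtual disk can be done from the Proxmox shell as well as the UI. A minimal sketch, assuming the TrueNAS VM has ID 104 and the disk lives on `local-lvm` (both are examples — adjust to your setup); the demo only assembles the command string, so it can run off the Proxmox host:

```shell
# Hypothetical VM ID and storage name for illustration.
VMID=104
STORAGE=local-lvm
SIZE_GB=1000
# qm set <vmid> --scsi1 <storage>:<size-in-GB> allocates a new volume
# and attaches it to the VM as its second SCSI disk.
cmd="qm set $VMID --scsi1 ${STORAGE}:${SIZE_GB}"
echo "$cmd"   # run this on the Proxmox host itself
```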
Next up I needed something to run my containers. Yes, I know I can use TrueNAS to do that, but I wanted to go with my preferred combination of Ubuntu Server + Docker + Portainer. Having a dedicated Ubuntu server running Portainer gives me a great UI and so many possibilities. After installing and configuring it, I created a few containers. This is now a perfect host to run all of my self-hosted services.
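Standing up Portainer on that Ubuntu VM is a couple of Docker commands. This sketch uses the Portainer Community Edition image with its default volume and ports; the demo assembles and prints the commands so the snippet stays runnable even on a machine without Docker:

```shell
# Portainer CE setup: a named data volume plus the server container.
# Printed rather than executed here; run the output on the Ubuntu Server VM.
run_cmd="docker run -d -p 8000:8000 -p 9443:9443 --name portainer --restart=always \
-v /var/run/docker.sock:/var/run/docker.sock -v portainer_data:/data \
portainer/portainer-ce:latest"
echo "docker volume create portainer_data"
echo "$run_cmd"
# The UI is then available at https://<vm-ip>:9443
```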
After getting my foundation all set up, I had my choice of desktop OSes. I could choose between Windows and Ubuntu Desktop; then I looked at how much disk space and RAM I had left and thought to myself, why not both? This is where things got a little bit interesting, too. I first installed Windows 11 and configured it, no problems there, but after installing Windows I wanted to pass through the GPU on the device to the VM, along with the sound card and USB devices, so I could use this all-in-one server as a desktop too. After messing with this for hours I could not get the single Intel GPU to display anything on the screen, even though it was definitely passed through to the guest machine and I could see it over Remote Desktop. I thought maybe it was Windows 11, so I created a Windows 10 machine and it did the same thing.
Ubuntu Virtual machine running on Proxmox with the hardware passed through from host to guest so I can use it as a desktop simultaneously
So I decided to try it again, but with Ubuntu Desktop, and sure enough it worked! I was able to pass through the integrated GPU from the host down to the guest and use this machine as a desktop. I will be the first to admit that it wasn't winning any performance awards, but I was able to do most tasks that I would expect to do on a laptop. I installed VSCode, customized the desktop, watched some YouTube, and even passed through the thermal subsystem so I could monitor the temperature of the host.
After I had this working I decided to install Plex on this machine to see if I could get Quick Sync working. Quick Sync is a technology and a dedicated chip on most modern Intel processors that lets you offload video decoding and encoding from the CPU cores to this chip. This technology is similar to NVENC from NVIDIA and AMF from AMD; the idea is that you give this work to another part of the processor instead of pegging all of your CPU cores. Plex can take advantage of this if you have a Plex Pass, and I do, so I wanted to see if I could get it working.
That's where I started to run into trouble. I thought that since I had the GPU passed through to this Ubuntu machine, Plex would just see Quick Sync and use it. No matter what I tried I could not get Plex to do hardware encoding. I even tried using Docker containers, which supposedly should work if the hardware is mapped properly, however I couldn't get it to work.
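For what it's worth, the usual prerequisite for Quick Sync inside a Plex container is mapping the host's render device into it. My results above suggest the problem was elsewhere, but the mapping itself looks like this — a sketch assuming the GPU shows up on the host as /dev/dri, with placeholder paths and claim token; the demo prints the command instead of executing it:

```shell
# Sketch of a Plex container (official plexinc/pms-docker image) with the
# Intel render node mapped in for hardware transcoding.
# <claim-token> and the /path/to/... volumes are placeholders.
plex_cmd='docker run -d --name plex \
  --device /dev/dri:/dev/dri \
  -e PLEX_CLAIM=<claim-token> \
  -v /path/to/config:/config \
  -v /path/to/media:/media \
  plexinc/pms-docker'
echo "$plex_cmd"
```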
I could see the Intel GPU using Remote Desktop on Windows
Then I decided to try my Windows VM. I could see the Intel GPU when using Remote Desktop, so there was hope. Sure enough, when I installed Plex and started streaming a video, hardware transcoding kicked in and the CPU barely budged! I was able to transcode 3 to 4 4K streams down to 1080p, 720p, and even 480p, no problem!
I could encode 3 to 4 4K streams on Plex using Intel's Quick Sync!
This was awesome and puzzling at the same time. The Windows machine could see the GPU and take advantage of Quick Sync, but I couldn't output the display over HDMI or DisplayPort, and the Ubuntu machine could output to the monitor over HDMI but couldn't use Quick Sync. Since I was able to cover both of these use cases with different operating systems, this told me it's something with software and not hardware, so I chalked it up as a software issue that may be fixed some day.
Protectli VP4650 power draw with 4 virtual machines running and hardware attached
You might think that running 4 virtual machines on this device would draw a lot of power and generate a lot of heat. Well, I thought the same thing until I pulled out my Kill A Watt and decided to measure it. This Protectli machine, running 4 VMs and the host itself with all of these devices plugged in, pulled anywhere from 20-30 watts, which I think is pretty good considering I have all of this functionality in one device. If I wanted to save some more power, I could power down any of these virtual machines when not using them.
Protectli VP4650 temperatures with 4 virtual machines running and hardware attached
And as far as heat goes? Well, do you hear that fan? That's right, no fan equals no noise, but that also means it's going to heat up these fins. As you saw earlier, the thermals were around 56 Celsius on the die; it's actually much cooler on the heatsink fins, so you definitely can't cook anything on it.
As you saw, I was able to create quite a few virtual machines, spin up an absolutely quiet hypervisor, and use it as a desktop, which goes to show just how flexible these devices are. If you go with Protectli you are getting a blank slate where you can create and build anything you want, from a full-fledged server with virtual machines to a small dedicated development environment.
Now, it wouldn't be fair if I didn't mention some of the beefs I have with it too. Remember that eMMC drive I mentioned? Well, if you're planning on using it with Proxmox it's almost useless. Proxmox can't be installed on that device, and even if it could, it's only 16 GB. I'd love to see another option besides the eMMC module, or even space for another SSD. The other thing that some mention is the price. These devices are a little more expensive than some of the other small form factor devices out there, so I wish the price would come down just a bit. However, these devices are purpose-built, have lots of customization options like WiFi and 4G LTE, are industrial quality, offer support for all of their devices, and allow you to swap out the firmware for coreboot anytime you like. Those options might be enough for you to justify the premium cost, because I think these are truly premium devices.
Well, I learned a lot about running Proxmox on Protectli devices and I hope you learned something too. And remember, if you found anything in this video helpful, don't forget to like and subscribe. Thanks for reading and watching!
Here is my configuration for each virtual machine on my Proxmox server. Please note that (as seen in this article and the video) I did have issues getting the Windows machines to output their display to a physical monitor, however Quick Sync worked just fine for encoding videos; with Ubuntu Desktop I could output the display but could not use Quick Sync. If you have a fix, let me know in the comments!
boot: order=virtio0;ide2;net0
cores: 4
cpu: host,flags=+aes
hostpci0: 0000:06:00
ide2: none,media=cdrom
memory: 2048
meta: creation-qemu=7.2.0,ctime=1680150221
name: pfsense
net0: virtio=12:70:A1:22:F9:2F,bridge=vmbr1
numa: 0
ostype: other
scsihw: virtio-scsi-single
smbios1: uuid=0388a78d-7950-49e7-8ef9-19a9744e8ee2
sockets: 1
startup: order=1,up=30,down=30
vga: qxl
virtio0: local-lvm:vm-100-disk-0,discard=on,iothread=1,size=20G
vmgenid: 314798d0-820e-40bd-89ad-ac364b03b83c
agent: 1
balloon: 0
bios: ovmf
boot: order=ide0;ide2;virtio0;net0
cores: 8
cpu: host
efidisk0: local-lvm:vm-101-disk-0,efitype=4m,pre-enrolled-keys=1,size=4M
hostpci0: 0000:00:02,pcie=1
hostpci1: 0000:00:12.0
ide0: local:iso/virtio-win-0.1.229.iso,media=cdrom,size=522284K
ide2: none,media=cdrom
machine: pc-q35-7.2
memory: 32768
meta: creation-qemu=7.2.0,ctime=1680233057
name: windows-11
net0: virtio=DE:AB:E8:6B:9F:B7,bridge=vmbr2,firewall=1,tag=60
numa: 0
ostype: win11
scsihw: virtio-scsi-single
smbios1: uuid=5f7d30a5-b3df-4a29-800c-730c7a43668d
sockets: 1
tpmstate0: local-lvm:vm-101-disk-1,size=4M,version=v2.0
vga: std
virtio0: local-lvm:vm-101-disk-2,cache=unsafe,discard=on,iothread=1,size=10
vmgenid: 9193bc41-1b82-4069-bc42-8cbb0dfca31d
agent: 1
balloon: 0
boot: order=scsi0;ide2;net0
cores: 8
cpu: host
hostpci0: 0000:00:02,pcie=1,rombar=0,x-vga=1
hostpci1: 0000:00:1f
hostpci2: 0000:00:1a
hostpci3: 0000:00:12,pcie=1
ide2: none,media=cdrom
machine: q35
memory: 16384
meta: creation-qemu=7.2.0,ctime=1680232192
name: ubuntu
net0: virtio=1E:05:6A:E7:68:85,bridge=vmbr2,tag=60
numa: 0
ostype: l26
scsi0: local-lvm:vm-102-disk-0,cache=writeback,discard=on,iothread=1,size=8
scsihw: virtio-scsi-single
smbios1: uuid=7b2a286b-197d-4382-9c04-5a0544596b89
sockets: 1
startup: order=4,up=30,down=30
usb0: host=24f0:0142
usb1: host=045e:0724
vga: none
vmgenid: e93460b1-66f7-4694-a528-98ed006eb770
agent: 1
balloon: 0
boot: order=scsi0;ide2;net0
cores: 4
cpu: host
ide2: none,media=cdrom
memory: 8192
meta: creation-qemu=7.2.0,ctime=1680232488
name: ubuntu-server
net0: virtio=F6:BF:85:17:B6:0F,bridge=vmbr2,firewall=1,tag=60
numa: 0
ostype: l26
scsi0: local-lvm:vm-103-disk-0,cache=unsafe,discard=on,iothread=1,size=32G
scsihw: virtio-scsi-single
smbios1: uuid=7bc4309c-dc9a-4632-bfd5-2e5f8a5e4fcd
sockets: 1
startup: order=3,up=30,down=30
vmgenid: 2500d141-f7be-4c7b-ab9f-0a0f0075ea97
agent: 1
balloon: 0
boot: order=scsi0;ide2;net0
cores: 4
cpu: host
ide2: none,media=cdrom
machine: q35
memory: 8192
meta: creation-qemu=7.2.0,ctime=1680314889
name: truenas
net0: virtio=DE:16:B3:D8:6C:C7,bridge=vmbr2,firewall=1,tag=60
numa: 0
ostype: l26
scsi0: local-lvm:vm-104-disk-0,discard=on,iothread=1,size=32G,ssd=1
scsi1: evo:vm-104-disk-0,discard=on,iothread=1,size=1000G,ssd=1
scsihw: virtio-scsi-single
smbios1: uuid=2ab225ac-44d8-4fb0-b5eb-0ada70e05f33
sockets: 1
startup: order=2,up=30,down=30
vmgenid: 79db20a3-ff24-457c-8abb-6dc4df3c6e38
agent: 1
balloon: 0
bios: ovmf
boot: order=ide0;ide2;scsi0;net0
cores: 8
cpu: host
efidisk0: local-lvm:vm-105-disk-0,efitype=4m,pre-enrolled-keys=1,size=4M
hostpci0: 0000:00:02,pcie=1
hostpci1: 0000:00:12,pcie=1
hostpci2: 0000:00:1f
ide0: none,media=cdrom
ide2: none,media=cdrom
machine: pc-q35-7.2
memory: 32768
meta: creation-qemu=7.2.0,ctime=1680459394
name: windows-10
net0: virtio=72:B4:A4:CD:C6:96,bridge=vmbr2,firewall=1,tag=60
net1: virtio=C6:9F:2F:F2:73:7B,bridge=vmbr1,firewall=1,link_down=1
numa: 0
ostype: win10
scsi0: local-lvm:vm-105-disk-1,cache=unsafe,discard=on,iothread=1,size=150G
scsihw: virtio-scsi-single
smbios1: uuid=74ff8b62-d60d-4d5c-81a0-e3939baa380c
sockets: 1
startup: order=4,up=30,down=30
vga: none
vmgenid: f4593d16-12ab-4483-8962-6c27ee576f05
Over the last few weeks I built a low power, efficient, and silent Proxmox server! I can run many virtual machines and even pass through the hardware to use it as a desktop simultaneously!
— Techno Tim (@TechnoTimLive) April 8, 2023
Check it out! 👉https://t.co/4u6DW3BS3E#homelab pic.twitter.com/32quyBjXH5
🛍️ Check out the new Merch Shop at https://l.technotim.live/shop
⚙️ See all the hardware I recommend at https://l.technotim.live/gear
🚀 Don’t forget to check out the 🚀Launchpad repo with all of the quick start source files
I debated buying a new Mac due to its limited options for expandability. This all changed when I found a way to not only rackmount my Mac, but add PCIe slots to add additional components like NVMe SSDs, video capture cards, dual 10 gig networking, and even testing a video card.
Thank you to Sonnet for sending this xMac Studio / Echo III / M.2 8x4 Silent Gen4 PCIe Card to help complete my video editing / software development Machine!
Disclosures:
Thinking about expanding your Mac/Windows/Linux Machines? Check out Sonnet!
I racked my Mac Studio using this rackmount case and it gives me so many connectivity options. It not only serves as a rackmount but it also expands its capabilities by adding a Thunderbolt enclosure that can fit 3 full length PCIe cards and connects over Thunderbolt 3 or 4. This unlocks the Mac's full potential by allowing you to connect PCIe cards like network adapters, video capture cards, or even add super fast storage using this 16x PCIe card that can fit up to 8 NVMe SSD drives. The xMac Studio rackmount case also has a built-in USB hub, cut-outs that still let you access the IO ports, an easy to reach power button, and even an area to keep USB or Thunderbolt drives if you've decided to connect those.
Sonnet xMac Studio enclosure with the Echo III expansion system
Today we're going to take a look at the xMac Studio Pro Rackmount System from Sonnet, with the Echo III expansion system, and even one of their M.2 8x4 Silent Gen4 PCIe cards to add some additional storage. We're going through all of this today, including the rackmount case, the enclosure, testing different cards that work with a Mac, and even some speed tests using the included PCIe NVMe card. This kind of expandability makes it hard to see why the Mac Pro even exists.
I bet you're wondering, why a Mac Studio, and why not a Mac Mini? I debated this for quite some time and even started configuring a Mac Mini, but after comparing the specs of what I wanted out of a Mac Mini M2 and a Mac Studio M2, I found that for only 100 dollars more I was able to get twice the GPU cores (38 in total), twice the RAM (64 GB in total), 2 additional USB-C ports, a media card reader, faster onboard SSDs, and even faster memory. I did have to reduce the storage down to 1 TB, but that's a sacrifice I was willing to make, and I knew I could supplement storage with a system like this from Sonnet. Don't get me wrong, the Mac Mini is a great machine, but once you start getting into the upper end of the specs, you're better off going with a Mac Studio. Oh, also, Sonnet makes a Mac Mini rack too, which I'd love to test out in the future as a Mac build / render server.
If you’re going to increase the specs of the Mac Mini, at some point you’re better off getting a Mac Studio
The next question you're probably asking is why rack a Mac system at all, I mean, aren't they meant to be looked at? Joking. Kind of. I chose to rackmount my Mac Studio, not because it's on brand (ding), but because I wanted better cable management. Wait, cable management? Yeah, cable management. Being a content creator, streamer, and developer, I have lots of cables and cords to connect lots of devices, like this 4K HDMI capture card that I connect cameras and devices to in order to capture their output. The same goes for audio equipment, USB devices, XLR cables, and on and on. While building my server rack in my basement, I found that having everything in one cabinet, like a server rack, makes wire management much easier, or at least easier to hide. So recently I picked up a smaller server rack to rack both my Mac and my upcoming Windows / Linux build in a Sliger water cooled case.
But there are many audio and video creative professionals who do rack their equipment and I am adapting it to fit my needs. Will it work? Let’s find out.
The xMac Studio rack mount case isn’t just a case to keep it safe, but a way to expand the capabilities of your Mac Studio.
xMac Studio rackmount case by Sonnet
Features:
The enclosure that comes with the xMac Studio / Echo III combo is actually a desktop enclosure that converts to a rack mount enclosure. It’s the same internals but without the outer case from the desktop module. This is a professional level enclosure for creative pros and can be connected to any device that has a Thunderbolt connection, but I opted for the rack mounted version without the desktop case. Let’s take a look at it.
Features:
So what am I going to put in the slots? Well one of them for sure is the Sonnet M.2 8x4 silent Gen4 PCIe Card
This is the Sonnet M.2 8x4 Silent Gen4 PCIe Card, and it's a professional level card that's blazing fast! It's a 16x card, and the bandwidth is available to all of the connected NVMe SSDs, which helps facilitate maximum speeds. It works with Windows, Mac, or Linux computers that have an x16 slot and is compatible with a variety of M.2 NVMe Gen4 and Gen3 SSDs, but you'll want Gen4 if you're going for speed.
Here's the cool thing about this card: it doesn't require a specific motherboard for RAID or any other features, and it does not require PCIe bifurcation support from the motherboard. PCIe bifurcation is just a fancy word for taking something and dividing it into parts. Without it, we would only see one device or need a special motherboard, but because this card handles the splitting itself, it presents each individual drive to the computer. That makes this card very flexible. I installed 8 NVMe SSDs into this card and applied the thermal transfer pad to move heat from the SSD drives to the cooler. This helps keep the drives cool and avoid any kind of thermal throttling.
Loaded it up with 8 NVMe drives!
Features:
This allows me to connect 8 NVMe SSDs via the Thunderbolt port using the Echo III PCIe expansion enclosure, which pops right into the xMac Studio.
So let’s put all of this together, add some PCIe cards, and test various speeds and compatibility.
First of all, it's worth mentioning that this card is really intended for a high performance server or desktop that has a PCIe 4.0 slot and can take advantage of all 16 lanes. This is not the case with my Mac Studio, since it is limited by Thunderbolt and the enclosure only supports PCIe 3.0; this is because Thunderbolt does not support PCIe 4.0. I know this all sounds complicated, because it is 😀.
When testing in this enclosure over Thunderbolt, here are the speeds I was able to achieve:
~2800 MB/s Read / Write.
This speed test maxed out Thunderbolt speeds!
This is roughly 22 Gb/s, which is no slouch, but isn't that a far cry from the 40 Gb/s that Thunderbolt supports?
This is actually the practical max of Thunderbolt: it reserves roughly half its bandwidth for downstream devices like monitors, so the card, the enclosure, and even Thunderbolt are performing as they should. This isn't a limitation of the card or the enclosure, it's a limitation of Thunderbolt.
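The arithmetic checks out: converting the benchmark's MB/s figure into Gb/s shows the drives were already at the ceiling Thunderbolt leaves for PCIe data:

```shell
# 2800 MB/s * 8 bits/byte = 22400 Mb/s = 22.4 Gb/s, right at the ~22 Gb/s
# that Thunderbolt makes available to PCIe traffic once display bandwidth
# is reserved out of the 40 Gb/s link.
mb_per_s=2800
gbps=$(awk -v m="$mb_per_s" 'BEGIN { printf "%.1f", m * 8 / 1000 }')
echo "$gbps Gb/s"   # prints: 22.4 Gb/s
```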
I will test this some more in my next rackmount project which is building my new Windows/Linux workstation in a Sliger case that’s water cooled.
As a side note, I tested many other cards which aren’t covered here but can be seen in the video!
This speed test maxed out Thunderbolt speeds!
Overall, I am very happy with my Sonnet xMac Studio and Echo III module. If you're looking to rack your Mac Studio, there are few mounting options, but the xMac Studio offers the additional Thunderbolt expansion system that really takes this to the next level. The combination of the two gives me the flexibility I need to use my Mac how I want to use it. Thunderbolt connectivity ensures that I can connect this to any system I want: a Mac, a Windows machine, or Linux, even a laptop. Overall it's a great system even if Thunderbolt has some limitations. Well, I learned a lot about Thunderbolt 3 and 4, the Mac Studio, and the xMac Studio system, and I hope you learned something too. And remember, if you found anything in this post helpful, don't forget to share! Thanks for reading!
I found a way to rackmount a Mac, add PCIe devices, and even add 8 NVMe SSDs! Spoiler alert, I tested a GPU and it did not work.https://t.co/btbnQ5SlKw pic.twitter.com/9per30lu5a
— Techno Tim (@TechnoTimLive) October 5, 2023
Meet File Browser, an open source, self-hosted alternative to services like Dropbox and other web-based file browsers. Today we'll configure a containerized version of File Browser and have you up and going in just a few minutes. We'll also walk through creating, editing, moving, copying, and even sharing files and folders so that you get a better understanding of what File Browser is all about.
See this post on how to install docker and docker-compose.
If you're using Docker Compose:
mkdir filebrowser
cd filebrowser
touch docker-compose.yml
nano docker-compose.yml # copy the contents from below
touch filebrowser.db
docker-compose up -d --force-recreate
docker-compose.yml
---
version: '3'
services:
  file-browser:
    image: filebrowser/filebrowser
    container_name: file-browser
    user: 1000:1000
    ports:
      - 8081:80
    volumes:
      - /home/serveradmin/:/srv
      - /home/serveradmin/filebrowser/filebrowser.db:/database.db
    restart: unless-stopped
    security_opt:
      - no-new-privileges:true
If you're using Rancher, Portainer, Open Media Vault, Unraid, or anything else with a GUI, just copy and paste the configuration above into the form on the web page.
Rancher released a next generation open source HCI hypervisor built on Kubernetes that helps you run virtual machines. With Harvester you can create Linux, Windows, or any other virtual machine that can be easily scaled and clustered, giving you high-availability virtual machines with a few clicks. It also gives you a platform to automatically create HA RKE1, RKE2, and K3s Kubernetes clusters with etcd, along with the virtual machines they run on. Now you can run virtual machines and Kubernetes on the edge on one machine.
MaaS, or Metal as a Service, from Canonical is a great way to provision bare metal machines as well as virtual machines. MaaS allows you to deploy Windows, Linux, ESXi, and many other operating systems to your systems, helping you to build a bare metal cloud. You can even use Packer from HashiCorp to build custom images too! We'll cover all of this and more in this tutorial on how to install and configure MaaS from start to finish with Packer!
New Customer Exclusive - $25 Off ALL Processors: https://micro.center/3si
Check out Micro Center’s Custom PC Builder: https://micro.center/wcx
Submit your build to Micro Center’s Build Showcase: https://micro.center/dcm
Visit Micro Center’s Community Page: https://micro.center/2vr
MaaS can be installed via apt or snap. I had some issues with the apt version, so I used snap for this install.
Be sure snap is installed:
sudo apt install snapd
sudo snap install --channel=3.2 maas
sudo apt-add-repository ppa:maas/3.2
sudo apt update
sudo apt install maas
(skip this step if you already have postgres in your environment)
This should be used if you want to use the MaaS test database.
sudo snap install maas-test-db
testing the database
sudo maas-test-db.psql
Then list the databases; you should see maasdb there:
postgres=# \l
If you are using the test database above, initialize MaaS
sudo maas init region+rack --database-uri maas-test-db:///
If you already have Postgres in your environment, you can initialize MaaS using your existing Postgres service. Be sure to create the database and user, and assign that user permissions, before running the init command.
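The prep step mentioned above might look like the following sketch (the user, database, and password here are example names only, not from the video; run the statements via `sudo -u postgres psql` on your Postgres host):

```shell
# Example database/user names; substitute your own before running.
MAAS_DBUSER=maas
MAAS_DBNAME=maasdb
sql="CREATE USER $MAAS_DBUSER WITH ENCRYPTED PASSWORD 'changeme';
CREATE DATABASE $MAAS_DBNAME WITH OWNER $MAAS_DBUSER;"
# Feed this to psql on the Postgres host, e.g.:
#   sudo -u postgres psql -c "$sql"
printf '%s\n' "$sql"
```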
sudo maas init region+rack --database-uri "postgres://username:password@192.168.0.100/maas" # replace username/password/IP/db name
If you don't want to store your secrets in your terminal's history, consider using environment variables:
sudo maas init region+rack --database-uri "postgres://$MAAS_DBUSER:$MAAS_DBPASS@$HOSTNAME/$MAAS_DBNAME"
sudo maas createadmin
Here you can choose to import your LaunchPad or GitHub public key using gh:githubusername
sudo maas status
The output should look something like this:
bind9 RUNNING pid 1014, uptime 2 days, 10:52:40
dhcpd STOPPED Not started
dhcpd6 STOPPED Not started
http RUNNING pid 1477, uptime 2 days, 10:52:23
ntp RUNNING pid 1143, uptime 2 days, 10:52:37
proxy RUNNING pid 1454, uptime 2 days, 10:52:25
rackd RUNNING pid 1017, uptime 2 days, 10:52:40
regiond RUNNING pid 1018, uptime 2 days, 10:52:40
syslog RUNNING pid 1144, uptime 2 days, 10:52:37
If you ever need to reinitialize MaaS
sudo maas init region
Get key ring
wget -O- https://apt.releases.hashicorp.com/gpg | sudo gpg --dearmor -o /usr/share/keyrings/hashicorp-archive-keyring.gpg
Add keyring
echo "deb [arch=$(dpkg --print-architecture) signed-by=/usr/share/keyrings/hashicorp-archive-keyring.gpg] https://apt.releases.hashicorp.com $(lsb_release --codename --short) main" | sudo tee /etc/apt/sources.list.d/hashicorp.list
Add the HashiCorp repo
sudo apt-add-repository "deb [arch=amd64] https://apt.releases.hashicorp.com $(lsb_release -cs) main"
Install Packer
sudo apt update
sudo apt install packer
Update and install dependencies needed to build images
sudo apt update
sudo apt install qemu-utils qemu-system ovmf cloud-image-utils make curtin git
Clone the canonical/packer-maas repo
git clone https://github.com/canonical/packer-maas.git
cd packer-maas
cd ubuntu
sudo packer init ubuntu-cloudimg.pkr.hcl
sudo make custom-cloudimg.tar.gz
Check and change the permissions of the archive (change root to your username)
ls -l
sudo chown root:root ./custom-cloudimg.tar.gz
Echo your MaaS API key to your home directory
sudo maas apikey --username=massadmin > ~/api-key-file
You can check it with
cat ~/api-key-file
Authenticate to MaaS with your api key
maas login massadmin http://localhost:5240/MAAS/api/2.0/ $(head -1 ~/api-key-file)
Upload the custom image we made to MaaS
maas massadmin boot-resources create name='custom/cloudimg-tgz' title='Ubuntu Custom TGZ' architecture='amd64/generic' filetype='tgz' content@=custom-cloudimg.tar.gz
00:00 - What is MaaS (Metal as a Service) from Canonical?
02:00 - Micro Center / $25 Off CPUs! (Sponsor)
03:00 - Installing MaaS
06:56 - Initial MaaS Configuration
09:41 - Importing your SSH Key
10:23 - Networking Configuration & Discovery
14:05 - PXE & Network Boot with DHCP
15:33 - Commissioning a Machine (Initial Discovery)
18:45 - Power Types & Wake on LAN (WOL)
20:50 - Commissioning a Machine Part 2 (For real this time)
24:00 - Deploying Ubuntu
26:15 - SSH in to machine
26:54 - Creating Custom Images with Hashicorp Packer
33:40 - Uploading a Custom Image to MaaS
38:05 - What do I think of MaaS from Canonical?
39:57 - Stream Highlight - “100 + 50 subs dropped 🫳🎤”
This past week I learned how to solve the challenge of imaging bare metal machines. I settled on MaaS (Metal as a Service) and custom images with Hashicorp Packer. This is the missing link for automation in my #homelab
— Techno Tim (@TechnoTimLive) January 28, 2023
Check out the video here 👇https://t.co/5rhHtwaLi4 pic.twitter.com/KgeYCgYzgt
Have you been putting off migrating your database to Docker and Kubernetes like I have? Well, wait no longer. It's simple using this step-by-step tutorial. Today, we'll move a database that's on a virtual machine to a container running in Kubernetes. Oh yeah, this will also work if it's on a bare metal server too, duh. 🙂
mysql_backup.sh
#!/bin/bash

BACKUP_DIR="/home"
MYSQL_USER="root"
MYSQL=/usr/bin/mysql
MYSQL_PASSWORD="your mysql password"
MYSQLDUMP=/usr/bin/mysqldump
MYSQL_HOST="mysql"
MYSQL_PORT="3306"

databases=`$MYSQL --user=$MYSQL_USER --host $MYSQL_HOST --port $MYSQL_PORT -p$MYSQL_PASSWORD -e "SHOW DATABASES;" | grep -Ev "(Database|information_schema|performance_schema)"`

for db in $databases; do
  $MYSQLDUMP --host $MYSQL_HOST --port $MYSQL_PORT --force --opt --user=$MYSQL_USER -p$MYSQL_PASSWORD --databases $db | gzip > "$BACKUP_DIR/$db.gz"
done
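Once the dumps exist, restoring them into the new containerized MySQL is the reverse pipeline. A hedged sketch, assuming the same variables as the backup script above; the real mysql invocation is commented out so the demo stays runnable against a scratch directory:

```shell
# Restore every gzipped dump produced by mysql_backup.sh.
# The demo fabricates one dump in a scratch dir and just decompresses it;
# uncomment the mysql line against your real host/credentials.
BACKUP_DIR="/tmp/restore-demo"
mkdir -p "$BACKUP_DIR"
echo "CREATE DATABASE IF NOT EXISTS demo;" | gzip > "$BACKUP_DIR/demo.gz"
for dump in "$BACKUP_DIR"/*.gz; do
  sql=$(gunzip < "$dump")
  # gunzip < "$dump" | mysql --host "$MYSQL_HOST" --port "$MYSQL_PORT" \
  #   --user="$MYSQL_USER" -p"$MYSQL_PASSWORD"
  echo "$sql"
done
```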
There’s building a MINI SERVER RACK and then there’s beating Raid Owl in the mini server rack build challenge. Let’s see if I can do both.
Check out Raid Owl’s build here: https://www.youtube.com/watch?v=wJUDhQ7s9HM
📦 Mini Server Rack Parts List 📦
(Affiliate links may be included in this description. I may receive a small commission at no cost to you.)
I finished my mini server rack build! I basically shrunk an entire HomeLab into this small rack.
— Techno Tim (@TechnoTimLive) July 26, 2024
Check it out! ---->https://t.co/NPI2YcSZcb pic.twitter.com/xJRmwyXYzP
🤝 Support me and help keep this site ad-free!
This has been months in the making, my new Mobile HomeLab! It’s a device that I can take with me to provide secure internet access for all of my devices. Not only can it provide secure access, but it can also let me bring apps and services with me when I travel. It’s built on Proxmox, OpenWRT, Pi-hole, and many other services. I’m taking this with me everywhere!
A huge thank you to Protectli for sending this device!
See the whole kit here! - https://kit.co/TechnoTim/mobile-homelab
(Affiliate links are included in this description. I may receive a small commission at no cost to you.)
This is my mobile HomeLab, or is it my mobile home lab, or just mobile lab, or a travel router ++, or ultimate mobile HomeLab? Anyway, it's a computer that I bring with me that serves as a network firewall, an access point, and a platform to run apps, services, and virtual machines. I guess it's a cross between Wendell's forbidden router and Network Chuck's travel router. It's something that I am going to take with me every time I travel, and it will provide internet access whether that be from an existing network or one I connect to over my carrier's mobile data network.
A Mobile HomeLab device I can take with me that also provides network access!
This is something that I have wanted to create for quite some time, because when traveling I bring a few pieces of technology with me to make my life a little easier and keep my nerd brain fed. Some of these are common, like a laptop and a tablet, but others aren't. You see, when I travel I like to take a router with me to keep all of my devices connected securely, rather than connecting all of my devices to, say, the Airbnb's WiFi. Bringing my own router assures me that my laptop, tablet, phone, Pi, even security camera are connected to my router and that no other devices can spy on me.
I’ve carried an old Cisco Linksys router with me every time I travel, and it provides a secure private network that only my devices can connect to. I can even take this a step further and use a VPN to connect all of my devices securely to my home network, where I get the same protection as I do when I am physically at home. This little router has worked great for quite some time, but I also started bringing a Raspberry Pi to provide a few more services on my local network. That’s around the time when I started thinking about how to combine all of the functionality into one package. Protectli reached out to me and said they wanted to give one of their devices a test run on my next trip to see if this combination of form factor and hardware would accomplish everything I needed out of my new “mobile homelab, forbidden travel router, plus plus, ultimate mobile homelab - thingie”.
This is a Protectli Vault VP2420 which, if you couldn’t tell by the huge heatsink on top, is fanless and silent. This model has an Intel Celeron J6412, but it’s not like the Celerons of the past: this Celeron has 4 cores and 4 threads, a base clock speed of 2 GHz, and can burst to 2.6 GHz. What makes this CPU great is that it is super low power yet still has features like AES-NI, VT-x, and VT-d, which makes it great for a hypervisor like VMware or Proxmox. It also has QuickSync, which can be used for video transcoding too. I opted for 32 GB of DDR4 RAM, the most you can get on this device.
This model comes with (4) 2.5 Gb ethernet ports for lots of hard-wired connectivity options. But even more interesting than the wired options are the wireless options. You probably noticed all of the antennas sticking out; one set is pretty obvious, and that’s for WiFi. It’s a Protectli WiFi module that supports 802.11 ac/a/b/g/n and fits into the m.2 slot. The other antennas are actually for a 4G LTE modem that works with most carriers. It even has a slot on the outside of the case that you can insert your SIM card into without opening the device up.
As far as storage goes, it has an internal 8 GB eMMC module that I really won’t be using, and I opted for a 1 TB Samsung SSD. I would have liked an option for another drive, but I figured this was good enough for what I am going to use it for.
As far as IO goes, we have an HDMI port, 2 USB 3.0 ports, a DisplayPort, USB-C, and a micro USB port for console access. It’s powered by a little brick with a barrel plug. This is quite a capable machine for something that’s smaller than a tablet. All in all, it’s a solid, fanless, quiet, yet powerful build.
My Protectli Vault VP2420 with 2.5 Gb/s networking
So now that I have all of this put together (it came assembled) how was I going to build the ultimate mobile HomeLab?
My original thought was to just run pfSense or OPNsense on this machine and use it as a router; however, FreeBSD, the operating system these are built on, does not have drivers for this wireless NIC. That shut that down really quick. Then I noticed that Protectli has documentation on their site on how to set up this device with OpenWRT. That’s when I remembered Network Chuck’s video and decided that if he got it working, I could too. Well, not really, because he’s a legit networking person and I am just a hack, but I thought I would give it a shot anyway.
I should have installed OpenWRT on Proxmox to begin with…
So I installed OpenWRT. The process was a little bit complicated but I had some help from Stuart from the Protectli team and they updated their docs with the challenges we worked through. After getting it running, I quickly realized that I should have just used a hypervisor and created it as an OpenWRT virtual machine. This would allow me to make changes and back them up as I go. It would also allow me to install other VMs and containers that I can use while on the go.
So that’s what I did, I installed Proxmox on this machine since it supports virtualization and hardware passthrough. At first, I wanted to create an LXC container for OpenWRT to use less resources, however, it does not support hardware passthrough like virtualization does for network cards so I created a simple virtual machine. I found this great guide on creating an OpenWRT VM on Proxmox!
The steps to create a VM were pretty straightforward, and I followed each step on that checklist carefully.
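For reference, the VM-creation steps can be sketched with Proxmox’s `qm` CLI roughly like this. This is a sketch, not the guide’s exact commands: the VM ID `100`, the storage name `local-lvm`, and the OpenWRT release in the URL are placeholders I’ve picked for illustration.

```sh
# Download and unpack an x86-64 OpenWRT combined image (version is an example)
wget https://downloads.openwrt.org/releases/22.03.5/targets/x86/64/openwrt-22.03.5-x86-64-generic-ext4-combined.img.gz
gunzip openwrt-22.03.5-x86-64-generic-ext4-combined.img.gz

# Create an empty VM with 2 cores / 2 GB RAM on the default bridge
qm create 100 --name openwrt --cores 2 --memory 2048 --net0 virtio,bridge=vmbr0

# Import the image as the VM's disk, attach it, and boot from it
qm importdisk 100 openwrt-22.03.5-x86-64-generic-ext4-combined.img local-lvm
qm set 100 --scsi0 local-lvm:vm-100-disk-0 --boot order=scsi0
```

The physical NIC and USB passthrough mentioned below is then done from the VM’s Hardware tab in the Proxmox UI.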
Once I had the virtual machine configured, I then passed through the devices that I need to run a router along with an access point. I passed through a NIC for WAN access, the wireless adapter for the access point, a USB wireless NIC for additional WAN access, and the USB modem for, well, WAN access over LTE. I gave it 2 GB of RAM, 2 CPU cores, and a disk of only 512 MB, which is how big the disk image is. Now this might not seem like much, but it is much more than I will ever need, considering the router that also runs a version of OpenWRT only uses 32 MB of RAM and 8 MB of disk space.
OpenWRT running as a VM on Proxmox
Once the machine was up and running, I made some changes to the NIC and then went to the OpenWRT admin interface. The interface is pretty basic, although it does come with dark mode, so that’s a plus for me. They also support a few different themes; however, I decided to stick with the default bootstrap dark. I configured a few initial settings like NTP and my router’s name, and then headed over to the software section, where I can install additional packages. I installed a few optional packages like `nano`, `zsh`, `usbutils`, the OpenSSH SFTP server, and `htop` for better monitoring. After doing this, it was time to configure the network.
First, I wanted to be sure I could connect to this device via LAN. This was as simple as configuring the virtual machine to connect to the bridge on Proxmox. This means when I plug a network adapter into a port dedicated as LAN, I can connect to anything running on the Proxmox bridge. This will be the local area network for all of my devices on this subnet. If you want, you can configure DHCP in this OpenWRT interface, but I am going to do that later with Pi-hole or even pfSense.
The next NIC I wanted to configure was the WAN NIC. This will be the NIC that is passed through to this virtual machine and will give it internet access if you have physical access to the modem or another switch. It’s as simple as assigning this NIC to WAN, and turning on DHCP. Physically plugging an ethernet cable is my preferred method of connecting this router to an upstream network like an Air BnB modem or any other network you don’t trust.
After this step, the LAN and WAN should work when physically connected
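On disk, that LAN/WAN setup lands in OpenWRT’s `/etc/config/network`. A minimal sketch might look like the following; the interface names `eth0`/`eth1` and the LAN address are assumptions that will vary per install:

```
config interface 'lan'
	option device 'eth0'
	option proto 'static'
	option ipaddr '192.168.1.1'
	option netmask '255.255.255.0'

config interface 'wan'
	option device 'eth1'
	option proto 'dhcp'
```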
Now that I have LAN and WAN NICs configured, I can plug in my laptop and connect to this network. This works fine, but really we want to broadcast our own wireless SSID so all of our devices can connect to it. This is where we’ll need to configure our Protectli wireless NIC. In order for this NIC to work, we’ll need to install the drivers and a few packages on OpenWRT to enable the wireless access point feature, and we can do this within the software section. You’ll need to install a few packages and then overwrite a few files with ones from Protectli. They’ve found that some of the packages available on OpenWRT aren’t compatible, so they’ve provided these files on their website along with instructions. Once that’s taken care of and we reboot, we can see the wireless section with our wireless NIC! Here we’ll want to configure the wireless network we want to broadcast for our clients to connect to. You’ll need to configure the SSID, security, and wireless mode.
Now you should have a fully working router with LAN, WAN, and an access point!
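In UCI terms, the access point configured above ends up in `/etc/config/wireless` roughly like this. The SSID, key, and channel are placeholders, and the radio name depends on your hardware:

```
config wifi-device 'radio0'
	option type 'mac80211'
	option channel '36'
	option band '5g'

config wifi-iface 'default_radio0'
	option device 'radio0'
	option mode 'ap'
	option network 'lan'
	option ssid 'MyTravelLab'
	option encryption 'psk2'
	option key 'change-me'
```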
Pro tip: I found out that even though this is a dual-band NIC, you cannot broadcast on both bands at the same time. So if you aren’t going to use 2.4 GHz you’re fine; you can set it to AC mode or N 5 GHz. But if you are using any 2.4 GHz devices, you’ll need to set the mode to the lowest common denominator of 2.4 GHz. Another thing you can do is configure a second NIC to broadcast on 2.4 GHz, but we’ll talk about that a little bit later.
Once you apply this, you should be able to see your new SSID and connect to OpenWRT! And if you have the WAN port connected to an upstream network, you should be able to use this as your router! But the fun doesn’t stop there, not even close.
At this point you should be able to connect to your router and use the internet from the WAN port, but what if you don’t have access to the WAN port? This is where a second wireless network device comes into play. Let me be clear, this was the most complicated part of this whole project. OpenWRT supports very few USB wireless adapters. I tested 8 USB wireless network adapters before I finally found one that worked with OpenWRT. I tested name brands, no name brands, USB 2, USB 3, ones with odd antennas, and ones without external antennas at all. It turns out that most wireless USB adapters use a Realtek chipset and this does not play well with OpenWRT. It was hard to find one without a Realtek chip, but it turns out this tiny little no-name one works great and that’s because it’s based on a Ralink chipset, one that’s very hard to find.
I tested 8 wireless USB NICs before finally finding one that works with OpenWRT
So you’ll need to install a few more packages for driver support. I chose to install `mt7601u-firmware` for this wireless USB NIC. After that, you should see another NIC in the wireless section. This time we’re going to configure it as a client that connects to an existing wireless network; that way you don’t have to physically connect to the WAN port, we’ll connect over wireless. We can do this by scanning and connecting to an existing wireless network, and after that you’ll have a completely functional router that can connect to a wireless network and share it with all of your clients!
I should mention that even though this works fine, this USB NIC only supports 2.4 GHz / Wireless N. This is generally fast enough for the internet connection, but just know that you are going to be limited by the speed of this NIC, which is around 150 Mb/s at most. Personally, I would only use this option if you can’t physically connect your WAN port to your upstream router. As you can see, when I am connected to the WAN via this USB NIC, it can be a lot slower than when it is connected via ethernet cable.
Now you should be able to connect your router to an upstream WiFi connection using this NIC!
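For the USB NIC acting as a wireless client, the UCI config is similar but uses `sta` mode and its own interface. A sketch, with the radio name, SSID, and key as placeholders (the `config wifi-iface` stanza lives in `/etc/config/wireless`, the `config interface` stanza in `/etc/config/network`):

```
config wifi-iface 'wwan_sta'
	option device 'radio1'
	option mode 'sta'
	option network 'wwan'
	option ssid 'UpstreamNetwork'
	option encryption 'psk2'
	option key 'upstream-password'

config interface 'wwan'
	option proto 'dhcp'
```

The `wwan` interface then gets assigned to the WAN firewall zone so traffic is routed and NATed like any other upstream connection.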
If you can physically connect to the WAN via ethernet, what I would do is disable this NIC or configure it to broadcast the same private network on 2.4 GHz; this way you can set your primary NIC to use AC/N 5 GHz. I had to do this to connect my Wyze cam since it only supports 2.4 GHz. Yes, I take a Wyze cam with me when I travel so that I can keep an eye on the place when I leave and also keep an eye on my pups, Nano and Buddy.
I bring a Wyze cam with me to keep an eye on the place!
Now that I have this all working, I can fire up my router, connect any of my devices to it, and use my own secure wireless network.
After running a speed test, you can see I am getting anywhere from 180-200 Mbps, which is pretty decent considering I have 500 up/down here at home. I’m sure I could squeeze out some more performance if I tweaked some settings, but this is great considering everything is running on stock settings.
So, now that I have OpenWRT working with an upstream router, what happens if I don’t have an upstream router at all? This is where the LTE modem I mentioned earlier comes into play. This is great for times when you don’t have an internet provider where you are staying, or if you decide to go live the #vanlife.
Installing the software on OpenWRT was pretty straightforward: again, you install a few packages (`kmod-usb-net-rndis`, `wwan`, and `comgt-ncm`) and then reboot. But before I rebooted, I inserted this cheap testing SIM into my device. After rebooting, you’ll go to network interfaces and add the new interface, which should be `usb0`. You’ll want to set WAN as the firewall zone and then save and apply. You can then access the modem’s web GUI on a private IP address of 172.16.0 from a device connected to the LAN port. You should then see your device connected to your cellular provider, and voilà! This connection can be shared with anyone connected to this device. Oh yeah, I did update the firmware too, because I love updating firmware 🤷
If you use an LTE modem, you can now connect all of your devices to LTE data from your carrier!
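The LTE interface itself is just another entry in `/etc/config/network`. With the RNDIS driver loaded, it can be sketched as follows (the interface name `lte` is my own label, and the device name may differ with your modem):

```
config interface 'lte'
	option proto 'dhcp'
	option device 'usb0'
```

Assigning `lte` to the `wan` firewall zone gives connected clients the same NAT treatment as the wired WAN.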
Now that I had OpenWRT working as an access point, a firewall, and a router that can connect to an upstream router via ethernet, wireless, or LTE, it was time to focus on the “homelab” part of this device. Since I installed Proxmox on the host, I can now install anything I want on this machine. The first thing I decided to install was Pi-hole, to keep every connected device safe and free of ads and tracking.
Like all installations on Proxmox, you have options for how you want to install things. I typically choose VMs, but I wanted to keep this lean and mean, so I went with an LXC container. LXC containers are easy to manage and use fewer resources than a full VM. So I created an LXC container, set a hostname and password, and uploaded my public SSH key. I chose the Ubuntu template, then gave it 8 GB of disk space, 2 CPU cores, and 2 GB of RAM. For networking, I connected it to the existing bridge, which is my LAN, and gave it a static IP address.
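A sketch of that container creation with Proxmox’s `pct` CLI; the CTID `200`, template filename, storage names, and addresses are placeholders I’ve chosen for illustration:

```sh
# Refresh the template index and download an Ubuntu LXC template
pveam update
pveam download local ubuntu-22.04-standard_22.04-1_amd64.tar.zst

# Create the container: 2 cores, 2 GB RAM, 8 GB rootfs, static IP on the LAN bridge
pct create 200 local:vztmpl/ubuntu-22.04-standard_22.04-1_amd64.tar.zst \
  --hostname pihole --cores 2 --memory 2048 --rootfs local-lvm:8 \
  --net0 name=eth0,bridge=vmbr0,ip=192.168.1.2/24,gw=192.168.1.1 \
  --ssh-public-keys ~/.ssh/id_rsa.pub
pct start 200
```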
Once the LXC container was created, I updated it and installed Pi-hole. After installing, I updated all of my ad lists. I also added about 5 million sites to my block list, which you can see here.
Added Pi-hole to block all those ads and tracking on the go!
I did end up enabling DHCP on Pi-hole just to see what it was all about. I usually let my router do this but for this travel router I wanted to have more control over blocking. I ended up disabling DHCP on OpenWRT and enabling it on Pi-hole.
Awesome, so now I have Pi-hole with network-wide ad blocking running; what’s next? Well, I know I want Docker as a platform for running applications on this mobile HomeLab device, and Portainer is the best way to manage them. I chose to create another LXC container based on Ubuntu and gave it a 60 GB hard drive, 4 CPU cores, and 16 GB of RAM. I spun up the container and let it grab an IP, and then I reserved that IP inside of Pi-hole. Once the container was up and running, I updated Ubuntu and installed Portainer. Once Portainer was running, I then installed Watchtower to keep all of my containers up to date. I typically use GitOps to handle this in my home production cluster, but I don’t want to worry about updating containers while traveling. Installing Watchtower was easy: just copy and paste the Docker compose and I was good to go.
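The Watchtower compose really is just a few lines; a minimal sketch (Watchtower’s defaults periodically check for and apply image updates, no extra flags needed):

```yaml
version: "3"
services:
  watchtower:
    image: containrrr/watchtower
    container_name: watchtower
    volumes:
      # Watchtower talks to the Docker API to pull new images and recreate containers
      - /var/run/docker.sock:/var/run/docker.sock
    restart: unless-stopped
```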
Installed Portainer to manage all of my Docker containers
So now that I have Portainer installed, we can install any container we like and take it with us. For instance, we could install the super popular Plex or Jellyfin and take our media library with us. This would allow any connected device to stream movies from this device down to theirs, without an internet connection. The nice thing about this Intel CPU is that it has QuickSync, so you can hardware transcode videos if you need to, making sure that streaming is smooth at any resolution. I bet you’re wondering about disk space? Well, if you really wanted to take more than 1 TB of media with you, you could simply attach a USB hard drive with your media to this machine and mount the drive to your Plex or Jellyfin machine.
Doing some local Plex transcoding while on the go!
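As an example of wiring QuickSync into a container, a Jellyfin compose sketch might look like this. The media path is an assumed mount point for a USB drive, and inside an unprivileged LXC you’d also need to pass `/dev/dri` into the container from the Proxmox host first:

```yaml
version: "3"
services:
  jellyfin:
    image: jellyfin/jellyfin
    container_name: jellyfin
    devices:
      - /dev/dri:/dev/dri   # Intel iGPU for QuickSync hardware transcoding
    volumes:
      - ./config:/config
      - /mnt/usb-media:/media:ro   # assumed USB drive mount point
    ports:
      - 8096:8096
    restart: unless-stopped
```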
Also, it doesn’t stop there; we can now install any Docker container or LXC container we like, or even a full-blown virtual machine, since we’re running Proxmox! If we really wanted to, we could install pfSense or OPNsense as a virtual machine, use that as our router, disable all the routing features on OpenWRT, and use OpenWRT only as an access point. Once you have pfSense or OPNsense running, you can then create a VPN connection back home to get the same protection you have at home. The possibilities are really endless. Want to install other containers, like LANCache, Nextcloud, or even a local Minecraft server? No problem.
You can even go as far as creating a VPN tunnel back home!
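If you’d rather not run a full pfSense/OPNsense VM just for the tunnel, a lighter alternative is WireGuard, which OpenWRT also supports. A client config sketch, with keys, endpoint, and subnets as placeholders:

```
[Interface]
PrivateKey = <travel-router-private-key>
Address = 10.0.100.2/24

[Peer]
PublicKey = <home-server-public-key>
Endpoint = vpn.example.com:51820
AllowedIPs = 192.168.0.0/24   # example home LAN subnet to route through the tunnel
PersistentKeepalive = 25      # keeps NAT mappings alive while traveling
```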
And that’s the nice thing about a general-purpose machine like this: you have unlimited possibilities. And using the Intel Celeron platform, you get powerful hardware at a fraction of the power consumption, so you get the best of both worlds: a WiFi router and access point that connects all of your devices, internet or not, and lots of services that you can use while you are on the go. I’ll be using this device full time when I travel, so I will be sure to report back any modifications I make to this new “mobile homelab, forbidden travel router, plus plus, ultimate mobile homelab - thingie”.
This has been months in the making, my new Mobile HomeLab! It's a device that I can take with me to provide secure internet access for all of my devices.
— Techno Tim (@TechnoTimLive) June 18, 2023
Check it out!👉https://t.co/2F5gG5cZn2 pic.twitter.com/5zsXS4VG9X
🛍️ Check out the new Merch Shop at https://l.technotim.live/shop
Building a multi-architecture CPU Kubernetes cluster is easier than you think with k3s. In this video we’ll build a Raspberry Pi 4 with an ARM CPU and add it to our existing x86/x64/amd64 CPU Kubernetes cluster. Our foundation will be Ubuntu for ARM, then we’ll add k3s, and then join it to our cluster. We’ll also discuss how this works with Docker images built for specific CPU types, along with some build configurations and requirements for your Pi.
Happy Pi Day!
```sh
k3s --version
```

get `k3s` token from a server

```sh
sudo cat /var/lib/rancher/k3s/server/node-token
```

set `k3s` version (the value you got from `k3s --version`)

```sh
export INSTALL_K3S_VERSION=v1.20.5+k3s1
```

install `k3s` as an agent using your token from above

```sh
curl -sfL https://get.k3s.io | K3S_URL=https://example.local.com:6443 K3S_TOKEN=hksadhahdklahkadjhasjdhasdhasjk::server:asljkdklasjdaskdljaskjdlasj sh -
```

check all `k3s` nodes from your workstation

```sh
kubectl get nodes
```

get all pods running on a specific node (`elio`)

```sh
kubectl get pods --all-namespaces -o wide --field-selector spec.nodeName=elio
```

set a label on a node (`elio`)

```sh
kubectl label nodes elio cputype=arm
```

describe a node (`elio`)

```sh
kubectl describe node elio
```

Example pod spec

`nginx-pod.yml`

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  containers:
  - name: nginx
    image: nginx
    imagePullPolicy: IfNotPresent
  nodeSelector:
    cputype: arm   # matches the label set on the node above
```
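To try the spec above, apply it and confirm the scheduler placed the pod on the labeled ARM node (assuming the node was labeled as shown earlier):

```sh
kubectl apply -f nginx-pod.yml
# the NODE column should show the labeled Pi
kubectl get pod nginx -o wide
```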
If I could start my HomeLab all over, what would I choose? Would I choose the same servers, rack, networking, gateway, switch, firewall, my pc conversion, and even my disk shelf NAS? Did I make a good choice or a bad one? Join me as we give each piece of my HomeLab a Keep or Upgrade rating.
A HUGE thanks to Micro Center for sponsoring this video!
New Customer Exclusive, Receive a FREE 256GB SSD in Store: https://micro.center/cff7ca
Check Out Micro Center’s PC Builder: https://micro.center/81b822
Visit the Micro Center Community: https://micro.center/b33782
Find all of my server gear here! https://kit.co/TechnoTim/techno-tim-homelab-and-server-room-upgrade-2021
Imagine all of your favorite operating systems in one place, available anywhere on your network, and you’ll never need to use your flash drive again. That’s the promise of netboot.xyz, a network boot service that lets you install or boot to any operating system simply by booting to the network.
Disclosures:
Don’t forget to ⭐ netboot.xyz on GitHub!
See this post on how to install `docker` and `docker compose`

create folders `netboot_xyz`, `netboot_xyz/assets`, `netboot_xyz/config`
```sh
mkdir netboot_xyz
cd netboot_xyz
mkdir assets
mkdir config
```
Copy yaml to server or portainer, etc

linuxserver.io container image

```yaml
---
version: "2.1"
services:
  netbootxyz:
    image: lscr.io/linuxserver/netbootxyz:latest
    container_name: netbootxyz
    environment:
      - PUID=1000 # current user
      - PGID=1000 # current group
      - TZ=Etc/UTC
      # - MENU_VERSION=1.9.9 # optional, sets menus version, unset uses latest
      - PORT_RANGE=30000:30010 # optional
      - SUBFOLDER=/ # optional
    volumes:
      - ./config:/config
      - ./assets:/assets # optional
    ports:
      - 3000:3000
      - 69:69/udp
      - 8080:80 # optional
    restart: unless-stopped
```
Official container image

```yaml
---
version: "2.1"
services:
  netbootxyz:
    image: ghcr.io/netbootxyz/netbootxyz
    container_name: netbootxyz
    environment:
      # - MENU_VERSION=2.0.47 # optional, sets menus version, unset uses latest
    volumes:
      - ./config:/config # optional
      - ./assets:/assets # optional
    ports:
      - 3000:3000
      - 69:69/udp
      - 8080:80 # optional
    restart: unless-stopped
```
bring up stack

```sh
docker compose up -d
```
check to be sure it’s running

```sh
docker ps
```

should see something like:

```
CONTAINER ID   IMAGE                                   COMMAND   CREATED          STATUS          PORTS                                                                                                                 NAMES
83e6c5192156   lscr.io/linuxserver/netbootxyz:latest   "/init"   14 seconds ago   Up 12 seconds   0.0.0.0:69->69/udp, :::69->69/udp, 0.0.0.0:3000->3000/tcp, :::3000->3000/tcp, 0.0.0.0:8080->80/tcp, :::8080->80/tcp   netbootxyz
```
Check the logs
```
➜ netboot_xyz docker logs netbootxyz
[migrations] started
[migrations] no migrations found
───────────────────────────────────────

 ██╗     ███████╗██╗ ██████╗
 ██║     ██╔════╝██║██╔═══██╗
 ██║     ███████╗██║██║   ██║
 ██║     ╚════██║██║██║   ██║
 ███████╗███████║██║╚██████╔╝
 ╚══════╝╚══════╝╚═╝ ╚═════╝

 Brought to you by linuxserver.io
───────────────────────────────────────

To support the app dev(s) visit:
netboot.xyz: https://opencollective.com/netbootxyz/donate

To support LSIO projects visit:
https://www.linuxserver.io/donate/

───────────────────────────────────────
GID/UID
───────────────────────────────────────

User UID: 1000
User GID: 1000
───────────────────────────────────────

[netbootxyz-init] Downloading Netboot.xyz at 2.0.73
[custom-init] No custom files found, skipping...
crontab: can't open 'abc': No such file or directory
listening on *:3000
[ls.io-init] done.
4Lg88gNm_wqDORftAAAB connected time=1699460581160
```
You can now browse to the container’s homepage
http://192.168.10:3000/
You should see a list of PXE boot menu items and the option to cache the pre-boot environment locally
If you want to serve the files from a local mirror, you can edit the boot.cfg
file from the boot menus
change:

```
set live_endpoint https://github.com/netbootxyz
```

to:

```
set live_endpoint http://192.168.10.125:8080
```
Keep in mind that you will not be able to boot from any environments you haven’t downloaded.
Since I cannot cover configuring every DHCP service out there, I will cover the basics. Fortunately linuxserver.io has many routers covered as well as the official netboot.xyz docs.
UniFi UDM Pro / SE
Settings > Network > Choose Network > DHCP Service Management > Show Options
Here you’ll want to check “Network Boot” and fill in the server IP and the file name
For me, it’s:
Server IP: 192.168.10.125
Filename: netboot.xyz.kpxe
(this is the default BIOS option)
Save.
Ideally, we would offer a PXE boot file per architecture; the UDM supports this, though not in the UI. Follow these instructions to do it via the CLI
If you’re up to it, here’s my config:
```
#
# Generated automatically by
#

# Configuration of PXE boot for '

# The boot filename, Server name, Server Ip Address
dhcp-boot=netboot.xyz.kpxe,netboot.xyz,192.168.10.125

# inspect the vendor class string and match the text to set the tag
dhcp-vendorclass=BIOS,PXEClient:Arch:00000
dhcp-vendorclass=UEFI32,PXEClient:Arch:00006
dhcp-vendorclass=UEFI,PXEClient:Arch:00007
dhcp-vendorclass=UEFI64,PXEClient:Arch:00009

# Set the boot file name based on the matching tag from the vendor class (above)
dhcp-boot=net:UEFI32,netboot.xyz.efi,netboot.xyz,192.168.10.125
dhcp-boot=net:BIOS,netboot.xyz.kpxe,netboot.xyz,192.168.10.125
dhcp-boot=net:UEFI64,netboot.xyz.efi,netboot.xyz,192.168.10.125
dhcp-boot=net:UEFI,netboot.xyz.efi,netboot.xyz,192.168.10.125
```
Verify

```sh
cat /run/dnsmasq.conf.d/PXE.conf
```
Copy the file to `/run/dnsmasq.conf.d/PXE.conf` on the UDM, then run

```sh
kill `cat /run/dnsmasq.pid`
```
You’ll have to do this on each reboot
If you don’t want to do this, you’ll have to change the image file each time.
To boot to the network you’ll need a BIOS and NIC that supports it
See the boot menu, choose OS and go!
Word of caution: there might be some that do not work; this is a moving target. For example, Ubuntu 23.10 isn’t working for me right now, but could soon. Other OSes are fine. You may need to try different NICs if you are using virtualization.
Requirements
Install Windows ADK for Windows 10/11.
Install Windows PE add-on for the Windows ADK.
Run Deployment and Imaging Tools Environment as administrator from the start menu.
Navigate to folder

```
cd "..\Windows Preinstallation Environment\amd64"
```
Mount the Windows PE boot image.

```
md C:\WinPE_amd64\mount
Dism /Mount-Image /ImageFile:"en-us\winpe.wim" /index:1 /MountDir:"C:\WinPE_amd64\mount"
```
Copy files

```
Xcopy "C:\WinPE_amd64\mount\Windows\Boot\EFI\bootmgr.efi" "Media\bootmgr.efi" /Y
Xcopy "C:\WinPE_amd64\mount\Windows\Boot\EFI\bootmgfw.efi" "Media\EFI\Boot\bootx64.efi" /Y
```
Unmount the WinPE image, committing changes.

```
Dism /Unmount-Image /MountDir:"C:\WinPE_amd64\mount" /commit
```
Delete the temp folder that was created earlier (so we don’t get an error when copying)

```
rmdir /s C:\WinPE_amd64
```
Create working files

```
copype amd64 C:\WinPE_amd64
```
Create a bootable WinPE ISO

```
MakeWinPEMedia /ISO C:\WinPE_amd64 C:\WinPE_amd64\WinPE_amd64.iso
```
Then copy the contents of `WinPE_amd64.iso` to the netboot.xyz container’s `/assets/WinPE/x64/` folder (you’ll need to create the folders first)
Then you’ll want to create an SMB share named `Windows` in your environment. You can create or download a Windows ISO by visiting Microsoft’s site. Once you have your Windows ISO, extract the files to the root of the `Windows` share you just created above.
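If that share lives on a Linux box running Samba, the `Windows` share could be defined in `smb.conf` roughly like this (the path and user below are assumptions for illustration):

```
[Windows]
   path = /srv/windows-iso
   read only = yes
   guest ok = no
   valid users = pxeuser
```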
Now we need to configure netboot.xyz. In the netboot.xyz UI, update `boot.cfg` to `set win_base_url http://192.168.10.125:8080/WinPE` and save.
Now you can PXE boot to the network (be sure you are using the EFI boot image and your device supports UEFI) and then choose Windows from the netboot.xyz menu.
This should boot to a DOS prompt in the Windows Pre-boot Environment
Type

```
wpeinit
```
then type

```
net use F: \\<server-ip-address>\<share-name> /user:<server-ip-address>\<username-if-needed> <password-if-needed>
```
If you want it to prompt for a username and password, remove the `user` argument

```
net use F: \\<server-ip-address>\<share-name>
```
This will map the `F:` drive to the `Windows` share where the Windows ISO was extracted
then type

```
F:\setup.exe
```
Then hit enter and the Windows installer should launch!
I’d love to also automate the mounting of the share however I haven’t found a clean way to do it yet. If you know, let me know in the comments below and I can add it!
Back in my tech support days I thought that if I had PXE network boot at home, that I "made it". Well, that day has come! This past week I learned all about netboot xyz! I can now boot and install any operating system over the network!
— Techno Tim (@TechnoTimLive) November 11, 2023
Check it out! https://t.co/PzPmYzKWLH pic.twitter.com/FQr4W4TPtp
Today I look at 2 (or 3, depending on how you count them) UPS systems from Tripp Lite and Eaton. These UPS devices couldn’t be more different, but they are awesome nonetheless. Each has its own unique capabilities and features. Which one will you choose when looking for your next UPS? Join me as we walk through and review these UPS systems and rack them in my new rack!
Huge THANK YOU to Eaton / Tripp Lite for sending these UPS systems. If you’re looking for a new UPS for home or work, you should totally check them out!
Tripp Lite
Eaton
Be sure to check out (and star) David’s repo with an automated NUT server install!
⭐ https://github.com/dzomaya/NUTandRpi
00:00 - What should I protect with my UPS?
02:16 - Tripp Lite SmartPro UPS Review and Specs
03:24 - Tripp Lite 36v Battery Pack Review and Specs
04:29 - Tripp Lite SmartPro UPS Configuration
05:23 - Eaton 5P 1550 UPS Review and Specs
07:43 - Eaton 5P 1550 UPS Configuration
08:47 - Rack mounting the UPSes
10:53 - My Thoughts and Monitoring and Alerting Solutions
13:01 - Stream Highlight - “Testing in Production”
Today I look at 2 (or 3 depending on how you count them) UPS systems. These UPS devices couldn't be any different but they are awesome nonetheless.
— Techno Tim (@TechnoTimLive) November 26, 2022
Which UPS do you use?
(I think I had too much fun creating this thumbnail)
Check it out!https://t.co/1kxDvSeGt7 pic.twitter.com/fNiODnanAR
Are you thinking about ditching Google apps or looking for a Dropbox replacement? Are you ready to self-host your own productivity platform? Well, Nextcloud may be for you! In today’s tutorial we’ll walk through setting up Nextcloud with Docker and Kubernetes. We’ll also walk through some of the new features, installing apps from the app store, exposing Nextcloud publicly, as well as setting up 2FA (two-factor authentication) with TOTP clients like Google Authenticator and Authy.
Today I got rid of the slow and pesky microSD card in my Pi and replaced it with something MUCH faster in my Pi LED Panel. Don’t know what my Pi LED Panel is? Check it out! This is my first video on the new channel @TechnoTimTinkers 🎉
Disclosures:
Things mentioned in the video (some are affiliate links) :
(Affiliate links may be included in this description. I may receive a small commission at no cost to you.)
This project was built using rpi-rgb-led-matrix
After getting the new power supply, I noticed a few odd things. I read the Pi hat documentation over and over, and nowhere did it mention that anything else was needed other than a large power supply. I thought for sure it was a hardware issue and I was in over my head. I dropped a message in a Discord that both Jeff Geerling and I are in, and he mentioned checking out this post
In that post it suggests adding `max_usb_current=1` to your `config.txt`.
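For reference, the added line sits on its own line in the boot config (on Raspberry Pi OS the file usually lives at `/boot/config.txt`; the path may differ on other distros):

```
# /boot/config.txt
max_usb_current=1
```

A reboot is needed before the setting takes effect.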
I tested it and sure enough the Pi can now power the LED panel, the Pi, and the USB drive all from a single power supply connected to the hat. 🎉
Thanks Jeff!
I just posted my first video on the new (3rd) channel "Techno Tim Tinkers" 🎉
It's a project that I have been putting off forever... https://t.co/aL1k98Z9rO
— Techno Tim (@TechnoTimLive) May 17, 2024
🤝 Support me and help keep this site ad-free!
Here’s a quick way to automate your battery backups and UPSes with an open source service called NUT (Network UPS Tools) running on a Raspberry Pi.
Be sure to check out (and star) the repo with an automated NUT server install!
⭐ https://github.com/dzomaya/NUTandRpi
Be sure you have a Raspberry Pi or any machine running Debian / Ubuntu Linux. Then plug in your UPS via USB and SSH into your Pi.

Then download the script.

```shell
wget https://raw.githubusercontent.com/dzomaya/NUTandRpi/main/scripts/nutinstall.sh
```

Make the script executable.

```shell
sudo chmod +x nutinstall.sh
```

Run the script.

```shell
sudo ./nutinstall.sh
```
Answer a few questions.
Be sure to keep your SNMP community string safe and treat this like a password.
You can now access NUT in a browser by going to:
http://yourRaspberryPiIPaddress/cgi-bin/nut/upsstats.cgi
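Beyond the web page, NUT’s command-line client `upsc` prints simple `variable: value` pairs that are easy to scrape in scripts. The output below is an illustrative sample, and the UPS name `ups` is an assumption; use whatever name the install script configured:

```shell
# Illustrative sample of upsc output; on a live system you would run:
#   upsc ups@localhost
STATUS='battery.charge: 87
battery.runtime: 1320
ups.status: OL'

# Pull the battery charge out, e.g. for a shutdown or alert script
CHARGE=$(echo "$STATUS" | awk -F': ' '/^battery\.charge/ {print $2}')
echo "battery at ${CHARGE}%"
```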
You can also query your device using SNMP:

```shell
snmpwalk -v2c -c yourSNMPv2cCommunity yourRaspberryPiIPaddress .1.3.6.1.4.1.8072.1.3.2.4.1.2
```
To see advanced configuration and configuring NUT Server and NUT client, see my Network UPS Tools (NUT) Ultimate Guide.
I figured I’d share my quick #homelab video on the open source NUT Server here too! pic.twitter.com/e7wA0fNGk4
— Techno Tim (@TechnoTimLive) November 29, 2022
Do you want the best settings for OBS in 2020? This is the ultimate OBS settings guide with the BEST OBS settings for streaming Fortnite, Just Chatting, APEX Legends, PUBG, or really ANY game. This video includes the best settings for quality, frame rate, bit rate, and audio for streaming at 60 frames per second (FPS) at 1080p (max settings for streamers). This guide works with OBS Studio, Streamlabs OBS (SLOBS), and OBS.LIVE (from StreamElements). I also include various Windows settings and tweaks to give you the best performance while streaming. I even cover the new NVENC settings (NVIDIA NVENC H.264 (new)) for NVIDIA graphics cards with Turing architecture. This is a great guide for anyone who wants to tweak their existing settings or has just installed OBS for the first time with the default settings.
I decided to tear apart our office and convert my old IKEA hack table tops into a standing desk. Oh, and I also clamped on 3 × 27″ 1440p gaming monitors while I was at it 😉
Meet LittleLink & LittleLink-Server, a DIY, self hosted, and open source alternative to the popular service Linktree. This web site inside of a container allows you to create and host your own site with all of your social information and links, giving your followers multiple ways to connect with you! In this video we talk about what LittleLink-Server is, what it does, and how to create your own site using this Docker container with only a few environment variables; no knowledge of web development required. Be sure to check the documentation for details!
See this post on how to install docker and docker-compose.

```shell
mkdir littlelink-server
cd littlelink-server
touch docker-compose.yml
```
If you’re using Docker compose (see the GitHub repo for the latest file)
docker-compose.yml
```yaml
---
version: '3'
services:
  little-link:
    image: ghcr.io/techno-tim/littlelink-server:latest
    container_name: littlelink-server
    environment:
      - META_TITLE=Techno Tim
      - META_DESCRIPTION=Techno Tim Link page
      - META_AUTHOR=Techno Tim
      - THEME=Dark
      - FAVICON_URL=https://pbs.twimg.com/profile_images/1286144221217316864/qIAsKOpB_200x200.jpg
      - AVATAR_URL=https://pbs.twimg.com/profile_images/1286144221217316864/qIAsKOpB_200x200.jpg
      - AVATAR_2X_URL=https://pbs.twimg.com/profile_images/1286144221217316864/qIAsKOpB_400x400.jpg
      - AVATAR_ALT=Techno Tim Profile Pic
      - NAME=TechnoTim
      - BIO=Hey! Just a place where you can connect with me!
      - GITHUB=https://l.technotim.live/github
      - TWITTER=https://l.technotim.live/twitter
      - INSTAGRAM=https://l.technotim.live/instagram
      - YOUTUBE=https://l.technotim.live/subscribe
      - TWITCH=https://l.technotim.live/twitch/
      - DISCORD=https://l.technotim.live/discord
      - TIKTOK=https://l.technotim.live/tiktok
      - KIT=https://l.technotim.live/gear
      # - FACEBOOK=https://facebook.com
      # - FACEBOOK_MESSENGER=https://facebook.com
      # - LINKED_IN=https://linkedin.com
      # - PRODUCT_HUNT=https://www.producthunt.com/
      # - SNAPCHAT=https://www.snapchat.com/
      # - SPOTIFY=https://www.spotify.com/
      # - REDDIT=https://www.reddit.com/
      # - MEDIUM=https://medium.com
      # - PINTEREST=https://www.pinterest.com/
      # - EMAIL=you@example.com
      # - EMAIL_ALT=you@example.com
      # - SOUND_CLOUD=https://soundcloud.com
      # - FIGMA=https://figma.com
      # - TELEGRAM=https://telegram.org/
      # - TUMBLR=https://www.tumblr.com/
      # - STEAM=https://steamcommunity.com/
      # - VIMEO=https://vimeo.com/
      # - WORDPRESS=https://wordpress.com/
      # - GOODREADS=https://www.goodreads.com/
      # - SKOOB=https://www.skoob.com.br/
      - FOOTER=Thanks for stopping by!
    ports:
      - 8080:3000
    restart: unless-stopped
    security_opt:
      - no-new-privileges:true
```
If you’re running docker only
Docker command
1
+2
+3
+4
+5
+6
+7
+8
+9
+10
+11
+12
+13
+14
+15
+16
+17
+18
+19
+20
+21
+22
+23
+
docker run -d \
+ --name=littlelink-server \
+ -p 8080:3000 \
+ -e META_TITLE='Techno Tim' \
+ -e META_DESCRIPTION='Techno Tim Link page' \
+ -e META_AUTHOR='Techno Tim' \
+ -e THEME='Dark' \
+ -e FAVICON_URL='https://pbs.twimg.com/profile_images/1286144221217316864/qIAsKOpB_200x200.jpg' \
+ -e AVATAR_URL='https://pbs.twimg.com/profile_images/1286144221217316864/qIAsKOpB_200x200.jpg' \
+ -e AVATAR_2X_URL='https://pbs.twimg.com/profile_images/1286144221217316864/qIAsKOpB_400x400.jpg' \
+ -e AVATAR_ALT='Techno Tim Profile Pic' \
+ -e NAME='TechnoTim' \
+ -e BIO='Hey! Just a place where you can connect with me!' \
+ -e GITHUB='https://l.technotim.live/github' \
+ -e TWITTER='https://l.technotim.live/twitter' \
+ -e INSTAGRAM='https://l.technotim.live/instagram' \
+ -e YOUTUBE='https://l.technotim.live/subscribe' \
+ -e TWITCH='https://l.technotim.live/twitch' \
+ -e DISCORD='https://l.technotim.live/discord' \
+ -e TIKTOK='https://l.technotim.live/tiktok' \
+ -e KIT='https://l.technotim.live/gear' \
+ --restart unless-stopped \
+ ghcr.io/techno-tim/littlelink-server:latest
+
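If that command gets long, `docker run` also accepts `--env-file`, so you can keep the variables in a plain file instead of repeating `-e`. A sketch; `littlelink.env` is just an example name, and the values below are abbreviated from the full list above:

```shell
# Put the variables in a file, one KEY=value pair per line (no quoting needed)
cat > littlelink.env <<'EOF'
META_TITLE=Techno Tim
META_DESCRIPTION=Techno Tim Link page
THEME=Dark
NAME=TechnoTim
EOF

# Then the run command shrinks to:
#   docker run -d --name=littlelink-server -p 8080:3000 \
#     --env-file littlelink.env \
#     --restart unless-stopped \
#     ghcr.io/techno-tim/littlelink-server:latest
```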
If you’re using Rancher, Portainer, Open Media Vault, Unraid, or anything else with a GUI, just copy and paste the environment variables above into the form on the web page.
Cut the cord and get free over the air TV with Plex! Today we’ll dive deep into selecting a TV tuner, an antenna, dialing in your TV signal, and configuring Plex to help you get the most out of Live TV and DVR!
A huge THANK YOU to Plex for sponsoring this video during Plex Pro Week!
Get started with Plex today! https://www.plex.tv/
There’s this great free resource out there that many people aren’t taking advantage of. It’s something that most homes can access absolutely free and that’s over the air TV. Now I know what you’re thinking, what year is this?? I know, it sounds odd talking about over the air TV in this day and age but I found a way to modernize it and make it more accessible with a little bit of hardware and a little bit of software from Plex.
I have been using Plex for almost 10 years, and one of my favorite features that isn’t talked about much is Live TV & DVR. I’m not talking about the free TV channels that Plex offers, although I will touch on those a little bit later; I am talking about over the air channels like ABC, NBC, PBS, and many others where you can watch sports, local news, and more. And with ATSC 3.0, or NextGen TV, rolling out in some areas, you can be sure that you’re getting the clearest broadcast possible: up to 4K resolution and uncompressed over the air, vs. 1080p and compressed from most providers. You’ll need a few simple things that I will cover in this video so you can start watching and recording Live TV today. So, full disclosure: Plex is the sponsor of the original video, and I want to thank them for asking me to share my deep dive on Live TV & DVR with Plex.
Plex can be a powerful DVR to record or watch your favorite TV shows
With a TV Tuner, an antenna, and a Plex Pass you can turn your media server into a powerful DVR to record your favorite shows or watch them live, even on the go. You can record any show in your area, whether that be sports, local news, or your favorite TV series and watch it from any device. Plex has apps for mobile devices, SmartTVs, gaming consoles, Apple TV, on the web and more. It’s hard to find a device that doesn’t have Plex.
Watching Live TV is as simple as launching the app and picking a channel. This will stream the channel from your Plex Media Server to your device. After the Live TV stream starts, you can pause or even rewind Live TV. But if you start watching something that’s currently being recorded, you have the option to watch from the beginning or watch live. If you choose to watch from the start you can skip through commercials and get caught up to the live broadcast. This is a little life hack I use to watch something that’s live without the commercials. For something you’ve already recorded, there’s also this awesome feature that will allow you to skip commercials or remove them altogether. This is a great time saver and we’ll get this set up today too a little bit later.
One of the best parts of Plex Live TV is you get the best EPG out there. EPG stands for electronic programming guide and it lists all shows for all channels along with some additional data like episode information and more. It’s how I know that Jeopardy! is on NBC at 4:30 local time, or that the episode of Nature is a rerun. It’s also what helps populate their powerful search in the Live TV section. I can’t stress enough how important it is to have a solid EPG when using a DVR, without accurate data you could schedule the wrong show to record or miss your show altogether. The EPG is even interactive on some clients, giving you a picture-in-picture guide while you browse the guide looking for the next thing to watch. And if you only watch a handful of local channels you can add channels to your favorites so you can quickly access them. Plex Live TV lets you prioritize your recordings so that I don’t get in trouble recording a rerun of Nature instead of the latest episode of The Bachelor. We’ll talk more about the EPG, recording priority, how not to miss your favorite shows when scheduling recordings a little bit later.
Plex’s EPG is the best Electronic Programming Guide out there!
So, what do we need in order to have our very own DVR? I know all of this sounds complicated but it’s much easier than you think. You’ll need a couple of things, all of which I use in my own home and have been recording TV for years, so you can be sure that this setup will also work for you.
First, you’re going to need a Plex server and a Plex pass. Next you’ll need a TV Tuner and an antenna. I’ve used a lot of TV Tuners in the past but the best tuner by far is one from SiliconDust, it’s the HDHomeRun Flex 4k. This nice little device sits on your network and converts a TV signal into a video stream so that your Plex Media Server can consume it and even change the channels when requested. This one in particular has 4 tuners inside that allows you to watch or record up to 4 channels at once. This one also supports the new ATSC 3.0 NextGen TV that we talked about earlier - so it’s future proof!
Silicon Dust’s HDHomerun TV Tuner will get you up and running in no time!
Another important thing you’ll want to have is an antenna that connects to our tuner. Antennas come in all shapes and sizes and depending on where you live, you might be able to get by with a small indoor antenna. If you’re in the US, there’s a great site to help you determine your distance from TV towers which might help you choose the right antenna. You can visit the site, enter your location, and see how far away you are from TV towers, their location, and get an estimate of the signal strength to your location. Based on this legend you can make a better decision about the antenna you choose.
Selecting an antenna is key to getting the right reception!
Here are my recommendations for choosing an antenna:
Once you have your antenna and tuner, connect your tuner to the network, connect your antenna to the coaxial terminal, and finally connect the power to the tuner. A word of caution: you might be tempted to buy an amplifier, but I would recommend against it until you truly know that you have a weak signal, since you run the risk of introducing noise and interference. We’ll see this later on, and from there you can determine if you need a signal amplifier (affiliate link) or not.
Once your tuner is on the network, visit the tuner’s web page by typing its IP address into a browser. Here you will see the landing page for your device. If you see a message to update your firmware, I would update it before continuing; it will only take a minute. Plus, who doesn’t love updating firmware 😀
Once it’s updated you can see the tuner status and more information about your tuner. Next we want to see which channels our tuner can detect. We can do this by going into the channel lineup and clicking “Detect Channels”.
This will scan for all of the channels you can pick up using your antenna. Now your mileage may vary depending on your area and how close you are to the TV towers, but it’s a good idea to compare the results to what you expect. If you aren’t seeing the channels you expect, you might need to adjust your antenna or think about getting a signal amplifier, however I’ll show you how to check the signal strength in a little bit.
One thing you might have noticed is that little plug that I have connected to my HDHomeRun. This is a signal filter (affiliate link) that will filter out LTE and 5G signals from the line. I’ve noticed that as more cell phone towers go up, the more they can interfere with my antenna, so I popped this little filter on to filter out those frequencies. If you’re wondering what interference looks like, it’s that weird pixelated blocking that you see sometimes when watching TV. This isn’t going to magically make channels appear out of nowhere or boost the signal, it’s just there to take away the noise created by cell phone towers.
Once we’ve got our tuner all set up, make note of the IP address because we’ll need this for configuring Plex.
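As a side note, HDHomeRun tuners also publish their channel list as JSON at `http://<tuner-ip>/lineup.json`, which is handy if you ever want to script against the tuner. The response looks roughly like this (an abridged, illustrative sample, not live output):

```shell
# Abridged sample of what the tuner's /lineup.json returns (illustrative)
LINEUP='[{"GuideNumber":"4.1","GuideName":"NBC"},{"GuideNumber":"11.1","GuideName":"PBS"}]'

# Quick-and-dirty channel name extraction without jq
echo "$LINEUP" | grep -o '"GuideName":"[^"]*"' | cut -d'"' -f4
```

On a live tuner you would fetch the JSON with `curl http://<tuner-ip>/lineup.json` first.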
You might need to pick up an LTE filter to filter out 5G noise!
Now that we have our TV tuner and antenna set up, we can configure them in Plex! You’ll need to sign in to your Plex Media Server and then go to Settings. In the Manage section you should see Live TV & DVR. Here you’ll want to configure a new tuner. When you add a new tuner, Plex will search for it, and in most cases it will find your device. If it doesn’t, you can add it manually by typing in the IP address of your tuner. Once it’s added you’ll need to set a few things for Plex: choose Antenna, your home country, and your postal code. The postal code is needed to download the EPG. Once you’ve set this, you will see the list of all the channels found earlier. You can scan again or even remove channels, however I wouldn’t remove channels here; I would just create favorites later. If you’re happy with the list, click Continue. Plex will start to download the latest guide, and after a few minutes we should see all of the TV shows that are available!
The channel guide can be found in the Live TV section. Here we can see a list of all of the shows we can watch or record. This will look different on different clients but the experience is mostly the same. The live TV feature is pretty self explanatory, we can scroll through channels and when we see something we want to watch, we just click on it. This will start a live stream. You can even pause or rewind a Live TV show, pretty cool.
So you can see that I have a pretty good signal and quality, but what do you do if you don’t have the greatest picture quality? Earlier I mentioned that we could check our signal strength for a broadcast to determine if we need to adjust our antenna or think about a signal booster. This might work differently depending on your tuner, but if you’re using a SiliconDust tuner like mine, the easiest way I have found is to start a live show and then go to your tuner’s homepage while the show is playing. Once there, go into Tuner Status and choose the tuner that’s in use, which you can see in the Summary. Click on it and you can see its status. The most important stat here is Signal Quality; the higher the better. If you notice that it’s low and your TV stream isn’t the greatest, you can try adding a signal booster or a line filter to clean it up. I will have links to this and all of the other hardware we talked about today in the video description.

So, back to recording from the channel guide. After selecting a show, if you want to record it all you have to do is click the record button. From here you can choose whether you want to record new airings only or new and repeats, and which library you want to save it to. I have multiple libraries, TV Shows and Recorded TV, because I wanted to separate the two, but that’s totally up to you. It’s as simple as creating a new library and setting the type to TV Shows. You will then see this option when scheduling recordings.
Scheduling shows to record couldn’t be easier with their great EPG!
You also have some additional settings in “Show Advanced” but we won’t change them here, we’ll apply these to all recorded TV a little bit later. After clicking record we can now see that we have a record icon on the show, letting us know that it’s currently being recorded. You also have lots of quick actions when hovering or clicking on shows where you can schedule recordings or even cancel recordings, it’s pretty handy.
One thing you might have noticed is the categories across the top. Most of these are self-explanatory, however there’s one named Plex Channels that is different from the rest of the TV channels. These are FAST channels or Free Ad-Supported Streaming Television. It’s streaming TV that can be watched at any time. They aren’t channels that you can find over the air from your local TV stations, but channels that stream content 24/7, like for instance if you wanted to binge watch Top Gear or the Price is Right classics, there’s a channel for that. But, back to recorded TV.
Once a show has been recorded it will be in the library you set for recorded TV; the default is TV Shows. Once here you’ll see a similar experience to Movies: a “Recommended” section, a “Library” section, a “Category” section, and view controls for your media. Clicking on your show will bring you to that show, and from there you can see all of the recorded seasons; click into a season to get to your episodes. Once you are on your episode you can see more details about it, like the date it aired, how long the recording is, the rating, and even details about the episode. You can also switch the audio track to another language and choose your subtitles if the broadcast supports it. After clicking play, the video will start and you can watch it as you normally would!
Watching recorded TV is just like watching other media, complete with all of the artwork and metadata you’re used to seeing with other content!
One of the best features that comes with Live TV & DVR is Intro Skip and Commercial Skip. If enabled Plex can detect intros, commercials, and even credits to help you watch more TV without interruptions. When playing a show where an intro is detected, you will see a skip intro button in the bottom right corner that you can click on and it will skip right to the show. This also works for commercials too! When a commercial break starts you will see a button to skip Ads which will skip right to where the show picks back up! Now it’s not perfect but I’d say it’s pretty close for the shows that I watch. It’s not enabled by default so let’s enable it!
We can do this in our library settings, which you can find in the Manage section. If we edit our Recorded TV library and go to “Advanced” we should see a few settings in here that help us skip unwanted content. Be sure that “Enable Intro detection” and “Enable Credits detection” are turned on, and then for the “Ads detection” setting you’ll want to choose “For recorded items”. This enables ad detection for new recordings. If you’ve already recorded TV with Plex or with something else, you can turn on “For all items” to force a scan of all items in this folder.
Great, once that’s turned on it should now add these markers so we can skip unwanted content. Detection does take a few minutes and only starts after a show is done recording. Also, you won’t see new recordings in your library until detection is done. Now we can skip all of that unwanted content and watch TV like a pro. Like this show right here: if we start playing, you can see it detects the intro, which we can skip through if we like; once we get to a commercial break it will prompt us to skip it if we like, and if it detects credits it will do the same.
Skipping commercials, intros, and credits makes watching OTA a breeze. No more wasting time!
Now there are some additional settings you can choose for skipping commercials, like removing them altogether. This is in the DVR settings where we can set our defaults for new recordings. In the “Detect Commercials” setting you can choose from “Disabled”, “Detect”, “Mark for Skip”, or “Delete”. I would recommend setting this to “Mark for Skip” rather than “Delete”, because deleting is a destructive action, and while Plex commercial skip is really good at detecting commercials, it’s a lot safer to just add markers than to accidentally delete part of your show. As for the rest of the settings in here, I have only adjusted a few. I set the resolution to Prefer HD, I don’t replace lower resolution items, I do allow partial airings, and I don’t adjust the minutes before and after a recording. Shows are pretty good at starting and ending on time, but if you find that you want to record a minute or two before and after, adjust the setting here. Live broadcasts that go over the scheduled time, like sports, are a good reason to add some padding at the end of the recording so you don’t miss overtime! I also enable a refresh of the guide data during the maintenance window, which for me is 2am.
So now that we have scheduled some recordings, how do we make sure that my reruns of Nature and NOVA don’t get scheduled instead of my wife’s show The Bachelor? (You’ll only make that mistake once.) We can do this easily by adjusting our recording priority. If we go back into the Live TV area and choose the DVR Schedule, we can see everything that’s scheduled to record, and on the far right we can see our Recording Priority. Here we can drag and drop to reorder our shows, with the highest priority at the top. This helps when there are scheduling conflicts due to the tuners being in use while recording or watching live TV. I have 4 tuners so I rarely have a conflict, but if I did, this is how Plex would choose to prioritize one recording over another. Let’s say I wanted to get in trouble again and prioritize Nature over The Bachelor, Survivor, and even Big Brother; I would just drag Nature above all of those shows. This would ensure that if there were ever a conflict or not enough free tuners, Nature would record instead. OK, let’s move this back before I get in trouble again.
Be sure to adjust your recording priority so that your top shows record over others if you run out of tuners!
Now that we have everything set, there’s also a small feature to make your life a little easier when channel surfing, and that’s Favorites. As you can see from my list of local channels, I have a lot of channels that I almost never watch, but at the same time I don’t want to remove them from my channel lineup. This is where favorites come in. I add all of my favorite channels to my favorites list so that I can easily browse them when I am looking for something to watch. You can even add some of the Plex FAST channels to your favorites too! I really like the BBC Earth channel, PBS Nature, and the Modern Marvels channel, and I have added those to my favorites as well. They have over 600 to choose from, so there’s no shortage of content there. Now, if I switch to my favorites, I can see a quick list of my favorite channels without skipping around through all of the channels I rarely watch.
Now just because I did all of this from a browser doesn’t mean you have to do it here too. Plex’s mobile app works great for watching live TV, previously recorded TV, and even scheduling recordings. There have been many times where my wife and I are out and about and hear about a new show that’s airing soon and it’s now second nature to immediately schedule it to record. I just open up the app, go to Live TV, Search for the show, and schedule the recording. It’s super simple and convenient to do.
The mobile app is not only great for watching shows, but also scheduling a recording while you are on the go!
So that’s everything you need to get started today to record live TV like a pro. I’ve been using this setup with Plex for years and it’s everything I could want in a DVR system, from high quality over the air uncompressed video and audio, to an accurate EPG, to how easy it is to schedule recordings anywhere on any device, to commercial skipping and so much more. I want to thank Plex again for sponsoring this video and thank you for watching. Well, I learned a ton about live TV, antennas, network tuners, and Plex, and I hope you learned something too. And remember if you found anything in this post helpful, don’t forget to share!
STOP Paying for TV and cut the Cord for good. I haven't paid for OTA TV in over a decade, and neither should you. I made a guide on how to build your own DVR with @plex for their Pro Week!
— Techno Tim (@TechnoTimLive) September 18, 2023
Check it out 👉https://t.co/3dZ7Ca9ru3 pic.twitter.com/bmAoSOEl8t
📦 Products in this video
TV Tuner that supports 4K and up to 4 streams!
Flat Indoor Antenna
Indoor Outdoor Antenna with 60+ mile range!
Adjustable Gain TV Antenna Preamplifier with LTE Filter
LTE / 5G Filter
Antenna Splitter with Power Passthrough
Solid Copper Coax Cable (NOT copper clad)
LTE / 5G Filter Alternative
See the whole kit!
https://kit.co/TechnoTim/build-you-own-dvr
In some of my previous Pi-hole videos many of you spotted my blocklist with over a million sites added, and you wondered how you can do the same. Well, today I show you how to block more ads, more tracking, more malware, and more telemetry with these community lists. Bonus (and spoiler alert): I show you how to add 3.5 million!
Thanks to Firebog for the great lists firebog.net
Pi-hole is a wonderful ad blocking DNS server for your network, but did you know you can also use it as a local DNS server? In this fast, simple, and easy guide we’ll walk through how to create DNS entries (A records) for the clients on your network and also set up aliases (pointers to A records) so that you can start using DNS at home instead of relying on IP addresses.
```shell
nslookup juno.home.lan  # lookup by host name
host 192.168.0.100      # reverse lookup
```
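Once your clients resolve through the Pi-hole, a tiny helper like this makes it easy to sanity-check a batch of records. A sketch: `juno.home.lan` and the IP are the example entries from above, and `getent` asks whatever resolver the system is configured to use:

```shell
# Small helper to verify that a name resolves (or an IP reverse-resolves)
check_host() {
  if getent hosts "$1" >/dev/null; then
    echo "$1 resolves"
  else
    echo "$1 MISSING"
  fi
}

check_host juno.home.lan   # the example A record from above
check_host 192.168.0.100   # reverse lookup for an IP
```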
We know you’ve heard of Pi-hole, and we know you are probably aware of how to install it, but… have you tried running it on Docker and Kubernetes using Rancher? Have you configured it for pfSense? Don’t worry, I figured out all the hard stuff for you. So let’s consolidate some hardware and services.
Ubuntu Fix

```shell
sudo apt-get update
```

```shell
sudo apt-get install resolvconf
```

```shell
sudo nano /etc/resolvconf/resolv.conf.d/head
```

Enable & start the service:

```shell
sudo systemctl enable resolvconf.service
```

```shell
sudo systemctl start resolvconf.service
```

Add your upstream DNS (I use Quad9):

```
nameserver 9.9.9.9
```

Update resolv.conf after adding the nameserver:

```shell
sudo resolvconf -u
```

Set the Pi-hole password:

```shell
sudo pihole -a -p
```
If you’re looking to configure the TESmart switch with PiKVM, I finally figured it out and you can read all about it here.
If you don’t know what a KVM switch is, it’s a device that allows you to connect multiple computers to one device so you can control them with a single keyboard, monitor, and mouse. They’re relatively cheap unless you’re looking for an IP-based one that will let you connect over the network. IP KVMs are really expensive, that is, until the PiKVM came along. The PiKVM is a Raspberry Pi-based KVM switch, which allows you to remotely control a computer using a keyboard, mouse, and web browser from anywhere in the world. It runs a web server that lets you connect to any computer attached to it and remote control it as if you’re sitting right in front of it, without plugins or installing any agents on the device. It’s much more capable than a remote desktop client; it can even let you control a machine before it boots so you can change things in the BIOS, or even reformat and reinstall your operating system remotely.
This is all great except for one small thing: unlike a traditional KVM that lets you control multiple devices, the PiKVM is really meant for remote controlling just one. The PiKVM is built with a single HDMI input and a single keyboard/mouse connection, while traditional KVMs have multiple inputs for multiple clients. So how can we scale the PiKVM to more devices, so we aren’t stuck moving it from machine to machine each time we need to remote control one of them?
The little LCD is both cute and functional
You can build a PiKVM yourself by purchasing the PiKVM v3 HAT, which is a great choice if you already have a Raspberry Pi 4 and are willing to assemble it yourself. If you have a Pi Zero, you can even build one with inexpensive parts and no soldering. But chances are you have neither, since Raspberry Pis are nearly impossible to find, and buying a pre-assembled kit is the only option. It was for me, and that’s what I ended up doing. I purchased the PiKVM v3 pre-assembled, which comes with a Raspberry Pi 4 2GB model, a 32GB microSD card, a power supply, an HDMI cable, a USB-C to USB-A cable, and a nice case. The steel case is solid and feels sturdy and industrial. The PiKVM has lots of connections: power, USB devices, mouse and keyboard emulation, RJ45 to serial, HDMI, and even an RJ45 connector for ATX power, which lets me hook it up to a motherboard to power it on and off remotely. The other cool thing you get with the pre-assembled kit is the little LCD screen that shows system information and a cute cat when it boots. It comes pre-flashed with PiKVM installed and ready to go.
Oh, it runs Arch, BTW.
A HUGE THANKS to Micro Center for sponsoring today’s content.
New Customer Exclusive – Free 256GB SSD: https://micro.center/18l
Shop AMD Ryzen 5 3600 & Gigabyte B450M Combo Deal: https://micro.center/69d
Check out Micro Center’s Custom PC Builder: https://micro.center/d35
Submit your build to Micro Center’s Build Showcase: https://micro.center/dsw
But before we connect everything, remember when I said I wanted to connect it to more than one device? Well, I wanted 8 times that: yes, 8 devices. I found an HDMI KVM switcher with a USB hub that I thought would be perfect for it. This TESmart switch allows you to connect up to 8 devices with video and USB and has a built-in USB hub. It also has an RJ45 port that lets me change the input over IP. It’s not an IP KVM itself, which is why I still need the PiKVM, but being able to switch the input over IP is all I needed to automate it with the PiKVM. I thought this device was perfect for remote controlling some of my machines, considering it’s rack mountable. However, there was one catch that would almost ruin this entire project, and I didn’t know about it yet.
I tested the PiKVM on my workbench with an old Intel NUC and it worked fine. I was able to remote control it and even power it on and off using Wake on LAN. I chalked it up as a success and started moving everything into my server rack. It might not seem like it, but mounting the HDMI KVM switch took quite some time. I had to run HDMI and USB-B cables to and from every device I wanted to remote control. I started by wiring up 4 devices, just to be sure it worked with my existing machines before wiring up all 8. But I bet you’re asking: why don’t I just use the IPMI I have on my servers? Well, this isn’t to control my servers; it’s to control my rack mounted PC conversion along with my new Intel NUC cluster. None of those machines have IPMI, so that’s why I needed an IP KVM solution like the PiKVM.
I decided to put my PiKVM on a little shelf for now, but I’ll probably find somewhere more permanent to place it later. Once I had everything hooked up, that’s when the troubles began. I could remote into some of the NUCs running Linux and into the PC conversion, but not the machines running Windows. I thought for sure it was something with my connections, so I checked them all over and over again. It was right around that time that the creator of PiKVM, Max Devaev, reached out to me asking how I was liking the PiKVM, and to let him know if I ran into any troubles because he was interested in advanced use cases. I’m not sure why he thought I was going to be using it in an advanced way… but he was right…
This was my first attempt with a TESmart Switch
I worked with Max on and off for a few days over Discord. He sent snippets of code for me to run and even gave me lots of EDIDs to try. An EDID (Extended Display Identification Data) is metadata that tells a device how to work with the attached monitor. Sometimes we could get the Linux machines working on the TESmart switch but not the Windows machines, and other times the Windows machines worked but not the Linux ones. We eventually discovered that the TESmart HDMI switch would “poison” the connection and send its own EDID rather than the one from the PiKVM.
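If you’re curious how a device identifies the "monitor" it sees, the three-letter vendor code in an EDID (like the LNX and ITE strings below) is packed into two bytes as three 5-bit letters, per the VESA EDID spec. A small sketch of decoding it; the byte values here are illustrative encodings of those vendor strings, not captured from my hardware:

```shell
# Decode the 3-letter EDID manufacturer ID packed into bytes 8-9 (5 bits per letter, A=1)
decode_vendor() {
  val=$(( ($1 << 8) | $2 ))
  for shift in 10 5 0; do
    printf "\\$(printf '%03o' $(( ((val >> shift) & 31) + 64 )))"
  done
  echo
}

decode_vendor 0x31 0xD8   # LNX - the vendor in the PiKVM EDID below
decode_vendor 0x26 0x85   # ITE - the vendor in the TESmart EDID below
```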
TESmart EDID:
```
Section "Monitor"
    Identifier "ITE-FHD"
    ModelName "ITE-FHD"
    VendorName "ITE"
    # Monitor Manufactured week 12 of 2010
    # EDID version 1.3
    # Digital Display
    DisplaySize 620 340
    Gamma 2.20
    Option "DPMS" "false"
    Horizsync 13-46
    VertRefresh 23-61
    # Maximum pixel clock is 170MHz
```
PiKVM EDID
```
Section "Monitor"
    Identifier "PiKVM"
    ModelName "PiKVM"
    VendorName "LNX"
    # Monitor Manufactured week 28 of 2011
    # EDID version 1.3
    # Digital Display
    # Display Physical Size not given. Normal for projectors.
    Gamma 2.20
    Option "DPMS" "false"
    Horizsync 15-46
    VertRefresh 59-61
    # Maximum pixel clock is 150MHz
    # Not giving standard mode: 256x160, 60Hz
```
At this point, I had to cut my losses and go with a smaller, non-rack-mountable, but more compatible EZCOO KVM switch, and I have to say, it’s fantastic.
This is the EZCOO HDMI KVM switch, a 4-port switch that routes 4 HDMI inputs to a single display, which in this case is the PiKVM. It also has a built-in USB 3 hub, which is awesome for plugging in USB devices that follow along when you switch inputs. It has 4 HDMI inputs and a USB 3 input that you’ll connect to each machine, plus one HDMI output and dedicated USB ports for a keyboard and a mouse. We won’t be using those specific USB ports, because we’ll be using the PiKVM’s mouse and keyboard emulation plugged into a generic USB port.
The real magic of this device is the micro USB management port on the side that the PiKVM can use to control and toggle the inputs automatically, giving us a way to switch between all of our connected devices without manually pressing the input button. As nice as this device is, I really wish they made an 8-port rack mountable version, because I want to control more than 4 devices without swapping them out or daisy chaining, which is why I wanted the TESmart switch in the first place.
Oh, speaking of the TESmart: after working with Max for a while on that device, he mentioned it might work with the new v4 version of the PiKVM, which just recently launched on Kickstarter. He said he was going to send one of their prototypes to test, so fingers crossed it works. I’ll be sure to create a v4 video once it’s released, and hopefully it supports the TESmart switch.
This EZCOO is small, compact, and 100% compatible
Now that I had everything working the way it should, it was time to connect to each device through the web portal. Once connected, I can toggle between each of my devices: from my first Intel NUC running Ubuntu, to my second Intel NUC running Windows 10, to my third Intel NUC running Windows 11, to my PC conversion running Ubuntu Server. And it’s pretty snappy. The latency is really low and I can even play HD videos with no problem at all. If I do run into latency issues or I’m on a slow connection, I can change the protocol and even the bitrate to something more fitting.
But playing HD videos probably isn’t the reason you want a KVM; it’s more likely that you want access to the machine while it boots, and here’s where it gets really awesome. The PiKVM is open and totally hackable, and there are great plugins and drivers that let you customize the UI. For instance, I can shut a machine down and then wake it back up with a Wake on LAN packet. Side note: I learned a ton about making Wake on LAN work for Windows and Linux, and I will be updating my blogs with complete walkthroughs of how to enable it. If that wasn’t cool enough, I can then get into the BIOS of the machine to make any changes I want: change the boot order, change boot devices, overclock the machine, and do anything I normally couldn’t without being right in front of it. I can even upload ISOs to the PiKVM, attach them to the device virtually, and boot from them to install any operating system! This lets me rebuild any of these machines from a web browser, no matter where I am.
Want to install Linux on a machine that’s powered off? No problem. Just attach the virtual drive to the machine, send a Wake on LAN packet to wake it up, then boot from the virtual drive and install. You could also attach the ATX power control to the motherboard header and power it on that way, but I have network access to all of my machines, so I’ll use Wake on LAN. Plus, it’s super awesome to be able to wake devices up over the network. And here’s where it gets even better: remember how I said my KVM also has a USB hub? Well, I’ve attached a 64GB USB drive to it with Ventoy installed, loaded with every ISO I could ever need. As I switch inputs between machines, the Ventoy USB drive attaches to each machine, allowing me to install any operating system I want.
You can make this even more powerful by adding a USB drive and Ventoy
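Wake on LAN itself is refreshingly simple: the “magic packet” is just 6 bytes of 0xFF followed by the target MAC address repeated 16 times, usually broadcast over UDP port 9. A quick sketch that builds one and checks its size (the MAC is one from my config below; swap in your own, and use a tool like `wakeonlan` or `etherwake` to actually send it):

```shell
# Build a Wake on LAN magic packet: 6 sync bytes of 0xFF + MAC repeated 16 times
mac='1c:69:7a:ad:11:85'
payload=$(printf '%s' "$mac" | tr -d ':' | sed 's/../\\x&/g')   # -> \x1c\x69...
{
  printf '\xff\xff\xff\xff\xff\xff'                             # 6 sync bytes
  for _ in 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16; do
    printf "$payload"                                           # MAC, 16 times
  done
} > /tmp/wol.bin
wc -c < /tmp/wol.bin   # 102 bytes total: 6 + 16 * 6
```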
Because the PiKVM is hackable, I’ve customized the GPIO menu to let me switch between devices, wake them up, wake them on different NICs, and restart the kvmd service or the PiKVM itself (see my config below). I should say that I didn’t really “hack” it; this isn’t a “Techno Tim” hack. There’s an overrides file that lets you customize most of the PiKVM, so I didn’t go totally off the rails. It even has a web UI that gives you terminal access to your PiKVM in case you aren’t able to use SSH, which is super handy if you’re mobile. This little device has so many features already, and the fact that the software is open source and continues to be updated makes it such a great investment for me.
You can customize the menu however you like (my config is below); here I added WoL for each network card and even a way to restart the PiKVM from the menu.
So I bet you’re wondering if it’s worth it? I’m going to break this down into 2 parts: is it worth buying pre-assembled, and is it worth remote controlling machines with a PiKVM at all? Well, for me it is, for a few reasons. First of all, I couldn’t find a Raspberry Pi to assemble it myself, and if you consider that it comes with a case, a fan, a 32GB microSD card, additional cables, and even a little LCD screen, 100% ready to go, for an additional 90 bucks? I’d say it is. Now on to the tougher question: is it worth having a PiKVM at all? I’d say yes for me, but for you it depends. The way I looked at it, I was going to scale it to 8 machines, which would spread the cost of the PiKVM and switch across all 8, making it around 70 dollars per machine including all of the cables.
(PiKVM $259 + TESmart $299) / 8 ≈ $70 per machine
I’d say it’s worth it to have remote access to that many machines for the life of each machine. But I did have to downgrade to a smaller switch that only gives me access to 4 machines, which works out to roughly 95 dollars per machine.
(PiKVM $259 + EZCOO $120) / 4 ≈ $95 per machine
That’s a bit higher, but it’s still a much better value than remote controlling just one machine with the PiKVM, which would be the full cost of the PiKVM.
PiKVM $259 / 1 = $259 per machine
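The per-machine figures above are just total hardware cost spread across the ports in use; a quick sanity check of all three scenarios:

```shell
# Per-machine cost = (PiKVM + switch) / number of machines controlled
per_machine() {
  awk -v kvm="$1" -v sw="$2" -v n="$3" 'BEGIN { printf "%.2f\n", (kvm + sw) / n }'
}

per_machine 259 299 8   # 69.75  -> ~$70 (TESmart, 8 machines)
per_machine 259 120 4   # 94.75  -> ~$95 (EZCOO, 4 machines)
per_machine 259 0 1     # 259.00 (PiKVM alone)
```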
Here are the items that I used during this project.
PiKVM - https://pikvm.org
EZCOO - HDMI KVM Switch - https://amzn.to/3IyiIv1
HDMI Cables - https://amzn.to/3SgJ34g
USB B Cables - https://amzn.to/3Eel0wU
USB C Cables - https://amzn.to/3k8vQhb
USB Flash Drive - https://amzn.to/3XI50u6
TESmart Switch (not fully compatible) - https://amzn.to/3YV0Gsi
(Affiliate links may be included in this description. I may receive a small commission at no cost to you.)
Here is the PiKVM config that I use. You will need to edit /etc/kvmd/override.yaml on your device and then restart the kvmd service.
```yaml
kvmd:
    gpio:
        drivers:
            ez:
                type: ezcoo
                protocol: 2
                device: /dev/ttyUSB0
            wol_server0:
                type: wol
                mac: 1c:69:7a:ad:11:85
            wol_server1:
                type: wol
                mac: 88:ae:dd:05:cf:09
            wol_server2:
                type: wol
                mac: 88:ae:dd:05:c6:3b
            wol_server3:
                type: wol
                mac: a0:36:9f:4f:c4:b4
            wol_server3a:
                type: wol
                mac: d8:50:e6:52:8e:c2
            reboot:
                type: cmd
                cmd: [/usr/bin/sudo, reboot]
            restart_service:
                type: cmd
                cmd: [/usr/bin/sudo, systemctl, restart, kvmd]
        scheme:
            ch0_led:
                driver: ez
                pin: 0
                mode: input
            ch1_led:
                driver: ez
                pin: 1
                mode: input
            ch2_led:
                driver: ez
                pin: 2
                mode: input
            ch3_led:
                driver: ez
                pin: 3
                mode: input
            pikvm_led:
                pin: 0
                mode: input
            ch0_button:
                driver: ez
                pin: 0
                mode: output
                switch: false
            ch1_button:
                driver: ez
                pin: 1
                mode: output
                switch: false
            ch2_button:
                driver: ez
                pin: 2
                mode: output
                switch: false
            ch3_button:
                driver: ez
                pin: 3
                mode: output
                switch: false
            wol_server0:
                driver: wol_server0
                pin: 0
                mode: output
                switch: false
            wol_server1:
                driver: wol_server1
                pin: 0
                mode: output
                switch: false
            wol_server2:
                driver: wol_server2
                pin: 0
                mode: output
                switch: false
            wol_server3:
                driver: wol_server3
                pin: 0
                mode: output
                switch: false
            wol_server3a:
                driver: wol_server3a
                pin: 0
                mode: output
                switch: false
            reboot_button:
                driver: reboot
                pin: 0
                mode: output
                switch: false
            restart_service_button:
                driver: restart_service
                pin: 0
                mode: output
                switch: false
        view:
            table:
                - ["#NUC1", ch0_led, ch0_button, "wol_server0 | WoL"]
                - ["#NUC2", ch1_led, ch1_button, "wol_server1 | WoL"]
                - ["#NUC3", ch2_led, ch2_button, "wol_server2 | WoL"]
                - ["#PC", ch3_led, ch3_button, "wol_server3 | WoL-10g", "wol_server3a | WoL-1g"]
                - ["#PiKVM", "pikvm_led|green", "restart_service_button|confirm|Service", "reboot_button|confirm|Reboot"]
```
If you’re having issues with Wake on LAN, see The Ultimate Guide to Wake on LAN for Windows, MacOS, and Linux
The last few weeks I have been trying to figure out how to scale the PiKVM to more than one device. It took a lot of twists and turns but I finally figured out a solution, even if the first attempts failed...
— Techno Tim (@TechnoTimLive) February 18, 2023
Check it out ⬇️https://t.co/4qgwcmPwMi#raspberrypi #homelab pic.twitter.com/ljxpIE3cYx
After many hours of testing, swapping, resetting, and EDID training, all of my PiKVM and TESmart issues were solved with a simple, cheap dongle. If you aren’t aware of the struggles I faced when using the PiKVM with a TESmart switch, all of it is detailed in a previous post where I ended up settling on another switch altogether. That’s all changed now with a simple EDID emulator passthrough adapter (affiliate link). However, for the uninformed, I’ll summarize the symptoms I experienced with the PiKVM and TESmart KVM and how I was able to fix these issues.
When I originally set up my PiKVM (v3 at the time) I wanted to remote control more than one machine. I have a server rack and I rationalized that 8 would be a good number of machines to remote control and ultimately justify the cost of the hardware (cost spread over 8 machines). I also wanted something to rackmount since, after all, I have a server rack. That’s when I found the TESmart 8X1 HDMI KVM Switch 8 (affiliate link).
It met all of my requirements:
This device was listed as compatible with PiKVM so I thought it was a safe bet.
I have three Intel NUC 11th Gen devices I planned on connecting to this switch, along with an old PC that has an ASUS Z87-PRO motherboard. I figured since they all had Intel GPUs it was a safe bet. Boy, was I wrong…
Shortly after configuring the PiKVM with the TESmart KVM and my devices it was clear that something was definitely wrong.
As you can see, only 1 out of 3 devices works. All 3 are the same model. Usually none of them worked, even after properly training them.
Some of the symptoms were:
I knew this was not good; however, I figured it was something that could be fixed in software. After all, the PiKVM is open source and running Linux. I did the obvious thing first, which was training the PiKVM and the TESmart switch according to TESmart’s YouTube video (which is excellent, by the way). This kind of helped, and by “kind of” I mean that 1 device would sometimes work, but after adding additional devices it would start experiencing the same symptoms as above. So I thought I could fix it in software…
I tried updating the device, testing different EDIDs, and even working with the creator of the PiKVM, Max Devaev, to see if we could tweak any settings to make it work with the TESmart KVM. After capturing logs and EDIDs, Max determined that the EDID was getting “poisoned” with some other EDID when switching, so we decided it was a hardware issue. I ended up purchasing an EZCOO switch (affiliate link), which worked perfectly, albeit not rackmountable.
I reached out to TESmart around the same time to see if maybe it was something they had experienced in the past, or something that might be addressed in a firmware update. They were great to work with (huge shout out to Ray from TESmart!) and walked me through some additional troubleshooting steps. Each step still yielded the same results. When we had exhausted everything we could try, they sent a replacement to see if that might resolve the issue, but sadly it did not.
All of this troubleshooting was done on the PiKVM v3, which I had purchased pretty late in its lifecycle, and v4 was right around the corner. Max from PiKVM said he felt the issue could be resolved in their v4 model and mentioned he would send one when it launched. I was hopeful this would work. A few months later, the PiKVM v4 Plus arrived on my doorstep.
After receiving the PiKVM v4 and hooking it up to the TESmart switch, I found that I had the same issue as before. This told me it was most likely something with the TESmart switch. I reached out to them again after discovering that I had the 4K 30 fps model, hoping that the 4K 60 fps model would make a difference; after all, some people said theirs worked just fine, and internet rumors claimed the 60 fps model worked better.
I talked to TESmart again and they shipped a replacement, this time the 4K 60 fps model. I quickly hooked it up and once again experienced all of the same symptoms. I was really puzzled as to why this was happening to me when the PiKVM works fine with other switches and others claimed to use this switch without issue.
After testing all of this, I was convinced that I would never be able to use my rackmount KVM. I have to admit that I wasn’t that upset that it didn’t work; I was more upset that I didn’t know when to quit. I was frustrated that I had sunk over 80 hours into trying to fix this when in fact there was no fix. Sometimes you just gotta let go…
That’s when I got lucky: someone posted a comment on my previous post with something new to try. A comment from juristoeckli mentioned something about an “EDID emulator.” I had never heard of these before, nor was I sure this was the issue. Then NateDiTo also left a comment about how they had used these EDID emulators (affiliate link) and they worked for them. Finally, I wasn’t alone! Someone else was experiencing the same thing, or at least knew my struggle!
That’s when I decided to give it a shot. I told myself “If these don’t work, I am giving up!” and I meant it this time (maybe… 😂).
This little device will override your EDID possibly making it compatible with the device it’s connecting to.
I purchased a cheaper version of the EDID emulator (affiliate link) hoping they would work. Also, serendipitously, Ray from TESmart had mentioned an EDID emulator in an email that same night. He mentioned this as a last resort, however in my mind this was my last resort. After the devices arrived, I quickly inserted the emulators into the TESmart and connected my machines to them.
I plugged them directly into the TESmart switch and all devices started working immediately!
I retrained the devices per the video, and sure enough, after powering the first one on, it worked! I could see my machine in the PiKVM without issues! I quickly tempered my expectations because I had been here before: one device would sometimes work fine, but never more than one. Sure enough, after training the next 2 devices, they worked fine too. I could now control all 3 devices from the PiKVM with the TESmart KVM switch. I tested by rebooting the devices and even the PiKVM, and everything still works! They are now as reliable as they were with the EZCOO switch.
When I think about the solution, it’s challenging to know how and why this is working. As I understand it, EDID emulators are meant to override a device’s EDID, basically telling the connected device which capabilities it supports. You would think that my devices were sending the proper EDID to the TESmart switch, however as I experienced with 4 devices (2 unique), this was not the case.
Some people have mentioned that this happens more often when running Linux, and I experienced that myself too. When one of my Intel NUCs was running Windows it seemed to work fine, but when running Proxmox (Debian Linux) it seemed to experience these issues. This could be a Windows vs. Linux issue, or it could be chalked up to my other experience where 1 device would work fine but none of the others. I’ve tested quite a bit over the span of a year, and it’s challenging to know for sure. Oddly enough, this doesn’t happen to everyone who uses TESmart switches. I do think it’s a combination of TESmart + device + OS/driver that triggers the problem, because again, it works with my EZCOO switch. I also have a hunch that these emulators might be instructing the device’s GPU to stay powered on even when a display isn’t plugged in (just like HDMI dummy plugs), but I don’t know if that’s true. If you know, let me know in the comments below!
I am considering this “fixed” now even though technically this is a “workaround.” A huge thanks to Max from PiKVM, Ray from TESmart (and the TESmart team), and juristoeckli and NateDiTo in the comments because without all of you I would have given up. Each new idea or additional troubleshooting step motivated me to keep going. I can finally use this switch and recommend it to those who want something rackmountable (with workarounds).
Here is the configuration I use:
/etc/kvmd/override.yaml
```yaml
# /etc/kvmd/override.yaml.bak.tesmart
####################################################################
#                                                                  #
# Override Pi-KVM system settings. This file uses the YAML syntax. #
#                                                                  #
# https://github.com/pikvm/pikvm/blob/master/pages/config.md       #
#                                                                  #
# All overridden parameters will be applied AFTER other configs    #
# and "!include" directives and BEFORE validation.                 #
# Note: Sections should be combined under shared keys.             #
#                                                                  #
####################################################################

kvmd:
    gpio:
        drivers:
            tes:
                type: tesmart
                host: 192.168.20.63
                port: 5000
            wol_server0:
                type: wol
                mac: 1c:69:7a:ad:11:85
            wol_server1:
                type: wol
                mac: 88:ae:dd:05:cf:09
            wol_server2:
                type: wol
                mac: 88:ae:dd:05:c6:3b
            wol_server3:
                type: wol
                mac: 3c:ec:ef:0e:d3:a4
            wol_server3a:
                type: wol
                mac: 3c:ec:ef:0e:d3:a5
            wol_server4:
                type: wol
                mac: 3c:ec:ef:90:c8:0c
            wol_server4a:
                type: wol
                mac: 3c:ec:ef:90:c8:0d
            reboot:
                type: cmd
                cmd: [/usr/bin/sudo, reboot]
            restart_service:
                type: cmd
                cmd: [/usr/bin/sudo, systemctl, restart, kvmd]
        scheme:
            ch0_led:
                driver: tes
                pin: 0
                mode: input
            ch1_led:
                driver: tes
                pin: 1
                mode: input
            ch2_led:
                driver: tes
                pin: 2
                mode: input
            ch3_led:
                driver: tes
                pin: 3
                mode: input
            ch4_led:
                driver: tes
                pin: 4
                mode: input
            pikvm_led:
                pin: 0
                mode: input
            ch0_button:
                driver: tes
                pin: 0
                mode: output
                switch: false
            ch1_button:
                driver: tes
                pin: 1
                mode: output
                switch: false
            ch2_button:
                driver: tes
                pin: 2
                mode: output
                switch: false
            ch3_button:
                driver: tes
                pin: 3
                mode: output
                switch: false
            ch4_button:
                driver: tes
                pin: 4
                mode: output
                switch: false
            wol_server0:
                driver: wol_server0
                pin: 0
                mode: output
                switch: false
            wol_server1:
                driver: wol_server1
                pin: 0
                mode: output
                switch: false
            wol_server2:
                driver: wol_server2
                pin: 0
                mode: output
                switch: false
            wol_server3:
                driver: wol_server3
                pin: 0
                mode: output
                switch: false
            wol_server3a:
                driver: wol_server3a
                pin: 0
                mode: output
                switch: false
            wol_server4:
                driver: wol_server4
                pin: 0
                mode: output
                switch: false
            wol_server4a:
                driver: wol_server4a
                pin: 0
                mode: output
                switch: false
            reboot_button:
                driver: reboot
                pin: 0
                mode: output
                switch: false
            restart_service_button:
                driver: restart_service
                pin: 0
                mode: output
                switch: false
        view:
            table:
                - ["#NUC1", ch0_led, ch0_button, "wol_server0 | WoL"]
                - ["#NUC2", ch1_led, ch1_button, "wol_server1 | WoL"]
                - ["#NUC3", ch2_led, ch2_button, "wol_server2 | WoL"]
                - ["#HL15", ch3_led, ch3_button, "wol_server3 | WoL-10g", "wol_server3a | WoL-10g"]
                - ["#Storinator", ch4_led, ch4_button, "wol_server4 | WoL-10g", "wol_server4a | WoL-10g"]
                - ["#PiKVM", "pikvm_led|green", "restart_service_button|confirm|Service", "reboot_button|confirm|Reboot"]
```
Edit /etc/sudoers.d/99_kvmd and add to the end:
```
kvmd ALL=(ALL) NOPASSWD: /usr/bin/reboot
kvmd ALL=(ALL) NOPASSWD: /usr/bin/systemctl
```
Then reboot or restart services.
Here are the items that I used during this project.
(Affiliate links may be included. I may receive a small commission at no cost to you.)
All the problems I had with the PiKVM and TESmart KVM switch were fixed with this one cheap little device.https://t.co/yOGjQTywYy
— Techno Tim (@TechnoTimLive) January 19, 2024
I’m a huge fan of virtualization and containerization (if you couldn’t tell already)! Today, we’ll walk through the various ways to install Plex step by step. We’ll also see how easy it is to get Plex running on Docker and Kubernetes using Rancher.
Get your user ID and group ID:
```shell
id yourusername
```
You should see something like this:
```
uid=1001(technotim) gid=1001(technotim) groups=1001(technotim),27(sudo),999(docker)
```
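Those uid/gid numbers are what many container images (the linuxserver.io-style Plex images, for example) expect as PUID/PGID environment variables so files on your mounts end up owned by your user. A quick way to grab them for the current user without parsing the full `id` output:

```shell
# Capture the numeric user and group IDs for use as PUID/PGID in a container
PUID=$(id -u)
PGID=$(id -g)
echo "PUID=${PUID} PGID=${PGID}"
```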
Install cifs-utils
```shell
sudo apt-get install cifs-utils
```
Create a credentials file for the share:
```shell
sudo nano /home/technotim/.smbcredentials
```
Add your credentials to the file:

```
username=yourusername
password=yourPassword
```

Then set permissions so only your user can read it:

```shell
chmod 600 ~/.smbcredentials
```
Edit /etc/fstab and add entries for your shares:
```
//192.168.0.22/plex_media/movies /mnt/movies cifs credentials=/home/technotim/.smbcredentials 0 0
//192.168.0.22/plex_media/music /mnt/music cifs credentials=/home/technotim/.smbcredentials 0 0
```
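For reference, each fstab line is six whitespace-separated fields: device, mount point, filesystem type, mount options, dump flag, and fsck pass. Splitting one of the entries above makes that explicit:

```shell
# Break an fstab entry into its six fields
entry='//192.168.0.22/plex_media/movies /mnt/movies cifs credentials=/home/technotim/.smbcredentials 0 0'
set -- $entry
echo "device=$1 mountpoint=$2 type=$3 options=$4 dump=$5 pass=$6"
```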
Then reboot, or run the following to mount everything now:

```shell
sudo mount -a
```
I was unsatisfied with the huge wall adapter that many products ship with, so I replaced it! Want to power a mini PC or a smaller device with Power Over Ethernet (POE)? No problem!
Power Disclaimer: Be sure to check your device for the appropriate voltage and wattage. As with all power adapters, using something that isn’t intended for your device can break your device, switch, or both!
Power Over Ethernet, or POE. It’s awesome! I have some small, low power devices with barrel plugs, and their power adapters take up a lot of space. Also, this ZimaBoard (and even Raspberry Pis) has no power switch; to power it on and off I have to unplug it and plug it back in, over and over.
I wanted to find a better way to power these devices, and that’s when I stumbled on this little POE adapter. It plugs into my POE switch and delivers power to the device, while passing the network connection through as well. So if I plug it into my switch and then into the device, the device powers on. My POE switch is also managed, so I can use its console to power the device on and off.
Now, to power the device on or off, I can just plug in or unplug the network cable!
This adapter splits power and ethernet so you can power your low power devices
Products in this video (see power disclaimer above):
See the kit here: https://kit.co/TechnoTim/power-over-ethernet-poe-devices
(Affiliate links are included in this description. I may receive a small commission at no cost to you.)
Power Over Ethernet is AWESOME! (POE) #homelab pic.twitter.com/HCFhuDyc1z
— Techno Tim (@TechnoTimLive) July 28, 2023
What’s new in Portainer 2.0? Well, a ton. With the release of Portainer 2, you now have the option to install Kubernetes. This makes installing, managing, and deploying Kubernetes really easy. In this step-by-step tutorial, we’ll start with nothing and end up with a fully working Portainer 2 server running Kubernetes. We’ll set up k3s using k3d, install kubectl, and then spin up Portainer. As an added bonus, we’ll also run a Minecraft server in Kubernetes as a proof of concept. Double bonus: we’ll cover how to pronounce kubectl…
Here are the commands used in the video. Be sure to use them appropriately.
To install docker, see this post
https://kubernetes.io/docs/tasks/tools/install-kubectl/
curl -LO "https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/amd64/kubectl"
chmod +x ./kubectl
sudo mv ./kubectl /usr/local/bin/kubectl
kubectl version --client
https://github.com/rancher/k3d
curl -s https://raw.githubusercontent.com/rancher/k3d/main/install.sh | bash
k3d cluster create portainer --api-port 6443 --servers 1 --agents 1 -p "30000-32767:30000-32767@server:0"
https://github.com/portainer/k8s
kubectl create namespace portainer
kubectl apply -n portainer -f https://raw.githubusercontent.com/portainer/k8s/master/deploy/manifests/portainer/portainer.yaml
The Portainer UI is hosted on port `30777`
Example: `http://192.168.0.1:30777`
Updating Portainer is easy, if you know how. In this quick, no-fluff video, I will show you how to update any version of Portainer. This guide can be used for installing it too. Portainer is a container management system for Docker, Kubernetes, Swarm, and Azure ACI. Portainer is free and open source.
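For reference, the usual Docker-based update flow looks roughly like the sketch below. The container name `portainer` and volume `portainer_data` are assumptions — match whatever names you used when you first installed. The commands are assembled into a variable and only printed here as a dry run; run them one at a time once you’ve verified they match your setup.

```shell
# Hedged sketch of a typical Portainer CE update (dry run: commands are
# only printed, not executed). Container/volume names are assumptions.
UPDATE_CMDS="docker stop portainer
docker rm portainer
docker pull portainer/portainer-ce:latest
docker run -d -p 8000:8000 -p 9443:9443 --name portainer --restart=always -v /var/run/docker.sock:/var/run/docker.sock -v portainer_data:/data portainer/portainer-ce:latest"
printf '%s\n' "$UPDATE_CMDS"
```

The key point is that Portainer’s state lives in the data volume, so replacing the container keeps your settings.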
I built a private, local, and self-hosted AI stack to help me out with daily tasks.
If you’re looking for the tutorial to run this yourself, you can check out the video here How to Self-Host Your Own Private AI Stack
Full tutorial coming soon on my other channel! TechnoTimTinkers
AI should also be private, local, and useful. Here are some practical projects that prove it can be.https://t.co/2IQwvVbq5s pic.twitter.com/FU0a47K0uU
— Techno Tim (@TechnoTimLive) July 5, 2024
🤝 Support me and help keep this site ad-free!
As you may know, Proxmox is my current choice for a hypervisor. Proxmox 7 is here and comes with a host of new features! In this video we’ll cover all of the new features in Proxmox 7 as well as how to upgrade your Proxmox server safely. We’ll also cover all of the “scary” prompts you get while upgrading, as well as some of the ways to make sure your upgrade is successful. So, if you’re thinking about upgrading your HomeLab to Proxmox 7, be sure to check this video out first.
If you’re looking to upgrade to Proxmox 8, see this post
Check your upgrade status
pve6to7 --full
First, make sure we have the latest packages
apt update
apt dist-upgrade
Update all Debian repositories to Bullseye
sed -i 's/buster\/updates/bullseye-security/g;s/buster/bullseye/g' /etc/apt/sources.list
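If you want to see what that rewrite will do before touching the real file, you can run the same sed expression (without `-i`) against a scratch copy — a minimal sketch, with made-up sample lines:

```shell
# Preview the buster -> bullseye rewrite on a scratch file first.
# The sample lines are made up; the real target is /etc/apt/sources.list.
scratch=$(mktemp)
printf '%s\n' \
  'deb http://ftp.us.debian.org/debian buster main contrib' \
  'deb http://security.debian.org buster/updates main contrib' > "$scratch"
preview=$(sed 's/buster\/updates/bullseye-security/g;s/buster/bullseye/g' "$scratch")
printf '%s\n' "$preview"
rm -f "$scratch"
```

Note the order of the two substitutions matters: `buster/updates` must become `bullseye-security` before the plain `buster` replacement runs.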
We’ll also need to make sure we comment out any Proxmox VE 6 repositories.
nano /etc/apt/sources.list
nano /etc/apt/sources.list.d/pve-enterprise.list
Add Proxmox VE & package Repo
echo "deb https://enterprise.proxmox.com/debian/pve bullseye pve-enterprise" > /etc/apt/sources.list.d/pve-enterprise.list
If you’re using the non-subscription repository (like me) also run
sed -i -e 's/buster/bullseye/g' /etc/apt/sources.list.d/pve-install-repo.list
If you’re running Ceph, you’ll need to run
echo "deb http://download.proxmox.com/debian/ceph-octopus bullseye main" > /etc/apt/sources.list.d/ceph.list
Do the upgrade
apt update
apt dist-upgrade
If you’re running LACP / LAGG, I found that you need to make some additional changes to your network config. See the comments in the config.
/etc/network/interfaces
auto lo
iface lo inet loopback

#auto eno1 <--- I had to comment this out
iface eno1 inet manual

#auto eno2 <--- I had to comment this out
iface eno2 inet manual

auto bond0
iface bond0 inet manual
    bond-slaves eno1 eno2
    bond-miimon 100
    bond-mode 802.3ad
    bond-xmit-hash-policy layer2+3

auto vmbr0
iface vmbr0 inet static
    address 192.168.0.11/24
    gateway 192.168.0.1
    bridge-ports bond0
    bridge-stp off
    bridge-fd 0
    bridge-vlan-aware yes
    bridge-vids 2-4094
Setting up alerts in Proxmox is important and critical to making sure you are notified if something goes wrong with your servers. It’s so easy, I should have done this years ago! In this tutorial, we’ll set up email notifications using SMTP with Gmail or G Suite that send email alerts when there are disk errors, ZFS issues, or when backup jobs run. We’ll then test the alerts to make sure they are working by yoinking a drive from my ZFS pool (and hopefully it doesn’t fail).
Huge THANK YOU to Micro Center for Sponsoring Today’s video!
New Customer Exclusive – Free 256GB SSD: https://micro.center/24c
Check out Micro Center’s PC Builder: https://micro.center/1wk
Submit your build to Micro Center’s Build Showcase: https://micro.center/tvv
Shop Micro Center’s Top Deals: https://micro.center/jb4
install dependencies
apt update
apt install -y libsasl2-modules mailutils
Configure app passwords on your Google account
https://myaccount.google.com/apppasswords
Configure postfix
echo "smtp.gmail.com your-email@gmail.com:YourAppPassword" > /etc/postfix/sasl_passwd
update permissions
chmod 600 /etc/postfix/sasl_passwd
hash the file
postmap hash:/etc/postfix/sasl_passwd
check to be sure the db file was created
cat /etc/postfix/sasl_passwd.db
edit postfix config
nano /etc/postfix/main.cf
# google mail configuration

relayhost = smtp.gmail.com:587
smtp_use_tls = yes
smtp_sasl_auth_enable = yes
smtp_sasl_security_options =
smtp_sasl_password_maps = hash:/etc/postfix/sasl_passwd
smtp_tls_CAfile = /etc/ssl/certs/Entrust_Root_Certification_Authority.pem
smtp_tls_session_cache_database = btree:/var/lib/postfix/smtp_tls_session_cache
smtp_tls_session_cache_timeout = 3600s
reload postfix
postfix reload
send a test email
echo "This is a test message sent from postfix on my Proxmox Server" | mail -s "Test Email from Proxmox" your-email@gmail.com
fix from name in email
install dependency
apt update
apt install postfix-pcre
edit config
nano /etc/postfix/smtp_header_checks
add the following text
/^From:.*/ REPLACE From: pve1-alert <pve1-alert@something.com>
hash the file
postmap hash:/etc/postfix/smtp_header_checks
check the contents of the file
cat /etc/postfix/smtp_header_checks.db
add the module to our postfix config
nano /etc/postfix/main.cf
add to the end of the file
smtp_header_checks = pcre:/etc/postfix/smtp_header_checks
reload postfix service
postfix reload
00:00 - Why you should set up alerts in Proxmox
01:42 - Micro Center / Free SSD (Sponsor)
02:56 - Where can I find the documentation
03:07 - Installing and configuring dependencies
03:54 - Google Email address configuration
08:43 - Configuring postfix and customizing the email alert
11:47 - Changing the mail sender name with pcre
14:20 - Configure where email alerts are sent
15:01 - Backup Alerts
17:33 - SMART alerts
18:53 - ZFS Alerts
19:52 - Testing in Production
24:03 - How Proxmox alerts could be better
25:30 - Stream Highlight - “Just some flashing lights & music”
Setting up alerts in Proxmox is important and critical to making sure you are notified if something goes wrong with your servers. It's so easy, I should have done this years ago!https://t.co/6uRz0eVisA#homelab #proxmox pic.twitter.com/i8E1jrP2pE
— Techno Tim (@TechnoTimLive) December 17, 2022
Proxmox Backup Server is an enterprise-class client-server backup software that backs up virtual machines, containers, and physical hosts. In this step-by-step tutorial, we install and configure Proxmox Backup Server (PBS) and back up all of our virtual machines. We’ll start with nothing and end up with a fully functional Proxmox Backup Server with a ZFS datastore you can use to back up and restore your machines today.
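As a rough sketch of the datastore step, PBS’s CLI can create a datastore on an already-mounted path. The datastore name and mount point below are hypothetical; the command is only printed here as a dry run since it has to be executed on the PBS host itself.

```shell
# Dry-run sketch: create a PBS datastore on an already-mounted ZFS
# dataset. The name and path are hypothetical -- adjust to your pool.
DATASTORE=backups
MOUNTPOINT=/mnt/datastore/backups
CMD="proxmox-backup-manager datastore create $DATASTORE $MOUNTPOINT"
printf '%s\n' "$CMD"
```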
Proxmox VE Helper Scripts is a collection of scripts to help you easily make changes to your Proxmox VE server along with installing many LXC containers. This makes installing, configuring, and maintaining your Proxmox server in your HomeLab, along with many applications, as simple as running a script.
Check out Proxmox VE Helper Scripts on Github: https://github.com/tteck/Proxmox
Note: Be sure to always inspect any script before executing it, whether local or from the internet!
You can find the website here: https://helper-scripts.com/scripts
If you want to execute scripts from a commit SHA (somewhat immutable), browse to the script on GitHub (e.g. homeassistant-core-install.sh), choose the Raw option, and copy the commit-pinned URL. You can now use this hash to execute the script, which ensures your run is repeatable (and not always latest). Here is an example using the commit SHA from the date this video was released:
bash -c "$(wget -qLO - https://raw.githubusercontent.com/tteck/Proxmox/e842d2ec3d8f358eed443be2ecbecb2f3b4137d0/install/homeassistant-core-install.sh)"
You can reuse this commit SHA for all other scripts (just replace the path)
This past week I got to learn all about Proxmox Helper Scripts, a wonderful collection of scripts to help you automate common tasks with Proxmox along with LXC container installs!https://t.co/CRuExA8Ik2 pic.twitter.com/1u1JRWGEav
— Techno Tim (@TechnoTimLive) May 30, 2024
Nested virtualization is a feature that allows you to run a virtual machine within a virtual machine while still using hardware acceleration from the host machine. Put simply, it allows you to run a VM inside of a VM.
Everything we do will be done on the host system running Proxmox. Once enabled, the guest can take advantage of it.
First we need to check to see if nested virtualization is enabled in Proxmox.
If you’re running an Intel CPU run this command:
cat /sys/module/kvm_intel/parameters/nested
If you’re running an AMD CPU run this command:
cat /sys/module/kvm_amd/parameters/nested
You should see an output of Y or N. If N, this means that nested virtualization is not enabled, so let’s enable it!
On the Proxmox host, run the following command as root:
If you’re running an Intel CPU run this command:
echo "options kvm-intel nested=Y" > /etc/modprobe.d/kvm-intel.conf
If you’re running an AMD CPU run this command:
echo "options kvm-amd nested=1" > /etc/modprobe.d/kvm-amd.conf
Next reboot the system
reboot
Then check to see if nested virtualization is enabled on the Proxmox host:
If you’re running an Intel CPU run this command:
cat /sys/module/kvm_intel/parameters/nested
If you’re running an AMD CPU run this command:
cat /sys/module/kvm_amd/parameters/nested
You should see Y this time. This means that you can now use virtualization inside of a VM, just be sure to set your VM’s processor accordingly! (use host for the CPU type)
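If you prefer the CLI over the web UI, the CPU type can be set with Proxmox’s qm tool. The VM ID 100 below is hypothetical, and the command is only printed as a dry run since it must run on the Proxmox host:

```shell
# Dry-run sketch: set a VM's CPU type to "host" so the guest can use
# nested virtualization. VM ID 100 is hypothetical.
VMID=100
CMD="qm set $VMID --cpu host"
printf '%s\n' "$CMD"
```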
It’s time to say goodbye to your home router and start virtualizing it using Proxmox and pfSense.
pfSense Community Edition Download: https://www.pfsense.org/download/ Get started with Proxmox today: https://www.youtube.com/watch?v=hdoBQNI_Ab8
https://pve.proxmox.com/wiki/PCI(e)_Passthrough
Do you need to virtualize something at home? Thinking of building your own HomeLab? (The answer is YES.) Join me as we install and configure Proxmox VE step-by-step.
Do you need to virtualize Ubuntu Server with Proxmox? Join me as we install and configure Ubuntu Server LTS on Proxmox VE step-by-step using the best performance options.
Have you been thinking about updating your Proxmox VE server? Well, what are you waiting for? Upgrade your Proxmox server in your home lab in just a few minutes with this step-by-step tutorial!
Edit /etc/apt/sources.list
deb http://ftp.us.debian.org/debian buster main contrib

deb http://ftp.us.debian.org/debian buster-updates main contrib

# security updates
deb http://security.debian.org buster/updates main contrib

# not for production use
deb http://download.proxmox.com/debian buster pve-no-subscription
Run
apt-get update
apt dist-upgrade
Do you need to virtualize Windows 10 with Proxmox? Join me as we install and configure Windows 10 on Proxmox VE step-by-step using the best performance options.
Pterodactyl is a free and open source dedicated game server. It comes with both a panel to configure and deploy your game servers as well as game server nodes to run your games. It runs games in Docker containers to keep them isolated, making them easier than ever to deploy. We’re going to also use Docker to create our Pterodactyl server and the Wings agent, making this truly Docker to the core.
Be sure to ⭐ the Pterodactyl GitHub repo and the Eggs repo (additional games)
To install docker, see this post
Both your Pterodactyl Panel server as well as your Pterodactyl Wing server will need to be configured in your reverse proxy, each with their own public URL. If you need help configuring your reverse proxy, see my guide on how to do that.
Check out game deals on Humble Games (affiliate link)
mkdir pterodactyl
cd pterodactyl
mkdir panel
cd panel
nano docker-compose.yml
docker-compose.yml
version: '3.8'
x-common:
  database:
    &db-environment
    # Do not remove the "&db-password" from the end of the line below, it is important
    # for Panel functionality.
    MYSQL_PASSWORD: &db-password "CHANGE_ME"
    MYSQL_ROOT_PASSWORD: "CHANGE_ME_TOO"
  panel:
    &panel-environment
    # This URL should be the URL that your reverse proxy routes to the panel server
    APP_URL: "https://pterodactyl.example.com"
    # A list of valid timezones can be found here: http://php.net/manual/en/timezones.php
    APP_TIMEZONE: "UTC"
    APP_SERVICE_AUTHOR: "noreply@example.com"
    TRUSTED_PROXIES: "*" # Set this to your proxy IP
    # Uncomment the line below and set to a non-empty value if you want to use Let's Encrypt
    # to generate an SSL certificate for the Panel.
    # LE_EMAIL: ""
  mail:
    &mail-environment
    MAIL_FROM: "noreply@example.com"
    MAIL_DRIVER: "smtp"
    MAIL_HOST: "mail"
    MAIL_PORT: "1025"
    MAIL_USERNAME: ""
    MAIL_PASSWORD: ""
    MAIL_ENCRYPTION: "true"

#
# ------------------------------------------------------------------------------------------
# DANGER ZONE BELOW
#
# The remainder of this file likely does not need to be changed. Please only make modifications
# below if you understand what you are doing.
#
services:
  database:
    image: mariadb:10.5
    restart: always
    command: --default-authentication-plugin=mysql_native_password
    volumes:
      - "/srv/pterodactyl/database:/var/lib/mysql"
    environment:
      <<: *db-environment
      MYSQL_DATABASE: "panel"
      MYSQL_USER: "pterodactyl"
  cache:
    image: redis:alpine
    restart: always
  panel:
    image: ghcr.io/pterodactyl/panel:latest
    restart: always
    ports:
      - "80:80"
      - "443:443"
    links:
      - database
      - cache
    volumes:
      - "/srv/pterodactyl/var/:/app/var/"
      - "/srv/pterodactyl/nginx/:/etc/nginx/http.d/"
      - "/srv/pterodactyl/certs/:/etc/letsencrypt/"
      - "/srv/pterodactyl/logs/:/app/storage/logs"
    environment:
      <<: [*panel-environment, *mail-environment]
      DB_PASSWORD: *db-password
      APP_ENV: "production"
      APP_ENVIRONMENT_ONLY: "false"
      CACHE_DRIVER: "redis"
      SESSION_DRIVER: "redis"
      QUEUE_DRIVER: "redis"
      REDIS_HOST: "cache"
      DB_HOST: "database"
      DB_PORT: "3306"
networks:
  default:
    ipam:
      config:
        - subnet: 172.20.0.0/16
Start the stack
docker-compose up -d
docker-compose run --rm panel php artisan p:user:make
mkdir pterodactyl
cd pterodactyl
mkdir wings
cd wings
nano docker-compose.yml
docker-compose.yml
version: '3.8'

services:
  wings:
    image: ghcr.io/pterodactyl/wings:v1.6.1
    restart: always
    networks:
      - wings0
    ports:
      - "8080:8080"
      - "2022:2022"
      - "443:443"
    tty: true
    environment:
      TZ: "UTC"
      WINGS_UID: 988
      WINGS_GID: 988
      WINGS_USERNAME: pterodactyl
    volumes:
      - "/var/run/docker.sock:/var/run/docker.sock"
      - "/var/lib/docker/containers/:/var/lib/docker/containers/"
      - "/etc/pterodactyl/:/etc/pterodactyl/"
      - "/var/lib/pterodactyl/:/var/lib/pterodactyl/"
      - "/var/log/pterodactyl/:/var/log/pterodactyl/"
      - "/tmp/pterodactyl/:/tmp/pterodactyl/"
      - "/etc/ssl/certs:/etc/ssl/certs:ro"
      # you may need /srv/daemon-data if you are upgrading from an old daemon
      #- "/srv/daemon-data/:/srv/daemon-data/"
      # Required for ssl if you use let's encrypt. uncomment to use.
      #- "/etc/letsencrypt/:/etc/letsencrypt/"
networks:
  wings0:
    name: wings0
    driver: bridge
    ipam:
      config:
        - subnet: "172.21.0.0/16"
    driver_opts:
      com.docker.network.bridge.name: wings0
Start the stack
docker-compose up -d
sudo nano /etc/pterodactyl/config.yml
Paste the contents from the config your panel generated for your node into this file. Note: The FQDN field when configuring the node in the panel should be the URL that your reverse proxy routes to your wing server. Also ensure you entered 443 for the Daemon Port field.
config.yml
debug: false
uuid: 716deb8f-7047-42ad-9323-4a25ae49118b
token_id: 7PoSfql3hdKjbMKn
token: apEo1esCKe5sEWkpfnRB5xakj3mc0aM6jglacgBcsIsgglBtOm0oV1W3efTbwarN
api:
  host: 0.0.0.0
  port: 443
  ssl:
    enabled: false
    cert: /etc/letsencrypt/live/node-01.example.com/fullchain.pem
    key: /etc/letsencrypt/live/node-01.example.com/privkey.pem
  upload_limit: 100
system:
  data: /var/lib/pterodactyl/volumes
  sftp:
    bind_port: 2022
allowed_mounts: []
remote: 'https://pterodactyl.example.com'
Restart the stack
docker-compose up -d --force-recreate
If you aren’t seeing your stats in the console
sudo nano /etc/default/grub
add additional parameters to GRUB_CMDLINE_LINUX_DEFAULT
GRUB_CMDLINE_LINUX_DEFAULT="swapaccount=1 systemd.unified_cgroup_hierarchy=1"
sudo update-grub
sudo reboot
If you are looking to install the Pterodactyl Panel on kubernetes, see the manifests here.
It used to be hard to back up Rancher, but with Rancher 2 it’s super simple. Upgrading, backing up, and restoring your Rancher server should be part of your regular routine. Join me in this tutorial as we walk through backing up, upgrading, and restoring a single node Rancher Docker install in just a couple of minutes. Trust me, you’ll feel better after you do.
Need to install Rancher? See my guide https://www.youtube.com/watch?v=YWqBxCIfxw4
See the full guide from Rancher https://rancher.com/docs/rancher/v2.x/en/upgrades/upgrades/single-node/
See all containers
docker ps
See all containers including stopped ones
docker ps -a
Stop the container
docker stop <RANCHER_CONTAINER_NAME>
Create a data container
docker create --volumes-from <RANCHER_CONTAINER_NAME> --name rancher-data-<DATE> rancher/rancher:<RANCHER_CONTAINER_TAG>
Create a backup tarball
docker run --volumes-from rancher-data-<DATE> -v $PWD:/backup:z busybox tar pzcvf /backup/rancher-data-backup-<RANCHER_VERSION>-<DATE>.tar.gz /var/lib/rancher
Run ls and you should see your tarball
rancher-data-backup-v2.4.3-2020-06-21.tar.gz
Pull a new docker image
docker pull rancher/rancher:<RANCHER_VERSION_TAG>
Start your new rancher server container.
Use the command you used to create your initial container, it looks something like this.
docker run -d --restart=unless-stopped -p 9090:80 -p 9091:443 --privileged -v /opt/rancher:/var/lib/rancher --name=rancher_docker_server rancher/rancher:<RANCHER_VERSION>
Check to see if it’s running
docker ps
Use the command you used to create your initial container, it looks something like this.
docker run -d --restart=unless-stopped -p 9090:80 -p 9091:443 --privileged -v /opt/rancher:/var/lib/rancher --name=rancher_docker_server rancher/rancher:<RANCHER_VERSION>
Stop the container
docker stop <RANCHER_CONTAINER_NAME>
Delete state data and replace from backup
docker run --volumes-from <RANCHER_CONTAINER_NAME> -v $PWD:/backup \
busybox sh -c "rm /var/lib/rancher/* -rf && \
tar pzxvf /backup/rancher-data-backup-<RANCHER_VERSION>-<DATE>.tar.gz"
Start the container
docker start <RANCHER_CONTAINER_NAME>
cd /opt
docker stop rancher_docker_server
if this fails it means you named your container something else, find it by running docker ps
sudo tar czpf rancher-data-backup-VERSION-DATE-unofficial.tar.gz rancher
sudo mv rancher-data-backup-VERSION-DATE-unofficial.tar.gz ~/
docker start rancher_docker_server
cd /opt
docker stop rancher_docker_server
if this fails it means you named your container something else, find it by running docker ps
sudo tar xzpf rancher-data-backup-VERSION-DATE-unofficial.tar.gz
docker start rancher_docker_server
Your rancher server must be named similar to rancher_docker_server_v2.4.5, otherwise you’ll need to modify this. This will not work with the latest tag, so be sure to pin your version. It will need to be run with sudo or scheduled in sudo crontab -e.
rancher_backup.sh
# go to rancher dir
cd /opt

# get current rancher tag
RANCHER_TAG=$(docker ps | grep rancher/rancher | grep -Eio 'rancher/rancher:.{0,6}' | sed 's/rancher\/rancher://g')

# date format
TODAY=`date -I`

# stop docker container
docker stop rancher_docker_server_$RANCHER_TAG

# create tar
tar czpf rancher-data-backup-$RANCHER_TAG-$TODAY-unofficial.tar.gz rancher

# move tar
mv rancher-data-backup-$RANCHER_TAG-$TODAY-unofficial.tar.gz /home/USERNAME/backups/rancher_backups/

# start server
docker start rancher_docker_server_$RANCHER_TAG
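The trickiest line in the backup script is the tag extraction; you can sanity-check that pipeline against a sample docker ps row (the sample row below is made up):

```shell
# Sanity-check the tag-extraction pipeline from the backup script
# against a fake `docker ps` row (no Docker needed).
sample='abc123  rancher/rancher:v2.4.5  "entrypoint.sh"  Up 2 days  rancher_docker_server_v2.4.5'
RANCHER_TAG=$(echo "$sample" | grep -Eio 'rancher/rancher:.{0,6}' | sed 's/rancher\/rancher://g')
echo "$RANCHER_TAG"   # -> v2.4.5
```

Note that the regex only captures up to six characters after the colon, so a longer tag (e.g. v2.10.1) would be truncated — adjust the `{0,6}` bound if your version strings are longer.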
NEW_VERSION_TAG=v2.4.8
docker run -d --restart=unless-stopped -p 9090:80 -p 9091:443 --privileged -v /opt/rancher:/var/lib/rancher --name=rancher_docker_server_$NEW_VERSION_TAG rancher/rancher:$NEW_VERSION_TAG
Are you running Kubernetes in your homelab or in the enterprise? Do you want an easy way to manage and create Kubernetes clusters? Join me as we walk through installing Rancher on an existing high availability k3s cluster in this step-by-step tutorial.
We install Rancher, configure a load balancer, install and configure helm, install cert-manager, configure Rancher, walk through the GUI, scale up our cluster, and set up a health check and liveness check! Join me, it’s easy in this straightforward guide.
Note: It’s advised you consult the Rancher Support Matrix to get the recommended version for all Rancher dependencies.
kubectl
install helm
curl https://raw.githubusercontent.com/helm/helm/master/scripts/get-helm-3 | bash
add helm repo, stable
helm repo add rancher-stable https://releases.rancher.com/server-charts/stable
create rancher namespace
kubectl create namespace cattle-system
ssl configuration
use rancher generated (default)
install cert-manager
kubectl apply --validate=false -f https://github.com/jetstack/cert-manager/releases/download/v1.7.1/cert-manager.crds.yaml
create namespace for cert-manager
kubectl create namespace cert-manager
Add the Jetstack Helm repository
helm repo add jetstack https://charts.jetstack.io
update helm repo
helm repo update
install cert-manager helm chart
Note: If you receive an “Error: Kubernetes cluster unreachable” message when installing cert-manager, try copying the contents of “/etc/rancher/k3s/k3s.yaml” to “~/.kube/config” to resolve the issue.
helm install \
  cert-manager jetstack/cert-manager \
  --namespace cert-manager \
  --version v1.7.1
check rollout of cert-manager
kubectl get pods --namespace cert-manager
Be sure each pod is fully running before proceeding
Install Rancher with Helm
Note: If you have “.local” for your private TLD then Rancher will NOT finish the setup within the webUI
helm install rancher rancher-stable/rancher \
  --namespace cattle-system \
  --set hostname=rancher.example.com
check rollout
kubectl -n cattle-system rollout status deploy/rancher
you should see
Waiting for deployment "rancher" rollout to finish: 0 of 3 updated replicas are available...
Waiting for deployment "rancher" rollout to finish: 1 of 3 updated replicas are available...
Waiting for deployment "rancher" rollout to finish: 2 of 3 updated replicas are available...
deployment "rancher" successfully rolled out
check status
kubectl -n cattle-system rollout status deploy/rancher
you should see
deployment "rancher" successfully rolled out
If you are using k3s, you can use the traefik ingress controller that ships with k3s. Run
kubectl get svc --all-namespaces -o wide
look for
kube-system traefik LoadBalancer 10.43.202.72 192.168.100.10 80:32003/TCP,443:32532/TCP 5d23h app=traefik,release=traefik
then create a DNS entry for rancher.example.com 192.168.100.10. This can be a host entry on your machine, or a DNS entry in your local DNS system (router, pi hole, etc…). Otherwise you can use an nginx load balancer.
Separating Rancher Cluster from your User Cluster
Today in this step-by-step guide, we’ll set up Grafana, Prometheus, and Alertmanager to monitor your Kubernetes cluster. This can be set up really quickly using helm or the Rancher UI. We’ll install and configure, set up some dashboards, and even set up some alerts using Slack. All this and more in this simple to follow, easy tutorial. Setting up Grafana and Prometheus has never been so easy.
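If you go the helm route, the flow is roughly the sketch below, printed as a dry run. The release name monitoring and the namespace are hypothetical, and the chart name has changed over time — today the commonly used one is prometheus-community/kube-prometheus-stack, which bundles Grafana, Prometheus, and Alertmanager:

```shell
# Dry-run sketch (commands are only printed) of installing the combined
# Grafana/Prometheus/Alertmanager chart. Release/namespace names are
# hypothetical; adjust to your cluster.
HELM_CMDS="helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm repo update
helm install monitoring prometheus-community/kube-prometheus-stack --namespace monitoring --create-namespace"
printf '%s\n' "$HELM_CMDS"
```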
Today we’re going to talk about the new Cluster Explorer in Rancher. The Cluster Explorer is the new fancy user interface that will replace the old Cluster Manager. The new UI contains lots of new areas to explore, from new dashboards to new workload and deployment views, to service discovery, to storage, to RBAC, and more. If you’ve been hesitant to use the new UI, no need to worry, we all have been. But hopefully after this you’ll switch over like I have done too!
Rancher vs. Portainer, which one is better? Which one should I choose? Can Portainer manage Kubernetes? Can Rancher manage Kubernetes? We answer all these questions and more in this quick, no fluff video. Side note, this is one of the most asked questions in my live streams.
Please share this with anyone who asks what a Home Lab is.
Keeping track of container image updates is hard. I started using Renovate Bot to track these for me, and I now get pull requests from a bot for my Docker and Kubernetes container images. It’s a game changer.
A HUGE thanks to Datree for sponsoring this video!
Secure Your Kubernetes, Prevent Misconfigurations
How much time do you spend looking for updated container images for services you have running in your Kubernetes cluster? 5 minutes, 5 hours? Never? I used to spend hours a week checking for new container images, reading up on the changes, and not really knowing if it was going to break my cluster or not. It was super tedious doing this to the point where I almost stopped doing it. That’s when I discovered Renovate Bot. Renovate is a dependency update automation tool that scans your software, discovers dependencies, and checks to see if an update exists, and if there is one, it will automatically help you out by submitting a pull request on your code base. It works out of the box and supports a wide variety of languages and technologies, it’s highly configurable putting you in control of what gets updated and when, and it’s pretty smart too and can automatically detect dependencies and suggest ideas for improvement.
Here’s the cool thing about it too, not only can it scan for all sorts of dependencies, it also gives you your choice of how you want to run it. Want to run it locally as a node module or from a CLI? Or in a Docker container? Or even self-host it in your Kubernetes cluster? No problem! Want to scan dependencies in GitHub, GitLab, AWS CodeCommit or other Git providers? No problem at all. One of the great things about Renovate is that because it’s open source, it puts you in control of how you want to run it, where you want to run it, and when you want to run it. So today we’ll be setting up Renovate bot to give us a helping hand with our Kubernetes resources. We’ll create a GitHub repo to house our Kubernetes deployments, add the Renovate bot to our repo, and then let it help us out by opening pull requests when it sees updates to any of the container images we’re using. Yeah… I feel like I just hired a devops engineer for free.
Renovate works with many different source control providers like GitHub, GitLab, CodeCommit, and many others. You also have your choice of how you want to manage Renovate, meaning you can self-host it with Docker or Kubernetes, or run it as a GitHub app for free that’s hosted by Renovate’s parent company Mend.
We’re going to go with GitHub and the GitHub app because it’s super simple to set up. First we need to create a GitHub repo, which is as simple as going to GitHub and, well, creating a new repo. After naming your repo you’ll want to choose whether to make it public or private. The choice is up to you; Renovate will work either way.
Note: if you want to see the repo I created in this post and video, you can check it out here technotim-k8s-renovate
After creating the repo you’ll want to clone it to your machine. I know it’s empty, but we’ll be adding some things shortly. After cloning it, I’m going to open it up in VS Code, but any editor will do.
Now that our repo is cloned, we’re going to add some Kubernetes resources so that Renovate can start analyzing our resources for updates. We’re going to create a simple nginx deployment and service.
```yaml
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  selector:
    matchLabels:
      app: nginx
  replicas: 3
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx:1.24-alpine
          ports:
            - containerPort: 80
```
```yaml
---
apiVersion: v1
kind: Service
metadata:
  name: nginx
spec:
  selector:
    app: nginx
  ports:
    - port: 80
      targetPort: 80
  type: LoadBalancer
```
For this deployment we’re going to use an older nginx container image tag because we want to see the Renovate bot actually work, so we’ll go out to Docker Hub and choose an older tag. We’ll then put that image tag in our deployment. Now that we have this manifest, let’s commit this code and push it up.
Now that we have a simple Kubernetes deployment committed to our repo, we can add the Renovate bot to start analyzing our code. We do this by going to GitHub, finding the Renovate app, and installing it on our repo. You’ll need to authorize the app for your repo or org. Once you authorize it and choose which repos it has access to, you’re all set!
If you ever change your mind, you can remove the app from your repo settings at any time.
Once the Renovate bot is authorized and installed, it won’t actually do anything until you merge a Pull Request that will be opened by the bot on your repo! This pull request is a special “onboarding” pull request that will show you what the bot has detected along with adding a default config for the bot. Renovate won’t take any further actions until you accept and merge this pull request. Once you have reviewed this PR, you can merge it in and it will activate the bot on your repo.
Add the Renovate app to your repo
After merging the onboarding PR, we can go take a look at the bot’s logs on Mend’s bot page. Here we can see it trying to auto-detect all of the various dependency types the bot supports: ansible, docker-compose, flux, gradle, helm, and many others. But it doesn’t know how to handle Kubernetes files out of the box, because Kubernetes manifests don’t really have a naming convention. So we’ll need to tell Renovate how to find Kubernetes files in our config.
Here you can see Renovate trying to scan our repo and automatically detect dependencies
So we’ll need to git pull to get the latest changes, and we should see our Renovate config file. We’ll need to add a file match for Kubernetes files. Be sure to use the right extension here: both yml and yaml are acceptable, but I typically use yml, so that’s what I’ll use here.
```json
{
  "$schema": "https://docs.renovatebot.com/renovate-schema.json",
  "extends": [
    "config:base"
  ],
  "kubernetes": {
    "fileMatch": ["\\.yml$"]
  }
}
```
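To sanity-check what that `\.yml$` pattern will and won’t pick up, here’s a quick shell sketch (the filenames are made up for illustration):

```shell
# Hypothetical filenames; only those ending in .yml match the fileMatch regex above.
for f in deployment.yml service.yml chart.yaml README.md; do
  if echo "$f" | grep -qE '\.yml$'; then
    echo "$f: matched"
  else
    echo "$f: skipped"
  fi
done
```

If you use both extensions in your repo, you could widen the pattern to something like `"\\.ya?ml$"` so both `.yml` and `.yaml` files are scanned.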
Once we’ve made that change to our config locally, we’ll then commit that change and push it up.
Once we push this change up and it scans our repo, we can see a new issue that was created! This is a special type of issue that Renovate creates for us and it is kind of like a dashboard for all of our dependencies.
If we look at this issue, it’s telling us that it detected new Kubernetes-related dependencies: not only our nginx tag, but also the Kubernetes API version for the deployment. Super awesome. If you like, you can disable this dashboard issue in your config, but I would recommend keeping it.
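If you do want to turn the dashboard off, Renovate has a `dependencyDashboard` setting for exactly this. A sketch of what that might look like in the config from earlier (the rest of the file is unchanged):

```json
{
  "$schema": "https://docs.renovatebot.com/renovate-schema.json",
  "extends": [
    "config:base"
  ],
  "dependencyDashboard": false
}
```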
If we look at the logs again from Renovate, we can also see that it detected our nginx deployment and that it created a PR for us to review. Now for the actual PR. We should see a new PR that was opened from the Renovate bot!
If we look at this PR we can see the proposed change: it’s suggesting that we update our nginx container image from 1.24 to 1.25, which is the current latest tag. If we’re happy with the change, we can merge it into our code with just a click.
Now our code base is up to date with the latest container image. What happens if a container you are using only has one tag, say like the “latest” tag? Well, let’s find out.
Let’s say, for instance, we’re running WordPress in our cluster, and in our deployment.yml we specify the “latest” tag instead of a versioned tag.
```yaml
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: wordpress
spec:
  selector:
    matchLabels:
      app: wordpress
  replicas: 1
  template:
    metadata:
      labels:
        app: wordpress
    spec:
      containers:
        - name: wordpress
          image: wordpress:latest
          ports:
            - containerPort: 80
```
```yaml
---
apiVersion: v1
kind: Service
metadata:
  name: wordpress
spec:
  selector:
    app: wordpress
  ports:
    - port: 80
      targetPort: 80
  type: LoadBalancer
```
After we commit and push this up, we can wait for Renovate to check our repository again for new dependencies, or we can manually trigger a run by going back to the dependency dashboard issue and checking the box to make it run again.
Now, if we look at the logs again, we should see that it detected WordPress, but that the image is unversioned. The latest tag is nondeterministic: it can point to different images over time. Renovate can’t work with it because it can’t determine what the current version is or what the next possible version would be. So, instead of pinning this container to “latest”, we can pin it to a digest.
So if we look at the current latest tag in Docker Hub and inspect the digest, we can see it. It’s this long string of characters:

DIGEST:sha256:75ba772cce073ec2aa6cec95c5ca229dfde9029c49874350a971499d567adae7

The digest is an immutable identifier for a container image, and it is deterministic: it can’t change and it references exactly one image. Renovate can work with that. Once we have the digest, we can pin our WordPress container to it using the format image@sha256:digest.
Now if we make this change, commit it, and push it up, we’re pinned to the digest, which currently matches “latest”. Again, if we want to force a scan instead of waiting, we can go back to the dashboard issue, check the box, and then look at the logs. We can now see that Renovate detected our WordPress container image along with its digest, so it can compare it to the current digest and open a PR if needed. If we look at the Dependency Dashboard issue, we can see that it detected WordPress pinned to the digest. We won’t see a PR right now because this is the latest digest, but when WordPress releases a new image with a new digest, we’ll get a pull request to replace it. Awesome, so that solves the “latest” problem.
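As a sketch, the container section of the WordPress deployment would change from the bare tag to the digest-pinned form (using the digest shown above):

```yaml
containers:
  - name: wordpress
    image: wordpress@sha256:75ba772cce073ec2aa6cec95c5ca229dfde9029c49874350a971499d567adae7
    ports:
      - containerPort: 80
```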
So that’s awesome: we have ways to handle Kubernetes manifests whether they’re pinned to a versioned tag or an unversioned one, but what about Helm charts? Well, Helm charts are just as easy. Let’s say we wanted to source control our MySQL Helm deployment; all we have to do is create our Helm values file and include the version as well as the repository.
```yaml
---
image:
  repository: bitnamicharts/mysql
  version: 9.9.0
persistence:
  enabled: true
  size: 10Gi
architecture: replication
auth:
  existingSecret: mysql-secret
primary:
  replicaCount: 1
```
If you don’t specify the repository it will default to Docker Hub, but as you can see here I’m getting this chart from Bitnami. After we commit and push this up, we should see a new dependency type of helm, and since Renovate detected an update, we should also see a pull request to update this file!
Updating Helm charts with Renovate is just as easy!
So now with Renovate Bot we can keep track of and upgrade our Kubernetes deployments and even Helm charts, but I bet you’re wondering how to deploy them. There are quite a few ways to deploy these resources using GitOps tools like Flux and ArgoCD, or even just a simple CI task that runs kubectl and/or helm. I have a few videos on this topic. What about Docker deployments? If you’re interested in how to automate deployments with Docker and Renovate, let me know in the comments below.
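As a rough illustration of the “simple CI task” option, a GitHub Actions workflow could apply the manifests whenever a Renovate PR is merged to main. This is only a sketch with hypothetical names: the `KUBECONFIG` secret and the file paths are my assumptions, not something from this post.

```yaml
# .github/workflows/deploy.yml (hypothetical sketch)
name: deploy
on:
  push:
    branches: [main]
jobs:
  apply:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - name: Apply manifests
        env:
          KUBECONFIG_DATA: ${{ secrets.KUBECONFIG }}
        run: |
          echo "$KUBECONFIG_DATA" > kubeconfig
          KUBECONFIG=kubeconfig kubectl apply -f deployment.yml -f service.yml
```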
Well, I learned a ton about Renovate Bot, how to add it to your Git repository, and how to automate pull requests when there are updates available, and I hope you learned something too! And remember, if you found anything in this post helpful, don’t forget to share. Thanks for reading!
🛍️ Check out the new Merch Shop at https://l.technotim.live/shop
⚙️ See all the hardware I recommend at https://l.technotim.live/gear
🚀 Don’t forget to check out the 🚀Launchpad repo with all of the quick start source files
Are you self-hosting lots of services at home in your homelab? Have you been port forwarding or using VPN to access your self-hosted services wishing you had certificates so that you can access them securely over SSL? Well after this video, you can! In this step by step tutorial we’ll walk through setting up Rancher and Kubernetes with a reverse proxy, Kubernetes Ingress, MetalLB, Traefik, Let’s Encrypt, and DNS giving you free certificates.
https://www.youtube.com/watch?v=kL8iGErULiw
- kubectl: https://kubernetes.io/docs/tasks/tools/install-kubectl/
- MetalLB installation: https://metallb.universe.tf/installation/
```shell
kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.9.5/manifests/namespace.yaml
kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.9.5/manifests/metallb.yaml
```
You should only ever run this step once.
```shell
kubectl create secret generic -n metallb-system memberlist --from-literal=secretkey="$(openssl rand -base64 128)"
```
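The secretkey above is nothing special: it’s just 128 random bytes, base64-encoded by `openssl`. You can convince yourself of that by generating one the same way and decoding it to count the bytes:

```shell
# Generate a key the same way the kubectl command does, then decode it
# to verify it really is 128 random bytes.
key="$(openssl rand -base64 128)"
printf '%s' "$key" | tr -d '\n' | base64 -d | wc -c
```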
sample config.yaml
```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 192.168.1.240-192.168.1.250
```
```shell
kubectl apply -f config.yaml
```
traefik sample answers yaml
Change “staging: true” to “staging: false” once you confirm it’s all working, to get the live certs.
```yaml
---
defaultImage: true
imageTag: "1.7.14"
serviceType: "LoadBalancer"
debug:
  enabled: false
rbac:
  enabled: true
ssl:
  enabled: true
  enforced: true
  permanentRedirect: false
acme:
  enabled: true
  email: "you@example.com"
  onHostRule: true
  staging: true
  logging: true
  challengeType: "dns-01"
  dnsProvider:
    name: "cloudflare"
    existingSecretName: "cloudflare-dns"
persistence:
  enabled: true
dashboard:
  enabled: true
  domain: "traefik.example.com"
  auth:
    basic: ""
```
https://hub.helm.sh/charts/stable/traefik
https://docs.traefik.io/https/acme/#providers
Be sure that your Traefik yaml matches the code above exactly, including whitespace. YAML is whitespace-sensitive.
My Storinator server from 45Drives is great, except for one thing: it’s a little loud for my home. It would be fine in a data center or a real network closet, but this is in my basement. I decided to swap out all the fans to make it quieter, and to install RGB fans along with a ZigBee controller so I can control them with home automation!
HUGE THANK YOU to Micro Center for Sponsoring this Video!
New Customers Exclusive – Get $25 off your purchase of any AMD and Intel Processor (limit one per customer): https://micro.center/1z7
Check out Micro Center’s PC Builder: https://micro.center/mrp
Submit your build to Micro Center’s Build Showcase: https://micro.center/ow4
Thanks again to 45drives for the Storinator! https://45drives.com
📦See all the parts in this kit here! 📦 https://kit.co/TechnoTim/smart-rgb-fan-conversation
Time Codes
00:00 - Making My Server Quiet
02:13 - Micro Center (Sponsor)
03:18 - Taking the Server Apart
04:17 - Changing the CPU Cooler
05:02 - How to Add Smart RGB to a Server
06:07 - Wiring Up the ZigBee Controller and Fans
07:20 - Testing and Pairing the ZigBee Controller
08:08 - Why Put RGB Fans in a Server?
08:42 - How Much Quieter Is It?
09:13 - What’s Next for the Server?
09:33 - Stream Highlight - I will buy an LTT Screwdriver
If you’ve been encrypting your secrets with SOPS and Age, you know how useful it is to keep your secrets safe from prying eyes. If you’re not familiar with encrypting your secrets with SOPS and Age, I highly recommend checking out a post I did a while back that shows how easy it is to encrypt your secrets and even hide them in plain sight in a Git repo. I am happy (and relieved) that I started doing this for all of my secrets.
This works great, until you need to rotate the encryption key that’s used to encrypt your secrets. I use Flux for GitOps, which helps me deliver changes to my Kubernetes cluster via code, and since I can commit my infrastructure, I can also commit my secrets as code (SOPS, or Secrets Operations). This means that all of my secret files (typically secret.sops.yaml) are encrypted using my key. But what happens when I need to change the key, either for good security hygiene or because it was compromised? The short answer is, there’s no easy way other than writing a little bit of code.
First you’ll need to generate a new age key file with:

```shell
age-keygen -o age.agekey
```
This will output an age.agekey file. Take note of its location.
Then you’ll want to execute this script in the folder where you have secrets that need to be updated.
This script isn’t anything groundbreaking, but hopefully it will help you update all of your secrets without having to change them manually.
```shell
#!/bin/bash

# Define the paths to the old/current and new age key files
SOPS_AGE_KEY_FILE=~/.config/sops/age/keys.txt
SOPS_AGE_KEY_FILE_NEW=~/.config/sops/age/age.agekey

# Define the commands to decrypt and encrypt the file
DECRYPT_COMMAND="sops --decrypt --age \$(cat $SOPS_AGE_KEY_FILE |grep -oP \"public key: \K(.*)\") --encrypted-regex '^(data|stringData)$' --in-place"
ENCRYPT_COMMAND="sops --encrypt --age \$(cat $SOPS_AGE_KEY_FILE_NEW |grep -oP \"public key: \K(.*)\") --encrypted-regex '^(data|stringData)$' --in-place"

# Find all the *.sops.yaml files recursively in the current directory and apply the decrypt and encrypt commands to them
find . -name "*.sops.yaml" -type f -print0 | while IFS= read -r -d '' file; do
  eval "$DECRYPT_COMMAND $file"
  eval "$ENCRYPT_COMMAND $file"
done
```
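The only mildly tricky part of the script is the `grep -oP "public key: \K(.*)"` pipeline, which pulls the public key out of the comment header that age-keygen writes. You can test it in isolation with a throwaway file (the key values below are made up):

```shell
# Fake key file in the same comment format age-keygen produces (values are made up).
cat > /tmp/fake-keys.txt <<'EOF'
# created: 2023-01-01T00:00:00Z
# public key: age1examplepublickeyvalue
AGE-SECRET-KEY-1EXAMPLE
EOF

# \K discards everything matched so far, so only the key itself is printed.
grep -oP "public key: \K(.*)" /tmp/fake-keys.txt
```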
It works like this:

- SOPS_AGE_KEY_FILE is the path to your existing keys.txt or age.agekey file.
- SOPS_AGE_KEY_FILE_NEW is the path to your new age.agekey file.
- It finds all *.sops.yaml files recursively, then decrypts each file using the old key and re-encrypts it using the new key.

After running this script you should see all of your secrets encrypted with the new key! You can now replace your old key file with your new one, so that SOPS_AGE_KEY_FILE references the new key. You should test decrypting your secrets before saving them.
```shell
sops --decrypt --age $(cat $SOPS_AGE_KEY_FILE |grep -oP "public key: \K(.*)") --encrypted-regex '^(data|stringData)$' secret.sops.yaml
```
If you are able to see the decrypted secret, you are all set as far as the key goes. Another thing you’ll need to do is delete your old secret in Kubernetes and replace it with the new one so that your secrets can be decrypted in your cluster!
```shell
kubectl delete -n flux-system secrets sops-age
```
Then create a new secret from the new file:
```shell
cat age.agekey |
kubectl create secret generic sops-age \
--namespace=flux-system \
--from-file=age.agekey=/dev/stdin
```
Now you should be all set! Be sure to keep your new age.agekey somewhere safe.
Meet Scrypted, an open source app that will let you connect almost any camera to any home hub, certified or not! You can connect popular devices from UniFi, Amcrest, Hikvision, Nest & Google, Tuya, Reolink, and many others to your home hub of choice, whether that’s Apple’s HomeKit, Google Home, Alexa, or even Home Assistant. This lets you choose and reuse your own devices and take advantage of the automation and integration you get with your smart home hub.
See the whole kit here! - https://kit.co/TechnoTim/smart-home-hubs-devices
(Affiliate links are included in this description. I may receive a small commission at no cost to you.)
I have cameras in and around my house. I have cameras on the front door, cameras inside my house, cameras that point outside of my house, cameras in my garage, cameras in my server room, and even cameras in my server rack….
All of these cameras work great, and I keep all of the recorded footage in my home, but what’s not great is that I’m not able to tap into popular home hubs like HomeKit, Alexa, Google Home, or even Home Assistant. That’s because many of these platforms have some very specific requirements for adding cameras to your home hub. Cameras might have to be certified, have to be compatible, and both manufacturers have to get along… and we know how that story goes.
Getting ecosystems to play nice together
So this left me using one app to check my video, while leaving a whole host of features that my home hub provides on the table: things like notifications within my ecosystem, the ability to trigger automation based on my ecosystem, cool features like picture-in-picture on other devices, and all the things that make home hubs, well, hubs. And while I do like my home security choice, I don’t like that it doesn’t integrate with my home hub of choice. It’s not my fault these two companies don’t get along, and I’m not going to buy all new cameras just to be compatible with my home hub. That’s where Scrypted comes in.
Scrypted is open source software that you host on your own machine that allows you to connect almost any camera to any hub, that’s right, certified or not.
You can connect popular devices from UniFi, Amcrest, Hikvision, Nest & Google, Tuya, Reolink, and many others to your home hub of choice, whether that’s Apple’s HomeKit, Google Home, Alexa, or even Home Assistant. This lets you choose and reuse your own devices and take advantage of the automation and integration you get with your home hub. That’s right, something the big players aren’t offering you: choice. It also means you don’t have to pay for a subscription to get video outside of your home; you can use Scrypted to connect to one of the major hubs or even an open source one like Home Assistant.
Scrypted supports many different camera integrations and many Home Hubs!
Here’s where it gets really cool…
Scrypted is pluggable, so it allows developers to create and update plugins within Scrypted, giving them and you lots of flexibility. Want to connect a Google Nest camera to Alexa? Sure! Want to connect a Reolink camera to Google Home? Absolutely! Want to connect your UniFi cameras to HomeKit? No problem! What about connecting some name-brand or no-name camera that only supports RTSP or ONVIF? Scrypted has you covered!
You’re probably wondering, what does all of this cost? Well, if you already have the hardware it costs nothing but a little bit of your time.
The next thing you’re probably asking is what hardware you need to get started. Or maybe you’re not even asking that, but I will tell you that you will need some hardware to get started 😀
The requirements are actually pretty low: you can run it on the latest Raspberry Pi, or on Windows, Mac, or Linux, and even in Docker, either standalone or on many NAS devices like Unraid or Synology. It’s easy to set up, and after you’ve connected your cameras to your home hub, you’ll be able to take advantage of all of the integrations your cameras offer as well as the automation your hub offers.
So that’s what I am going after today, setting up Scrypted to connect my UniFi cameras to my HomeKit hub, which is one of my Apple TVs so I can use all of my cameras as if they are HomeKit Certified. Now wait, even if you don’t want or have this combination of devices and hubs, you can still follow along to set this up with any camera or any hub.
So I first created a Linux machine and then installed Docker, which I highly recommend using, but if you don’t feel comfortable with Docker you can install Scrypted any other way you like. If you are using Docker, I recommend doing this on Ubuntu, but Windows, Mac, or any other version of Linux will work just fine. If you’re using a Linux machine I recommend using Docker and Portainer. Portainer is a great container management system for Docker with a great UI. The install is fast and painless and makes managing Docker really easy.
Once you’re in Portainer, all you need to do is connect to your Docker instance, add a container, and then set a few properties: the name of the container, the image name and tag, and a mapping for your data volume from the container to the local machine. The volume should be somewhere your Portainer machine can read and write to; for me it’s just a simple path to a folder on the machine. The last thing they recommend is setting the network to host mode, which means it will use the networking on the host instead of Docker networking. Once all that’s set, just deploy the container and you’re good to go.
Installing Scrypted with Portainer is simple!
Oh, and if you want to use Docker Compose, you can use this to get started quickly! Note, this also includes watchtower to update your stack automatically. If you don’t want to use watchtower, just comment out that section.
```yaml
version: "3.5"

# The Scrypted docker-compose.yml file typically resides at:
# ~/.scrypted/docker-compose.yml

# Example volumes for SMB (CIFS) and NFS.
# Uncomment only one.

# volumes:
#   nvr:
#     driver_opts:
#       type: cifs
#       o: username=[username],password=[password],vers=3.0,file_mode=0777,dir_mode=0777
#       device: //[ip-address]/[path-to-directory]
#   nvr:
#     driver_opts:
#       type: "nfs"
#       o: "addr=[ip-address],nolock,soft,rw"
#       device: ":[path-to-directory]"

services:
  scrypted:
    image: koush/scrypted
    environment:
      - SCRYPTED_WEBHOOK_UPDATE_AUTHORIZATION=Bearer SET_THIS_TO_SOME_RANDOM_TEXT
      - SCRYPTED_WEBHOOK_UPDATE=http://localhost:10444/v1/update
      # nvidia support
      # - NVIDIA_VISIBLE_DEVICES=all
      # - NVIDIA_DRIVER_CAPABILITIES=all
    # runtime: nvidia
    container_name: scrypted
    restart: unless-stopped
    network_mode: host

    devices:
      # hardware accelerated video decoding, opencl, etc.
      - /dev/dri:/dev/dri
      # uncomment below as necessary.
      # zwave usb serial device
      # - /dev/ttyACM0:/dev/ttyACM0
      # all usb devices, such as coral tpu
      # - /dev/bus/usb:/dev/bus/usb

    volumes:
      - ~/.scrypted/volume:/server/volume
      # modify and add the additional volume for Scrypted NVR
      # the following example would mount the /mnt/sda/video path on the host
      # to the /nvr path inside the docker container.
      # - /mnt/sda/video:/nvr

      # or use a network mount from one of the examples above
      # - type: volume
      #   source: nvr
      #   target: /nvr
      #   volume:
      #     nocopy: true

      # uncomment the following lines to expose Avahi, an mDNS advertiser.
      # make sure Avahi is running on the host machine, otherwise this will not work.
      # - /var/run/dbus:/var/run/dbus
      # - /var/run/avahi-daemon/socket:/var/run/avahi-daemon/socket

    # logging is noisy and will unnecessarily wear on flash storage.
    # scrypted has per device in memory logging that is preferred.
    logging:
      driver: "json-file"
      options:
        max-size: "10m"
        max-file: "10"
    labels:
      - "com.centurylinklabs.watchtower.scope=scrypted"

  # watchtower manages updates for Scrypted.
  watchtower:
    environment:
      - WATCHTOWER_HTTP_API_TOKEN=SET_THIS_TO_SOME_RANDOM_TEXT
      - WATCHTOWER_HTTP_API_UPDATE=true
      - WATCHTOWER_SCOPE=scrypted
      # remove the following line to never allow docker to auto update.
      # this is not recommended.
      - WATCHTOWER_HTTP_API_PERIODIC_POLLS=true
    image: containrrr/watchtower
    container_name: scrypted-watchtower
    restart: unless-stopped
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    labels:
      - "com.centurylinklabs.watchtower.scope=scrypted"
    ports:
      # The auto update port 10444 can be configured
      # Must match the port in the auto update url above.
      - 10444:8080
    # check for updates once an hour (interval is in seconds)
    command: --interval 3600 --cleanup --scope scrypted
```
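If you’d rather skip watchtower entirely, a minimal compose file with just the Scrypted service (same image, volume, and host networking as above) might look like this:

```yaml
version: "3.5"
services:
  scrypted:
    image: koush/scrypted
    container_name: scrypted
    restart: unless-stopped
    network_mode: host
    volumes:
      - ~/.scrypted/volume:/server/volume
```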
Once the container is running, you’ll want to go to the machine’s IP address on port 10443. Once there, you’ll be greeted with a sign-in page where you create an account and password. After signing in, you’ll see the Scrypted homepage!
The first thing I did was turn on dark mode, of course, and then went into the plugins page and clicked install plugins.
Here you’ll want to install the plugin for the platform you want to support. Scrypted supports many different cameras and many different hub platforms. For instance, you can search for Alexa and see the plugin for Amazon Alexa, or Google Home and see the integrations for Google Home, or even Amcrest if you want to find the Amcrest camera plugin! There are lots of supported cameras, but for me this is going to be UniFi, so I searched for UniFi and installed it. Once it’s installed, it will ask for a username and password for your UniFi device. We need to create one in UniFi Protect, but you’ll want to create a new local account and not provide yours!
Installing the UniFi Protect plugin for Scrypted
So we need to go into UniFi and create a new user. We’ll have to do this in the UniFi console. What I did was first create a new role that has Full Management access to Protect. The documentation says that you might be able to drop this down later to a read-only user, which I may do, but I created the user and gave it admin access to Protect only. Again, be sure to create a local account and set the permissions appropriately.
Once that user was created, I added the username and password, as well as the IP address of my UDM SE.
Once I saved my credentials I could then see all of my cameras and you can view them now, however since I am going to use HomeKit, I need to add that plugin as well.
You’ll need to create an account in UniFi Protect for Scrypted to use
So I searched for the HomeKit plugin and installed it. We don’t need to change anything in the plugin; I just reloaded the UniFi plugin and went back into my cameras. Now you’ll see some additional options, one being HomeKit. You’ll want to be sure that this is enabled. I also made sure that the Snapshot plugin was enabled too.
Once this was set up, all I had to do now was just add the cameras to HomeKit.
I did this by navigating to each individual camera in Scrypted, clicking on HomeKit, and then clicking on Pairing. It will show you a QR code which you can scan to add the camera to your home in HomeKit. I did this for all of my cameras, accepted the message about it not being an officially certified accessory, and then chose the default settings of streaming while I am home and while I am away. The reason I didn’t change any of this is that I still use UniFi Protect for camera storage, rather than storing footage on my Docker container. And for those counting, I have nine cameras…. Yes, 9 cameras…
Now that we have this set up, what can we do with it?
Well, now in the Home app I can see all of my cameras at a glance. I can see the latest snapshot of each camera and drill in further to see a live view. I can pin a camera too, so that I can multitask.
Finally! Notifications on my Apple TV!
I get notifications if someone presses the doorbell, and I can talk with them too. Not only do I get notifications on my phone, but I also get them on my MacBook, iPad, and even Apple TV, so just in case I am “super busy” watching something that’s “super important”, I can decide whether or not to get my lazy butt off the couch.
Here’s the other cool thing: since I have an Apple TV, I can even say “Hey Siri, show me my cameras” and it will show me all of my connected cameras. From there I can browse them, pick one to watch and even listen to, and even pin it to the screen so I can keep an eye on things, like when I’m expecting a delivery. If the cameras are in the same zone as other devices, I can interact with those devices too, like toggling the lights on my porch.
Picture-in-picture on my TV!
One of the things I like most about Scrypted is that I can use almost any camera I want and connect it to one of many platforms. I obviously connected UniFi cameras to HomeKit, but you can connect almost any camera to any platform. Want to connect some old PoE cameras to Alexa, Google Home, or even Home Assistant? No problem. That’s the beauty of Scrypted: it’s pluggable and can connect almost any camera to any home hub. Well, I learned a lot about Scrypted, HomeKit, and UniFi Protect, and I hope you learned something too. And remember, if you found anything in this video helpful, don’t forget to like and subscribe. Thanks for reading and watching!
What a week! I found this awesome open source software that let's me connect and stream almost ANY camera to ANY Smart Home Hub. No more vendor lock! It's called Scrypted and it's awesome!
— Techno Tim (@TechnoTimLive) May 13, 2023
Check it out!👉https://t.co/NdvoQydUEo pic.twitter.com/RIbFIKNJgp
🛍️ Check out the new Merch Shop at https://l.technotim.live/shop
⚙️ See all the hardware I recommend at https://l.technotim.live/gear
🚀 Don’t forget to check out the 🚀Launchpad repo with all of the quick start source files
Committing secrets to your Git repo can expose information like passwords, access tokens, and other sensitive information. Some might think that committing secrets to a private Git repo is OK, but I am here to tell you it’s not. If you’re going to commit secrets to a Git repo, private or public, you should encrypt them first using Mozilla SOPS (Secrets OPerationS) and age. SOPS is an editor of encrypted files that supports YAML, JSON, ENV, INI and BINARY formats and encrypts with AWS KMS, GCP KMS, Azure Key Vault, age, and PGP. age is a simple, modern, and secure file encryption tool, format, and Go library. It can encrypt and decrypt your files, making them safe enough to commit to your Git repos!
A HUGE thanks to Datree for sponsoring this video!
Combat misconfigurations. Empower engineers. https://www.datree.io
You can install sops by following this guide.
Test with:

```shell
sops -v
```

You should see:

```
sops 3.7.3 (latest)
```
You can install age by following this guide.

Test age with:

```shell
age -version
```

You should see:

```
v1.0.0
```

Test age-keygen with:

```shell
age-keygen -version
```

You should see:

```
v1.0.0
```
Now that we have age installed, we need to create a public and private key:

```shell
age-keygen -o key.txt
```

You should see:

```
age-keygen: warning: writing secret key to a world-readable file
Public key: age1epzmwwzw8n09slh0c7z0z52x43nnga7lkksx3qrh07tqz5v7lcys45428t
```
Let’s look at the contents:

```shell
cat key.txt
```

You should see:

```
# created: 2022-09-26T21:55:47-05:00
# public key: age1epzmwwzw8n09slh0c7z0z52x43nnga7lkksx3qrh07tqz5v7lcys45428t
AGE-SECRET-KEY-1HJCRJVK7EE3A5N8CRP8YSDUGZKNW90Y5UR2RGYAS8L279LFP6LCQU5ADNR
```
Remember this is a secret so keep this safe! Do not commit this!
Move the file into place:

```shell
mkdir ~/.sops
mv ./key.txt ~/.sops
```

Add it to our shell:

```shell
nano ~/.zshrc
# or nano ~/.bashrc
```
Add to the end of the file:

```shell
export SOPS_AGE_KEY_FILE=$HOME/.sops/key.txt
```
Source our shell:

```shell
source ~/.zshrc
# or source ~/.bashrc
```
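Since the encrypt commands later in this guide pull the public key out of key.txt with an inline grep pipeline each time, you can optionally extract it once into a variable. The `AGE_PUBLIC_KEY` name here is just my example, not something sops itself reads:

```shell
# Optional convenience: extract the age public key from key.txt once,
# so later encrypt commands can use $AGE_PUBLIC_KEY instead of the
# inline grep pipeline. AGE_PUBLIC_KEY is an arbitrary variable name.
export SOPS_AGE_KEY_FILE="$HOME/.sops/key.txt"
export AGE_PUBLIC_KEY="$(grep -oP 'public key: \K(.*)' "$SOPS_AGE_KEY_FILE")"
echo "$AGE_PUBLIC_KEY"
```

With that set, a later command like `sops --encrypt --age $(cat $SOPS_AGE_KEY_FILE | grep -oP "public key: \K(.*)") ...` can be shortened to `sops --encrypt --age $AGE_PUBLIC_KEY ...`.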
There are a few ways you can do this: you can encrypt in place or encrypt with an editor, but we’re going to do an in-place encryption.
This can be Kubernetes Secrets, Helm values, or just plain old YAML.
Create a secret with the following contents:

secret.yaml

```yaml
---
apiVersion: v1
kind: Secret
metadata:
  name: mysql-secret
  namespace: default
stringData:
  MYSQL_USER: root
  MYSQL_PASSWORD: super-Secret-Password!!!!
```
To encrypt:

```shell
sops --encrypt --age $(cat $SOPS_AGE_KEY_FILE | grep -oP "public key: \K(.*)") --encrypted-regex '^(data|stringData)$' --in-place ./secret.yaml
```
To decrypt:

```shell
sops --decrypt --age $(cat $SOPS_AGE_KEY_FILE | grep -oP "public key: \K(.*)") --encrypted-regex '^(data|stringData)$' --in-place ./secret.yaml
```
If you want to decrypt this secret on the fly and apply it to Kubernetes, encrypt first:

```shell
sops --encrypt --age $(cat $SOPS_AGE_KEY_FILE | grep -oP "public key: \K(.*)") --encrypted-regex '^(data|stringData)$' --in-place ./secret.yaml
```
Decrypt and pipe to kubectl:

```shell
sops --decrypt --age $(cat $SOPS_AGE_KEY_FILE | grep -oP "public key: \K(.*)") --encrypted-regex '^(data|stringData)$' ./secret.yaml | kubectl apply -f -
```
Check it with:

```shell
kubectl describe secrets mysql-secret
```
Then:

```shell
kubectl get secret mysql-secret -o jsonpath='{.data}'
```
Then:

```shell
kubectl get secret mysql-secret -o jsonpath='{.data.MYSQL_PASSWORD}' | base64 --decode
```
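That `base64 --decode` step is worth pausing on: Kubernetes only base64-encodes Secret data, which is reversible encoding, not encryption - exactly why encrypting with SOPS before committing matters. A quick round-trip illustrates it:

```shell
# base64 is reversible encoding, not encryption: anyone with the
# encoded value can recover the plaintext with --decode.
encoded="$(printf 'super-Secret-Password!!!!' | base64)"
echo "$encoded"
printf '%s' "$encoded" | base64 --decode   # prints the original password
```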
Install the VS Code extension; choose the beta version of the sops extension, because that one supports sops + age.

Don’t forget to add .decrypted~secret.yaml to your .gitignore.
Encrypt .env files

Make sure the extension is installed, then create secret.env with the following contents:

```
MYSQL_USER=superroot
MYSQL_PASSWORD="super-Secret-Password!!!!############"
```
Encrypt:

```shell
sops --encrypt --age $(cat $SOPS_AGE_KEY_FILE | grep -oP "public key: \K(.*)") -i secret.env
```
Decrypt:

```shell
sops --decrypt --age $(cat $SOPS_AGE_KEY_FILE | grep -oP "public key: \K(.*)") -i secret.env
```
Don’t forget to add .decrypted~secret.env to your .gitignore.
secret.json

```json
{
  "mySqlUser": "superroot",
  "password": "super-Secret-Password!!!!#######"
}
```
Encrypt:

```shell
sops --encrypt --age $(cat $SOPS_AGE_KEY_FILE | grep -oP "public key: \K(.*)") -i secret.json
```
Decrypt:

```shell
sops --decrypt --age $(cat $SOPS_AGE_KEY_FILE | grep -oP "public key: \K(.*)") -i secret.json
```
Don’t forget to add .decrypted~secret.json to your .gitignore.
secret.ini

```ini
[database]
user = superroot
password = super-Secret-Password!!!!1223
```
Encrypt:

```shell
sops --encrypt --age $(cat $SOPS_AGE_KEY_FILE | grep -oP "public key: \K(.*)") -i secret.ini
```
Decrypt:

```shell
sops --decrypt --age $(cat $SOPS_AGE_KEY_FILE | grep -oP "public key: \K(.*)") -i secret.ini
```
Don’t forget to add .decrypted~secret.ini to your .gitignore.
secret.sql

```sql
--- https://xkcd.com/327/
--- DO NOT USE
INSERT INTO Students VALUES ( 'Robert' ); DROP TABLE STUDENTS; --' )
```
Encrypt:

```shell
sops --encrypt --age $(cat $SOPS_AGE_KEY_FILE | grep -oP "public key: \K(.*)") --in-place ./secret.sql
```
Decrypt:

```shell
sops --decrypt --age $(cat $SOPS_AGE_KEY_FILE | grep -oP "public key: \K(.*)") --in-place ./secret.sql
```
If you’re thinking of doing GitOps with Flux, you can check out my video on this topic or see my documentation. You can do in-cluster decryption and fully automate decryption of secrets.
In cluster decryption with Flux
https://fluxcd.io/flux/guides/mozilla-sops/#configure-in-cluster-secrets-decryption
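As a rough sketch of what the Flux side looks like, following the guide linked above (the `sops-age` secret name, paths, and resource names here are assumptions based on that guide, not from this post):

```yaml
# Sketch: a Flux Kustomization with in-cluster SOPS decryption enabled.
# Assumes your age private key was stored in a secret named sops-age in
# the flux-system namespace, as the Flux guide above describes.
apiVersion: kustomize.toolkit.fluxcd.io/v1
kind: Kustomization
metadata:
  name: my-secrets
  namespace: flux-system
spec:
  interval: 10m
  path: ./cluster
  sourceRef:
    kind: GitRepository
    name: flux-system
  decryption:
    provider: sops
    secretRef:
      name: sops-age
```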
People often ask if it's OK to save secrets in code or config to a private git repo. I though this was ok until now...
— Techno Tim (@TechnoTimLive) October 1, 2022
If you're using Secrets in Kubernetes or ENVs in Docker I highly recommend encrypting them with Mozilla SOPS.@mozhacks https://t.co/r1NpBoGWe2 pic.twitter.com/sdPFq06WQM
🛍️ Check out the new Merch Shop at https://l.technotim.live/shop
⚙️ See all the hardware I recommend at https://l.technotim.live/gear
🚀 Don’t forget to check out the 🚀Launchpad repo with all of the quick start source files
So you’re a software engineer or a developer who wants to self-host your own code in your own homelab? Well this is the tutorial for you! In this step-by-step guide we’ll walk through setting up a repo, building and testing our own code (with unit tests) in a self-hosted GitLab CI runner in our CI pipeline, then we’ll build a Docker image and push it up to a container registry, then we’ll use kubectl in our CD pipeline to deploy our Docker container to our self-hosted Kubernetes cluster! This all happens in a couple of minutes and then we’ll truly have continuous integration and continuous delivery in our homelab!
1 - Set Up Kubernetes with Rancher
2 - Set up a reverse proxy and SSL with Traefik
3 - Expose Rancher and Kubernetes API Securely
See the app here:
https://github.com/techno-tim/techno-react
Docker file:
https://github.com/techno-tim/techno-react/blob/master/Dockerfile
Kubernetes deployment yaml
https://github.com/techno-tim/techno-react/blob/master/kubernetes/deployment.yaml
nginx config for your react application
https://github.com/techno-tim/techno-react/blob/master/nginx.conf
pbcopy for WSL on Windows: https://www.techtronic.us/pbcopy-pbpaste-for-wsl/
Example config.toml for your GitLab runner:
```toml
concurrent = 1
check_interval = 0

[session_server]
  session_timeout = 1800

[[runners]]
  name = "rancher-gitlab-runner"
  url = "https://gitlab.com"
  token = "your-gitlab-runner-token"
  executor = "docker"
  [runners.custom_build_dir]
  [runners.cache]
    [runners.cache.s3]
    [runners.cache.gcs]
  [runners.docker]
    tls_verify = false
    image = "docker:stable"
    privileged = false
    disable_entrypoint_overwrite = false
    oom_kill_disable = false
    disable_cache = false
    volumes = ["/var/run/docker.sock:/var/run/docker.sock", "/cache"]
    shm_size = 0
```
Example ~/.kube/config for your GitLab secret:
```yaml
apiVersion: v1
kind: Config
clusters:
- name: "cluster1"
  cluster:
    server: "https://your.rancher.url/k8s/clusters/c-cluster-id"
users:
- name: "cluster1"
  user:
    token: "your kubernetes token"

contexts:
- name: "cluster1"
  context:
    user: "cluster1"
    cluster: "cluster1"

current-context: "cluster1"
```
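One way to get that kubeconfig into GitLab is to store it base64-encoded in a CI/CD variable and decode it inside the deploy job. The `KUBE_CONFIG` variable name below is my assumption, not something GitLab requires:

```shell
# Encode the kubeconfig (default path shown) into a single line that
# can be pasted into a GitLab CI/CD variable, e.g. KUBE_CONFIG.
if [ -f "$HOME/.kube/config" ]; then
  base64 -w 0 "$HOME/.kube/config" > kubeconfig.b64
fi

# Then, inside the CD job, reverse it before calling kubectl:
# echo "$KUBE_CONFIG" | base64 --decode > ~/.kube/config
```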
🛍️ Check out the new Merch Shop at https://l.technotim.live/shop
⚙️ See all the hardware I recommend at https://l.technotim.live/gear
🚀 Don’t forget to check out the 🚀Launchpad repo with all of the quick start source files
When most people think about self-hosting services in their HomeLab, they often think of the last mile. By last mile I mean the very last hop before a user accesses your services. This last hop, whether that’s using certificates or a reverse proxy, is incredibly important, but it’s also important to know that security starts at the foundation of your HomeLab. Today, we’ll work our way up from hardware security, to OS, to networking, to containers, to firewalls, IDS/IPS, reverse proxies, auth proxies for authentication and authorization, and even lean in to an external provider like Cloudflare.
A HUGE thanks to Micro Center for sponsoring this video!
New Customers Exclusive – Get a Free 240gb SSD at Micro Center: https://micro.center/0ef37a
🛍️ Check out the new Merch Shop at https://l.technotim.live/shop
⚙️ See all the hardware I recommend at https://l.technotim.live/gear
🚀 Don’t forget to check out the 🚀Launchpad repo with all of the quick start source files
Slack is a great chat and communication tool used by small and large businesses as well as for personal use. Slack has a great API and great official Node.js clients that help you automate many features of Slack. If you’re thinking of building a bot for Slack, be sure to follow this step-by-step tutorial on how to build a Slack bot in JavaScript using the Slack API and the Node Slack SDK. With this SDK, we can connect to the Slack Web API, hook into events using the RTM API, and build a bot in just a few minutes that you can add to your Slack server today.
🛍️ Check out the new Merch Shop at https://l.technotim.live/shop
⚙️ See all the hardware I recommend at https://l.technotim.live/gear
🚀 Don’t forget to check out the 🚀Launchpad repo with all of the quick start source files
Streamlabs OBS for macOS is here! In this video we’ll walk through setting up Streamlabs step by step. We’ll install Streamlabs OBS, set up desktop audio with iShowU Audio Capture so you can capture desktop audio, configure our webcam and game capture with a Cam Link, set up our alerts, configure the best possible streaming settings for Streamlabs, adjust our streaming layout, and go live.
🛍️ Check out the new Merch Shop at https://l.technotim.live/shop
⚙️ See all the hardware I recommend at https://l.technotim.live/gear
🚀 Don’t forget to check out the 🚀Launchpad repo with all of the quick start source files
I decided to go with another rack in my home but this time much smaller! Thanks to Rackstuds for sending a few packs of Rackstuds!
Products in this video:
(Affiliate links are included in this description. I may receive a small commission at no cost to you.)
Over the years I’ve gone from machines on a shelf, to racking machines in an open rack, to centralizing everything and racking it in an enclosed 36u rack, to what I am building now, and that’s this new 12u closed rack. It’s a smaller version of my 36u rack with a few changes that make this the perfect rack for a small office, home office, or even just at home. But is it right for you? Let’s check it out and see.
Introducing my 12u Sysracks Server Rack
This is the Sysracks 12u 24” Wall mount 19” enclosure server rack. It measures almost 24” wide, almost 24” deep, and almost 25” tall. It’s a standard 19” rack, and if you’re wondering why they call it a 19” rack, 19” refers to the mountable width of servers and equipment. It’s made of steel and has a powder paint finish in either gray or black. It has 2 brush panels with cable managers at the top and bottom to help with cable management and block dust from coming in. All around the case you are going to see lots of perforated edges, especially around the front doors. This is to help passively cool the entire enclosure. It also includes active cooling with this top 120mm fan module that connects to any standard outlet. There are models, especially the larger units like my 36u rack, that come with temperature control units, but I opted to keep this one simple and will probably add temp sensors and smart switches to control this fan if it ever gets to that point. Wait, I thought I said I was keeping it simple??? Speaking of power, it also comes with a PDU where you can plug in up to 8 devices, and it has a secure on/off switch covered in a detachable cap.
I finally realized why they call these 19” racks
In the back we can see wall mounting hooks which makes this easy to wall mount if you choose to do so but I am choosing to attach some casters so I can move it around freely in my office if needed. The back panel is attached with screws but can be easily removed if needed.
On each side we have locking removable panels that can help you secure your enclosure and prevent anyone from getting in while still giving you access to get inside and make adjustments to your equipment.
Coming around to the front we have this nice glass door that has perforated edges on the side and a handle that can also be locked if needed. I like having a glass panel door because it lets me easily see inside to check on my equipment plus it looks cool with all the blinky lights.
Inside the rack has 4 posts to rack up to 12u of equipment making it great for small servers, networking equipment, DVRs, AV equipment, and anything else that can fit in a 19” short depth rack. It also comes with this shelf for equipment however I am not sure if I am going to use it or not yet.
Putting it together was so much easier than my 36u rack. You can do it alone, but it might help to have someone for the very first step, and that’s putting together the frame. Don’t worry though, I was able to manage it alone. After securing the frame, you’ll then need to mount all of the posts so you can rack your equipment. Before you go too far like I did, if you’re going to use the supplied shelf, be sure you adjust the posts so that you can mount the shelf later if needed.
The back panel can be attached with a few screws. While I would have loved to see a door like on the 36u model, it makes sense for this to be a panel since it’s also wall mountable.
The sides are removable and can easily pop in and out with these clips. It also comes with a lock and key to secure it if you like.
Assembling the rack was pretty easy, you can do it with one person
This model supports both legs and casters however it only ships with the legs. If you’re going to use casters too you’ll need to pick some up or buy them separately. I did choose to go with casters because I want to move it around the room when needed. The casters lock in place and are very secure, so secure that I didn’t worry one bit about assembling this on my workbench.
Attaching the front door is pretty easy, but is a little challenging using the shims to get the door to hang just right. I love the look of the rack and I think the perforated edges and glass give it that premium feel. The door has a handle to keep the door shut and can also be locked with the included keys.
As I mentioned, the back is removable via screws. I do wish it was a door, but it’s easy enough to take off, and most people are going to wall mount this anyway. Once it’s all put together, it’s pretty easy to work on and get inside of the rack when it’s empty, with plenty of room to work on everything I plan on installing.
You can see the vents on the top for cables as well as the fan for cooling. I am glad I have a fan however it doesn’t have a switch to easily toggle on and off so I will end up wiring this up to a smart switch and put a temperature control inside if I ever really need to turn this fan on.
I started out by installing my UniFi 24 port POE switch. This was a switch that I replaced in my other rack but decided to hang on to it for this rack. I don’t think I will be using all 24 ports here in my office but it’s better than buying another switch.
As you can see I am using RackStuds for this install. RackStuds reached out and sent me a few packs of studs including their new 1ru rack studs. These are awesome and so simple to use. You just squeeze them, pop them in, and then hook up your devices. These new 1ru RackStuds are great for 1u devices like my switch and PDU I put in back. Simple solution. But then I decided to put them to the test. I wanted to rack mount my UPS and that thing weighs 22 lbs and only has rack ears in the front rather than the front and back. I tightened all 4 using their combo pack studs and so far so good. RackStuds are able to hold them without issue and without sagging. Oh, and I wasn’t paying attention when racking my UPS and I racked it upside down, which I then quickly flipped around after noticing, but it was super simple with RackStuds. If you’re interested in the ones I used I’ll have some links below.
This was my first time using Rackstuds, they were really easy to use.
After installing the UPS, the network switch, and my PDU I took a look at the rack. It’s a really nice rack for short depth items, not to mention that you can use shelves for anything that can’t be rack mounted natively. I am really impressed by the build quality and attention to detail, glass door, and all of the other features - especially for the price. At just 210 dollars I was expecting a lot less, but what I got was a perfect rack for my office. This is going to house a few projects coming up, so be sure you’re subscribed to see what else I am going to put in it. Well, I learned a ton about racking smaller components and building a mini rack, and I hope you learned something too. And remember, if you found anything in this post helpful, don’t forget to share!
All in all, I am super happy with this new rack!
I decided to go with another server rack in my home but this time much smaller!
— Techno Tim (@TechnoTimLive) July 26, 2023
Check it out!
👉https://t.co/CncKenusZx pic.twitter.com/mwbkO1fA4c
🛍️ Check out the new Merch Shop at https://l.technotim.live/shop
⚙️ See all the hardware I recommend at https://l.technotim.live/gear
🚀 Don’t forget to check out the 🚀Launchpad repo with all of the quick start source files
I’ve been on a quest looking for a new server rack for the HomeLab in my home. I’ve outgrown my current 18u open frame rack and decided to give a 32u Sysracks Enclosed Rack a try! Join me as we put together this server rack and test out all of the features, and I’ll let you know my thoughts about this brand new server rack!
A HUGE thank you to Sysracks for sending me this rack!
Check out their selection of racks at https://sysracks.com
A HUGE thank you to Micro Center for sponsoring this video!
New Customer Exclusive – Free 256GB SSD In-Store: https://micro.center/yi0
Check out Micro Center’s Custom PC Builder: https://micro.center/3dq
Submit your build to Micro Center’s Build Showcase: https://micro.center/lsn
Shop Micro Center’s Black Friday Deals: https://micro.center/rgu
📦 See a collection of Sysracks racks here: https://kit.co/TechnoTim/sysracks-server-racks
00:00 - Why get a new Server Rack?
01:14 - Sysracks 32u Server & Features
02:22 - Micro Center (Sponsor)
03:35 - Assembling the Rack
07:38 - Exploring the Rack Features
09:39 - Checking Out the Temperature Control Unit
11:04 - My Thoughts About the Sysracks Server Rack
13:42 - Stream Highlight - “The grow room isn’t big enough for 2 racks!”
I’ve been on a quest looking for a new server rack for my HomeLab and I think I’ve found one that fits my needs! I’ve decided to give a new enclosed rack a try! https://t.co/BS4TMHo3Qw pic.twitter.com/CCGJIiWXsu
— Techno Tim (@TechnoTimLive) November 12, 2022
🛍️ Check out the new Merch Shop at https://l.technotim.live/shop
⚙️ See all the hardware I recommend at https://l.technotim.live/gear
🚀 Don’t forget to check out the 🚀Launchpad repo with all of the quick start source files
Tdarr is a distributed transcoding system that runs on Windows, Mac, Linux, Arm, Docker, and even Unraid. It uses a server with one or more nodes to transcode videos into any format you like. Today, we’ll set up the Docker and Windows versions of Tdarr using a GPU to regain up to 50% of your disk space. I converted my video collection to h265 using Tdarr and saved over 700 GB of disk space.
A HUGE THANKS to our sponsor, Micro Center!
New Customers Exclusive – Get a Free 256gb SSD at Micro Center: https://micro.center/a643c4
docker-compose.yml

```yaml
version: "3.4"
services:
  tdarr:
    container_name: tdarr
    image: ghcr.io/haveagitgat/tdarr:latest
    restart: unless-stopped
    network_mode: bridge
    ports:
      - 8265:8265 # webUI port
      - 8266:8266 # server port
      - 8267:8267 # Internal node port
    environment:
      - TZ=America/Chicago
      - PUID=1000
      - PGID=1000
      - UMASK_SET=002
      - serverIP=0.0.0.0
      - serverPort=8266
      - webUIPort=8265
      - internalNode=true
      - nodeID=MyInternalNode
      - nodeIP=0.0.0.0
      - nodePort=8267
      - NVIDIA_DRIVER_CAPABILITIES=all
      - NVIDIA_VISIBLE_DEVICES=all
    volumes:
      - /path/to/server:/app/server
      - /path/to/configs:/app/configs
      - /path/to/logs:/app/logs
      - /path/to/media/:/media
      - /path/to/temp/:/temp
    deploy:
      resources:
        reservations:
          devices:
            - capabilities:
                - gpu
```
Tdarr_Node_Config.json

```json
{
  "nodeID": "Windows-Node",
  "nodeIP": "192.168.0.100",
  "nodePort": "8267",
  "serverIP": "192.168.0.101",
  "serverPort": "8266",
  "handbrakePath": "",
  "ffmpegPath": "",
  "mkvpropeditPath": "",
  "pathTranslators": [
    {
      "server": "/media/",
      "node": "C:/media"
    },
    {
      "server": "/temp",
      "node": "C:/temp"
    }
  ],
  "platform_arch": "win32_x64_docker_false",
  "logLevel": "INFO"
}
```
🛍️ Check out the new Merch Shop at https://l.technotim.live/shop
⚙️ See all the hardware I recommend at https://l.technotim.live/gear
🚀 Don’t forget to check out the 🚀Launchpad repo with all of the quick start source files
Today marks a very special day for me. It’s something I’ve been working on for quite some time, and it’s finally ready for everyone to see! 🎉
Today, I launched the Techno Tim Shop with its first drop: the “Dark Mode Everything” collection. This collection was designed in-house, quite literally, by my wife in our own house, and it represents my love for Dark Mode. There’s something for everyone in this collection.
Keep in mind that this is not a print-on-demand service, so supplies are truly limited for this initial drop. Thank you all for helping me get to this point! I couldn’t have done it without you!
You can see all of the items I offer in my shop here:
(Affiliate links may be included. I may receive a small commission at no cost to you.)
Today marks a very special day for me. It's something I've been working on for quite some time, and it's finally ready for everyone to see! 🎉
— Techno Tim (@TechnoTimLive) February 17, 2024
Today, I launched the Techno Tim Shop with its first drop: the "Dark Mode Everything" collection. https://t.co/ZBMUFcvjjJ pic.twitter.com/KZYRFYAoAB
🛍️ Check out the new Merch Shop at https://l.technotim.live/shop
⚙️ See all the hardware I recommend at https://l.technotim.live/gear
🚀 Don’t forget to check out the 🚀Launchpad repo with all of the quick start source files
🤝 Support me and help keep this site ad-free!
Today, we’re going to set up and configure Terraform on your machine so we can start using it. Then we’ll configure cf-terraforming to import our Cloudflare state and configuration into Terraform. After that we’ll set up a GitHub repo and configure GitHub Actions so you have CI and CD for deploying your infrastructure automatically using a Git flow. If you’re new to Terraform, that’s fine! This is a beginner tutorial for Terraform, and by the end of it you will feel like an expert!
Terraform is a powerful infrastructure as code tool to help you create and manage infrastructure across multiple public or private clouds. It can help you provision, configure, and manage infrastructure using their simple and human readable configuration language. Using Terraform helps you automate your infrastructure and your DevOps workflow, do it consistently, and allows you to collaborate with teams in Git.
There are 7 key areas where Terraform shines:
Automation: Terraform enables automation of infrastructure provisioning, configuration, and management, which reduces human error and saves time.
Consistency: Terraform ensures that your infrastructure is consistent across all environments, from development to production.
Collaboration: Terraform allows multiple teams to work together on infrastructure changes, using version control systems like Git.
Cloud-agnostic: Terraform supports various cloud providers, including AWS, Google Cloud, and Microsoft Azure, allowing you to use the same tool to manage resources across different clouds.
Scalability: Terraform is designed to handle large-scale infrastructure deployments and can easily manage thousands of resources.
Reusability: Terraform modules enable you to reuse code and infrastructure components across multiple projects, making it easier to manage infrastructure at scale.
Flexibility: Terraform is highly flexible and can be extended through plugins to integrate with other tools and services.
This will work on Ubuntu and Windows + WSL
Install Terraform. For other platforms, see the official guide.

Install dependencies:
```shell
sudo apt update
sudo apt install software-properties-common gnupg2 curl
```
Import the GPG key:

```shell
curl https://apt.releases.hashicorp.com/gpg | gpg --dearmor > hashicorp.gpg
sudo install -o root -g root -m 644 hashicorp.gpg /etc/apt/trusted.gpg.d/
```
Add the HashiCorp repository:

```shell
sudo apt-add-repository "deb [arch=$(dpkg --print-architecture)] https://apt.releases.hashicorp.com $(lsb_release -cs) main"
```
Install Terraform:

```shell
sudo apt install terraform
```
Check the version:

```shell
terraform --version
```

You should see something like:

```
Terraform v1.4.0
on linux_amd64
```
First create a simple Terraform config:

```hcl
terraform {
  required_providers {
    cloudflare = {
      source  = "cloudflare/cloudflare"
      version = "~> 3.0"
    }
  }
}

provider "cloudflare" {
  api_token = var.cloudflare_api_token
}

# Create a record
resource "cloudflare_record" "www" {
  # ...
}

# Create a page rule
resource "cloudflare_page_rule" "www" {
  # ...
}
```
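Note that the provider block references `var.cloudflare_api_token`, which needs to be declared somewhere. A minimal sketch, assuming the conventional `variables.tf` file name and the `TF_VAR_` environment-variable mechanism from Terraform's docs:

```hcl
# variables.tf - declares the variable the provider block references.
# Supply the value via the environment rather than committing it, e.g.:
#   export TF_VAR_cloudflare_api_token='your-token'
variable "cloudflare_api_token" {
  type      = string
  sensitive = true
}
```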
Here is my .editorconfig:

```ini
# http://editorconfig.org
root = true

[*]
indent_style = space
indent_size = 2
charset = utf-8
trim_trailing_whitespace = true
insert_final_newline = true

[*.md]
trim_trailing_whitespace = false
```
Initialize Terraform and the Cloudflare provider:

```shell
terraform init
```
We should see that it installed plugins, and it should have created a lock file.
Next we’ll want to review our plan, where we can see our proposed changes:

```shell
terraform plan
```
Next we’ll apply our changes:

```shell
terraform apply
```
After applying, we can verify the results on the Cloudflare dashboard.
We can also test with nslookup:

```shell
nslookup yoursite.example.com
```
Check Cloudflare’s site.
If we run plan again, we should see that there isn’t any work to do:

```shell
terraform plan
```
We will need to import our Cloudflare state into our local Terraform state.
An important point to understand about Terraform is that it can only manage configuration it created or was explicitly told about after the fact. The reason for this limitation is that Terraform expects to be authoritative for the resources it manages. It relies on two types of files to understand what resources it controls and what state they are in. Terraform determines when and how to make changes from the following:
- A configuration file (ending in .tf) that defines the configuration of resources for Terraform to manage. This is what you worked with in the tutorial steps.
- A local state file that maps the resource names defined in your configuration file — for example, cloudflare_load_balancer.www-lb — to the resources that exist in Cloudflare.
https://developers.cloudflare.com/terraform/advanced-topics/import-cloudflare-resources/
So this means that we need to sync the remote state of Cloudflare, down to our local state.
This is where cf-terraforming can help.
Check for the latest version here: https://github.com/cloudflare/cf-terraforming/tags
Update this command with the latest tag
```shell
curl -L https://github.com/cloudflare/cf-terraforming/releases/download/v0.11.0/cf-terraforming_0.11.0_linux_amd64.tar.gz -o cf-terraforming.tar.gz

tar -xzf cf-terraforming.tar.gz

rm cf-terraforming.tar.gz

sudo mv ./cf-terraforming /usr/local/bin

sudo chmod +x /usr/local/bin/cf-terraforming
```
Then we need to update our .zshrc or .bashrc with our variables:

```shell
nano ~/.zshrc
```
```shell
export CLOUDFLARE_API_TOKEN='12345'
export CLOUDFLARE_ZONE_ID='abcde'
```
Then source your shell:

```shell
source ~/.zshrc
```
Now let’s export the Cloudflare state (be sure you have copied your variables into your shell, or run the export commands above):

```shell
cf-terraforming generate \
  --resource-type "cloudflare_record" \
  --zone $CLOUDFLARE_ZONE_ID > imported.tf
```
Look at the file and copy the contents into your cloudflare.tf, then run:

```shell
terraform plan
```
Terraform thinks that we need to apply all of these resources, even though they exist.
We need to import them into our local state.
```shell
cf-terraforming import \
  --resource-type "cloudflare_record" \
  --zone $CLOUDFLARE_ZONE_ID
```
This will output a lot of commands; we now need to run them to import the resources into our state. All you need to do is copy and paste the commands into your terminal. This will populate your local state, which you can see in terraform.tfstate.
If we run terraform plan now, we can see that there aren’t any changes.
Be sure to sign up for an account and then add your CLOUDFLARE_API_TOKEN as an environment variable in Terraform Cloud. Mark it as sensitive.

Then you’ll want to update your cloudflare.tf. It should look like this:
```hcl
cloud {
  hostname     = "app.terraform.io"
  organization = "your org"

  workspaces {
    name = "Cloudflare"
  }
}
```
Your cloudflare.tf file should now look like:
```hcl
terraform {
  cloud {
    hostname     = "app.terraform.io"
    organization = "your org"

    workspaces {
      name = "Cloudflare"
    }
  }
  required_providers {
    cloudflare = {
      source  = "cloudflare/cloudflare"
      version = "~> 3.0"
    }
  }
}

provider "cloudflare" {
  api_token = var.cloudflare_api_token
}

# Create a record
resource "cloudflare_record" "www" {
  # ...
}

# Create a page rule
resource "cloudflare_page_rule" "www" {
  # ...
}
```
Then run:

```shell
terraform init
```
This will prompt you to sign in and then import your local state into Terraform cloud.
If you want to create a CI / CD pipeline with GitHub actions, you’ll need to create a new repo at GitHub
Here is my .gitignore
.terraform/
terraform.tfstate*
Convert your local folder into a git repo:
first, cd
into your folder
Note: Be sure not to commit any of your secrets to git! This includes API tokens, terraform state, and any other files that might include sensitive information
git init
git add .
git commit -m "first commit"
git branch -M main
git remote add origin git@github.com:username/your-repo-name.git
git push -u origin main
To create a branch, add files, commit, and push
git checkout -b my-new-branch
git add .
git commit -m "fix(terraform): made some changes"
git push --set-upstream origin my-new-branch
Be sure you have created a secret TF_API_TOKEN
with your Terraform API token.
For reference, here is the terraform GitHub Action (with my bug fix)
name: 'Terraform'

on:
  push:
    branches: [ "main" ]
  pull_request:

permissions:
  contents: read

jobs:
  terraform:
    name: 'Terraform'
    runs-on: ubuntu-latest
    environment: production

    # Use the Bash shell regardless of whether the GitHub Actions runner is ubuntu-latest, macos-latest, or windows-latest
    defaults:
      run:
        shell: bash

    steps:
      # Checkout the repository to the GitHub Actions runner
      - name: Checkout
        uses: actions/checkout@v3

      # Install the latest version of Terraform CLI and configure the Terraform CLI configuration file with a Terraform Cloud user API token
      - name: Setup Terraform
        uses: hashicorp/setup-terraform@v2
        with:
          cli_config_credentials_token: ${{ secrets.TF_API_TOKEN }}

      # Initialize a new or existing Terraform working directory by creating initial files, loading any remote state, downloading modules, etc.
      - name: Terraform Init
        run: terraform init

      # Checks that all Terraform configuration files adhere to a canonical format
      - name: Terraform Format
        run: terraform fmt -check

      # Generates an execution plan for Terraform
      - name: Terraform Plan
        run: terraform plan -input=false

      # On push to "main", build or change infrastructure according to Terraform configuration files
      # Note: It is recommended to set up a required "strict" status check in your repository for "Terraform Cloud". See the documentation on "strict" required status checks for more information: https://help.github.com/en/github/administering-a-repository/types-of-required-status-checks
      - name: Terraform Apply
        if: github.ref == 'refs/heads/main' && github.event_name == 'push'
        run: terraform apply -auto-approve -input=false
At this point you should be able to run Terraform with your Cloudflare state synced to Terraform Cloud and GitHub Actions running in CI / CD, so you can start deploying your infrastructure as code!
Over the past few weeks I learned all about Terraform and it's awesome! I converted my Cloudflare settings to code and deploy it with CI / CD using GitHub Actions!
— Techno Tim (@TechnoTimLive) March 25, 2023
You can check it out here:
👉 https://t.co/vUnvV8m3Mh#terraform #cloudflare #github #homelab pic.twitter.com/2oWkhtZshu
🛍️ Check out the new Merch Shop at https://l.technotim.live/shop
⚙️ See all the hardware I recommend at https://l.technotim.live/gear
🚀 Don’t forget to check out the 🚀Launchpad repo with all of the quick start source files
Let’s compare Touch Portal to Stream Deck. We’ll walk through some of the similarities and differences between the free Touch Portal software and the Stream Deck hardware/software combination. We’ll set everything up and configure it in a step-by-step guide, clone our Stream Deck interface for OBS using Touch Portal and a mobile device, review features and experiences, and then choose a winner in the Touch Portal vs. Stream Deck face-off!
In today’s Traefik tutorial we’ll get FREE Wildcard certificates to use in our HomeLab and with all of our internal self-hosted services. We’re going to set up Traefik 3 in Docker and get Let’s Encrypt certificates using Cloudflare as our DNS Provider (we’ll cover how to set up others too). Then we’ll configure local DNS using PiHole (or any other local DNS) to route to our services that are now protected with secure certificates!
Looking to do this same thing in Kubernetes? Check out traefik + cert-manager on Kubernetes
Looking for the Traefik + Portainer guide? Check out traefik + portainer on Docker
For reference, the following folder structure was used:
./traefik
├── data
│   ├── acme.json
│   ├── config.yml
│   └── traefik.yml
├── cf_api_token.txt
└── docker-compose.yml
You can do this with other DNS providers too!
See this post on how to install docker
and docker-compose
Create folder for your compose and mounts
mkdir docker_volumes
cd docker_volumes
then we’ll create a folder to hold traefik files
mkdir traefik
cd traefik
create docker compose file and add contents
touch docker-compose.yaml
nano docker-compose.yaml
docker-compose.yaml
version: "3.8"

services:
  traefik:
    image: traefik:v3.0
    container_name: traefik
    restart: unless-stopped
    security_opt:
      - no-new-privileges:true
    networks:
      - proxy
    ports:
      - 80:80
      - 443:443
      # - 443:443/tcp # Uncomment if you want HTTP3
      # - 443:443/udp # Uncomment if you want HTTP3
    environment:
      CF_DNS_API_TOKEN_FILE: /run/secrets/cf_api_token # note using _FILE for docker secrets
      # CF_DNS_API_TOKEN: ${CF_DNS_API_TOKEN} # if using .env
      TRAEFIK_DASHBOARD_CREDENTIALS: ${TRAEFIK_DASHBOARD_CREDENTIALS}
    secrets:
      - cf_api_token
    env_file: .env # use .env
    volumes:
      - /etc/localtime:/etc/localtime:ro
      - /var/run/docker.sock:/var/run/docker.sock:ro
      - ./data/traefik.yml:/traefik.yml:ro
      - ./data/acme.json:/acme.json
      # - ./data/config.yml:/config.yml:ro
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.traefik.entrypoints=http"
      - "traefik.http.routers.traefik.rule=Host(`traefik-dashboard.local.example.com`)"
      - "traefik.http.middlewares.traefik-auth.basicauth.users=${TRAEFIK_DASHBOARD_CREDENTIALS}"
      - "traefik.http.middlewares.traefik-https-redirect.redirectscheme.scheme=https"
      - "traefik.http.middlewares.sslheader.headers.customrequestheaders.X-Forwarded-Proto=https"
      - "traefik.http.routers.traefik.middlewares=traefik-https-redirect"
      - "traefik.http.routers.traefik-secure.entrypoints=https"
      - "traefik.http.routers.traefik-secure.rule=Host(`traefik-dashboard.local.example.com`)"
      - "traefik.http.routers.traefik-secure.middlewares=traefik-auth"
      - "traefik.http.routers.traefik-secure.tls=true"
      - "traefik.http.routers.traefik-secure.tls.certresolver=cloudflare"
      - "traefik.http.routers.traefik-secure.tls.domains[0].main=local.example.com"
      - "traefik.http.routers.traefik-secure.tls.domains[0].sans=*.local.example.com"
      - "traefik.http.routers.traefik-secure.service=api@internal"

secrets:
  cf_api_token:
    file: ./cf_api_token.txt

networks:
  proxy:
    external: true
mkdir data
cd data
touch acme.json
chmod 600 acme.json
touch traefik.yml
nano traefik.yml
traefik.yml
contents:
api:
  dashboard: true
  debug: true
entryPoints:
  http:
    address: ":80"
    http:
      redirections:
        entryPoint:
          to: https
          scheme: https
  https:
    address: ":443"
serversTransport:
  insecureSkipVerify: true
providers:
  docker:
    endpoint: "unix:///var/run/docker.sock"
    exposedByDefault: false
  # file:
  #   filename: /config.yml
certificatesResolvers:
  cloudflare:
    acme:
      email: youremail@email.com
      storage: acme.json
      # caServer: https://acme-v02.api.letsencrypt.org/directory # prod (default)
      caServer: https://acme-staging-v02.api.letsencrypt.org/directory # staging
      dnsChallenge:
        provider: cloudflare
        # disablePropagationCheck: true # uncomment if you have issues pulling certificates through Cloudflare; setting this to true skips waiting for the TXT record to propagate to all authoritative name servers
        # delayBeforeCheck: 60s # uncomment along with disablePropagationCheck to give the TXT record time to propagate before verification is attempted
        resolvers:
          - "1.1.1.1:53"
          - "1.0.0.1:53"
touch cf_api_token.txt
nano cf_api_token.txt
docker network create proxy
Paste your Cloudflare API token into the file.
Make sure you have htpasswd
installed.
To install it on Linux:
sudo apt update
sudo apt install apache2-utils
macOS: htpasswd should already be installed.
Windows: htpasswd.exe should already be installed.
Generate a credential pair:
echo $(htpasswd -nB user) | sed -e s/\\$/\\$\\$/g
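The sed at the end doubles every `$` in the bcrypt hash, because Docker Compose treats a single `$` as variable interpolation and would mangle the hash otherwise. A quick sanity check with a made-up hash:

```shell
# A made-up bcrypt-style hash -- every "$" must become "$$" so that
# docker compose doesn't try to interpolate it as a variable reference.
hash='user:$2y$05$abcdef'
echo "$hash" | sed -e 's/\$/\$\$/g'
# prints: user:$$2y$$05$$abcdef
```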
touch .env
nano .env
Paste your credential pair, e.g.
TRAEFIK_DASHBOARD_CREDENTIALS=user:$$2y$$05$$lSaEi.G.aIygyXRdiFpt7OqmUMW9QUG5I1N.j0bXoXxIjxQmoGOWu
docker compose up -d --force-recreate
Common ways to troubleshoot
docker ps
docker logs traefik
docker exec -it traefik /bin/sh
Inside of the container:
top
ls
cat acme.json
cat traefik.yml
ls /run/secrets
cat /run/secrets/cf_api_token
echo ${CF_DNS_API_TOKEN_FILE}
echo ${TRAEFIK_DASHBOARD_CREDENTIALS}
nslookup traefik-dashboard.local.example.com
Once your staging certificates are issued successfully, switch traefik.yml over to the production caServer:

...
      caServer: https://acme-v02.api.letsencrypt.org/directory # prod (default)
      # caServer: https://acme-staging-v02.api.letsencrypt.org/directory # staging
...
Clear out the existing staging certificates
cd data
nano acme.json
Clear and save
Restart the stack
docker compose up -d --force-recreate
Create a test service, e.g. NGINX:

mkdir nginx
cd nginx
touch docker-compose.yaml
nano docker-compose.yaml
Contents of docker-compose.yaml
version: '3.8'
services:
  nginx:
    image: nginxdemos/nginx-hello
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.nginx.rule=Host(`nginx.local.example.com`)"
      - "traefik.http.routers.nginx.entrypoints=https"
      - "traefik.http.routers.nginx.tls=true"
      - "traefik.http.services.nginx.loadbalancer.server.port=8080"
    networks:
      - proxy

networks:
  proxy:
    external: true
Check DNS
nslookup nginx.local.example.com
Start the new NGINX Stack
docker compose up -d --force-recreate
Uncomment a few things:
In docker-compose.yaml
...
      - ./data/config.yml:/config.yml:ro
...
in traefik.yml
...
  file:
    filename: /config.yml
...
Create config
cd data
touch config.yml
nano config.yml
Contents of config.yml
http:
  #region routers
  routers:
    proxmox:
      entryPoints:
        - "https"
      rule: "Host(`proxmox.local.example.com`)"
      middlewares:
        - default-headers
        - https-redirectscheme
      tls: {}
      service: proxmox
    pihole:

#endregion
#region services
  services:
    proxmox:
      loadBalancer:
        servers:
          - url: "https://192.168.0.17:8006"
        passHostHeader: true
#endregion
  middlewares:
    https-redirectscheme:
      redirectScheme:
        scheme: https
        permanent: true
    default-headers:
      headers:
        frameDeny: true
        browserXssFilter: true
        contentTypeNosniff: true
        forceSTSHeader: true
        stsIncludeSubdomains: true
        stsPreload: true
        stsSeconds: 15552000
        customFrameOptionsValue: SAMEORIGIN
        customRequestHeaders:
          X-Forwarded-Proto: https

    default-whitelist:
      ipAllowList:
        sourceRange:
          - "10.0.0.0/8"
          - "192.168.0.0/16"
          - "172.16.0.0/12"

    secured:
      chain:
        middlewares:
          - default-whitelist
          - default-headers
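The `secured` chain defined above isn’t attached to anything yet. To use a middleware defined in the file provider from a Docker-labeled router, qualify its name with `@file` (the service and hostname below are hypothetical examples):

```yaml
# Hypothetical docker-compose labels for some service "myapp":
labels:
  - "traefik.enable=true"
  - "traefik.http.routers.myapp.rule=Host(`myapp.local.example.com`)"
  - "traefik.http.routers.myapp.entrypoints=https"
  - "traefik.http.routers.myapp.tls=true"
  - "traefik.http.routers.myapp.middlewares=secured@file"
```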
Restart the stack
docker compose up -d --force-recreate
To see more examples of commonly used services check out config.yml in the reference files
Traefik docker-compose.yaml
version: "3.8"

services:
  traefik:
    image: traefik:v3.0
    container_name: traefik
    restart: unless-stopped
    security_opt:
      - no-new-privileges:true
    networks:
      - proxy
    ports:
      - 80:80
      - 443:443
      # - 443:443/tcp # Uncomment if you want HTTP3
      # - 443:443/udp # Uncomment if you want HTTP3
    environment:
      CF_DNS_API_TOKEN_FILE: /run/secrets/cf_api_token # note using _FILE for docker secrets
      # CF_DNS_API_TOKEN: ${CF_DNS_API_TOKEN} # if using .env
      TRAEFIK_DASHBOARD_CREDENTIALS: ${TRAEFIK_DASHBOARD_CREDENTIALS}
    secrets:
      - cf_api_token
    env_file: .env # use .env
    volumes:
      - /etc/localtime:/etc/localtime:ro
      - /var/run/docker.sock:/var/run/docker.sock:ro
      - ./data/traefik.yml:/traefik.yml:ro
      - ./data/acme.json:/acme.json
      - ./data/config.yml:/config.yml:ro
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.traefik.entrypoints=http"
      - "traefik.http.routers.traefik.rule=Host(`traefik-dashboard.local.example.com`)"
      - "traefik.http.middlewares.traefik-auth.basicauth.users=${TRAEFIK_DASHBOARD_CREDENTIALS}"
      - "traefik.http.middlewares.traefik-https-redirect.redirectscheme.scheme=https"
      - "traefik.http.middlewares.sslheader.headers.customrequestheaders.X-Forwarded-Proto=https"
      - "traefik.http.routers.traefik.middlewares=traefik-https-redirect"
      - "traefik.http.routers.traefik-secure.entrypoints=https"
      - "traefik.http.routers.traefik-secure.rule=Host(`traefik-dashboard.local.example.com`)"
      - "traefik.http.routers.traefik-secure.middlewares=traefik-auth"
      - "traefik.http.routers.traefik-secure.tls=true"
      - "traefik.http.routers.traefik-secure.tls.certresolver=cloudflare"
      - "traefik.http.routers.traefik-secure.tls.domains[0].main=local.example.com"
      - "traefik.http.routers.traefik-secure.tls.domains[0].sans=*.local.example.com"
      - "traefik.http.routers.traefik-secure.service=api@internal"

secrets:
  cf_api_token:
    file: ./cf_api_token.txt

networks:
  proxy:
    external: true
traefik.yml
api:
  dashboard: true
  debug: true
entryPoints:
  http:
    address: ":80"
    http:
      redirections:
        entryPoint:
          to: https
          scheme: https
  https:
    address: ":443"
serversTransport:
  insecureSkipVerify: true
providers:
  docker:
    endpoint: "unix:///var/run/docker.sock"
    exposedByDefault: false
  file:
    filename: /config.yml
certificatesResolvers:
  cloudflare:
    acme:
      email: youremail@email.com
      storage: acme.json
      caServer: https://acme-v02.api.letsencrypt.org/directory # prod (default)
      # caServer: https://acme-staging-v02.api.letsencrypt.org/directory # staging
      dnsChallenge:
        provider: cloudflare
        # disablePropagationCheck: true # uncomment if you have issues pulling certificates through Cloudflare; setting this to true skips waiting for the TXT record to propagate to all authoritative name servers
        # delayBeforeCheck: 60s # uncomment along with disablePropagationCheck to give the TXT record time to propagate before verification is attempted
        resolvers:
          - "1.1.1.1:53"
          - "1.0.0.1:53"
config.yml
http:
  #region routers
  routers:
    proxmox:
      entryPoints:
        - "https"
      rule: "Host(`proxmox.local.example.com`)"
      middlewares:
        - default-headers
        - https-redirectscheme
      tls: {}
      service: proxmox
    pihole:

#endregion
#region services
  services:
    proxmox:
      loadBalancer:
        servers:
          - url: "https://192.168.0.17:8006"
        passHostHeader: true
#endregion
  middlewares:
    https-redirectscheme:
      redirectScheme:
        scheme: https
        permanent: true
    default-headers:
      headers:
        frameDeny: true
        browserXssFilter: true
        contentTypeNosniff: true
        forceSTSHeader: true
        stsIncludeSubdomains: true
        stsPreload: true
        stsSeconds: 15552000
        customFrameOptionsValue: SAMEORIGIN
        customRequestHeaders:
          X-Forwarded-Proto: https

    default-whitelist:
      ipAllowList:
        sourceRange:
          - "10.0.0.0/8"
          - "192.168.0.0/16"
          - "172.16.0.0/12"

    secured:
      chain:
        middlewares:
          - default-whitelist
          - default-headers
Traefik 3 is here! So, today we'll get trusted certificates with Let's Encrypt for all of our self-hosted services! No more https warnings and no more weird ports!https://t.co/MoRKYXvA0M pic.twitter.com/5OR22iRJRJ
— Techno Tim (@TechnoTimLive) April 30, 2024
Today, we’re going to use SSL for everything. No more self-signed certs. No more HTTP. No more hosting things on odd ports. We’re going all in with SSL for our internal services and our external services too. We’re going to set up a reverse proxy using Traefik and Portainer, and use that to get wildcard certificates from Let’s Encrypt. Join me and let’s secure all the things.
Looking for the Traefik 3.0 guide? Check out traefik 3 on Docker
Looking to do this same thing in Kubernetes? Check out traefik + cert-manager on Kubernetes
See this post on how to install docker
and docker-compose
mkdir traefik
cd traefik
mkdir data
cd data
touch acme.json
chmod 600 acme.json
touch traefik.yml
traefik.yml
can be found here
create docker network
docker network create proxy
touch docker-compose.yml
docker-compose.yml
can be found here
cd data
touch config.yml
docker-compose up -d
mkdir portainer
cd portainer
touch docker-compose.yml
mkdir data
docker-compose.yml
can be found here
sudo apt update
sudo apt install apache2-utils
echo $(htpasswd -nb "<USER>" "<PASSWORD>") | sed -e s/\\$/\\$\\$/g
NOTE: Replace <USER>
with your username and <PASSWORD>
with your password to be hashed.
If you’re having an issue with your password, it might not be escaped properly and you can use the following command to prompt for your password
echo $(htpasswd -nB USER) | sed -e s/\\$/\\$\\$/g
Paste the output in your docker-compose.yml
in line (traefik.http.middlewares.traefik-auth.basicauth.users=<USER>:<HASHED-PASSWORD>
)
docker-compose up -d
cd traefik/data
nano config.yml
config.yml
here
docker-compose up -d --force-recreate
Your folder structure should look like the below if you are following along with the example. Feel free to arrange it however you wish; just keep in mind you’ll need to change the locations in the corresponding files.
./traefik
├── data
│   ├── acme.json
│   ├── config.yml
│   └── traefik.yml
└── docker-compose.yml
After setting up your TrueNAS server there are lots of things to configure when it comes to tuning ZFS. From pools, to disk configuration, to cache to networking, backups and more. This guide will walk you through everything you should do after installing TrueNAS with a focus on speed, safety, and optimization.
Disclosures:
I chose Mirrored VDEVs for a few reasons:
Upsides
Downsides
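As a rough sketch of what the mirrored-vdev layout above means at the command line (device names are hypothetical, and TrueNAS normally builds pools through its UI, so this is illustrative only):

```shell
# Two 2-way mirror vdevs striped into one pool.
# Reads can be served from either side of each mirror,
# and writes stripe across the two vdevs.
zpool create tank \
  mirror /dev/sda /dev/sdb \
  mirror /dev/sdc /dev/sdd
zpool status tank
```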
To increase read speeds:
To increase write speeds:
My Settings:
Here are some items I found useful when building my NAS
(Affiliate links may be included. I may receive a small commission at no cost to you.)
Over the last few weeks while building my new TrueNAS server I learned a lot about ZFS and how to optimize my new NAS for performance.https://t.co/qvAJBBSCkA pic.twitter.com/6LB5IqHZo7
— Techno Tim (@TechnoTimLive) February 9, 2024
TrueNAS SCALE is here and with it comes a new way of installing and managing applications. You can install official apps, unofficial and community apps using TrueCharts, and any Docker image or Kubernetes deployment with Helm. Join me as we dive into managing applications and exploring TrueNAS SCALE’s new app engine that runs Docker, Kubernetes, and K3s.
If you’re looking for the Community App Catalog for TrueNAS SCALE, you can find it here
Which is the best NAS operating system to use at home and in your HomeLab?
Is it Unraid for maximizing storage efficiency?
Or is it TrueNAS for bringing enterprise ZFS to home?
Let’s find out.
Which is the best NAS operating system to use at home and in your HomeLab? Is it Unraid for maximizing storage efficiency? Or is it TrueNAS for bringing enterprise ZFS to home?
— Techno Tim (@TechnoTimLive) June 9, 2024
Let's find out.https://t.co/IoAHmiSev1 pic.twitter.com/Zn9zu1Ogy2
ZFS is a great file system that comes with TrueNAS and can meet all of your storage needs. But with it comes some complexity in how to manage and expand your ZFS storage pools. Over the last week I learned all about storage pools and how to move them, expand them, and even what not to do when trying to grow a storage pool. Join me as I figure out how to move a 20 TB pool to my new storage server with 100 TB of raw capacity.
Seagate Exos 14TB Drives https://amzn.to/3kaQnkN
Seagate IronWolf 8TB Drives https://amzn.to/3iGq3yH
See all of the storage I recommend in this kit!
https://kit.co/TechnoTim/best-ssd-hard-drive-flash-storage
(Affiliate links may be included in this description. I may receive a small commission at no cost to you.)
00:00 - What are my options for expanding ZFS?
00:25 - Use ZFS Snapshot Replication (My First Attempt)
02:20 - Just Copy the Data to the New Pool (My Second Attempt)
03:01 - Expand the Pool by Replacing All Disks (My Third Attempt)
04:27 - Replacing All of the Drives & Resilvering
07:16 - Pool has Expanded!
07:43 - My Beef with ZFS and Recommendations
09:20 - Stream Highlight - This is how I got into this mess with ZFS…
A clip used in this video was from Tom Lawrence’s channel. Thanks Tom!
Over the last week I learned all about storage pools and how to move them, expand them, and even what not to do when trying to grow your storage pool.https://t.co/IoQKKKhEKm#truenas #zfs #nas pic.twitter.com/UFZF4hLSBc
— Techno Tim (@TechnoTimLive) January 14, 2023
The Turing Pi 2 is a compact ARM cluster that provides scalable computing on the edge. The Turing Pi 2 comes with many improvements over the Turing Pi 1. This model ships with 32 GB of RAM, a SATA III interface, Raspberry Pi Compute Module 4 support, and support for NVIDIA Jetson boards. This means you can mix and match Raspberry Pis and NVIDIA Jetson boards, which gives us a ton of flexibility: Raspberry Pis for general compute workloads, and NVIDIA Jetsons for AI or ML workloads. Join me as we explore the Turing Pi 2 and prepare its home inside of my HomeLab server rack.
Turing Pi 2 - https://turingpi.com
Raspberry Pi Compute Modules - https://www.raspberrypi.com/products/compute-module-4
NVIDIA Jetson - https://amzn.to/3eGDQje
Rosewill 2U Server Chassis Case - https://amzn.to/3qxbygk
EVGA 550 Power Supply - https://amzn.to/3EMEzd4
Noctua 80mm Redux PWM Fans - https://amzn.to/3zdBUbp
Samsung EVO microSD 64 GB - https://amzn.to/3FR5Oos
Samsung EVO microSD 128 GB - https://amzn.to/3eCIajx
CR2032 Batteries - https://amzn.to/3zfWE2b
CM4 Heat Sinks - https://amzn.to/31hSVVk
Multipurpose Rails https://amzn.to/3Hr6wsS
4 Pin Splitter Cables https://amzn.to/3mQvzh4
Let’s build a bot! Not a bad bot like a view bot, but a bot for good. Let’s build a Twitch moderator bot using tmi.js! The Twitch API is powerful and already has lots of great bots, but no bot has the flexibility of creating your own! In this video I will show you how to build a Twitch bot using tmi.js from start to finish. You’ll see how to use the developer portal, set up OAuth, set the correct scopes, get an access token, create a bot using JavaScript, Node.js, and npm, invite the bot to your Twitch channel, and have it moderate your chat. We have made this bot open source and will continue to contribute to it.
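As a rough sketch of the idea (the word list and helper below are made up for illustration; tmi.js is the real library used in the video, but this exact bot logic is an assumption, not the open source bot itself):

```javascript
// Hypothetical banned-word check, factored out so it can run without Twitch.
const BANNED_WORDS = ['badword', 'spamlink'];

function shouldModerate(message) {
  // Case-insensitive match against the banned-word list.
  const lower = message.toLowerCase();
  return BANNED_WORDS.some((word) => lower.includes(word));
}

// Wiring it into Twitch chat with tmi.js (requires `npm install tmi.js`
// and a real OAuth token from the Twitch developer portal):
//
//   const tmi = require('tmi.js');
//   const client = new tmi.Client({
//     identity: { username: 'my_bot', password: 'oauth:YOUR_TOKEN' },
//     channels: ['your_channel'],
//   });
//   client.on('message', (channel, tags, message, self) => {
//     if (!self && shouldModerate(message)) {
//       client.deletemessage(channel, tags.id);
//     }
//   });
//   client.connect();

module.exports = { shouldModerate };
```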
The UDM Pro Max is here and it’s packed with upgrades: a faster CPU, more RAM, an internal SSD, more eMMC, dual drive bays, and more! Today we check out the new UniFi Dream Machine Pro Max, configure and test Shadow Mode, and test network throughput to see if this really is the fastest UniFi Dream Machine yet.
Disclosures:
A new UDM, the UniFi Dream Machine Pro Max.
— Techno Tim (@TechnoTimLive) April 23, 2024
I spent a lot of time testing this device, does it out perform its predecessors? https://t.co/t5UjJxJ61H pic.twitter.com/s9vjJnaeWy
Today I built the ultimate, all in one, HomeLab Home Server to handle everything.
Disclosures:
(Affiliate links may be included in this description. I may receive a small commission at no cost to you.)
Today I built the ultimate, all in one, HomeLab server to handle everything.https://t.co/K8YbJ3XKTl pic.twitter.com/qD2K6A46D5
— Techno Tim (@TechnoTimLive) May 10, 2024
Do you have some places where you can’t run ethernet? Do you want to extend your ethernet without pulling more cable? Well, this is the guide for you. In this step-by-step tutorial we’ll use a Ubiquiti UniFi AP AC PRO and connect a second one wirelessly, giving us remote ethernet at a remote site! This is the pro-tip guide to setting up a wireless bridge! Bonus: we’ll even do a live throughput test to see how much bandwidth we get running in bridge mode with two AC Pros!
The UniFi Express from Ubiquiti is here and it’s going to shake up how we connect small and home networks. It’s a gateway with WiFi 6 that runs the UniFi network application and can transform into an access point or mesh node when your network grows. We’ll be taking a look at this device, its features, how to configure and manage it, and discuss all the use cases for this versatile device.
Disclosures:
This newly released UniFi Express device is here to shake up small networks. It’s flexible enough to be used in just about any situation and comes in a compact form factor with a small footprint. It can act as a gateway and provide WiFi 6 to power your entire network, transform into a mesh device to expand your wireless network, or be added to an existing network and run in access-point-only mode. Yes, it’s that flexible.
If you’ve not yet experienced UniFi’s network app and wireless access points, this might be the best entry point for you, and if you’re already using devices in their ecosystem, this can fit seamlessly into your existing network deployments.
It has a nice little LCM screen that can display useful information.
It’s a small, compact device that’s all white.
Setting up this device is easy too.
After powering it on, you’ll want to adopt this device. It has Bluetooth built in, so you can adopt it from an existing UniFi device on your network that also has Bluetooth, or even easier, from the UniFi network app on your phone.
Then you have the choice of setting up a new system, which is what we’ll choose, but you can also add this to an existing system which we’ll talk about later.
You’ll name your device and set up WiFi, and then it will set up this device for you automatically.
It will prepare the system for a few minutes and after that you’ll be able to connect to your newly created wifi network.
Once you’re connected, you can choose to manage the UniFi network via the mobile app or the web. If you want to manage this from the cloud, you can do so easily in the UniFi Site Manager. Once you land there you should see your UniFi Express, and clicking on it will bring you to the Network manager app.
If you’re familiar with the UniFi network app, you’ll feel right at home because it’s the same network application you’re used to with other UniFi products. But if you’re not, we’ll walk through some of the highlights of the application so you can understand how you can use your UniFi express.
It comes with the UniFi Dashboard
This is going to give you a heads-up display of your network health and statistics. First we’ll need to turn on dark mode. There, that’s better. The dashboard is great for seeing how your network is doing at a glance. From here you can drill into the different aspects of your network; for example, if you wanted to see why a device is the most active, you can drill into it, see details and insights about the device, and even test the latency right from here.
You can also discover these features using the navigation on the left.
Topology shows how data flows through your network
Topology will show you your network topology and how each device connects to your network. It also has this cool visualization of how traffic flows through your network; I would watch that all day.
This section shows all of your UniFi devices
The UniFi Devices section will show just your UniFi devices on the network. This will look pretty bare with only a UniFi Express, but if you have a more complicated network like I do at home, you can visually see how each device is connected. We’ll make this light up a little later with another device, and even create a mesh network with that additional device just waiting to be adopted.
This section shows all of your client devices, non-UniFi devices
In the Client Devices section you will see all of your client devices, meaning your non-UniFi devices that are connected to the network. You'll also see some more details about each device, and you can always drill into each one to see more information.
For example, we can see that this device here is connected to the UniFi Express using WiFi 6, and the connection is excellent. And here, again, you see insights, settings, and can even test the latency for the device.
This section shows the port manager. The UniFi Express only has 2 ports.
In the Ports section you see the port manager for each device. This is where you can manage the ports on your device. The UniFi Express only has two ports: one WAN and one LAN. On the WAN port we can only manage the negotiation, but on the LAN side we can manage all aspects of the port.
We can rename it, disable or enable it, change the VLAN (if we had more set up, which we'll do in a bit), allow or block tagged VLANs, and even set some advanced configurations.
This section shows the radio manager.
The Radios section is where you will get lots of wireless insights. You can see which network you are managing and other information, but if you look at the top, this is where you will see some really interesting insights. Coverage is going to show you how dense your network is compared to the clients that are connecting. This is helpful because it will let you know if clients are too far away and whether adding another AP is recommended. Connectivity is going to show you if any clients have connectivity issues, which is super helpful if some of your clients have trouble connecting.
This section shows other SSIDs in the area
Environment is going to show you all of the other networks in your area. This is really helpful for choosing your WiFi channels, something that UniFi can do for you automatically, as we'll see later. And Speed Tests is where all of your client speed test results live. This is a hidden gem and yet another reason I love UniFi products.
WiFiMan is a great app that works with the UniFi network application to test your network and save the results
They have great mobile apps like WiFiMan. WiFiMan will help you test your network speeds from the client to the internet, check your WiFi signal, and even help you connect your device to a VPN. Once you run a speed test on your device, the results will be stored in the Speed Tests section so you can compare results later.
Gateway shows which clients are using which services across your entire network
The Gateway section is going to show some of the metrics from your gateway, including a classification of the services being used across your network. You can see which apps, services, and protocols are in use, along with the amount of data they are using.
In the system log section you can see different types of alerts and logs for your device and even manage push notifications for alerts.
The Settings section is where you can find all of the configuration options for your network.
In the WiFi section you can create new networks, like a guest network, in just a couple of clicks. You have a whole host of options for configuring wireless networks here.
One of the other options you might have seen in the WiFi section is the option to Optimize Now. This scans for wireless networks and tries to find the best WiFi channel for you to use. I would recommend doing this right away, and then again on a schedule, which we'll cover in a bit.
The Networks section is where you will configure some advanced options like VLANs. I won't cover this here because I've already covered it in depth in a video, but this is where you would do it.
The internet section is for configuring some of your WAN options, things like static IP addresses, dynamic DNS, and others can be configured here.
The next section is for VPN, and that's where this device really gets awesome, not that it wasn't already. You can set up Teleport, UniFi's zero-configuration VPN we talked about earlier with the WiFiMan app, which allows you to create a VPN connection just by sending a link. It's super handy. If you don't want to go that route, you have the more traditional methods: WireGuard, OpenVPN, and L2TP. You can also configure this device to be a VPN client of another network, or set up a site-to-site VPN connecting this network to another. Again, these are the same options you see across all UniFi devices that support the UniFi network app.
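If you go the WireGuard route, the client profile you end up distributing generally has this shape. Everything below is a placeholder sketch of the standard WireGuard config format, not output from the device:

```
[Interface]
PrivateKey = <client-private-key>
Address = <client-vpn-ip>/32
DNS = <your-lan-dns>

[Peer]
PublicKey = <server-public-key>
AllowedIPs = 0.0.0.0/0
Endpoint = <your-wan-ip-or-ddns>:51820
```

Setting `AllowedIPs` to `0.0.0.0/0` routes all client traffic through the tunnel; you could narrow it to just your LAN subnets instead.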
In the Security section you do see a slight difference between this device and some of their other devices. This device does not have IDS and IPS, so you're not going to see those configuration options here. IDS and IPS help detect and block different security violations based on rules and heuristics. You do see this option on devices like the UDM Pro, but it's missing on this device.
I think the reason is that they're breaking this functionality up a bit, which we'll cover a little later. Here we can configure general security, including ad blocking and country blocking; traffic rules that can apply to different categories; port forwarding (probably the most important thing in this section); and firewall rules for your VLANs and your WAN.
We also have Routing, which is for creating static routes, and Profiles, where you can create groups of settings that can be applied to ports, speed limits, RADIUS users, and IPs.
And lastly, in System we can change UI settings, schedule automatic updates, schedule automatic backups, and turn on some additional services in the advanced section.

Who is this for?
After looking at all of the features that are supported in the UniFi network application and those that aren’t, you might be wondering who this device is for. The more I thought about this the more use cases I came up with!
First of all, it's obvious that this is for any small business that wants low-cost but powerful network management. It lets you create and manage a new site with VPN capabilities for a fraction of the cost of a typical deployment, and on top of that it gives you WiFi 6. It then allows you to expand the network by adding additional switches and other UniFi access points.
And if that site expands in the future, you can add additional UniFi Express devices that act as access points only, even in mesh mode. That's right, if you adopt this device to a network that already has a network controller running, it will run in access point mode, allowing you to extend your WiFi range.
This also works for existing UniFi networks. Say, for instance, you have a network set up already and want to add a simple, small access point: you can plug this in, adopt it to your existing network, and have a WiFi range extender in minutes.
This is great for remote workers too, since it can connect back to your corporate network with a VPN connection.
UniFi express is an easy recommendation for home use
This could also be used by the home user. It's more device than most will ever need at home, but it provides a simple way to create a mesh network. You can start with one and expand as you need. And since the majority of home users only use wireless anyway, the fact that it only has one LAN port is fine for most people.
This also applies to anyone with a Dream Router who wants to add more coverage to their network without running Ethernet cables and worrying about Power over Ethernet. With an Express you just plug one in, mesh it, and you've expanded your network. This is really helpful if you also have UniFi Protect wireless cameras or doorbells that aren't near your router.
And a personal use case I've thought about is installing one of these at my family's house for the occasional remote support that comes up every now and then. This makes troubleshooting their network remotely so much easier than walking them through it.
This might replace my travel router… time will tell
And lastly, something that I plan to use this for is a travel router. I've always wanted a UniFi device that's small enough to travel with that provides secure access back to my home. Now I know that's super specialized, but it's something I've been trying to build on my own for quite some time.
UniFi express is a great way to expand your network
The more I think about this device, the more use cases I come up with: from the first-time home user who wants reliable WiFi at home, to the "pro-sumer" who wants to dip their toes into the UniFi ecosystem on a budget, to the company that wants to give a small remote site WiFi access and the ability to expand while still managing their network, to the remote user who wants access back to their corporate LAN without a bunch of network gear in their home, to me, the odd ball, who just wants to bring a device along when they travel so that all their devices can securely connect back home. I am sure there are plenty more use cases I missed, so let me know your thoughts in the comments below.
Well, I learned a ton about the new UniFi Express and how to mesh wireless networks, and I hope you learned something too. Remember, if you found anything in this post helpful, don't forget to share it with a friend!
The past week I have been testing out a new device from Ubiquiti. It's the UniFi Express and it's here to shake up small networks!
— Techno Tim (@TechnoTimLive) November 29, 2023
👉https://t.co/E1CNDH5uNw pic.twitter.com/svs44LnsqW
UniFi Express:
Other items in this video (because I know you will ask)
(Affiliate links may be included in this description. I may receive a small commission at no cost to you.)
🛍️ Check out the new Merch Shop at https://l.technotim.live/shop
⚙️ See all the hardware I recommend at https://l.technotim.live/gear
🚀 Don’t forget to check out the 🚀Launchpad repo with all of the quick start source files
Introducing the UniFi Pro Max 24 PoE and UniFi Etherlighting™ Patch Cables from Ubiquiti! We'll discuss what makes this switch unique, how it's different from the existing Pro line, and even take a close look at the new cables that are meant for this switch. Is it useful or just a gimmick?
Disclosures:
This past week I got to test out a switch with RGB from Ubiquiti, yes, RGB. They call it Etherlighting and have even made their own patch cables to emphasize the effect.
— Techno Tim (@TechnoTimLive) December 15, 2023
Are they any good?
👉https://t.co/DhOY0XBtz6 pic.twitter.com/7yOXZSK5a3
(Affiliate links may be included in this description. I may receive a small commission at no cost to you.)
🛍️ Check out the new Merch Shop at https://l.technotim.live/shop
⚙️ See all the hardware I recommend at https://l.technotim.live/gear
🚀 Don’t forget to check out the 🚀Launchpad repo with all of the quick start source files
I knew nothing about Unraid until today. I finally installed Unraid in my HomeLab on one of my servers. Is it any good? Does it live up to the hype? Let’s find out in my candid walkthrough of Unraid as you see and hear my successes as well as my struggles.
(Affiliate links may be included in this description. I may receive a small commission at no cost to you.)
This past week, I tried Unraid so that you don't have to.
— Techno Tim (@TechnoTimLive) May 24, 2024
Is it any good in 2024? Let's find out https://t.co/3k6aJpqjma pic.twitter.com/QKgH0kjQS4
🛍️ Check out the new Merch Shop at https://l.technotim.live/shop
⚙️ See all the hardware I recommend at https://l.technotim.live/gear
🚀 Don’t forget to check out the 🚀Launchpad repo with all of the quick start source files
🤝 Support me and help keep this site ad-free!
Flux is a tool for keeping Kubernetes clusters in sync with sources of configuration (like Git repositories), and automating updates to configuration when there is new code to deploy. It's open source and you can read more about it on the GitHub repo. Looking for a tutorial on how to use this? Check out this video on how to use SOPS and Age for your Git repos!
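As a quick sketch of what Flux reconciles, these are the two core objects: a `GitRepository` pointing at your repo, and a `Kustomization` pointing at a path inside it. The repo URL and path here are hypothetical placeholders, not from my setup:

```yaml
apiVersion: source.toolkit.fluxcd.io/v1beta2
kind: GitRepository
metadata:
  name: flux-system
  namespace: flux-system
spec:
  interval: 1m
  url: https://github.com/example/homelab   # hypothetical repo URL
  ref:
    branch: main
---
apiVersion: kustomize.toolkit.fluxcd.io/v1beta2
kind: Kustomization
metadata:
  name: flux-system
  namespace: flux-system
spec:
  interval: 10m
  path: ./clusters/cluster-01   # path inside the repo to reconcile
  prune: true
  sourceRef:
    kind: GitRepository
    name: flux-system
```

Flux polls the repo at `interval` and applies whatever it finds at `path`, which is why the upgrade below is just "commit and push."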
We're going to use `curl`, so you'll want to be sure you have it installed:

```shell
curl -V
```
This should return something similar to the following:

```
curl 7.68.0 (x86_64-pc-linux-gnu) libcurl/7.68.0 OpenSSL/1.1.1f zlib/1.2.11 brotli/1.0.7 libidn2/2.2.0 libpsl/0.21.0 (+libidn2/2.2.0) libssh/0.9.3/openssl/zlib nghttp2/1.40.0 librtmp/2.3
Release-Date: 2020-01-08
Protocols: dict file ftp ftps gopher http https imap imaps ldap ldaps pop3 pop3s rtmp rtsp scp sftp smb smbs smtp smtps telnet tftp
Features: AsynchDNS brotli GSS-API HTTP2 HTTPS-proxy IDN IPv6 Kerberos Largefile libz NTLM NTLM_WB PSL SPNEGO SSL TLS-SRP UnixSockets
```
If it isn't installed, you can install it with:

```shell
sudo apt update && sudo apt install curl
```
Then we'll want to download the latest Flux binary by running:

```shell
curl -s https://fluxcd.io/install.sh | sudo bash
```
Then we'll want to be sure it's installed properly by running:

```shell
flux -v
```
This should return something similar to the following:

```
flux version 0.39.0
```
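If you manage several clusters, it can help to script the "is an upgrade actually needed?" check. Here's a small sketch using `sort -V`; the version numbers are just the ones from this example, and in practice you'd read them from the manifest and from `flux -v`:

```shell
# Compare the version pinned in the repo against the freshly installed CLI.
# sort -V orders version strings numerically, so the lower one sorts first.
current="0.38.3"
target="0.39.0"
lowest=$(printf '%s\n' "$current" "$target" | sort -V | head -n1)
if [ "$lowest" = "$current" ] && [ "$current" != "$target" ]; then
  echo "upgrade needed: $current -> $target"
else
  echo "already up to date"
fi
```

With the example values this prints `upgrade needed: 0.38.3 -> 0.39.0`.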
Next we'll need to locate our `gotk-components.yaml` file for Flux in our git repo. For example, mine lives in `clusters/cluster-01/base`:

```
clusters
├── cluster-01
│   ├── base
│   │   └── flux-system
│   │       └── gotk-components.yaml
```
Once you locate `gotk-components.yaml`, we'll patch it by using the following command:

```shell
flux install --export > clusters/cluster-01/base/flux-system/gotk-components.yaml
```
After this file is updated, you can check to be sure it updated the correct file by running:

```shell
git diff
```
You should see something like the following:

```diff
diff --git a/clusters/cluster-01/base/flux-system/gotk-components.yaml b/clusters/cluster-01/base/flux-system/gotk-components.yaml
index 79576f0f..9c26b708 100644
--- a/clusters/cluster-01/base/flux-system/gotk-components.yaml
+++ b/clusters/cluster-01/base/flux-system/gotk-components.yaml
@@ -1,6 +1,6 @@
 ---
 # This manifest was generated by flux. DO NOT EDIT.
-# Flux Version: v0.38.3
+# Flux Version: v0.39.0
 # Components: source-controller,kustomize-controller,helm-controller,notification-controller
 apiVersion: v1
 kind: Namespace
@@ -8,7 +8,7 @@ metadata:
   labels:
     app.kubernetes.io/instance: flux-system
     app.kubernetes.io/part-of: flux
-    app.kubernetes.io/version: v0.38.3
+    app.kubernetes.io/version: v0.39.0
   pod-security.kubernetes.io/warn: restricted
   pod-security.kubernetes.io/warn-version: latest
   name: flux-system
@@ -23,7 +23,7 @@ metadata:
     app.kubernetes.io/component: notification-controller
     app.kubernetes.io/instance: flux-system
     app.kubernetes.io/part-of: flux
-    app.kubernetes.io/version: v0.38.3
+    app.kubernetes.io/version: v0.39.0
   name: alerts.notification.toolkit.fluxcd.io
```
If you see something other than the original `gotk-components.yaml` being updated, you might want to check the location of the file and try again.
Once this is updated, you can simply commit and push it up and let GitOps do the rest!
```shell
git add .
git commit -m "feat(flux): Upgraded flux to 0.39.0" # replace with your version
git push
```
🛍️ Check out the new Merch Shop at https://l.technotim.live/shop
⚙️ See all the hardware I recommend at https://l.technotim.live/gear
🚀 Don’t forget to check out the 🚀Launchpad repo with all of the quick start source files
Want to migrate FreeNAS to TrueNAS today? It's simple using this step-by-step tutorial. We'll walk through how to upgrade FreeNAS to TrueNAS CORE. We'll cover upgrading FreeNAS to TrueNAS on a physical machine (bare metal) as well as a virtualized install of FreeNAS. We'll prepare our services, jails, plugins, virtual machines, pools, and disks for the migration and then upgrade each. We'll even show you how to do an offline upgrade of TrueNAS and then how to upgrade a ZFS pool with newer feature flags. Finally, we'll walk through what's different between TrueNAS and FreeNAS.
🛍️ Check out the new Merch Shop at https://l.technotim.live/shop
⚙️ See all the hardware I recommend at https://l.technotim.live/gear
🚀 Don’t forget to check out the 🚀Launchpad repo with all of the quick start source files
Proxmox 8.0 has been released (June 22, 2023) and includes several new features and a Debian version upgrade.
Many have been asking how to upgrade, so I decided to put together an easy-to-follow post to get your Proxmox server upgraded to 8!
This might go without saying, but you'll want to be sure you back up your Proxmox server's configs as well as any virtual machines running on this server. After you've done that, you'll need to check that you are running at least 7.4-15 or newer (if you need to upgrade from 6 to 7, see my post on how to do that). If you aren't sure which version you are running, you can run this to check:
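Here's one way to sketch that config backup. This is a minimal example and the destination path is an assumption, so adjust it for wherever you keep backups; note that it only grabs the config directory, not your VM disks:

```shell
# Archive the Proxmox configuration directory before upgrading.
# /etc/pve only exists on a Proxmox host, so guard for that first.
SRC=/etc/pve
DEST="/root/pve-config-backup-$(date +%Y%m%d).tar.gz"
if [ -d "$SRC" ]; then
  tar czf "$DEST" "$SRC" 2>/dev/null
  echo "backed up $SRC to $DEST"
else
  echo "no $SRC found; is this a Proxmox host?"
fi
```

For the VMs themselves, use the built-in backup tools (vzdump or the Datacenter backup jobs) rather than copying files by hand.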
```shell
pveversion
```
This should output something similar to:

```
pve-manager/7.4-15/a5d2a31e (running kernel: 5.15.108-1-pve)
```
Next we'll want to run an upgrade script to check for any potential issues with the upgrade process. Don't worry, this does not execute anything other than checks and is safe to run multiple times.
You can run it by executing:
```shell
pve7to8
```
You can also run it with all checks enabled by executing:
```shell
pve7to8 --full
```
You should see something similar to the following in the output:
```
➜ ~ pve7to8 --full
= CHECKING VERSION INFORMATION FOR PVE PACKAGES =

Checking for package updates..
PASS: all packages up-to-date

Checking proxmox-ve package version..
PASS: proxmox-ve package has version >= 7.4-1

Checking running kernel version..
PASS: running kernel '5.15.108-1-pve' is considered suitable for upgrade.

= CHECKING CLUSTER HEALTH/SETTINGS =

PASS: systemd unit 'pve-cluster.service' is in state 'active'
PASS: systemd unit 'corosync.service' is in state 'active'
PASS: Cluster Filesystem is quorate.

Analzying quorum settings and state..
INFO: configured votes - nodes: 3
INFO: configured votes - qdevice: 0
INFO: current expected votes: 3
INFO: current total votes: 3

Checking nodelist entries..
PASS: nodelist settings OK

Checking totem settings..
PASS: totem settings OK

INFO: run 'pvecm status' to get detailed cluster status..

= CHECKING HYPER-CONVERGED CEPH STATUS =

SKIP: no hyper-converged ceph setup detected!

= CHECKING CONFIGURED STORAGES =

PASS: storage 'backups' enabled and active.
PASS: storage 'fast10' enabled and active.
PASS: storage 'local' enabled and active.
INFO: Checking storage content type configuration..
PASS: no storage content problems found
PASS: no storage re-uses a directory for multiple content types.

= MISCELLANEOUS CHECKS =

INFO: Checking common daemon services..
PASS: systemd unit 'pveproxy.service' is in state 'active'
PASS: systemd unit 'pvedaemon.service' is in state 'active'
PASS: systemd unit 'pvescheduler.service' is in state 'active'
PASS: systemd unit 'pvestatd.service' is in state 'active'
INFO: Checking for supported & active NTP service..
WARN: systemd-timesyncd is not the best choice for time-keeping on servers, due to only applying updates on boot.
  While not necessary for the upgrade it's recommended to use one of:
    * chrony (Default in new Proxmox VE installations)
    * ntpsec
    * openntpd

INFO: Checking for running guests..
WARN: 6 running guest(s) detected - consider migrating or stopping them.
INFO: Checking if the local node's hostname 'draco' is resolvable..
INFO: Checking if resolved IP is configured on local node..
PASS: Resolved node IP '192.168.0.11' configured and active on single interface.
INFO: Check node certificate's RSA key size
PASS: Certificate 'pve-root-ca.pem' passed Debian Busters (and newer) security level for TLS connections (4096 >= 2048)
PASS: Certificate 'pve-ssl.pem' passed Debian Busters (and newer) security level for TLS connections (2048 >= 2048)
INFO: Checking backup retention settings..
PASS: no backup retention problems found.
INFO: checking CIFS credential location..
PASS: no CIFS credentials at outdated location found.
INFO: Checking permission system changes..
INFO: Checking custom role IDs for clashes with new 'PVE' namespace..
PASS: no custom roles defined, so no clash with 'PVE' role ID namespace enforced in Proxmox VE 8
INFO: Checking if LXCFS is running with FUSE3 library, if already upgraded..
SKIP: not yet upgraded, no need to check the FUSE library version LXCFS uses
INFO: Checking node and guest description/note length..
PASS: All node config descriptions fit in the new limit of 64 KiB
PASS: All guest config descriptions fit in the new limit of 8 KiB
INFO: Checking container configs for deprecated lxc.cgroup entries
PASS: No legacy 'lxc.cgroup' keys found.
INFO: Checking if the suite for the Debian security repository is correct..
INFO: Checking for existence of NVIDIA vGPU Manager..
PASS: No NVIDIA vGPU Service found.
INFO: Checking bootloader configuration...
SKIP: not yet upgraded, no need to check the presence of systemd-boot
SKIP: No containers on node detected.

= SUMMARY =

TOTAL: 33
PASSED: 27
SKIPPED: 4
WARNINGS: 2
FAILURES: 0

ATTENTION: Please check the output for detailed information!
```
As you can see there are a few warnings but nothing failing. The warnings listed are one related to time packages (which I am going to ignore) and one related to machines still running. To resolve the second warning I will shut down all the machines before I upgrade.
We’ll want to be sure that we’ve applied all updates to our current installation before upgrading to 8. You can do this by running:
```shell
apt update
apt dist-upgrade
```
If there are updates, I recommend applying them all, rebooting, and updating again if needed. Repeat this until there aren't any updates left to apply.
```
➜ ~ apt update
Hit:1 http://security.debian.org bullseye-security InRelease
Hit:2 http://download.proxmox.com/debian/pve bullseye InRelease
Hit:3 http://ftp.us.debian.org/debian bullseye InRelease
Hit:4 http://ftp.us.debian.org/debian bullseye-updates InRelease
Reading package lists... Done
Building dependency tree... Done
Reading state information... Done
All packages are up to date.

➜ ~ apt dist-upgrade
Reading package lists... Done
Building dependency tree... Done
Reading state information... Done
Calculating upgrade... Done
0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
```
We'll need to update our Debian and Proxmox apt repositories to Bookworm:

```shell
sed -i 's/bullseye/bookworm/g' /etc/apt/sources.list
```
If you're also using the "no-subscription" repository, you'll want to update that too:

```shell
sed -i -e 's/bullseye/bookworm/g' /etc/apt/sources.list.d/pve-install-repo.list
```
Mine is actually at `/etc/apt/sources.list.d/pve-no-enterprise.list`, so I will run this instead:

```shell
sed -i -e 's/bullseye/bookworm/g' /etc/apt/sources.list.d/pve-no-enterprise.list
```
You can verify these files by checking to be sure they were updated with `bookworm`:

```shell
cat /etc/apt/sources.list
```
```shell
cat /etc/apt/sources.list.d/pve-install-repo.list
```
or for me personally:

```shell
cat /etc/apt/sources.list.d/pve-no-enterprise.list
```
+
You should see something like this:
1
+
deb http://download.proxmox.com/debian/pve bookworm pve-no-subscription
+
Remember, you are just verifying that `sed` replaced `bullseye` with `bookworm` in each file.
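If you'd like to see exactly what that `sed` expression does before touching the real files, you can try it on a throwaway copy first:

```shell
# Demonstrate the bullseye -> bookworm replacement on a scratch file.
echo "deb http://download.proxmox.com/debian/pve bullseye pve-no-subscription" > /tmp/sources.demo
sed -i 's/bullseye/bookworm/g' /tmp/sources.demo
cat /tmp/sources.demo
# -> deb http://download.proxmox.com/debian/pve bookworm pve-no-subscription
```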
If you're running `ceph`, you'll want to check the Proxmox 7 to 8 Upgrade Wiki for a few additional steps. I am not running `ceph`, so I will skip this part.
Now all that’s left is updating the system! If you’ve made it this far it’s now time to upgrade! I would recommend stopping or migrating any virtual machines and LXC containers before proceeding.
```shell
apt update
apt dist-upgrade
```
This step may take some time depending on your internet speed and server resources.
The upgrade might ask you to approve changes to configuration files. I am going to defer to the Proxmox documentation for this step, which is shown below:
It's suggested to check the difference for each file in question and choose the answer according to what's most appropriate for your setup. Common configuration files with changes, and the recommended choices, are:

- `/etc/issue` -> Proxmox VE will auto-generate this file on boot, and it has only cosmetic effects on the login console. Using the default "No" (keep your currently-installed version) is safe here.
- `/etc/lvm/lvm.conf` -> Changes relevant for Proxmox VE will be updated, and a newer config version might be useful. If you did not make extra changes yourself and are unsure, it's suggested to choose "Yes" (install the package maintainer's version) here.
- `/etc/default/grub` -> Here you may want to take special care, as this is normally only asked for if you changed it manually, e.g., for adding some kernel command line option. It's recommended to check the difference for any relevant change; note that changes in comments (lines starting with #) are not relevant. If unsure, it's suggested to select "No" (keep your currently-installed version).
After upgrading all packages you can verify the upgrade by running:
```shell
pve7to8
```
If all went well, you should see everything pass (or with minimal warnings).
You can now reboot your system.
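After the reboot, you can confirm the new version with `pveversion` again. It should now report an 8.x manager, along these lines (the angle-bracketed build hash is a placeholder, and your exact version and kernel will differ):

```
pve-manager/8.0.4/<build-hash> (running kernel: 6.2.16-19-pve)
```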
After rebooting and logging into the system for the first time, you'll want to clear your browser's cache for the PVE web UI, or just do a hard reload.
If you have more servers in your cluster, repeat this for each server!
Enjoy Proxmox 8!
Check to be sure you see Proxmox 8 here!
If you were looking to upgrade to Proxmox 8 today, I wrote a quick guide to help! I've already tested it on my production cluster and it works great! https://t.co/NFqv0XXyWB
— Techno Tim (@TechnoTimLive) June 25, 2023
🛍️ Check out the new Merch Shop at https://l.technotim.live/shop
⚙️ See all the hardware I recommend at https://l.technotim.live/gear
🚀 Don’t forget to check out the 🚀Launchpad repo with all of the quick start source files
There are so many upgrades out there for streaming, what do I start with? Video card? Microphone? Audio? CPU? RAM? Lights? I started with one that is overlooked by many streamers: the room I stream in. So come along with me as I give a tour of my stream room makeover! Hopefully this video gives you some stream background ideas for sofas, lights, smart LED lights, accent lighting, coffee tables, plants, rugs, bookshelves, and even Hyrule Historia as I walk through my stream studio setup!
🛍️ Check out the new Merch Shop at https://l.technotim.live/shop
⚙️ See all the hardware I recommend at https://l.technotim.live/gear
🚀 Don’t forget to check out the 🚀Launchpad repo with all of the quick start source files
You've spun up lots of self-hosted services in your HomeLab, but you haven't set up monitoring and alerting yet. Well, be glad you waited, because today we'll set up Uptime Kuma to do just that. Uptime Kuma is a self-hosted, open source, fancy uptime monitoring and alerting system. It can monitor HTTP, HTTP with keyword, TCP, Ping, and even DNS systems!
https://github.com/louislam/uptime-kuma
See this post on how to install `docker` and `docker-compose`.
If you're using Docker Compose:
```shell
mkdir uptime-kuma
cd uptime-kuma
touch docker-compose.yml
nano docker-compose.yml # copy the contents from below
mkdir data
ls
docker-compose up -d --force-recreate
```
`docker-compose.yml`

```yaml
---
version: "3.1"

services:
  uptime-kuma:
    image: louislam/uptime-kuma:1
    container_name: uptime-kuma
    volumes:
      - /home/serveradmin/docker_volumes/uptime-kuma/data:/app/data
    ports:
      - 3001:3001
    restart: unless-stopped
    security_opt:
      - no-new-privileges:true
```
If you’re using Rancher, Portainer, Open Media Vault, Unraid, or anything else with a GUI, just copy and paste the environment variables, ports, and volumes from above into the form on the web page.
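Once the container is running, a quick way to confirm the web UI is answering (assuming the `3001:3001` port mapping from the compose file above) is to ask for the HTTP status code:

```shell
# Prints the HTTP status code from the dashboard port.
# 200 or a redirect means Uptime Kuma is up; 000 means nothing answered yet.
curl -s -o /dev/null -w '%{http_code}\n' http://localhost:3001 || true
```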
🛍️ Check out the new Merch Shop at https://l.technotim.live/shop
⚙️ See all the hardware I recommend at https://l.technotim.live/gear
🚀 Don’t forget to check out the 🚀Launchpad repo with all of the quick start source files
In this quick, no-fluff video, I will show you how to create a multi-bootable USB drive with Ventoy that can boot all of your ISO, WIM, IMG, VHD, and EFI files. It supports both MBR and GPT partitions. This is the last USB drive you will ever need, and you won't ever need to format another one. Ventoy is free and open source.
🛍️ Check out the new Merch Shop at https://l.technotim.live/shop
⚙️ See all the hardware I recommend at https://l.technotim.live/gear
🚀 Don’t forget to check out the 🚀Launchpad repo with all of the quick start source files
Do you want a DIY NAS? Do you want to set up TrueNAS? Have you considered virtualizing TrueNAS with Proxmox? In this video we'll walk through installing and setting up TrueNAS and configuring a Samba share for Windows. We'll also install it on a virtual server using Proxmox VE that's running in my HomeLab. Both are free and open source.
🛍️ Check out the new Merch Shop at https://l.technotim.live/shop
⚙️ See all the hardware I recommend at https://l.technotim.live/gear
🚀 Don’t forget to check out the 🚀Launchpad repo with all of the quick start source files
Should I virtualize this? Should I containerize this? These are great questions to ask yourself when spinning up self-hosted services in your HomeLab environment. We'll review my previous video (20 Ways to Use a Virtual Machine, and other ideas for your homelab) and decide which should run in a Docker container, which should be virtualized with Proxmox, and which should run on hardware as bare metal.
🛍️ Check out the new Merch Shop at https://l.technotim.live/shop
⚙️ See all the hardware I recommend at https://l.technotim.live/gear
🚀 Don’t forget to check out the 🚀Launchpad repo with all of the quick start source files
Today we're going to cover setting up VLANs using UniFi's network controller. We'll set up a VLAN from start to finish, which includes creating a new network, configuring a wireless network that uses VLANs, and then setting up firewall rules to make sure we're keeping our network safe. If you think VLANs are only for the enterprise, you're wrong; I will show you how they are helpful at home too.
So what's a VLAN? A VLAN, or Virtual Local Area Network, is a group of devices, computers, or servers that communicate with each other as if they are on the same physical LAN, but are actually located on separate physical LAN segments. VLANs can be created by configuring a managed network switch to segment the network into different broadcast domains.
So why are VLANs important, even to the home user?
So what’s not to love about VLANs if they give you greater control over network traffic, help optimize network performance, give you better security, and give you management and flexibility? Well, for me it was complexity and knowing where to start.
Ubiquiti UniFi 6 Lite Access Point - https://l.technotim.live/ubiquiti
UniFI UDM SE - https://l.technotim.live/ubiquiti
UniFi UDM Pro - https://l.technotim.live/ubiquiti
(Affiliate links are included in this description. I may receive a small commission at no cost to you.)
A list of common VLANs in UniFi Network Application
Congrats you just created your first VLAN! 🎉
A list of common WiFi networks in UniFi Network Application
Once we've created our VLAN, we can add it to a wireless network. This is perfect for IoT devices, or really any VLAN that you want to use over your wireless network.
If you check your access points, you can now see this wireless network being set up and provisioned with the new config that contains our new WiFi network that is bound to our VLAN!
UniFi Network application port management
You can now assign this to one of your switch ports by going into your switch settings and assigning it this VLAN / VLAN ID.
Choose the new VLAN and let your device get a new DHCP address from the new VLAN. You should expect to see an IP in the range that we set above.
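If the device happens to be a Linux box, a quick way to verify is listing its IPv4 addresses; after the port change you should see an address from the new VLAN's DHCP range (interface names and ranges will vary with your setup):

```shell
# List IPv4 addresses on every interface -- look for one in the new
# VLAN's subnet
ip -4 addr
```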
Once we've assigned it, let's connect a device and test it out.
Connect a device, check its IP, ping Google, then ping another device on another VLAN. Uh-oh!
UniFi allows inter-VLAN communication out of the box. I'm guessing this was a conscious decision on their part to make things easier, but it does leave your networks open to each other.
We can fix that with a firewall rule!
A list of common WiFi networks in UniFi Network Application
Before we set up our firewall rules, let's first create a profile. Profiles are a simple way to group items or alias them. This comes in handy later when creating firewall rules.
- 192.168.0.0/24 (this is the default network)
- 192.168.10.0/24 (this is my "Trusted" network)

Again, this Profile is for all other VLANs, not the new VLAN we just created.
In order to block inter-VLAN communication we'll need to set up some firewall rules. The pattern I usually follow is blocking all traffic that originates on one VLAN and is destined for all other VLANs. This is where Profiles come in.
This rule will block all communication that originates on your IoT VLAN to all other VLANs (IoT Only).
You'll also want to be sure that this rule comes after every allow rule in your list of firewall rules.
Be sure to test all of your firewall rules!
Once you have these rules in place, I highly recommend testing them. One example of something you should test: can you reach a device on another VLAN (e.g. 192.168.100.1) from your IoT VLAN? Checking these types of things will help you verify that your network rules are being applied properly. Repeat these tests anytime you make changes.
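To make those checks repeatable, you could script them. This is just a sketch; every address below is an example, so substitute your own gateway and hosts, and note that once the block rule is active the inter-VLAN pings are *expected* to fail:

```shell
# Ping each target once and report whether it answered; pair each target
# with the result you expect after the firewall rule is in place
check() {  # usage: check <host> <expected: ok|blocked>
  if ping -c 1 -W 2 "$1" > /dev/null 2>&1; then got=ok; else got=blocked; fi
  echo "$1 expected=$2 got=$got"
}
check 1.1.1.1       ok        # internet should still be reachable
check 192.168.100.1 blocked   # gateway on another VLAN
check 192.168.10.50 blocked   # host on another VLAN
```

Run it from a device on the IoT VLAN; any line where `expected` and `got` disagree is a rule worth revisiting.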
At this point you should have a new VLAN that works on your WiFi access points and your switch ports, with firewall rules in place to prevent unauthorized access! Simply repeat this process for every new VLAN you need! Have you set up VLANs yet?
Today I decided to share how I set up my VLANs, Firewall rules, Wireless Networks, and Network Security.
— Techno Tim (@TechnoTimLive) March 4, 2023
👉https://t.co/SOGBrsmKXK#vlan #unifi #homelab pic.twitter.com/x2LovlQVd4
After releasing my video on the PiKVM I realized that there was so much confusion about Wake on LAN, and rightfully so, that I decided to put together this guide on how to configure Wake on LAN on any machine. Wake on LAN (WoL) is a networking standard that allows a computer to be turned on by sending a network packet. The client sends a special packet (sometimes referred to as a "magic packet") and the remote machine will wake up either from a cold power state or from sleep. This is where it starts to get complicated, because different hardware manufacturers have implemented different controls in BIOS to enable or disable this, and to make it even more complex, operating systems like Windows, macOS, and Linux have also implemented their own ways to wake the machine up when it's sleeping or in a low-powered state. I am just going to throw this out there: Wake on LAN is hard. Since there are many different combinations, I will try to cover how to configure your machines to wake up successfully regardless of hardware, operating system, and power state.
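To demystify the "magic packet" a bit: it's just 6 bytes of 0xFF followed by the target's MAC address repeated 16 times, usually sent as a UDP broadcast (often to port 9). A quick sketch that builds one as a hex string (the MAC below is a placeholder):

```shell
mac="001122334455"   # example MAC with separators removed
sync="ffffffffffff"  # the sync stream: 6 bytes of 0xFF
body=$(for i in $(seq 16); do printf '%s' "$mac"; done)  # MAC repeated 16x
packet="$sync$body"
echo "${#packet}"    # 204 hex characters = the 102-byte magic packet
```

The tools covered later in this guide build and send exactly this payload for you.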
In order to wake your machine up, we have to be sure that WoL features are turned on in the BIOS and that certain other features are disabled. Since I cannot test every single BIOS out there, I am going to use my machine as an example of the types of options you will need to enable or disable. Most of the options should be named similarly, but where they are located in your BIOS will depend on your manufacturer.
First, you'll need to get into the BIOS of the machine. This is typically done by pressing a key at boot, like F2 or Del, but it varies by machine.
Once you're in, we'll start changing some settings.
You'll want to look around for something similar to power settings. If you do not see these options there, they could be under advanced, networking, or onboard devices.
Power settings menu for an Intel NUC. This will look different on your machine but the idea is still the same.
Here are some things to look for:
Deep S4/S5 Sleep - You'll want to disable this; otherwise only the power button will wake the machine, which defeats Wake on LAN.
Wake on LAN from S4/S5 - You’ll want to enable this setting and if it has an option choose Power on - Normal Boot
Wake System from S5 - You'll want to disable this. This is basically an alarm clock for your machine. There's no need to enable it unless you want to set a time for it to turn on every day. I've used this in the past as a contingency plan for some of my servers in case they were powered off accidentally; I would set an alarm for 12 AM.
USB S4/S5 Power - I typically disable this if it’s a server since nothing should be plugged in but if it’s a desktop with USB devices you want powered you can turn it on safely.
Wake on LAN - Enabling this might sound obvious, and some older systems have an option that says exactly that, but newer systems have separate options for waking from each of the different sleep states.
What to do when AC Power is restored - This is optional, but I usually set it to Stay Off if it's a desktop, Power On if it's a server that should always be on, and Last power state if it's something like a machine that I seldom wake. There is one exception, which is if you have a way to toggle the power remotely too. I have a USP PDU Pro from UniFi that I can use to toggle all of my servers on and off. If you are able to toggle them on and off, the best setting is Power On; that way you have a way to power them on, even if they were gracefully shut down previously.
Another quick check you can do is power down the machine and check to be sure the network light is lit up on your NIC. If it's not, this means Wake on LAN is not enabled on your machine and you'll have to find the option in your BIOS to make it work.
If you don't have an operating system on your machine yet, you should be able to wake the machine over the network now. If you do have an operating system on your machine, another way you can test a bare metal / cold boot wake is by pulling the power on the machine and then plugging it back in. The reason this should work is that modern operating systems might not fully shut down (they go into a sort of sleep) or might disable WoL on the NIC when shutting down. We'll fix this in the next section.
After you've enabled Wake on LAN in the BIOS, and verified you see the light on your NIC when you power off your machine, we can now enable Wake on LAN at the operating system level for Windows. This works on all modern versions of Windows (Windows 10 and Windows 11).
First we'll want to open the Device Manager. You can do this from the UI or from a command prompt:

```shell
devmgmt.msc
```
Be sure to select the network card that you use to connect to your network.
Power Management options for your network adapter.
Then we'll need to verify a few more settings. These settings may or may not exist and depend on your network adapter manufacturer.
You should check to see if Wake on LAN works before proceeding to the next step since this might not be necessary with your machine.
Another Windows feature that can prevent a machine from shutting down properly for Wake on LAN is Fast Startup, which relies on hibernation. I recommend testing whether Wake on LAN works before disabling it.
First, we'll need to open the Power Options control panel. You can do this from the UI or from a command prompt:

```shell
powercfg.cpl
```
Then we’ll need to change some settings
You should now check to see if Wake on LAN works for your machine.
After you enable Wake on LAN in the BIOS, and verified you see the light on your NIC when you power off your machine, we can now enable Wake on LAN at the operating system level for Linux. This sounds odd, but I have found that machines (especially Linux) need WoL turned on for each NIC.
Install ethtool if you don't have it already:

```shell
sudo apt update
sudo apt install ethtool
```
First, check to see if WoL is supported by your NIC:

```shell
ip a               # this will list all of your NICs
sudo ethtool eno1  # replace with one of the NICs you want to check
```
This should output something similar to:

```
Settings for eno1:
        Supported ports: [ TP ]
        Supported link modes:   10baseT/Half 10baseT/Full
                                100baseT/Half 100baseT/Full
                                1000baseT/Full
        Supported pause frame use: No
        Supports auto-negotiation: Yes
        Supported FEC modes: Not reported
        Advertised link modes:  10baseT/Half 10baseT/Full
                                100baseT/Half 100baseT/Full
                                1000baseT/Full
        Advertised pause frame use: No
        Advertised auto-negotiation: Yes
        Advertised FEC modes: Not reported
        Speed: 1000Mb/s
        Duplex: Full
        Auto-negotiation: on
        Port: Twisted Pair
        PHYAD: 1
        Transceiver: internal
        MDI-X: on (auto)
        Supports Wake-on: pumbg
        Wake-on: g
        Current message level: 0x00000007 (7)
                               drv probe link
        Link detected: yes
```
You're looking for Supports Wake-on: pumbg with at least the letter g in the string. This means the NIC supports WoL via magic packet, which is a good thing. If you don't see it here, don't worry, we'll fix it in netplan.
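If you're checking several machines, a small helper can pull that flag out of the ethtool output. On a real box you'd pipe `sudo ethtool eno1` into it; the sample line below just demonstrates how it behaves:

```shell
# Succeeds only when the "Supports Wake-on" line contains the "g"
# (magic packet) flag
supports_magic_wake() {
  grep 'Supports Wake-on' | grep -q 'g'
}
printf 'Supports Wake-on: pumbg\n' | supports_magic_wake && echo "WoL supported"
```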
There are lots of outdated commands you'll find on the internet that won't work, or will only partially work, so I advise that you only do this with netplan. If you don't have netplan installed (Debian, etc.), skip to the next section.
To edit your netplan:

```shell
sudo nano /etc/netplan/01-netcfg.yaml  # replace with your netplan yaml
```
Once here, you'll see your network settings. You'll want to turn on wakeonlan in this YAML for each NIC. For example, if you have 2 NICs, eno1 and enp2s0, you would add it in both places under that key.
```yaml
# This file describes the network interfaces available on your system
# For more information, see netplan(5).
network:
  version: 2
  renderer: networkd
  ethernets:
    eno1:
      dhcp4: yes
      wakeonlan: true
    enp2s0:
      dhcp4: yes
      wakeonlan: true
```
Once this is set, you'll want to apply your netplan:

```shell
sudo netplan apply
```
Then we'll want to shut down:

```shell
sudo shutdown -P now
```
Now we should be able to wake up the machine using WoL from a remote machine.
Since you don't have netplan, we'll have to create a service and enable it. Do not do this step if you configured WoL with netplan.
Find the path to ethtool:

```shell
which ethtool
```
In my case it's at /usr/sbin/ethtool but yours may vary.
Next we'll create a file at /etc/systemd/system/wol.service:

```shell
sudo nano /etc/systemd/system/wol.service
```
In this file, add the following:

```
[Unit]
Description=Enable Wake On LAN

[Service]
Type=oneshot
ExecStart=/usr/sbin/ethtool --change eno1 wol g

[Install]
WantedBy=basic.target
```
You'll want to be sure to change the path for ethtool, as well as eno1 to the name of your NIC.
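If you have several NICs, a systemd template unit saves you from duplicating the file for each one. This is just a sketch under the same assumptions (same ethtool path, same target); save it as /etc/systemd/system/wol@.service so the interface name is passed in as %i:

```
[Unit]
Description=Enable Wake On LAN on %i

[Service]
Type=oneshot
ExecStart=/usr/sbin/ethtool --change %i wol g

[Install]
WantedBy=basic.target
```

You'd then enable it once per interface, e.g. `sudo systemctl enable wol@eno1.service` and `sudo systemctl enable wol@enp2s0.service`.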
Then we'll need to enable the service:

```shell
sudo systemctl daemon-reload
sudo systemctl enable wol.service
```
Then we can check to be sure our service is started:

```shell
systemctl status wol
```
Then we'll want to shut down:

```shell
sudo shutdown -P now
```
Now we should be able to wake up the machine using WoL from a remote machine.
Waking up a Mac is pretty easy, the easiest of them all. The most challenging part is finding the option in System Preferences.
For a Macbook:
For all other Macs you’ll want to search System Preferences for another option.
This option might appear differently across macOS versions and form factors, but you'll want to be sure that the "Wake for network access" option is turned on.
In order to wake a remote machine up, you will need a tool that can send a Wake on LAN packet to it.
I'm a fan of doing this in a terminal, but a decent Windows utility with a GUI is WakeOnLAN. It's also open source and hosted on GitHub. After installing it and configuring a machine to wake, you should be able to wake your machine if it's on the same network and you've followed the other steps outlined in this guide.
WakeOnLAN is an open source Windows utility that has a nice GUI
From a Linux machine I usually prefer a command line tool to wake machines up over the network, and I typically use wakeonlan, an open source utility that's simple to use.
To install it on a Debian-like system:

```shell
sudo apt update
sudo apt install wakeonlan
```
Once it's installed you can wake machines on the same network by using the command:

```shell
sudo wakeonlan 00:11:22:33:44:55
```
If your machine is on another network and you can reach the broadcast IP, you can supply it in your command:

```shell
sudo wakeonlan -i 192.168.2.255 00:11:22:33:44:55
```
Be sure to replace the MAC address above with the MAC address of the remote machine, and set the broadcast IP if it's on a different network.
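If you're not sure what the broadcast IP is, `ip addr` prints it next to each address (labeled `brd`). For a /24 network it's simply the host part set to 255; a tiny helper to illustrate (the addresses here are examples):

```shell
# For a /24, the broadcast address is the first three octets plus .255.
# For other prefix lengths, use the "brd" value shown by `ip addr` instead.
broadcast_for() {  # usage: broadcast_for 192.168.2.57
  echo "${1%.*}.255"
}
broadcast_for 192.168.2.57   # prints 192.168.2.255
```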
To install a client for macOS, it's very simple using brew:

```shell
brew install wakeonlan
```
Once it's installed you can wake machines on the same network by using the command:

```shell
wakeonlan 00:11:22:33:44:55
```
If your machine is on another network and you can reach the broadcast IP, you can supply it in your command:

```shell
wakeonlan -i 192.168.2.255 00:11:22:33:44:55
```
Be sure to replace the MAC address above with the MAC address of the remote machine, and set the broadcast IP if it's on a different network.
At this point you should be able to power on any machine from any other machine on your network. One piece of advice: if you are using VLANs, you'll want to be sure you are sending the WoL packet from the same network; otherwise you'll have to be sure that you can reach and target the right broadcast IP from the network you are on. As I mentioned at the beginning of this post, Wake on LAN is hard, but if you follow these steps for each machine type you should be able to enjoy reliably waking up your machines remotely over the network.
A day late, but I just wrapped up my Ultimate Guide to Wake on LAN for Windows, MacOS, and Linux! https://t.co/xYMVDoyuo9
Day 252 #100daysofhomelab
— Techno Tim (@TechnoTimLive) February 19, 2023
What is a Home Lab and how do you get started? It’s easy. You can get started today in a few different ways. You can virtualize your entire home lab or build it on an old PC, a Raspberry Pi, or even some enterprise servers. The choice is really up to you. You’ll need to first establish some goals for your homelab to determine capacity for your workloads. After that, the rest is up to you. You can take it as far as you want to go, and remember each home lab is almost as unique as the individual who builds it!
Please share this with anyone who asks what a Home Lab is.
Have you ever thought about running a Linux desktop inside of a container? Me neither, until I found this awesome project from LinuxServer called Webtops. A webtop is a technology stack that lets you run Ubuntu or Alpine Linux within a container that is fully accessible from a browser. This lets you use most Linux features in a container at a fraction of the resource cost. Join me as we configure one from beginning to end.
See this post on how to install docker and docker-compose. The docker-compose.yml and .env can be found here.
```shell
mkdir webtop
cd webtop
mkdir config
nano docker-compose.yml   # create the compose file inside the webtop folder
```
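For reference, here's a minimal sketch of what the docker-compose.yml might look like, based on the linuxserver/webtop image. The image tag, timezone, and IDs below are assumptions, so check the files linked above for the real version:

```yaml
services:
  webtop:
    image: lscr.io/linuxserver/webtop:ubuntu-kde  # or an alpine variant
    container_name: webtop
    environment:
      - PUID=1000          # run as your user/group IDs
      - PGID=1000
      - TZ=America/Chicago # your timezone
    volumes:
      - ./config:/config   # persists the desktop's settings
    ports:
      - 3000:3000          # web UI -> http://your-host:3000
    restart: unless-stopped
```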
```shell
docker-compose up -d
```
YouTube sent a package. I have a feeling I know what it is, but we'll all find out live!
Find all of my server gear here! https://kit.co/TechnoTim/techno-tim-homelab-and-server-room-upgrade-2021
Windows 11 is here, and with it comes new hardware requirements. These requirements affect not only physical hardware but virtual hardware too. The TPM 2.0 requirement for Windows 11 is shaking the tech community, the HomeLab community, and even virtualization. Well, have no fear: today we're going to virtualize Windows 11 with a virtual TPM chip! We're going to create a virtual machine according to Proxmox best practices and even install a virtual TPM chip, so that you can test Windows 11 with your hardware and software before upgrading Windows 10 in your HomeLab or production environment, without any hacks!
Windows 11 Download
https://www.microsoft.com/en-us/software-download/windows11
KVM/QEMU Windows guest drivers (virtio-win) download
https://github.com/virtio-win/virtio-win-pkg-scripts
Need to Upgrade to Proxmox 7?
You want to get started developing JavaScript with NodeJS, ReactJS, or AngularJS but you're not sure how to begin? This is a complete, step-by-step guide on how to configure your Windows machine for JavaScript development the right way. You'll learn how to install and configure Windows, the new Windows Terminal, WSL, Ubuntu, ZSH with Oh My ZSH, yarn, NPM, NVM, NodeJS, and VS Code. We'll also configure our git client for SSH access to GitHub. This is the perfect beginner tutorial for anyone trying to develop software on a Windows PC.
```shell
sudo apt-get update
```

```shell
sudo apt-get upgrade
```

```shell
sudo apt-get install zsh
```
Check this site for the command https://ohmyz.sh/#install
It should be something like this:
```shell
sh -c "$(curl -fsSL https://raw.github.com/ohmyzsh/ohmyzsh/master/tools/install.sh)"
```
Be sure zsh and oh-my-zsh are working before continuing.
Check this site for the command https://github.com/nvm-sh/nvm
It should be something like this, but be sure to use the version from the link above
```shell
curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/v0.38.0/install.sh | bash
```
+
If nvm doesn't work, check this: https://youtu.be/kL8iGErULiw?t=507
Close all terminals and all VS Code instances after doing this step
```shell
nvm install 12.16.1
```
Be sure nvm and node are working before continuing.
Check this site for the latest command https://classic.yarnpkg.com/en/docs/install/#alternatives-stable
It should be something like this, but be sure to use the version from the link above
```shell
curl -o- -L https://yarnpkg.com/install.sh | bash
```
+
You'll want to follow this guide for configuring git. Be sure to follow the LINUX version: https://docs.github.com/en/github/using-git/getting-started-with-git-and-github
```shell
git config --global user.name "Techno Tim"
```

```shell
git config --global user.email "your_email@example.com"
```

```shell
ssh-keygen -t rsa -b 4096 -C "your_email@example.com"
```

```shell
eval $(ssh-agent -s)
```

```shell
mkdir code && cd code
```
Be sure you choose the right repo before cloning, this is just an example
```shell
git clone git@github.com:techno-tim/techno-boto-discord.git
```

```shell
cd techno-boto-discord
```

```shell
yarn
```
Today we’re going to maximize your Productivity on Windows with Microsoft PowerToys. I’ll show you step-by-step how you can use, customize, and be more efficient when using Microsoft PowerToys.
PowerToys is a set of utilities and apps that help you enhance the functionality of Windows and maximize your productivity. These tools provide a range of features, shortcuts, enhancements, and various ways to make you more efficient when using Windows. It also has some features that you might have seen in other operating systems but can be enabled in Windows too with PowerToys.
If you haven't heard of PowerToys, or it's been a while since you've looked at all the features, sit back as we go through every utility in the PowerToys suite; by the end of this video you'll be a pro, or at least you'll look like one while using Windows. PowerToys is open source and rapidly developed, with new features in almost every release. So hopefully by the end of this video I will have convinced you to install PowerToys and hit the like and subscribe button. Once installed you'll have a little icon in your system tray where you can launch individual applications, toggle features on and off, or see all settings for all applications.
To get started, download and install PowerToys.
Keep a window on top with Always On Top
We’ll start with Always on Top. This allows you to pin windows on top of all of your other windows. This is helpful for those times when you want a window to always hover above all other windows, regardless of which window is in focus.
To activate it you press: ⊞ Win+Ctrl+T
This will play a sound and show a border around the window that will always be on top. Now if you try to drag a window on top of this window it will remain on top. You can adjust the color mode for the border and choose any color you like or just stick with your theme’s default. You can also adjust the thickness of the border and choose whether or not you want to round the corners. Finally, you can also choose to enable or disable the sound when activating. You can also choose to exclude apps from pinning on top by entering the process name here. After adding it here, this process will ignore the shortcut to activate Always on Top.
Keep your computer awake without adjusting your power settings!
Awake is a quick way to keep your computer awake without having to adjust any of your power & sleep settings. This is helpful when running demos, conferences, or any other task where you want to be sure that your device doesn't go to sleep or turn off its screen.
In the settings for this utility you can choose to keep using the selected power plan, which means it will not affect your power settings at all.
If you change it to Keep awake indefinitely, your computer will stay awake until you explicitly put the machine to sleep, or you exit or disable the utility. This also activates the Keep screen on setting, which gives you the option to keep your screen on too.
If you choose to keep awake for a time interval, you can choose how long you want the utility to stay in this mode before reverting to the previous state. Once the timer is up, it will revert to the default setting.
The last setting, keep awake until expiration, allows you to choose a date and time to end awake mode. Like the previous setting, after it expires the utility reverts to your previous power settings. This is handy if you have a specific date and time you want awake mode to end.
Pick a color from any running application using Color Picker!
Next up is Color Picker and this is one of my favorite utilities in Power Toys. It lets you choose a color from any currently running application and you can copy it in a configurable format to your clipboard. Unlike color pickers for browsers, this works system wide and is great for creatives and developers.
To activate the Color Picker press: ⊞ Win+Shift+C
This will activate the color picker window where you can drag your cursor to any item on the screen. You will see a color preview and the color value in a specific format that we can change. To sample the color, just click and it’s on your clipboard ready for you to paste.
There are lots of nice options we can change in the settings for this utility. For example, we can choose what happens when we activate the color picker: open the editor, pick a color and open the editor, or only pick a color. I set mine to pick a color and open the editor, because this gives me a popup after choosing my color where I can pick one of the supported color formats and copy the value to my clipboard. It also has a history feature on the left where I can choose previously sampled colors, which is nice if you use this tool often. If you want to fine-tune the color you picked, the editor will also show 2 shades darker and 2 shades lighter at the top of the editor window. If you want to go back to the previously selected color, it will be in your history. You can also customize the color even more by clicking on the color at the top middle and making adjustments with the slider.
You can also choose the default color formats and even add your own if you don't see one of the 3 that come out of the box. I typically only use HEX and RGB in my day to day, but it's nice to know you have the option to add more.
Another thing I usually turn on is showing the color name. This is handy if you aren’t great at color recognition and need a way to describe this color to someone else. Just toggle it on, activate the color picker and you will see the name of the color that it matches.
Customize your windows layout using FancyZones!
The window manager built into Windows is OK, and it's improved in later versions, but FancyZones takes it to the next level. FancyZones is a window manager utility for arranging and snapping windows into custom layouts to help you work the way you want, and it allows you to quickly restore them too. This is one of the most feature-rich utilities in the suite, so I'll try to break down the most important parts to get you going fast.
If you're going to use FancyZones, I recommend letting it override the default Windows Snap that's built in. You can do this in the settings by toggling on the override option.
Next let’s choose a default layout for our zones.
You can activate this by pressing: ⊞ Win+Shift+`
Here you can choose one of the existing templates or create your own. Let’s choose one of the existing ones for now.
After choosing a template you can now drag a window while holding Shift
and you will see your zones appear. As you move the window around you will see zones you can snap it to. If you want zone 3, just drop the window in zone 3 and it will fill that area. You can repeat this for any window you have open.
If you want to do this without using the mouse, you can press: ⊞ Win+left/right
For example if you want to move a window into one of the zones, while the window is in focus press ⊞ Win + right
multiple times to cycle through the zones. Once you find the zone you want, just let go of the ⊞ Win
key and you’re done!
Once you start snapping windows, you might find that you want to switch between windows snapped to the same zone. You can easily do this by selecting a window in that zone and then pressing ⊞ Win+PgUp/PgDn. This will cycle through all windows snapped to this zone.
If you want to customize a zone template you can do so by pressing ⊞ Win+Shift+`
and then editing your template and adjusting some of the options. You can increase the number of zones, increase the space around zones, and even the distance to highlight adjacent zones which is helpful when trying to merge 2 zones together when dragging a window around.
If you’re not happy with existing zone templates you can create your own by using the Zone Editor
If you activate the Zone picker with: ⊞ Win+Shift+`
You will see this button at the bottom that says create new layout. If you click, you can create your own custom zones in either a grid layout that snaps windows into place without overlapping, or canvas which is kind of free form and will allow you to overlap windows.
There are many more customization options in the settings, like changing colors and multi-monitor support.
Unlock those pesky locked files using File Locksmith!
File Locksmith is a nice little utility that shows you which files are in use and by which process. This is really helpful if you are trying to figure out which application is locking a file. For example, if I right click on this folder and select "What's using this file?", it will check to see if any of the files in this folder are being used. We can see here that I have a document opened with Word, Excel, VSCode, and even Explorer. I can expand the details of each and see what the specific files are. I can even end the task from here, killing the process and removing the lock. Just be careful: if you end a task, it will kill all instances of it.
Make Windows Explorer more useful with these add-ons!
This File Explorer add-on utility adds some additional functionality to Windows Explorer. The first setting allows you to preview additional file types in the preview pane on the right. To toggle on the preview pane you can press Alt + P. With this setting toggled on you can now see previews for SVGs, Markdown, source code files, PDFs, and G-code files. The other setting in the File Explorer add-on utility allows you to see more thumbnails inside of Explorer when browsing your file system. This can be handy if you work with these types of files, letting you easily see a preview of a file before opening it.
Never make a mistake again editing your host file with Host File Editor!
The Host File Editor utility allows you to quickly make changes to your host file. Your host file is the first place Windows looks to resolve IP addresses and although not common unless you are in IT, you might have some non standard items in this list. The host file editor makes it easy to edit this file without making mistakes. You’ll want to be sure that most of the settings are at default in order to get the most out of this utility and that’s “launch as administrator”, “show a warning at startup”, “top being the position of additional content”, and the encoding being “UTF-8”. You can then launch the host file editor and quickly add additional host entries without having to edit them manually. You can add comments, toggle them on and off, reorder entries moving them up and down, run a test ping, and even see the original host file by clicking on this button.
Bulk resize images with Image Resizer!
Another great feature of PowerToys is the Image Resizer. The Image Resizer lets you bulk resize images just by right clicking and then choosing "Resize pictures". This will pop up some options where you can choose the output for the resize. There are some presets that you can adjust in the settings, but the default options are best. After choosing your size and clicking resize, Windows will batch convert all of the files for you. By default it will make copies, so it's safe to run, but this can be changed easily when resizing your files. There are also more settings you can choose from in the Image Resizer settings. Still waiting on that webp option.
Take control of your keyboard mapping using Keyboard Manager!
The Keyboard Manager is a nice little utility that allows you to remap the keys on your keyboard. This is handy if you have an odd keyboard or want to customize some unused keys. For example, if we want to remap a key that is rarely used, at least for me, like the CapsLock key, we can easily do that by opening the utility, selecting or pressing the physical CapsLock key, and then selecting or typing the key we want to map it to. I chose Enter. When saving you will see a warning about CapsLock no longer being mapped, but that's OK since I never use it. You're free to remap it if you like. After saving this, you can see that my CapsLock key works just like my Enter key! Well, looks like I can't yell at anyone on the internet anymore, so let's undo that. JK
You can also remap shortcuts if you like. If we wanted to remap the Ctrl+C shortcut to Ctrl+V in Chrome only, we can do it like this. This will now override the copy function with the paste function only when in Chrome. Confusing, but it works great.
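Conceptually, both kinds of remap can live in one lookup table, with app-specific shortcut remaps taking priority over global key remaps. A hypothetical sketch of that resolution order (the entries mirror the examples above; this is not Keyboard Manager’s actual implementation):

```python
# Hypothetical remap table: app-specific shortcut remaps take priority,
# then global key remaps, then the key passes through unchanged.
REMAP = {
    ("*", "CapsLock"): "Enter",          # global key remap
    ("chrome.exe", "Ctrl+C"): "Ctrl+V",  # app-specific shortcut remap
}

def resolve(app: str, key: str) -> str:
    """Return the effective key for a given app, falling back to the global map."""
    return REMAP.get((app, key)) or REMAP.get(("*", key)) or key

print(resolve("chrome.exe", "Ctrl+C"))   # → Ctrl+V (only in Chrome)
print(resolve("notepad.exe", "Ctrl+C"))  # → Ctrl+C (unchanged elsewhere)
```

The fallback chain is the key design point: an app-specific rule never leaks into other applications.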
Get some help for your cursor with Mouse Utilities!
Mouse Utilities is another one of my favorite PowerToys in this suite. It is a collection of features that enhance the mouse and cursor functions on Windows. It has 4 different features, the first being Find My Mouse.
Find My Mouse highlights the position of the cursor when you press the left Control key twice. This is helpful if you can’t find your mouse or even when giving demos to emphasize an area in your demonstration. I use this quite a bit to help viewers focus on what I am focusing on. You can change many aspects of this spotlight and animation, making it just the way you like. You can even change the activation method, so if you don’t like pressing left Control twice, you can just shake your mouse until it activates.
The next is Mouse Highlighter, which will highlight left and right clicks of your mouse. You can activate it by pressing ⊞ Win+Shift+H. Once activated, left clicks will be the default color of yellow and right clicks will be the default color of blue. If you want a different color or experience, all of these can be adjusted in the settings.
The next is Mouse Jump. You can activate it with ⊞ Win+Shift+D and it will show you a screenshot of your desktop. If you click on an area in the image, it will jump your cursor to the location that was clicked. This is great for large monitors where you need to travel great distances. Maybe one day I will have a monitor with a resolution high enough that I need something like this.
The last one in Mouse Utilities is Mouse Pointer Crosshairs. If you activate this with ⊞ Win+Alt+P, it will draw crosshairs centered on your mouse pointer. You can adjust any of the settings for the crosshairs in the Appearance & Behavior section.
Remote control up to 4 machines using one mouse and keyboard with Mouse Without Borders!
This is by far one of the coolest features of Power Toys and probably the most complicated. Mouse Without Borders allows you to control up to 4 computers from the same machine with only one keyboard and mouse. Think of it like extending your desktop across multiple machines but you can remote control all machines from all machines. This will make more sense in a bit. You’ll need at least one additional machine with Power Toys installed. Once you have Power Toys installed on all machines, be sure that Enable Mouse Without Borders is turned on.
On the first computer, select New Key to generate a new security key so you can securely connect. Then on the second machine enter the Security Key that was generated on the first machine and enter the first machine’s name. Then select connect. You will then see both machines appear in the device layout. You can rearrange them here to match their physical layout. Now you can switch between each computer by just moving your mouse cursor to the edge of the screen and it will transition between computers! You can add additional computers by repeating this process! Another cool thing I learned is that you can also go the other way too and control your primary machine from the secondary, just start moving the mouse over the shared edge and it will jump back to your main machine!
There are lots of settings and features that you can play with, but some worth mentioning are sharing the clipboard between machines, which allows you to copy text from one machine and paste it into another, and copying files between machines. Files less than 100 MB can be transferred too! This is as simple as copying a file and then pasting it. You will see the file transferred using the clipboard. Pretty cool! If you ever want to disconnect from other remote machines, you can simply generate a new key and the others will drop.
There are additional settings, keyboard shortcuts, and even a troubleshooting section that I encourage you to explore once you’ve set this up.
Remove all formatting when copying and pasting text with Paste As Plain Text!
Paste As Plain Text is just what it sounds like: it will paste text as plain text without the additional formatting. This is super helpful when you are copying something from the web and pasting it into a document. All you need to do is enable Paste As Plain Text in PowerToys and then, when pasting, press ⊞ Win+Ctrl+Alt+V and it will paste your text without the formatting.
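Stripping formatting boils down to keeping only the text content of whatever rich markup is on the clipboard. A small sketch using Python’s stdlib HTML parser, assuming the clipboard holds HTML:

```python
# Sketch of "paste as plain text": drop the markup, keep only the text nodes.
from html.parser import HTMLParser

class TextOnly(HTMLParser):
    """Collects only the text content, ignoring all tags and attributes."""
    def __init__(self):
        super().__init__()
        self.parts = []
    def handle_data(self, data):
        self.parts.append(data)

p = TextOnly()
p.feed("<b>Bold</b> and <a href='https://example.com'>linked</a> text")
print("".join(p.parts))  # → Bold and linked text
```

The bold styling and the hyperlink vanish; only the readable text survives, which is exactly what lands in your document.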
Get a quick preview of your files without switching context with Peek!
Peek is a nice little utility that lets you preview a file without opening it up and without scaling up Explorer. To use Peek, be sure it’s turned on, then select a file in Explorer and press Ctrl+Space. This will bring up a preview window where you can check out the file and even arrow through files if you have multiple selected. To close it, just press Ctrl+Space again and it will close the preview.
Rename multiple files like a pro with PowerRename!
PowerRename is another one of my top used PowerToys. It’s a bulk renaming tool that has a ton of flexibility for managing file names in bulk. To use it, be sure it’s enabled, then select a group of files you want to rename and right-click. From there you will see the PowerRename option. After clicking it you will see a new interface that will help you rename files along with a preview. You can search within a file name for specific text and even use regex if you like. You can then add text to replace the found text. You can apply it to extensions, files, folders, and subfolders. You can also shift the case to lower, upper, title case, or capitalize each word. You can even enumerate each item, basically giving them a numeric suffix. One other cool thing you can do is use variables in the file name. You can see a list of variables by clicking the info button. From here you can click variables and it will add them to the replacement text. Once you are satisfied with the text, you can apply it and it will batch rename all of your files!
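The search/replace-plus-enumerate workflow maps cleanly onto a regex substitution. A simplified sketch of the idea (it assumes filenames have extensions; the real tool handles far more cases):

```python
import re

def power_rename(names: list[str], search: str, replace: str,
                 enumerate_items: bool = False) -> list[str]:
    """Preview a bulk rename: regex search/replace on each stem,
    optionally appending a numeric suffix like PowerRename's enumerate option."""
    out = []
    for i, name in enumerate(names, start=1):
        stem, dot, ext = name.rpartition(".")   # split off the extension
        new_stem = re.sub(search, replace, stem)
        if enumerate_items:
            new_stem += f" ({i})"
        out.append(new_stem + dot + ext)
    return out

print(power_rename(["IMG_001.jpg", "IMG_002.jpg"], r"IMG_", "Vacation ",
                   enumerate_items=True))
# → ['Vacation 001 (1).jpg', 'Vacation 002 (2).jpg']
```

Like the real tool, the function only produces a preview of the new names; nothing touches the disk until you apply it.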
Quickly launch applications, do calculations, and more using the missing app launcher for Windows, Run!
PowerToys Run is one of those features that once you start using it, it’s hard to go back to the old way of doing things. It saves so much time and you look cool doing it too. PowerToys Run is a quick launch utility that, when invoked, allows you to launch applications, do calculations, and even search the web just by typing, and it’s way faster than the Start menu.
To launch PowerToys Run, press Alt+Space.
From here you can do simple things like launch applications. If you want to launch Chrome just type “chrome” then hit enter. Easy enough. You can also search for files, settings, and even the web. You can also do some advanced searches using plugins. For example if you want to do calculations, you just type in the expression and it will compute it and if you want to copy the value to your clipboard you just hit enter. If you want to base64 encode something you can just type #base64 abcdef and see the value and hit enter to copy it to your clipboard. If you want a guid, just type #guid and it will generate one for you. There are lots of plugins you can explore in the settings or toggle off if you don’t plan on using them. Definitely worth checking out all of the available settings you can change if you’re going to use this feature. Super powerful, super cool.
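Those generator plugins map onto a couple of standard-library calls, as this small sketch shows:

```python
import base64
import uuid

# What the #base64 query computes for "abcdef":
print(base64.b64encode(b"abcdef").decode())  # → YWJjZGVm

# What a #guid query produces: a random GUID, different on every run.
print(uuid.uuid4())
```

The win with Run is not the computation itself but having it one keystroke away, with the result copied straight to your clipboard.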
Never misspell jalapeños again with Quick Accent!
Quick Accent is a quick way to type accented characters. This is especially useful when using a keyboard layout that doesn’t support a specific accent.
For example, on a US English keyboard layout there isn’t an easy way to type “ñ”. This makes it hard to type jalapeños. But don’t worry, with the Quick Accent power toy it’s super easy. After enabling Quick Accent, you can activate it by pressing the key you want to accent along with space. Here we’ll hold N while pressing space. Then you can keep pressing the spacebar to cycle through the different characters. Once you find the one you want, just let go of the N and it will insert it. If you want to insert “ö” in German, you hold O and tap the spacebar until you find it, then let go of O. There are many settings you can change, especially the activation key if you want to switch from using the spacebar.
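The characters Quick Accent cycles through are just the accented variants of the base letter. A hypothetical sketch that finds such variants by scanning Unicode character names (Quick Accent’s real candidate lists are curated per language, so this is the idea, not the implementation):

```python
import unicodedata

def variants(base: str) -> list[str]:
    """Find accented forms of a base letter by Unicode name, e.g. n -> ñ, ń, ..."""
    base_name = unicodedata.name(base)   # e.g. "LATIN SMALL LETTER N"
    out = []
    for cp in range(0x00C0, 0x0180):     # Latin-1 Supplement + Latin Extended-A
        ch = chr(cp)
        try:
            if unicodedata.name(ch).startswith(base_name + " WITH"):
                out.append(ch)
        except ValueError:               # unnamed code point
            pass
    return out

print(variants("n"))  # → ['ñ', 'ń', 'ņ', 'ň']
```

Holding N and tapping space in Quick Accent is effectively stepping through a list like this one.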
Get a visual preview of Registry files using Registry Preview!
Registry Preview is a quick little utility to visually preview registry changes. If you’ve ever opened a registry file with a text editor, you know the struggle of trying to validate these files, especially when editing. Registry Preview makes that a little easier. After opening Registry Preview you’ll want to select a registry file to open. You can then see a preview of where each key lives in your registry along with any of its values. If you want to edit the file, you can do so right in the app. Once edited, you can save and reload the file and see the changes in the preview window. If you’re satisfied with these changes you can write them to the registry. You can also use the “Open Key” button to open the Registry Editor directly to your key. A word of caution: only edit the registry if you know what you’re doing.
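A .reg file is plain text with a very predictable shape, which is what makes a visual preview possible. A tiny, hypothetical sketch of pulling key paths and values out of one (roughly what a previewer parses before drawing its tree):

```python
import re

# Sample .reg content; key paths sit in [brackets], values are "name"=data lines.
REG_TEXT = '''Windows Registry Editor Version 5.00

[HKEY_CURRENT_USER\\Software\\Demo]
"Enabled"=dword:00000001
"Name"="Techno Tim"
'''

keys = re.findall(r"^\[(.+)\]$", REG_TEXT, flags=re.M)
values = re.findall(r'^"(.+?)"=(.+)$', REG_TEXT, flags=re.M)
print(keys)    # → ['HKEY_CURRENT_USER\\Software\\Demo']
print(values)  # → [('Enabled', 'dword:00000001'), ('Name', '"Techno Tim"')]
```

Parsing it this way, before anything is written, is the whole point: you can inspect exactly which keys and values a file would touch.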
Measure pixels anywhere with Screen Ruler!
Screen Ruler is a PowerToy that’s not only helpful if you’re a designer or developer, but it’s also super fun to use! Screen Ruler helps you measure the pixels on your screen based on image edge detection. You can activate it with ⊞ Win+Shift+M and then from here you can choose your measure style.
Bounds will create a bounding box where you can click and drag your mouse to measure the pixels in the box you draw. You can also hold Shift to have your boxes persist until you cancel your selections.
Spacing will measure both vertical and horizontal pixels at the same time as you move your cursor around the screen. Horizontal and Vertical measure will do the same but only measure one at a time. You can cancel any of these at any time by clicking the X or just hitting Escape. There are a handful of options you can configure in the settings if you want to.
Forget what a Windows keyboard shortcut does? Check it quickly with Shortcut Guide!
The Shortcut Guide is a nice little utility that shows common Windows shortcuts in an overlay. You can activate it by pressing ⊞ Win+Shift+/ (or ⊞ Win+Shift+? if you’re looking at the forward slash key). From here you can see all of the items you can launch by pressing ⊞ Win plus the key you see on the screen. For example, it says the emoji panel can be opened with ;, so all we need to do is press ⊞ Win+;. Feel free to explore the other shortcuts on the screen.
Extract text from any image using Text Extractor!
Text Extractor is a great utility to extract text from any image and copy it to your clipboard. It uses OCR to do this and it actually works pretty well. This is great when you want to quickly grab text from an image or a screenshot. To activate it, all you need to do is press ⊞ Win+Shift+T and then, with your crosshair, select the area that you want to extract text from. After selecting, the text will be copied to your clipboard where you can paste it. The Text Extractor can only extract languages that have the OCR language pack installed, so if you need to install additional languages, I’ll have a link in my documentation on how to do that.
Take control of your microphone and camera during conferences using Video Conference Mute!
Now, this PowerToy is in legacy mode, meaning they won’t release any updates to it, but it’s worth mentioning because it’s still available. I wouldn’t be surprised to see this go away since Windows is starting to support this natively without this PowerToy. Anyway…
First you’ll need to be sure you run Power Toys as Administrator. You’ll need to close it first, then right-click and run as administrator. After you do this and visit the Video Conference Mute section you will see shortcuts for muting the camera and microphone.
To mute both your camera and microphone you can press ⊞ Win+Shift+Q, and you will see a little bar appear that shows that both are muted. You can press this combination again to toggle them both back on. To toggle just the microphone you press ⊞ Win+Shift+A, and to toggle just the camera it’s ⊞ Win+Shift+O. If you want to mute your microphone and toggle it only when you want to speak, you can use the push-to-talk feature by pressing ⊞ Win+Shift+I. This will unmute your microphone while you are holding this combination of keys. Again, this is a legacy feature that I personally don’t use, but I want to cover it for completeness.
I hope you can see how powerful the Windows Power Toys are and how they can help you be more efficient at using Windows. There are so many useful utilities in this suite and more being added with each new release. I learned quite a few new shortcuts, and new ways of working on Windows, and I hope you learned something too! And remember if you found anything in this post helpful, don’t forget to share!
Over the last few weeks I dove deep into PowerToys (open source utilities for Windows) and learned how to be more productive using Windows. Finally, a decent app launcher, color pickers, remote controlling multiple machines, and more!
— Techno Tim (@TechnoTimLive) August 20, 2023
Check it out!
👉https://t.co/rENRbEM5tB pic.twitter.com/MZLRXQCDAR
🛍️ Check out the new Merch Shop at https://l.technotim.live/shop
⚙️ See all the hardware I recommend at https://l.technotim.live/gear
🚀 Don’t forget to check out the 🚀Launchpad repo with all of the quick start source files
Lots of people ask which terminal I use on Windows and how I configure it. It’s pretty simple: I use the Microsoft Windows Terminal and it’s a fantastic terminal on Windows. It is free and open source. With Windows Terminal, you can install and configure different environments for Windows and Linux. You can choose between Ubuntu or any other WSL 1 or WSL 2 (Windows Subsystem for Linux) environment along with the typical PowerShell and cmd. In this fast, simple, and easy tutorial we’ll set up the Windows Terminal, install WSL, then install Ubuntu, and configure Ubuntu with ZSH (Z shell) and oh-my-zsh. Then, you’ll know exactly how I configure my terminal on Windows. Bonus: now all your copy pasta commands will work on Windows, macOS, and Linux!
Self hosting a VPN has traditionally been hard to set up and we’ve had very few options. That is, until WireGuard came about. WireGuard is an extremely simple yet fast and modern VPN that utilizes state-of-the-art cryptography. It also supports running inside of a Docker container, and that’s exactly what we’ll be using in this tutorial!
Introducing the ZimaBlade, an affordable, low power, single board computer that’s great for a home server, homelabs, tinkering, NAS, retro gaming, or even a dual boot desktop system like me.
Disclosures:
Every so often a device comes into focus that’s a little different than the rest. It looks familiar yet different. It stands out among the others in the sea of familiar devices, and once you use it and hold it in your hands you understand why it’s different and why that matters. This is the ZimaBlade, a single board computer from Ice Whale. It’s the second device we’ve seen from them and it’s much different from their first device, the ZimaBoard. There are a few features that make you think this might be a successor to the ZimaBoard, but there are many others which show that the ZimaBlade can stand on its own as a new product in their lineup. Today I am taking a look at the ZimaBlade and discussing some of my thoughts around this device, how it compares to the ZimaBoard, and some of the interesting quirks I found that you might be interested in.
ZimaBlade looks like a Walkman from the 80s, and that’s a good thing
Upon opening it, you can’t help but notice the cyberpunk theme on the packaging and the device. I am a fan of this design and it fits right in with the renegade, self-hosting, cyber native vibe they are going for. Next, they chose to make the case transparent, which again plays well with the theme they have going and gives this device some character. And last, how much it looks like a Walkman from the 80s. Now that’s not a bad thing: Walkmans had a ton of style, came in all shapes and sizes, and were an icon of 80s culture. But I digress.
The ZimaBlade comes with either an Intel Celeron J3455 Quad core Apollo Lake processor or an Intel Celeron Dual core processor depending on which model you choose.
This CPU supports AES NI for encryption, VT-x for virtualization, and VT-d for directed I/O or hardware passthrough.
This processor is paired with Intel’s integrated HD 500 graphics, clocked at around 700 MHz. This chip also supports Quick Sync and a handful of other features that help ensure that video playback is smooth, with enough processing power to do some lightweight retro gaming. You can output video with the mini DisplayPort, which supports up to 4K at 60 Hz.
It has a SODIMM slot that supports up to 16 GB of DDR3 RAM which is removable and not soldered on. As far as storage goes it has 32GB eMMC for storage and dual SATA 3.0 ports for connecting additional drives if you choose to do so.
It has a 1 Gb ethernet port and as far as USB goes it has one USB A 3.0 port and one USB C port that supports power, data, and display.
The back of the case is also aluminum alloy that is fused to the heatsink, which helps dissipate heat without a fan.
Last but not least is this bump you see here, and that’s the PCIe 2.0 x4 slot. This slot is what made the ZimaBoard unique: not only an x86 single board computer, but one with a PCIe slot for connecting devices that you can’t use on other mini PCs, laptops, or similar devices without a Thunderbolt enclosure. And as you can see, the ZimaBlade also gets this slot.
This slot can be used for almost any PCIe device you can think of that can fit into an x4 slot and doesn’t require external power. This includes things like 10G network adapters, 2.5G network adapters, additional USB ports, WiFi 6 adapters, additional SATA ports, NVMe adapters, and cards for AI and ML.
ZimaBlade with its case, showing off some of its ports
After installing the RAM and plugging in the device you will boot into Debian Linux which comes preinstalled. You can sign in using the username and password of casaos. You’ll want to change this as soon as possible and update your system.
You might want to grab the IP too, because you’ll need it to get into the CasaOS web UI that also comes preinstalled. CasaOS is an open source management interface to help you install over 50 Docker apps with a single click, along with supporting any other Docker image you can find. It makes setting up a NAS with Docker apps a snap and is great for a beginner, although if you’re already familiar with other open source NAS solutions you might find some features missing when evaluating CasaOS.
If you’re interested in a deeper dive into what CasaOS is, I’ve done a video on 20 different projects you can run on your ZimaBoard and now ZimaBlade, including CasaOS.
One thing that was mentioned in the instructions, yes I read them, if you go out to
You’ll also see that they have released Windows and Mac clients too. After downloading and installing these clients you’ll notice that it also installs ZeroTier. Again, there isn’t documentation on this stuff yet because it’s pretty new, but it looks like they might allow you to connect to your Zima device easily over the internet no matter where you are, whether you have an edge connection or not. This might be for the upcoming ZimaOS that I peeped on their GitHub. Oh, and if you’re worried about the code or client that’s running, all of this is open source and on GitHub, so you’re free to check it out if you don’t trust it.
PCIe connectivity is often not found on mini PCs
The nice thing about this hardware being open is that you are not locked into how the vendor wants to use it. If you don’t like Debian or CasaOS, fine, just wipe it and install whatever you like. Want to build your own NAS using OpenMediaVault or TrueNAS, go for it. Just get a bootable flash drive, install it, connect a couple of drives and you’re good to go. If you do go this route I would recommend picking up the NAS kit that includes this dual 3.5” storage drive stand and a special Y SATA cable that helps you connect and power 2 additional drives. And just like that you have a NAS…. or do what I did and create a dual boot Windows and Linux system!
I use both Windows and Linux a ton on my workbench and I always find myself toggling back and forth when testing hardware. When I saw the ZimaBlade had a case for 2 additional drives, I knew right away that this is how I was going to use it. I grabbed a few old SSDs out of my drawer, picked up a couple of cheap 3.5 to 2.5” drive adapters, installed the drives, and then connected them all with this Y SATA splitter and it was ready to go.
Installing a dual boot system on this was relatively easy; however, I’ve had some experience with this in the past and I even did a video on it. If you’re interested in how to do this, I will have a link to my documentation that explains exactly how to dual boot a ZimaBlade.
Now I can choose Windows or Linux when booting up. You can see that this little quad core processor is working its tail off during the boot process, but it tapers off after a few seconds. The integrated Intel video card works well enough for watching videos, and I assume it works fine for retro games too, but this is just enough for what I need: testing hardware and flashing and wiping drives.
This device can boot into Windows 11 and play videos, no problem!
On the Windows side it’s using anywhere from 6-10 watts of power after signing in and letting the machine sit for about 5 minutes to ensure that most sign in tasks were complete. This variance we see here is due to a lot of the background tasks that run on Windows and a little bit for the task manager.
When launching the default browser of Edge and playing a video on YouTube we can see the power usage jump from 10–18 watts. The other thing when looking at the task manager is that the integrated graphics have kicked in to decode the video. If we look at the stats for nerds there were only a few dropped frames and most of them were when starting the video and resizing the window to full screen.
It’s no surprise that this also works with Linux! (Ubuntu 23.10 shown in picture)
The same goes for Linux: when booting, I can choose Linux from the GRUB menu and boot into Ubuntu 23.10 Desktop, and as you can see the CPU and resources taper off after a few seconds and everything runs smoothly. Power usage after letting the machine sit for about 5 minutes is anywhere between 5-7 watts; again, the variance is due to background tasks and System Monitor. Playing a YouTube video with the default browser of Firefox, we can see the power usage jump from 11-17 watts. System Monitor doesn’t report GPU status, so I installed the Intel GPU utilities and was able to see the Intel video card decoding the video.
One of the things I use this for most often is flashing SSDs and wiping and formatting drives. These two things are pretty cumbersome to do without physically plugging drives into a system via SATA, PCIe, or Thunderbolt, and each of those solutions takes up a lot of space or requires additional hardware. I found that having an “open air” PCIe slot on this machine makes it super simple to complete any of these tasks, and with a small footprint. And this is just one of many possibilities with the ZimaBlade, because at the end of the day it’s really just a mini desktop.
The engineering sample came with a white PCB, let’s hope it sticks around
I do have some thoughts after using this for a couple of days. Overall it’s great and hard to complain about something when there’s so much to like.
I think my biggest gripe has been power. I went into this thinking that I could use any USB C power adapter since it supports Power Delivery 3.0. Turns out that this requires a 12V Power Delivery 3.0 power supply, and I couldn’t find any power adapters here at home that supported it: not my MacBook charger, an Anker charger, or even this no-name charger. This really isn’t a big deal; however, when I used one of my existing adapters, it seemed to do something weird to the device to where I had to reset the CMOS a few times. I reached out to IceWhale and they pointed out that I needed to use a Power Delivery 3.0 12V/3A power adapter like the one that is included. I 100% agree that the provided adapter works, but USB C Power Delivery 3.0 at 12V/3A is less common in North America, at least among the power adapters I have access to. IceWhale did say that they were going to try to make it compatible with more adapters before production, so we’ll see. Moral of the story: just use the adapter they ship and you’ll be fine, which means you might want to buy their power adapter to be sure. When using the correct power adapter with a USB C hub, everything worked fine, including USB, HDMI, and power delivery.
Another small gripe is that you can’t see the power LED when you have it in the case. The only reason I am bringing this up is because I had to check the LED over and over when figuring out the previous issue related to power. Not a big deal but one little hole or clear cutout in the case would go a long way.
While we’re on the topic of LEDs, it would be nice to have a pair of lights on the NIC for status and activity, and I also noticed that the HD activity light doesn’t flash for eMMC. Again, not deal breakers, just nice-to-haves.
Also, a small power button would have been awesome. I still feel odd pulling the power to reset or shut down this machine. I did check for pins and I could only make out the reset switch. It would be great to add these pins to the documentation; I am sure others will ask about this if they haven’t already. Everything else is documented nicely in the booklet that shipped with the device.
Oh, and they sent me 2 devices, one was an engineering sample with a white PCB and one that is closer to the final product which has a black PCB. Honestly I love the white PCB model even though I am a huge fan of dark mode! This white PCB just makes it feel more SciFi and futuristic and really makes all of the componentry pop. I dunno, what do you think? Light Mode PCB or Dark Mode?
Hopefully some printed caddies or the like make their way into the ecosystem
Also, it would be cool if there were some sort of printable adapter to prop up your PCIe devices when they are plugged in. When you aren’t using the 2 drive base stand it’s easy to prop something under it, but when it’s dangling about 6 inches up it’s kind of scary. A printable tray or stand that could hook into the existing stand would go a long way. You could even sprinkle in some of the Cyber Punk designs from the device.
OK it sounds like I am nitpicking now but these are just small things that I think would really put some polish on this device.
It’s hard to complain when there’s so much to like about this $64 board. I know that it has an older CPU in it, along with DDR3, and the PCIe slot is only 2.0, but considering the cost and what I will use it for, I would rather keep the cost down than pay for features I personally won’t use.
And I think that’s the goal of this device as stated by IceWhale:
“The ZimaBoard was built on top of a relatively expensive ($120-$200) x86 single-board computer compared with the popular Raspberry Pi. Since most people can’t afford such expensive hardware without knowing what exactly it can do, we decided to create something better suited for the broader cyber native – something that is cheaper, smaller, and easier to use and carry around.”
If this is something you want to support, check out the links for more information:
Well I learned a lot about the new ZimaBlade, Power Delivery over USB C, Dual Booting Windows and Linux and I hope you learned something too. And remember if you found anything in this post helpful, don’t forget to share!
The past week I got to play with and configure the ZimaBlade, a new single board computer. Super fun device with some quirks. Did it replace my ZimaBoard?
— Techno Tim (@TechnoTimLive) October 31, 2023
👉https://t.co/GfuWbkqXMJ
(Also I couldn't pass up doing a Halloween themed thumbnail!) pic.twitter.com/HzW7mxE35n
ZimaBlade:
ZimaBlade Accessories:
I’ve had a ton of fun setting up and configuring a ZimaBoard and CasaOS over the last few weeks! While CasaOS is a great fit for your Home Server projects, I also decided to walk through over 20 other home server projects you can start today. These projects are for everyone, from the beginner, to the tinkerer, to the hardcore enthusiast! Thanks to ZimaBoard for sending this device!
Check out ZimaBoard today!
See the entire kit here: https://kit.co/TechnoTim/zimaboard-project-kit
(Affiliate links are included in this description. I may receive a small commission at no cost to you.)
The ZimaBoard has been out for a little while now but I thought it would be a great time to check in and see how it’s doing, along with the open source project CasaOS which ships with every ZimaBoard. I also wanted to share with you lots of projects you can start today with a ZimaBoard in case you need some inspiration for your tech projects. I’ll cover some of the easy or “beginner” projects that don’t take a lot of work to get going, then we’ll cover some of the projects for the Tinkerer, and then those projects for the hardcore weekend warrior tech types. But first, what is a ZimaBoard?
The ZimaBoard is a self-proclaimed “World’s First Hackable Single Board Server”, which means that it’s a complete functioning computer built on a single circuit board, and while most single board computers don’t have expansion slots, this one actually does. The ZimaBoard comes in 3 varieties: the 232 has an Intel Celeron N3350 dual core CPU with 2 GB of RAM, the 432 has a quad core Intel Celeron N3450 CPU with 4 GB of RAM, and the 832 has the same quad core Intel Celeron N3450 but twice the RAM of the 432 for a total of 8 GB.
Outside of those differences, each ZimaBoard comes with 32 GB of eMMC storage, 2 SATA ports for disk drives, 2 Gigabit LAN ports, 2 USB 3.0 ports, and, what makes this different than most kits you see out there, the PCIe slot that you can connect PCIe devices to. But more on that later.
It also has a mini display port that can output up to 4k 60 and has a TDP of only 6 watts.
It has a mini DisplayPort, (2) 1 Gbps NICs, (2) USB 3.0 ports, and 12v power
A few other things you might be interested in if you’re a geek like me: the CPU supports Intel VT-x for virtualization, VT-d for hardware passthrough, AES-NI for encryption, and video transcoding, all of which will come in handy with some of the projects we’re going to be talking about today.
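If you want to verify those CPU features yourself on any Linux box (including the ZimaBoard), the kernel exposes them as flags in /proc/cpuinfo. A quick sketch, where `vmx` is VT-x and `aes` is AES-NI:

```shell
# Check whether the CPU advertises the flags the projects below rely on.
# vmx = Intel VT-x (virtualization), aes = AES-NI (hardware encryption).
for flag in vmx aes; do
  if grep -qw "$flag" /proc/cpuinfo; then
    echo "$flag: supported"
  else
    echo "$flag: not found"
  fi
done
```

(VT-d support is a chipset/BIOS feature rather than a CPU flag, so it won’t show up here.)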
So first, we’re going to start with the “beginner” projects, but don’t be fooled by the name, this doesn’t mean that these projects aren’t technical, it just means that they take very little to get started. We’re going to start with one of the best uses for your ZimaBoard and that’s CasaOS.
CasaOS comes preinstalled with your ZimaBoard. CasaOS is an open source service, I’ll say, and not necessarily an OS; it installs on top of Debian and many other Linux distributions, but I still think the name is fitting. It’s software that focuses on delivering a simple personal cloud experience around the Docker ecosystem, and I think they’ve done a great job delivering on that promise. You can launch it from the desktop on your ZimaBoard or simply connect to it from a web browser on your network.
You’ll be greeted with a dashboard and a few widgets. We can see the time and date, our system status including CPU and RAM usage, our storage along with any additional connected drives, and our network status where we can toggle between our two network adapters. We also get a built-in search bar where we can search using our favorite search engine.
There are two things you’ll be using this dashboard for:
If we launch the app store and take a look, we can see lots of applications to install. The nice thing about CasaOS is that every app you see here can be installed and configured with a single click. That means no messing with ports, account names, volumes, or any of the other typical things you have to do when installing Docker containers. You don’t even need to know what a Docker container is; you can treat this as an app store without knowing any of the implementation details. Some of the apps included in the app store are:
and many others that will help you build up your own little personal cloud in no time!
CasaOS is a nice little open source service to run on your ZimaBoard and more!
And if you can’t find the application you want in their app store, you can run any Docker container you like by using the custom install feature in the app store and then either filling out a form, or using the import feature to paste docker commands, a Docker Compose file, or an appfile (an export you create from your own apps to share with friends). Importing a config will fill out the form for you. It’s hit or miss whether all of the settings get imported properly, so it’s worth a look to make sure they’re right.
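As an example of what you might paste into that import box, here’s a small Compose file for Uptime Kuma, an app I’m using purely for illustration (the image, port, and volume path come from that project’s defaults; adjust them to taste):

```yaml
# Hypothetical Compose file to paste into CasaOS's custom-install import;
# CasaOS reads it and pre-fills the install form for you.
services:
  uptime-kuma:
    image: louislam/uptime-kuma:1
    ports:
      - "3001:3001"           # web UI
    volumes:
      - ./uptime-kuma:/app/data   # persist app data next to the compose file
    restart: unless-stopped
```

After importing, double-check the ports and volumes in the form before clicking install.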
Once these apps are installed, if they have a web management page we can simply click on the app to launch it and configure it from there.
The other place where you’ll probably spend a lot of time is the Files “app”. It’s a super elegant way to manage and share files, and I think it’s one of the cleanest web file management UIs out there, not only because it looks good and is fast, but also because it makes sharing files super easy. Let’s take a look…
The Files app is a nice way to manage your files!
After launching the Files app we can see a default storage location for our media and documents, and from here we can upload, download, and manage files; it even has a built-in previewer for different file types. If you want to share a file from here, you can simply share its folder from the menu and then open it from any machine on your local network. That has to be one of the simplest ways of sharing files I’ve seen. If you want to see all of your shares, click the share icon at the bottom and it will list them.
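Shares like these are typically plain SMB (Samba) shares under the hood, so any OS can also mount them directly instead of going through a browser. A hypothetical /etc/fstab entry on a Linux client might look like this (the hostname, share name, and mount point are made up for illustration):

```
# /etc/fstab on a client machine - mount a CasaOS share over SMB as a guest
//zimaboard.local/Media  /mnt/zima-media  cifs  guest,uid=1000,iocharset=utf8  0  0
```

If the share requires credentials, you’d swap `guest` for a `credentials=` option pointing at a file with the username and password.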
Since we’re talking about sharing and we’re down here in the bottom left, we should talk about the FilesDrop feature. This is a cool feature similar to AirDrop for Apple devices, except it works on the web and with any device that has a browser.
Let’s say for instance we are on our Windows machine and want to share a file with our phone. Instead of transferring the file through Google Drive, uploading it and then downloading it on our phone, we can do it all through CasaOS. If we click on the FilesDrop button it will launch a new experience where it shows my machine (the Windows Chrome machine), and then any other device that connects to CasaOS and visits this page will also show up here. When I connect my phone you should see another icon pop up. (It says macOS Chrome when it should say iOS Chrome, but that’s not important.) From my Windows machine I can click on my phone icon and then choose files I want to send to it. If I want to send this photo right here, I choose it, and then on my phone I get a prompt to save it to my phone! I can also go the other way and upload files from my phone back to my machine, all without the cloud and from any device that has a web browser!
FilesDrop is like AirDrop, but for any machine with a browser and only uses your local network connection!
One other feature that you might be interested in when using the Files app is the ability to connect cloud storage. If we click the plus, we can add a Dropbox account, Google Drive, or even another network share on our local network. This feature is really cool for transferring things from your Google Drive to your own cloud, or vice versa. It’s also helpful for migrating to or from the cloud, and could be even more useful if one day you could back up your data from CasaOS to one of these locations.
Another thing you might be interested in is the storage feature. This feature is limited but allows you to add additional drives to your ZimaBoard in a snap. You just open the storage manager and click create storage. You’ll get a prompt asking if you want to add this device and warning that it will erase all contents from it. Once it’s created you will see the device in the Files app and you can use it for additional storage. There’s also a new merge storage option that will merge all of your storage into one, which seems like a simple way of expanding your storage, but it also means that if one drive dies you might lose all of your data. I did enable it and it does exactly what it says: it merges multiple drives into one using mergerfs. It’s also pretty easy to undo.
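Under the hood, a merge like this amounts to a mergerfs mount that pools several drives behind one path. A hypothetical /etc/fstab entry (the branch paths and options here are illustrative, not what CasaOS actually writes) might look like:

```
# /etc/fstab - pool two data drives into a single mount point with mergerfs
/mnt/disk1:/mnt/disk2  /mnt/pool  fuse.mergerfs  defaults,allow_other,category.create=mfs  0  0
```

`category.create=mfs` places new files on whichever branch has the most free space; mergerfs doesn’t stripe data, which is why removing the merge later is straightforward but also why it offers no redundancy.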
You can add and wipe additional drives, and even merge them if you want!
Now don’t let the simplicity of this UI fool you; you can still do some advanced things from the web dashboard, like viewing system logs, opening a terminal, checking the logs of each individual Docker container, and even exec’ing into them. All in all, I think CasaOS is probably one of the best projects for the ZimaBoard.
The next project I can see people using this for is installing and running operating systems. Windows and Linux run fine on a ZimaBoard. I’ve tested it with Windows 10, Ubuntu Desktop, and Ubuntu Server, and I am sure many other distributions will run on this board because at the end of the day it’s an x86 Intel-based system. You won’t have any issues getting or installing drivers because it’s running on Intel hardware. Most things will be plug and play, and if you are going to go this route I would recommend picking up a USB hub and a solid state drive for additional storage. Then you can run or test your software on this tiny little package. It does output 4k at 60 Hz, so it will look great on your display, though it will start to push the limits of what you can do with this little board. Office apps, web browsing, and watching video are all fine; anything beyond that and you might need a little more power. You could even dual boot Windows and Linux with 2 drives, either by connecting both at once or by swapping them each time you want to boot, but that’s starting to get into some of the more advanced use cases, and more for the Tinkerer.
I’ve tested on Windows 10, Ubuntu Desktop, and Ubuntu Server and all run great!
This next group of things you can do with your ZimaBoard is dedicated to the Tinkerers. These are folks who aren’t afraid of running Linux headless, know their way around a terminal, and know how to exit vim (first make sure you’re not in edit mode and that you are in command mode, then press :quit… but if you’ve made changes… nevermind, you get the picture).
The first thing I would recommend running on a ZimaBoard for this group is Portainer. Portainer is a great UI for running all of your containerized applications, including some of the same ones we talked about earlier like Plex, Jellyfin, and Nextcloud. This gives you a lot more control over which OS and which applications you run, and you can keep things as minimal as you want, saving resources. But with that comes a little complexity. But you’re a tinkerer, right?
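To give you an idea of what getting started looks like, here’s a minimal Compose sketch for Portainer CE based on that project’s documented defaults (treat it as a starting point, not a definitive config):

```yaml
# Minimal Portainer CE sketch: the UI is served over HTTPS on port 9443,
# and the Docker socket mount is what lets Portainer manage your containers.
services:
  portainer:
    image: portainer/portainer-ce:latest
    ports:
      - "9443:9443"
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - portainer_data:/data     # persists users, settings, stacks
    restart: always

volumes:
  portainer_data:
```

Once it’s up, you browse to https://your-zimaboard:9443, create the admin account, and install everything else from there.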
Another quick project that sounds like a ton of fun is EmulationStation, which is the same software that RetroPie is based on. Just install your OS, Windows or Linux, then install EmulationStation and your emulators, connect a few controllers, and you’re good to go. The ZimaBoard has all of the rest of the hardware you need to play retro games and is compact enough to bring with you on a road trip.
You can easily build out an Emulation Station with a couple of USB powered controllers!
Other uses for a ZimaBoard include some projects that I will definitely use it for: diagnostic and troubleshooting projects.
First is a disk wiping station. Having a dedicated little machine to securely wipe disks I am no longer using is welcome, because my current solution is an old janky PC. Having something this small dedicated to wiping disks just makes sense once you use it. I can just boot into KillDisk, start a wipe, and walk away.
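The idea behind any disk wiper is the same: overwrite the data before discarding it. As a safe, hypothetical demo, GNU shred does at the file level what KillDisk does to a whole drive (pointing shred at a /dev/sdX device would wipe that disk, so triple-check the device name before ever doing that):

```shell
# Demo on a throwaway file: overwrite it 3 times, zero it out, then delete it.
tmp=$(mktemp)
echo "old sensitive data" > "$tmp"
shred --iterations=3 --zero --remove "$tmp"
[ ! -e "$tmp" ] && echo "wiped"
```

Note that overwrite-style wiping is really aimed at spinning disks; for SSDs, the drive’s own secure erase is generally the better tool.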
You can easily connect HDDs and SSDs for a disk wiping station and more!
Another thing I use that old janky PC for is updating firmware on devices, especially SSDs. This is usually the case when building new systems or replacing drives in existing systems. I can even do the same for NVMe drives with this PCIe adapter.
You can also connect NVMe drives with this adapter
Another thing I do with that old janky PC (sorry, old PC) is clone disks. I use CloneZilla every now and then to back up or clone hard drives from one to another. CloneZilla has been my go-to for years, either backing up and restoring images over the network or doing a disk to disk clone. If you’re doing a disk to disk clone you will need to pick up this special Y adapter that lets you connect 2 drives at once, but it’s like 4 dollars in their store. One of the other use cases is simple data recovery. It’s nice to have a small, simple machine that I can plug a drive into and try to recover files if the drive is no longer bootable. And all of this is easy and accessible with a ZimaBoard.
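At its core, a disk-to-disk clone boils down to copying every block from the source to the target, which is exactly what dd does. Here’s a sketch on throwaway image files so nothing gets destroyed; on real hardware the `if=`/`of=` arguments would be the two drives (e.g. /dev/sdX), and getting them backwards destroys your source, so double-check:

```shell
# Create a small fake "source disk" image, clone it block-for-block, verify.
dd if=/dev/urandom of=source.img bs=1M count=4 status=none   # stand-in for the old drive
dd if=source.img of=clone.img bs=1M conv=fsync status=none   # the clone step itself
cmp -s source.img clone.img && echo "clone verified"
```

CloneZilla adds a lot on top of this (it skips unused blocks on filesystems it understands, compresses images, and clones over the network), which is why it’s usually faster than raw dd.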
Now you may have noticed I didn’t mention a NAS. That’s because I honestly think the best NAS you can run on this tiny little machine is CasaOS; sure, TrueNAS and OpenMediaVault should work, but CasaOS already does this beautifully. And since you are a Tinkerer, you might as well install Debian headless and then CasaOS on top to save resources!
The last group of projects is geared towards the hardcore. It’s for those folks who like to push hardware to its limits or experiment with something they’ve never tried before. This is where I think the PCIe slot really comes into play. It can be used to connect any PCIe device that can run in an x4 slot, which should be most of them, because the slot is open at the end.
Most PCIe devices will fit into this slot since the end is open (it does supply power, but limited power)
While I know it’s technically possible to attach a video card to this device, I am not sure that a majority of the people who pick it up will be doing so. I could be wrong, but I think more people will be attaching smaller devices like extra NICs, wireless adapters, and possibly more SATA drives.
While GPUs will fit, your mileage may vary on whether they work or not.
This opens the door to turning this device into a router or firewall. Having 4 cores, 2 gigabit NICs, AES-NI, and up to 8 GB of RAM makes this a solid choice for pfSense or OPNsense: it’s small and compact, has enough compute and RAM, has dual NICs, and is completely silent. And if you want to turn this into an access point, all you need to do is add a wireless NIC and you have a nice little OpenWrt system!
Yup, you can even turn this into a Firewall / Router with a few extra NICs!
But even if you’re not into creating a router or firewall and you’re the hardcore type, there are plenty of projects for you. If you know Raid Owl, he created a high availability cluster with 3 of them using Proxmox, which points to another use case: installing a hypervisor. Because the ZimaBoard supports both VT-x and VT-d, it can be used to test out the latest hypervisors.
And if creating and testing virtualization isn’t your thing, there’s the use case I think this is great for: developing and testing hardware. Most developers I know have laptops and don’t have access to a PCIe slot, and that can be painful if you are working on a project that requires one, for example machine learning and AI. The Coral TPU from Google is a great example of a small PCIe device capable of doing AI in a small package, and if you can get your hands on one it could fit right in this slot. Having access to AI on a small board like this could let you run local detections on your video feeds, so you can detect things like people, cars, and more. There are so many use cases for the hardcore that I could go on all day!
If I could get my hands on a Google Coral TPU, it would fit right here (Hey Google, call me!)
ZimaBoards are super flexible and can be applied to many projects, whether you are a beginner, a tinkerer, or a hardcore enthusiast there’s bound to be a project for you. I am sure that I didn’t cover all of the projects you can do with a ZimaBoard and if I missed one let me know what you’d use it for in the comments below. Well, I learned a lot about ZimaBoards, lots of cool projects, and I hope you learned something too. And remember if you found anything in this blog post helpful, don’t forget to share!
You can get an idea of how small the ZimaBoard really is next to this AAA battery!
I've had a ton of fun setting up and configuring a ZimaBoard and CasaOS over the last few weeks! I decided to walk through over 20 other home server projects you can start today.
— Techno Tim (@TechnoTimLive) July 14, 2023
Check it out!
👉https://t.co/htIeMyXC8W pic.twitter.com/G7PAImfWya
🛍️ Check out the new Merch Shop at https://l.technotim.live/shop
⚙️ See all the hardware I recommend at https://l.technotim.live/gear
🚀 Don’t forget to check out the 🚀Launchpad repo with all of the quick start source files
This site and all of its contents is self-hosted and ad-free. If you’d like to help keep it this way, consider supporting by one of the following options:
You can support me and my work directly:
I also make a small commission (at no cost to you) if you use one of my affiliate links below:
Donating directly: