From d2f7c14b8ecccdde2b78b08bdc665f8a09eefe81 Mon Sep 17 00:00:00 2001
From: "Waylon S. Walker"
Date: Sat, 3 Aug 2024 21:44:08 -0500
Subject: [PATCH] tags
---
pages/blog/2018-retrospective.md | 5 +-
pages/blog/automate-your-deploys.md | 3 +-
pages/blog/blogging-in-2024.md | 1 -
pages/blog/brainstorming-kedro-hooks.md | 4 +-
pages/blog/cmd-exe-tips.md | 4 +-
pages/blog/expand-one-line-links.md | 6 +-
pages/blog/find-kedro-release.md | 11 +-
pages/blog/fix-git-commit-author.md | 7 +-
pages/blog/gatsby-rss-feed.md | 6 +-
pages/blog/git-diff-branches.md | 9 +-
pages/blog/goals-2019.md | 5 +-
pages/blog/happy.md | 74 +-
pages/blog/journey.md | 68 +-
pages/blog/kedro-dependency-management.md | 4 +-
pages/blog/knock-and-sweep.md | 2 +-
pages/blog/last-n-git-files.md | 3 +-
pages/blog/long-variable-names-are-good.md | 4 +-
pages/blog/mentorship-vs-sponsorship.md | 4 +-
pages/blog/out-of-space.md | 4 +-
pages/blog/pandas-pattern.md | 1492 +++++++++--------
pages/blog/passion.md | 41 +-
pages/blog/practice-your-craft.md | 5 +-
pages/blog/productive-one-on-one.md | 4 +-
pages/blog/python-tips.md | 5 +-
pages/blog/should-i-switch-to-zeit-now.md | 7 +-
pages/blog/stories_10-10-2020_10-21-2020.md | 4 +-
pages/blog/strip-trailing-whitespace.md | 8 +-
pages/blog/thank-you.md | 4 +-
pages/blog/vim-notes.md | 103 +-
.../adding-google-fonts-to-a-gatsbyjs-site.md | 6 +-
pages/notes/debugging-python.md | 4 +-
pages/notes/gatsby-scripts-with-onload.md | 4 +-
pages/notes/kedro-basics.md | 7 +-
pages/notes/kedro-catalog.md | 7 +-
pages/notes/kedro-preflight.md | 4 +-
.../notes/maintianing-multiple-git-remotes.md | 3 +-
pages/notes/new-machine-tpio.md | 7 +-
pages/notes/packages-to-investigate.md | 4 +-
pages/notes/pyspark.md | 3 +-
pages/notes/python-deepwatch.md | 3 +-
pages/notes/reasons-to-kedro-notes.md | 3 +-
.../notes/serverless-things-to-investigate.md | 4 +-
pages/til/animal-well-keyboard.md | 4 +-
43 files changed, 990 insertions(+), 970 deletions(-)
diff --git a/pages/blog/2018-retrospective.md b/pages/blog/2018-retrospective.md
index 76b9959df5..62097d96ef 100644
--- a/pages/blog/2018-retrospective.md
+++ b/pages/blog/2018-retrospective.md
@@ -1,9 +1,10 @@
---
-templateKey: 'blog-post'
+templateKey: blog-post
title: 2018 Retrospective
date: 2019-01-05
published: true
-
+tags:
+ - goals
---
2018 was a year of many ups and downs, and learning to deal with a whole new
diff --git a/pages/blog/automate-your-deploys.md b/pages/blog/automate-your-deploys.md
index 0856c51362..a2204bd7b7 100644
--- a/pages/blog/automate-your-deploys.md
+++ b/pages/blog/automate-your-deploys.md
@@ -1,6 +1,7 @@
---
templateKey: blog-post
-tags: []
+tags:
+ - ci-cd
title: automate your deploys
date: 2020-02-07T12:08:00Z
published: false
diff --git a/pages/blog/blogging-in-2024.md b/pages/blog/blogging-in-2024.md
index 745c459705..c5648d2495 100644
--- a/pages/blog/blogging-in-2024.md
+++ b/pages/blog/blogging-in-2024.md
@@ -9,4 +9,3 @@ tags:
published: True
---
-
diff --git a/pages/blog/brainstorming-kedro-hooks.md b/pages/blog/brainstorming-kedro-hooks.md
index 0ee6517298..de775cddd5 100644
--- a/pages/blog/brainstorming-kedro-hooks.md
+++ b/pages/blog/brainstorming-kedro-hooks.md
@@ -1,10 +1,10 @@
---
templateKey: blog-post
-tags: []
+tags:
+ - kedro
title: Brainstorming Kedro Hooks
date: 2020-05-22T22:02:00.000+00:00
published: true
-
---
This post is a 🧠 brainstorming work in progress. I will likely use it as a
diff --git a/pages/blog/cmd-exe-tips.md b/pages/blog/cmd-exe-tips.md
index 7eb1b2bb73..f516cf7c45 100644
--- a/pages/blog/cmd-exe-tips.md
+++ b/pages/blog/cmd-exe-tips.md
@@ -1,11 +1,11 @@
---
templateKey: blog-post
-tags: []
+tags:
+ - cli
title: cmd.exe tips
date: 2020-01-23T15:18:45.000+00:00
published: true
description: cmd.exe tips
-
---
I spend a lot of my time at the terminal for my daily work, mostly in Linux or wsl. One big reason for using wsl over cmd.exe is the ease of walking through history that fzf provides. This week we had a windows bug in a cli and I was stuck in vanilla cmd.exe 😭
diff --git a/pages/blog/expand-one-line-links.md b/pages/blog/expand-one-line-links.md
index 47efd56467..150c1367a7 100644
--- a/pages/blog/expand-one-line-links.md
+++ b/pages/blog/expand-one-line-links.md
@@ -1,10 +1,12 @@
---
templateKey: blog-post
-tags: [webdev]
+tags:
+ - webdev
+ - blog
+ - meta
title: Expand One Line Links
date: 2020-11-18T05:00:00.000+00:00
published: true
-
---
I wanted a super simple way to cross-link blog posts that requires as little effort as possible, yet still looks good in vanilla markdown on GitHub. I have been using a snippet that puts HTML into the markdown. While this works, it's more manual/difficult for me, does not look the best, and does not read well as
diff --git a/pages/blog/find-kedro-release.md b/pages/blog/find-kedro-release.md
index b5282b3210..e02024cd93 100644
--- a/pages/blog/find-kedro-release.md
+++ b/pages/blog/find-kedro-release.md
@@ -1,14 +1,11 @@
---
templateKey: blog-post
-tags: []
-title: "\U0001F4E2 Announcing find-kedro"
+tags:
+ - kedro
+title: 📢 Announcing find-kedro
date: 2020-05-04T11:53:00Z
published: true
-description: kedro is an amazing project that allows for super-fast prototyping of
- data pipelines, yet yielding production-ready pipelines. find-kedro enhances this
- experience by adding a pytest-like node discovery eliminating the need to bubble
- up pipelines through modules.
-
+description: kedro is an amazing project that allows for super-fast prototyping of data pipelines, yet yielding production-ready pipelines. find-kedro enhances this experience by adding a pytest-like node discovery eliminating the need to bubble up pipelines through modules.
---
`find-kedro` is a small library to enhance your kedro experience. It looks through your modules to find kedro pipelines, nodes, and iterables (lists, sets, tuples) of nodes. It then assembles them into a dictionary of pipelines; each module creates a separate pipeline, with `__default__` being a combination of all pipelines. This format is compatible with the kedro `_create_pipelines` format.
diff --git a/pages/blog/fix-git-commit-author.md b/pages/blog/fix-git-commit-author.md
index 33d015d125..35428e634f 100644
--- a/pages/blog/fix-git-commit-author.md
+++ b/pages/blog/fix-git-commit-author.md
@@ -1,12 +1,11 @@
---
templateKey: blog-post
-tags: []
+tags:
+ - git
title: Fix git commit author
date: 2020-10-17T05:00:00.000+00:00
published: true
-description: "I was 20 commits into a hackoberfest PR when I suddenly realized they
- they all had my work email on them instead of my personal email \U0001F631."
-
+description: I was 20 commits into a Hacktoberfest PR when I suddenly realized they all had my work email on them instead of my personal email 😱.
---
I was 20 commits into a Hacktoberfest PR when I suddenly realized they all had my work email on them instead of my personal email 😱. This is the story of how I corrected my email address on 19 individual commits after already submitting the PR.
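The hunk above only shows the post's intro, so as a hedged sketch of one way to do this (the post's actual commands are not shown here; the emails, names, and two-commit history below are made-up stand-ins): a non-interactive `git rebase --exec` can re-run `commit --amend --reset-author` after each affected commit.

```shell
# Hypothetical throwaway repo; emails and commit count are stand-ins.
set -e
tmp=$(mktemp -d) && cd "$tmp" && git init -q -b main .
git -c user.email=work@example.com -c user.name=Waylon commit -q --allow-empty -m base
echo a > f.txt && git add f.txt
git -c user.email=work@example.com -c user.name=Waylon commit -q -m "WIP 1"
echo b >> f.txt && git add f.txt
git -c user.email=work@example.com -c user.name=Waylon commit -q -m "WIP 2"
# Rewrite the author on the last 2 commits, keeping messages intact.
git -c user.email=personal@example.com -c user.name=Waylon rebase \
  -x 'git -c user.email=personal@example.com -c user.name=Waylon commit --amend --no-edit --reset-author' \
  HEAD~2
git log --format=%ae HEAD~2..HEAD
```

Note this rewrites history, so an already-pushed PR branch would need a force push afterwards.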
diff --git a/pages/blog/gatsby-rss-feed.md b/pages/blog/gatsby-rss-feed.md
index 39d275649b..d5e433952a 100644
--- a/pages/blog/gatsby-rss-feed.md
+++ b/pages/blog/gatsby-rss-feed.md
@@ -1,12 +1,12 @@
---
templateKey: blog-post
-tags: []
+tags:
+ - webdev
title: RSS feed for your Gatsby Site
date: 2020-01-21T13:58:59Z
published: false
description: Add an rss feed to your Gatsby Site
-cover: ''
-
+cover: ""
---
Adding an rss feed to your gatsby js site is super simple.
diff --git a/pages/blog/git-diff-branches.md b/pages/blog/git-diff-branches.md
index 991c9f8e7f..5accac273b 100644
--- a/pages/blog/git-diff-branches.md
+++ b/pages/blog/git-diff-branches.md
@@ -1,14 +1,11 @@
---
templateKey: blog-post
-tags: []
+tags:
+ - git
title: Today I learned `git diff feature..main`
date: 2020-03-03T11:58:00.000+00:00
published: true
-description: Sometimes we get a little `git add . && git commit -m "WIP"` happy and
- mistakenly commit something that we just cant figure out. This is a good way to
- figure out what the heck has changed on the current branch compared to any other
- branch.
-
+description: Sometimes we get a little `git add . && git commit -m "WIP"` happy and mistakenly commit something that we just can't figure out. This is a good way to figure out what the heck has changed on the current branch compared to any other branch.
---
Today I learned how to diff between two branches.
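Only the intro survives in this hunk, so as a hedged sketch of the command in the title (the repo layout and filename below are invented for the demo):

```shell
# Hypothetical throwaway repo to demo two-dot branch diffs.
set -e
tmp=$(mktemp -d) && cd "$tmp" && git init -q -b main .
git -c user.email=w@example.com -c user.name=Waylon commit -q --allow-empty -m init
git checkout -q -b feature
echo hello > new-file.txt && git add new-file.txt
git -c user.email=w@example.com -c user.name=Waylon commit -q -m "add new-file"
# Two dots: everything that changed on feature relative to main.
git diff main..feature --name-only
```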
diff --git a/pages/blog/goals-2019.md b/pages/blog/goals-2019.md
index cd768253c6..9de82d8bd7 100644
--- a/pages/blog/goals-2019.md
+++ b/pages/blog/goals-2019.md
@@ -1,10 +1,11 @@
---
-templateKey: 'blog-post'
+templateKey: blog-post
title: 2019 goals
date: 2019-01-12
published: true
description: 2019 goals
-
+tags:
+ - goals
---
diff --git a/pages/blog/pandas-pattern.md b/pages/blog/pandas-pattern.md
--- a/pages/blog/pandas-pattern.md
+++ b/pages/blog/pandas-pattern.md
-|    | date       | item       | qty |
-|----|------------|------------|-----|
-| 0  | 2017-01-01 | paper      | 1   |
-| 1  | 2017-01-01 | pencils    | 4   |
-| 2  | 2017-01-01 | note cards | 5   |
-| 3  | 2017-01-01 | markers    | 9   |
-| 4  | 2017-01-02 | paper      | 3   |
-
-## The pattern
-
-Here I am going to group by date and item, which takes care of duplicate entries with the same timestamp. Then I select the value I want to sum on, unstack the item index into columns, and resample the data by month. I could easily use any of the [available rules](https://pandas.pydata.org/pandas-docs/stable/timeseries.html#offset-aliases). I fill any missing months with 0, since there wasn't a transaction during those months, then apply a rolling window to get the annual sum. I find that this grounds the numbers in values my stakeholders are used to seeing on a regular basis and reduces the need for them to recalculate in their head. Finally, I drop the nulls created by the rolling window in the first 11 rows.
-
-```python
-plot_data = (data
- .groupby(['date', 'item'])
- .sum()
- ['qty']
- .unstack()
- .resample('m')
- .sum()
- .fillna(0)
- .rolling(12)
- .sum()
- .dropna()
- )
-plot_data.head()
-```
-
-| date       | markers | note cards | paper  | pencils |
-|------------|---------|------------|--------|---------|
-| 2017-12-31 | 1543.0  | 1739.0     | 1613.0 | 1657.0  |
-| 2018-01-31 | 1572.0  | 1744.0     | 1635.0 | 1635.0  |
-| 2018-02-28 | 1563.0  | 1717.0     | 1645.0 | 1645.0  |
-| 2018-03-31 | 1596.0  | 1703.0     | 1629.0 | 1600.0  |
-| 2018-04-30 | 1557.0  | 1693.0     | 1648.0 | 1581.0  |
-
-```python
-plot_data.plot(title='Rolling annual sum of Categorical Random Data');
-```
-
-## For the Visual Learners
-
-### Groupby
-
-```python
-plot_data = (data
- .groupby(['date', 'item'])
- .sum()
- )
-plot_data.head()
-```
-
-| date       | item       | qty |
-|------------|------------|-----|
-| 2017-01-01 | markers    | 9   |
-|            | note cards | 5   |
-|            | paper      | 1   |
-|            | pencils    | 4   |
-| 2017-01-02 | markers    | 4   |
-
-### Select Values
-
-In this case I chose to do this to avoid working with the multiple levels in the columns that would be created in the unstack() step.
-
-```python
-plot_data = plot_data['qty']
-
-plot_data.head()
-```
-
- date item
- 2017-01-01 markers 9
- note cards 5
- paper 1
- pencils 4
- 2017-01-02 markers 4
- Name: qty, dtype: int32
-
-### unstack
-
-Transform the last level of the index ('item') into columns.
-
-```python
-plot_data = plot_data.unstack()
-
-plot_data.head()
-```
-
-| date       | markers | note cards | paper | pencils |
-|------------|---------|------------|-------|---------|
-| 2017-01-01 | 9       | 5          | 1     | 4       |
-| 2017-01-02 | 4       | 2          | 3     | 7       |
-| 2017-01-03 | 9       | 5          | 2     | 3       |
-| 2017-01-04 | 2       | 0          | 0     | 5       |
-| 2017-01-05 | 0       | 1          | 6     | 2       |
-
-### resample
-
-This step is important for irregular data in order to get the data into regular intervals.
-
-```python
-plot_data = plot_data.resample('m').sum()
-
-plot_data.head()
-```
-
-| date       | markers | note cards | paper | pencils |
-|------------|---------|------------|-------|---------|
-| 2017-01-31 | 145     | 128        | 117   | 146     |
-| 2017-02-28 | 136     | 140        | 133   | 135     |
-| 2017-03-31 | 112     | 145        | 125   | 163     |
-| 2017-04-30 | 143     | 148        | 112   | 147     |
-| 2017-05-31 | 86      | 134        | 139   | 141     |
-
-### rolling
-
-I like to use rolling because it gets the data into annual numbers and reduces noise. I have found that most of my datasets have patterns and trends that span more than a year. This is just due to the industry that I am in. Play with the resample and rolling rules to fit the needs of your own data.
-
-```python
-plot_data = plot_data.rolling(12).sum()
-
-plot_data.head(20)
-```
-
-| date       | markers | note cards | paper  | pencils |
-|------------|---------|------------|--------|---------|
-| 2017-01-31 | NaN     | NaN        | NaN    | NaN     |
-| 2017-02-28 | NaN     | NaN        | NaN    | NaN     |
-| 2017-03-31 | NaN     | NaN        | NaN    | NaN     |
-| 2017-04-30 | NaN     | NaN        | NaN    | NaN     |
-| 2017-05-31 | NaN     | NaN        | NaN    | NaN     |
-| 2017-06-30 | NaN     | NaN        | NaN    | NaN     |
-| 2017-07-31 | NaN     | NaN        | NaN    | NaN     |
-| 2017-08-31 | NaN     | NaN        | NaN    | NaN     |
-| 2017-09-30 | NaN     | NaN        | NaN    | NaN     |
-| 2017-10-31 | NaN     | NaN        | NaN    | NaN     |
-| 2017-11-30 | NaN     | NaN        | NaN    | NaN     |
-| 2017-12-31 | 1543.0  | 1739.0     | 1613.0 | 1657.0  |
-| 2018-01-31 | 1572.0  | 1744.0     | 1635.0 | 1635.0  |
-| 2018-02-28 | 1563.0  | 1717.0     | 1645.0 | 1645.0  |
-| 2018-03-31 | 1596.0  | 1703.0     | 1629.0 | 1600.0  |
-| 2018-04-30 | 1557.0  | 1693.0     | 1648.0 | 1581.0  |
-| 2018-05-31 | 1624.0  | 1674.0     | 1632.0 | 1592.0  |
-| 2018-06-30 | 1582.0  | 1645.0     | 1657.0 | 1593.0  |
-| 2018-07-31 | 1662.0  | 1654.0     | 1680.0 | 1613.0  |
-| 2018-08-31 | 1654.0  | 1617.0     | 1650.0 | 1616.0  |
-
-### dropna
-
-Get rid of the first 11 null rows created by the rolling window.
-
-```python
-plot_data = plot_data.dropna()
-
-plot_data.head(10)
-```
-
-| date       | markers | note cards | paper  | pencils |
-|------------|---------|------------|--------|---------|
-| 2017-12-31 | 1543.0  | 1739.0     | 1613.0 | 1657.0  |
-| 2018-01-31 | 1572.0  | 1744.0     | 1635.0 | 1635.0  |
-| 2018-02-28 | 1563.0  | 1717.0     | 1645.0 | 1645.0  |
-| 2018-03-31 | 1596.0  | 1703.0     | 1629.0 | 1600.0  |
-| 2018-04-30 | 1557.0  | 1693.0     | 1648.0 | 1581.0  |
-| 2018-05-31 | 1624.0  | 1674.0     | 1632.0 | 1592.0  |
-| 2018-06-30 | 1582.0  | 1645.0     | 1657.0 | 1593.0  |
-| 2018-07-31 | 1662.0  | 1654.0     | 1680.0 | 1613.0  |
-| 2018-08-31 | 1654.0  | 1617.0     | 1650.0 | 1616.0  |
-| 2018-09-30 | 1669.0  | 1648.0     | 1638.0 | 1634.0  |
+---
+templateKey: blog-post
+title: My favorite pandas pattern
+date: 2018-03-01
+published: false
+description:
+tags:
+ - python
+---
+
+# My favorite pandas pattern
+
+I work with a lot of transactional timeseries data that includes categories. I often want to create timeseries plots with each category as its own line. This is the method that I use almost daily to achieve this result. Typically the data that I am working with changes very slowly, and trends happen over years, not days or weeks. Plotting daily/weekly data tends to be noisy and hides the trend. I use this pattern because it works well with my data and is easy to explain to my stakeholders.
+
+```python
+import pandas as pd
+import numpy as np
+%matplotlib inline
+```
+
+## Let's Fake Some Data
+
+Here I am trying to simulate a subset of a large transactional data set. This could be something like sales data, production data, or hourly billing: anything that has a date, category, and value. Since we generated this data, we know that it is clean. I am still going to assume that it contains some nulls and an irregular date range.
+
+```python
+n = 365*5
+cols = {'level_0': 'date',
+ 'level_1': 'item',
+ 0: 'qty', }
+data = (pd.DataFrame(np.random.randint(0, 10, size=(n, 4)),
+ columns=['paper', 'pencils', 'note cards', 'markers'],
+ index=pd.date_range('1/1/2017', periods=n, freq='d'),
+ )
+ .stack()
+ .to_frame()
+ .reset_index()
+ .rename(columns=cols))
+data.head()
+```
+
+|    | date       | item       | qty |
+|----|------------|------------|-----|
+| 0  | 2017-01-01 | paper      | 1   |
+| 1  | 2017-01-01 | pencils    | 4   |
+| 2  | 2017-01-01 | note cards | 5   |
+| 3  | 2017-01-01 | markers    | 9   |
+| 4  | 2017-01-02 | paper      | 3   |
+
+## The pattern
+
+Here I am going to group by date and item, which takes care of duplicate entries with the same timestamp. Then I select the value I want to sum on, unstack the item index into columns, and resample the data by month. I could easily use any of the [available rules](https://pandas.pydata.org/pandas-docs/stable/timeseries.html#offset-aliases). I fill any missing months with 0, since there wasn't a transaction during those months, then apply a rolling window to get the annual sum. I find that this grounds the numbers in values my stakeholders are used to seeing on a regular basis and reduces the need for them to recalculate in their head. Finally, I drop the nulls created by the rolling window in the first 11 rows.
+
+```python
+plot_data = (data
+ .groupby(['date', 'item'])
+ .sum()
+ ['qty']
+ .unstack()
+ .resample('m')
+ .sum()
+ .fillna(0)
+ .rolling(12)
+ .sum()
+ .dropna()
+ )
+plot_data.head()
+```
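As a small aside on the "available rules" link above, here is a minimal sketch of swapping the monthly rule for other offset aliases. The one-unit-per-day frame below is a synthetic stand-in, not the post's dataset:

```python
import numpy as np
import pandas as pd

# Synthetic stand-in: one unit of qty every day for two non-leap years.
idx = pd.date_range("2017-01-01", periods=730, freq="D")
df = pd.DataFrame({"qty": np.ones(len(idx))}, index=idx)

# The same resample-then-rolling idea with two different rule pairs.
monthly = df.resample("M").sum().rolling(12).sum().dropna()  # 12-month window
weekly = df.resample("W").sum().rolling(4).sum().dropna()    # 4-week window

# Every 12-month window over 2017-2018 covers exactly 365 days,
# so each rolling annual sum here is 365.0.
```

Only the resample rule and the rolling window length change; the shape of the pipeline stays the same.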
+
+
+| date       | markers | note cards | paper  | pencils |
+|------------|---------|------------|--------|---------|
+| 2017-12-31 | 1543.0  | 1739.0     | 1613.0 | 1657.0  |
+| 2018-01-31 | 1572.0  | 1744.0     | 1635.0 | 1635.0  |
+| 2018-02-28 | 1563.0  | 1717.0     | 1645.0 | 1645.0  |
+| 2018-03-31 | 1596.0  | 1703.0     | 1629.0 | 1600.0  |
+| 2018-04-30 | 1557.0  | 1693.0     | 1648.0 | 1581.0  |
+
+```python
+plot_data.plot(title='Rolling annual sum of Categorical Random Data');
+```
+
+## For the Visual Learners
+
+### Groupby
+
+```python
+plot_data = (data
+ .groupby(['date', 'item'])
+ .sum()
+ )
+plot_data.head()
+```
+
+| date       | item       | qty |
+|------------|------------|-----|
+| 2017-01-01 | markers    | 9   |
+|            | note cards | 5   |
+|            | paper      | 1   |
+|            | pencils    | 4   |
+| 2017-01-02 | markers    | 4   |
+
+### Select Values
+
+In this case I chose to do this to avoid working with the multiple levels in the columns that would be created in the unstack() step.
+
+```python
+plot_data = plot_data['qty']
+
+plot_data.head()
+```
+
+ date item
+ 2017-01-01 markers 9
+ note cards 5
+ paper 1
+ pencils 4
+ 2017-01-02 markers 4
+ Name: qty, dtype: int32
+
+### unstack
+
+Transform the last level of the index ('item') into columns.
+
+```python
+plot_data = plot_data.unstack()
+
+plot_data.head()
+```
+
+| date       | markers | note cards | paper | pencils |
+|------------|---------|------------|-------|---------|
+| 2017-01-01 | 9       | 5          | 1     | 4       |
+| 2017-01-02 | 4       | 2          | 3     | 7       |
+| 2017-01-03 | 9       | 5          | 2     | 3       |
+| 2017-01-04 | 2       | 0          | 0     | 5       |
+| 2017-01-05 | 0       | 1          | 6     | 2       |
+
+### resample
+
+This step is important for irregular data in order to get the data into regular intervals.
+
+```python
+plot_data = plot_data.resample('m').sum()
+
+plot_data.head()
+```
+
+| date       | markers | note cards | paper | pencils |
+|------------|---------|------------|-------|---------|
+| 2017-01-31 | 145     | 128        | 117   | 146     |
+| 2017-02-28 | 136     | 140        | 133   | 135     |
+| 2017-03-31 | 112     | 145        | 125   | 163     |
+| 2017-04-30 | 143     | 148        | 112   | 147     |
+| 2017-05-31 | 86      | 134        | 139   | 141     |
+
+### rolling
+
+I like to use rolling because it gets the data into annual numbers and reduces noise. I have found that most of my datasets have patterns and trends that span more than a year. This is just due to the industry that I am in. Play with the resample and rolling rules to fit the needs of your own data.
+
+```python
+plot_data = plot_data.rolling(12).sum()
+
+plot_data.head(20)
+```
+
+| date       | markers | note cards | paper  | pencils |
+|------------|---------|------------|--------|---------|
+| 2017-01-31 | NaN     | NaN        | NaN    | NaN     |
+| 2017-02-28 | NaN     | NaN        | NaN    | NaN     |
+| 2017-03-31 | NaN     | NaN        | NaN    | NaN     |
+| 2017-04-30 | NaN     | NaN        | NaN    | NaN     |
+| 2017-05-31 | NaN     | NaN        | NaN    | NaN     |
+| 2017-06-30 | NaN     | NaN        | NaN    | NaN     |
+| 2017-07-31 | NaN     | NaN        | NaN    | NaN     |
+| 2017-08-31 | NaN     | NaN        | NaN    | NaN     |
+| 2017-09-30 | NaN     | NaN        | NaN    | NaN     |
+| 2017-10-31 | NaN     | NaN        | NaN    | NaN     |
+| 2017-11-30 | NaN     | NaN        | NaN    | NaN     |
+| 2017-12-31 | 1543.0  | 1739.0     | 1613.0 | 1657.0  |
+| 2018-01-31 | 1572.0  | 1744.0     | 1635.0 | 1635.0  |
+| 2018-02-28 | 1563.0  | 1717.0     | 1645.0 | 1645.0  |
+| 2018-03-31 | 1596.0  | 1703.0     | 1629.0 | 1600.0  |
+| 2018-04-30 | 1557.0  | 1693.0     | 1648.0 | 1581.0  |
+| 2018-05-31 | 1624.0  | 1674.0     | 1632.0 | 1592.0  |
+| 2018-06-30 | 1582.0  | 1645.0     | 1657.0 | 1593.0  |
+| 2018-07-31 | 1662.0  | 1654.0     | 1680.0 | 1613.0  |
+| 2018-08-31 | 1654.0  | 1617.0     | 1650.0 | 1616.0  |
+
+### dropna
+
+Get rid of the first 11 null rows created by the rolling window.
+
+```python
+plot_data = plot_data.dropna()
+
+plot_data.head(10)
+```
+
+| date       | markers | note cards | paper  | pencils |
+|------------|---------|------------|--------|---------|
+| 2017-12-31 | 1543.0  | 1739.0     | 1613.0 | 1657.0  |
+| 2018-01-31 | 1572.0  | 1744.0     | 1635.0 | 1635.0  |
+| 2018-02-28 | 1563.0  | 1717.0     | 1645.0 | 1645.0  |
+| 2018-03-31 | 1596.0  | 1703.0     | 1629.0 | 1600.0  |
+| 2018-04-30 | 1557.0  | 1693.0     | 1648.0 | 1581.0  |
+| 2018-05-31 | 1624.0  | 1674.0     | 1632.0 | 1592.0  |
+| 2018-06-30 | 1582.0  | 1645.0     | 1657.0 | 1593.0  |
+| 2018-07-31 | 1662.0  | 1654.0     | 1680.0 | 1613.0  |
+| 2018-08-31 | 1654.0  | 1617.0     | 1650.0 | 1616.0  |
+| 2018-09-30 | 1669.0  | 1648.0     | 1638.0 | 1634.0  |
diff --git a/pages/blog/passion.md b/pages/blog/passion.md
index 2c2d9098cf..d8dfd4fd04 100644
--- a/pages/blog/passion.md
+++ b/pages/blog/passion.md
@@ -1,19 +1,22 @@
----
-templateKey: 'blog-post'
-title: Follow Your Passion
-date: 2019-01-01
-published: false
-description: none
-cover: "./flex.png"
----
-
-## Follow Your Passion
-
-_my journey into data science_
-
-In January 2018 I started work as a full time data scientist turning my passion into a career. It is something that I didn't see myself doing 5 years ago, but is something that I love to do. It combines my love of data, visualization, story telling, software development, and writing code. Most of all it allows me to work in a space that promotes learning and creativity. As a mechanical engineer for a company that has been building equipment for nearly a century the mechanical engineering is very well established I felt that there was not a lot of room for creativity.
-
-
-## Find Your Role
-
-When I first started as a full time mechanical engineer
+---
+templateKey: blog-post
+title: Follow Your Passion
+date: 2019-01-01
+published: false
+description: none
+cover: ./flex.png
+tags:
+ - soft
+ - catalytic
+---
+
+## Follow Your Passion
+
+_my journey into data science_
+
+In January 2018 I started work as a full-time data scientist, turning my passion into a career. It is something that I didn't see myself doing 5 years ago, but it is something that I love to do. It combines my love of data, visualization, storytelling, software development, and writing code. Most of all, it allows me to work in a space that promotes learning and creativity. As a mechanical engineer for a company that has been building equipment for nearly a century, where the mechanical engineering is very well established, I felt that there was not a lot of room for creativity.
+
+
+## Find Your Role
+
+When I first started as a full time mechanical engineer
diff --git a/pages/blog/practice-your-craft.md b/pages/blog/practice-your-craft.md
index b35d9e92c2..d4c5339e64 100644
--- a/pages/blog/practice-your-craft.md
+++ b/pages/blog/practice-your-craft.md
@@ -1,10 +1,11 @@
---
templateKey: blog-post
-tags: []
+tags:
+ - catalytic
+ - soft
title: Practice your craft
date: 2020-03-04T06:00:00.000+00:00
published: false
-
---
## Show up For Practice
diff --git a/pages/blog/productive-one-on-one.md b/pages/blog/productive-one-on-one.md
index 874afb1664..1f610cb14a 100644
--- a/pages/blog/productive-one-on-one.md
+++ b/pages/blog/productive-one-on-one.md
@@ -1,6 +1,8 @@
---
templateKey: blog-post
-tags: []
+tags:
+ - catalytic
+ - soft
title: Productive one on one
date: 2020-02-24T12:53:00Z
published: false
diff --git a/pages/blog/python-tips.md b/pages/blog/python-tips.md
index fa7bcd8dda..a5a98fa616 100644
--- a/pages/blog/python-tips.md
+++ b/pages/blog/python-tips.md
@@ -1,10 +1,11 @@
---
-templateKey: 'blog-post'
+templateKey: blog-post
title: Python Tips
date: 2019-01-21
published: false
description:
-
+tags:
+ - python
---
## Dictionaries
diff --git a/pages/blog/should-i-switch-to-zeit-now.md b/pages/blog/should-i-switch-to-zeit-now.md
index 1b47be929f..5e5c5e74da 100644
--- a/pages/blog/should-i-switch-to-zeit-now.md
+++ b/pages/blog/should-i-switch-to-zeit-now.md
@@ -1,12 +1,11 @@
---
templateKey: blog-post
-tags: []
+tags:
+ - webdev
title: Should I switch to Zeit Now
date: 2020-02-06T22:38:00Z
published: true
-description: Should I switch to Zeit Now. Netlify build times are starting to creep
- in.
-
+description: Should I switch to Zeit Now. Netlify build times are starting to creep in.
---
## Netlify
diff --git a/pages/blog/stories_10-10-2020_10-21-2020.md b/pages/blog/stories_10-10-2020_10-21-2020.md
index 77af277f9b..c9196370ec 100644
--- a/pages/blog/stories_10-10-2020_10-21-2020.md
+++ b/pages/blog/stories_10-10-2020_10-21-2020.md
@@ -1,10 +1,10 @@
---
templateKey: blog-post
-tags: []
+tags:
+ - kedro
title: A brain dump of stories
date: 2020-10-21T05:00:00Z
published: true
-
---
I started making stories as kind of a brain dump a few times per day and
diff --git a/pages/blog/strip-trailing-whitespace.md b/pages/blog/strip-trailing-whitespace.md
index 15be2a1009..444f08822e 100644
--- a/pages/blog/strip-trailing-whitespace.md
+++ b/pages/blog/strip-trailing-whitespace.md
@@ -1,10 +1,12 @@
---
templateKey: blog-post
-tags: []
+tags:
+ - python
+ - git
+ - pre-commit
title: Strip Trailing Whitespace from Git projects
date: 2020-09-30T05:00:00Z
published: true
-
---
A common linting error thrown by various linters is for trailing whitespace. I
@@ -34,4 +36,4 @@ git grep -I --name-only -z -e '' | xargs -0 sed -i -e 's/[ \t]\+\(\r\?\)$/\1/'
-read more about pre-commit [here](https://waylonwalker.com/pre-commit-is-awesome).
+read more about how [[pre-commit-is-awesome]]
diff --git a/pages/blog/thank-you.md b/pages/blog/thank-you.md
index d003422a28..b2b553497c 100644
--- a/pages/blog/thank-you.md
+++ b/pages/blog/thank-you.md
@@ -1,11 +1,11 @@
---
templateKey: blog-post
-tags: []
+tags:
+ - meta
title: Thanks For Subscribing
date: 2020-05-20T10:00:00Z
published: false
description: You're awesome! Thanks for subscribing to my newsletter.
-
---
diff --git a/pages/blog/vim-notes.md b/pages/blog/vim-notes.md
index 4ea7dbcc66..3894ebba2f 100644
--- a/pages/blog/vim-notes.md
+++ b/pages/blog/vim-notes.md
@@ -1,51 +1,52 @@
----
-templateKey: 'blog-post'
-title: Vim Notes
-date: 2018-02-01
-published: false
-
----
-
-# vim notes
-
-## nvim lua
-[norcalli/neovim-plugin](https://github.com/norcalli/neovim-plugin)
-
-## nvim lsp
-
-[python-lsp/python-lsp-server](https://github.com/python-lsp/python-lsp-server)
-
-## Using c to change text
-
-I have gone quite awhile without using ```c``` and instead using ```d```. The reason that I started using ```c``` is because it automatically places you into insert mode. This not only saves me one keystroke for commands such as ```diwi``` is now ```ciw```, but it also works with the repeat ```.``` command!!! This is huge. When refactoring a document I had been creating a macro to change one word to another, using ```c``` instead of ```d``` allows the use of the ```.``` rather than needing to create a macro.
-
-## Case for vim
-
-**Sublime/VSCode cannot**
-
-* edit a macro register
-* register
-* quickfix
-* gF
-
-## autocomplete
-
- repeats previously typed text
-
- 1. Whole lines |i CTRL-X CTRL-L|
- 2. keywords in the current file |i CTRL-X CTRL-N|
- 3. keywords in 'dictionary' |i CTRL-X CTRL-K|
- 4. keywords in 'thesaurus', thesaurus-style |i CTRL-X CTRL-T|
- 5. keywords in the current and included files |i CTRL-X CTRL-I|
- 6. tags |i CTRL-X CTRL-]|
- 7. file names |i CTRL-X CTRL-F|
- 8. definitions or macros |i CTRL-X CTRL-D|
- 9. Vim command-line |i CTRL-X CTRL-V|
- 10. User defined completion |i CTRL-X CTRL-U|
- 11. omni completion |i CTRL-X CTRL-O|
- 12. Spelling suggestions |i CTRL-X s|
- 13. keywords in 'complete' |i CTRL-N|
-
-## z-commands
-
-```zn``` Fold none: reset 'foldenable'. All folds will be open.
+---
+templateKey: blog-post
+title: Vim Notes
+date: 2018-02-01
+published: true
+tags:
+ - vim
+---
+
+# vim notes
+
+## nvim lua
+[norcalli/neovim-plugin](https://github.com/norcalli/neovim-plugin)
+
+## nvim lsp
+
+[python-lsp/python-lsp-server](https://github.com/python-lsp/python-lsp-server)
+
+## Using c to change text
+
+I have gone quite a while without using `c`, instead using `d`. The reason that I started using `c` is that it automatically places you into insert mode. This not only saves me one keystroke for commands (`diwi` becomes `ciw`), but it also works with the repeat `.` command!!! This is huge. When refactoring a document I had been creating a macro to change one word to another; using `c` instead of `d` allows the use of `.` rather than needing to create a macro.
+
+## Case for vim
+
+**Sublime/VSCode cannot**
+
+* edit a macro register
+* register
+* quickfix
+* gF
+
+## autocomplete
+
+ repeats previously typed text
+
+ 1. Whole lines |i CTRL-X CTRL-L|
+ 2. keywords in the current file |i CTRL-X CTRL-N|
+ 3. keywords in 'dictionary' |i CTRL-X CTRL-K|
+ 4. keywords in 'thesaurus', thesaurus-style |i CTRL-X CTRL-T|
+ 5. keywords in the current and included files |i CTRL-X CTRL-I|
+ 6. tags |i CTRL-X CTRL-]|
+ 7. file names |i CTRL-X CTRL-F|
+ 8. definitions or macros |i CTRL-X CTRL-D|
+ 9. Vim command-line |i CTRL-X CTRL-V|
+ 10. User defined completion |i CTRL-X CTRL-U|
+ 11. omni completion |i CTRL-X CTRL-O|
+ 12. Spelling suggestions |i CTRL-X s|
+ 13. keywords in 'complete' |i CTRL-N|
+
+## z-commands
+
+`zn` Fold none: reset 'foldenable'. All folds will be open.
diff --git a/pages/notes/adding-google-fonts-to-a-gatsbyjs-site.md b/pages/notes/adding-google-fonts-to-a-gatsbyjs-site.md
index a6c3d994bd..60fee7394f 100644
--- a/pages/notes/adding-google-fonts-to-a-gatsbyjs-site.md
+++ b/pages/notes/adding-google-fonts-to-a-gatsbyjs-site.md
@@ -1,12 +1,12 @@
---
templateKey: blog-post
-tags: []
+tags:
+ - webdev
title: Adding google fonts to a gatsbyjs site
date: 2020-05-17T05:00:00Z
published: true
description: https://stackoverflow.com/questions/47488440/how-do-i-add-google-fonts-to-a-gatsby-site
-cover: ''
-
+cover: ""
---
diff --git a/pages/notes/debugging-python.md b/pages/notes/debugging-python.md
index 8c97bb629b..faecb2fa02 100644
--- a/pages/notes/debugging-python.md
+++ b/pages/notes/debugging-python.md
@@ -1,10 +1,10 @@
---
templateKey: blog-post
-tags: []
+tags:
+ - python
title: Debugging Python
date: 2019-10-01T05:00:00Z
published: true
description: Debugging Python
-
---
## Using pdb
diff --git a/pages/notes/gatsby-scripts-with-onload.md b/pages/notes/gatsby-scripts-with-onload.md
index 91adb3e3c0..878e2aa3d9 100644
--- a/pages/notes/gatsby-scripts-with-onload.md
+++ b/pages/notes/gatsby-scripts-with-onload.md
@@ -1,10 +1,10 @@
---
templateKey: blog-post
-tags: []
+tags:
+ - webdev
title: Gatsby Scripts with onload
date: 2020-05-22T05:00:00Z
status: published
-
---
This might be useful
diff --git a/pages/notes/kedro-basics.md b/pages/notes/kedro-basics.md
index 12dad13b2a..7e6f9c70ad 100644
--- a/pages/notes/kedro-basics.md
+++ b/pages/notes/kedro-basics.md
@@ -1,12 +1,11 @@
---
templateKey: blog-post
-tags: []
+tags:
+ - kedro
title: Kedro Basics
date: 2020-08-08T05:00:00Z
published: false
-description:
- In my upcoming free kedro course, you can learn how to start building
- pipelines in 5 days.
+description: In my upcoming free kedro course, you can learn how to start building pipelines in 5 days.
cover: ""
---
diff --git a/pages/notes/kedro-catalog.md b/pages/notes/kedro-catalog.md
index eddadfb15f..c8e0b619a9 100644
--- a/pages/notes/kedro-catalog.md
+++ b/pages/notes/kedro-catalog.md
@@ -1,12 +1,13 @@
---
templateKey: blog-post
-tags: []
+tags:
+ - python
+ - kedro
title: Kedro Catalog
date: 2020-07-24T05:00:00Z
published: true
description: I am exploring a kedro catalog meta data hook
-cover: ''
-
+cover: ""
---
I am exploring a kedro catalog meta data hook, these are some notes about what I am thinking.
diff --git a/pages/notes/kedro-preflight.md b/pages/notes/kedro-preflight.md
index 71fc1a72ec..d7d3e697ca 100644
--- a/pages/notes/kedro-preflight.md
+++ b/pages/notes/kedro-preflight.md
@@ -1,6 +1,8 @@
---
templateKey: blog-post
-tags: []
+tags:
+ - kedro
+ - python
title: 📝 Kedro Preflight Notes
date: 2020-05-09T15:01:00Z
published: true
diff --git a/pages/notes/maintianing-multiple-git-remotes.md b/pages/notes/maintianing-multiple-git-remotes.md
index e907318696..b39ece109c 100644
--- a/pages/notes/maintianing-multiple-git-remotes.md
+++ b/pages/notes/maintianing-multiple-git-remotes.md
@@ -1,6 +1,7 @@
---
templateKey: blog-post
-tags: []
+tags:
+ - git
title: Maintianing multiple git remotes
date: 2020-05-07T11:56:00Z
published: true
diff --git a/pages/notes/new-machine-tpio.md b/pages/notes/new-machine-tpio.md
index f7079a57a9..458c2454cb 100644
--- a/pages/notes/new-machine-tpio.md
+++ b/pages/notes/new-machine-tpio.md
@@ -1,12 +1,11 @@
---
templateKey: blog-post
-tags: []
+tags:
+ - linux
title: New Machine for developing Tests with TestProject.io
date: 2020-07-25T05:00:00.000+00:00
published: true
-description: Today I setup a new machine on Digital Ocean to use with
- TestProject.io, Here are my installation notes.
-
+description: Today I set up a new machine on Digital Ocean to use with TestProject.io; here are my installation notes.
---
Today I setup a new machine on Digital Ocean to use with TestProject.io, Here are my installation notes.
diff --git a/pages/notes/packages-to-investigate.md b/pages/notes/packages-to-investigate.md
index be1ad83e7f..75699a51b3 100644
--- a/pages/notes/packages-to-investigate.md
+++ b/pages/notes/packages-to-investigate.md
@@ -1,10 +1,10 @@
---
templateKey: blog-post
-tags: []
+tags:
+ - python
title: 📝 Packages to Investigate Notes
date: 2019-10-14T05:00:00.000+00:00
published: true
-
---
* jmespath
* Tabnine
diff --git a/pages/notes/pyspark.md b/pages/notes/pyspark.md
index cd943ba3de..8c1b7f5941 100644
--- a/pages/notes/pyspark.md
+++ b/pages/notes/pyspark.md
@@ -1,6 +1,7 @@
---
templateKey: blog-post
-tags: []
+tags:
+ - python
title: Pyspark
date: 2019-09-22T05:00:00Z
published: false
diff --git a/pages/notes/python-deepwatch.md b/pages/notes/python-deepwatch.md
index d54dbfd371..42b2a73bd6 100644
--- a/pages/notes/python-deepwatch.md
+++ b/pages/notes/python-deepwatch.md
@@ -1,6 +1,7 @@
---
templateKey: blog-post
-tags: []
+tags:
+ - python
title: python-deepwatch
date: 2020-04-27T05:00:00Z
published: true
diff --git a/pages/notes/reasons-to-kedro-notes.md b/pages/notes/reasons-to-kedro-notes.md
index b53d6882a3..a6046262d4 100644
--- a/pages/notes/reasons-to-kedro-notes.md
+++ b/pages/notes/reasons-to-kedro-notes.md
@@ -1,6 +1,7 @@
---
templateKey: blog-post
-tags: []
+tags:
+ - kedro
title: Reasons to Kedro
date: 2020-10-31T05:00:00.000+00:00
published: true
diff --git a/pages/notes/serverless-things-to-investigate.md b/pages/notes/serverless-things-to-investigate.md
index f266cc5399..0dde780db9 100644
--- a/pages/notes/serverless-things-to-investigate.md
+++ b/pages/notes/serverless-things-to-investigate.md
@@ -1,6 +1,8 @@
---
templateKey: blog-post
-tags: []
+tags:
+ - webdev
+ - pre-commit
title: Serverless things to investigate
date: 2020-02-10T15:00:00Z
published: true
diff --git a/pages/til/animal-well-keyboard.md b/pages/til/animal-well-keyboard.md
index eb198c1ffd..a7975220aa 100644
--- a/pages/til/animal-well-keyboard.md
+++ b/pages/til/animal-well-keyboard.md
@@ -4,8 +4,8 @@ templateKey: til
title: animal well keyboard
published: true
tags:
- - animal well
-
+ - animal-well
+ - game
---
Animal well does not let you remap keys, and really doesn't even inform you