The default keybinding for copy-mode, <prefix>-[, is so awkward for me to
hit that I end up not using it at all. I was on a call with my buddy Nic
this week and saw him fluidly jump into copy-mode without any effort, so I
had to ask him for his keybinding, and it just made sense. Enter, that’s
it. So I have added this to my ~/.tmux.conf along with a binding for
alt-enter, and have found myself using copy-mode way more so far.
Setting copy-mode to enter # [1]
To do this I just popped open my ~/.tmux.conf and added the following.
Now I can get to copy-mode with <prefix>-Enter, which is control-b Enter, or with alt-enter.
bind Enter copy-mode
bind -n M-Enter copy-mode
More on copy-mode # [2]
I have a full video on copy-mode you can find here.
tmux copy-mode [3]
References:
[1]: #setting-copy-mode-to-enter
[2]: #more-on-copy-mode
[3]: /tmux-copy-mode/
In python, a string is a string until you add special characters.
Browsing twitter this morning I came across this tweet, which showed that
you can use is across two strings as long as they do not contain special
characters.
https://twitter.com/bascodes/status/1492147596688871424
I popped open ipython to play with this, and could confirm that on 3.9.7
short strings I typed in worked as expected.
waylonwalker ↪main v3.9.7 ipython
❯ a = "asdf"
waylonwalker ↪main v3.9.7 ipython
❯ b = "asdf"
waylonwalker ↪main v3.9.7 ipython
❯ a is b
True
Using the upper() method on these strings does break this down, since
upper() returns new string objects that are not interned.
waylonwalker ↪main v3.9.7 ipython
❯ a.upper() is b.upper()
False
waylonwalker ↪main v3.9.7 ipython
❯ a = "ASDF"
waylonwalker ↪main v3.9.7 ipython
❯ b = "ASDF"
waylonwalker ↪main v3.9.7 ipython
❯ a is b
True
You can also see this in the id of the objects, which is the memory
address in CPython.
waylonwalker ↪main v3.9.7 ipython
❯ id(a)
140717359289568
waylonwalker ↪main v3.9.7 ipython
❯ id(b)
140717359289568
waylonwalker ↪main v3.9.7 ipython
❯ id(a.upper())
140717359581824
waylonwalker ↪main v3.9.7 ipython
❯ id(b.upper())
140717360337824
Finally just as the post shows if...
One thing about moving to a tiling window manager like awesome wm or i3 is
that they are so lightweight they all ship without things like a bluetooth
GUI out of the box, and you generally bring your own. Today I just needed
to connect a new set of headphones, so I decided to give the bluetoothctl
cli a try. It seems to come with Ubuntu; I don’t think I did anything to
get it.
bluetoothctl
Running bluetoothctl pops you into a repl/shell, much like bash, python, or ipython.
From here you can execute bluetoothctl commands.
Here is what I had to do to connect my headphones.
# list out the commands available
help
# scan for new devices and stop when you see your device show up
scan on
scan off
# list devices
devices
paired-devices
# pair the device
pair XX:XX:XX:XX:XX:XX
# now your device should show up in the paired list
paired-devices
# connect the device
connect XX:XX:XX:XX:XX:XX
help # [1]
Here is the output of the help menu on my machine; it seems pretty
straightforward to block and remove devices from here.
Note: ctrl refers to the bluetooth controller on the machine you are on,
and dev refers to a device id.
Menu main:
Available commands:
-------------------
advertise A...
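Blocking or removing a device follows the same pattern as pairing. A quick
sketch of the commands I would reach for (the MAC address is a placeholder):
# stop a device from connecting or pairing again
block XX:XX:XX:XX:XX:XX
# unpair and forget a device entirely
remove XX:XX:XX:XX:XX:XX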
I often run shell commands from python with Popen, but I don’t set up
error handling for these subprocesses nearly often enough. It’s not too
hard, but it can be a bit awkward if you don’t do it regularly.
Using Popen # [1]
import subprocess
from subprocess import Popen
# this will run the shell command `cat me` and capture stdout and stderr
proc = Popen(["cat", "me"], stdout=subprocess.PIPE, stderr=subprocess.PIPE)
# this will wait for the process to finish.
proc.wait()
reading from stderr # [2]
To get the stderr we must grab it from the proc, read it, and decode the
bytestring. Note that the stderr pipe can only be read once, so if you want
to do more than just read it you will need to store the result.
proc.stderr.read().decode()
Better Exception # [3]
Now that we can read the stderr we can raise a better error for the user,
so they can see what to do to resolve the issue rather than getting a blind
failure.
err_message = proc.stderr.read().decode()
if proc.returncode != 0:
    # the process was not successful
    if "No such file" in err_message:
        raise FileNotFoundError('No such file "me"')
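Putting the pieces together, here is a minimal end-to-end sketch; the final
RuntimeError fallback is my own addition for anything that is not a missing
file.
import subprocess
from subprocess import Popen

# run `cat me` and capture both stdout and stderr
proc = Popen(["cat", "me"], stdout=subprocess.PIPE, stderr=subprocess.PIPE)
proc.wait()

# the stderr pipe can only be read once, so keep the decoded result around
err_message = proc.stderr.read().decode()

if proc.returncode != 0:
    if "No such file" in err_message:
        raise FileNotFoundError('No such file "me"')
    # anything unexpected still fails loudly, with stderr attached for context
    raise RuntimeError(err_message)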
References:
[1]: #using-popen
[2]: #reading-from-stderr
[3]: #better-exception
Samba is an implementation of the smb protocol that allows me to set up
network shares on my linux machine that I can open on a variety of devices.
I think the homelab [1] is starting to intrigue me enough to dive into
experimenting with different things that I might want set up in my own
home.
One key piece of this is network storage. As I looked into NAS options, I
realized that a full NAS takes a dedicated machine, or one virtualized at a
lower level than I have the capability for right now.
Humble Beginnings # [2]
To get going I am going to make a directory, /srv/samba/public, open to
anyone on my network. I am not going to worry too much about it; I just
want something up and running so that I can learn.
Install samba, open the firewall, and edit the smb.conf
sudo apt install samba samba-common-bin
sudo ufw allow samba
sudo nvim /etc/samba/smb.conf
I added this to the end of my smb.conf
[public]
comment = public share, no need to enter username and password
path = /srv/samba/public/
browseable = yes
writable = yes
guest ok = yes
Then I made the /srv/samba/public directory and made it writable by anyone.
sudo mkdir -p /srv/samba/public
sudo setfacl -R -m "u:nobody:rwx" /srv/s...
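One step I will call out as an assumption on my part: samba has to reload
its config before the new share shows up. On Ubuntu, restarting the smbd
service should do it.
sudo systemctl restart smbd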
A super useful tool when reviewing PRs or checking your own work during a
big refactor is the silver searcher. It’s a super fast command line based
search tool. You just run ag "<search term>" to search for your term. This
will list out every line, in every file under your current working
directory, that contains a match.
Ahead/Behind # [1]
It’s often useful to have some extra context around the change. I recently
reviewed a bunch of PRs that moved schema from save_args to the root of the
dataset in all yaml configs. To ensure they all made it to the top level
DataSet configuration, and not underneath save_args, I can search for all
the schemas and check that none of them sit under save_args anymore.
ag "schema: " -A 12 -B 12
References:
[1]: #aheadbehind
I’ve run a Minecraft server at home since December 2017 for me and my son
to play on. We start a brand new one somewhere between every day and every
week; the older he gets, the longer the server lasts.
In all these years, I’ve been popping open the command line and running
the server manually, and even inside of Digital Ocean occasionally to
play a more public server with a friend.
My buddy Nic has been sharing some of his homelab [1] setup with me, and
it’s really got me thinking about what I can run at home and about
Dockerizing all the things. Today I found a really sweet github repo that
runs a minecraft server in docker with a pretty incredible setup.
I ended up running the first thing in the Readme that included a volume
mount. If you are going to run this container, I HIGHLY recommend that you
make sure your world volume is mounted, otherwise your world will die with
your docker container.
Docker Compose # [2]
With the following stored as my docker-compose.yml in a brand new and
otherwise empty directory I was ready to start the server for the night.
version: "3"
services:
  mc:
    container_name: walkercraft
    image: itzg/minecraft-server
    ports:
      - 25565:25565
    en...
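The volume mount is the piece I really care about. As a minimal sketch,
assuming the image’s default /data directory and an illustrative host path,
that part of the service looks something like this:
    volumes:
      # keep the world on the host so it survives the container
      - ./data:/data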
Installing rust from your own ansible playbook makes sure that you get
consistent installs across all the machines you may use, and that you can
replicate your development machine if it ever goes down.
Personal philosophy # [1]
I try to install everything that I will want to use for more than just a
trial from my ansible playbook. This way I always get the same setup across
my work and home machines, and on any throwaway VM I might set up.
recommended install # [2]
This is how rust recommends you install it on Ubuntu. First update your
system, then run their installer, and finally check that the install was
successful.
# system update
sudo apt update
sudo apt upgrade
# download and run the rust installer
curl https://sh.rustup.rs -sSf | sh
# confirm your installation is successful
rustc --version
Ansible Install # [3]
The first thing I do in my playbooks is to check if the tool is already
installed. Here I chose to look for cargo, you could also look for
rustc.
- name: check if cargo is installed
  shell: command -v cargo
  register: cargo_exists
  ignore_errors: yes
I first check for an existing install so I can re-run my playbooks
quickly filling in only missing...
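The follow-up task in my playbook only runs the installer when that check
fails. A minimal sketch of what that could look like (not necessarily my
exact task):
- name: install rust via rustup
  shell: curl https://sh.rustup.rs -sSf | sh -s -- -y
  when: cargo_exists is failed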
In looking for a way to automatically generate descriptions for pages I
stumbled into a markdown ast in python. It allows me to go over the
markdown page and get only paragraph text. This will ignore headings,
blockquotes, and code fences.
import commonmark
import frontmatter
post = frontmatter.load("post.md")
parser = commonmark.Parser()
ast = parser.parse(post.content)
paragraphs = ''
for node in ast.walker():
    if node[0].t == "paragraph":
        paragraphs += " "
        paragraphs += node[0].first_child.literal
It’s also super fast, previously I was rendering to html [1] and using
beautifulsoup to get only the paragraphs. Using the commonmark ast was
about 5x faster on my site.
Duplicate Paragraphs # [2]
When I originally wrote this post, I did not realize at the time that
commonmark duplicates nodes. I still do not understand why, but I have had
success deduplicating them based on the source position of the node with
the snippet below.
from itertools import compress
import commonmark
import frontmatter
post = frontmatter.load("post.md")
parser = commonmark.Parser()
ast = parser.parse(post.content)
# find all paragraph nodes
paragraph_nodes = [
    n[0]
    for n in ast.walker()
    if n[0...
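As a rough sketch of the same idea, deduplicating on each node’s sourcepos
attribute has worked for me; treat this as an illustration rather than the
exact original snippet:
# keep only the first node seen at each source position
seen = set()
unique_nodes = []
for node in paragraph_nodes:
    pos = tuple(node.sourcepos[0])
    if pos not in seen:
        seen.add(pos)
        unique_nodes.append(node)

paragraphs = " ".join(node.first_child.literal for node in unique_nodes)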
Creating a minimal config specifically for git [1] commits has made
running git commit much more pleasant. It starts up much faster, and has
all of the parts of my config that I use while making a git commit. The one
thing that I use most is autocomplete for things coming from elsewhere in
the tmux session; for this, compe-tmux specifically is super helpful.
The other thing that is ingrained into my muscle memory is jj for escape.
For that I went ahead and added my settings and keymap with no noticeable
performance hit.
Here is the config that has taken shape.
~/.config/nvim/init-git.vim
source ~/.config/nvim/settings.vim
source ~/.config/nvim/keymap.vim
source ~/.config/nvim/git-plugins.vim
lua require'waylonwalker.cmp'
~/.config/nvim/git-plugins.vim
call plug#begin('~/.local/share/nvim/plugged')
" cmp
Plug 'hrsh7th/nvim-cmp'
Plug 'hrsh7th/cmp-nvim-lsp'
Plug 'hrsh7th/cmp-buffer'
Plug 'hrsh7th/cmp-path'
Plug 'hrsh7th/cmp-calc'
Plug 'andersevenrud/compe-tmux', { 'branch': 'cmp' }
call plug#end()
~/.gitconfig
[core]
editor = nvim -u ~/.config/nvim/init-git.vim
References:
[1]: /glossary/git/
For an embarrassingly long time, until today, I have been wrapping my dict
lookups in try/except KeyError in python. I’m sure I’ve read .get in code a
bunch of times, but just brushed over why you would use it. That is, until
I read a bunch of PRs from my buddy Nic and noticed that he never gets
things with brackets, always with .get. It turns out to be a much cleaner
way to provide a default case than try/except.
Example # [1]
Let’s consider this example for prices of supplies. Here we set a
variable, prices, as a dictionary of items and their price.
prices = {'pen': 1.2, 'pencil': 0.3, 'eraser': 2.3}
Except KeyError # [2]
What I would always do is try to get the key, and if it failed on KeyError, I
would set the value (paper_price in this case) to a default value.
try:
    paper_price = prices['paper']
except KeyError:
    paper_price = None
.get # [3]
What I noticed Nic does is use get. This feels so much cleaner: it’s a one
liner, and it’s much easier to read and understand that if there is no
price for paper we set it to None.
paper_price = prices.get('paper', None)
We can just as easily set the default to other values. Let’s consider sales
for instance. If there is not a record f...
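As a hypothetical illustration of that idea (these names and numbers are
made up), a missing sales record can default to 0 instead of raising.
sales = {'pen': 10, 'pencil': 4}
# no eraser sales recorded yet, so default to 0 rather than raising KeyError
eraser_sales = sales.get('eraser', 0)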
I was listening to Ship It #37 [1] with Vincent Ambo talking about
building fully declarative systems with nix. Vincent is building out Nixery
and strongly believes that standard versioning systems are flawed. If we
have good ci set up, and every commit is a good commit, then a release is
just some arbitrary point in history that the maintainer decided was a good
time to release, and has less to do with features and quality.
Since many things still want to see a version number, there is one
automatic, always increasing number that is part of every single git [2]
repo, and that is the commit count. Nixery is versioned by commit count.
When counting on the main branch, there is no way for two points in time to
share the same version. git rev-list counts every commit reachable from the
ref you give it, so be careful to count from the branch you want to
version/release from.
git rev-list main --count
References:
[1]: https://changelog.com/shipit/37
[2]: /glossary/git/
BeautifulSoup is a DOM-like library for python. It’s quite useful for
manipulating html [1]. Here is an example using find_all to grab html
headings. I stole the regex from stack overflow, but who doesn’t.
Make an example # [2]
sample.html
Let’s make a sample.html file with the following contents. It mainly has
some headings, <h1> and <h2> tags, that I want to be able to find.
<!DOCTYPE html>
<html lang="en">
<body>
<h1>hello</h1>
<p>this is a paragraph</p>
<h2>second heading</h2>
<p>this is also a paragraph</p>
<h2>third heading</h2>
<p>this is the last paragraph</p>
</body>
</html>
Get the headings with BeautifulSoup # [3]
Let’s import our packages, read in our sample.html using pathlib, and find
all headings using BeautifulSoup.
import re
from bs4 import BeautifulSoup
from pathlib import Path
soup = BeautifulSoup(Path('sample.html').read_text(), features="lxml")
headings = soup.find_all(re.compile("^h[1-6]$"))
And what we get back is a list of bs4.element.Tag objects.
>>> print(headings)
[<h1>hello</h1>, <h2>second heading</h2>, <h2>third heading</h2>]
I recently added a heading_link plugin to markata, you might notice the
🔗’s next to each heading on this page, that is powered by this exact
techniq...
I keep my nodes short and sweet. They do one thing and do it well. I turn
almost every DataFrame transformation into its own node. It makes it much
easier to pull catalog entries than to fire up the pipeline, run it, and
start a debugger. For this reason many of my nodes can be built from inline
lambdas.
Examples # [1]
Here are two examples. The first one, lambda x: x, is sometimes referred
to as an identity function. This is super common in the early phases of a
project. It lets you follow standard layering conventions without skipping
a layer or overthinking whether you should have the layer at all, and it
leaves a good placeholder to fill in later when you need it. Many times I
just want to get the data in as fast as possible, learn about it, then go
back and tidy it up.
from kedro.pipeline import node
my_first_node = node(
    func=lambda x: x,
    inputs='raw_cars',
    outputs='int_cars',
    tags=['int'],
)
my_first_node = node(
    func=lambda cars: cars[['mpg', 'cyl', 'disp']].query('disp>200'),
    inputs='raw_cars',
    outputs='int_cars',
    tags=['pri'],
)
Note: try not to take the idea of a one liner too far. If your
one line function wraps several lines down it probably deserv...
stow -R --simulate -vvv git
I’ve never found a great use for a global .gitignore file. Mostly I fear
that if I add common things like .pyc files globally, they will be missing
from the project’s own .gitignore and inevitably get committed to the
project by someone else.
Personal Tools # [1]
Within the past year I have added some tools to my personal setup that are
not required to run the project, but work really well with my setup. They
are direnv and pyflyby. Since both support project level configuration, are
less common, and are not in most .gitignore templates, they make great
candidates for a global .gitignore file.
create the config # [2]
Like any .gitignore, it supports git’s ignore pattern syntax. I made a
~/dotfiles/git/.global_gitignore file, and added the following to it.
.envrc
.pyflyby
.copier-defaults
.venv*/
.python-version
markout
.markata.cache
Once I had this file, I stowed it into ~/.global_gitignore.
cd ~/dotfiles/
stow git
Always stow your dotfiles; don’t set yourself up to wonder why your next
machine is not working right.
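Git also needs to be told to use this file as its global excludes file. A
minimal sketch of wiring that up, assuming the file lands at
~/.global_gitignore:
git config --global core.excludesFile ~/.global_gitignore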
stow note # [3]
Note: the reason it is a ~/.global_gitignore and not a ~/.gitignore is
that I was unable to stow a .gitignore file. They must be ignored by
...
Today I discovered a sweet new cli for compressing images.
squoosh cli [1]
is a wasm powered cli that supports a bunch of formats that I would want to
convert my website images to.
from the future
> Unfortunately, due to a few people leaving the team, and staffing issues
> resulting from the current economic climate (ugh), I’m deprecating the
> CLI and libsquoosh parts of Squoosh. The web app will continue to be
> supported and improved. I know that sucks, but there simply isn’t the
> time & people to work on this. If anyone from the community wants to fork
> it, you have my blessing.
https://github.com/GoogleChromeLabs/squoosh/pull/1321
Web App # [2]
First the main feature of squoosh is a web app [3] that
makes your images smaller right in the browser, using the same wasm. It’s
sweet! There is a really cool swiper to compare the output image with the
original, and graphical dials to change your settings.
CLI # [4]
What is even cooler is that once you have settings you are happy with, and
are really cutting down those kb’s on your images, there is a copy cli
command button! If you have npx (which you should if you have nodejs and
npm) already installed, it just works without instal...
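I won’t reproduce the exact command the button copies from memory, but it
is roughly along these lines; treat the codec flag, its value, and the
filename as illustrative:
npx @squoosh/cli --webp auto hero-image.png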
As you work on your kedro projects you are bound to need to add more
dependencies eventually. Kedro uses a fantastic tool, pip-compile, under
the hood to ensure that everyone is on the same version of every package at
all times, and able to easily upgrade. It might be a bit of a different
workflow than what you have seen, so let’s take a look at it.
git status # [2]
Before you start mucking around with any changes to dependencies, make
sure that your git status is clean. I’d even recommend starting a new
branch for this, and if you are working on a team, potentially submitting
it as its own PR for clarity.
git status
git checkout main
git checkout -b add-rich-dependency
requirements.in # [3]
New requirements get added to a requirements.in file. If you need to specify
an exact version, or a minimum version you can do that, but if all versions
generally work you can leave it open.
# requirements.in
rich
Here I added the popular rich package to my requirements.in file. Since
I am ok with the latest version I am not going to pin anything, I am going to
let the pip resolver pick the latest version that does not conflict with any of
my dependencies for me.
build-reqs # [4]
...
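From memory rather than the docs: kedro ships a build-reqs command that
runs pip-compile over requirements.in and writes out a pinned
requirements.txt to install from. Roughly:
kedro build-reqs
# the compiled requirements.txt path may differ by kedro version
pip install -r src/requirements.txt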
I am a huge believer in practicing your craft. Professional athletes spend
most of their time honing their skills and making themselves better. In
engineering, many spend nearly zero time practicing. I am not saying that
you need to spend all your free time practicing, but a few minutes trying
new things can go a long way in how well you understand what you are doing
and make a huge impact on your long term productivity.
What is Kedro [1]
Start practicing # [2]
practice building pipelines with #kedro today
Go to your playground directory, and if you don’t have one, make one.
cd ~/playground
get pipx # [3]
Install pipx in your system python. This is one of the very few, and
possibly the only, python libraries that deserves to be installed in your
system python, primarily because it’s used to sandbox CLIs in their own
virtual environment [4] automatically for you.
pip install pipx
make a new project # [5]
From inside your playground directory, start your new kedro project. This
is quite simple and painless, so much so that if you mess this one up doing
something wild, it might be easier to make a new one than to fix the wild
one.
pipx run kedro new
# answer the questions it asks
I u...
One of the first things I noticed broken in my terminal based workflow
moving from Windows wsl to ubuntu was that my clipboard was all messed up
and not working with my terminal apps. Luckily, setting up tmux and neovim
to work with the system clipboard was much easier than it was on windows.
First off, you need to get xclip if it isn’t already provided by your
distro. I found it in the apt repositories, and I have used it on Ubuntu
18.04 through 21.10, all working flawlessly for me.
I have tmux set up to automatically copy any selection I make to the
clipboard by setting the following in my ~/.tmux.conf. While I have neovim
open, I need to be in insert mode for this to pick up the selection.
# ~/.tmux.conf
bind -T copy-mode-vi Enter send-keys -X copy-pipe-and-cancel "xclip -i -f -selection primary | xclip -i -selection clipboard"
bind-key -T copy-mode-vi MouseDragEnd1Pane send-keys -X copy-pipe-and-cancel "xclip -selection clipboard -i"
To get my yanks to go to the system clipboard in neovim, I just added
unnamedplus to my existing clipboard variable.
" ~/.config/nvim/init.vim
set clipboard+=unnamedplus
If you need to copy something right from the terminal you can use xclip
directly. ...
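For instance, a minimal sketch of piping text straight to the clipboard
(the echoed text is just a placeholder):
echo "hello clipboard" | xclip -selection clipboard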