Pydantic is a Python library for parsing data into models that can be
validated with a deep set of built-in validators or your own custom validators,
then serialized back to JSON or a dictionary.
Installation # [1]
To install pydantic you will first need Python and pip. Once you have pip
installed, install pydantic with:
pip install pydantic
Always install in a virtual environment [2]
Creating a Pydantic model # [3]
To get started with pydantic you will first need to create a Pydantic model.
This is a python class that inherits from pydantic.BaseModel.
from pydantic import BaseModel
from pydantic import Field
class Person(BaseModel):
name: str = Field(...)
age: int
parsing an object # [4]
person = Person(name="John Doe", age=30)
print(person)
name='John Doe' age=30
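Models can also go the other way, back to a dictionary or JSON string. A minimal sketch, assuming the pydantic v1 API (`.dict()`/`.json()`; pydantic v2 renames these to `model_dump`/`model_dump_json`):

```python
from pydantic import BaseModel

class Person(BaseModel):
    name: str
    age: int

person = Person(name="John Doe", age=30)

# serialize back to a dictionary or a JSON string
# (v1 API; still works in v2 with a deprecation warning)
print(person.dict())  # {'name': 'John Doe', 'age': 30}
print(person.json())
```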
data serialization # [5]
Pydantic has some very robust serialization methods that will automatically
coerce your data into the type specified by the type hint in the model, if it can.
person = Person(name=12, age="30")
print(f'name: {person.name}, type: {type(person.name)}')...
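A fuller sketch of this coercion, including what happens when coercion fails. Note that behavior differs between pydantic v1 and v2: `"30"` is coerced to `30` in both, but v2 no longer coerces an int into a `str` field, so this sketch sticks to the case that works in both:

```python
from pydantic import BaseModel, ValidationError

class Person(BaseModel):
    name: str
    age: int

# the string "30" is coerced to the int 30 to match the type hint
person = Person(name="John Doe", age="30")
print(type(person.age))  # <class 'int'>

# data that cannot be coerced raises a ValidationError
try:
    Person(name="John Doe", age="thirty")
except ValidationError:
    print("validation failed")
```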
useful btrfs tools
disk usage # [1]
Looking at disk usage on a btrfs filesystem must be done with a tool built for it if
you want an accurate measurement. General-purpose tools like du will be
inaccurate, as they do not account for things like duplicate copies in snapshots.
❯ sudo btrfs fi usage -T /
[sudo] password for waylon:
Overall:
Device size: 465.26GiB
Device allocated: 251.06GiB
Device unallocated: 214.20GiB
Device missing: 0.00B
Device slack: 0.00B
Used: 234.44GiB
Free (estimated): 227.37GiB (min: 120.27GiB)
Free (statfs, df): 227.37GiB
Data ratio: 1.00
Metadata ratio: 2.00
Global reserve: 478.88MiB (used: 0.00B)
Multiple profiles: no
Data Metadata System
Id Path single DUP DUP Unallocated Total Slack
-- -------------- --------- -------- -------- ----------- --------- -----
1 /dev/nvme1n1p2 239.00GiB 12.00GiB 64.00MiB 214.20GiB 465.26GiB -
-- -------------- --------- -------- -------- ----------- --------- -----
Total 239.00GiB 6.00GiB 32.00MiB 214.20GiB 465.26GiB 0.00B
Used 225.82GiB ...
devops philosophy
How to keep a secret - https://changelog.com/shipit/58
Kelsey Heightower Fundamentals - https://changelog.com/shipit/44
What does good devops look like - https://changelog.com/shipit/28
Docs are not optional - https://changelog.com/shipit/17
Dave Farley the foundations of Continuous Delivery - https://changelog.com/shipit/5
extending vim with shell commands
Vimconf 2022
The pitch # [1]
Extending vim does not need to be complicated and can be done using cli tools
that you might already be comfortable with. Examples: setting up
code formatters with autocmds, using lf/ranger as a tui file manager, generating
new files with a template framework like cookiecutter/copier/yeoman, and using ag
to populate your quickfix list.
run a command # [2]
In normal mode, `!!` filters the current line through an external shell command. For example, with the word `vimconf` on the current line, running `!!figlet` replaces it with figlet's ascii-art rendering.
vimconf!!<esc>!!figlet
formatters # [3]
local settings = require'waylonwalker.settings'
local augroup = vim.api.nvim_create_augroup
local autocmd = vim.api.nvim_create_autocmd

local M = {}

M.waylonwalker_augroup = augroup('waylonwalker', { clear = true })

M.format_python = function()
    if settings.auto_format.python then
        vim.cmd('silent execute "%!tidy-imports --black --quiet --replace-star-imports --replace --add-missing --remove-unused " . bufname("%")')
        vim.cmd('silent execute "%!isort " . bufname("%")')
        vim.cmd('silent execute "%!black " . bufname("%")')
    end
end

autocmd({ "BufWritePost" }, {
    group = M.waylonwalker_augroup,
    pattern = { "*.py" },
    callback = M.format_python,
})

return M
File Navigation # [4...
from kedro.pipeline import node

node(
    func=my_func,
    inputs="raw",
    outputs="int",
    tags=["one"],
)
- 11ty https://www.rockyourcode.com/how-to-deploy-eleventy-to-github-pages-with-github-actions/
- hugo puts it in the base url https://gohugo.io/getting-started/configuration/#baseurl
- mkdocs uses a special cli build command https://squidfunk.github.io/mkdocs-material/publishing-your-site/#github-pages
Upon first running an aws cli command using localstack you might end up with the following error.
Unable to locate credentials. You can configure credentials by running "aws configure".
Easy way # [1]
The easiest way is to leverage a package called awscli-local.
pipx install awscli-local
Leveraging the awscli # [2]
If you want to use the awscli directly:
pipx install awscli
aws configure --profile localstack
# put what you want for the keys, but enter a valid region like us-east-1
alias aws='aws --endpoint-url http://localhost:4566 --profile localstack'
References:
[1]: #easy-way
[2]: #leveraging-the-awscli
npx create-react-app todoreact
import React,{useState,useEffect} from 'react';
import './App.css';
function App() {
const [data,setData]=useState([]);
const [newName,setNewName]=useState('');
const getData=()=>{
fetch('/api'
,{
headers : {
'Content-Type': 'application/json',
'Accept': 'application/json'
}
}
)
.then(function(response){
return response.json();
})
.then(function(myJson) {
setData(myJson)
});
}
useEffect(()=>{
getData()
},[])
const addItem= async () => {
const rawResponse = await fetch('/api/add/', {
method: 'POST',
headers: {
'Accept': 'application/json',
'Content-Type': 'application/json'
},
body: JSON.stringify({"name": newName})
});
const content = await rawResponse.json();
console.log(content);
getData()
}
return (
<div className="App">
{
data && data.length>0 && data.map((item)=><p key={item.id}>{item.id}{item.priority}{item.name}<button>raise priority</button></p>)
}
<input type='text' value={newName} onChange={(e) => (setNewName(e.target.value))} />
<button onClick={addItem} >add item</button>
</div>
);
}
export default App;
[1]
References:
[1]: https://dropper.waylonwalker.com/api/file/388f4342-8623-4ac7-9b4b-1d63cd82d2ad.png
Hatch allows you to specify direct references for dependencies in your
pyproject.toml file. This is useful when you want to depend on a package that
is not available on PyPI or when you want to use a specific version from a Git [1]
repository. Direct references are often used for unreleased packages, or
unreleased versions of packages.
docs [2]
[project]
dependencies = ['markata', 'markata-todoui@git+https://github.com/waylonwalker/markata-todoui']
[tool.hatch.metadata]
allow-direct-references=true
References:
[1]: /glossary/git/
[2]: https://hatch.pypa.io/dev/config/dependency/#direct-references
Setting up snapper on Arch
https://www.youtube.com/watch?v=_97JOyC1o2o
snapper
snap-pac
grub-btrfs
Note # [1]
These are mostly my notes to remind myself; I'd highly recommend watching
this video [2] or reading this
arch wiki page [3]
/.snapshots already exists error # [4]
When I started running sudo snapper -c root create-config / I ran into the
following error.
[5]
Creating config failed (creating btrfs subvolume .snapshots failed since it already exists).
remove existing snapshots # [6]
sudo umount /.snapshots
sudo rm -r /.snapshots
configure snapper # [7]
sudo snapper -c root create-config /
sudo snapper -c home create-config /home
btrfs subvolumes # [8]
sudo btrfs subvolume list /
[9]
sudo btrfs subvolume delete /.snapshots
sudo mkdir /.snapshots
# [10]
# you might not see snapshots mounted yet
lsblk
# if you check fstab you will see an entry for it
cat /etc/fstab
# mount it
sudo mount -a
# now you should see /.snapshots mounted
lsblk
You should now see .snapshots in mountpoints.
[11...
{% for year in markata.map("date.year", filter='published')|unique %}
{{ year }} # [1]
{% for post in markata.map('post', filter="published and date.year == "+year|string, sort='date') %}
- [{{ post.title }} - {{ post.date.month }}/{{ post.date.day }}](/{{ post.slug }})
{% endfor %}
{% endfor %}
References:
[1]: #-year-
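The group-posts-by-year logic above can be sketched in plain Jinja2, with markata's `map` simulated by a hypothetical list of post dicts (the `posts` data and field names here are made up for illustration):

```python
from jinja2 import Template

# hypothetical posts standing in for markata.map('post', ...)
posts = [
    {"title": "hello", "year": 2021, "slug": "hello"},
    {"title": "btrfs", "year": 2022, "slug": "btrfs"},
    {"title": "vim", "year": 2022, "slug": "vim"},
]

# pull out unique years, then loop the posts belonging to each year
template = Template(
    "{% for year in posts | map(attribute='year') | unique %}"
    "{{ year }}:"
    "{% for post in posts if post.year == year %} {{ post.title }}{% endfor %}\n"
    "{% endfor %}"
)
print(template.render(posts=posts))
```

`unique` keeps the first occurrence of each year in order, so this renders a `2021:` line followed by a `2022:` line listing both 2022 posts.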