- publish
- page
- feature
-title: about
+title: Angelo Rodrigues
date: 2024-12-09T19:14:24.046Z
lastmod: 2024-12-12T14:34:22.538Z
---
-# Angelo Rodrigues
-I'm Angelo. I've been working in Software Development for the last 20 years or so, doing a bit of everything. I started my journey as a "Full Stack" developer/designer back when [2Advanced Studios launched v3](https://www.webdesignmuseum.org/exhibitions/2advanced-studios-v3-2001) and over the years I've moved further and further "backward" in the stack. Since about 2012 my technical focus has been around Backend/DevOps/SRE work and more broadly around "Leveling Up Engineering". I have a history of joining teams that are ready to take the next step from "We have this MVP" and helping them get to "We are a company". I've gotten to do a bit of everything from Founding my own Startup in the Security Space, Consulting (Both technical and Engineering Management), Leading teams as a People Manager, and contributing as a Principal Engineer. My own career seems to have a focus on Ed Tech + Security, with a bit of dabbling in Crypto Currency.
+I'm Angelo. I've been working in Software Development for the last 20 years or so, doing a bit of everything. I started my journey as a "Full Stack" developer/designer back when [2Advanced Studios launched v3](https://www.webdesignmuseum.org/exhibitions/2advanced-studios-v3-2001) and over the years I've moved further and further "backward" in the stack. Since about 2012 my technical focus has been around Backend/DevOps/SRE work and more broadly around "Leveling Up Engineering". I have a history of joining teams that are ready to take the next step from "We have this MVP" and helping them get to "We are a company". I've gotten to do a bit of everything from Founding my own Startup in the Security Space, Consulting (both technical and Engineering Management), Leading teams as a People Manager, and contributing as a Principal Engineer. My own career seems to have a focus on Ed Tech + Security, with a bit of dabbling in Cryptocurrency.
This blog is a way for me to document all of the things I've learned and am still learning about everything I run across. The updates happen sporadically (as life does) and I hope that you'll be able to find something interesting here.
tags:
- publish
- page
-title: links
+title: Links
date: 2024-12-09T19:01:16.962Z
lastmod: 2025-01-22T15:44:41.731Z
---
-# Links
Here's a collection of blogs that I enjoy and try to keep up with. They aren't listed in any particular order. I'm really looking to re-live the nostalgia of early webrings in the pre-Web 2.0 era. In fact, at some point I'm going to have to move this blog off GitHub Pages so that I can start experimenting with some of the IndieWeb functionality...
-***
+---
### Drew DeVault's Blog - https://drewdevault.com
date: 2024-12-09T16:29:52.051Z
lastmod: 2024-12-12T05:45:27.752Z
---
-# ElasticBeanstalk Gotchas
-I've been lucky and unlucky enough to work with AWS ElasticBeanstalk for a number of years and here's a list of footguns.
+I've been lucky and unlucky enough to work with AWS ElasticBeanstalk for a number of years and here's a list of footguns.
## Degraded on 4xx Errors
title: Amplify Docker Limitations
lastmod: 2024-12-10T19:47:15.594Z
---
-# Amplify Docker Limitations
I've recently had the chance to do some work with Amplify in AWS and I'm surprised how simultaneously feature-rich and half-baked it is. It seems if you're into click-ops you'll be fine in Amplify until you hit a problem.
date: 2023-06-20T12:30:56.105-04:00
lastmod: 2024-12-13T16:47:37.707Z
---
-# Chrome HTTP Request stuck in a "Stalled" State
I got the chance to investigate a really odd bug where network requests in Chrome would randomly just hang. This would only occur in our test environments at work and not in production. The request would hang for a long time.. and then eventually complete successfully. The bug had been occurring for some time, but had been getting worse in Chrome. It got so bad that if you were using Chrome it was guaranteed to happen to you. Eventually it started happening in Firefox as well.. during an investor demo (what good is a demo if it doesn't go up in flames?). That's when I got roped in.
So we can clearly see here that the request took 2 minutes and the entirety of that time the connection was stuck in the `stalled` state. That indicates one of two things:
-1. Chrome never attempted to make the network request at all. Perhaps the priority on the request was dropped, maybe there were too many connections open to that FQDN already.
+1. Chrome never attempted to make the network request at all. Perhaps the priority on the request was dropped, maybe there were too many connections open to that FQDN already.
2. In some situations Chrome actually merges the CORS preflight requests into what it reports as `stalled`. So it's possible that there was a problem in the preflight request that caused the delay before the actual request happened.
### Chrome Network Log
-One tool that chrome has to diagnose networking issues is hidden away at `chrome://net-export`. It generates a very VERY detailed log of everything network related that chrome is aware of.
+One tool that Chrome has to diagnose networking issues is hidden away at `chrome://net-export`. It generates a very VERY detailed log of everything network-related that Chrome is aware of.

Once you get that capture file, you have to head over to https://netlog-viewer.appspot.com and import it. There's a TON of information here, and honestly I didn't even look at half of it. The only two things I cared about were the "Events" and "Timeline" sections. The Timeline really makes no sense until you have an idea of when your actual network event happened, so we can skip that and jump right over to Events.
-There will likely be a lot of events. The "filter" at the top never worked for me given the sheer size of the events.. but scrolling through them all was just fine and eventually I found the URL request that caused the issue. If you click on the event it will display a bunch of debug information about the request.
+There will likely be a lot of events. The "filter" at the top never worked for me given the sheer size of the events.. but scrolling through them all was just fine and eventually I found the URL request that caused the issue. If you click on the event it will display a bunch of debug information about the request.

-As you can see.. suddenly there's a HUGE jump in time from `66807` to `187631`. We've confirmed now that this is a problem that's occurring within the CORS preflight request specifically, and it's just getting rolled into the `stalled` state. The log viewer makes it trivial to dig down into the events and if you click on the details of the `HTTP_STREAM_JOB_CONTROLLER` event you can see some more details.
+As you can see.. suddenly there's a HUGE jump in time from `66807` to `187631`. We've confirmed now that this is a problem that's occurring within the CORS preflight request specifically, and it's just getting rolled into the `stalled` state. The log viewer makes it trivial to dig down into the events and if you click on the details of the `HTTP_STREAM_JOB_CONTROLLER` event you can see some more details.
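If you'd rather not eyeball timestamps, gaps like that `66807` → `187631` jump are easy to find with a small script. This is a sketch under assumptions: I'm assuming the net-export capture is JSON with a top-level `events` array whose entries carry a `source.id` and a millisecond `time` stored as a string, and the sample data below is fabricated to mirror the jump above.

```python
import json

def find_stalls(netlog, min_gap_ms=1000):
    """Scan a chrome://net-export capture for large time gaps
    between consecutive events on the same source (request)."""
    if not isinstance(netlog, dict):
        with open(netlog) as f:  # accept a path as well as a parsed dict
            netlog = json.load(f)

    last_time = {}  # source id -> time of its previous event
    stalls = []
    for ev in netlog.get("events", []):
        src = ev["source"]["id"]
        t = int(ev["time"])  # netlog times are string milliseconds
        if src in last_time and t - last_time[src] >= min_gap_ms:
            stalls.append((src, last_time[src], t))
        last_time[src] = t
    return stalls

# Tiny fabricated sample mirroring the gap described in this post
sample = {"events": [
    {"source": {"id": 7}, "time": "66805"},
    {"source": {"id": 7}, "time": "66807"},
    {"source": {"id": 7}, "time": "187631"},
]}
print(find_stalls(sample))  # -> [(7, 66807, 187631)]
```

Real captures nest far more fields per event, but the `source`/`time` pair is enough to locate the stalled request before digging into the viewer.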

-Here again, we see that there is a definitely delay when it attempts to call `HTTP_STREAM_REQUEST_STARTED_JOB`
+Here again, we see that there is a definite delay when it attempts to call `HTTP_STREAM_REQUEST_STARTED_JOB`

And now we can easily see the problem: `SOCKET_POOL_STALLED_MAX_SOCKETS_PER_GROUP`
-In HTTP1.1 each tab in your browser is configured to only make a certain number of requests per FQDN at the same time.This is one of the reasons why we load "static assets" on a different subdomain. By loading static assets on a separate FQDN we can increase the objects that are simultaneously loaded in our tab providing a better experience (for some definition of experience) to our user. In HTTP2, this restriction is across every single tab in your browser. For chrome, it can only instantiate 6 concurrent connections to an FQDN. This is because your connections are persistent in http2 and you don't need to deal with the initialization handshakes on every request. The connection, once opened, is continually reused.
+In HTTP1.1 each tab in your browser is configured to only make a certain number of requests per FQDN at the same time. This is one of the reasons why we load "static assets" on a different subdomain. By loading static assets on a separate FQDN we can increase the number of objects that are simultaneously loaded in our tab, providing a better experience (for some definition of experience) to our user. In HTTP2, this restriction is across every single tab in your browser. Chrome can only instantiate 6 concurrent connections to an FQDN. This is because your connections are persistent in HTTP2 and you don't need to deal with the initialization handshakes on every request. The connection, once opened, is continually reused.
For some reason, the socket pool dedicated to this particular FQDN gets filled up and so it can't actually make the next request. So it just sits there.. until suddenly a socket is available (2 minutes later) and it is able to complete the rest of the request as expected. The "suddenly" is likely due to the default socket timeout. Once that timeout is hit, Chrome kills the connection and opens a new one and suddenly our request works again.
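The "pool fills up and the next request just waits" behaviour can be modeled with a semaphore. Here's a toy sketch, not Chrome's actual implementation: the cap of 6 is the number from this post, and the host name and zero-length "I/O" are stand-ins.

```python
import asyncio
from collections import defaultdict

MAX_PER_HOST = 6  # the per-FQDN connection cap discussed above

class HostLimiter:
    """Toy model of a per-FQDN socket pool: at most MAX_PER_HOST
    requests to any one host may be in flight at once."""
    def __init__(self):
        self.pools = defaultdict(lambda: asyncio.Semaphore(MAX_PER_HOST))
        self.active = defaultdict(int)   # in-flight count per host
        self.peak = defaultdict(int)     # observed high-water mark

    async def fetch(self, host):
        async with self.pools[host]:     # requests queue here if the pool is full
            self.active[host] += 1
            self.peak[host] = max(self.peak[host], self.active[host])
            await asyncio.sleep(0)       # stand-in for network I/O
            self.active[host] -= 1

async def demo():
    limiter = HostLimiter()
    # Fire 20 "requests" at one FQDN; only 6 ever run concurrently
    await asyncio.gather(*(limiter.fetch("api.example.com") for _ in range(20)))
    return limiter.peak["api.example.com"]

peak = asyncio.run(demo())
print(peak)  # -> 6
```

The failure mode in this post is what happens when slots in that pool are held by connections the server has silently abandoned: new requests queue behind sockets that will never free up until a timeout fires.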
-We can dig even further! Since we know that this is happening on an HTTP2 call, we can filter our events to only show us the http2 connections and that paints a more serious picture!
+We can dig even further! Since we know that this is happening on an HTTP2 call, we can filter our events to only show us the http2 connections and that paints a more serious picture!

Every one of our http2 sockets is getting sent a `GOAWAY` frame by the server.. but notice that it says `NO_ERROR`. This generally indicates that the server is telling the client that it will be shutting down this socket. The `GOAWAY` frame also tells the client what the last stream it processed was, so that the client can resend any data it needs to on a new connection. What should happen is that after this frame, the connection is ended by both parties and we move on to a new one. In practice, that happens after a second `GOAWAY` frame that indicates the connection is now dead. Except that final disconnect frame is never sent. So as far as Chrome is concerned, we're still happily connected, so it returns the connection to the pool. But the server has disconnected.
-So it just sits there trying to use the connection again, times out, and then closes and opens a new connection! And so we tracked down the mysterious slow-down and also used some cool tools in the process!
+So it just sits there trying to use the connection again, times out, and then closes and opens a new connection! And so we tracked down the mysterious slow-down and also used some cool tools in the process!
-***
+---
-One thing I do want to note: This seems like a really straight forward problem - but that's just in hindsight. In the moment there's lots of googling and staring off into space trying to remember obscure keywords. I have a really bad memory, and so one of the things I do is memorize keywords/ideas rather than content because there's just too much to remember. In this way I can ensure that I can find the pieces of information I need when I need to. In this case the keys things were:
+One thing I do want to note: this seems like a really straightforward problem - but that's just in hindsight. In the moment there's lots of googling and staring off into space trying to remember obscure keywords. I have a really bad memory, and so one of the things I do is memorize keywords/ideas rather than content because there's just too much to remember. In this way I can ensure that I can find the pieces of information I need when I need to. In this case the key things were:
-* chrome has some kind of detailed network log
+- chrome has some kind of detailed network log
-* browsers like to fold CORS requests into the main request for reporting
+- browsers like to fold CORS requests into the main request for reporting
-* http2 has a max connection limit across your browser
+- http2 has a max connection limit across your browser
The rest of the information used is all derivable from those keys and a search engine.
summary: Code Reviews are nothing more than a half-hearted attempt to avoid planning
lastmod: 2024-12-10T19:49:29.025Z
---
-# Code Reviews are a Failure
As a new startup with one or two engineers on staff, you're very likely not doing code reviews. Engineers at this stage have a very deep understanding of the code - after all, they've probably written most of it. When it's time for a new feature, these initial developers know exactly how they're going to implement it given the architecture of their code base. Chances are, they keep their own work in a branch, and open a Pull Request or Merge Request, but they aren't asking someone to take a look at it. Instead they're making sure their changes work and they're merging it in themselves. Often they'll do this many times a day as they crank out features and bug fixes.
We've all seen the reasons for Code Reviews:
-* Find bugs further downstream
-* Propagation of Knowledge
-* Team Ownership
-* Double check functionality/architecture
+- Find bugs further downstream
+- Propagation of Knowledge
+- Team Ownership
+- Double check functionality/architecture
These are nonsense - Code Reviews in isolation almost always end up with the following results:
-* Reviews languishing in a "Ready for Review" state
-* Drastic code architecture changes
-* Being "Approved" based on social standing of the developer opening the request
+- Reviews languishing in a "Ready for Review" state
+- Drastic code architecture changes
+- Being "Approved" based on social standing of the developer opening the request
Code Reviews are often seen as some kind of magic bullet for catching errors before they get merged into code bases. The ideal is that a developer gets a ticket, makes some code changes, and then shares those changes with everyone else on the team for feedback. The hope is that other developers, with perhaps more context, can catch potential issues or side-effects in the code that the developer doing the work may not have even known about.
Unit tests, Integration Tests, Synthetic/BlackBox Tests - all of these can help ease the time code spends stuck in code reviews. By minimizing the time spent in code reviews, and maximizing the time spent in planning instead we can achieve things like:
-* Actually find bugs further downstream and upstream
-* Propagation of Knowledge throughout the team
-* Team Ownership of a feature
-* Double check functionality/architecture
+- Actually find bugs further downstream and upstream
+- Propagation of Knowledge throughout the team
+- Team Ownership of a feature
+- Double check functionality/architecture
How fun.
## Notes
-* This was originally published on Medium - https://xangelo.medium.com/code-reviews-are-a-failure-36b72a659de4
+- This was originally published on Medium - https://xangelo.medium.com/code-reviews-are-a-failure-36b72a659de4
title: Designing On Call
lastmod: 2024-12-10T19:47:03.143Z
---
-# Designing On-Call
On-call is one of those things that all developers end up doing at some point. My goal isn't to discuss the merit of on-call, but rather what the point of on-call is and how to go about designing what “on-call” means at your company. I'm going to start at the very beginning because chances are you're already doing it wrong. I should also note that I'm looking at this specifically from a SaaS point of view.
Defining the "something" is relatively easy compared to defining "broken". We know a thing is broken if it's not starting, but:
-* What if it works but 1% of the requests are resulting in an error?
-* What if it works but 1% of the time it crashes and restarts?
-* What if it works but is very slow?
-* What if it works, is not slow, and doesn't crash, but your API docs don't match what the endpoint is returning?
-* What if it works, but your database is being crushed by a sudden increase in traffic?
+- What if it works but 1% of the requests are resulting in an error?
+- What if it works but 1% of the time it crashes and restarts?
+- What if it works but is very slow?
+- What if it works, is not slow, and doesn't crash, but your API docs don't match what the endpoint is returning?
+- What if it works, but your database is being crushed by a sudden increase in traffic?
Defining our “something” first is important because it helps us to set bounds on what we consider “broke” and what is “degraded” and what is “fine”.
Deciding that 0.1% errors is "broken" when your application is currently sitting at 1% may seem like a good thing. 0.1% is where we want to be, and so letting devs know when that isn't the case is good. We can work toward 0.1%. But this involves much larger product considerations.
-* When these alerts happen overnight - are devs tasked with a complete resolution during work hours?
-* Does that mean feature work will suffer?
-* How will you buffer your sprint to make time for these interruptions?
-* Do you have the ability to buffer your sprint given product launch dates?
-* What happens when a dev is up all night dealing with issues, do they take the next day off?
-* Do you offload the task to a different team (Ops/Infra/SRE) since feature work is so important?
-* What happens when those teams get burnt out and leave?
+- When these alerts happen overnight - are devs tasked with a complete resolution during work hours?
+- Does that mean feature work will suffer?
+- How will you buffer your sprint to make time for these interruptions?
+- Do you have the ability to buffer your sprint given product launch dates?
+- What happens when a dev is up all night dealing with issues, do they take the next day off?
+- Do you offload the task to a different team (Ops/Infra/SRE) since feature work is so important?
+- What happens when those teams get burnt out and leave?
On the flip side, setting your alerts at 1% is accepting that this is the current state.. but now it becomes a decision about whether a 0.1% error rate is more important than the next feature you're supposed to get out.
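That tradeoff can be made explicit in your alerting rules. A minimal sketch - the 0.1%/1% thresholds are just the hypothetical numbers from this post, not recommendations:

```python
def classify(error_rate, degraded_at=0.001, broken_at=0.01):
    """Map an observed error rate to an alerting state.
    Anything past broken_at pages someone; the band between
    the two thresholds is tracked but doesn't wake anyone up."""
    if error_rate >= broken_at:
        return "broken"      # page on-call now
    if error_rate >= degraded_at:
        return "degraded"    # file a ticket for work hours
    return "fine"

print(classify(0.0005))  # -> fine
print(classify(0.004))   # -> degraded
print(classify(0.02))    # -> broken
```

Writing the thresholds down like this forces the product conversation: whoever changes `broken_at` is deciding what is worth waking a developer up for.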
What you should NOT be required to do, is:
-* figure out where your application code is crashing
-* optimize that weird nested join that you're supposed to tackle next sprint
+- figure out where your application code is crashing
+- optimize that weird nested join that you're supposed to tackle next sprint
Your job at 3 am is problem mitigation. Maybe nothing can be done except throw on a status message so that users know what's happening.
title: Now Powered by Outlines
lastmod: 2024-12-10T19:45:18.372Z
---
-# Now Powered by Outlines
One of the things that I do every so often is completely re-write the backend of my blog. I've mostly hit upon a UI that I like, but I've swapped out the backend over the years between various custom iterations, wordpress, ghost, and now finally Hugo. This time, I've swapped out how I write my blog posts - but kept everything else the same.
The current system allows me to write markdown in vim. I'm normally running `hugo serve -w` at the same time so I can watch the rendered version of what I'm doing as I go. It's sort of like a hacked-together live preview. It works well enough.
-However, for the last 10 years (maybe more?) I've been a huge fan of outliners. I original started with various projects by [Dave Winer](https://scripting.com) and I used almost everything he's written around them for a number of years. I've also tried tooling like [Workflowy](https://workflowy.com) and almost every other infinite-bullet-list tool that came after them. They were all.. fine? I had no real problems with them except that they never really stuck around for very long. They were in a tab in my browser, and my browser has like 100 tabs open at any given time.
+However, for the last 10 years (maybe more?) I've been a huge fan of outliners. I originally started with various projects by [Dave Winer](https://scripting.com) and I used almost everything he's written around them for a number of years. I've also tried tooling like [Workflowy](https://workflowy.com) and almost every other infinite-bullet-list tool that came after them. They were all.. fine? I had no real problems with them except that they never really stuck around for very long. They were in a tab in my browser, and my browser has like 100 tabs open at any given time.
For the last 6 months or so, however, I've been working on my own outliner. It started as an in-browser tool... and I quickly moved it to an offline-first desktop app via [Tauri](https://tauri.app). Having it offline first meant a few big things.
This first iteration uses a lot of hard-coded stuff.. and I'll probably take some time to iron out some of the edge cases around rendering.. but it honestly came together pretty quickly. Since every node in the outliner is markdown it was trivial to put it together. As of right now, I can write my blog post in my outliner, press `shift+p` and have it write out a markdown file to my local hugo instance.
-For now, I do some manual reviewing before officially publishing it. For now there's a few more usability things that need to be added like
+For now, I do some manual reviewing before officially publishing it. There are a few more usability things that need to be added, like
-* differentiating which posts are published vs. un-published
+- differentiating which posts are published vs. un-published
-* being able to 'unpublish' a node
+- being able to 'unpublish' a node
But honestly? I'm kind of enjoying this right now.
summary: Blogging with Obsidian
lastmod: 2024-12-09T21:19:10.953Z
---
-# Publishing with Obsidian
For the last couple of years I've been trending toward the idea that tools like https://remotestorage.io and https://5apps.com/ are actually not any better for owning your content than everything they purport to replace. Instead the true bastion of content owners is the lowly file that exists on their computer. While tools like RemoteStorage (and I'm definitely picking on them for no good reason) talk about freedom, really they're just pushing their vision of freedom on users. Freedom is really about making informed choices.
summary: Where I spend too long talking about why I removed the default font temporarily
lastmod: 2024-12-10T19:49:17.644Z
---
-# Removing the Default Font
This is a small change that I've made to the site that I've actually been thinking about for quite some time. I've always had a monospaced font configured in my CSS, forcing all text into whatever the default monospace font on your system is.
This is very cool stuff.
-Since each character has a bounding box that's the same size as every other character we actually run into a very specific instance of a cool typography side-effect known as [Rivers](https://en.wikipedia.org/wiki/River_\(typography\)). Each character aligns itself perfectly with the character above and below it, creating a giant grid of characters on your screen.
+Since each character has a bounding box that's the same size as every other character we actually run into a very specific instance of a cool typography side-effect known as [Rivers](<https://en.wikipedia.org/wiki/River_(typography)>). Each character aligns itself perfectly with the character above and below it, creating a giant grid of characters on your screen.
But this isn't always the best for reading.
title: Simple Redis Job Queue
lastmod: 2024-12-10T19:48:22.826Z
---
-# Simple Redis Job Queue
A common pattern in most software projects is the "queue". You'll throw some stuff on there and eventually get around to sorting it all out. In some cases, you may not even really care about that.
### Caveats:
-* The jobs have an easy "deduplication key". An ID that is the same given the same inputs.<br>
+- The jobs have an easy "deduplication key": an ID that is the same given the same inputs.
-* If we don't complete a job for any reason, that's fine, another one will likely show up again in a few minutes
+- If we don't complete a job for any reason, that's fine, another one will likely show up again in a few minutes
We were also using Redis, so here's a really simple pattern that can handle this kind of workload.
4. If we have any additional information about the job we want to pass on, we can use `hset jobDetails:[jobId] k1 v1 k2 v2...` and store it in there.
-Your "worker" is just a process that runs `lrange [queueName] 0 0`. That will retrieve the oldest `jobId` in your queue. You can grab any further information from the `jobDetails:[jobId]` hash set and do whatever work you need. When you're done you can call `ltrim [queueName] 0 0` which will remove the job from the queue.
+Your "worker" is just a process that runs `lrange [queueName] 0 0`. That will retrieve the oldest `jobId` in your queue. You can grab any further information from the `jobDetails:[jobId]` hash set and do whatever work you need. When you're done you can call `ltrim [queueName] 1 -1` (or just `lpop [queueName]`), which removes the job from the queue.
-An interesting fact about this setup is that all your calls are `O(1)` and you can pipeline the initial `set`/`rpush`/`hset` calls so that things are even faster.
+An interesting fact about this setup is that all your calls are `O(1)` and you can pipeline the initial `set`/`rpush`/`hset` calls so that things are even faster.
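To make the flow concrete, here's a sketch of the whole pattern. I'm faking the handful of Redis commands with in-memory structures so it runs standalone - in real code these would be `SET ... NX`, `HSET`, `RPUSH`, `LRANGE`, and `LTRIM` calls through a client like redis-py, and the job ID and fields below are made up:

```python
class MiniRedis:
    """In-memory stand-in for the few Redis commands the
    pattern uses, so the sketch runs without a server."""
    def __init__(self):
        self.kv, self.lists, self.hashes = {}, {}, {}

    def set_nx(self, key, val):            # SET key val NX
        if key in self.kv:
            return False
        self.kv[key] = val
        return True

    def rpush(self, key, val):             # RPUSH key val
        self.lists.setdefault(key, []).append(val)

    def lrange_head(self, key):            # LRANGE key 0 0
        q = self.lists.get(key, [])
        return q[0] if q else None

    def drop_head(self, key):              # LTRIM key 1 -1
        self.lists[key] = self.lists.get(key, [])[1:]

    def hset(self, key, **fields):         # HSET key f1 v1 f2 v2 ...
        self.hashes.setdefault(key, {}).update(fields)

def enqueue(r, job_id, **details):
    # Deduplication: only the first enqueue of a given job_id wins
    if not r.set_nx(f"dedup:{job_id}", 1):
        return False
    r.hset(f"jobDetails:{job_id}", **details)
    r.rpush("queue", job_id)
    return True

r = MiniRedis()
enqueue(r, "resize:42", src="a.png")
enqueue(r, "resize:42", src="a.png")       # duplicate, silently dropped

job = r.lrange_head("queue")               # worker peeks the oldest job
details = r.hashes[f"jobDetails:{job}"]    # ...does the work...
r.drop_head("queue")                       # ...then removes it

print(job, details)                        # -> resize:42 {'src': 'a.png'}
print(r.lrange_head("queue"))              # -> None
```

Because the job is only removed after the work finishes, a crashed worker just leaves the job at the head of the queue for the next run - which is fine under the "another one will show up in a few minutes" caveat above.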
date: 2023-06-19T06:12:47.302-04:00
lastmod: 2024-12-10T19:48:03.260Z
---
-# Started from *Free* now we're here
-Neopets was the first time I'd ever heard about HTML or CSS. I had been playing terrible games for a long time at this point, even digging into basic at one point to try and do something.. but I didn't have the interest at the time. I didn't have anyone in my (or my parents) circle that was interested in computers at the time, so I never even considered what was possible. Neopets was a free game where you took care of a digital pet and then played a ton of games to earn in-game money. They even had a huge "Auction house" that was made up of individual stores that users ran. You could start a store, and set up your own items for sale. It was amazing. It also gave you a little area to enter HTML/CSS snippets so you could customize your store. That was why I bothered learning HTML/CSS at all. To customize my neopets store. They had the same interface for your clan page - customize your entire clan page with HTML and CSS. It was amazing. It was the first time anything related to "programming" clicked and I realized I loved it.
+Neopets was the first time I'd ever heard about HTML or CSS. I had been playing terrible games for a long time at this point, even digging into BASIC at one point to try and do something.. but I didn't have the interest at the time. I didn't have anyone in my (or my parents') circle that was interested in computers at the time, so I never even considered what was possible. Neopets was a free game where you took care of a digital pet and then played a ton of games to earn in-game money. They even had a huge "Auction house" that was made up of individual stores that users ran. You could start a store, and set up your own items for sale. It was amazing. It also gave you a little area to enter HTML/CSS snippets so you could customize your store. That was why I bothered learning HTML/CSS at all. To customize my Neopets store. They had the same interface for your clan page - customize your entire clan page with HTML and CSS. It was amazing. It was the first time anything related to "programming" clicked and I realized I loved it.
I don't remember a single ad on Neopets trying to force me to buy a premium currency. There probably were some.. but I don't remember them at all.
One in particular, Host Matrix, caught my attention - because I love the Matrix. They were having a promotion where if you wrote long form tutorials for them (around HTML/CSS/PHP) you could get free hosting! And so I did. I wrote a lot of tutorials so that I could get free hosting and continue into this weird world I had stumbled into.
-Around the same time I picked up the guitar. My dad had played the guitar since his teens, and so there was always one around. I had never shown an interest in it.. but one day I picked it up and started learning how to play. Mostly because of tabs (tabulature) - a fingering based musical notation that works beautifully for the guitar. People, on the internet, would listen to songs, figure out what they were playing.. and then write it down and put it on the internet.. for free! I devoured everything I could find. Eventually I got good enough that I could write my own tabs and share with others! It was incredible.
+Around the same time I picked up the guitar. My dad had played the guitar since his teens, and so there was always one around. I had never shown an interest in it.. but one day I picked it up and started learning how to play. Mostly because of tabs (tablature) - a fingering-based musical notation that works beautifully for the guitar. People, on the internet, would listen to songs, figure out what they were playing.. and then write it down and put it on the internet.. for free! I devoured everything I could find. Eventually I got good enough that I could write my own tabs and share with others! It was incredible.
When I started branching out musically.. it was because of free access to it. The first time I heard Ravenous by [Arch Enemy](https://archenemy.net/en/) my mind was blown - I ran home and downloaded the song. I listened to it every day for a week and then started learning it. When I discovered Children of Bodom and Kalmah it was the same thing. Obscure bands that I would never have had a hope of finding otherwise, accessible to me because it was free.
Don't get me wrong, I understand there were costs to creating this. I've spent the last decade of my life building SaaS software for companies and managing infra budgets. I understand the cost. But also, I understand that I wasn't a customer. Arch Enemy didn't lose money by having me download their album. I didn't have access to purchase their album and I didn't have the money to do it. Arch Enemy gained a fan that would later on buy tickets to shows, and albums, and t-shirts. A fan that would have never bothered if he hadn't heard their albums for free.
-Free played a huge role in my life and I've always loved it. I donate a few hours every week to local startups that need technical advice. I hang out in slack groups answering questions. I hang out in Magic discord groups answering questions. I write software and give it away for free. I do it because I owe so much to free.
+Free played a huge role in my life and I've always loved it. I donate a few hours every week to local startups that need technical advice. I hang out in slack groups answering questions. I hang out in Magic discord groups answering questions. I write software and give it away for free. I do it because I owe so much to free.
And now I sit here trying to figure out monetization strategies for my projects.. and I don't like it. I wish I could do it for free, but there are costs associated with it that I know I won't be able to subsidize forever. I know I want as much of it to be free as possible. I don't want people to feel like they need to pay, I want them to want to. So I have to think about ways to monetize [Rising Legends](https://www.risinglegends.net) that are in line with my philosophies...
summary: Why government services matter
lastmod: 2024-12-17T04:23:18.636Z
---
When the Canada Post strike hit the headlines, I found myself scrolling through social media, sipping my morning coffee, and shaking my head at the comments. “Just get rid of Canada Post already,” one user declared, punctuating their argument with a digital shrug: “It’s useless these days.” Another chimed in, “Private companies do it better anyway.”
Chesterton’s Fence teaches us that the past is worth understanding before we discard it. Canada Post, with all its quirks and inefficiencies, exists for a reason. It’s a reminder that some systems, however antiquated they may seem, are worth preserving—not because they’re perfect, but because they serve a purpose that private solutions often fail to replicate.
---
One thing I'm trying to add to my blog posts is references for what I'm talking about.
**Further Reading:**\
• G.K. Chesterton’s original, "What's Wrong with the World": [Read Online](https://www.gutenberg.org/ebooks/1717)\
• Understanding Chesterton’s Fence: [Link](https://fs.blog/chestertons-fence/)\
• Canada Post’s mandate and role: [Government of Canada](https://laws-lois.justice.gc.ca/eng/acts/C-10/index.html)\
• The Economic Role of Public Postal Services in Rural Areas: [Link](https://www.wider.unu.edu/)\
summary: Setting up your own git repo browser
lastmod: 2024-12-10T19:50:13.254Z
---
## Owning Your Digital Space
within a specified folder. You simply install gitweb, point nginx over to it,\
and edit a single configuration file. You immediately get:
- A browser for all local git projects
- A tree view for your repos with raw file previews
- Commit history w/ colorized diffs
- Snapshot downloads
- RSS feed tracking commit history
- Search (with regex) throughout your repos
For personal projects, or even for small collaborative projects gitweb provides\
more than enough functionality.
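The nginx side of that setup might look something like the sketch below. Gitweb is a Perl CGI script, so nginx needs a CGI shim like fcgiwrap in front of it; the script, config, and socket paths here are assumptions that vary by distro, so check where your packages actually put things.

```nginx
# Hypothetical nginx server block for gitweb via fcgiwrap.
# All filesystem paths below are assumptions; adjust for your distro.
server {
    listen 80;
    server_name git.example.com;

    # Static assets (css, js, logo) ship alongside the cgi script
    location / {
        root /usr/share/gitweb;
        index gitweb.cgi;
    }

    # Hand the CGI script itself to fcgiwrap
    location /gitweb.cgi {
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME /usr/share/gitweb/gitweb.cgi;
        fastcgi_param GITWEB_CONFIG /etc/gitweb.conf;
        fastcgi_pass unix:/run/fcgiwrap.socket;
    }
}
```

The `GITWEB_CONFIG` variable points gitweb at the configuration file discussed below.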
```perl
# sets the title in the <title></title> html tag
$site_name = "My Site";

# by default the root of your gitweb is called "projects".
# I simply changed that to Home and explicitly set the url
# that users get directed to when they click it
$home_link_str = "Home";
$home_link = "https://git.xangelo.ca";

# There's a small "Header" section above the project listing
# that you can customize with whatever text you want. This
# allows you to specify an html file that should be used
# in that area
$home_text = "/path/to/file.html";
```
## Resources
- Git Docs: https://git-scm.com/docs/gitweb.html
- Gitweb Source: https://repo.or.cz/w/git.git/tree/HEAD:/gitweb/
- My projects: https://git.xangelo.ca
medium_link: https://xangelo.medium.com/posse-has-it-backwards-ca9ab4d5b529?source=rss-d5a790d38792------2
slug: posse-has-it-backwards
title: POSSE Has it Backwards
summary: The Presentation of Self in Every Blog Post
---
I’ve been blogging for a number of years now, even though my website makes it seem like I stopped. You can check the wayback machine and see posts from [2011](https://web.archive.org/web/20110507234835/http://xangelo.ca) before composer was a thing in the PHP world, or you can jump back to [2005](https://web.archive.org/web/20050415040309/http://www.xangelo.com/) when I was more focused on the design of the website than the content itself. I’ve tried almost every blogging platform out there, and a few that never made it off my hard drive.
A few years ago I moved over to Hugo and static sites and decided to set up something a bit more barebones, but intentional. I bought into the concepts behind POSSE (Publish (on your) Own Site, Syndicate Everywhere) and worked to make sure that my content would survive first. I wrote in markdown, published to my own git server (and GitHub as well) so that I could invoke GH Actions to actually build my website. You can check out the behind-the-scenes setup [here](https://github.com/AngeloR/angelor.github.io). I actually really like this process, but I realized something.
I am not the same person I am on Instagram as I am on Twitch.
The truth is that I shift tone and intention in every room I am in, based on the room itself. Erving Goffman talks about this extensively in his book *The Presentation of Self in Everyday Life*; as the backdrop of our stage changes we present different sides of ourselves. This isn’t wrong or incorrect, and it isn’t being “fake”. This is the truth of who we are as people.
The sorts of comments I leave on a LinkedIn post are vastly different from what I leave on X. They’re both facets of who I am, but they exist within the bounds of those systems. The systems inform how I interact with them.
The same tools that power POSSE can power EPOSS, you just need to point them the other way.
For example, this post, even though it shows up on Medium, also appears on my website (https://xangelo.ca). It does this because Medium supports RSS and I can use that RSS feed to generate a post in markdown and feed it back to Hugo. This particular script looks at the RSS feed defined at RSS_URL and writes markdown versions of it to content/posts/medium so that I can track which posts are being imported: <https://github.com/AngeloR/angelor.github.io/blob/main/.github/scripts/medium_to_hugo.py>
EPOSS isn’t the antithesis of POSSE, it’s an evolution.
summary: Pagination techniques and trade-offs
lastmod: 2025-02-14T15:45:12.879Z
---
Pagination is an interesting topic. Actually, to be fair, most topics are interesting once you make it past the surface. Pagination is no exception. The goal of pagination is simple: returning every single item in a list can be challenging along many axes, so it's easier to split that list into chunks and return the chunks.
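As a minimal, framework-free sketch of that idea, chunking an in-memory list by a limit and an offset looks like this:

```typescript
// Minimal illustration of limit/offset chunking over an in-memory list.
function page<T>(items: T[], limit: number, offset: number): T[] {
  return items.slice(offset, offset + limit);
}

const all = ["a", "b", "c", "d", "e"];
page(all, 2, 0); // first chunk: ["a", "b"]
page(all, 2, 2); // second chunk: ["c", "d"]
page(all, 2, 4); // last chunk: ["e"]
```

The database version of this is exactly `LIMIT` and `OFFSET`, which is what the backend below does.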
```typescript
// Backend: Express.js + PostgreSQL
import express from "express";
import { Pool } from "pg";

const app = express();
const pool = new Pool({
  user: "your_user",
  host: "your_host",
  database: "your_db",
  password: "your_password",
  port: 5432,
});

app.get("/items", async (req, res) => {
  try {
    const limit = parseInt(req.query.limit as string) || 25;
    const offset = parseInt(req.query.offset as string) || 0;

    const { rows } = await pool.query(
      "SELECT * FROM items ORDER BY id ASC LIMIT $1 OFFSET $2",
      [limit, offset]
    );

    // We need the total row count to know whether a next page exists
    const { rows: countRows } = await pool.query(
      "SELECT COUNT(*)::int AS count FROM items"
    );
    const count = countRows[0].count;

    res.json({
      items: rows,
      pagination: {
        offset,
        nextOffset: offset + limit < count ? offset + limit : null,
        prevOffset: offset - limit >= 0 ? offset - limit : null,
      },
    });
  } catch (error) {
    res.status(500).json({ error: "Internal Server Error" });
  }
});

app.listen(3000, () => console.log("Server running on port 3000"));
```
### Frontend Example
```tsx
// Frontend: React + Fetch API
import { useState, useEffect } from "react";

interface Item {
  id: number;
  name: string;
}

interface Pagination {
  offset: number;
  nextOffset: number | null;
  prevOffset: number | null;
}

export function ItemList() {
  const limit = 25;
  const [offset, setOffset] = useState(0);
  const [items, setItems] = useState<Item[]>([]);
  const [pagination, setPagination] = useState<Pagination | null>(null);

  useEffect(() => {
    fetch(`/items?limit=${limit}&offset=${offset}`)
      .then((res) => res.json())
      .then((data) => {
        setItems(data.items);
        setPagination(data.pagination);
      });
  }, [offset]);

  return (
    <div>
      <ul>
        {items.map((item) => (
          <li key={item.id}>{item.name}</li>
        ))}
      </ul>
    </div>
  );
}
```
## Cursor Pagination
```typescript
// Backend: Express.js + PostgreSQL (Cursor-Based Pagination)
import express from "express";
import { Pool } from "pg";

const app = express();
const pool = new Pool({
  user: "your_user",
  host: "your_host",
  database: "your_db",
  password: "your_password",
  port: 5432,
});

app.get("/items", async (req, res) => {
  try {
    const limit = parseInt(req.query.limit as string) || 25;
    const cursor = req.query.cursor as string | null;

    let query = "SELECT * FROM items ORDER BY created_at ASC LIMIT $1";
    let params: any[] = [limit];
    if (cursor) {
      query =
        "SELECT * FROM items WHERE created_at > $1 ORDER BY created_at ASC LIMIT $2";
      params = [cursor, limit];
    }
    const { rows } = await pool.query(query, params);

    const nextCursor =
      rows.length > 0 ? rows[rows.length - 1].created_at : null;

    res.json({
      items: rows,
      pagination: {
        limit,
        nextCursor,
      },
    });
  } catch (error) {
    res.status(500).json({ error: "Internal Server Error" });
  }
});

app.listen(3000, () => console.log("Server running on port 3000"));
```
### Frontend
```tsx
// Frontend: React + Fetch API (Cursor-Based Pagination)
import { useState, useEffect } from "react";

interface Item {
  id: number;
  name: string;
}

interface Pagination {
  limit: number;
  nextCursor: string | null;
}

export function ItemList() {
  const limit = 25;
  const [cursor, setCursor] = useState<string | null>(null);
  const [items, setItems] = useState<Item[]>([]);
  const [pagination, setPagination] = useState<Pagination | null>(null);

  useEffect(() => {
    const url = cursor
      ? `/items?limit=${limit}&cursor=${cursor}`
      : `/items?limit=${limit}`;
    fetch(url)
      .then((res) => res.json())
      .then((data) => {
        setItems((prev) => [...prev, ...data.items]);
        setPagination(data.pagination);
      });
  }, [cursor]);

  return (
    <div>
      <ul>
        {items.map((item) => (
          <li key={item.id}>{item.name}</li>
        ))}
      </ul>
    </div>
  );
}
```
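One trade-off worth calling out between the two approaches: with offsets, an insert that lands before your window shifts the whole window, so the next page can re-serve rows you already saw (or skip rows on delete). A cursor keyed to the last row you returned doesn't drift. A quick in-memory sketch of both behaviors:

```typescript
// Why cursors: with offsets, an insert between requests shifts the window.
// Sketch over an in-memory list of rows sorted ascending by id.
type Row = { id: number };

function byOffset(rows: Row[], limit: number, offset: number): Row[] {
  return rows.slice(offset, offset + limit);
}

function byCursor(rows: Row[], limit: number, cursor: number | null): Row[] {
  const remaining = cursor === null ? rows : rows.filter((r) => r.id > cursor);
  return remaining.slice(0, limit);
}

let rows: Row[] = [{ id: 1 }, { id: 2 }, { id: 3 }, { id: 4 }];

// Page 1 either way: ids 1, 2
const p1 = byOffset(rows, 2, 0);
const cursor = p1[p1.length - 1].id; // 2

// A new row with id 0 is inserted at the front between requests
rows = [{ id: 0 }, ...rows];

// Offset page 2 re-serves id 2 (a duplicate): byOffset(rows, 2, 2) -> ids 2, 3
// The cursor picks up exactly where page 1 left off: byCursor(rows, 2, cursor) -> ids 3, 4
```

The cost is that cursors only move forward through a fixed ordering; you lose "jump to page 7".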
- publish
lastmod: 2024-12-12T05:30:40.335Z
---
Last week we ([Adam Cochran](https://twitter.com/AdamScochran) and myself) launched [LootCap](https://lootcap.com). The goal is to provide tracking on a new class of tokens called [Loot](https://medium.com/@adamscochran/what-are-loot-tokens-understanding-an-emerging-asset-class-380b0cc38749). It's been a few years since I worked at Vault of Satoshi, and in that time I fell a bit out of touch with Crypto Currencies. I always felt that most of the buzz around them was focused on the tech or the coin itself. It never provided any value. It felt like most coins out there were focused on trying to re-create the trajectory of Bitcoin rather than trying to DO anything. Ethereum was different. It was different enough to force me to pay attention. It allows you to run [Smart Contracts](https://github.com/ethereumbook/ethereumbook/blob/develop/07smart-contracts-solidity.asciidoc#what-is-a-smart-contract). In recent years you've likely seen "Tokens" suddenly gain popularity. Those are primarily powered by Ethereum. Again, however, it felt like tokens were just trying to recreate the Bitcoin boom.
We are using two different 3rd party libraries on the front-end:
- Mvp.css - https://andybrewer.github.io/mvp/ - because we wanted a simple, classless, css framework as a base
- BigNumber - https://mikemcl.github.io/bignumber.js - because dealing with crypto numbers in JS can be challenging.
We aren't using React, and so for the times where we have to update the Dom we're using this function and interacting directly with the Element nodes.
```javascript
const $ = (selector, root) => {
  root = root || document;
  return root.querySelectorAll(selector);
};
```
We're also relying on `fetch` since all our endpoints are very simple `GET` based requests.
```javascript
function route() {
  const routeTable = [{ name: "route-name", route: REGEX_MATCH, handler: fn }];
  $(".route.active")[0].classList.remove("active");
  const requestedRoute = window.location.hash.split("#")[1] || "/";

  if (
    !routeTable.some((route) => {
      if (route.route.test(requestedRoute)) {
        if (route.handler) route.handler(route.route.exec(requestedRoute));
        $(`#route-${route.name}`)[0].classList.add("active");
        return true;
      }
    })
  ) {
    // clear the info page
    $("#route-info")[0].innerHTML = "";
    $(".route.default")[0].classList.add("active");
  }
}
```
The idea behind serverless infrastructure is simple: you write apps that are designed as "functions". These functions are deployed and spun up/down as necessary. Scale becomes pretty straight-forward. The infrastructure is available, it's just a matter of:
- How much do you want to pay?
- What's the cold-start time on your app?
The payment side is pretty straightforward. Since serverless functions tend to be small apps that cater well to "bursty" request patterns, they tend to be cheaper to run.
CloudFlare Workers are essentially serverless functions, except they run at every CloudFlare Point-of-Presence (PoP). They have milliseconds of cold-start time, and they run globally. They do have way more restrictions than traditional serverless offerings:
- 1MB compressed filesize
- 50ms CPU time
- max 30 workers
- < 50 subrequests per worker (redirects count!)
But I felt that if we COULD work within their limits, it would bring us closest to what we needed. They also offered an eventually consistent KV store that was accessible from the workers. Since our data was cached anyway, we didn't care about the eventual consistency - worst case we served stale data for a minute more than we expected.
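As an illustration of that stale-tolerant read path (the names here are hypothetical, and `kv` stands in for a Workers KV namespace binding, modelling only the `get()` call we use so the logic runs anywhere):

```javascript
// Sketch of the stale-tolerant cache read described above. "kv" stands in
// for a Workers KV namespace binding; only get() is modelled here.
async function serveFromCache(kv, key) {
  // KV reads are eventually consistent: a PoP may briefly return stale
  // data, which is fine for us because the stored value is itself a cache.
  const cached = await kv.get(key);
  if (cached !== null && cached !== undefined) {
    return { status: 200, body: cached };
  }
  // Nothing cached yet at this PoP
  return { status: 503, body: JSON.stringify({ error: "cache empty" }) };
}
```

In a real Worker the binding is configured on the namespace and the return values would be `Response` objects, but the shape of the logic is the same.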
{{/*
The MIT License (MIT)

Copyright (c) 2022 M E Leypold

Permission is hereby granted, free of charge, to any person obtaining a copy of
this software and associated documentation files (the "Software"), to deal in
the Software without restriction, including without limitation the rights to
use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of
the Software, and to permit persons to whom the Software is furnished to do so,
subject to the following conditions:

The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS
FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR
COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER
IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
*/}}
{{ define "title" }}
{{.Page.Title}} | {{.Site.Params.siteBasename}}
{{ end }}
{{ define "main" }}
<h1>{{.Page.Title}}</h1>
{{.Content}}
{{ if .Params.tags }}
<div class="tag-list">
{{ range .Params.tags }}
[ <a href="{{ "/tags/" | relURL }}{{ . | urlize }}">{{ . }}</a> ]
{{ end}}
</div>
{{ end}}
<div class="post-date">
Posted {{ .Date.Format "Monday, January 02, 2006"}}
</div>
{{ end }}