
inkel

Software programmer interested in SRE, DevOps, Machine Learning and Augmented Reality.

How to properly check environment variables values in Go

1min.

One thing I’ve found fairly often in Go programs is people checking whether an environment variable is defined by using the os.LookupEnv function, which returns a string with the value and a boolean indicating whether the variable is present. However, what most people seem to miss is the note in its documentation that the returned value might still be empty!

Take for instance the following program that you can try in the Go Playground:

package main

import (
	"fmt"
	"os"
)

func main() {
	os.Setenv("FOO", "bar")
	printEnv("FOO")

	os.Setenv("FOO", "")
	printEnv("FOO")

	os.Unsetenv("FOO")
	printEnv("FOO")

}

func printEnv(v string) {
	ge := os.Getenv(v)
	le, ok := os.LookupEnv(v)
	fmt.Printf("Getenv(%[1]q) => %[2]q\nLookupEnv(%[1]q) => (%[3]q, %[4]t)\n\n", v, ge, le, ok)
}

If you run this, the results might not be what you were expecting:

Getenv("FOO") => "bar"
LookupEnv("FOO") => ("bar", true)

Getenv("FOO") => ""
LookupEnv("FOO") => ("", true)

Getenv("FOO") => ""
LookupEnv("FOO") => ("", false)

Now you know: the proper way to check an environment variable is to use os.LookupEnv and check both the boolean and that the string isn’t empty; otherwise you might introduce a bug in your program.
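A tiny helper makes this check reusable; here’s a minimal sketch (the requireEnv name is my own invention, not something from the standard library):

func requireEnv(v string) (string, bool) {
	// Report the value as present only when the variable is
	// both set and non-empty.
	val, ok := os.LookupEnv(v)
	return val, ok && val != ""
}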

PS: how cool is it that you can refer to arguments in a formatting string using their position?


Replace Go module with local version

1min.

I was going to write about this, but Pam Selle already did it: Use a Local Version of a Library in Go.

Long story short: just run go mod edit -replace path/to/module=/absolute/local/path and you’re good to go! Now when you compile your Go project it will use the changes in your local environment and not those published in the Go module registry.
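For instance, assuming a (hypothetical) module github.com/you/somelib checked out at /home/you/src/somelib, the command adds a replace directive like this to your go.mod:

replace github.com/you/somelib => /home/you/src/somelib

When you’re done, go mod edit -dropreplace path/to/module removes it again.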


Change several Git commits' author easily

4min.

Recently I changed jobs and, among many other things, that meant my work email address changed. Of course, I had already committed stuff using my old email address, so those commits now carry the wrong one. How can I fix that?

Luckily the solution isn’t difficult, though it took me a while to properly figure it out.

The first thing I needed to do was to configure my repository to use my work email address:

path/to/work-repo $ git config user.name "Leandro López (inkel)"
path/to/work-repo $ git config user.email "work@example.com"

Note that I’m doing this only for my work related repositories, so all of my other work (e.g. public or open source) uses my personal email address.

Next I want to know the SHAs of the commits that are using my personal email address. I can use the --author flag of git-log(1) to get that information easily:

$ git log --pretty=format:"%h - %aN <%ae> - %s" --abbrev-commit --author=personal@example.com
86b860dcc - inkel <personal@example.com> - Use stringSet for collecting child modules
9822062a9 - inkel <personal@example.com> - Use stringSet for collecting top root modules
d6d397f3c - inkel <personal@example.com> - Add stringSet type to collect unique set of strings

Now what I want to do is reset the author to the one I configured for this repository without editing the commit message; additionally, I want to add a Signed-off-by line at the end of each commit message. One way to do this is to check out each commit and manually execute the following:

$ git commit --amend --signoff --author="Leandro López (inkel) <work@example.com>"

But as you can figure out, this becomes tedious if you have multiple commits. So let’s use the power of git-rebase(1) and automate this:

$ git rebase --exec='git commit --no-edit --amend -s --reset-author' d6d397f3c^
Executing: git commit --no-edit --amend -s --reset-author
[detached HEAD f29b70ad2] Add stringSet type to collect unique set of strings
 2 files changed, 120 insertions(+)
 create mode 100644 docker/terraform/automation/stringset.go
 create mode 100644 docker/terraform/automation/stringset_test.go
Executing: git commit --no-edit --amend -s --reset-author
[detached HEAD 3ca17042b] Use stringSet for collecting root modules
 1 file changed, 7 insertions(+), 21 deletions(-)
Executing: git commit --no-edit --amend -s --reset-author
[detached HEAD 5d168a566] Use stringSet to collect modules
 1 file changed, 3 insertions(+), 14 deletions(-)
Successfully rebased and updated refs/heads/foo.

And that’s it! All the commits got fixed to use my work email address instead.

What is going on?! Let’s dig into it

First is the --exec flag. This tells Git to execute the given command after applying each commit in the list. In this case the command is git commit --no-edit --amend -s --reset-author, which amends the commit (--amend) without changing the commit message (--no-edit) except for adding a Signed-off-by line at the end (-s, short for --signoff), and changes the author to the one this repository should use (--reset-author; remember we added a custom configuration for it).

Then comes the commit range, in this case d6d397f3c^. Look again at the list of commits above: the SHA is the same as the one at the bottom, which is the first commit of the list (they are shown in reverse order). The ^ at the end of the SHA refers to that commit’s parent, so the rebase replays every commit after the parent, starting from d6d397f3c itself.

And that’s it! Once this is done, I pull the list of commits again and it looks right:

$ git log --pretty=format:"%h - %aN <%ae> - %s" --abbrev-commit --author=work@example.com
5d168a566 - Leandro López (inkel) <work@example.com> - Use stringSet for collecting child modules
3ca17042b - Leandro López (inkel) <work@example.com> - Use stringSet for collecting top root modules
f29b70ad2 - Leandro López (inkel) <work@example.com> - Add stringSet type to collect unique set of strings

This is a very particular use case, but what else could we do with it? You can pass anything to the --exec flag, not only Git commands, so you could, say, run your test suite for each commit; if the command fails, the rebase stops, allowing you to fix things before continuing. Definitely a very powerful tool to have in your toolbox!
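For instance, in a Go project something like the following (the main base branch is just an example) would verify that every commit in the branch keeps the tests green:

$ git rebase --exec='go test ./...' main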


Hello, Grafana Labs!

2min.

It’s been a while since my last post here, not because I’m lazy (which I am), but because, as you know, I recently changed jobs. It’s been almost a month at my new place, and now I feel confident enough to say it: I joined Grafana Labs as an SRE for the internal cloud platform!

For most of my career I worked as a backend developer, only dabbling sporadically in operations roles, so I would be lying if I said that impostor syndrome didn’t kick in during my first few days in this new role. Luckily my team is super friendly, helpful, and understanding, and my confidence grows as the days pass. I think I made the right decision jumping into this new adventure.

Switching jobs is never easy, especially after almost a decade at my previous work, and thus some of my routines were affected, like the weekly cadence of my newsletter or writing on this blog. I hope to get back into rhythm soon.

I’ve spent this past month mostly going through the onboarding process, getting to know my team and some other teams' leaders, and familiarizing myself with the codebase and tools like Grafana, Loki, and Tanka. I’ve even found the time to contribute a performance improvement to a third-party library!

There are still lots of things to learn, but for now I’ll continue to learn more about this amazing company and the work I’ll be doing, improving my skills, and learning a few more tricks. The future looks really bright!


Goodbye, Theorem

2min.

Today is my final day at Theorem after almost 10 years. It is a day full of mixed feelings for me, as I’m excited for what’s waiting for me next week (more on that, well, next week) and saddened to be leaving this amazing team of friendly and talented human beings.

When I joined Citrusbyte, as the company was named at the time, we were fewer than 20 people working in a hectic but fun and creative environment. And while it might seem hard to believe, that same spirit is still around all these years later, although luckily it is not (that) hectic anymore.

Not only did the company grow throughout, but so did I. I went from being a single guy who had just bought an apartment and did nothing more than programming and reading all day, to a father of two and owner of a house. And this company always had my back, because all of us employees are treated like people.

But I digress…

Yesterday I had a farewell Meet call with lots of teammates, and I would be lying if I said that afterwards my ego wasn’t the size of a continent. I’m happy and proud of leaving a mark on Citrusbyte/Theorem’s culture, and I will forever be grateful and honoured for this past decade. And who knows, maybe someday I’ll return with new tricks to share.

Me as King inkel at Gargonza Castle, Italy

inkelinks S02E03

1min.

Yesterday episode S02E03 of my newsletter was released. I hope you enjoy it! And help me make it better.


inkelinks S02E02

1min.

I might be on vacation, but the second episode of the 2021 edition of my newsletter is out. Enjoy and share it with your friends!


inkelinks S02E01

1min.

A new year has begun, and a new issue of my newsletter is out. Check it out!


Enabling Comments

1min.

I’ve been looking for a solution to add comments to this blog, but none of them convinced me, as they all required adding some JavaScript or an iframe, which doesn’t feel right. So I decided to try a new approach and enabled GitHub Discussions in the repository where this blog’s content is hosted. I still have to create an entry for each post, which is a tiresome process as there doesn’t seem to be an API for it yet; I also haven’t decided whether that’s something I’d like to have, or if I’d rather have you create a new discussion when needed.


inkelinks S01E06 - Season Finale

1min.

The last one of the year! This episode is full of interesting links, but the one I like the most is the video at the end. I hope you enjoy it!


Bye 2020

1min.

It’s finally over. This awful year has ended. My work life hasn’t changed much, although I had to organize my work blocks in a better way. My personal life, on the other hand, was quite affected, especially for my kids and partner: my kids had to start kindergarten classes via Zoom, and my partner’s work life came almost to a complete stop.

I did manage to do some stuff that I find nice to share, and hopefully will continue doing in 2021:

  • I dedicated more time to this blog again.
  • I have a newsletter now; please go and subscribe!
  • For the first time in my life I joined the Advent of Code, although for different reasons I’ve stopped working on the challenges; I plan on continuing with them during my upcoming vacations.

I don’t believe in magical dates, and I know that January 1st, 2021 won’t automatically be better: COVID-19 still exists, travel is restricted, the economy in this country (Argentina) isn’t getting any better, etc. But I’m looking forward to seeing what the new year will bring, and to working on making things better, not only for me but for my family.


inkelinks S01E05 - Christmas edition

1min.

This week’s episode was delivered a little bit earlier because this Jesus guy stole my thunder and decided to celebrate his birthday on Friday. Anyway, if you are not happy with GitHub’s repository page UI, want to program without using your keyboard, or are in doubt whether you should use a NoSQL database or an old RDBMS, then you don’t want to miss this episode.


inkelinks S01E04

1min.

Today I brought you a little bit of space and programming, more programming history, how to get recognized at work, designing better documentation, and speeding up your CI pipelines by cloning your repositories in a faster way. You don’t want to miss it!


Advent of Code 2020: Day 7

1min.

I had a very busy couple of days at work, so I couldn’t get much done in Advent of Code. It took me a while to figure out the solution for day 7, especially part 2. But I did it.

I think the biggest issue I faced was that I was still trying to think in terms of procedural programming instead of functional programming. I’m definitely not happy with the solution, again especially part 2, but it works. I’m definitely looking forward to refactoring the code and finding better solutions.


inkelinks S01E03

1min.

The wait is over! Issue #3 of inkelinks is out. Feel free to check it for a weekly dose of design, history, and speeding up your Bash shell startup times.


Advent of Code 2020: Day 6

1min.

So far these are the challenges that I liked the most, I think. Also, I’ve had a feeling lately that the code is a mess (to be expected, because I’m learning), and this particular day’s solutions showed me exactly that: there are lots of refactorings to apply that will make the code more reusable and maintainable. I don’t plan on doing that yet, but it’s likely I’ll do it in the upcoming days.


Advent of Code 2020: Day 5

1min.

These day 5 solutions took me a little longer, yet nothing compared to how long it took me when I started with Clojure.

One thing that I always knew theoretically, but can now confirm after working with Clojure these last few days, is that its level of expressiveness is astonishing. I was very comfortable programming in Ruby because I felt it was easy to translate my thoughts into code, and I’ve got to confess that while I’m nowhere near that level of comfort with Clojure yet, I’m liking it more and more as each day passes.


Advent of Code 2020: Day 4

1min.

Last night I solved day 4 part 1 of the Advent of Code 2020 challenge, and this morning I finally solved part 2, thus completing day 4. While arguably days 2, 3, and 4 have “simpler” challenges than day 1, they were still challenging, especially given how unfamiliar I am with Clojure’s syntax and API.

I feel more confident as each day passes, and I’m getting more proficient with the environment as well. I hope that by the end of Advent of Code I feel confident enough to try Clojure in a small project at work.


Advent of Code 2020: Day 3

1min.

I’m on a roll! Day 1 was painful; on day 2 I made some improvements, and now day 3 was solved in less than 2 hours as well!

I’m getting more familiar with Clojure and the tools I use for this project: Leiningen, and Emacs + CIDER. About the latter, I know I could be more efficient if I knew the commands better, but so far it is really useful, and I’m not only learning tons but also in awe at how well integrated this tool is.


Advent of Code 2020: Day 2

1min.

Funny how day 1 took me four days to solve, but I was able to solve day 2 in just an hour. I guess the challenges are not strictly in increasing order of complexity.

I also think my Clojure programming style is getting better as time passes, which in the end was my main objective.

Let’s see what day 3 has in store…


Advent of Code 2020: Day 1

1min.

I did it! I was finally able to solve both Day 1 challenges in Clojure! And it only took me four days!

As said in my previous post, one of the reasons I’m participating in this year’s Advent of Code is that I want to learn Clojure, and while at times I felt frustrated because I couldn’t find the solution, I’m happy that I did. As with everything, in hindsight the solution is obvious now. Note that while the problems were similar, both solutions are quite different, and I think they showcase my improvement with the language and the functional way of thinking.

I’m looking forward to Day 2 challenges, and I hope to solve them in less than four days!


inkelinks S01E02

1min.

Issue #2 of my newsletter was delivered today. I’m really enjoying curating links and sharing them with friends!


Advent Of Code 2020

1min.

For years I’ve been a fan of the idea behind Advent of Code:

…an Advent calendar of small programming puzzles for a variety of skill sets and skill levels that can be solved in any programming language you like.

For a variety of reasons I’ve neglected participating in past editions, but this year I’ve finally decided to give it a try.

The biggest reason behind the decision to join this year was that for the longest time I’ve had the desire to become proficient in one of the many LISP languages out there, particularly Clojure, so I saw this as a chance to get more familiar with it, learn something new, and have fun while doing so.

There are already 2 puzzles going on and I haven’t even started, but when I do, I plan to publish everything to my GitHub account.


inkelinks S01E01

1min.

The first issue of inkelinks is out! Feel free to check it out, and subscribe if you haven’t done so yet!


I have a newsletter now: inkelinks

1min.

One thing I like to do is collect links for reading later, and let’s be honest, who doesn’t? I also usually share interesting links with folks at work, where I’ve earned a reputation as the links provider. So that got me thinking: why not create a newsletter where I can share these links with a broader audience? Thanks to Buttondown, I now have a newsletter for you: inkelinks.

Besides the newsletter I’ll be posting the links on this blog too, for those who prefer it that way. But I suggest using the form below so you don’t miss any weekly links!



If that doesn’t work for you, you can still subscribe from the newsletter homepage.


Am I a gamer now?

2min.

For years I disliked playing games. Never before in my life had I owned a gaming console, and the last game I played from start to end was The Secret of Monkey Island. But a couple of years ago I decided to buy a PlayStation 4, and I’m extremely happy with that decision. The main reason was of course to start playing myself, but also to raise my kids in a house where gaming is “normal”.

Why did I wait so long?!

I don’t think I’m a hardcore gamer, not even close, but since I bought my PS4 I play at least one hour almost every night, as a way of cooling off my mind before going to bed. I find it very relaxing and entertaining. I mostly play solo games; the idea of playing online with others doesn’t attract me, as I treat this as me time and don’t want to engage with others. I know I’m missing lots of fun, or so I’ve been told, but I’m happy with this decision.

Over the course of the upcoming months I’ll try and leave some “reviews” of the games I’ve played so far:

  • Horizon: Zero Dawn
  • God of War
  • Uncharted Saga
  • The Witcher 3
  • Detroit: Become Human
  • Mass Effect: Andromeda
  • Spider-Man
  • Rise of the Tomb Raider
  • The Last of Us
  • Dragon Age: Inquisition

I cannot finish this post without saying this: with the very first game I played, Horizon: Zero Dawn, I fell in love with the character, the world, and the story behind it:

Horizon: Zero Dawn

Uses This

7min.

I always liked the interviews at Uses This; I find it really interesting to learn about other people’s setups. And given that I do not foresee them interviewing me anytime soon, I went ahead and wrote my own.

Who are you, and what do you do?

I’m Leandro López, better known on the Internet as inkel (all lowercase). I work as a software developer at Theorem, mostly doing backend programming but also dabbling in operations when needed. I’m interested in programming as a science, machine learning, DevOps (whatever that means), and keeping systems running.

What hardware do you use?

My main machine is a MacBook Pro 2015 15", which I love. I also have a secondary Dell XPS with Windows 10. Meh.

I, like everyone else, do not like the new butterfly keyboard, and because of that I’m using a Keychron K2 as an external mechanical keyboard. I love this keyboard: it’s clicky but not too loud, it can connect to up to 3 devices, and it seamlessly changes from one to the other by pressing a couple of keys. The only thing I miss on this keyboard is a Touch ID reader.

For a long while my hands travelled from the K2 to my MBP touchpad, and that was becoming tedious. I love the touchpad, but buying an external one was costly in Argentina, so I bought a trackball for the first time in my life, as I also wanted to give them a try. I got a Logitech Ergo MX and boy, am I happy with it. It didn’t take me long to get used to it, and while it’s not the same as the touchpad, it is customizable, so I can do many things without taking my hand off the trackball. Still, I would like it to have a couple more buttons so I could customize it even more.

As I work in a 100% remote company, calls happen almost daily. For that reason I mostly use Apple EarPods; they work just great. At my office I have a pair of Logitech G533, which sound great and are wonderful, but have two main issues to my taste: 1. they are not Bluetooth, you need a USB receiver connected to your computer; 2. after 15 minutes of audio silence (i.e. no audio coming through the speakers) they power off, which is troublesome if you are doing a presentation and the rest of the audience is muted; I’ve already felt the pain of talking and talking and talking while people tried to tell me they couldn’t hear me, but I couldn’t hear them either because the headset was off. Sad emoji. I’ve also bought a pair of Philips SHB3075; they are comfortable and 100% Bluetooth, but for some reason they don’t work well with my MBP, so I have them paired with my iPhone.

When mobile I use an iPhone 8, one that needs a replacement because the screen shattered. I’ve been using iPhones since the iPhone 5 and never looked back. They are fast, pretty, and just work, like anything from Apple. Are they expensive? Yes. Are they worth it? Hell yes.

I prefer reading physical books to ebooks; nevertheless, I have an iPad Mini mostly for just that. It’s a great device with an excellent size for reading and taking from one place to another, without being a full-sized iPad.

Last but not least, as a developer I spend most of my days sitting, and I’m over 40, so a good chair is more than just a luxury. Because of that I own an Erasmo Onix, and my life (and back) has been happier ever since. I cannot stress enough how much I recommend spending serious money on a good chair.

And what software?

On my main computer, macOS as the operating system. It’s pretty, it works, it makes me happy. But I know I have to upgrade to Catalina soon.

Given I’m a developer, most of the things I write happen in Emacs. My init.el is far from perfect and I’ve neglected updating it for a long time. Sometimes I use Visual Studio Code (VSCode) or Visual Studio for Mac, but I always end up coming back to Emacs.

When I’m not in Emacs, I spend most of my time in a few iTerm2 tabs. I’ve tried other terminals like kitty, and while much faster, they’re definitely nowhere near it in UX. Besides using tabs I also use tmux a lot. A LOT. My tmux.conf is adequately tuned to my tastes, although it could use some improvement. Yes, I’ve heard about the iTerm and tmux integration; no, I haven’t tried it. I use Bash. I’m super comfortable with it, so I’m not looking to change to another shell in the short term.

Writing Markdown is something I find myself doing every so often, and for that I use Typora. It’s beautiful, elegant, fast; I love it. I might switch to Bear, though, as I’ve been trying it on my cellphone and it works really well, even without the Pro features; I might upgrade to Pro soon, I still haven’t decided. This blog is written using Typora for the content and Hugo as the static site generator.

My other computer runs Windows 10. Windows has improved a lot over the years, but it’s still, in my opinion, not a very developer-friendly operating system. For writing code, Visual Studio is undoubtedly one of the best IDEs there is, although of course mostly tied to developing within the Microsoft platform. I also have VSCode installed: sometimes I just need something that loads fast for quick edits. SQL Server Management Studio is another great tool, though it feels a bit dated. The CLI is where I feel Windows fails the most, and while the new Windows Terminal looks promising, it still lacks the smooth UX that other terminal emulators like iTerm2 have. The Windows Subsystem for Linux (WSL) is another great addition to the CLI on Windows, but again, still a bit clunky.

Languages are varied: I really enjoy writing Ruby, love toying with Go, and have enough fun working with .NET Core and C#. I like to have as much infrastructure as possible described using Terraform. And I really, really love writing scripts using Bash, AWK, sed, curl, and jq.

Last but not least is The Cloud. Nothing surprising there: GitHub for my own and Open Source code, and some work related stuff. Amazon Web Services and Microsoft Azure for hosting stuff.

What would be your dream setup?

I’d love to get my hands on one of the newer MacBook Pro 16" with at least 32 GB of RAM and an SSD of at least 1 TB. I don’t think I’m going to try a new mechanical keyboard for the time being, but the Keychron K8 looks like a good future replacement. Similarly with my mouse: I’ll continue with my Logitech Ergo MX, but I’m also looking forward to getting an Apple Magic Trackpad 2.

I’d like to improve my communications setup, probably by getting some AirPods Pro, or a cool Bluetooth headset.

My office needs some love, and due to the Covid-19 pandemic I’m not even going there, but once back I’d like to have one or two external monitors and a good microphone. I don’t have anything in mind yet, though.

As said, my phone needs a change, and I’m looking forward to buying an iPhone 11. I’d also like to leave my computer at the office but still have something portable and comfortable at home where I can do some coding, and that’s why I have an iPad Pro in mind.

I want to have my office devices accessible from home, and thus I’m looking at using Tailscale. I’ve only heard amazing things about them.


PowerShell grep

1min.

I kept forgetting how to perform the equivalent of grep(1) in PowerShell. The simple answer is Select-String. It is aliased to sls.

ipconfig | Select-String 192.168

I should probably add a permanent alias.


How do I organize my work blocks

5min.

After publishing my previous article, Working at Theorem: a typical workday, a co-worker asked me the following:

Can you go into the details of what happens during those “work” blocks? Do you frequently check Slack/email or only check them occasionally? Are you heads down for parts of the day? Do you have dedicated “help/review others” time or does it happen ad-hoc?

This is an interesting question, so here are the answers.

Checking email / Slack

I approach these two differently. Email is something I check only when I don’t have any other task or when I’m on a break. I don’t have alerts or notifications for new email, so checking it only happens in an active way. I consider email the best asynchronous communication tool available, so I treat it that way. I have several filters that label or archive emails as they arrive, so when I check them I don’t lose precious time triaging everything I’ve received.

Of course, working 100% asynchronously is a utopia when working in a team that interacts with an external customer, so synchronous communication is a must; at work we use Slack for this. I’ve enabled notifications, although I’ve muted several channels that only generate noise or are mostly for announcements. I could ignore those muted channels, however they are still important, so I check them a few times throughout the day, usually after catching up with email.

I try to be conscious of how much it affects others when I mention them on Slack, so I keep the usage of @here to a minimum and @channel only for when it is urgent or very important; using @everyone is completely off the table, unless a catastrophe happens. And when it comes to direct messages I always try to initially send only one message: a greeting plus whatever question or comment I’d like to communicate; this way the other person can quickly decide whether to dedicate some time to me or not.

Code reviews

A big part of my day consists of reviewing pull requests created by other members of the team. I’m not going to go into the details of how you should behave when doing PR reviews, there are already hundreds of posts dedicated to that: this is how I approach the task, YMMV.

First I read the title and description. If those are good and informative then I approach the review in a better mood: good programming is mostly about communication, so I hold prose to the same standards as code.

Second, I look at each commit individually. Writing clear and well scoped commits makes reviewing easier, as you can better understand the author’s intentions. PRs with just one or two gigantic commits are a bummer, and from time to time I try to teach people to write smaller commits next time.

Last but not least, I look at the code in detail. The first thing I look for is overall structure: is the code properly indented, and does it follow the rules and standards set for the project? Then I look at the semantics, trying to understand each decision, expecting to see well named variables and methods, easy to follow flow control statements, etc. I can be very nitpicky at times, so I have to keep myself at bay and not become an asshole. And yet there are times when you need to become one; luckily that’s not something I need to do often.

PR reviewing is an opportunity for both the author and the reviewer to grow as programmers and communicators. Treat it as a learning experience and not as a chore.

Time management

With the current pandemic my working conditions changed quite a bit, even though I was already working remotely. The biggest change was in time management. When I worked from my office I had dedicated time for things like checking email and Slack, diving deep into whatever tasks I was working on, and reviewing code. Now those dedicated time chunks are gone, so I try to manage my time as follows:

  • Checking email and Slack is something I do first thing in the morning, before and after lunch, and during any coffee or cigarette break;
  • Working on assigned tasks happens in 20-minute chunks. Nowadays it’s hard to find long stretches of working hours; 20 minutes is a good compromise between “I want to do something” and “gonna check on the kids, have a break”.
  • PR reviews I usually do after the above mentioned 20 minutes. In the past I scheduled them this way too, but after 40 minutes to 1 hour instead.
  • Lunchtime is blocked out in my calendar, daily from noon to 1pm, so we have some routine for the kids. I try to keep this on schedule so it doesn’t conflict with the rest of my calendar. Of course it doesn’t always happen, but so far it hasn’t been a problem.

Endnotes

As you can see I haven’t shared any truth-revealing insights, though I hope it helps others get more organized. This schema works for me; it might work for you, or not work at all.

Interested in working with me? Check our careers page and apply to any of our current openings. We are waiting for you ;)


Migrating DNSimple ALIAS records to AWS Route53

2min.

Last week I was tasked with migrating a DNS zone from DNSimple to AWS Route53. Overall it was pretty straightforward except when I had to migrate two ALIAS records. This is a special type of record that’s not part of the DNS specification, so there was no direct alternative.

Subdomain

Say that you have a subdomain www.example.com that was using an ALIAS record pointing to www.example.net. This is by far the easiest to move, as it only implies replacing the TXT record defining the alias with a CNAME.

Apex domain

This is where it gets complicated. If you had an ALIAS from example.com to example.net you cannot replace it with a CNAME, because apex domains do not support that. The solution is to use an A record, which loses the value of an alias: you would need to keep updating the IP address of the destination if it ever changes.

Summary

As you can see it’s not that complicated to migrate ALIAS records to AWS Route53; however, they do have some limitations. I went from this

www.example.com. 3600 IN TXT "ALIAS for example.net"
example.com.     3600 IN TXT "ALIAS for example.net"

to this

www.example.com.    3600    IN  CNAME   example.net.
example.com.        3600    IN  A       192.168.14.52

and achieved the expected results, though we now need to keep an eye on the IP address of the destination.


Working at Theorem: a typical day

4min.

I joined Citrusbyte (now Theorem) 9 years ago, and since day 1 it was a fully remote experience. Over the years I’ve learned a lot about how to organize myself for each new workday, although I never gave it much thought until a few days ago, when a candidate I was interviewing asked what a typical day at Theorem looks like. This post will try to address that question.

First and foremost, a disclaimer: by no means do I speak on behalf of Theorem or the rest of my teammates; these are entirely my own experiences and do not reflect the reality of all the great people working at this company.

I live with my girlfriend and our two lovely kids (4 years old and a year and a half), so keep that in mind while reading this post.

As you are aware, at the time of writing we are living in very strange times, in a global pandemic that has most of the world in quarantine, with people confined to their homes and working remotely. As stated earlier, Theorem has been 100% remote since the beginning, so the COVID-19 pandemic didn’t change much about how we work, although it had some effects.

Pre-pandemic typical workday

I have an office 20 blocks away from home, and my kids went to a kindergarten 4 blocks away from the office, so days started at 6:15 am to enjoy breakfast with the family; then we would drive the kids to school, drop them off, and head to the office. My workdays usually started, then, at 8:00 am.

Office

The very first thing I do on any given day is go through my emails and any pending notifications from the day before. If something requires my attention immediately I answer right away; otherwise, I either archive or snooze the message to a later time if required.

Next is checking the status of any ongoing tasks I have been working on previously, and paving the way for what’s next. Then, off to work.

Around noon either I or my girlfriend goes to pick up the kids from school and takes them home for lunch, then heads back to the office. If I’m the one picking them up, I do that during my lunch break and then have a quick bite or snack. Otherwise, I cook something for myself or order some delivery. During this break I might read or watch something.

Then the rest of the day goes on until 4:00 pm or 5:00 pm, depending on the day, and I walk back home. And that concludes a typical workday.

Typical workday during the pandemic

Things have changed, clearly. We don’t wake up at 6:15 am anymore; now it’s usually around 8:00 am. Breakfast is served, and I use this time to catch up on some news and go through my emails and notifications, again snoozing for an hour or so whatever needs my attention; the rest is archived.

I’ve set up a standing desk in my bedroom, which is right next to the living room, where the kids spend most of their time playing. The biggest change since the pandemic is that nowadays I don’t have long stretches of work time anymore, so I try to split my tasks into smaller time chunks, so I can check on my kids, play with them, or help them with homework (yes, even my 1.5-year-old daughter has Zoom meetings now.)

Work from home

Working from home with kids

Lunch and dinner are usually planned the night before, so around noon either I or my girlfriend starts preparing lunch. The kids love this time, as they get to watch something on Netflix. I had to cancel all my meetings during this time, but that doesn’t seem to have affected my work. Asynchronous communication works great!

After lunch work continues, and I might be able to get some work-only hours if the kids decide to nap; otherwise, again, it’s split into smaller chunks of time. Either way, I’m still able to drive my commitments to success.

Conclusions

As you can see, not much has changed other than how many hours in a stretch I can work without interruptions. All other work details were already in place, given that we are a 100% remote company. The biggest takeaway, for me, is that in order to survive this crazy new world we are living in, you need to work with great people: people who understand not all experiences are equal, who trust you to work with professionalism and responsibility, and whom you trust back in the same way.

If you like what you read and would like to be part of this great team, check our careers page and apply to any of our current openings. Who knows? Perhaps your dream job is just waiting for you.


CLI sort tricks

2min.

If you are like me you might have used the sort(1) CLI utility more than once in your life. Today, I’ve found a trick that I’ve never used before, and hopefully it will help someone else in the future.

Say that we have the following file to sort:

fpdy 01 08 wcfo
juvi 01 02 ejan
urbx 04 03 ckbw
fkzq 01 08 myaz
fjie 04 09 rhvo
almv 04 02 adhs
cuah 07 04 gbyt
chok 09 06 nqwo
emjd 01 04 ledx
npto 02 10 nqsc

Now, suppose that I wanted to sort by the third column (and, to break ties, whatever follows it): one would do sort -k 3 foo.txt. Easy. But what if instead the source file looked like this and I wanted the same results?

fpdy 0108 wcfo
juvi 0102 ejan
urbx 0403 ckbw
fkzq 0108 myaz
fjie 0409 rhvo
almv 0402 adhs
cuah 0704 gbyt
chok 0906 nqwo
emjd 0104 ledx
npto 0210 nqsc

Well, tricky, right? Not really: -k accepts the format F[.C], where F is the field number (2 in our case) and C is the character position within the field (4 in our case), so if we run the following we will achieve what we are looking for:

$ sort -k 2.4 foo.txt
almv 0402 adhs
juvi 0102 ejan
urbx 0403 ckbw
cuah 0704 gbyt
emjd 0104 ledx
chok 0906 nqwo
fkzq 0108 myaz
fpdy 0108 wcfo
fjie 0409 rhvo
npto 0210 nqsc

Why 4 and not 3? Because it’s taking the separator into account: when no separator is set with -t, each field includes the blanks that precede it, so the space counts as the field’s first character. So far I haven’t had the need to do something as fine-grained as this, so I can still sleep well at night.
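With that in mind, telling sort(1) to ignore leading blanks (-b) should make the intuitive position work; a quick sketch with the same file as above:

$ sort -b -k 2.3 foo.txt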


Testing Terraform Providers

3min.

If there’s one piece of technology I’ve come to love and depend upon these last years, it definitely is Terraform. Sadly, the only provider that seems to be complete is the AWS provider; others seem to be missing some useful resources or data sources. As an example, these past days at work I had to work with the Azure provider and found that I was really missing the ability to query Azure for Virtual Machine IDs, but there wasn’t a data source for this, and I didn’t want to import the virtual machines we’d already created (insert long story reasons here).

But then I remembered that Terraform and its providers are written in Go, so I took it upon myself to add the resource.

This post isn’t about how to write providers; the folks at HashiCorp already wrote a guide to writing custom providers, which is pretty useful. But I found that there was something missing, or that I didn’t fully understand, and that was: how can I test that my changes are working?

Most if not all providers have unit and acceptance tests that let you check that the changes you introduced work as expected, which is great once you want to send a pull request, but I am the type of programmer that likes to try things in the Real World™ before going into the TDD workflow. So I fired up my editor and started hacking on a new data source that would allow me to query a virtual machine ID by name. After a few tries I had something that looked about right, but then the question came: how do I test it? If I run terraform init it uses the published provider, which obviously doesn’t have the changes I made. I found in the documentation some references as to where you can place third-party providers on your machine for testing, but that didn’t really work, as it was missing, IMO, some information. Luckily, I found some pointers in another document that explains how Terraform works, but that was still missing some information. After a while, I found the solution, and here it is:

  • Git clone the Azure Terraform provider source code.
  • Make your changes.
  • Run go build. This will generate a terraform-provider-azurerm executable in your current directory.
  • Move the executable to the discovery folder: mv terraform-provider-azurerm ~/.terraform.d/plugins/darwin_amd64/
  • Go to a folder with some Terraform configuration that uses the Azure provider.
  • Remove the cached version: rm -rv .terraform/plugins/ (this will remove all plugins, but don’t worry)
  • Run terraform init
  • Profit! (See the consolidated sketch below.)
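Putting it all together, assuming the provider repository lives at github.com/terraform-providers/terraform-provider-azurerm and you are on macOS (hence darwin_amd64):

$ git clone https://github.com/terraform-providers/terraform-provider-azurerm
$ cd terraform-provider-azurerm
# hack, hack, hack on the new data source
$ go build
$ mv terraform-provider-azurerm ~/.terraform.d/plugins/darwin_amd64/
$ cd path/to/terraform/configuration
$ rm -rv .terraform/plugins/
$ terraform init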

Now I can test my changes with a live Terraform configuration.


Proxy Protocol Support in Curl

1min.

I came across the following tweet the other day, and I couldn’t be more excited:

This is exciting to me, as the work I’ve been doing on viaproxy had one caveat: testing that it works was a bit convoluted, as I was doing it by running an HAProxy instance with a custom configuration like the following:

global
    debug
    maxconn 4000
    log 127.0.0.1 local0

defaults
    timeout connect 10s
    timeout client  1m
    timeout server  1m

listen without-send-proxy
    mode tcp
    log global
    option tcplog
    bind *:17654
    server app1 127.0.0.1:7655

listen with-send-proxy
    mode tcp
    log global
    option tcplog
    bind *:27654
    bind ipv6@:27654
    server app1 127.0.0.1:7654 send-proxy

Luckily, commit 6baeb6df adds a new --haproxy-protocol flag that, as documented, will do the following:

Send a HAProxy PROXY protocol header at the beginning of the connection. This is used by some load balancers and reverse proxies to indicate the client’s true IP address and port.

This option is primarily useful when sending test requests to a service that expects this header.

Reading the commit changes is very enlightening too, as it is a great example of nice and simple C code. I’m looking forward to the release!
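Once a release with this flag is out, the HAProxy setup above shouldn’t be needed for testing anymore; pointing curl directly at a viaproxy-wrapped listener (say, the echo server from my earlier post, on port 7654) should be enough:

$ curl --haproxy-protocol http://192.168.1.20:7654/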


Using responsive font sizes

1min.

Today Chad Ostrowski, a fellow engineer at Citrusbyte, shared an article he wrote: CSS pro tips: responsive font-sizes and when to use which units. After reading it I couldn’t help myself and adapted some of the tips to this site. I think it’s now much easier to maintain, as I’ve removed all the previous media queries, though I had to add one:

@media only screen and (min-device-width: 1200px) {
  html { font-size: calc(1em + 0.5vw); }
}

Without this the text on my machine looks too big. I need to work on this, I think.


From PEM to OpenSSH for usage in ~/.ssh/authorized_keys

1min.

Say you have a private key in PEM format, and you want to use that key to SSH into another server by adding an entry to that server’s ~/.ssh/authorized_keys file with the public key of the PEM file. The following command will parse your PEM file and output the required RSA format used in authorized_keys:

ssh-keygen -y -f path/to/file.pem

This will output an ssh-rsa AAAA… string that is safe to append to ~/.ssh/authorized_keys. ssh-keygen uses the -f flag to specify the input file name, and the -y flag to read a private key file and print the corresponding OpenSSH public key to standard output.
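So, on the target server, you could append it in one go (using the same hypothetical path):

ssh-keygen -y -f path/to/file.pem >> ~/.ssh/authorized_keys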


lruc: a reverse cURL

2min.

Today Thorsten Ball asked a simple question on Twitter:

After a brief exchange of tweets, I said:

Twenty minutes later lruc was born.

It’s still very fresh and missing many features, but basically it is a web server that you can configure to always reply with a custom response, without too much hassle. The usage is very simple:

Usage of lruc:
  -addr string
        Address to listen for requests (default ":8080")
  -body -
        Response body. Use - to read from a stdin (default "Hello, World!")
  -code int
        HTTP response code (default 200)
  -content-type string
        Content-Type (default "text/plain")

Say that you want to create a server that always responds with a 404 Not Found and a body of No se pudo encontrar lo que buscaba (Spanish for couldn’t find what you were looking for, sort of) on port 7070; then you could execute the following:

lruc -addr :7070 -code 404 -body "No se pudo encontrar lo que buscaba"

Or say that you want to always return an image, then you could do something like:

< image.png lruc -content-type image/png -body -

# Or with a useless use of cat
cat image.png | lruc -content-type image/png -body -

This seems like an interesting tool to keep working on, so watch github.com/inkel/lruc for updates.

PS: did I say already that I love Go?


EC2 Key Pairs Fingerprinting

1min.

Has it ever happened to you that you wanted to know which SSH key you need to connect to an AWS EC2 instance? I always found that the fingerprints don’t tell me much, especially because I always forget how to compute them. Good thing I’m back to writing, so I’m dumping my memory here:

  • if the key was generated by AWS, then use openssl pkcs8 -in path/to/key.pem -nocrypt -topk8 -outform DER | openssl sha1 -c
  • if the key was generated using ssh-keygen then use openssl rsa -in path/to/private/key -pubout -outform DER | openssl md5 -c

Why AWS uses one format and SSH another escapes my current knowledge.


On Go package names

2min.

Or why I renamed github.com/inkel/go-proxy-protocol to github.com/inkel/viaproxy.

In my previous article I introduced a repository that holds the code to create net.Conn objects aware of the proxy protocol, but I wasn’t happy with the name of the repository.

Package names are important in Go, and one aspect that we tend to overlook is that they are actually part of the calling signature when you want to use an exported type or function. With the previous code, if we wanted to use the net.Conn wrapper we would have to first import the library:

import "github.com/inkel/go-proxy-protocol/conn"

Once we did that, then to wrap a connection we would have to call:

newCn, err := conn.WithProxyProtocol(cn)

Similarly, if we wanted to use the net.Listen alternative, we would have had to import github.com/inkel/go-proxy-protocol/listen and then call cn, err := listen.WithProxyProtocol. This doesn’t look right to my eyes, and hopefully not to yours either. And aesthetics aside, two packages for such limited code doesn’t make much sense.

So I spent the day thinking of a better name that would convey the effect we want to achieve and fit in just one package, and thus github.com/inkel/viaproxy came to be. Let’s see how much better the code looks now when wrapping a connection:

// import the package
import "github.com/inkel/viaproxy"

// wrap the connection
newCn, err := viaproxy.Wrap(cn)

Similarly, if you want to use the net.Listener, the code reads just as well (and I might even say it looks better):

// import the package
import "github.com/inkel/viaproxy"

// create the listener
ln, err := viaproxy.Listen("tcp", ":1234")

It certainly looks much better, and I hope you agree.


Proxy Protocol: what is it and how to use it with Go

6min.

Today I became aware of the proxy protocol.

The Proxy Protocol was designed to chain proxies / reverse-proxies without losing the client information.

If you are proxying an HTTP(S) server, chances are that you have used the X-Forwarded-For header to keep the real remote address of the client making the request instead of receiving the proxy’s address. But this only works for HTTP(S): if you are proxying any other kind of TCP service, you are doomed.

Take for instance the following example: a simple TCP server that echoes back the client’s remote address:

package main

import (
	"fmt"
	"io/ioutil"
	"log"
	"net"
)

func main() {
	ln, err := net.Listen("tcp", ":7654")
	if err != nil {
		log.Fatal(err)
	}

	for {
		cn, err := ln.Accept()
		if err != nil {
			log.Println("ln.Accept():", err)
			continue
		}

		go handle(cn)
	}
}

func handle(cn net.Conn) {
	defer func() {
		if err := cn.Close(); err != nil {
			log.Println("cn.Close():", err)
		}
	}()

	log.Println("handling connection from", cn.RemoteAddr())

	fmt.Fprintf(cn, "Your remote address is %v\n", cn.RemoteAddr())

	data, err := ioutil.ReadAll(cn)
	if err != nil {
		log.Println("reading from client:", err)
	} else {
		log.Printf("client sent %d bytes: %q", len(data), data)
	}
}

I’m running go run server.go on a machine whose IP is 192.168.1.20, and I’ll be sending requests from another machine whose IP is 192.168.1.12. On the server machine I’m also running an HAProxy (https://www.haproxy.org/) server that acts as a proxy to the Go program above:

global
    debug
    maxconn 4000
    log 127.0.0.1 local0

defaults
    timeout connect 10s
    timeout client  1m
    timeout server  1m

listen wo-send-proxy
    mode tcp
    log global
    option tcplog
    bind *:17654
    server app1 192.168.1.20:7654

listen w-send-proxy
    mode tcp
    log global
    option tcplog
    bind *:27654
    server app1 192.168.1.20:7654 send-proxy

This configuration creates 2 proxies: one listening on port 17654 which just proxies the client connection to the server, and another listening on port 27654 which does the same but also enables the proxy protocol by using the send-proxy keyword.

On the client machine, I’m running the following to send requests directly to the Go server, via the regular proxy and via the proxy with proxy protocol enabled:

$ for port in {,1,2}7654; do echo inkel | nc 192.168.1.20 ${port}; done
Your remote address is 192.168.1.12:44966
Your remote address is 192.168.1.20:57680
Your remote address is 192.168.1.20:57681

As you can see in the first case the client is informed that its remote address is 192.168.1.12, which is correct, but in both the other cases it says 192.168.1.20, which is the address of the proxy. Let’s check what the server has to say in its output:

$ go run server.go
2017/10/13 11:50:54 handling connection from 192.168.1.12:44966
2017/10/13 11:50:54 client sent 6 bytes: "inkel\n"
2017/10/13 11:50:54 handling connection from 192.168.1.20:57680
2017/10/13 11:50:54 client sent 6 bytes: "inkel\n"
2017/10/13 11:50:54 handling connection from 192.168.1.20:57681
2017/10/13 11:50:54 client sent 56 bytes: "PROXY TCP4 192.168.1.12 192.168.1.20 58472 27654\r\ninkel\n"

Here something interesting happens: the first connection, the one made directly to the Go server, properly shows the remote address 192.168.1.12 and the contents. The second and third ones incorrectly report the remote address as 192.168.1.20, but the third one shows something interesting in what was received from the client: instead of just inkel, it first received PROXY TCP4 192.168.1.12 192.168.1.20 58472 27654\r\n. This is what the proxy protocol does, and if you look closely, the client’s actual IP address is right there!

The proxy protocol, when enabled, will send the following initial line to the proxied server:

PROXY <inet protocol> <client IP> <proxy IP> <client port> <proxy port>\r\n

The actual specification is fairly simple, and now we can see why the only condition for proxy protocol to work is that both endpoints of the connection MUST be compatible with proxy protocol.

This explains why the Go server isn’t reporting the right remote address even when the proxy protocol is used: the net package doesn’t (currently) support it. But adding support isn’t too difficult. Here we have a custom connection type that complies with the net.Conn interface (in addition to the earlier imports, it uses bufio, bytes, io, strings, and time):

type myConn struct {
	cn      net.Conn
	r       *bufio.Reader
	local   net.Addr
	remote  net.Addr
	proxied bool
}

func NewProxyConn(cn net.Conn) (net.Conn, error) {
	c := &myConn{cn: cn, r: bufio.NewReader(cn)}
	if err := c.Init(); err != nil {
		return nil, err
	}
	return c, nil
}

func (c *myConn) Close() error                { return c.cn.Close() }
func (c *myConn) Write(b []byte) (int, error) { return c.cn.Write(b) }

func (c *myConn) SetDeadline(t time.Time) error      { return c.cn.SetDeadline(t) }
func (c *myConn) SetReadDeadline(t time.Time) error  { return c.cn.SetReadDeadline(t) }
func (c *myConn) SetWriteDeadline(t time.Time) error { return c.cn.SetWriteDeadline(t) }

func (c *myConn) LocalAddr() net.Addr  { return c.local }
func (c *myConn) RemoteAddr() net.Addr { return c.remote }

func (c *myConn) Read(b []byte) (int, error) { return c.r.Read(b) }

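// Init peeks at the start of the connection looking for a PROXY
// protocol header; if found, it consumes the header line and uses its
// fields as the connection's remote and local addresses.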
func (c *myConn) Init() error {
	buf, err := c.r.Peek(5)
	if err != io.EOF && err != nil {
		return err
	}

	if err == nil && bytes.Equal([]byte(`PROXY`), buf) {
		c.proxied = true
		proxyLine, err := c.r.ReadString('\n')
		if err != nil {
			return err
		}
		fields := strings.Fields(proxyLine)
		c.remote = &addr{net.JoinHostPort(fields[2], fields[4])}
		c.local = &addr{net.JoinHostPort(fields[3], fields[5])}
	} else {
		c.local = c.cn.LocalAddr()
		c.remote = c.cn.RemoteAddr()
	}

	return nil
}

func (c *myConn) String() string {
	if c.proxied {
		return fmt.Sprintf("proxied connection %v", c.cn)
	}
	return fmt.Sprintf("%v", c.cn)
}

type addr struct{ hp string }

func (a addr) Network() string { return "tcp" }
func (a addr) String() string  { return a.hp }

Now in our server we wrap the connection into our new type, and pass it to the handle func:

func main() {
	ln, err := net.Listen("tcp", ":7654")
	if err != nil {
		log.Fatal(err)
	}

	for {
		cn, err := ln.Accept()
		if err != nil {
			log.Println("ln.Accept():", err)
			continue
		}

		pcn, err := NewProxyConn(cn)

		if err != nil {
			log.Println("NewProxyConn():", err)
			continue
		}

		go handle(pcn)
	}
}

With this, now we see the right output in both the client:

$ for port in {,1,2}7654; do echo inkel | nc 192.168.1.20 ${port}; done
Your remote address is 192.168.1.12:45050
Your remote address is 192.168.1.20:60729
Your remote address is 192.168.1.12:58556

…and in the server:

2017/10/13 13:37:45 accepted connection from 192.168.1.12:45056
2017/10/13 13:37:45 client sent 6 bytes: "inkel\n"
2017/10/13 13:37:45 accepted connection from 192.168.1.20:60738
2017/10/13 13:37:45 client sent 6 bytes: "inkel\n"
2017/10/13 13:37:45 accepted connection from 192.168.1.12:58562
2017/10/13 13:37:45 client sent 6 bytes: "inkel\n"

This has been turned into a Go library located at github.com/inkel/go-proxy-protocol. Feel free to use it and send your feedback and error reports!


Initial Commit

1min.

So here I am, once more, trying to have some sort of blog or journal. I’ll try to write about interesting pieces of code that I’ve written, problems I had to solve, books I’ve read (or dropped), et cetera. Don’t get your hopes too high, though, I’m lazy and tend to forget doing this kind of stuff.