init
|
After Width: | Height: | Size: 1.4 MiB |
@@ -0,0 +1,44 @@
|
||||
---
|
||||
title: A defense for the coding challenge
|
||||
description: none
|
||||
pubDate: 2022-04-15
|
||||
heroImage: ./assets/cover.png
|
||||
color: '#3d91ef'
|
||||
---
|
||||
|
||||
Let's talk about code challenges. Code challenges are a topic with many opinions, and I have long been unsure whether I liked or hated them. Still, I would like to make a case for why there are situations where this practice is beneficial, not only for the interviewer but for the candidate as well.
|
||||
|
||||
But before getting that far, I would like to point out some of the downsides of code challenges, because they aren't one-size-fits-all, and you may want to steer completely clear of them or only use them in specific circumstances.
|
||||
|
||||
## Downside 1: Testing the wrong skills
|
||||
|
||||
The primary issue with coding challenges is that they may be built in a way that prevents the candidate from showing their strengths. I have, for instance, often seen logic-style code challenges applied to all development positions, so a front-end developer would be quizzed on their ability to solve sorting algorithms, when what they would actually be doing after being hired was aligning things correctly with CSS. A skill test that ultimately assesses an entirely different set of skills than what is needed will alienate the candidate and allow a candidate strong in the quizzed topic to outshine one with the basic skills the role actually requires.
|
||||
|
||||
Later, I will talk a bit about some requirements that I think a good code test needs to meet, so that if one is used, it at least gives a better indication of a candidate's skill in relation to the specific role, not just as a "person who does computer stuff".
|
||||
|
||||
## Downside 2: A longer hiring process
|
||||
|
||||
The second one I have mentioned before: in a competitive hiring market, being the company with the most prolonged hiring process means that you might very well miss out on some of the best candidates, either because they don't have the spare time to complete these tasks or because another company was able to close the hire quicker.
|
||||
|
||||
# Why you may want to use code challenges
|
||||
|
||||
Unfortunately, many people don't perform well in interviews. Without a technical assessment, the only place for a candidate to showcase their skills is in the interview itself.
|
||||
|
||||
The IT space has historically been associated with an introvert stereotype. While not always the case, introverts are definitely out there, and there is nothing wrong with that, but they are usually not the strongest at selling themselves, and that is basically what most job interviews are. So if we give a candidate only the interview to showcase their skills, it stands to reason that the person we end up hiring isn't necessarily the strongest candidate for the job but the one best at showcasing their skills.
|
||||
|
||||
Using a code challenge alongside the interview allows you to use the interview to assess the person, get an idea of how they would interact with the team, and take the time to explain what the job would be like, without the "hidden" agenda of trying to trip them up with random technical questions to see if they can answer correctly on the spot.
|
||||
|
||||
So instead of the on-the-spot question style, the candidate gets time to seek out information and solve the tasks in a way more reminiscent of how they would work in the real world.
|
||||
|
||||
Additionally, if done right, the code challenge can also help the company or team prepare for the new hire. If your code challenge can indicate the candidate's strengths, weaknesses and knowledge level with various technologies, it can help put together a "training" program that supports the new hire in getting up and running and comfortable in the position as quickly as possible.
|
||||
|
||||
## What makes a good code challenge
|
||||
|
||||
It isn't easy to answer, as it varies from position to position, team to team, and company to company. Some jobs may require a specific knowledge set, where "implement a sorting algorithm" is the proper test and something you would expect any candidate to be able to do.
|
||||
|
||||
But here are a few questions I would use to evaluate the value of a code challenge:
|
||||
|
||||
1. Does it cover all the areas you are interested in in a candidate? This is not to evaluate whether the candidate has ALL the skills but rather to see if they have some skills which would add value to the team. For instance, if the role is for a front-end team that does front-end development, back-end for front-end, QA, DevOps, etc., the test should allow a candidate to showcase skills in any of those areas. If your test is too heavily focused on one aspect, let's say front-end development, you may miss a candidate that could have elevated the entire team's ability at QA.
|
||||
1. Does it allow for flexible timeframes? Some candidates may not have time to spend 20 hours completing your code challenge, and the test should respect that. So if you have a lot of different tasks, as in the example above, you shouldn't expect the candidate to complete all of them, even if they have the time. Instead, suggest a time frame and give the candidate the option of picking particular focus areas to complete. That way, you respect their time, and you also allow them to showcase the skills they feel they are strongest at.
|
||||
|
||||
One bonus thing to add is giving the candidate the ability to submit additional considerations and caveats with their solution. For example, a candidate may have chosen a particular path because the "right" approach wasn't clear from the context, made suboptimal choices to keep within the timeframe, or even skipped parts because of scope but still want to elaborate. This way, you get closer to the complete picture, not just the code pushed to the repo.
|
||||
|
After Width: | Height: | Size: 1.6 MiB |
@@ -0,0 +1,65 @@
|
||||
---
|
||||
title: A meta talk about Git strategies
|
||||
pubDate: 2022-12-05
|
||||
color: '#ff9922'
|
||||
heroImage: ./assets/cover.png
|
||||
description: 'Can Git be your trusted "expected state" for deployments?'
|
||||
---
|
||||
|
||||
Let me start with a (semi) fictional story: It is Friday, and you and your team have spent the last five weeks working on this excellent new feature. You have written a bunch of unit tests to ensure that you maintain your project's impressive 100% test coverage, and you, your product owner and the QA testers have all verified that everything is tip-top and ready to go for the launch! You hit the big "Deploy" button. 3-2-1 Success! It is released to production, and everyone gets their glass of Champagne!
|
||||
|
||||
You go home for the weekend satisfied with the great job you did.
|
||||
|
||||
On Monday, you open your email to find it flooded with customers screaming that nothing is working! Oh no, you must have made a mistake!!! So you set about debugging and quickly locate the error message in your monitoring, so you check out the code from Git and start investigating. But the error that happens isn't even possible. So you spend the entire day debugging, again and again, coming to the same conclusion: this is not possible.
|
||||
|
||||
So finally, you decide to go and read the deployment log line-by-painstakingly-line, and there, on line 13.318, you see it! One of your 12 microservices failed deployment! The deployment used a script with a pipe in it. Unfortunately, the script did not have pipefail configured. The script, therefore, did not produce a non-zero exit code, so the deployment just kept humming along, deploying the remaining 11 with success. This chain of events resulted in a broken infrastructure state and unhappy customers, you spending the entire Monday debugging, and potentially the ENTIRE EXISTENCE coming to an end!
|
||||
|
||||
I think most developers have a story similar to the one above, so why is getting release management right so damn hard? Modern software architecture and the tools that support it are complex machinery, and that goes for our deployment tools too. Ensuring that every little thing is as planned means checking hundreds, if not thousands, of items, each harder to decipher than the last (anyone who has ever tried to solve a broken Xcode build from an output log will know this).
|
||||
|
||||
So is there a better way? Unfortunately, when things break, any of those thousands of items could be the reason, so for that question the answer is most likely no. But what about just answering the simpler question: "Is something broken?" Well, I am glad you asked, because I do believe there is a better way, and it revolves around Git.
|
||||
|
||||
# Declaring your expected state
|
||||
|
||||
So I am going to talk about Kubernetes, yet again - A technology I use less and less but, for some reason, ends up being part of my examples more and more often.
|
||||
|
||||
At its core, Kubernetes has two conceptually simple tasks: one, it stores an expected state for the resources it is supposed to keep track of; two, if any of those resources are, in fact, not in the expected state, it tries to right the wrong.
|
||||
|
||||
This approach means that when we interact with Kubernetes, we don't ask it to perform a specific task - we never tell it, "create three additional instances of service X," but rather, "there should be five instances of service X".
|
||||
|
||||
This approach also means that instead of actions and events, we can use reconciliation - no tracking of what was and what is, just what we expect; the rest is the tool's responsibility.
|
||||
|
||||
It also makes it very easy for Kubernetes to track the health of the infrastructure - it knows the expected state. If the actual state differs, it is in some unhealthy state, and if it is unhealthy, it should either fix it or, failing that, raise the alarm for manual intervention.
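To make the idea concrete, here is a tiny, self-contained TypeScript sketch of a single reconciliation pass. The in-memory state maps are stand-ins for illustration, not a real Kubernetes API:

```ts
type State = { replicas: number };

// Toy stand-ins for "what we declared" and "what is actually running".
const expectedState: Record<string, State> = { 'service-x': { replicas: 5 } };
const actualState: Record<string, State> = { 'service-x': { replicas: 3 } };

// One reconciliation pass: compare expected to actual and right any wrongs.
const reconcile = (service: string) => {
  const expected = expectedState[service];
  const actual = actualState[service];
  if (expected.replicas !== actual.replicas) {
    // Unhealthy: fix it, or this is where the alarm would be raised.
    console.log(`${service} is unhealthy, scaling ${actual.replicas} -> ${expected.replicas}`);
    actualState[service] = { ...expected };
  }
};

reconcile('service-x');
```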
|
||||
|
||||
# Git as the expected state
|
||||
|
||||
So how does this relate to Git? Well, Git is a version control system. As such, it should keep track of the state of the code. That, to me, doesn't just include when and why but also where - to elaborate: Git is already great at telling when something happened and why (provided that you write good commit messages), but it should also be able to answer what the code state is in a given context.
|
||||
|
||||
So let's say you have a production environment; a good Git strategy, in my opinion, should be able to answer the question, "What is the expected code state on production right now?" And note the word "expected" here; it is crucial because Git is, of course, not able to do deployments or sync environments (in most cases) but what it can do is serve as our expected state that I talked about with Kubernetes.
|
||||
|
||||
The target is to be able to compare what we expect with what is actually there, completely independent of all the tooling that sits in between, as we want to remove those tools as a source of error or complexity.
|
||||
|
||||
We want to have something with the simplicity of the Kubernetes approach - we declare an expected state, and the tooling enforces this or alerts us if it can not.
|
||||
|
||||
We also need to ensure that we can compare our expected state to the actual state.
|
||||
|
||||
To achieve this we are going to focus on Git SHAs, so we will be tracking if a deployed resource is a deployment of our expected SHA.
|
||||
|
||||
For a web resource, an excellent way to do this could be through a `/.well-known/deployment-meta.json` file, while if you are running something like Terraform and AWS, you could tag your resources with the SHA - try to have as few different methods of exposing this information as possible to keep monitoring simple.
|
||||
|
||||
With this piece of information, we are ready to create our monitor. Let's say we have a Git ref called `environments/production`, and its HEAD points to what we expect to be in production. Comparing is now simply a matter of getting the SHA of the HEAD commit of that ref and comparing it to our `/.well-known/deployment-meta.json`. If they match, the environment is in the expected state. If not, it is unhealthy.
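As a sketch of what such a monitor could look like, assuming the deployed SHA is exposed in a `sha` field of the well-known JSON and that the environment ref has been fetched locally (both are my assumptions, not a fixed convention):

```ts
import { execSync } from 'node:child_process';

// The SHA we expect: the HEAD of the environment ref.
const getExpectedSha = (ref: string) =>
  execSync(`git rev-parse ${ref}`).toString().trim();

// The SHA that is actually deployed, read from the well-known endpoint.
// Assumes the JSON looks like { "sha": "<commit sha>" }.
const getActualSha = async (baseUrl: string) => {
  const response = await fetch(`${baseUrl}/.well-known/deployment-meta.json`);
  const meta = (await response.json()) as { sha: string };
  return meta.sha;
};

const checkEnvironment = async (ref: string, baseUrl: string) => {
  const expected = getExpectedSha(ref);
  const actual = await getActualSha(baseUrl);
  return expected === actual ? 'healthy' : 'unhealthy';
};

// Example: compare the production ref to the production site.
checkEnvironment('environments/production', 'https://example.com').then(console.log);
```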
|
||||
|
||||
Let's extend this a bit: we can add a scheduled task that checks the monitor. If the environment is unhealthy, it retriggers a deployment and, if that fails, raises the alarm - so even if a deployment failed and no one noticed it yet, it will get auto-corrected the next time our simple reconciler runs. This can be done simply using something like a GitHub workflow.
|
||||
|
||||
You could also go all in, write a Crossplane controller, and use the actual Kubernetes reconciler to ensure your environments are in a healthy state - go as crazy as you like, just remember to make the tool work for you, not the other way around.
|
||||
|
||||
So, now we have a setup where Git tracks the expected state, and we can easily compare the expected state and the actual state. Lastly, we have a reconciliation loop that tries to rectify any discrepancy.
|
||||
|
||||
# Conclusion
|
||||
|
||||
So as a developer, the only thing I need to keep track of is that my Git refs are pointing to the right stuff. Everything else is reconciliation that I don't have to worry about - unless it is irreconcilable, in which case I will get alerted.
|
||||
|
||||
As someone responsible for the infrastructure, the only thing I need to keep track of is that the expected state matches the actual state.
|
||||
|
||||
No more multi-tool lookups, complex log dives or timeline reconstructions (until something fails, of course).
|
||||
|
||||
I believe that the switch from Git being just the code to being the code state makes a lot of daily tasks more straightforward and more transparent, builds a more resilient infrastructure and is worth considering when deciding how you want to do Git.
|
||||
BIN
src/content/articles/bob-the-algorithm/assets/Frame1.png
Normal file
|
After Width: | Height: | Size: 39 KiB |
BIN
src/content/articles/bob-the-algorithm/assets/Graph1.png
Normal file
|
After Width: | Height: | Size: 17 KiB |
BIN
src/content/articles/bob-the-algorithm/assets/Graph2.png
Normal file
|
After Width: | Height: | Size: 31 KiB |
BIN
src/content/articles/bob-the-algorithm/assets/GraphStep1.png
Normal file
|
After Width: | Height: | Size: 3.1 KiB |
BIN
src/content/articles/bob-the-algorithm/assets/GraphStep2.png
Normal file
|
After Width: | Height: | Size: 3.9 KiB |
BIN
src/content/articles/bob-the-algorithm/assets/Planned.png
Normal file
|
After Width: | Height: | Size: 165 KiB |
BIN
src/content/articles/bob-the-algorithm/assets/TaskBounds.png
Normal file
|
After Width: | Height: | Size: 4.3 KiB |
BIN
src/content/articles/bob-the-algorithm/assets/cover.png
Normal file
|
After Width: | Height: | Size: 1.6 MiB |
BIN
src/content/articles/bob-the-algorithm/assets/graph.png
Normal file
|
After Width: | Height: | Size: 29 KiB |
85
src/content/articles/bob-the-algorithm/index.mdx
Normal file
@@ -0,0 +1,85 @@
|
||||
---
|
||||
title: My day is being planned by an algorithm
|
||||
pubDate: 2022-05-06
|
||||
description: ''
|
||||
color: '#e7d9ac'
|
||||
heroImage: ./assets/cover.png
|
||||
---
|
||||
|
||||
import { Image } from 'astro:assets';
|
||||
import TaskBounds from './assets/TaskBounds.png';
|
||||
import Frame1 from './assets/Frame1.png';
|
||||
import Graph1 from './assets/Graph1.png';
|
||||
import Graph2 from './assets/Graph2.png';
|
||||
|
||||
Allow me to introduce Bob. Bob is an algorithm, and he has just accepted a role as my assistant.
|
||||
|
||||
I am not very good when it comes to planning my day, and the many apps out there that promise to help haven't solved the problem for me, usually due to three significant shortcomings:
|
||||
|
||||
1. Most day planner apps do what their paper counterparts would do: record the plan you create. I don't want to make the plan; someone should do that for me.
|
||||
2. They help you create a plan at the start of the day that you have to follow throughout the day. My days aren't that static, so my schedule needs to change throughout the day.
|
||||
3. They can't handle transits between locations very well.
|
||||
|
||||
So to solve those issues, I decided that the piece of silicon in my pocket, capable of doing a million calculations a second, should be able to help me do something other than waste time doom scrolling. It should let me get more done throughout the day and help me get more time for stuff I want to do. That is why I created Bob.
|
||||
|
||||
Also, I wanted a planning algorithm that was not only about productivity. I did not want to end up in the same situation as poor Kiki in the book "The Circle", who gets driven insane by a planning algorithm that tries to hyper-optimize her day. Bob also needs to plan downtime.
|
||||
|
||||
Bob is still pretty young and still learning new things, but he has gotten to the point where I believe he is good enough to start using on a day-to-day basis.
|
||||
|
||||
<Image src={Frame1} alt="Frame1" />
|
||||
|
||||
How does Bob work? Bob gets a list of tasks, some from my calendar (both my work and my personal calendar), some from "routines" (which are daily tasks that I want to do most days, such as eating breakfast or picking up the kid), and some tasks come from "goals" which are a list of completable items. These tasks go into Bob, and he tries to create a plan for the next couple of days where I get everything done that I set out to do.
|
||||
|
||||
Tasks have a bit more data than your standard calendar events to allow for good scheduling:
|
||||
An "earliest start time" and a "latest start time". These define when the task can add it to the schedule.
|
||||
|
||||
- A list of locations where the task can be completed.
|
||||
- A duration.
|
||||
- Whether the task is required.
|
||||
- A priority.
|
||||
|
||||
<Image src={TaskBounds} alt="Task bounds" />
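To make that concrete, a rough sketch of what a task could look like in TypeScript - the field names are illustrative, not the actual model:

```ts
type Task = {
  id: string;
  name: string;
  // The window in which the task is allowed to start.
  earliestStart: Date;
  latestStart: Date;
  // Locations where the task can be completed (e.g. 'home', 'office').
  locations: string[];
  // How long the task takes, in minutes.
  durationMinutes: number;
  // Whether a plan without this task is acceptable.
  required: boolean;
  // Higher-priority tasks weigh more in the score.
  priority: number;
};

const breakfast: Task = {
  id: 'breakfast',
  name: 'Eat breakfast',
  earliestStart: new Date('2022-05-06T06:30:00'),
  latestStart: new Date('2022-05-06T09:00:00'),
  locations: ['home'],
  durationMinutes: 30,
  required: true,
  priority: 2,
};
```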
|
||||
|
||||
Bob uses a graph walk to create the optimal plan, where each node contains a few different things:
|
||||
|
||||
- A list of remaining tasks
|
||||
- A list of tasks that are impossible to complete in the current plan
|
||||
- A score
|
||||
- The current location
|
||||
- The present time
|
||||
|
||||
Bob starts by figuring out which locations I can go to in order to complete the remaining tasks and then creates new leaf nodes for all of those transits. Next, he figures out whether some of the remaining tasks become impossible to complete, when I will arrive at the location, and calculates a score for that node.
|
||||
|
||||
He then gets a list of all the remaining tasks for the current node which can be completed at the current location, again figuring out when I would be done with the task, updating the list of impossible tasks and scoring the node.
|
||||
If any node adds a required task to the impossible list, that node is considered dead, and Bob will not analyze it further.
|
||||
|
||||
<Image src={Graph1} alt="Graph1" />
|
||||
|
||||
Now we have a list of active leaves, and from that list, we find the node with the highest score and redo the process from above.
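Sketched in TypeScript, the walk could look roughly like this, assuming the `Task` shape from the sketch above and with the actual node expansion left as a pluggable function - a simplification of the idea, not Bob's real implementation:

```ts
type PlanNode = {
  remaining: Task[];   // tasks not yet placed in the plan
  impossible: Task[];  // tasks that can no longer fit
  score: number;
  location: string;
  time: Date;
};

// The expansion step (transits plus tasks doable at the current location) is left
// pluggable here; the real version produces one child node per possible move.
type Expand = (node: PlanNode) => PlanNode[];

// A node is dead if a required task has become impossible.
const isDead = (node: PlanNode) => node.impossible.some((task) => task.required);

const findPlan = (start: PlanNode, expand: Expand): PlanNode | undefined => {
  const leaves: PlanNode[] = [start];

  while (leaves.length > 0) {
    // Always analyze the most promising leaf: the one with the highest score.
    leaves.sort((a, b) => b.score - a.score);
    const current = leaves.shift()!;

    // "First complete"-style early exit: every task has been placed.
    if (current.remaining.length === 0) {
      return current;
    }

    for (const child of expand(current)) {
      if (!isDead(child)) {
        leaves.push(child);
      }
    }
  }

  return undefined; // no plan could include all required tasks
};
```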
|
||||
|
||||
<Image src={Graph2} alt="Graph2" />
|
||||
|
||||
Bob has four different strategies for finding a plan.
|
||||
|
||||
- First valid: this finds the first plan that satisfies all constraints but may lead to non-required tasks getting removed, even though it would be possible to find a plan that included all tasks. This strategy is the fastest and least precise.
|
||||
- First complete: this does the same as "First valid" but only exits early if it finds a plan that includes all tasks. This strategy will generally create pretty good plans but can contain excess transits. If it does not find any plans that contain all tasks, it will switch to the "All valid" strategy.
|
||||
- All valid: this explores all paths until the path is either dead or completed. Then it finds the plan with the highest score. If there are no valid plans, it will switch to the "All" strategy.
|
||||
- All: This explores all paths, even dead ones, and at the end returns the one with the highest score. This strategy allows a plan to be created even if it needs to remove some required tasks.
|
||||
|
||||
Scoring is quite simple at the moment, but something I plan to expand on a lot. Currently, the score gets increased when a task gets completed, and it gets decreased when a task becomes impossible. How much it is increased or decreased is influenced by the task's priority and if the task is required. It also decreases based on minutes spent transiting.
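As a sketch, such a scoring rule could look like this, again assuming the `Task` shape above; the weights are made-up placeholders, not the values Bob actually uses:

```ts
// Made-up weights - the real values would come out of tuning or training.
const WEIGHTS = { completed: 10, impossible: 15, requiredFactor: 3, transitPerMinute: 0.5 };

const scoreChange = (
  task: Task,
  outcome: 'completed' | 'impossible',
  transitMinutes: number,
) => {
  // Priority and required-ness make a task weigh more, in both directions.
  const importance = task.priority * (task.required ? WEIGHTS.requiredFactor : 1);
  const base =
    outcome === 'completed' ? WEIGHTS.completed * importance : -WEIGHTS.impossible * importance;
  // Time spent in transit always pulls the score down.
  return base - transitMinutes * WEIGHTS.transitPerMinute;
};
```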
|
||||
|
||||
The leaf picked for analysis is the one with the highest score. This approach allows the two first strategies to create decent results, though they aren't guaranteed to be the best. It all comes down to how well the scoring variables are tuned. Currently, they aren't, but at some point, I plan to create a training algorithm for Bob, which will create plans, score them through "All", and then try to tweak the variables so that running the same plan through "First valid"/"First complete" arrives at the same result with as few nodes analyzed as possible.
|
||||
|
||||
This approach also allows me to calculate a plan with any start time, so I can re-plan it later in the day if I can't follow the original plan or if stuff gets added or removed. So this becomes a tool that helps me get the most out of my day without dictating it.
|
||||
|
||||
Bob can also do multi-day planning. Here, he gets a list of tasks for the different days as he usually would and a "shared" list of goals. So he runs the same calculation, adding in the tasks for that day, along with the shared goal list, and everything remaining from the shared list then gets carried over to the next day. This process repeats for all the remaining days.
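A minimal sketch of that carry-over loop, assuming a per-day planner function and the `Task` shape above (not the actual implementation):

```ts
type DayPlan = { date: Date; planned: Task[] };
type PlanDay = (date: Date, tasks: Task[]) => { planned: Task[]; leftover: Task[] };

const planDays = (
  days: { date: Date; tasks: Task[] }[],
  sharedGoals: Task[],
  planDay: PlanDay,
) => {
  let carryOver = [...sharedGoals];
  const plans: DayPlan[] = [];

  for (const day of days) {
    const { planned, leftover } = planDay(day.date, [...day.tasks, ...carryOver]);
    plans.push({ date: day.date, planned });
    // Only items from the shared goal list roll over to the next day.
    carryOver = leftover.filter((task) => sharedGoals.some((goal) => goal.id === task.id));
  }

  return plans;
};
```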
|
||||
|
||||
I have created a proof-of-concept app that houses Bob. In this app, I can manage tasks, generate plans, and update my calendar with those plans.
|
||||
|
||||
There are also a few features that I want to add later. The most important one is an "asset" system. For instance, when calculating transits, it needs to know if I have brought the bike along because if I took public transit to work, it doesn't make sense to calculate a bike transit later in the day. This system would work by "assets" being tied to a task and location, and then when Bob creates plans, he knows to consider if the asset is there or not. Assets could also be tied to tasks, so one task may be to pick up something, another to drop it off. In those cases, assets would act as dependencies, so I have to have picked up the asset before being able to drop it off. The system is pretty simple to implement but causes the graph to grow a lot, so I need to do some optimizations before it makes sense to put it in.
|
||||
|
||||
Wrapping up: I have only been using Bob for a few days, but so far, he seems to create good plans and has helped me achieve more, both in terms of productive tasks and by scheduling downtime such as reading, meditation, playing console games etc. and ensuring that I had time for that in the plan.
|
||||
|
||||
There is still a lot of stuff that needs to be done, and I will add in features and fix the code base slowly over time.
|
||||
|
||||
You can find the source for this algorithm and the app it lives in on [GitHub](https://github.com/morten-olsen/bob-the-algorithm), but beware, it is a proof of concept, so readability and maintainability haven't been goals.
|
||||
BIN
src/content/articles/hiring/assets/cover.png
Normal file
|
After Width: | Height: | Size: 1.6 MiB |
36
src/content/articles/hiring/index.mdx
Normal file
@@ -0,0 +1,36 @@
|
||||
---
|
||||
title: How to hire engineers, by an engineer
|
||||
description: ''
|
||||
pubDate: 2022-03-16
|
||||
color: '#8bae8c'
|
||||
heroImage: ./assets/cover.png
|
||||
---
|
||||
|
||||
It has been a few years since I was last part of the recruitment process from the hiring side. Still, I fairly recently went through the hiring process as a candidate when looking for a new job, so for this article I will mix a bit from both sides: experience from making hires and what worked, and experience from the other side of the table and what caused me not to consider a company. Because, spoiler alert: engineers are contacted a lot!
|
||||
|
||||
So first, I need to introduce a hard truth, as it underpins a lot of my points and is most likely the most important takeaway from this: your company is not unique.
|
||||
|
||||
Unless your tech brand is among the X highest regarded in the world, your company alone isn't a selling point. I have been contacted by so many companies that thought being a leader in their field or having a "great product" would make candidates come banging at their door. If I could disclose all those messages, it would be easy to see that, except for the order of the information, they all say almost the same thing, and chances are your job listing is the same. Sorry.
|
||||
The takeaway is that if everything else is equal, any misstep in your hiring process can cost you a candidate, so if you are not amongst the strongest tech brands, you need to be extremely aware of this or you will NOT fill the position.
|
||||
|
||||
Okay, after that slap in the face, we can take a second to look at something...
|
||||
|
||||
A lot of people focus on skills when hiring, and of course the candidate should have the skills for the position, but I will make a case for putting less focus on hard skills and more focus on passion.
|
||||
|
||||
Usually, screening skills through an interview is hard, and techniques like code challenges have their own issues, but more on that later.
|
||||
Screening for passion is easier: usually you can get a good feel for whether a candidate is passionate about a specific topic, and passionate people want to learn! So even if the candidate has limited skills, if they have passion they will learn, and they will outgrow a candidate with experience but no passion.
|
||||
Filling a team with technical skills can solve an immediate requirement, but companies, teams and products change, and your requirements will change along with them. A passionate team will adjust and evolve along with the product, whereas a team consisting of skilled people without passion will stay where they were when you hired them.
|
||||
|
||||
Another issue I see in many job postings is requiring a long list of skills. It would be awesome to find someone skilled in everything who could solve all tasks. In the real world, whenever you add another skill to that list, you are limiting the pool of candidates that would fit, so chances are you are not going to find anyone, or the actual skills of any candidate in that very narrow pool will be way lower than in a wider one.
|
||||
A better way is to list only the most important skills and teach the candidate the less important ones on the job. If you hired passionate people, this should be possible (remember to screen for passion about learning new things).
|
||||
|
||||
While we are on the expected skill list: a lot of companies have a list of "it would be really nice if you had these skills". Well, those could definitely be framed as learning experiences instead. If you have recruited passionate people, seeing that they will learn cool new skills counts as a plus, and any candidate who already has the skill will see it and think, "awesome, I am already uniquely suited for the job!"
|
||||
|
||||
I promised to talk a bit about code challenges: they can be useful for screening a candidate's ability to go in and start working from day one, and if done correctly they can help a manager organise their process to best suit the team's unique skills, but...
|
||||
Hiring at the moment is hard! And as stated, pretty much every job listing I have seen is identical, so just as a small outlier on your resumé in a competitive job market lands you in the pile that never gets read, in a competitive hiring market your listing may never get acted upon.
|
||||
Engineers are contacted a lot by recruiters, and speaking to all of them would require a lot of work, so if a company has a prolonged process, it quickly gets sorted out - especially by the best candidates, who most likely get contacted the most and most likely have a full-time job, so their time is a scarce resource.
|
||||
So be aware that if you use time-consuming processes such as the code challenge, you might miss out on the best candidates.
|
||||
|
||||
Please just disclose the salary range. From being connected to a few hundred recruiters here on LinkedIn, I can see that this isn't just me but a general issue. As mentioned before, it takes very little to have your listings ignored, and most likely most of your strongest potential candidates already have full-time jobs and would not want to move to a position paying less unless the position were absolutely unique (which, again, yours most likely isn't). Therefore, if you choose not to disclose the salary range, be aware that you miss out on most of the best candidates. A company gets an immediate no from me if it doesn't disclose the salary range.
|
||||
|
||||
Lastly, I have spent a lot of words telling you that your company or position isn't unique, and well, we both know that is not entirely accurate; your company most likely has something unique to offer! Be that soft values or hard benefits, be sure to put them in your job listing to bring out this uniqueness - it is what is going to set you apart from the other listings. There are lots of other companies with the same tech stack, using an agile approach, with a high degree of autonomy, with a great team... But what can you offer that no one else can? Get it front and center... Recruiting is marketing and good copywriting.
|
||||
BIN
src/content/articles/my-home-runs-redux/assets/cover.png
Normal file
|
After Width: | Height: | Size: 1.8 MiB |
BIN
src/content/articles/my-home-runs-redux/assets/graph.png
Normal file
|
After Width: | Height: | Size: 29 KiB |
93
src/content/articles/my-home-runs-redux/index.mdx
Normal file
@@ -0,0 +1,93 @@
|
||||
---
|
||||
title: My Home Runs Redux
|
||||
pubDate: 2022-03-15
|
||||
color: '#e80ccf'
|
||||
description: ''
|
||||
heroImage: ./assets/cover.png
|
||||
---
|
||||
|
||||
import graph from './assets/graph.png';
|
||||
import { Image } from 'astro:assets';
|
||||
|
||||
I have been playing around with smart homes for a long time; I have used most of the platforms out there, I have developed quite a few myself, and one thing I keep coming back to is Redux.
|
||||
|
||||
Those who know what Redux is may find this a weird choice, but for those who don't know Redux, I'll give a brief introduction to get everyone up to speed.
|
||||
|
||||
Redux is a state management framework, initially built for a React talk by Dan Abramov, and it is still primarily associated with managing state in React applications. Redux has a declarative state derived through a "reducer" function. This reducer function takes in the current state and an event and, based on that event, gives back an updated state. So you have an initial state inside Redux, and then you dispatch events into it, each getting the current state and updating it. That means that the resulting state will always be the same given the same set of events.
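For those unfamiliar, a tiny reducer could look like this - a generic TypeScript example to show the shape, not yet tied to the smart home:

```ts
type LightState = { brightness: number };
type LightEvent =
  | { type: 'light/set-brightness'; payload: number }
  | { type: 'light/turn-off' };

const initialState: LightState = { brightness: 0 };

// Pure function: the same state plus the same event always gives the same new state.
const lightReducer = (state: LightState = initialState, event: LightEvent): LightState => {
  switch (event.type) {
    case 'light/set-brightness':
      return { ...state, brightness: event.payload };
    case 'light/turn-off':
      return { ...state, brightness: 0 };
    default:
      return state;
  }
};
```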
|
||||
|
||||
So why is a framework primarily used to keep track of application state for React-based frontends a good fit for a smart home? Well, your smart home platform most likely closely mimics this architecture already!
|
||||
|
||||
First, an event goes in, such as a motion sensor triggering, or you set the bathroom light to 75% brightness in the interface. This event then goes into the platform and hits some automation or routine, resulting in an update request getting sent to the correct devices, which then change the state to correspond to the new state.
|
||||
|
||||
...But that is not quite what happens on most platforms. Deterministic events may go into the system, but this usually doesn't produce a deterministic state change. Instead, the change gets dispatched to the device, the device updates, the platform sees this change, and then it updates its state to represent that new state.
|
||||
|
||||
This distinction is essential because it comes with a few drawbacks:
|
||||
|
||||
- Because the event does not change the state but sends a request to the device that does it, everything becomes asynchronous and can happen out of order. This behaviour can be seen either as an issue or a feature, but it does make integrating with it a lot harder from a technical point of view.
|
||||
- The request is sent to the device as a "fire-and-forget" event. It then relies on the success of that request and the subsequent state change to be reported back from the device before the state gets updated. This behaviour means that if this request fails (something you often see with ZigBee-based devices), the device and the state don't get updated.
|
||||
- Since the device is responsible for reporting the state change, you are dependent on having that actual device there to make the change. Without sending the changes to the actual device, you cannot test the setup.
|
||||
|
||||
So can we create a setup that gets away from these issues?
|
||||
|
||||
Another thing to add here is more terminology/philosophy, but most smart home setups are, in my opinion, not really smart, just connected and, to some extent, automated. I want a design that has some actual smartness to it. In this article, I will outline a setup closer to that of the connected, automated home, and at the end, I will give some thoughts on how to take this to the next level and make it smart.
|
||||
|
||||
We know what we want to achieve, and Redux can help us solve this. Remember that Redux takes actions and applies them in a deterministic way to produce a deterministic state.
|
||||
|
||||
Time to go a bit further down the React rabbit hole because another thing from React-land comes in handy here: the concept of reconciliation.
|
||||
|
||||
Instead of dispatching events to the devices waiting for them to update and report their state back, we can rely on reconciliation to update our device. For example, let's say we have a device state for our living room light that says it is at 80% brightness in our Redux store. So now we dispatch an event that sets it to 20% brightness.
|
||||
|
||||
Instead of sending this event to the device, we update the Redux state.
|
||||
|
||||
We have a state listener that detects when the state changes and compares it to the state of the actual device. In our case, the state indicates that the living room light should be at 20%, but it is, in fact, at 80%, so the listener sends a request to the actual device to update it to the correct value.
|
||||
|
||||
We can also do scheduled reconciliation to compare our Redux state to that of the actual devices. If a device fails to update its state after a change, it will automatically get updated on our next scheduled run, ensuring that our smart home devices always reflect our state.
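A rough sketch of such a reconciler in TypeScript, where the desired-state map stands in for the Redux store and `readDevice`/`updateDevice` stand in for the real device integrations (all of these are illustrative assumptions):

```ts
type DeviceState = { brightness: number };

// Stand-ins for the Redux store and the real device integrations.
const desiredState: Record<string, DeviceState> = { 'living-room-light': { brightness: 20 } };
const readDevice = async (id: string): Promise<DeviceState> => ({ brightness: 80 });
const updateDevice = async (id: string, state: DeviceState) => {
  console.log(`Updating ${id} to`, state);
};

// One reconciliation pass: push the desired state to any device that has drifted.
const reconcile = async () => {
  for (const [id, desired] of Object.entries(desiredState)) {
    const actual = await readDevice(id);
    if (actual.brightness !== desired.brightness) {
      await updateDevice(id, desired);
    }
  }
};

// Run on every state change and on a schedule, so missed updates self-heal.
reconcile();
setInterval(reconcile, 5 * 60 * 1000);
```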
|
||||
|
||||
_Sidenote: Yes, of course, I have done a proof of concept using React with a home-built reconciler that reflected the virtual DOM onto physical devices, just to have had a house that ran React-Redux._
|
||||
|
||||
Let's go through our list of issues with how most platforms handle this. We can see that we have eliminated all of them by switching to this Redux-reconciliation approach: we update the state directly, so it happens synchronously; we can re-run the reconciliation, so failed or dropped device updates get retried; and we don't require any physical devices, as our state is updated directly.
|
||||
|
||||
We now have a robust, reliable state management mechanism for our smart home, so it is time to add some smarts to it. This is a little outside the article's main focus, as it is just my way of doing it; there may be far better ways, so use it at your discretion.
|
||||
|
||||
Redux has the concept of middleware: stateful functions that live between the event going into Redux and the reducer updating the state. Middleware allows Redux to deal with side effects and perform event transformations.
|
||||
|
||||
Time for another piece of my smart home philosophy: most smart homes act on events, and I have used the word throughout this article, but to me, events are not the most valuable thing when creating a smart home. Instead, I would argue that the goal is to deal with intents rather than events. For instance, an event could be that I started to play a video on the TV. But that just states a fact; what we want to do is capture what I am trying to achieve, the "intent". So let's split this event into two intents: if the video is less than one hour, I want to watch a TV show; if it is more, I want to watch a movie.
|
||||
|
||||
These intents allow us to avoid building complex operations on top of weak-meaning events and instead split our concern into two separate concepts: intent classification and intent execution.
|
||||
|
||||
The last thing we need is a direct way of updating devices, as we cannot capture everything through our intent classifier. For instance, if I sit down to read a book, that does not generate any sensor data for our system to react to, so I still need a way to adjust device states manually. (I could, of course, add a button that would dispatch a reading intent.)
|
||||
|
||||
I have separated the events going into Redux into two types:
|
||||
|
||||
- control events, which directly control a device
|
||||
- environment events, which represent sensor data coming in (a push on a button, a motion sensor triggering, the TV playing, etc.)
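As a sketch of how the classification could sit in front of the reducer, here is a middleware in the usual Redux `store => next => action` shape, using the movie/TV-show example from above (the event and intent names are illustrative):

```ts
type Action = { type: string; payload?: unknown };
type Next = (action: Action) => unknown;

// Environment events get classified into intents; control events pass straight through.
const intentClassifier = (_store: unknown) => (next: Next) => (action: Action) => {
  if (action.type === 'environment/video-started') {
    const { durationMinutes } = action.payload as { durationMinutes: number };
    const intent = durationMinutes < 60 ? 'intent/watch-tv-show' : 'intent/watch-movie';
    // An intent executor (another middleware or a listener) would turn this intent
    // into concrete device changes before it reaches the reducer.
    return next({ type: intent, payload: action.payload });
  }
  return next(action);
};
```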
|
||||
|
||||
Now comes the part I have feared, where I need to draw a diagram.
|
||||
|
||||
...sorry
|
||||
|
||||
<Image src={graph} alt="graph" />
|
||||
|
||||
So this shows our final setup.
|
||||
|
||||
Events go into our Redux setup, either environment or control.
|
||||
|
||||
Control events go straight to the reducer, and the state is updated.
|
||||
|
||||
Environment events first go to the intent classifier, which uses previous events, the current state, and the incoming event to derive the correct intent. The intent then goes into our intent executor, which converts the intent into a set of actual device changes, which gets sent to our reducer, and the state is then updated.
|
||||
|
||||
Lastly, we invoke the reconciliation to update our real devices to reflect our new state.
|
||||
|
||||
There we go! Now we have ended up with a self-contained setup. We can run it without the reconciliation or mock it to create tests for our setup and work without changing any real devices, and we can re-run the reconciliation on our state to ensure our state gets updated correctly, even if a device should miss an update.
|
||||
|
||||
**Success!!!**
|
||||
|
||||
But I promised to give an idea of how to take this smart home and make it actually "smart."
|
||||
|
||||
Let's imagine that we did not want to "program" our smart home. Instead, we wanted to use it; turning the lights on and off using the switches when we entered and exited a room, dimming the lights for movie time, and so on, and over time we want our smart home to pick up on those routines and start to do them for us.
|
||||
|
||||
We have a setup where both control events and environment events come in. Control events represent how we want the state of our home to be in a given situation. Environment events represent what happened in our home. So we could store those historically and, with some machine learning, look for patterns.
|
||||
|
||||
Let's say you always dim the light when playing a movie that is more than one hour long; your smart home would be able to recognize this pattern and automatically start to do this routine for you.
|
||||
|
||||
Would this work? I don't know. I am trying to get more skilled at machine learning to find out.
|
||||
60
src/content/config.ts
Normal file
@@ -0,0 +1,60 @@
|
||||
import { defineCollection, z } from 'astro:content';
|
||||
|
||||
const articles = defineCollection({
|
||||
schema: ({ image }) =>
|
||||
z.object({
|
||||
title: z.string(),
|
||||
description: z.string(),
|
||||
color: z.string(),
|
||||
pubDate: z.coerce.date(),
|
||||
updatedDate: z.coerce.date().optional(),
|
||||
tags: z.array(z.string()).optional(),
|
||||
heroImage: image().refine((img) => img.width >= 320, {
|
||||
message: 'Cover image must be at least 320 pixels wide!',
|
||||
}),
|
||||
}),
|
||||
});
|
||||
|
||||
const work = defineCollection({
|
||||
schema: ({ image }) =>
|
||||
z.object({
|
||||
name: z.string(),
|
||||
position: z.string(),
|
||||
startDate: z.coerce.date(),
|
||||
endDate: z.coerce.date().optional(),
|
||||
summary: z.string().optional(),
|
||||
url: z.string().optional(),
|
||||
logo: image()
|
||||
.refine((img) => img.width >= 200, {
|
||||
message: 'Logo must be at least 200 pixels wide!',
|
||||
})
|
||||
.optional(),
|
||||
banner: image()
|
||||
.refine((img) => img.height >= 50, {
|
||||
message: 'Banner must be at least 50 pixels tall!',
|
||||
})
|
||||
.optional(),
|
||||
}),
|
||||
});
|
||||
|
||||
const references = defineCollection({
|
||||
schema: () =>
|
||||
z.object({
|
||||
name: z.string(),
|
||||
position: z.string(),
|
||||
company: z.string(),
|
||||
date: z.coerce.date(),
|
||||
relation: z.string(),
|
||||
profile: z.string(),
|
||||
}),
|
||||
});
|
||||
|
||||
const skills = defineCollection({
|
||||
schema: () =>
|
||||
z.object({
|
||||
name: z.string(),
|
||||
technologies: z.array(z.string()),
|
||||
}),
|
||||
});
|
||||
|
||||
export const collections = { articles, work, references, skills };
|
||||
5
src/content/profile/description.md
Normal file
@@ -0,0 +1,5 @@
|
||||
As a software engineer with a diverse skill set in frontend, backend, and DevOps, I find my greatest satisfaction in unraveling complex challenges and transforming them into achievable solutions. My career has predominantly been in frontend development, but my keen interest and adaptability have frequently drawn me into backend and DevOps roles. I am driven not by titles or hierarchy but by opportunities where I can make a real difference through my work.
|
||||
|
||||
In every role, I strive to blend my technical skills with a collaborative spirit, focusing on contributing to team goals and delivering practical, effective solutions. My passion for development extends beyond professional settings; I continually engage in personal projects to explore new technologies and methodologies, keeping my skills sharp and current.
|
||||
|
||||
I am eager to find a role that aligns with my dedication to development and problem-solving, a place where I can apply my varied expertise to meaningful projects and grow within a team that values innovation and technical skill.
|
||||
BIN
src/content/profile/profile.jpg
Normal file
|
After Width: | Height: | Size: 142 KiB |
48
src/content/profile/profile.ts
Normal file
@@ -0,0 +1,48 @@
|
||||
import type { ResumeSchema } from '@/types/resume-schema.js';
|
||||
import { Content } from './description.md';
|
||||
import image from './profile.jpg';
|
||||
|
||||
const basics = {
|
||||
name: 'Morten Olsen',
|
||||
tagline: "Hi, I'm Morten and I make software 👋",
|
||||
email: 'fbtijfdq@void.black',
|
||||
url: 'https://mortenolsen.pro',
|
||||
image: image.src,
|
||||
location: {
|
||||
city: 'Copenhagen',
|
||||
countryCode: 'DK',
|
||||
region: 'Capital Region of Denmark',
|
||||
},
|
||||
profiles: [
|
||||
{
|
||||
network: 'GitHub',
|
||||
icon: 'mdi:github',
|
||||
username: 'morten-olsen',
|
||||
url: 'https://github.com/morten-olsen',
|
||||
},
|
||||
{
|
||||
network: 'LinkedIn',
|
||||
icon: 'mdi:linkedin',
|
||||
username: 'mortenolsendk',
|
||||
url: 'https://www.linkedin.com/in/mortenolsendk',
|
||||
},
|
||||
],
|
||||
languages: [
|
||||
{
|
||||
name: 'English',
|
||||
fluency: 'Conversational',
|
||||
},
|
||||
{
|
||||
name: 'Danish',
|
||||
fluency: 'Native speaker',
|
||||
},
|
||||
],
|
||||
} satisfies ResumeSchema['basics'];
|
||||
|
||||
const profile = {
|
||||
basics,
|
||||
image,
|
||||
Content,
|
||||
};
|
||||
|
||||
export { profile };
|
||||
12
src/content/references/jens-roland.md
Normal file
@@ -0,0 +1,12 @@
|
||||
---
|
||||
name: Jens Roland
|
||||
position: Director of Engineering
|
||||
company: ZeroNorth
|
||||
date: 2021-10-28
|
||||
relation: Jens was senior to Morten but didn't manage Morten directly at Trendsales
|
||||
profile: https://www.linkedin.com/in/jensroland/
|
||||
---
|
||||
|
||||
Morten joined the frontend team at Trendsales as a very young developer and immediately it became clear that he is the kind of rock star developer you dream of managing; passionate, thirsty for learning, and exceptionally talented. Any task given, no matter how challenging, he would complete in record time and at a level far beyond expectations. And he manages to do this while always remaining humble, generous, and an overall fun guy to be around.
|
||||
|
||||
I heartily give Morten my best possible recommendation and frankly hope we will work together again in the future.
|
||||
10
src/content/references/ole-kristensen.md
Normal file
@@ -0,0 +1,10 @@
|
||||
---
|
||||
name: Ole Højriis Kristensen
|
||||
position: Software Engineering Manager
|
||||
company: Apple
|
||||
date: 2017-11-22
|
||||
relation: Ole Højriis managed Morten directly at Trendsales
|
||||
profile: https://www.linkedin.com/in/okristensen/
|
||||
---
|
||||
|
||||
I have had the exquisite pleasure of working with Morten for the past two-plus years. Morten is one of those developers you rarely meet in Denmark and can probably best be described as a wizard. Morten has a deep knowledge of frontend technology and is able to see a problem from angles only a few know of. Morten headed our frontend team and was the natural tech lead on a project where we had to build a platform from the ground up. Morten is inspiring to talk to, whether it is about real challenges in connection with a task or about deeper technological thoughts. I have greatly appreciated the time we have had together and can only give Morten my warmest recommendations.
|
||||
10
src/content/showcase/bob/index.mdx
Normal file
@@ -0,0 +1,10 @@
|
||||
---
|
||||
title: Bob the algorithm
|
||||
link: /articles/bob-the-algorithm
|
||||
keywords:
|
||||
- Typescript
|
||||
- React Native
|
||||
- Algorithmic
|
||||
---
|
||||
|
||||
`// TODO`
|
||||
9
src/content/showcase/mini-loader/index.mdx
Normal file
@@ -0,0 +1,9 @@
|
||||
---
|
||||
title: mini-loader
|
||||
link: https://github.com/morten-olsen/mini-loader
|
||||
keywords:
|
||||
- Typescript
|
||||
- Task management
|
||||
---
|
||||
|
||||
`// TODO`
|
||||
0
src/content/showcase/pictoroma/index.mdx
Normal file
11
src/content/skills/dev-ops.mdx
Normal file
@@ -0,0 +1,11 @@
|
||||
---
|
||||
|
||||
name: DevOps
|
||||
technologies:
|
||||
|
||||
- Kubernetes
|
||||
- Docker
|
||||
- ArgoCD
|
||||
- Terraform
|
||||
- GitHub Actions
|
||||
- AWS
|
||||
10
src/content/skills/mobile-development.mdx
Normal file
@@ -0,0 +1,10 @@
|
||||
---
|
||||
|
||||
name: Mobile development
|
||||
technologies:
|
||||
|
||||
- TypeScript
|
||||
- React Native
|
||||
- Expo
|
||||
- React Navigation
|
||||
- Xamarin
|
||||
16
src/content/skills/service-development.mdx
Normal file
@@ -0,0 +1,16 @@
|
||||
---
|
||||
|
||||
name: Service development
|
||||
technologies:
|
||||
|
||||
- TypeScript
|
||||
- Node.js
|
||||
- Fastify
|
||||
- PostgreSQL
|
||||
- tRPC
|
||||
- Knex.js
|
||||
- Prisma
|
||||
- Vitest
|
||||
- Apollo
|
||||
- .Net
|
||||
- Rust
|
||||
15
src/content/skills/web-development.mdx
Normal file
@@ -0,0 +1,15 @@
|
||||
---
|
||||
name: Web Development
|
||||
technologies:
|
||||
- React
|
||||
- TypeScript
|
||||
- RTK
|
||||
- React Query
|
||||
- Tailwind CSS
|
||||
- Storybook
|
||||
- React Testing Library
|
||||
- Vite
|
||||
- Webpack
|
||||
- Next.js
|
||||
- Astro
|
||||
---
|
||||
8
src/content/work/.haastrupit.mdx
Normal file
@@ -0,0 +1,8 @@
|
||||
---
|
||||
name: Haastrup IT
|
||||
position: Web developer
|
||||
startDate: 2009-03-01
|
||||
endDate: 2010-05-30
|
||||
---
|
||||
|
||||
I worked as a part-time project coordinator and systems developer, with responsibility for a wide variety of projects, including projects for "Københavns Kommune" (Navision reporting software) and "Syddanmarks kommune" (electronic application processing system). Most projects were made in C#, but also PHP, VB and ActionScript. In addition to that, I maintained the in-house hosting setup.
|
||||
9
src/content/work/bilzonen.mdx
Normal file
@@ -0,0 +1,9 @@
|
||||
---
|
||||
name: BilZonen
|
||||
position: Web Developer
|
||||
startDate: 2010-06-01
|
||||
endDate: 2012-02-28
|
||||
summary: As a part-time web developer at bilzonen.dk, I managed both routine maintenance and major projects like new modules and integrations, introduced a custom provider-model system in .NET (C#) for data management, and established the development environment, including server setup and custom tools for building and testing.
|
||||
---
|
||||
|
||||
I worked as a part-time web developer on bilzonen.dk. I worked on both day-to-day maintenance and large-scale projects (a new search module, integration of a new data catalog, the mobile site, the new-car catalog and the entire dealer solution). The page is an Umbraco solution with all code in .NET (C#). I introduced a new custom-built provider-model system, which allows data providers to move data between data stores, external services, and the site (search, caching and external car data run through the provider system). I also set up the development environment, from setting up virtual server hosts to building custom tools for building and unit testing.
|
||||
BIN
src/content/work/sampension/assets/logo.jpeg
Normal file
|
After Width: | Height: | Size: 3.2 KiB |
14
src/content/work/sampension/index.mdx
Normal file
@@ -0,0 +1,14 @@
|
||||
---
|
||||
name: Sampension
|
||||
position: Senior Frontend Developer
|
||||
startDate: 2018-01-01
|
||||
endDate: 2021-12-31
|
||||
logo: ./assets/logo.jpeg
|
||||
summary: At Sampension, a Danish pension fund, I designed and helped build a cross-platform frontend architecture using React Native and React Native for Web, ensuring a unified, maintainable codebase for native iOS, Android, and web applications across devices.
|
||||
---
|
||||
|
||||
Sampension is a Danish pension fund, and my work there was to design and help build a frontend architecture that would run natively on iOS and Android as well as on the web on both desktop and mobile devices.
|
||||
|
||||
It was important to ensure that the project felt at home on all platforms and that it was maintainable by a small team of developers.
|
||||
|
||||
To achieve this we used React Native and React Native for Web to create a unified codebase for all platforms, as well as create a component library which would deal with ensuring the best UX on all platforms.
|
||||
BIN
src/content/work/trendsales-1/assets/banner.png
Normal file
|
After Width: | Height: | Size: 5.8 KiB |
BIN
src/content/work/trendsales-1/assets/logo.png
Normal file
|
After Width: | Height: | Size: 4.6 KiB |
11
src/content/work/trendsales-1/index.mdx
Normal file
@@ -0,0 +1,11 @@
|
||||
---
|
||||
name: Trendsales
|
||||
position: Web Developer
|
||||
startDate: 2012-03-01
|
||||
endDate: 2012-09-30
|
||||
logo: ./assets/logo.png
|
||||
banner: ./assets/banner.png
|
||||
summary: At Trendsales, I started with a part-time role focused on maintaining the API for the iOS app, eventually diversifying my responsibilities to include broader platform development, allocating 25-50% of my time to the API.
|
||||
---
|
||||
|
||||
I got a part-time job at Trendsales, where my primary responsibility was maintaining the API that powered the iOS app. My tasks quickly became more diverse, and I ended up spending about 25-50 percent of my time on the API, while the rest was spent working on the platform in general.
|
||||
18
src/content/work/trendsales-2.mdx
Normal file
@@ -0,0 +1,18 @@
|
||||
---
|
||||
name: Trendsales
|
||||
position: iOS and Android Developer
|
||||
startDate: 2012-10-01
|
||||
endDate: 2015-12-31
|
||||
logo: ./trendsales-1/assets/logo.png
|
||||
summary: I led the development of a new Xamarin-based iOS app from scratch at Trendsales, including a supporting API and backend work, culminating in a successful app with over 15 million screen views and 1.5 million sessions per month, and later joined a team to expand into Android development.
|
||||
---
|
||||
|
||||
I became responsible for the iOS platform, a task that required a new app to be built from the ground up using _Xamarin_. In addition to that, a new API was needed to support the app, along with support for our larger vendors, and it had to be built using something closely resembling _Microsoft MVC_ so that other people could join the project at a later stage.
|
||||
|
||||
The project started in October, with the initial version available to our users in late December.
|
||||
|
||||
This project represented my first adventure into mobile development and became an app with more than 15 million screen views and 1.5 million sessions per month.
|
||||
|
||||
After that, I joined two other colleagues, who were working on an Android version of the app, to form a joint mobile development team.
|
||||
|
||||
Throughout the period I also worked on the backend for the web page from time to time.
|
||||
15
src/content/work/trendsales-3.mdx
Normal file
@@ -0,0 +1,15 @@
|
||||
---
|
||||
name: Trendsales
|
||||
position: Frontend Technical Lead
|
||||
startDate: 2016-01-01
|
||||
endDate: 2017-12-31
|
||||
logo: ./trendsales-1/assets/logo.png
|
||||
summary: In 2015, I spearheaded the creation of a new frontend architecture for Trendsales, leading to the development of m.trendsales.dk, using React and Redux, and devising bespoke frameworks for navigation, flexible routing, skeleton page transitions, and integrating workflows across systems like Github, Jira, Octopus Deploy, AppVeyor, and Docker.
|
||||
---
|
||||
|
||||
In 2015, Trendsales decided to build an entirely new platform, and it became my responsibility to create a modernized frontend architecture. The work began in 2016 with just me on the project and consisted of a proof-of-concept version containing everything from framework selection, structure, style guides, build chain and continuous deployment to an actual initial working version. The result was the platform, which I was given technical ownership over and which I, along with two others, worked on expanding over the next year. The platform is currently powering _m.trendsales.dk_. The project is built using React, and state management is done using Redux. In addition to the off-the-shelf frameworks, we also needed to develop quite a few bespoke frameworks in order to meet demands. Among others, these were created to solve the following issues:
|
||||
|
||||
- Introducing a new navigational paradigm
|
||||
- Creating a more flexible routing mechanism
|
||||
- Serving skeleton pages for page transitions while still being able to create complete server-side pages
|
||||
- Ensuring project flows between multiple systems such as GitHub, Jira, Octopus Deploy, AppVeyor and Docker
|
||||
BIN
src/content/work/zeronorth/assets/logo.png
Normal file
|
After Width: | Height: | Size: 194 KiB |
11
src/content/work/zeronorth/index.mdx
Normal file
@@ -0,0 +1,11 @@
|
||||
---
|
||||
name: ZeroNorth
|
||||
position: Senior Software Engineer
|
||||
startDate: 2022-01-01
|
||||
logo: ./assets/logo.png
|
||||
summary: At ZeroNorth, I develop and maintain a NextJS-based, offline-first PWA for on-vessel reporting, and enhance report processing infrastructure using Terraform and NodeJS.
|
||||
---
|
||||
|
||||
I am currently employed at ZeroNorth, a Danish software as a service company that specializes in providing solutions to help the shipping industry decarbonize through optimization. My primary focus has been on the development and maintenance of the on-vessel reporting platform. This platform is a NextJS based PWA with offline-first capabilities, which allows for easy and efficient reporting on board ships.
|
||||
|
||||
In addition to working on the on-vessel reporting platform, I have also contributed to the development of the general infrastructure around report processing. My experience includes utilizing Terraform and NodeJS to build efficient and scalable report processing pipelines.
|
||||