Architecture Weekly #143 - 4th September 2023
Welcome to the new week!
Accidental complexity can kill even the best-motivated person. We want to understand and reflect on the business process in the code, but our perspective becomes immediately blurred.
We add a new field, change the business flow a bit, and realise that “dammit, foreign keys won’t work now” or “now views will be broken”. That's what happens when we build systems traditionally.
In event-sourced systems, things can go differently. We can start by reflecting on our business process and thinking about how to record facts. That keeps us focused, streamlines the effort, and lets us think about read models later.
Yet, the "later" will eventually come, and it might not be so easy to build proper events to fulfil multiple needs.
I wrote about Event Transformations and how they can help keep things loosely coupled. As always, I also showed a practical solution to the business case.
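To make the “record facts first, build read models later” idea concrete, here’s a minimal Python sketch. The shopping-cart domain and all event names are my own illustration, not taken from the article; real event stores add versioning, streams and persistence on top of this.

```python
from dataclasses import dataclass

# Hypothetical events: immutable facts recorded as the business process happens.
@dataclass(frozen=True)
class ProductAdded:
    cart_id: str
    product_id: str
    quantity: int

@dataclass(frozen=True)
class CartConfirmed:
    cart_id: str

def project_cart_summary(events):
    """Fold the recorded facts into a read model, built 'later' and
    independently of how the events were written."""
    summary = {"items": {}, "confirmed": False}
    for event in events:
        if isinstance(event, ProductAdded):
            items = summary["items"]
            items[event.product_id] = items.get(event.product_id, 0) + event.quantity
        elif isinstance(event, CartConfirmed):
            summary["confirmed"] = True
    return summary

events = [
    ProductAdded("cart-1", "shoes", 1),
    ProductAdded("cart-1", "shoes", 1),
    CartConfirmed("cart-1"),
]
print(project_cart_summary(events))
# {'items': {'shoes': 2}, 'confirmed': True}
```

The point is that the projection is just a function over facts: when a new reading need appears, we write a new fold over the same events instead of migrating tables or breaking views.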
Last month, McKinsey wrote an article with the bold statement: “Yes, you can measure software developer productivity”.
It made quite a splash, and Kent Beck and Gergely Orosz decided to write a joint response:
What’s in the article? There are some good thoughts summarising why measuring software developer productivity is hard. For instance:
Misuse is most common when companies try to employ overly simple measurements, such as lines of code produced, or number of code commits (when developers submit their code to a version control system). Not only do such simple metrics fail to generate truly useful insights, they can have unintended consequences, such as leaders making inappropriate trade-offs. For example, optimizing for lead time or deployment frequency can allow quality to suffer.
It also suggests, between the lines, that the “back to office” trend is a dead end:
The increase in remote work and its popularity among developers is one overriding factor. Developers have long worked in agile teams, collaborating in the same physical space, and some technology leaders believe that kind of in-person teamwork is essential to the job. However, the digital tools that are so central to their work made it easy to switch to remote work during the pandemic lockdowns, and as in most sectors, this shift is hard to undo.
So what’s the story?
The first thing is that it doesn’t sound too honest when a big consultancy that shaped management trends points the finger at management weaknesses and comes up with a holy-grail solution. They come up with metrics such as:
Developer Velocity Index benchmark
Talent capability score.
All of them are vague and described only in the most general terms. But hey:
Initial results are promising. They include the following improvements:
20 to 30 percent reduction in customer-reported product defects
20 percent improvement in employee experience scores
60-percentage-point improvement in customer satisfaction ratings
Whatever that means…
This could be a decent summary article if it didn’t promote new, vague metrics to rule them all.
Don’t get me wrong, I’m all for measuring; not to use metrics for decision-making, but as hints.
It’s like in regular life: if we’re tired, sleepy, and not feeling great, we shouldn’t ignore that. We could start by observing how long we sleep and adjusting the length. Say we find out we’re sleeping too little and extend it. That helps a bit, but we’re still tired. So we make the room darker, freshen the air, and so on before sleep. That also helps, but it’s still not great. We could keep going like that until we realise that all of this treats the symptoms, not the root cause. And the root cause could be that we’re too stressed at work, which impacts our personal life.
What I’m trying to say is that metrics are highly contextual. We need to learn the characteristics of our environment and then continuously evolve them. I’m not in the camp that shouts “no estimates!” or “we just cannot measure”. Usually, that’s just laziness and trying to run away from accountability.
I think the main reason why it’s harder to estimate and measure productivity is that we’re still treated as the back office. That’s kind of what Kent and Gergely called impact. We’re just a part of the process; we’re not part of the product but producing software components.
And that’s also where McKinsey shows they’ve got it wrong, as they said:
For example, one company found that its most talented developers were spending excessive time on noncoding activities such as design sessions or managing interdependencies across teams. In response, the company changed its operating model and clarified roles and responsibilities to enable those highest-value developers to do what they do best: code.
If those metrics are pushing people to get back to code instead of making them work hand-in-hand with other people to deliver the product, then they’re just clearly a no-go and a mark of snake oil sales by McKinsey.
Check also this old but still relevant article by Martin Fowler:
And discussion with Nick Tune on how modernising software delivery is multidimensional:
Let’s keep the controversy going! In Software Engineering Radio, Casey Muratori explained why he believes that Clean Code brings Horrible Performance.
It’s a refresher and expanded take on his article from earlier this year:
I think you already know I’m not a fan of clean code, yet I do not fully agree with Casey.
As always, when listening to or reading such opinions, we should understand where the claim comes from. Casey is highly performance-focused; he’s coding in C++. He even says that code should read, to him, in a way that clearly shows what it is doing and how it is doing it.
I think that’s the dangerous part: people who are skilled and cautious about their tech stack assume that others can reason the same way. Writing code declaratively and trusting the compiler can, in the long term, be a good decision for most line-of-business applications. If the patterns are common and repeatable, compiler authors will embrace them.
Of course, our code structures need to be machine-friendly. And I think that’s the main message Casey is trying to tell. I wrote an example of that in Tell, don't ask! Or, how to keep an eye on boiling milk. Having clean code should never be an end goal. The goal should be to have code that’s correct and will be good enough in the long term.
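To show the structural difference Casey argues about, here’s a sketch in Python (my own, so it illustrates the shape of the code rather than C++ performance): the “clean code” version hides behaviour behind polymorphic classes, while the data-oriented version keeps shapes as plain records and dispatches in one place.

```python
import math

# "Clean code" style: one class per shape, behaviour behind a
# polymorphic area() method.
class Circle:
    def __init__(self, r):
        self.r = r
    def area(self):
        return math.pi * self.r ** 2

class Square:
    def __init__(self, side):
        self.side = side
    def area(self):
        return self.side ** 2

# Data-oriented style (roughly what Casey advocates): the shape kind
# is just data, and one switch-like function handles all cases.
def area(kind, dim):
    if kind == "circle":
        return math.pi * dim * dim
    if kind == "square":
        return dim * dim
    raise ValueError(f"unknown shape kind: {kind}")

shapes = [("circle", 2.0), ("square", 3.0)]
total = sum(area(kind, dim) for kind, dim in shapes)
```

Both compute the same totals; the trade-off is extensibility (adding a new shape vs adding a new operation) against keeping data flat and the control flow visible in one place, which is what makes the second form easier for a compiler, and a CPU cache, to chew through.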
The episode is worth listening to, as it also touches on the general pain points of where and how to optimise. Check also the constructive criticism in this response:
Architecture is too often treated as a backend thing. It’s not, and I can also blame myself here, as I should publish more content around that.
One of the most challenging duties is to shape technologies and team constellations. If we split our teams purely by tech stack (backend and frontend), we’ll create technology silos. We’ve been there, and the developer-productivity discussion proves that it’s not working. This is an example of the wrong optimisation: increasing coding efficiency instead of product impact.
It’s interesting how tech trends also reveal management struggles. We started with simple server-side-rendered pages. Then we found it was a challenge to provide a good user experience with constant page reloads and weird backend technologies to facilitate that. Then we went too far the other way, with a pure backend/frontend split made in JS (React, Vue, Angular, etc.). That didn’t work out, not only from the management perspective but also for performance. Now we have tooling that can do both, like Htmx or Next.js, and that’s enabling new ways for teams to collaborate.
That’s a topic of interesting video:
That triggered multiple responses:
It’s interesting to watch/read to learn both about new technologies that can help streamline your work and also from the perspective of the tech stack evolution and management impact.
Nubank made a decision in which developer productivity was also a main factor. They decided to kill their end-to-end test suite, and luckily, they wrote up their case study:
It’s an interesting read, showing why we should always think about our product and context. Common best practices won’t always be best for us. I know the feeling, as I was part of a project where we discussed end-to-end testing a lot.
In our case, end-to-end tests were also not giving us the expected value. They were randomly failing, sluggish, and impacting our deployment process. Yet, I was defending keeping them. Why?
Because it’s like Chesterton’s Fence: before we remove it, we need to understand why it’s there and have a plan for reshaping our process. If we just remove it, it’s like breaking the thermometer while having a fever.
Nubank replaced the end-to-end suite with contract testing. I think that’s a decent move, as it reduces the coupling between services and improves test stability. We no longer need to call other services to understand if we broke compatibility. Interestingly, they wrote their own tool for that instead of using Pact. That can also be a good decision as, in my opinion, we need more flexible and lightweight tooling than Pact.
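The core idea of consumer-driven contract testing can be sketched in a few lines of Python. The contract shape, field names, and helper functions below are purely illustrative (neither Nubank’s tool nor Pact’s actual API): the consumer records the response shape it relies on, and the provider verifies its own handler against that recording, with no cross-service call involved.

```python
# Hypothetical contract recorded by the consumer team: the request it
# sends and the response shape it depends on (field name -> type).
consumer_contract = {
    "request": {"method": "GET", "path": "/accounts/42"},
    "response": {"status": 200, "body": {"id": int, "balance": float}},
}

def provider_handler(path):
    """Stand-in for the provider's real request handler, exercised
    directly in the provider's own test suite."""
    account_id = int(path.rsplit("/", 1)[-1])
    return 200, {"id": account_id, "balance": 100.0}

def verify_contract(contract, handler):
    """Provider-side check: does the actual response still match the
    shape the consumer recorded? No network, no other services."""
    status, body = handler(contract["request"]["path"])
    expected = contract["response"]
    if status != expected["status"]:
        return False
    return all(
        isinstance(body.get(field), field_type)
        for field, field_type in expected["body"].items()
    )

assert verify_contract(consumer_contract, provider_handler)
```

Each side tests against the contract independently, so a breaking change surfaces in the provider’s build instead of in a flaky end-to-end run.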
I’d still ensure that we correctly set up synthetic testing and telemetry. We need to have a way to test full critical paths.
And to end with another killing. Microsoft killed Visual Studio for Mac.
That shouldn’t be a surprise for people who know the story of it.
Microsoft bought Xamarin in 2016 - a tool for cross-platform development in .NET (mostly mobile). It had an IDE called Xamarin Studio: a decent one (for that time) and, most importantly, running on a Mac, which the main Visual Studio IDE couldn’t do (and still can’t).
Microsoft was also pushing hard to revamp the .NET Framework, making it cross-platform. Still, if you can run it everywhere but can’t code it everywhere, it doesn’t sound great, right? That’s why they renamed Xamarin Studio to Visual Studio for Mac.
Still, it was a subpar experience, and Microsoft now had three IDEs to maintain, considering how Visual Studio Code expanded. Yet, coding in .NET in VSCode was (and is) a terrible experience. And that’s intentional on Microsoft’s part, as they didn’t want the freebie to eat the main IDE’s market (that’s a story of its own). Recently, though, they came up with a plugin for VSCode that’s, in theory, free but requires a Visual Studio Enterprise licence. So, from a budget-management perspective, the decision to kill VS for Mac was obvious.
I don’t think that’s a good decision in the long term unless they invest a lot in the VSCode plugin, making it a real thing.
Why would you use it if you have to pay for an enterprise license, and if you’re paying, then you have JetBrains Rider? JetBrains already trolled MS, giving a 65% discount to welcome VS for Mac users.
Wow, this is the longest Architecture Weekly so far! In each release, I'm adding more thoughts and commentary to the presented resources. I think curation alone already spares people some time in finding good resources, but I believe that adding a more personal flavour gives extra insights and food for thought.
What are your thoughts? When you read the newsletter, are you just in "give me links!" mode or want to get more of a commentary?
Check also other links!
p.s. I invite you to join the paid version of Architecture Weekly. It already contains the exclusive Discord channel for subscribers (and my GitHub sponsors), monthly webinars, etc. It is a vibrant space for knowledge sharing. Don’t wait to be a part of it!
p.s.2. Ukraine is still under brutal Russian invasion. A lot of Ukrainian people are hurt, without shelter and need help. You can help in various ways, for instance, directly helping refugees, spreading awareness, and putting pressure on your local government or companies. You can also support Ukraine by donating, e.g. to the Ukraine humanitarian organisation, Ambulances for Ukraine or Red Cross.