Discover more from Architecture Weekly
Architecture Weekly #133 - 26th June 2023
Welcome to the new week!
Let’s start with a quick reminder. Architecture Weekly became something bigger than just a newsletter with Software Architecture links. We now have a community of over 3100 people every week eager to learn more, challenge their views and advance in their journey.
We also have an exclusive community of paid subscribers that helps in the technical leader’s solitude. We discuss our challenges, help each other in the Discord channel, and have a webinar every month with Q&A.
Today at 6 PM, we’ll have another one: Evolutionary Architecture: The What. The Why. The How. It’ll be run by Maciej "MJ" Jędrzejewski, an experienced software architect and facilitator of modern software development practices.
Plus, Maciej is a member of our community, which makes me happy, as I never wanted Architecture Weekly to be a one-man show but a community of experienced people learning from each other.
You can still sign up as a paid subscriber and join the webinar live!
In the spirit of evolutionary architecture, I wrote why I believe that Removability is more important than Maintainability.
We’re taught that our systems should be maintainable, right? This statement is pretty vague, and too often, it ends up either in a defensive approach that blocks innovation or in a fragile, complicated system.
Also, what if our systems never reach the phase where maintainability actually matters?
In the article, I tell the story of a process decision that followed a textbook approach but still wasn’t the best fit for our project.
When we invest more in Removability than in the Maintainability of our software, we won’t end up with the design we expected, but we’ll have leftovers much stronger than anything we’d have come up with initially. And, almost accidentally, we’ll also make the system maintainable.
When it matters.
I started my article with a quote from Donella Meadows:
A diverse system with multiple pathways and redundancies is more stable and less vulnerable to external shock than a uniform system with little diversity.
Diversity is critical, giving us more options to evolve and pivot. That was the topic, and the personal story, of Aaron Stannard’s talk:
We need to give ourselves more options. That may mean doing a bit more work in advance, keeping the right balance, and understanding when it’s actually needed. How to do it?
Three weeks ago, I mentioned Risk Matrix, Residuality Theory and Anti-Requirements. Today, I’ll also add Risk Storming. Yeah, another storming, but still worth checking together with other strategies:
Not enough of the evolutionary approach? Join our webinar! But also read:
Speaking about risk and (r)evolution, remember the bland statement from AI leaders I linked last week? It seems that I was right about the due-diligence part. On the one hand, Sam Altman, the CEO of OpenAI, warns about AI bringing risk to humanity; on the other hand, he’s going around the globe advocating watering down AI regulation:
That’s yet another example that there’s no free lunch. With each exciting technology, we get a line of corporations trying to use it to maximise their profits. Right now, we’re in the Wild West era of AI.
I’m not surprised about this lobbying, as Large Language Models in their current state cannot provide proper privacy. You cannot tell how they reached their conclusions or which materials they used. So, e.g., to comply with the GDPR’s right to be forgotten, you’d need to remove data from the training set and rerun the training. And that’s super costly, as those algorithms operate on huge amounts of data.
So if the European Union pushes them (and it eventually will) to comply with privacy regulations, it may mean fully reshaping their implementations.
Now it’s a battle of whether that’ll break the foundations or whether a lift and shift will be enough. What lift and shift may look like, we can see in Microsoft’s new announcement offering the OpenAI service on your data:
I guess that’s the trick: they don’t retrain the model when you remove data; instead, they use the same general model and enrich your prompt with a much bigger context built from your data.
So IMHO, it works as if you always prepended your prompt with data from your databases, etc.
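I don’t know Microsoft’s actual internals, but the general pattern I’m describing here is straightforward to sketch. The snippet below is a minimal, hypothetical illustration (the function name, documents, and prompt wording are all mine, not from any real API): you fetch the tenant’s data from your own store and prepend it to the user’s question, so the general model answers against your context without ever being retrained on it.

```python
def build_prompt(question: str, documents: list[str]) -> str:
    """Prepend data fetched from our own store to the user's question,
    so a general-purpose model can answer using tenant-specific context
    without that data ever entering its training set."""
    context = "\n".join(f"- {doc}" for doc in documents)
    return (
        "Answer using only the context below.\n"
        f"Context:\n{context}\n\n"
        f"Question: {question}"
    )


# Hypothetical records, as if loaded from our database
docs = [
    "Order #42 shipped on 2023-06-20.",
    "Customer tier: premium.",
]
prompt = build_prompt("When did order #42 ship?", docs)
print(prompt)
```

Deleting a customer’s data then simply means removing their rows from the store; the next prompt is built without them, which sidesteps the cost of re-running the training, at least for this class of privacy requests.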
Understanding how the sausage is made isn’t always critical. Still, it’s a must when we need to understand how a technology works. We should carefully define how far down the rabbit hole to go so we don’t waste time on nitty-gritty details. Danica Fine’s talk, showing how Kafka processes requests end-to-end, is a decent example of how far we should go: it presents a good-enough level of detail to get a feel for the technology.
Another good example is Mihir Sathe’s article about load balancing:
An explanation of the differences between AWS messaging tools:
On a different topic, we again have big security-related issues: one is downtime at Let’s Encrypt, and the other hits IoT devices with OpenSSH issues:
The first article, especially, is an excellent walkthrough of the investigation Andrew Ayer made to discover the root causes of the downtime.
Last but not least, check out a great article from Jeremy D. Miller on how Wolverine, together with Marten, can make building your multi-tenant apps much, much easier:
I think that’s a rare feature among messaging tools.
Check out the other links, too!
p.s. I invite you to join the paid version of Architecture Weekly. It already contains the exclusive Discord channel for subscribers (and my GitHub sponsors), monthly webinars, etc. It is a vibrant space for knowledge sharing. Don’t wait to be a part of it!
p.s.2. Ukraine is still under brutal Russian invasion. A lot of Ukrainian people are hurt, without shelter and need help. You can help in various ways, for instance, directly helping refugees, spreading awareness, and putting pressure on your local government or companies. You can also support Ukraine by donating, e.g. to the Ukraine humanitarian organisation, Ambulances for Ukraine or Red Cross.