Architecture Weekly #183 - 10th June 2024
Welcome to the new week!
In war, love, and managing processes, all tricks are allowed. Business processes are usually the most critical part of the core functionality, so we need to be able to diagnose them correctly. We also need to ensure that they won’t get stuck in the middle, unable to resume or terminate their work.
Scheduled messages are one way to solve this, but there are other options. In my latest article, I described how to combine the To-Do List and Passage of Time patterns into a straightforward way to handle process deadlines efficiently:
In a nutshell, we're creating a read model with pending items (a.k.a. the To-Do List) and subscribing to events representing the passage of time to check whether there are items to handle. Read the article to see all the nuances.
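To make that more tangible, here’s a minimal sketch of the combination in TypeScript. All the names here (PendingReservation, TimeHasPassed, the in-memory list) are illustrative assumptions rather than the article’s actual code:

```typescript
// The To-Do List: a read model with one entry per process that still awaits completion.
type PendingReservation = {
  reservationId: string;
  deadline: Date;
};

// The Passage of Time: an event published periodically (e.g. every minute) by a scheduler.
type TimeHasPassed = {
  type: 'TimeHasPassed';
  occurredAt: Date;
};

// In-memory stand-in for the read model; in practice this would be a database
// table or projection kept up to date by the process's own events.
const pendingReservations: PendingReservation[] = [];

// Handler subscribed to the passage-of-time event: find overdue items and
// trigger timeout handling for each of them.
async function onTimeHasPassed(event: TimeHasPassed): Promise<void> {
  const overdue = pendingReservations.filter(
    (item) => item.deadline.getTime() <= event.occurredAt.getTime()
  );

  for (const item of overdue) {
    // In a real system this would publish a command or event that cancels or
    // escalates the stale reservation and removes it from the To-Do List.
    console.log(`Reservation ${item.reservationId} passed its deadline`);
  }
}

// A naive scheduler emitting the passage-of-time event every minute.
setInterval(() => {
  void onTimeHasPassed({ type: 'TimeHasPassed', occurredAt: new Date() });
}, 60_000);
```

The key point is that the deadline check is driven by the read model plus the time event, not by per-process timers or scheduled messages.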
GraphQL is one of those love-and-hate topics in our industry. Discussions around it are too often based on personal preferences and pet peeves; arguments drift to the emotional level instead of being grounded in specific usage contexts. That’s why I enjoyed the recent blogging exchange between people who have spent a few years working with it:
Matt Bessey wrote his rant after being burned by building public APIs with GraphQL. He nicely shows the challenges of designing an API that guards itself against typical attack vectors: authorizing access to related entities, query performance, etc. All of that can bring much more complexity than we typically anticipate.
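To illustrate one of the guards Matt describes, here’s a minimal sketch of rejecting overly deep queries before executing them, using the AST parser from the graphql package. The depth limit of 5 and the example query are arbitrary assumptions on my side, not anything from his post:

```typescript
import { parse, Kind, type DocumentNode, type SelectionSetNode } from 'graphql';

// An arbitrary example limit; a real API would tune this per schema.
const MAX_DEPTH = 5;

// Count how deeply selection sets are nested (the root selection set counts as 1).
function selectionDepth(selectionSet: SelectionSetNode | undefined): number {
  if (!selectionSet) return 0;
  return (
    1 +
    Math.max(
      0,
      ...selectionSet.selections.map((selection) =>
        'selectionSet' in selection ? selectionDepth(selection.selectionSet) : 0
      )
    )
  );
}

// Parse the query and reject it if any operation exceeds the allowed depth.
function assertQueryDepth(query: string): DocumentNode {
  const document = parse(query);
  for (const definition of document.definitions) {
    if (definition.kind === Kind.OPERATION_DEFINITION) {
      const depth = selectionDepth(definition.selectionSet);
      if (depth > MAX_DEPTH) {
        throw new Error(`Query depth ${depth} exceeds the allowed maximum of ${MAX_DEPTH}`);
      }
    }
  }
  return document;
}

// Example: a nested query that a public API may want to reject.
try {
  assertQueryDepth(`
    query {
      user {
        friends { friends { friends { friends { name } } } }
      }
    }
  `);
} catch (error) {
  console.error((error as Error).message);
}
```

A production-grade guard would also follow fragment spreads and weigh query cost, not just depth, but the sketch shows the kind of extra machinery public GraphQL endpoints tend to need.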
Marc-Andre Giroux agreed with the conclusion that GraphQL may not be the best choice for public APIs. For those, it may be better to design the intended flows explicitly and use a carefully crafted REST + OpenAPI combination. GraphQL shines most in Backend For Frontend designs, where we want data access flexibility, but still within some boundaries.
I agree with Marc-Andre that some of Matt's cases are general issues that can and will also happen with REST APIs. I’ve been burned multiple times trying to design a dashboard REST API, and it’s always a nightmare. It always requires a lot of thinking, proper boundary design, and performance considerations. Choosing one way of accessing data over the other won’t magically remove that need.
Staying with the complexity of Identity and Access Management, Mat Duggan outlined the challenges of setting it up with the big cloud providers.
He explained how the constantly changing set of permissions for cloud services can impact security. For instance, you may carefully craft roles and assign people to them, but then, being in a rush or trying to keep up with changes in the permission set, you add more and more to those roles. By that, you lose control over who should be doing what. He provided a somewhat radical but intriguing idea based on his past projects:
What is this obvious solution? You, an application developer, need to launch a new service. I give you a service account that lets you do almost everything inside of that account along with a viewer account for your user that lets you go into the web console and see everything. You churn away happily, writing code that uses all those new great services. Meanwhile, we're tracking all the permissions your application and you are using.
At some time interval, 30 or 90 or whatever days, my tool looks at the permissions your application has used over the last 90 days and says "remove the global permissions and scope it to these". I don't need to ask you what you need, because I can see it. In the same vein I do the same thing with your user or group permissions. You don't need viewer everywhere because I can see what you've looked at.
Both GCP and AWS support this and have all this functionality baked in. GCP has the role recommendations which tracks exactly what I'm talking about and recommends lowering the role. AWS tracks the exact same information and can be used to do the exact same thing.
Again, I’d be careful with that, especially for publicly facing projects, but for internal APIs, it might not be as crazy as it seems.
I can confirm that most people don’t care about granular permissions; they either try to guess them or grant the highest possible permissions. Because of that, if I applied Mat’s proposal, I’d still do a review to ensure that the minimum required scopes are used and that any broader permissions are justified. But then, of course, we may fall into the same trap as initially described…
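Conceptually, Mat’s approach boils down to comparing what was granted with what was actually used. Here’s a minimal TypeScript sketch of that idea; the data shapes and function names are my own assumptions, and in practice the usage records would come from the cloud provider’s access logs:

```typescript
type AccessRecord = {
  principal: string;   // service account or user
  permission: string;  // e.g. 'storage.objects.get'
  usedAt: Date;
};

// Permissions actually exercised by the principal within the review window.
function usedPermissions(
  records: AccessRecord[],
  principal: string,
  windowDays: number,
  now = new Date()
): Set<string> {
  const cutoff = new Date(now.getTime() - windowDays * 24 * 60 * 60 * 1000);
  return new Set(
    records
      .filter((r) => r.principal === principal && r.usedAt >= cutoff)
      .map((r) => r.permission)
  );
}

// The scoped-down proposal: keep only what was used; everything else is flagged
// for removal (and, as argued above, reviewed by a human before applying).
function proposeScopedPolicy(granted: string[], used: Set<string>) {
  return {
    keep: granted.filter((p) => used.has(p)),
    removalCandidates: granted.filter((p) => !used.has(p)),
  };
}

// Example: the service account was granted three permissions but only used one.
const records: AccessRecord[] = [
  { principal: 'svc-orders', permission: 'storage.objects.get', usedAt: new Date() },
];
const used = usedPermissions(records, 'svc-orders', 90);
console.log(
  proposeScopedPolicy(
    ['storage.objects.get', 'storage.objects.delete', 'pubsub.topics.publish'],
    used
  )
);
```

The sketch only captures the idea; the real usage data would come from the tooling Mat mentions, such as GCP role recommendations or AWS’s tracking of used permissions.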
Last week, I linked Kevin Beaumont’s coverage of the huge privacy and security hole in Windows Copilot+ Recall. It appears that Microsoft got enough backlash to walk the change back and make it opt-in instead of opt-out.
Just to recap: Recall is the new AI feature that periodically takes screenshots of your desktop, effectively acting as a potential keylogger. Yup, if you’re unlucky and your machine is compromised, screenshots with personal information, passwords, etc. can leak out. Or even worse, another Windows user with admin rights could access and steal those snapshots…
I’m not sure how anyone thought such a stupid idea could be a good choice. Let’s give the floor to Pavan Davuluri, “Corporate Vice President, Windows + Devices”:
Our team is driven by a relentless desire to empower people through the transformative potential of AI and we see great utility in Recall and the problem it can solve. We also know for people to get the full value out of experiences like Recall, they have to trust it. That’s why we are launching Recall in preview on Copilot+ PCs – to give customers a choice to engage with the feature early, or not, and to give us an opportunity to learn from the types of real world scenarios customers and the Windows community finds most useful.
Of course…
Nevertheless, it’s good to apply pressure. From now on, you’ll need to explicitly enable Windows Hello to use Recall, authenticate each time you open the Recall app to view your data, and the SQLite database will finally be encrypted. Yup, they’re adding that only now, after the feature was already built and deployed. What a mess.
Of course, the question is how long this change will last. Given their mad push into Gen AI, they’ll probably find a different way to get people to use it.
Are you building Event Sourcing applications in Node.js? Are you tired of maintaining your homebrew solutions? I plan to add PostgreSQL storage to Emmett (plus other features like telemetry, etc.).
Getting sponsorship would help me prioritise that effort. As I’ve built a few solutions like that already, I think it would be cheaper in the long term to outsource this to me rather than maintain it on your own.
If sponsoring is not an option for now, I'm still happy to take your feedback on what would make you consider moving from a homebrew event store to the one in Emmett.
I have already started working on the subscription API; it’ll be based on the WebStreams standard to make it available both in the browser and in the Node.js backend. If you don’t know the standard yet, check:
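And to give a taste of what that could look like, below is a minimal sketch of consuming events through a ReadableStream. The event shape and the toy stream are made-up assumptions for illustration; this is not Emmett’s actual subscription API:

```typescript
// A toy event shape; the real envelope will differ.
type EventEnvelope = { type: string; data: unknown };

// A toy stream producing a few events; a real subscription would push events
// read from the event store instead.
const events = new ReadableStream<EventEnvelope>({
  start(controller) {
    controller.enqueue({ type: 'ReservationMade', data: { id: '42' } });
    controller.enqueue({ type: 'ReservationCancelled', data: { id: '42' } });
    controller.close();
  },
});

// The same consumption code runs in the browser and in Node.js (18+),
// as both implement the WebStreams standard.
async function consume(stream: ReadableStream<EventEnvelope>): Promise<void> {
  const reader = stream.getReader();
  while (true) {
    const result = await reader.read();
    if (result.done) break;
    console.log(`Handling ${result.value.type}`, result.value.data);
  }
}

void consume(events);
```

Note that the reading loop works unchanged in both runtimes, which is exactly why the standard is attractive for the subscription API.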
And if you’d like to learn more about how cool Node.js is, check out a great walkthrough by Matteo Collina:
At the last Domain-Driven Design Europe, besides running an advanced Event Sourcing workshop, I also had the pleasure of being an MC and introducing two talks. One of them was by Mufrid Krilic; here’s the recording of the version he gave at the KanDDDinsky conference:
There are not many talks sharing insights from crunching the business domain. There’s a reason for that: it’s not easy to show trade-offs without accidentally presenting them as best practices. Mufrid did a good job explaining the tools they used and the modelling process that helped them build a focused model. It also made me want to prioritise learning Domain Storytelling.
Also, check out the other links!
Cheers
Oskar
p.s. I invite you to join the paid version of Architecture Weekly. It already contains the exclusive Discord channel for subscribers (and my GitHub sponsors), monthly webinars, etc. It is a vibrant space for knowledge sharing. Don’t wait to be a part of it!
p.s.2. Ukraine is still under brutal Russian invasion. Many Ukrainian people are hurt, without shelter, and in need of help. You can help in various ways, for instance, by directly helping refugees, spreading awareness, and putting pressure on your local government or companies. You can also support Ukraine by donating, e.g. to the Ukraine humanitarian organisation, Ambulances for Ukraine, or Red Cross.
Architecture
Marc-Andre Giroux - Why, after 8 years, I still like GraphQL sometimes in the right context
📺 How About Tomorrow? Podcast - What Does “Full Stack” Mean? w/ Taylor Otwell and Ryan Florence
📺 Mufrid Krilic - Multiple Models with Multiple Perspectives in a Cross-Functional Team
Ralf Westphal - Integration Operation Segregation Principle (IOSP)
DocuEye - A tool that lets you visualize views and documentation created using Structurizr DSL
Database
Stripe - How Stripe’s document databases supported 99.999% uptime with zero-downtime data migrations
Lukas Fittl - Understanding Postgres GIN Indexes: The Good and the Bad
Testing
Azure
Node.js
.NET
Coding Life
Management
Security
Sam Curry - Hacking Millions of Modems (and Investigating Who Hacked My Modem)
ArsTechnica - Microsoft is reworking Recall after researchers point out its security problems
Microsoft - Update on the Recall preview feature for Copilot+ PCs