The future is fragile
“At this point someone might have a better chance of understanding the Willy Wonka factory than this.” - some guy in a helmet
This topic’s been on my mind for a while, and it has come up among my colleagues a few times: the idea that things are getting more and more complex. In infosec at least, there are tools upon tools that can assist with a variety of tasks. It has gotten to the point where being a script kiddie can get you remarkably far.
The issue comes from not understanding how it all works. OffSec goes out of its way to force you to understand how things work in its courses, but for the most part the world is happy to just pile on complexity and then abstract it away behind a few commands and an interface. Is this bad? Is it good? I don’t know. The same, presumably, happened to cars. I have no interest in cars; I just have a cheap and reliable car that gets me places and plays my podcasts, and that’s just about all I can ask of it. But I don’t work on the car. If it makes a funny noise I ̶j̶u̶s̶t̶ ̶i̶g̶n̶o̶r̶e̶ ̶i̶t̶ ̶a̶n̶d̶ ̶h̶o̶p̶e̶ ̶i̶t̶ ̶g̶o̶e̶s̶ ̶a̶w̶a̶y̶ take it to a mechanic. But I do work in tech, more specifically in infosec, where I’m supposed to know how everything works, for the most part better than those who designed it in the first place. Both of us rely on tools. They rely on frameworks and libraries; I rely on things like Kali and Metasploit.
This introduces real fragility through dependency chains. We’ve seen cases where one library maintainer changes a single line of code and breaks hundreds of other libraries that use it as a dependency. I’ve broken my VMs countless times, and I was often lost without them; installing some of those tools is a proper pain. The stakes are higher and the system is more fragile, but productivity increases by leaps and bounds on both sides. Again, I can’t say if this is good or bad.
A colleague once told me, “give it another decade or two, and even us tech workers will be just stewards to the machines.” If AI keeps advancing, it seems like an inevitable future. We’ll be pitting AI systems against each other, like some sort of cyber Pokémon battle. We will be much like the Adeptus Mechanicus in WH40k, who hardly understand what they’re doing and just repeat the processes with ritualistic rigour.
Complexity and streamlining are a powerful weapon. Used well, they can accelerate progress tremendously. We should just be aware of the cost. We’ve already seen a few instances where Amazon kicks the bucket (pun intended) and a third of the internet collapses. We’ve seen what a one-hour Google service outage can feel like. The people of Ukraine have seen what a malware attack can do to their infrastructure; Maersk has seen what ransomware can do to its business. Have we, as a collective, learned anything from this? What was there even to learn? A lot of companies have no choice but to take on increased complexity and risk, because otherwise their competitors will, and they have to accept the risk of blowing up. Circumstances can change very fast - look at Robinhood during the GME debacle.
Are we left without a choice? Either be competitive and fragile, or get crumpled by those who are? Food for thought.