by Ididntdothis on 4/10/2020, 8:38:05 PM
by smitty1e on 4/10/2020, 4:48:14 PM
Great article.
Recalls Gall's Law[1]. "A complex system that works is invariably found to have evolved from a simple system that worked."
Also, TFA invites a question: if handed a big ball of mud, is it riskier to start from scratch and go for something more triumphant, or try to evolve the mud gradually?
I favor the former, but am quite often wrong.
by mannykannot on 4/10/2020, 4:51:07 PM
Big balls of mud result from a process that resembles reinforcement learning, in that modifications are made with a goal in mind and with testing to weed out changes that are not satisfactory, but without any correct, detailed theory about how the changes will achieve the goal without breaking anything.
by carapace on 4/10/2020, 5:26:43 PM
"Introduction to Cybernetics" W. Ross Ashby
http://pespmc1.vub.ac.be/ASHBBOOK.html
> ... still the only real textbook on cybernetics (and, one might add, system theory). It explains the basic principles with concrete examples, elementary mathematics and exercises for the reader. It does not require any mathematics beyond the basic high school level. Although simple, the book formulates principles at a high level of abstraction.
by xyzzy2020 on 4/10/2020, 6:19:09 PM
I think this is useful even for systems (SW stacks) that are much smaller and "knowable": you start by observing, trying small things, observing more, trying different things, observing still more, and slowly building a mental model of what is likely happening and where.
His defining distinction is whether you can permanently work around a bug (not know it, but know _of_ it) versus find it, know it, and fix it.
Very interesting.
by jborichevskiy on 4/10/2020, 6:04:12 PM
> If you run an even-moderately-sophisticated web application and install client-side error reporting for Javascript errors, it’s a well-known phenomenon that you will receive a deluge of weird and incomprehensible errors from your application, many of which appear to you to be utterly nonsensical or impossible.
...
> These failures are, individually, mostly comprehensible! You can figure out which browser the report comes from, triage which extensions might be implicated, understand the interactions and identify the failure and a specific workaround. Much of the time.
> However, doing that work is, in most cases, just a colossal waste of effort; you’ll often see any individual error once or twice, and by the time you track it down and understand it, you’ll see three new ones from users in different weird predicaments. The ecosystem is just too heterogenous and fast-changing for deep understanding of individual issues to be worth it as a primary strategy.
Sadly far too accurate.
by naringas on 4/11/2020, 1:09:52 AM
I firmly believe that in theory all computer systems can be understood.
But I agree with him that it has become impractical to do so. I just don't like it personally; I got into computing because it was supposed to be the most explainable thing of all (until I worked with the cloud and it wasn't).
I highly doubt that the original engineers who designed the first microchips and wrote the first compilers, etc... relied on 'empirical' tests to understand their systems.
Yet he is absolutely correct that it can no longer be understood, and when I wonder why, I think the economic incentives of the industry might be one of the reasons.
For example, the fact that chasing crashes down the rabbit hole is "always a slow and inconsistent process" will make any managerial decision maker feel rather uneasy. That makes sense.
Imagine if the first microprocessors were made by incrementally and empirically throwing together different logic gates until they just sort of worked?
by woodandsteel on 4/10/2020, 9:42:56 PM
From a philosophical perspective, I would say this is an example of the inherent finitudes of human understanding. And I would add that such finitudes are deeply intertwined with many other basic finitudes of human existence.
by lucas_membrane on 4/11/2020, 8:43:55 AM
I suspect that systems that defy understanding demonstrate something that ought to be a corollary of the halting problem: just as you can't figure out for sure how long an arbitrary system will take to halt, or even whether it will halt at all, neither can you figure out how long it will take to work out what's going on when an arbitrary system reaches an erroneous state, or even whether you can work it out at all.
by natmaka on 4/11/2020, 3:34:10 AM
Postel's Robustness principle seems pertinent, along with "The Harmful Consequences of the Robustness Principle". https://tools.ietf.org/id/draft-thomson-postel-was-wrong-03....
by INTPnerd on 4/11/2020, 3:20:19 AM
Even if you can reason about the code enough to come to a conclusion that seems like it must be true, that doesn't prove your conclusion is correct. When you figure something out about the code, whether through reason and research or through tinkering and logging/monitoring, you should embed that knowledge into the code, and use releases to production as a way to test whether you were right or not.
For example, in PHP I often find myself wondering if perhaps a class I am looking at might have subclasses that inherit from it. Since this is PHP and we have a certain amount of technical debt in the code, I cannot 100% rely on a tool to give me the answer. Instead I have to manually search through the code for subclasses and the like. If after such a search I am reasonably sure nothing is extending that class, I will change it to a "final" class in the code itself. Then I will rerun our tests and lints. If I am wrong, eventually an error or exception will be thrown, and this will be noticed. But if that doesn't happen, the next programmer who comes along and wonders if anything extends that class (probably me) will immediately find the answer in the code: the class is final. This drastically narrows down what can possibly happen, which makes it much easier to examine the code and refactor or make necessary changes.
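A minimal sketch of what that looks like (the class and method names are made up for illustration, not taken from any real codebase):

    <?php
    // After manually searching the codebase and finding no subclasses,
    // mark the class final so the conclusion "nothing extends this"
    // is enforced by the language instead of remembered by a person.
    final class ReportRenderer
    {
        public function render(array $rows): string
        {
            return implode("\n", array_map('strval', $rows));
        }
    }

    // If a forgotten `class CsvReportRenderer extends ReportRenderer {}`
    // still exists somewhere, PHP now fails with a fatal error as soon as
    // that file is loaded, so the wrong assumption surfaces in tests or
    // lints rather than lingering as folklore.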
Another example: often you come across some legacy code that seems like it can no longer run (dead code). But you are not sure, so you leave the code in there for now. In harmony with this article, you might log or in some way monitor whether that path in the code ever gets executed. If, after trying out different scenarios to get it to run down that path, and after leaving the monitoring in place on production for a healthy amount of time, you come to the conclusion that the code really is dead code, don't just add this to your mental model or some documentation; embed it in the code as an absolute fact by deleting the code. If the deletion turns out to be wrong and manifests as a bug, it will eventually be noticed and you can fix it then.
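A rough sketch of that monitoring step, with a hypothetical function and log label (purely illustrative, not a specific library's API):

    <?php
    function applyLegacyDiscount(float $price, bool $isWholesale): float
    {
        if ($isWholesale) {
            // Suspected dead path: we believe no caller passes true anymore.
            // Log instead of deleting outright, then watch production logs.
            error_log('SUSPECTED-DEAD-CODE: applyLegacyDiscount wholesale branch hit');
            return $price * 0.8;
        }
        return $price;
    }

    // If the log line never shows up after a healthy amount of time in
    // production, delete the branch; the deletion itself then documents
    // the fact that the path was dead.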
By taking this approach you are slowly narrowing down what is possible and simplifying the code in a way that makes it an absolute fact, not just a theory or a model or a document. As you slowly remove this technical debt, you will naturally adopt rules like: all new classes must start out final, and may only be changed to non-final when you actually need to extend them. Eventually you will be in a position to adopt new tools, frameworks, and languages that narrow down the possibilities even more, further embedding the mental model of what is possible directly into the code.
by jerzyt on 4/10/2020, 6:08:46 PM
Great read. A lot of hard-earned wisdom!
by drvortex on 4/10/2020, 6:51:58 PM
What a long-winded article on what has been known to scientists for decades as "emergence". Emergent properties are system-level properties that are not obvious or predictable from the properties of individual components. Observing a single ant is unlikely to tell you that several of these creatures can build an anthill.
I often wonder if things would be better if systems were less forgiving. I bet people would pay more attention if the browser stopped rendering on JavaScript errors or malformed HTML/CSS. This forgiveness seems to encourage a culture of sloppiness which tends to spread. I have the displeasure of looking at quite a bit of PHP code. When I point out that they should fix the hundreds of warnings, the usual answer is "why? It works." My answer usually is "are you sure?"
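As a purely illustrative sketch of what "less forgiving" could look like in PHP (my own example, not from the comment): report everything and promote warnings/notices to exceptions, so code that "works" has to actually be clean.

    <?php
    // Hypothetical bootstrap snippet: surface every diagnostic and turn
    // warnings/notices into exceptions so sloppy code fails loudly
    // instead of limping along.
    error_reporting(E_ALL);

    set_error_handler(function (int $severity, string $message, string $file, int $line) {
        throw new ErrorException($message, 0, $severity, $file, $line);
    });

    // Now reading an undefined array key, for example, throws instead of
    // emitting a warning/notice and silently continuing with null.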
On the other hand maybe this forgiveness allowed us to build complex systems.