Friday, May 3, 2024

If "worse is better", then why monoliths?

Richard Gabriel, in his essay The Rise of "Worse Is Better", argues that systems should be written first for functionality and only later for correctness, consistency, and completeness.

His reasons:

  1. Systems that focus on practical use are less computationally expensive, and thus see widespread adoption even on underpowered machines.
  2. Because of the assumption of inconsistency, users will become accustomed to needing to know something about the implementation of the system. (This also ameliorates leaky abstractions.)
  3. Components will be kept small and reused ("because a New Jersey language and system are not really powerful enough to build complex monolithic software").

It took a while, but I have finally begun to agree with this way of thinking. "Make it work, make it right, make it fast" seems to demand first making a pile of utilities and then conglomerating it into a language or framework or IDE or whatnot when that becomes useful. The world is still under development, and I can help, so I should build instead of complaining. Systems are not complete because they are never complete; if they were, they would have nothing for me to add.
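To make that concrete, here is a minimal sketch of what "a pile of utilities" might look like in Python; the names and the way the pieces are split are my own illustration, not anything taken from Gabriel's essay:

    import sys

    # Three small pieces, each simple enough to reuse or discard independently.
    def read_lines(stream):
        for line in stream:
            yield line.rstrip("\n")

    def nonempty(lines):
        return (line for line in lines if line.strip())

    def count(items):
        return sum(1 for _ in items)

    if __name__ == "__main__":
        # Composed only at the point of use; nothing here demands a framework yet.
        print(count(nonempty(read_lines(sys.stdin))))

Only once scraps like these keep recurring does it pay to conglomerate them into something larger, which is the order the slogan suggests.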

But now, after rereading the conclusion of the article, I have realized that I mostly use and work on kitchen-sink "workflow" programs that have received none of the stated benefits of "worse is better":

  1. They are computationally expensive and supported only on certain architectures.
  2. Their users have no idea how they work or why certain interactions are more likely to produce bugs.
  3. They are primarily GUI programs, so they are not easily integrated into other workflows; nor do they integrate existing programs, since each one ships yet another awful, flashy, underpowered text editor.

So why do we have monolithic programs? Do users really desire all those details to be hidden and thus inaccessible? What do they do when the program fails to work as intended, anyway? Have Oracle and Microsoft pushed us into thinking in terms of monoliths against our better interests so that they can sell us products related to their platforms? Or do we have good reasons to build programs that each do fifteen different things?

And if we have indeed written monolithic programs, then have we also perhaps written needlessly bloated languages? Should we reconsider our usual implicit assumption that everything should be done in the same language, which implies that each language must cover all use cases?
