Friday, May 17, 2024

OOP for FP programmers, part 2: Why bother when we can pass functions?

I am still exploring, but since the previous post I have had difficulty finding a use case in which object-orientation provides any functionality or simplicity that function parameters do not. So I intend to test the hypothesis that object-oriented programming offers nothing over function parameters.

The point of OOP is inheritance-based polymorphism. (Polymorphism, I think, means "choosing which entity is called at runtime", but that would technically define if/then statements as polymorphic, so my definition might not be specific enough.) OOP's inherited classes can be passed into any function that takes the parent type. But you can do the same thing by taking a function of a specific signature (or an object of specified functions, the equivalent of an interface). OOP does offer the ability to find all inheritors of a class or an interface, but the functional programmer can do the same thing by extracting a record of the function parameters (again, the equivalent of an interface) and then finding all uses of that. A class also has the advantage of a debug-visible name; I highly doubt that debuggers will start recording the names of functions passed as parameters anytime soon, so the best alternative would be to pass with each function a string describing it (ugh).
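
To make that concrete, here is the comparison sketched in C#. The Discount and Checkout names are invented for illustration, and this is only a sketch of the equivalence, not a recommendation either way.

    using System;

    // OOP style: any subclass can be passed wherever the parent type is expected.
    abstract class Discount
    {
        public abstract decimal Apply(decimal price);
    }

    class HolidayDiscount : Discount
    {
        public override decimal Apply(decimal price) => price * 0.8m;
    }

    static class Checkout
    {
        // Inheritance-based polymorphism: the callee names a parent type.
        public static decimal Total(decimal price, Discount discount) =>
            discount.Apply(price);

        // Function-parameter style: the callee names a signature instead.
        public static decimal Total(decimal price, Func<decimal, decimal> discount) =>
            discount(price);
    }

    class Program
    {
        static void Main()
        {
            Console.WriteLine(Checkout.Total(100m, new HolidayDiscount())); // prints 80.0
            Console.WriteLine(Checkout.Total(100m, price => price * 0.8m)); // prints 80.0
        }
    }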

So class inheritance does not appear necessary. Is it useful? I intend to use the scientific method to find out. I determine never to type a colon character in C# until I find a good use for OOP. I leave myself two exceptions: I may use inheritance for compatibility with existing systems and for domain modeling with immutable data.

Wednesday, May 15, 2024

OOP for FP programmers, part 1

The following content is egregiously oversimplified.

I learned to program in Python, where classes are a sign of overthinking, and then worked in F#. Now I need to learn object-oriented programming to work in C#. This is difficult because:

  1. No one ever explains object-orientation in a way that functional programmers can understand.
  2. Many OOP and FP constructs are semantically quite similar, yet not quite identical.
  3. No one ever compares OOP and FP constructs; at best, they ask whether we should use object-oriented or functional programming, a highly questionable question.

Here is what I have so far:

Polymorphism is accomplished using discriminated unions, inheritance, and runtime type checks. Runtime type checks are a Wrong Thing because they do not give you exhaustive matching, whereas either of the other styles requires you to say somewhere that you are purposely choosing not to implement logic. Discriminated unions require definition of all subtypes in one place but allow adding arbitrary use cases elsewhere. Inheritance requires definition of all use cases in one place but allows adding arbitrary subtypes later. Discriminated unions can be faked using Church encoding or the Visitor pattern. So you should use inheritance if you want to see all defined behaviors in one place, but discriminated unions if you want to see all defined subtypes in one place.
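
For my own reference, here is that trade-off sketched in C#. Since C# has no native discriminated unions, I am faking one with a closed record hierarchy plus pattern matching (which loses compiler-checked exhaustiveness); the Shape names are invented.

    using System;

    // "DU" style: all subtypes are defined in one place...
    public abstract record Shape;
    public sealed record Circle(double Radius) : Shape;
    public sealed record Square(double Side) : Shape;

    public static class Geometry
    {
        // ...and each behavior is a function that matches on them.
        // Adding a new Shape means revisiting every switch like this one.
        public static double Area(Shape shape) => shape switch
        {
            Circle c => Math.PI * c.Radius * c.Radius,
            Square s => s.Side * s.Side,
            _ => throw new ArgumentOutOfRangeException(nameof(shape))
        };
    }

    // Inheritance style: all behaviors are declared in one place (the base class)...
    public abstract record ShapeOop
    {
        public abstract double Area();
    }

    // ...and each subtype implements them. Adding a new behavior means
    // revisiting every subtype.
    public sealed record CircleOop(double Radius) : ShapeOop
    {
        public override double Area() => Math.PI * Radius * Radius;
    }

    public sealed record SquareOop(double Side) : ShapeOop
    {
        public override double Area() => Side * Side;
    }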

Inheritance works only on the instance level. Static inheritance is probably theoretically possible but would be a Wrong Thing for reasons I do not yet understand. The proper OOP solution is to model all polymorphic behaviors themselves as single-method classes with no state, at which point the Jack Diederich in my head tells me to convert them to functions and just pass in a function parameter. I have not tried enough times to know yet why he is wrong.
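
In the meantime, here is what the two versions look like side by side in C#, with invented names: the stateless single-method class, and the bare function parameter it collapses into.

    using System;
    using System.Collections.Generic;
    using System.Linq;

    // The OOP version: a polymorphic behavior as a single-method, stateless class.
    public interface IPricingRule
    {
        decimal Adjust(decimal price);
    }

    public sealed class TenPercentOff : IPricingRule
    {
        public decimal Adjust(decimal price) => price * 0.9m;
    }

    public static class Pricing
    {
        public static IEnumerable<decimal> Apply(IEnumerable<decimal> prices, IPricingRule rule) =>
            prices.Select(rule.Adjust);

        // The version the Jack Diederich in my head prefers: just take the function.
        public static IEnumerable<decimal> Apply(IEnumerable<decimal> prices, Func<decimal, decimal> rule) =>
            prices.Select(rule);
    }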

Classes hold encapsulated state. They also hold the behaviors that may operate on that state. They also hold polymorphic behaviors, with or without state. They are also a means of organizing static methods. Each class should have its own file all to itself to reflect the class's status as the basic unit of programming. You cannot program in OOP effectively unless you dream at night of the soft embrace of the class.
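
A tiny illustration of those roles, with names made up on the spot:

    // Encapsulated state, the behaviors that touch it, and a home for related
    // static methods, all in one class.
    public sealed class Thermostat
    {
        private double _targetCelsius;                                // encapsulated state

        public Thermostat(double targetCelsius) => _targetCelsius = targetCelsius;

        public double TargetCelsius => _targetCelsius;

        public void Nudge(double delta) => _targetCelsius += delta;   // behavior on that state

        // The class doubling as a namespace for a related static method.
        public static double FahrenheitToCelsius(double fahrenheit) =>
            (fahrenheit - 32) * 5.0 / 9.0;
    }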

Interfaces are objects of functions, sometimes with properties as well. They serve the same role as function parameters except without duck typing. Interfaces are different from function (and ref) parameters in that they can hold multiple functions, so we should probably use function parameters at first and then extract multiple function parameters into interfaces. Except that the Interface Segregation Principle might imply that at least most interfaces should have a single method?
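
Here is the refactor I am imagining, sketched in C# with invented names: start with function parameters, then bundle them into an interface once there are several of them.

    using System;

    // Before: dependencies passed as individual function parameters.
    public static class ReportRunner
    {
        public static string Run(Func<string> loadData, Action<string> log)
        {
            log("loading report data");
            return loadData().ToUpperInvariant();
        }
    }

    // After: the same dependencies extracted into an interface
    // (an "object of functions" in the sense above).
    public interface IReportDependencies
    {
        string LoadData();
        void Log(string message);
    }

    public static class ReportRunner2
    {
        public static string Run(IReportDependencies deps)
        {
            deps.Log("loading report data");
            return deps.LoadData().ToUpperInvariant();
        }
    }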

This all seems highly complicated. I don't know why anyone goes to the trouble to learn OOP instead of matching on DUs and passing functions around. But if there is a reason, I want to find out.

Monday, May 13, 2024

Reverting postmodernism

Is it just me, or has Western culture kind of forgotten that postmodernism was ever a thing?

(Feel free to correct my facts or definitions. I am dabbling in a field about which I know little because I know no one else who is. Yes, I know that this reasoning is circular. I could say that this post is a research effort weaponizing Cunningham's Law, but that would depend on this blog having readers. So I am just writing because writing feels like the right thing to do.)

According to my understanding, we used to operate based on Modernism. We believed that we could understand the world and tweak it for the benefit of humanity, or for whatever it was that we wanted.

From modernism, we logically discarded the belief in the inherent value of the natural as opposed to the synthetic. (I am still looking for the term for this philosophy; "naturalism" is already taken.) We did so because we had already rejected a designed universe, and we figured that evolution's ad-hoc systems should be easy for us to improve. So we prescribed invasive and radical medical procedures freely. These had some unexpected side effects; science advanced and picked fights with its past selves.

And then we rebelled against the previous generation, discovered that nature is a lot more complicated than we thought, and popularized holistic, natural remedies to avoid the consequences of hubris. At some point the literary deconstructionists had a collective existential crisis and started questioning the preconditions of intelligibility as applied to communication and arrived at some sort of auto-simulation hypothesis. And thus Postmodernism came to replace Modernism. We determined that truth was unknowable and therefore relative. We heard "I think, therefore I am" and thought, "How interesting that my perception has told me a story about a man called Descartes. I wonder what this reveals about the pulsing mass of tubes that lives inside my skull. Or perhaps there are no tubes."

And now, with COVID-19 and global warming and whatnot, we are using "anti-science" as a slur, which clearly reflects a belief in some sort of absolute truth. Transhumanism and body-modding seem to be popular. "Natural remedies" means "anti-vax" and "anti-vax" means "conspiracy theorist". So we seem to believe that we can reliably understand and restructure at least the human body. I guess we are back on modernism now?

So are the doctors or the literary critics correct? We cannot reconcile modernism and postmodernism, at least if postmodernism precludes epistemology. If human communication cannot reliably indicate the intention of its author, then science is impossible since we rely on the discoveries and publications of others. If we are going to operate under the assumption that we can know things, should we not say then that the deconstructionists are wrong? And if the deconstructionists are wrong, maybe we should reconsider postmodern literature? Why does no one acknowledge the apparent contradiction? I cannot even find anyone criticizing literary deconstructionism with an Internet search.

I am confused.

Friday, May 3, 2024

If "worse is better", then why monoliths?

Richard Gabriel, in his essay The Rise of "Worse Is Better", argues that systems should be written first for functionality and only later for correctness--that is, for consistency and completeness.

His reasons:

  1. Systems that focus on practical use are less computationally expensive to run, and thus receive widespread adoption on underpowered machines.
  2. Because of the assumption of inconsistency, users will become accustomed to needing to know something about the implementation of the system. (This also ameliorates leaky abstractions.)
  3. Components will be kept small and reused ("because a New Jersey language and system are not really powerful enough to build complex monolithic software").

It took a while, but I have finally begun to agree with this way of thinking. "Make it work, make it right, make it fast" seems to demand first making a pile of utilities and then conglomerating it into a language or framework or IDE or whatnot when that becomes useful. The world is still under development, and I can help, so I should build instead of complaining. Systems are not complete because they are never complete; if they were, they would have nothing for me to add.

But now, after rereading the conclusion of the article, I have realized that I mostly use and work on kitchen-sink "workflow" programs that have received none of the stated benefits of "worse is better":

  1. They are computationally expensive and supported only on certain architectures.
  2. Their users have no idea how they work or why certain interactions are more likely to produce bugs.
  3. They feature primarily GUI interfaces, so they cannot easily be integrated into other systems; nor do they apparently integrate other programs themselves, since each one features yet another awful, flashy, underpowered text editor.

So why do we have monolithic programs? Do users really desire all those details to be hidden and thus inaccessible? What do they do when the program fails to work as intended, anyway? Have Oracle and Microsoft pushed us into thinking in terms of monoliths against our better interests so that they can sell us products related to their platforms? Or do we have good reasons to build programs that each do fifteen different things?

And if we have indeed written monolithic programs, then have we also perhaps written needlessly bloated languages? Should we reconsider our usual implicit assumption that everything should be done in the same language, which implies that each language must cover all use cases?
