
The Two Cultures of Programming: Why Both Are Important

by Andy Plakhov, Yandex (February 2023)


For several years, I’ve observed that programmers and programming tools are divided into two distinct cultures, which I’ll contrast point by point below.

As someone who was originally part of the first culture, I used to dismiss the second culture as frivolous. But a few years ago, I finally realized how wrong I was. Many older developers share my former perspective. In recent years, even more people are making the same mistake, but from the opposite side. I’ve learned that understanding and getting to know the other culture will make you a better developer.

Culture 1: Values big projects

Culture 2: Values short, meaningful code snippets

This difference probably dictates the rest. Young developers may not fully grasp the scope of projects in the first culture. For instance, a modern AAA game written in a language with C-like syntax can have millions of lines of code, more than anyone could ever read. Linux, with over 15 million lines of code, is an even more prominent example. Windows and macOS are many times larger. Some say that car manufacturers have surpassed these numbers, with a Mercedes car running on up to 100 million lines of code. I’m not sure if this is true, but even if it is, most of these lines are probably redundant. In any case, even 100 million is nothing compared to the codebase of a FAANG company, which can contain literally billions of lines of code.

Even code that may seem “dumb,” such as typical business logic, can become complex when it comes in large volumes. It’s like writing a novel; typing words is easy, but they must come together to create a consistent, functioning plot with interesting characters. Without figuring out the high-level structure and principles, it simply won’t happen. And just because you can write a short story doesn’t mean you can write a book.

Maintaining a large codebase requires specific navigation and refactoring tools. These tools are easier to use in some languages than in others.

  • As an example, take the simple “go to definition” feature — the ability to jump to the code of a function from where it’s called. For code written in first-culture languages, it’s easy to implement such a feature: the function definition syntax is clear, and there’s a known, finite number of definitions, since functions typically can’t appear “on the fly.” Homonyms, if any, can be distinguished with a formal procedure of type and scope checking. IDEs like VS Code may try to offer “go to definition” in languages like Python and JS. However, this is only an imitation to some extent: since functions are first-class objects, there’s a many-to-many relationship between “function name” and “function body” (see the sketch after this list).
  • Or let’s consider member access control: private/public. Contrary to what you might think, this feature isn’t about security in the traditional sense; it’s essential for survival in large projects. It lets you draw a boundary: inside it, you define the API that interacts with a particular piece of code and upholds all the necessary invariants; outside it, you can only go through that API, with no way to accidentally break those invariants. Without this separation of access, in a project with millions of lines of code, any piece of code can inadvertently interfere with the work of another, making survival quite difficult.
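To make the first point concrete, here’s a minimal Python sketch (the names are made up) where “go to definition” has no single correct answer:

    def make_counter(step):
        if step >= 0:
            def advance(x):      # one body bound to the name "advance"
                return x + step
        else:
            def advance(x):      # a different body, same name
                return x - step
        return advance

    advance = make_counter(1)
    advance(10)                  # which definition should the IDE jump to?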

Again, someone from the first culture may think that the second culture is children’s play. But that would be fundamentally wrong.

In large projects, one-time overhead costs are insignificant, so creators of first-culture programming languages didn’t concern themselves with these costs. For example, it doesn’t matter how much space is occupied by a program that prints “Hello, World!”. A C++ programmer will start it with the #include <iostream> directive, then write a few more lines, and think nothing of it. A Java developer will have to define a special class first. A Python programmer will look at both of them as if they’re crazy: “Can’t you just write print("Hello, World!") as normal people do?”

This principle applies to any situation where a few lines of code should carry meaning. REPL and the first culture are hardly compatible; for example, Jupyter notebooks wouldn’t be possible in Java, despite the letter “J” in the name (theoretically, you could find ways to do it, but people from the first culture wouldn’t even think of it).

The same access control feature mentioned above is also hard to reconcile with the concept that objects are just boxes of heterogeneous data that can appear from anywhere. For example, they can appear during execution via “eval.” This makes the system surprisingly manageable and configurable at runtime; you can fix the code during debugging, replace it on the fly and continue execution, or have the system configuration in the same language that the whole system is written in.
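Here’s a toy Python illustration of that runtime flexibility (the class and functions are invented for the example):

    class Service:
        def greet(self):
            return "hello"

    svc = Service()
    print(svc.greet())                       # hello

    # Replace the method on the fly -- even mid-debugging session:
    Service.greet = lambda self: "hi there"
    print(svc.greet())                       # hi there

    # Or conjure new code from text at runtime:
    exec("def farewell(): return 'bye'")
    print(farewell())                        # bye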

It was a huge shock for me to find out that the implementation of transformer neural networks (the great and terrible) takes about 100 lines of Python code. Of course, this is very high-level code, and you can’t have these lines without PyTorch, NumPy, CUDA, etc. Nevertheless, such compact code is inconceivable in the first culture.
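For a taste of that density, here’s single-head scaled dot-product attention — the core operation of a transformer — in a few lines of PyTorch. This is a simplified sketch, not the full implementation referred to above:

    import torch
    import torch.nn.functional as F

    def self_attention(x, w_q, w_k, w_v):
        # x: (seq_len, d_model); w_q/w_k/w_v: (d_model, d_head)
        q, k, v = x @ w_q, x @ w_k, x @ w_v
        scores = (q @ k.transpose(-2, -1)) / (k.shape[-1] ** 0.5)
        return F.softmax(scores, dim=-1) @ v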

When code is this packed with meaning, the speed of development can increase dramatically in appropriate situations.

Culture 1: Values code speed

Culture 2: Values coding speed

For most programmers, except those who code in C/C++, it may seem strange to hear that the behavior of a program written in their language can be “undefined,” “unspecified,” or “implementation-defined,” and that the three are quite different. “Undefined” behavior (the worst one) means that the programmer has made a mistake, but it doesn’t necessarily mean that the program will throw an error. The standard officially allows the program to do whatever it wants when this happens, literally anything. But why would anyone need that?

Many of you already know the answer: it allows the compiler to do various low-level optimizations. For example, if the compiler sees that the definition of a macro or template has resulted in the expression (x+1 > x), it can freely replace it with “true.” But what if x == INT_MAX? Since signed integer overflow is undefined behavior, the compiler reserves the right to ignore this exotic case. With this simple example, we notice something frightening: during the execution of the program, there isn’t really a moment when undefined behavior “occurs”; you can’t “detect” it because it was left somewhere in a parallel universe, while still affecting ours.

If you don’t program in C/C++, you may be in shock: “People really write programs like this?” Indeed, people have been doing it for 50 years, and throughout this time, they’ve been regularly shooting themselves in the foot. The compiler’s convenience stands above the programmer’s.

In contrast, there are many examples of opposite behavior in Python. Let’s begin with something simple:
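    >>> 2 ** 100                      # silently promoted to a big integer
    1267650600228229401496703205376
    >>> 2 ** 100 + 1                  # exact arithmetic, no wraparound
    1267650600228229401496703205377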

No overflow and none of the problems that come with it! The programmer’s mind has an easier time: an integer is just that — an integer, and you no longer need to think about INT_MAX. Of course, this bliss comes at a price: BigInt arithmetic is much slower than built-in arithmetic.

Here’s a less widely known example:
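    >>> -15 // 10                     # floor division rounds toward negative infinity
    -2
    >>> -15 % 10                      # the remainder always lands in [0, N-1]
    5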

Integer division in Python ensures that the remainder after dividing by N is always a number between 0 and N-1, even for negative numbers. In C/C++, on the other hand, the remainder after dividing -15 by 10 is -5. Again, the former approach saves time and reduces the cognitive load on the programmer. For example, when determining the time of day from a timestamp, the programmer doesn’t have to worry about whether the timestamp is older than 1970. This is no coincidence — Guido van Rossum himself chose this semantics based on a similar line of thought. The latter approach is better suited for certain hardware and is thus sometimes a few picoseconds faster.
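Concretely:

    ts = -3600                        # one hour before the Unix epoch
    print(ts % 86400)                 # 82800 -- still a valid "second of the day" (23:00)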

To demonstrate the extent of the Python creator’s concern with problems like these, here’s one last example: what would you expect to see after running this sample?
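    print(round(0.5), round(1.5), round(2.5), round(3.5))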

Here’s the answer.
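    0 2 2 4

(Each .5 tie is rounded to the nearest even integer.)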

Believe me: this isn’t a bug, but rather exactly how it was intended. Try to hazard a guess (or look up online) why this rounding approach is correct from the second culture’s standpoint, even though it may require additional checks within the implementation of the round function.

Finally, it’s interesting to note that with a deep understanding of both cultures, it becomes possible to strike a compromise between program speed and programmer speed. Complex computations can be done using a library written in C++ (occasionally Fortran) with bindings to Python or Lua. You can then use these library functions to build complex structures like Lego blocks. For example, the training and inference of large neural networks — some of the most high-load IT projects today — are developed primarily within the boundaries of the second culture. The “number cruncher” program is hidden under the hood. It’s still useful to know its quirks, but this knowledge is no longer essential, even to achieve world-class results.
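NumPy is the canonical example of these Lego blocks:

    import numpy as np

    # Two lines of Python; the actual multiplication runs in compiled
    # C/Fortran (BLAS) under the hood.
    a = np.random.rand(1000, 1000)
    b = np.random.rand(1000, 1000)
    c = a @ b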

Culture 1: It can be rigorously mathematically proven that the code doesn’t contain errors of a certain type

Culture 2: There’s statistical evidence that the code almost always behaves as expected

Having contrasted program speed and programmer speed, it might seem that the first culture doesn’t care about safety and convenience. This generalization is utterly wrong (like any overgeneralization). In reality, first-culture programmers appreciate having the compiler check everything for them. The idea of “what if the compiler could check everything possible” is the foundation of Rust — an entire language that’s quickly becoming mainstream.

Generally, programmers in the first culture take a static approach to error prevention. This includes type checks at compile time, checking formal specifications, and using code analysis tools. In addition, similar checks can be applied to data when it’s “compiled” into a binary format used by the runtime code.

Second-culture programmers, on the other hand, tend to avoid errors with a statistical approach. They rely on unit and regression testing, functional testing, manual testing, and other, more exotic techniques such as fuzzing.
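For instance, a couple of pytest-style checks (a hypothetical example) already cover the cases a programmer cares about in practice:

    def second_of_day(ts):
        return ts % 86400

    def test_second_of_day():
        assert second_of_day(0) == 0
        assert second_of_day(86399) == 86399
        assert second_of_day(-3600) == 82800   # pre-1970 timestamps still work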

As you can see, there’s no clear-cut dividing line between the two cultures. Some old-school C++ programmers may also write unit tests, and some Python programmers may create code analyzers. However, each culture tends to use its own set of established practices. This is true of all the other differences between the two cultures.

A key difference between the two sets of practices is the level of assurance they provide. For example, a C++ or Java compiler guarantees that an object of the “wrong” type can’t be passed as a function parameter and that a non-existent method can’t be invoked for an object. In Python, these checks can be performed using unit, regression, and functional testing, as well as the good old way of “try it a few times and see if it fails.” Both approaches can ensure reliability in practice and make everyone happy, but herein lies a crucial difference between the two: no test can cover all possible situations at the same time. There’s always a small chance that the user of the program will do something unexpected, causing a hidden bug to surface.
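The borrowing mentioned above works here too: Python’s optional type hints let a static checker such as mypy catch the “wrong type” error before any test runs. A minimal sketch:

    def percent(part: int, whole: int) -> float:
        return 100 * part / whole

    percent("3", 4)   # crashes only at runtime; mypy flags it without running anything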

Of course, this isn’t mathematics, so the compiler’s “proof” is never fully reliable. For example, the compiler itself could theoretically contain a rare defect. In practice, however, this possibility can be safely ignored for several reasons:

  • On average, compilers are many times more reliable than regular programs, and their behavior is much more strictly regulated,
  • Even if the compiler doesn’t “notice” the error, it will be detected after a small change in the code,
  • We’re not talking about one rare event but about two unlikely conditions coinciding (the compiler didn’t see the programmer’s error, and this defect is also incredibly rare),
  • And so on.

The big example

For many years, when discussing this topic, I’d use this example. A long time ago, when I was working in game development on an RTS game for the Nintendo DS, the testers found a desynchronization problem in the multiplayer mode.

This is a particularly frustrating type of bug. In our game, the multiplayer was organized peer-to-peer: different DS consoles would only pass user input between each other, while the world state was calculated by each device separately. This is perfect for RTS games because it allows for feature-rich gameplay based on a very limited data transmission channel and minimal communication. The main drawback, however, is that the entire game logic must be absolutely deterministic — meaning that given the same set of inputs, the resulting state of the game world must always be identical. There can be no randomness, no dependence on the system timer, no round-off bit mischief, no uninitialized variables, or the like.

Once desync happens, it’s fatal: any discrepancy, even if it starts with one bit, quickly accumulates, and soon the players see completely different pictures of what’s happening. Naturally, both players end up “winning” with a huge lead. With the help of checksums, the moment when a desync occurred can be roughly established. After that, a thorough investigation is required. The investigation becomes somewhat easier if you can afford to serialize all game data with certain annotations and then compare two dumps from different devices. Unfortunately, we didn’t have that option: after all, we were working with the Nintendo DS, a console with a tiny amount of memory.
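The checksum trick is simple in spirit: each console periodically hashes its gameplay state, and the first tick where the hashes differ brackets the bug. A toy sketch in Python (the game itself was C++, and the field names here are invented):

    import hashlib
    import struct

    def world_checksum(units):
        # Hash the deterministic state in a fixed order; any single-bit
        # divergence between consoles changes the digest.
        h = hashlib.md5()
        for uid in sorted(units):
            u = units[uid]
            h.update(struct.pack("<iiii", uid, u["x"], u["y"], u["hp"]))
        return h.hexdigest()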

Here’s a bug description I received from QA: “Sometimes, under unknown circumstances, a desync occurs. The exact cause is unknown. To reproduce: create a four-player game and play to exhaustion, actively using different characters and abilities. If the game finishes without problems, repeat the process. The bug will appear eventually, whether it’s immediately or the next day.”

But why does the game desync at all? Fortunately, uninitialized variables can be ruled out: we wrote our own memory management system, and gameplay data is stored in a separate location that is initially filled with zeros. This ensures that without desync, their states would be identical down to the bit, even if there were a few uninitialized variables. This means that one of the developers must’ve done something unexpected, such as calling a function that isn’t required to be synchronous, like addressing the UI, from the game logic. Technically, this isn’t easy to do: at the very least, you’d have to write #include <../../interface/blah-blah.h> in the game logic without considering the obvious consequences this entails. A simple regexp search showed that nobody was that foolish.

It was then that I realized that I had a type checking task on my hands. I wasn’t interested in types in the language sense, but in the “logical types” of functions, so it’s not a typical task. A language like Haskell doesn’t distinguish between these two concepts, but we’re talking about C++ here.

All our functions, class methods, and data must be divided into synchronous and asynchronous. Synchronous functions can call the asynchronous ones (for side effects), but they can’t use the values returned to them. For example, a gameplay function can say: “UI, show the user a message,” but it can’t ask: “UI, what’s the user’s viewport?” And conversely, asynchronous functions can’t call synchronous functions with side effects, but they can freely use the values returned to them (“Gameplay, what’s this unit’s health percentage?”).
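In Python-flavored pseudocode (my sketch, not the game’s actual C++), the discipline looks like this:

    def show_message(text):              # async: cosmetic, may differ per console
        print("UI:", text)

    def viewport_center():               # async: device-local state
        return (128, 96)

    def unit_health_percent(world, uid): # sync, read-only: safe for async callers
        u = world[uid]
        return 100 * u["hp"] / u["max_hp"]

    def drop_anvil(world, uid):          # sync: part of the deterministic simulation
        show_message("Anvil incoming!")  # OK: side effect only, return value ignored
        world[uid]["hp"] = 0
        # center = viewport_center()     # forbidden: async data leaking into sync code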

Main takeaways:

  • The code contained some violations of these rules; otherwise, the desync error wouldn’t occur.
  • Not every such violation would lead to a desync, but if there were no violations, then we could mathematically prove that there’s no desync.

Great, now I had to take on the role of the compiler and rename all the “doSmth” functions on the border between the game world, UI, and graphics to either “doSmthSync” or “doSmthAsync” while monitoring which one calls which. In less than an hour, all the type errors were fixed, and I was able to restore the chain of events that led to the desync.

I found the bug in the code that interpreted the anvil drop (as we all know, an anvil is a cartoonish superweapon). To check whether the anvil was dropped on an empty spot or aimed at a specific unit, the wrong function was used by mistake: isVisible(getCurrentPlayer()) instead of isVisible(player who is dropping the anvil).

Here’s how the bug was reproduced. One player had to build a Scout, make him invisible, and go to the enemy’s base. The second player had to build a Sniper and use the Anvil Drop ability on the spot where the invisible Scout is standing (or walking by) — more specifically, on his torso. On one DS, this command meant “drop the anvil on the spot behind the Scout’s torso,” while on the other, it meant “drop the anvil on the Scout” (on the spot under his feet). It was also important not to get too close, lest the Scout be “spotted” and brought out of invisibility.

I can’t think of any unit test or any other kind of test that would be able to catch such a fantastic coincidence. To keep such bugs out of a game, you need selfless human testers. Ensuring that the code is mathematically rigorous would also be highly advisable.

Culture 1: Type checking at compile time

Culture 2: Duck typing

You can see why this is the case from the previous sections. Elements of one culture applied to the other can be incredibly helpful in certain situations — as long as you understand that it’s not forbidden to do so.

Culture 1: “I’ve read the code of all the libraries my project depends on, and I’m comfortable with them.”

Culture 2: npm install left-pad

New code that does something new is both a blessing and a curse. Consider a huge project where every 10,000 lines of code implement their own piece of functionality, strictly defined by specifications (lest chaos ensue and everything falls apart). In such a project, every line of code, especially if you didn’t write it yourself, is a pain point and a potential time bomb. On the other hand, sometimes you just need to upload a PNG image and determine if there’s a hot dog in it. Either you can do it in four lines without thinking too hard, or you can’t — and second-culture people can do it more often than not.
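For the record, here’s one plausible “four lines, no hard thinking” version, assuming the Hugging Face transformers library and its default ImageNet-trained image classifier (the file name is made up):

    from transformers import pipeline

    labels = pipeline("image-classification")("photo.png")
    print(any("hotdog" in l["label"].lower() for l in labels))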

Culture 1: Static builds

Culture 2: Dynamic builds, including loading libraries from the depths of the internet

There are two questions at play:

  • Can strangers suddenly break your product without any action on your part?
  • Can strangers meaningfully improve your product without any action on your part? (For example, patch a security hole you didn’t even know existed)

As you could probably guess, the answers to these two questions are closely related.

Culture 1: Documentation is stored locally

Culture 2: Documentation is stored online (on your website, GitHub, Read the Docs, or Stack Overflow)

Such a small thing, you might think. Why does it matter where you store something that’s basically plain text (okay, maybe hypertext)? But this difference is quite telling. In the first case, the documentation is stored on my computer; it’s “mine,” it definitely describes the version I’m using, it won’t change unless I want it to, etc. In the second case, I live and evolve with the world around me, and I have the chance to learn new things about this technology as soon as someone discovers them.

Culture 1: Formal languages, search algorithms, finite-state machine analysis, sophisticated data structures

Culture 2: Deep learning

Each culture has its heroes and great wizards. They do something so cool and scientific that you want to be like them. In the first culture, these people are the makers of compilers and standard libraries. As anyone who has read the Dragon Book knows, even a simple compiler is an incredibly complex contraption. Writing yet another data container in C++ is very easy, but making one that others will use is an art in itself. Technological achievements read like the proof of a mathematical theorem: “We have therefore found that there exists a data structure with amortized search time O(1) and insertion time complexity O(log log N).”

In the second culture, the heroes are the people who make everyone marvel at just WHAT computers can now do. Often, experts can predict that this is possible, but that doesn’t take away from the achievement. For example, it’s obvious that someone will be the first to train a diffusion model to generate high-quality video without any hacks, end-to-end. This will likely happen before the end of 2023, but the result will still be amazing, and those who do it will be praised as heroes.
