
Until now, I always believed that you should learn programming languages that make you do low-level stuff (e.g. C) to understand what's really happening under the hood and how the computer really works. This question, this question, and an answer from this question reinforced that belief:

The more I program in the abstracted languages, the more I miss what got me into computers in the first place: poking around the computer and seeing what twitches. Assembler and C are very much suited for poking :)

Eventually, I came to think that you will become a better programmer knowing this, because you'll know what's happening rather than assuming that everything is magic. And knowing/writing low-level stuff is much more interesting than writing business programs, I think.

But a month ago, I came across the book Structure and Interpretation of Computer Programs. Everything on the web suggests that this is one of the best computer science books, and that you will become a better programmer by reading it.

I'm really enjoying the concepts a lot. But I find that the book makes it seem as if abstraction is the best concept in computer science, while spending only one chapter on the low-level part.

My goal is to become a better programmer and to understand computer science more deeply, and this got me really confused. Mainly: shouldn't we avoid all abstractions and observe what is really happening at the very lowest level? I know why abstraction is great, but doesn't that prevent you from learning how computers work?

  • 25
    Read. Understand. Commented Jan 30, 2016 at 17:26
  • 10
    Consider this abstraction: your C or asm program sees memory as a huge contiguous array, while the underlying hardware is a very different story. Each person's "low level" is another person's "abstraction". Commented Jan 30, 2016 at 21:47
  • 1
    You are using abstractions every day. Moreover, many problems are stupidly and unnecessarily hard without (proper) abstractions, but trivial once the details are abstracted away; they just need the right approach. Some fine examples there involve graphs. Commented Jan 31, 2016 at 3:40
  • 7
    There is too much to computing for you to understand all of the things, all of the time. I write mobile applications, which talk to REST APIs, which talk to SQL databases. If I had to hold in my head all of TCP, HTTP, all 7 layers of the OSI model, the IIS stack, SQL and the architecture of every computer involved, just to write a simple mobile app, I'd have a migraine before lunchtime. Abstractions save you from having to think about the parts that someone else has thought out for you, so that you can think about the new bits that you're adding to the system. Commented Jan 31, 2016 at 9:34
  • 2
    Opinion, so a comment instead of an answer: I think there's a connection between the idea, which is popular in academia, that students should learn low-level concepts first, and the fact that about 50% of CS professors have never worked a day in the industry. Commented Jan 31, 2016 at 18:34

13 Answers

22

No, abstractions don't prevent you from understanding how things work. Abstractions allow you to understand why (to what end) things work the way they do.

First off, let's make one thing clear: pretty much everything you've ever known is at a level of abstraction. Java is an abstraction, C++ is an abstraction, C is an abstraction, x86 is an abstraction, ones and zeroes are an abstraction, digital circuits are an abstraction, integrated circuits are an abstraction, amplifiers are an abstraction, transistors are an abstraction, circuits are an abstraction, semiconductors are an abstraction, atoms are an abstraction, electron bands are an abstraction, electrons are an abstraction (and for all we know it could be abstractions all the way down). By the logic that low level knowledge is required to understand how something really works, if you want to understand how computers really work, you need to study physics, then electrical engineering, then computer engineering, then computer science, and essentially work your way up in terms of abstraction. (I've taken the liberty of not mentioning that you also need to study math first, to really understand physics.)

Now realistically, the days when you could make sense of computers and programming by building your way up from the lowest level details were the earliest days of computers. By now, this field has advanced too much, to the point where it can't possibly be rediscovered from scratch by a single person. There are hundreds of thousands of very qualified people specializing at every level of abstraction, working hard daily to make advances that you can't hope to understand without spending years of studying a specific portion thoroughly and committing to keeping up with the latest advancements there.

Consider this Java snippet:

public void example() {
    // allocate a new String and refer to it through an Object-typed reference
    Object obj = new String("...");
    // ...
}

You can easily understand what this snippet promises (and what it doesn't promise), at the level of the Java language. But unless you are well-versed in topics like stack frames, heap data structures, concurrent generational tracing garbage collection, memory compacting, static analysis, escape analysis, virtual machines, dynamic analysis, assembly language and executable space protection, you are wrong if you think that you really know all the low-level details that are involved in actually running it on a real computer.

Alternatively, consider this C snippet:

#include <stdio.h>

void example(int i) {
    int j;  /* deliberately left uninitialized */
    if (i == 0) {
        j = i * 2;
        printf("Received zero, printing %d", j);
    } else {
        /* j is read here without ever being assigned: undefined behavior */
        printf("Received non-zero, printing %d", j);
    }
}

If you show it to a beginner, they'll tell you that when the argument is non-zero, the residual contents of a memory location will be printed, because "variables are actually just memory addresses behind the scenes" and "when you don't initialize them it simply means that you don't move anything to their address" and "there's nothing magic about them". If you show it to a non-beginner, they'll tell you that this program's behavior is undefined for non-zero values of the function's parameter, and the compiler could potentially remove the conditional, treat all arguments to this function as zero, replace all of the function's call sites with calls that pass zero as the argument, set all variables that are ever passed as arguments to this function to zero, or do other seemingly paradoxical things.

It's important to realize that you're always working at a level of abstraction. The beginner in this example took everything he/she knows into account and arrived elegantly at a completely wrong answer because (a) he/she didn't read the spec of the language (which is an abstraction on purpose, not because C programmers aren't clever enough to understand computer architecture) and (b) tried to reason about implementation details which he/she didn't fully grasp and which have evolved way beyond his/her mental model by now. This example is fictional, but it draws from everyday real-world misconceptions - the kind that sometimes lead to perilous bugs and occasionally to famous security holes.

It's also important to see the bigger picture. For example, if you don't understand higher abstractions well enough, you may find out that C has structs, struct pointers of equal sizes, incomplete type declarations and function pointers, and you'll likely see them as a bunch of unrelated features that could occasionally be useful. But if you understand a higher abstraction like OOP well enough, you'll recognize the aforementioned features as the building blocks for OOP concepts: structs can contain other structs (code reuse), data pointers can pass something as a reference (like classes), the fact that these pointers have the same size allows them to be substituted (subtyping), incomplete type declarations allow you to have opaque pointers to structs (private members) and function pointers allow you to build dispatch tables (polymorphism).
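
To make the parallel concrete, here is a minimal sketch (the names are hypothetical, and in a real program the struct layout would be hidden behind a header) of how those C features combine into OOP-style building blocks:

#include <stdio.h>
#include <stdlib.h>

/* In a real header only this typedef would be exposed, making the
   struct an opaque pointer type (private members). */
typedef struct shape Shape;

/* A dispatch table built from function pointers (polymorphism). */
typedef struct {
    const char *name;
    double (*area)(const Shape *self);
} ShapeVTable;

struct shape {
    const ShapeVTable *vtable;  /* each "object" carries its type's table */
    double a, b;                /* payload interpreted by the subtype */
};

static double rect_area(const Shape *s)    { return s->a * s->b; }
static double ellipse_area(const Shape *s) { return 3.14159265 * s->a * s->b; }

static const ShapeVTable RECT_VT    = { "rectangle", rect_area };
static const ShapeVTable ELLIPSE_VT = { "ellipse",   ellipse_area };

static Shape *shape_new(const ShapeVTable *vt, double a, double b) {
    Shape *s = malloc(sizeof *s);
    s->vtable = vt; s->a = a; s->b = b;
    return s;
}

int main(void) {
    /* Equal-sized struct pointers let both subtypes flow through
       the same interface (subtyping). */
    Shape *shapes[] = { shape_new(&RECT_VT, 2, 3),
                        shape_new(&ELLIPSE_VT, 2, 3) };
    for (int i = 0; i < 2; i++) {
        printf("%s area: %f\n", shapes[i]->vtable->name,
               shapes[i]->vtable->area(shapes[i]));
        free(shapes[i]);
    }
    return 0;
}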

In this fictional example, knowledge of an OOP language not only didn't prevent you from understanding C, but it actually taught you concepts that you can carry over to C. These concepts can be applied selectively when you need them to make your code easier to manage even when the language doesn't actively push you towards it. (I would argue that there is a similar relationship between OOP and FP, but let's not get carried away.)

Some of the best programmers I've met are the way they are because they understand abstractions and they can carry their knowledge over to any language, and adapt it to any problem they need to solve, on any level they happen to be working at. Some of the worst programmers I've met are the way they are because they insist on focusing on details and trivia which they don't really understand and which a lot of the time aren't exactly up-to-date, or relevant to the problem, or applicable in the context they attempt to use them in, or have never been true in the first place.

All you need to realize is that there is no single privileged level of abstraction, and that a person isn't limited to a single level at any given moment. You can understand one, then move on to another. You can employ one, then switch to another - any time you want.

3
  • "they understand abstractions and they can carry their knowledge to any language, and adapt it to any problem they need to solve, at any level they happen to be working on." - I think this is what SICP is hinting at, I just don't understand it well enough. Can you expand on this? Say someday I have mastered abstraction. What advantage would it give me if for some reason I were to use a more low-level language (e.g. going from Scheme to C or whatever), given that C also has abstractions? Commented Jan 31, 2016 at 9:15
  • @recursivePointer I've made some edits to expand on that. Commented Jan 31, 2016 at 9:43
  • 1
    @recursivePointer: The point is that EVERYTHING in programming is an abstraction, at one level or another. (Yes, even assembly.) Part of the skill of programming is to know which abstractions to use to best get the desired results. For instance, some years ago I was given a program written by a CS grad student which used an abstraction - C++ strings - to parse a complicated input file. It worked, but took about half an hour to do its job. Switching to a different abstraction (Lex/YACC) got it to run in a few seconds, and shrunk 20K lines of code to 1K.
    – jamesqf
    Commented Jan 31, 2016 at 18:52
40

Eventually, I came to think that you will become a better programmer knowing this, because you'll know what's happening rather than assuming that everything is magic.

These are not contradictory things. I have no idea how to pave a road, but I know that it is not magic.

But a month ago, I came across the book Structure and Interpretation of Computer Programs, and everything on the web suggests that this is one of the best computer science books and that you will become a better programmer by reading it.

Yes, that is absolutely true.

Mainly: shouldn't we avoid all abstractions and observe what is really happening at the very lowest level?

Maybe once or twice, but doing that every time will prevent you from being a good programmer. You don't need to watch the electrons flow through a gate to know that it's not magic. You don't need to see the CPU translate those electrons into the bitwise representation of a number to know it's not magic. You don't need to see those bits go down the wire to know that it's not magic. There are a few dozen abstractions necessary just to put these letters alongside one another, and a few hundred, probably, to get them to your computer from SE's servers.

Nobody knows all of them - not in depth.

This is a very common problem with beginners in programming. They want to know how things work. They think that low level is the best level, because it gives them the most power, or control, or knowledge. It does not.

Yeah, you can't treat it as magic. But you really don't need to know that stuff either. Concatenating strings by hand isn't interesting. Rolling your own sorting algorithm isn't interesting. Because any programmer can get the same result in a few orders of magnitude less time by using something written by far better programmers decades ago.
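
As a throwaway illustration, sorting with the C standard library's qsort instead of a hand-rolled algorithm (the snippet is only an example of leaning on decades-old, well-tested code):

#include <stdio.h>
#include <stdlib.h>

/* Comparator handed to qsort; the library supplies the sorting strategy. */
static int cmp_int(const void *a, const void *b) {
    int x = *(const int *)a, y = *(const int *)b;
    return (x > y) - (x < y);
}

int main(void) {
    int v[] = { 5, 2, 9, 1, 7 };
    qsort(v, 5, sizeof v[0], cmp_int);  /* one call, no hand-rolled sort */
    for (int i = 0; i < 5; i++)
        printf("%d ", v[i]);            /* prints: 1 2 5 7 9 */
    putchar('\n');
    return 0;
}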

15
  • 1
    "doing that every time will prevent you from being a good programmer." - this is interesting, but when should we actually dig into the details? For example, a few years ago while learning Java, my professor never mentioned anything about object references. When we say Object foo = new Object() we're just creating an object in memory. It wasn't until I learned a little bit of C++ that I found out Java uses pointers implicitly. That led me to understand why my objects were getting their values changed for some unknown reason. So when should we ignore the details and when should we focus on them? Commented Jan 30, 2016 at 18:22
  • 1
    I think the biggest problem beginners have is understanding that there's a difference between "it happens to be that way this time" and "it is guaranteed to be that way", and that they should care. Getting lost in the details looks like a part of that. Commented Jan 30, 2016 at 18:36
  • 11
    @recursivePointer: No offence but it sounds like you weren't taught Java very well. Those are the absolute basics of the language. Commented Jan 30, 2016 at 18:42
  • 2
    I guess I'm trying to say, find out what you want to learn more about, then learn it - you'll be able to apply it somewhere. Don't worry about what you should learn more about. Commented Jan 30, 2016 at 22:47
  • 1
    Ignore abstraction details when you don't know how. Don't ignore them when you don't know why. If you know why changing foo's state also changes bar's state, it isn't that important how it's implemented unless implementing it is your job. But if you don't know why that happens, you're gonna have problems. I find I learn the most about what's going on under the covers when things break. So if you're curious, start breaking things. Commented Jan 31, 2016 at 3:14
32

I know why abstraction is great, but doesn't that prevent you from learning how computers work?

Certainly not. If you want to understand the abstractions at work, then study those abstractions. If you want to understand the low-level technical details of a real, physical, computer then study those details. If you want to understand both, study both. (In my opinion, a good programmer will do that.)

You seem to have got yourself stuck in a false dichotomy, as if you can only understand one abstraction level at a time (or, worse, as if only one abstraction level exists at a time). That's rather like suggesting that it is fundamentally impossible for someone to have any understanding of both physics and mathematics.

A good start would be discovering the distinction between computer science and programming.

16

A key skill in programming is simultaneously thinking at multiple levels of abstraction. Another key skill is building abstractions; this skill uses the previous one. Low-level programming is valuable in part because it exercises and expands both these skills.

SICP models and implements interpreters, simulators for a machine model, and a compiler to that machine model (the output of which can then be run on the simulator). The machine model, while very simple, is not inherently less low-level than x86_64. Frankly, a good amount of low-level programming is dealing with arbitrary and arcane hardware/firmware interfaces which are no better than the arbitrary rules of business logic.

2
  • "a good amount of low-level programming is dealing with arbitrary and arcane hardware/firmware interfaces which are no better than the arbitrary rules of business logic" - can you explain this further? How is that possible? Commented Jan 30, 2016 at 18:24
  • 5
    Have you ever written a bootloader for x86_64? If not, try it some time. Most of the work required is due to backwards compatibility so that programs written in the '70s can still run on your 2016 processor. Much of the hardware initialization is also tied to interfaces designed in the '80s. Hardware peripherals are often far worse (in part because there's a lot more of them). Most of the same pressures exist for both firmware and business software, leading to most of the same problems. Commented Jan 30, 2016 at 18:41
5

I know why abstraction is great, but doesn't that prevent you from learning how computers work? Am I missing something?

Go to a magic show and you'll be entertained but you won't understand how the tricks work. Read a book on magic and you'll learn how tricks work but you still won't be entertaining.

Do both. Work hard. And you might be both.

I've worked at high levels: SOLID, bash, UML. I've worked at low levels: TASM, arithmetic logic units, analog circuits. I can tell you, there is no level at which you can work where there isn't some magic abstracted away from you.

The key certainly is not to understand every level of abstraction at once. It's to understand one level well enough to use it right and well enough to know when it's time to move to a different one.

Any sufficiently advanced technology is indistinguishable from magic.

Arthur C. Clarke

4

What time is it?

Is it time to become a know-it-all programmer or is it time to become a productive programmer?

Knowing the abstraction layers that exist below those in which you work is a good thing; it grants you a better understanding of the structure of your work, and it will even allow you to create better solutions.

Yet you do that, you study, when it is not time to be productive. When it is time to be productive, you take high-level tools and build with them, using all the knowledge you have of how things work and expressing the solution in terms that make sense in the problem space.

Of course, you need to know how to use such tools; there is a tradeoff between the time it takes to build with a poor tool and the time it takes to learn a more appropriate one. That is beyond the scope of this question. Still, you can see that there is an advantage in knowing diverse tools.


Until now, I always believed that you should learn programming languages that make you do low-level stuff (e.g. C)

It is not that you should, it is that it is a good thing to know low-level stuff.

to understand what's really happening under the hood and how the computer really works.

First off, you don't need a programming language to have an appreciation of how the computer works. Second... I'm afraid that C may be too high-level, and a bit narrow. C is a good mapping of an old version of the hardware architecture, made abstract enough to be source-portable across different systems. If you want an in-depth understanding, you should aim for assembly language (and note that assembly language is itself an abstraction of machine language).

Eventually, I thought you will become a better programmer knowing this

Yes, you will become a better programmer by knowing that.

Because you'll know what's happening rather than assuming that everything is magic.

You don't need to know how things work to know they are not magic. All you need is an appreciation of how they work, what their costs are, and what to expect from them.

And knowing low-level stuff is much more interesting than writing business programs, I think.

Well... low-level stuff is interesting in the sense that it is thought-provoking and inspires curiosity. Business programs may deal with another meaning of "interest", one related to money.

Besides, you may be under the impression that all business programs are pretty much the same: some database, some model classes, some CRUD, etc... Those are the boring ones. But look at how diverse software is, and how diverse the businesses around it are. Do not constrain yourself.

My goal is to become a better programmer, to understand computer science more and this got me really confused.

If you want to become a better programmer, you will learn to write reusable code, because having code that you can trust to work and don't have to write again each time will make you more productive (as in: doing the same task in less time, but also as in: having fewer bugs). If you do this, you may be interested in putting your reusable code into a reusable and sharable format... and maybe even sharing it with other developers. If you even consider doing this, I hope you will develop an appreciation for reusable code shared by others, an appreciation for the usefulness of development tools, and an appreciation for higher-level languages. These things make you a better programmer.

On the other hand, if you want to learn computer science you will have to disregard the specifics. For instance, when it comes to seeing whether one algorithm is more efficient than another, you shouldn't have the results be constrained by the particular CPU you have. Computer science is more akin to math; it is abstract.

Mainly: shouldn't we avoid all abstractions and observe what is really happening at the very lowest level?

No, you should not avoid abstractions. And yes, you can observe what happens at fairly low levels.

What time is it? Is it time to learn how stuff works or is it time to be productive?

If it is time to learn how stuff works, then don't avoid the abstractions... observe them in action from the lower level; you can even create them yourself! Experience how and why they work the way they do.

If it is time to be productive, then don't avoid the abstractions... Select the right ones for the task at hand and leverage them.

If you are at all interested in learning how to be more productive, consider that the tools are there to be used. Why would you try to drive a nail into the wood with your bare hands when there are hammers available? Why would you use a magnetized needle and a steady hand to flip bits on a hard disk when there are text editors at hand?

Don't avoid the abstractions, they are tools.

I know why abstraction is great, but doesn't that prevent you from learning how computers work? Am I missing something?

You forget that you can learn multiple things. Learning Java doesn't prevent you from learning C, learning C doesn't prevent you from learning C#. Learning OOP doesn't prevent you from learning data structures, learning data structures doesn't prevent you from learning AOP. Learning assembly language doesn't prevent you from learning electronics; learning electronics doesn't prevent you from learning logic gates.


Nobody knows all that there is to know about what makes your computer work. But that doesn't mean that you can't reach a comfortable level of knowledge all across. Starting with nuclear physics, and all the way up to user experience, you can find courses online and communities with people willing to help.

Edit: the above may be an unpopular opinion. I have three things to say about it: 1) I said comfortable - I mean, comfortable with yourself; I never said you will be a field expert. 2) I mention online courses as a starting point - and yes, there are online courses on nuclear physics (paid if you want something good, yet online). There are also courses that take you from logic gates to primitive video games. 3) I understand that there is value in specialization, and I didn't mean to encourage learning it all.

Yet again, consider whether this is the best use of your time... when the client hires you to create a new mobile app, will you halt the project because you have yet to understand how semiconductors work? No, of course you won't.

The why is the detail. You may say "no" because you are not interested in physics or materials... yet you should say "no" because it is time to be productive. Either way, we agree that you can choose which abstraction levels you learn.

You may pretend you have been avoiding abstractions in general, but you have only been avoiding some of them.


The approach of understanding everything by dividing it into its components falls short; it has fallen short in many disciplines. That is because there are emergent behaviors that are only visible - and thus can only be studied - when the components are integrated.

Do civil engineers worry about the positions of individual atoms? No, they don't. They don't even bother to try to create new materials (that job is for a materials engineer); for the civil engineer the materials are a convenient abstraction. They don't need to know how the atoms are arranged; they need to know how much the materials cost and how they behave (how they react to stress, humidity, etc.), and nonetheless they know the materials are not magic.

There is little you can learn about biology if you insist on breaking everything down into atoms. There is little you can learn about psychology if you insist on breaking everything down into neurons.

You get the idea.

2
  • "all business programs are pretty much the same: some database, some model classes, some CRUD, et" - yeah this is exactly what I think about business software. It's like I'm no longer programming I'm just sticking things together. Commented Jan 31, 2016 at 18:19
  • @recursivePointer First off, doing integration is not trivial. With that said, if you are working on an information system for internal enterprise use, you will find that kind of software. But there is much more than that; if you want something as broad and yet dissimilar, and arguably not boring, consider video games.
    – Theraot
    Commented Feb 1, 2016 at 0:09
3

Software engineering has multiple levels of detail. Your question is "what is the most rewarding, worthy, interesting level?"

It depends on your task, or on what you want to be and what you care about. For big systems you should not care much about bit shifting and clock cycles. For embedded software running on a simple microcontroller, you will probably want to keep an eye on the amount of memory you use, and you may have to write some primitive timing routines.

Sometimes a choice made at a high abstraction level can impact the performance or resource use of your system. Recognizing that, and understanding why and how, will help you make better choices or find ways to make an existing system more efficient. This works both ways: knowing there is a high-level utility available will keep you from reinventing the wheel in your low-level domain. So having some understanding of levels other than the one you are working at may help you be more effective.

On a personal level you may want to ask yourself: "do I like laying bricks, or do I want to design houses?" Or maybe lay out cities. Or improve the science that makes a stronger, lighter, cheaper brick?

2
  • this is not what is asked. "shouldn't we avoid all abstractions and observe what really is happening at the very low-level? I know why abstraction is great, but doesn't that prevent you from learning how computers work?" See How to Answer
    – gnat
    Commented Jan 30, 2016 at 20:47
  • @gnat To you the answer is "no". Commented Jan 30, 2016 at 20:55
3

The abstractions we teach in computer science are the things which, historically, have been found most beneficial to most people writing most programs. You can't write a modern program in assembly, just due to the sheer size of modern programs and the time constraints business will place on a developer. You have to be ready to accomplish your goals without a 100% understanding of what's going on. The abstractions are powerful tools to help you do just that.

Now in my first sentence, I had a lot of "mosts." In your career, you will find yourself in situations, from time to time, where the abstractions you learned will lead you astray. At these times, you'll need to dig up other knowledge. Sometimes, that's as simple as learning the right abstraction for the job. Other times, it requires exactly what you've done: trying to understand what the underlying system is doing when you use those abstractions.

To take an example, look at multithreading. Traditional multithreading using mutexes, critical sections, condition variables, etc., is very well understood at an abstract level. You don't have to understand how the kernel swaps threads in and out, or how timer-driven interrupts steal control away from your user threads to let the kernel do its magic. You may never need to learn what RCU is, or why at some point deep in the bowels of the kernel, you're obliged to stop using mutexes. In fact, if you try to understand it from the kernel level, you can make dangerous mistakes. I can't count the number of race conditions I've had to fix because someone thought an operation was "sufficiently atomic" and didn't guard it properly with mutexes. You have to really understand kernels (and compilers) before that knowledge can help you write safe multithreaded code. It's far more effective to do everything at the abstract layer of mutexes.
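
Here is a minimal sketch of working at that abstract layer, assuming POSIX threads and a made-up shared counter (not code from any particular system):

#include <pthread.h>
#include <stdio.h>

static long counter = 0;
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

static void *worker(void *arg) {
    for (int i = 0; i < 1000000; i++) {
        /* counter++ alone is NOT "sufficiently atomic": it is a
           read-modify-write that two threads can interleave. */
        pthread_mutex_lock(&lock);
        counter++;
        pthread_mutex_unlock(&lock);
    }
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("%ld\n", counter);  /* reliably 2000000 with the mutex held */
    return 0;
}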

Now let's push it a little. Instead of writing "most" multithreaded programs, which are just fine with the abstractions, let's start writing really bleeding-edge code, using atomic operations. Now you can learn atomic operations as an abstract concept, and use them successfully, but you'll be left scratching your head wondering what they were smoking when they put the API together. There are all sorts of things that show up in the memory synchronization details of atomic operations that make your head hurt. In this case, learning things from the ground up, like you have mentioned, is very helpful for understanding why the abstractions do what they do. Once you understand what caches do and how they peek and poke each other to maintain synchronization, you can see why the absurdity of the atomic operation API came about -- it was a hardware necessity. Once you understand them, you'll be in a better position to understand how to eke those last few milliseconds out of your precious real-time algorithm, and save the day!
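
For a taste of what that API looks like, here is a sketch using C11's stdatomic.h; the producer/consumer names are hypothetical, and the release/acquire pairing shown is only one point in the design space (choosing memory orders well is exactly the hard part):

#include <stdatomic.h>
#include <stdbool.h>

static atomic_int ready = 0;
static int payload = 0;        /* plain data published via 'ready' */

void producer(void) {
    payload = 42;
    /* release: all writes above become visible to an acquire reader */
    atomic_store_explicit(&ready, 1, memory_order_release);
}

bool consumer(int *out) {
    /* acquire: pairs with the release store above */
    if (atomic_load_explicit(&ready, memory_order_acquire)) {
        *out = payload;        /* guaranteed to see 42 */
        return true;
    }
    return false;
}

int main(void) {
    /* In real use producer and consumer run on different threads;
       calling them in sequence here just keeps the sketch runnable. */
    producer();
    int v;
    return (consumer(&v) && v == 42) ? 0 : 1;
}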

So both approaches have their value. You'll be able to produce more valuable code faster if you're willing to accept abstractions without tying them all the way down to hardware. However, there will be cases where an understanding of the hardware lets you push the boundaries of the abstractions in a way other developers never thought possible. A happy medium, I say. Find a balance between both!

2

On the other end of the spectrum is another book that often gets praised as a classic of how to teach algorithms: Donald E. Knuth’s Art of Computer Programming. DEK gave all his algorithms in a (fake, abstracted) machine language, because in his view, programmers will tend to write code that’s simple and efficient in the language they’re using, and the size and performance of the machine code is what really matters.

That book really is a classic, and he made a great point—mostly. In order to avoid abstraction that hides the real cost of what the program was doing, his first editions had no operating system and the calling convention of his example programs used self-modifying code. On a modern desktop processor, programs like that would literally not even run, because the CPU and OS will not allow application code to talk to the hardware directly, and modern CPUs have instruction caches that self-modification would invalidate.

Similarly, if you want to write fast code, you'll profile and see that the most performance-critical sections are loops that are called frequently. What's a loop? In C it makes guarantees like these: each iteration runs sequentially; the body can consist of arbitrary statements; and the loop index can be modified arbitrarily and has an address in memory that you can retrieve with &. Only, it turns out that on modern hardware you get speedups with parallelism: either vectorizing the loop or sharing the work between threads. And it turns out that the low-level C construct, which mapped very closely to what the PDP machines C was originally written for were capable of doing, breaks a number of guarantees that would be very useful to a modern optimizer.
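
As one concrete instance (the function names are hypothetical): C pointers may alias, so the compiler must often assume the worst; C99's restrict is one way of handing a no-aliasing guarantee back to the optimizer:

#include <stdio.h>

/* Without restrict, the compiler must assume dst and src might
   overlap, which can block vectorization of this loop. */
void scale(float *dst, const float *src, float k, int n) {
    for (int i = 0; i < n; i++)
        dst[i] = src[i] * k;
}

/* restrict promises no aliasing, so the compiler is free to use
   SIMD loads/stores across whole runs of elements. */
void scale_fast(float * restrict dst, const float * restrict src,
                float k, int n) {
    for (int i = 0; i < n; i++)
        dst[i] = src[i] * k;
}

int main(void) {
    float a[4] = { 1, 2, 3, 4 }, b[4];
    scale_fast(b, a, 2.0f, 4);
    printf("%g %g %g %g\n", b[0], b[1], b[2], b[3]);  /* 2 4 6 8 */
    return 0;
}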

Some of the programming-language abstractions C doesn’t have, such as foreach, containers and iterators, or even functional programming, can compile most of these loops more efficiently and safely today than a lot of the optimizations that once made C code efficient, such as pointer arithmetic and Duff’s Device. Similarly, DOS games were extremely efficient because they ran so close to the metal, until people only ran them on emulators such as DOSBox. Nobody worries about code size today, and programmers don’t keep a bunch of counters to time their game because it’s so much simpler now to keep one timestamp and do division and remainder on it. Much like nobody does multiplication and division by converting to and from tables of logarithms.

One thing you want to understand is how your operations will perform when you implement them today. What runs fast on modern hardware? But also important is that mathematical elegance is timeless. You want to be able to express your design clearly and understandably, you want to be able to optimize it efficiently, and you want to be able to understand which parts of the project need refactoring at a lower level of abstraction.

1

As indicated in philipxy's answer, anything digital is an abstraction. Even the electrical engineering view of currents and voltages is an abstraction.

I've worked as a computer architect, a compiler writer, and an operating systems developer. I have had the experience of writing Java programs I intended to run on a server I helped design.

Cycle-by-cycle knowledge of exactly how a cache miss would work if the data were going to be modified and was currently in cache in read-only mode in a different processor was far too detailed to be useful when writing a Java program. On the other hand, when I was analyzing the effects on performance of adding a cycle of latency to certain cache miss cases, I needed that level of detail.

A key part of the art of programming is picking the right model for the problem you are solving.

1

You might like to read Zen and the Art of Motorcycle Maintenance which addresses this very question. The conclusion that it arrives at is that you should aim to generate the greatest 'Quality' at the level(s) of abstraction that you choose. Sometimes this means understanding more about the levels above and below you, but generally you won't be able to master all levels.

1

Abstractions are necessary to manage complexity, which is the nemesis of all programmers. It's just as important to learn to use abstractions as it is to learn the details behind them.

A solution to a real-world problem needs to have a representation that closely resembles the model of the problem. It's why a dice game has a class called Die with a method called roll(), and not a block of code with 011100110000001110101000100001010101000100 (assuming those binary digit abstractions made sense on some platform). This has been called the representational gap by Larman. Abstractions allow keeping this gap smaller -- a strategy known as Low Representational Gap (LRG). It makes code easier to trace to requirements and to understand. The Domain-Driven Design (DDD) approach is similar.


[Figure: UML class diagram of DiceGame]
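
To keep one language across the snippets on this page, here is a minimal sketch of that low representational gap in C rather than an OO language (the names are illustrative):

#include <stdio.h>
#include <stdlib.h>
#include <time.h>

/* The code names mirror the problem domain: a die that can roll. */
typedef struct {
    int faces;
    int face_value;
} Die;

void die_roll(Die *d) {
    d->face_value = rand() % d->faces + 1;
}

int main(void) {
    srand((unsigned)time(NULL));
    Die d = { 6, 1 };
    die_roll(&d);
    printf("Rolled a %d\n", d.face_value);
    return 0;
}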

As programmers we are naturally attracted to complex puzzles, and we are proud when we've solved a complex problem. But the paradox is that we humans don't get much work done when things are too complex all the time. Keeping it simple (via abstractions) is better, if we have a choice.

7
  • @MichaelT Not sure how to comment on this, but I rolled back the edit because the Image is important to stress "real world objects" (LRG) and putting the UML image on IMGUR will lose its source code (the plantuml URL has a key that is a hash of the code to edit the image, as well as the SVG format of the diagram contains text that can be indexed by web crawlers). Commented Feb 9, 2016 at 23:21
  • 1
    Then link to the source as additional material or put it in a comment. But there have been instances in the past when a site has gone offline, bandwidth was limited (degrading the user experience), or images were replaced with malicious content. Images should be hosted on stack.imgur to make sure that there is proper accessibility and permanence of the material in the question.
    – user40980
    Commented Feb 9, 2016 at 23:24
  • 2
    As an aside, look to see if you can export them in a different format that embeds the material in the image. If you look at the images in this post and use draw.io you can import the image back in directly (example). Note that the image is still hosted on stack.imgur.
    – user40980
    Commented Feb 9, 2016 at 23:27
  • @MichaelT I like that example. I think PlantUML has a recover feature. I'll see if I can get it to work. Commented Feb 9, 2016 at 23:30
  • 1
    @MichaelT it works with PlantUML locally (-metadata option), but I don't think they have a cloud service yet (github.com/plantuml/plantuml-server/issues/21). I'll put my PlantUML diagrams on imgur from now on since I can get back the source. Thanks for the draw.io example. Commented Feb 9, 2016 at 23:56
0

As others have pointed out, everything is both an abstraction and a detail. Abstractions allow you to focus on understanding and manipulating the concepts involved, while knowledge of the details allows you to implement them. To a solution architect, the programming language is just a detail; to a coder optimising a sort algorithm, a datatype is just an abstraction. I always recommend at least an awareness of the details at the level below the one required, which will help you avoid making poor or expensive implementation choices; but equally, I recommend an awareness of the concepts at the level above, to understand the context your solution is expected to fit into.

2
  • 1
    this doesn't seem to offer anything substantial over points made and explained in prior 12 answers
    – gnat
    Commented Jan 31, 2016 at 18:06
  • Apart from being short and sweet, this answer makes clear the relationship between the abstract/detail level required and the level desired.
    – Paul Smith
    Commented Jan 31, 2016 at 18:49
