Software complexity, stability and its future

I think that most software today is bloated and inefficient, as well as very fragile. I think it’s due to a culture of just piling and piling software layers on top of one another without thought. With more levels of abstraction, people forget how stuff actually works down below, and make incorrect assumptions about how they should structure their code.

Why am I bringing this up? Coz most people tell me I am either plain wrong, or that I am overly pessimistic, or an ootist. Well, here are 2 very experienced devs talking about these points, and articulating them better than I can:

@admindev @cekim @BookrVII who else codes? anyways pretty good talks, please check em out


I code as a hobby and frameworks sicken me. Libraries I’m OK with. I mainly work on small projects, so it’s usually a lot of fluff for no reason. Can’t say they’re not useful.
I was thinking of starting with Qt, though, for some GUI software.


Modularity, compartmentalization, specialization, abstraction and over time new linguistic constructs ease the impact of code bloat.

There’s no point in fighting the idea of growth of code. The market will decide what the appropriate balance of optimization, functionality, stability and time to market is at any given moment…

What is more important than screaming at the sky about code bloat is making sure that coders understand hardware. Our airborne armed forces require pilots to understand the engineering behind their vehicles for a reason. It teaches them to work within their limits.

Civilization will not collapse because code bloats. It will collapse because we allow dogma to divide and demonize those with whom we disagree.


it’s been only half an hour since I posted, so I doubt you saw the talks, unless you saw them before :stuck_out_tongue:

I half agree and half disagree: we have plenty of those, and many if not most of them actually make it worse, although they claim otherwise.

No, the market doesn’t even see that. The market decides what all that code does. Atm it’s webapps and shit. But software development is not controlled by the market; it’s controlled by what programmers are taught in academia and, these days, by sketchy youtubers.

No, and that’s not the point. The point is that if it does, it’s gonna be a hell of a hard time resurrecting the kind of infrastructure we enjoy now.

Haven’t watched yet - but I’ve been following this discussion for the last 30 years lol…

No, the market does decide what time gets allocated to “hardening” and optimization. It does so by rewarding those who are “robust enough” yet get to market first with revenue and profits and punishing those who take too long or optimize too much with failure and obscurity.

Every aspect of development is influenced by the market as it “allocates resources” whether you want it to or not.


yes I agree about what the market rewards, but it doesn’t control how we get there, and things can seem robust for a period of time but later fail spectacularly, because developers prioritized getting to market first. The market didn’t actually get what it wanted in the end.

Here’s the thing about the market/nature… it doesn’t care what you want… it delivers “what worked” given all current and concurrent conditions at the time.

So, the fact that you personally (and I don’t disagree) are unhappy with the perceived bloat relative to “what could have been” is irrelevant; the market has run its subtraction/integration filter and you got “what worked”.

As the extinction of species shows, it’s not a perfect system. Sometimes you get a dodo bird and no one can explain what conditions led to that being the “ideal solution to that particular problem”, but there it is… at least until it’s driven to extinction by new natural/market forces.

Put another way - there is nothing new about this existential fear and rage over growing complexity. I’ve seen it at every stage as we’ve moved from punch cards, to assembly, to C, to C++ to Java, Ruby, Python, etc…

Okay let me put it simply:

if devs knew what they were doing, they would make better products, and the market would happily filter for them too. You cannot convince me that the way developers learn and/or are being taught is good in any shape or form.

As for the “existential crisis”, once again, I don’t think in the slightest that code will cause that. I think that because everyone (devs) is floating in la-la land and doesn’t know how shit works underneath, if shit hits the fan there’s no way you’ll see the current state of affairs (tech wise) revived in T+20 years, if not more, depending on how and what shit hit the fan.

But really, I just want my shit to run faster and without tearing reeee


I watched the first 20 minutes of the first video and I have to somewhat interject:

Just because a kernel has x lines of code doesn’t mean that all of those lines are executed, and therefore that they are bloat and unnecessary. And like this person said, most of the growth has been features, which are optional.

From the little that I know about code and coding, and from working in a software company selling its own code as a product, the issue is mostly:

Feature in terms of what it should do -> how that is written in the language -> reliance on the language, frameworks and libraries -> usage of the shipped software product -> improvements / regressions over time because of further growth of the software product.

That being said: there are a lot of variables in how that manifests and how problematic it ultimately becomes. Modularity and compartmentalization can make the code base of components easier to maintain, so they don’t regress over time when the usage and/or product design changes.

My critique from the 20 minutes I watched is that the problem is not the lines of code themselves, nor the growth, nor the features, but the changing product design in a changing market with a lot of competition and literally too many tools to do the job. And there are parts of the software product that need to be on point with compliance and such, which don’t care about your codebase and programming architecture, although there can be some negotiation between the two.

the solution would be something like custom function modules for the software product and an agile development process for improvement, but even then: you own the product, but as soon as the user deploys it, you are under the control of your own architecture and can only improve as much as the codebase allows.

and good luck trying to convince any CTO, CEO and CFO to not release a new major version but build a new codebase from scratch. that will not happen. additionally: if your code is not deployed on hardware products but as a usable webapp or in GUI form as a standalone .exe or something, you can’t jump from one version to the next without migrating all the user data and UI with it, otherwise you won’t have any customers buying your software and you are out of business.

so this whole issue is in my eyes an actual threat that will bite software companies in the ass in the next 10 years when, for some reason, core counts and / or clock speeds won’t rise as impressively as in the last 10 years.

until then it’s nothing more than people crying that 2019 software doesn’t run on 1990 hardware.

to finish I gotta say that I fundamentally agree with the point that modern software is bloated, and the bigger and more monolithic software gets, the shittier it is. but to yell for a return to how things were 30 years ago is just not a realistic approach. at all.


Well… that depends on the setting in question.

In college, no… nor would I try… Having people start with Python and Java as their first language is not only sub-optimal, it’s counter to developing an understanding. It emphasizes reward over effort.

The self-taught nature of people sharing and digesting open source is much healthier. I don’t see “one path” operating in practice.

What Bookr said about re-writing things… that’s far more problematic than most engineers recognize. I’ve seen it done. I’ve done it. It never goes as well as you planned, and sometimes it’s really, really bad.

and ditto on the “just because there’s N lines of code…” thought…

My beef with kernel code is that it’s often hard to read because it isn’t re-rationalized after being abstracted for N platforms. The macro-indirection that results from using C (and I love C, don’t get me wrong) makes for some tough reading.

He mentioned that, he understands. But I agree with you, most of the code in the Linux tree is drivers.

As for the changing market, I disagree, coz most programs/apps didn’t change cept the UI and where they run (mainframes, PCs, phones, webapps). Most apps are just CRUD, and apps these days require way more resources to run than programs of old, and no, the UI does not take most of the resources.

Middle-ware bloat is a fascinating one to me…

I’m constantly faced with needing to move structured data from one format to another (structure, API, etc…). Over time I often end up re-inventing/re-writing modules provided by other software because my use-case can benefit by orders of magnitude by not carrying the full API/abstraction layer they provide.

Not because they suck inherently (though some do), but because I have narrower parameters of use than they could safely assume, or did.
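To make that concrete, here’s a purely illustrative sketch (the format and function name are invented for this example): when the use-case really is just “flat CSV rows to JSON lines”, a few lines of stdlib Python can stand in for an entire translation/middleware layer.

```python
# Narrow replacement for a generic data-translation layer: flat CSV text
# in, JSON lines out, nothing else supported. Illustrative only.
import csv
import io
import json

def csv_to_json_lines(text):
    """Translate CSV text into a list of JSON strings, one per row."""
    return [json.dumps(row) for row in csv.DictReader(io.StringIO(text))]

rows = csv_to_json_lines("id,name\n1,ada\n2,linus\n")
```

The trade-off is explicit: no schemas, no nested records, no streaming — and that’s exactly why it can be orders of magnitude lighter than the general layer.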

Looking to a future where APIs are written such that they are compiled at first use or in real time… So, imagine an API written as a declarative spec (BNF)… Your app applies those rules and poof! a middle-ware layer shows up, tuned to your use.


Sounds great on paper, but I think that would add too much complexity :stuck_out_tongue:
What if there was just less middle-ware? :wink:

Well, that’s the exact thing… there is only as much middle-ware as you need… You specify declaratively what middle-ware you need… You get that and nothing else.

You say you need X, but X needs Y and Y needs H and J, and those need…

Right, but see BNF - dependencies are determined by the middle-ware spec…

I want read-only sql access to a DB… I get that and only whatever predicate layers that requires…

This is what package managers do at a broader level.
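At a toy scale, that resolution step looks something like this — a hypothetical sketch, not any real package manager or spec format; all capability names are invented. Each capability declares its direct dependencies, and a resolver walks the transitive closure, so asking for read-only SQL pulls in only the layers it actually requires.

```python
# Toy dependency resolver: each capability lists its direct dependencies,
# and resolve() computes the transitive closure. All names are invented.
CAPABILITIES = {
    "sql-read": {"connection", "query-parser"},
    "sql-write": {"connection", "query-parser", "transaction"},
    "transaction": {"connection"},
    "connection": set(),
    "query-parser": set(),
}

def resolve(wanted):
    """Return the minimal set of layers needed for the wanted capabilities."""
    needed, stack = set(), list(wanted)
    while stack:
        cap = stack.pop()
        if cap not in needed:
            needed.add(cap)
            stack.extend(CAPABILITIES[cap])  # follow direct dependencies
    return needed
```

`resolve({"sql-read"})` yields `{"sql-read", "connection", "query-parser"}` and nothing more, while asking for `sql-write` additionally drags in `transaction` — the “X needs Y and Y needs H and J” objection, made explicit by the spec itself.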

There’s also some “hard truth” that has to go into this… you say, “all I want is read-only sql”, but the reality is that this means all sorts of other stuff has to be there as well, but you didn’t even know about it.

You are assuming the dev has perfect information from top-to-bottom of how the OS works before writing an app. Not gonna happen. I don’t have that, never will…

Ya gotta stand on some shoulders sometimes or you never get anything done.

That is not correct, because even tho CRUD is one thing, the way a functionality is implemented and the way the user uses it are two different things, both outside the scope of how you designed it and then improved it.

Loading user profiles with data from an outside source like an Active Directory, to show linked users from the same Active Directory, as a link to the permissions of the file upload that is referenced in the application database and stored in the filestore — that is technically “just” CRUD, but if you changed it a bunch because of product design, that’s a lot different from “but the UI isn’t eating up resources”. It’s the user using a feature, and how you implemented it in your codebase.
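A hedged sketch of that profile/permissions example (all names and data invented for illustration): every step is a plain read, “just CRUD”, yet the feature chains a directory lookup, a permission lookup, and an app-DB lookup, and product-design changes ripple through all three.

```python
# Each dict stands in for a separate system: directory (e.g. AD),
# permission model, and application database. Invented data, for shape only.
DIRECTORY = {"alice": {"group": "eng"}, "bob": {"group": "sales"}}
GROUP_PERMS = {"eng": {"files.read"}, "sales": set()}
APP_DB = {"doc-1": {"uploader": "alice", "path": "/filestore/doc-1"}}

def linked_upload(viewer, doc_id):
    """Resolve viewer -> directory group -> permissions, then the upload record."""
    group = DIRECTORY[viewer]["group"]
    if "files.read" not in GROUP_PERMS[group]:
        return None  # viewer's group may not read uploads
    return APP_DB[doc_id]["path"]
```

Three reads, one feature — and changing any one data model (a new permission scheme, a different directory) forces changes in the others, which is where the “it’s just CRUD” framing breaks down.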

so the comparison between an OS, a webapp, firmware, and monolithic single-purpose “job” software is vastly different, and I don’t think any one of us can speak to what underlies all of them and how it is ultimately resolved.


i.e we stand on other’s shoulders to get shit done…

I think that the developer should have sufficient knowledge of the software stack below the level they’re at, yes. And here’s the thing: if we simplify many unnecessary aspects of current software systems, there’s much less to know!

I know I am being way too idealistic, but that’s what I think and what I want to see. I want to see experts. I want to come back to the architecture metaphor: they take time to design, test and iterate until it’s perfect before building shit. Architects need to know the fundamental laws of physics and the math behind it all. Why do we just ship, ship, ship, ship unfinished hot garbage that breaks as soon as some edge case presents itself? Why don’t the developers who build it know what a cache line is? No, we can do better, and we must, sooner or later.
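Since the cache line came up: here is a small, rough illustration of why it matters. It’s written in Python (where interpreter overhead mutes the effect), but summing the same data sequentially versus in a shuffled order exercises the CPU cache and prefetcher very differently. Timings vary by machine, so none are asserted — only that both orders compute the same result.

```python
# Same data, same amount of work, two access patterns. Sequential access
# walks memory predictably (cache/prefetch friendly); shuffled access does not.
import random
import time

data = list(range(1_000_000))
order = list(range(len(data)))
random.shuffle(order)

t0 = time.perf_counter()
seq_sum = sum(data[i] for i in range(len(data)))  # sequential indices
t_seq = time.perf_counter() - t0

t0 = time.perf_counter()
rnd_sum = sum(data[i] for i in order)             # shuffled indices
t_rnd = time.perf_counter() - t0

assert seq_sum == rnd_sum  # identical result; only the access pattern differs
print(f"sequential: {t_seq:.3f}s  shuffled: {t_rnd:.3f}s")
```

In a language with flat arrays (C, Rust), the gap between the two loops is far more dramatic, which is exactly the kind of mechanical sympathy the post above is asking developers to have.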

No, it’s the same, it’s reading from a disk, whatever it may be. Cut the fat (abstractions), leave only what you absolutely need.