Computers will again become People
Do you know the origin of the word "computer"?
When I was a young student in the early 2000s, I got involved with a department at the Faculty of Electrical Engineering and Computing in Zagreb, where I would eventually end up spending almost 15 years. There I met professor Mario Žagar, who would become my mentor, and I still think he’s the best academician I’ve ever met: never losing sight of the bigger issues while working on the details.
At that time, the department was wrapping up a multi-year project researching software and hardware “interfaces” - which basically boils down to how to modularize hardware and software and enable the modules to talk to each other in standardised ways. Being a young hot-shot, I had already gotten used to downloading libraries from the Internet and integrating them into my projects, so I kind of thought it was a non-issue. Still, papers needed to be written.
The issue of how to combine modules into products remains today - but it’s not usually given much attention. Virtually all current programming languages come with build tools or development environments, and have package managers integrated into their basic commands.
For many programmers, the daily work is basically integrating specialized pieces of code that each do a single thing (e.g. a database, a table manager for the web UI, a login system, an e-mail library, an LLM…), and the actual “product” happens in how those modules are integrated. Integration engineering is certainly a huge subset of software engineering.
It’s also where most of the complexity happens. What if the table manager UI component gets 95% of the features right, but the specs demand something different in the other 5%? What if the login system assumes properties for user accounts that just don’t meet a particular app’s needs? Is it a software modularity problem if too much effort needs to be spent on the adaptations?
Current AI companies are investing a lot in making their products replace entire software development agencies - cutting out the middle-men, in a way. They are advocating for a future where a prospective company founder will agentically code an MVP, and then hire a CTO to do the work of an entire team for the rest. How likely this future is to happen depends mostly on how good LLMs are at interpreting the soft, unspoken needs and wants of the company owners - and they are constantly getting better at that.
Having been trained on a huge volume of open source products, LLMs don’t really need to do the whole “integration engineering” part; they can just code everything into a (more or less) single coherent code base. There are still good reasons to depend on external libraries, like easier fixes of security issues, but that could also be automated - an M2M (or A2A) way to distribute descriptions of security issues with enough detail for LLMs to fix them. In this way, AI also means the death of modularity.
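No such advisory format exists yet - but a minimal sketch of what a machine-to-machine security advisory could look like is just a structured payload with enough context for an agent to locate and patch the vulnerable code. All field names and values below are hypothetical:

```python
import json

# Hypothetical M2M security advisory: structured enough that a
# fixing agent knows what to patch, where, and how.
advisory = {
    "id": "ADV-2025-0001",                 # made-up identifier scheme
    "package": "example-http-lib",          # hypothetical affected library
    "affected_versions": "<2.4.1",
    "summary": "header parser allows request smuggling",
    "vulnerable_function": "parse_headers",
    "fix_hint": "reject duplicate Content-Length headers",
}

payload = json.dumps(advisory)   # what would travel machine-to-machine
received = json.loads(payload)   # what the fixing agent would consume
```

The point isn’t the exact schema - it’s that the advisory carries intent (“fix_hint”), not just a version bump, so an LLM can apply the fix inside a monolithic code base that never imported the library at all.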
The next step is to increase efficiency by abandoning other aspects of software development that exist because humans need them, like syntactic sugar. Though naming variables and functions in a way that labels them meaningfully does help the current generation of LLMs to better understand how to compose them within projects, maybe this too can go away.
But why stop there? Does an LLM need to create a piece of code to handle HTTP? Today it does - hooking an LLM directly to a TCP socket isn’t really a good way of doing HTTP, for both efficiency and security reasons - but someday? It’s not far-fetched that the next big software (aka web) development framework will do just that.
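To make the idea concrete, here is a toy sketch of that architecture: a TCP server that hands the raw request bytes to a “model” and sends back whatever it produces, with no HTTP library anywhere. The `llm_generate` function is a stand-in for a hypothetical model call; here it just returns a canned response:

```python
import socket

def llm_generate(raw_request: bytes) -> bytes:
    """Placeholder for a hypothetical LLM call: given raw HTTP request
    bytes, produce raw HTTP response bytes directly. Faked here with
    a canned response."""
    body = b"Hello from the model"
    return (b"HTTP/1.1 200 OK\r\n"
            b"Content-Type: text/plain\r\n"
            b"Content-Length: " + str(len(body)).encode() + b"\r\n"
            b"\r\n" + body)

def serve_once(host: str = "127.0.0.1", port: int = 8080) -> None:
    """Accept a single connection and let the 'model' speak HTTP itself."""
    with socket.socket() as srv:
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind((host, port))
        srv.listen(1)
        conn, _ = srv.accept()
        with conn:
            request = conn.recv(65536)   # naive: one read, no framing
            conn.sendall(llm_generate(request))
```

This is exactly the inefficient, insecure version the paragraph warns about (no request framing, no validation, one connection at a time) - which is the point: today it’s a toy, but the framework of the future might make it viable.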
Even deeper, why do we need HTTP at all? Why do we need browsers? Can’t everything function like Jarvis from Iron Man, or The Computer from Star Trek? Voice and simple visual commands will usually be more understandable to humans than squinting over huge Excel sheets.
An A2A system that replaces the entire web stack (from the TCP protocol onwards, for example) can even now be done with “LLMs all the way.” No need for programming languages optimized for LLMs - those are just prompts.
Back in the 1940s, the word “computer” referred to a person who did computational tasks - adding up numbers in spreadsheets by hand, for example. They were replaced by digital calculators, then by machines that would themselves be called “computers”.
If we imagine “AI” (whatever that means) doing the entire software stack “by hand”, and doing everything the user asked for within that single (call it “agentic” or whatever) system, then it becomes a “computer” again - it’s sort of behaving like a person working it all out in its head, without using programming interfaces and languages as a crutch to simplify talking to the hardware.
That said, I’m not saying this is the best possible outcome. It looks like the hype is leading to using LLMs for everything, but I’m not personally convinced that’s a good idea. Those human computers from the ancient past did deterministic work, even if they were fallible - we solved that by having multiple people do the same calculations and compare the results. Now, a single LLM is (overly) trusted to make important decisions. Here’s a good article on how determinism is still king, especially in business.

