2026-03-14 - GenAI part 2, looking for the Middle Way
From coding to clauding
Last time we talked about the AI thing, it was, at least for me, just something promising for software engineering, a new tool to investigate, an enhanced autocompletion. After a new exploration phase with Cursor, I started using Claude Code in December, and then I got my wow moment. I am still quite surprised, as I was somehow already exploring parts of what makes Claude Code great:
- The CLI part already existed in projects like Continue Dev or Aider.
- The Sonnet 4.5 model was already the one I was mostly using with OpenWebUI.
- The Context Builder was already something important and documented.

But still, with Claude Code, everything became concrete. It was not about autocompletion anymore; I finally understood clearly what Andrej Karpathy meant by saying that the hottest new programming language is English. So I guess it was a lot of different knowledge about GenAI, and experiences with related tools, that finally came to make sense through this great tool. It also made me realize one thing: software development will change forever. Gulp.
What a developer is
This news has been spreading for some time now, around a year I would say, and it intensifies with many tools and models popping up, like OpenClaw or Sonnet 4.6. Code is becoming a commodity, anybody can code, it is all about prompting now. This may be true: I am already seeing some non-tech people able to develop and deploy POCs or small applications easily. This is cool; the promise of the free software movement was somehow that software should be buildable by everyone, not just used. If your job is only about completing tickets and coding, it may be an issue, you may end up as a machine. But most developers are not just coding, they are doing many more things, like giving feedback to software architects, debugging tricky failures, helping the support team, or playing table tennis elegantly. Software development is about finding a solution to a problem, involving software at some point, and eventually coding. Nobody said where the software should come from. The No-code movement answered a similar question earlier, though at a different scale, for sure.
The cognitive load issue
This is the question I find most interesting about the recent AI breakthroughs: What is human intelligence? And then: Haven’t we overestimated it a bit? :) Adding more intelligence to the world should not be such a worry when you are aware of our global problems: biodiversity collapse, climate change and the energy crisis. If we are worried about wars and the end of civilisation, it may not just be an issue with AI.
Our cognitive system is being attacked from many sides lately; our mental bandwidth is occupied by infinite scrolling and worrisome news. In my opinion, AI adds a new layer on two different fronts: the hype, and the usage.
The hype about AI is everywhere, for multiple reasons. The first is economic: we have already put too much money in it, it is a business too big to fail, especially for the US economy, so don’t talk about a bubble. A second, related reason is soft power and narratives: if you want people to believe in you, you need to invent good stories, and again the US has always been incredible at that. China tries to play a frontal card with the DeepSeek moment, while Europe tries to assert its digital sovereignty. So AI became the evergreen news in the media, with extreme opinions ranging from accelerationists to doomers. What a time. We are all exposed to this, the people in the tech bubble a bit more. We need to take it easy: there is no need to read every opinion and jump on every new technology; staying curious and open to change is sufficient for now.
The other issue I am noticing with AI is more complex and related to its usage, especially among software engineers. You are now able to interact with tools like Claude Code, Google Antigravity or OpenAI Codex, and these reasoning models will bombard you with questions, feedback or hallucinations. And it will be 24/7. Some people are blocked by this: they don’t have the motivation or time to learn these new techniques and their associated tools. They feel insecure about the situation, as they could be ‘replaced’ by other people mastering the new technology. They may feel cognitively degraded and psychologically down. Other people, mostly tech enthusiasts, jumped on this so-called revolution, tested many tools and were the first evangelists of the AI gospel. We saw that with the crypto or metaverse movements. But the new concern here is that some of these enthusiasts are becoming burnt out after a while, brain fried, cognitively exhausted by the AI overstimulation. We need to take care of this. We are humans, we do not need to compete with machines, we just need to use them for assistance and automation, at our own pace. We don’t have to feel bad about it: we are still doing fine with our jelly CPU powered by only 20 watts, with easy maintenance.
As an aside, there are also concerns about cognitive decline related to the use of AI, but I don’t see them as specific to it. Such issues happen with any technology: writing degraded our memory capacity, GPS degraded our sense of orientation, hyperlinks smashed our dopaminergic system. We have always made trade-offs with technologies. And not just on the cognitive side.
The Middle Way approach, and the collective response
As mentioned earlier, I am trying to take a moderate approach to AI technology: not being absorbed and brainwashed by it, but not blindly rejecting it either, at the risk of seeing it used and understood only by extremists. I mentioned the Middle Way in the title just to explain my position of temperance, and maybe a bit for a marketing touch. To be a bit more precise, I would define it as a mix of curiosity, pragmatism and skepticism. But this is just a personal approach, an individual one.
Behind the curtain, new balances of power are still appearing, with important questions such as: What will happen to the open source ecosystem with agentic development? How to protect ourselves from mass surveillance? How to avoid autonomous weapons? … Some people are trying to make us think that the AI battle is a 1-to-1 fight against machines; it is an old strategy to dilute people’s will and reduce political involvement. They are trying to take your agency and give it to machines. Narratives again. Don’t be discouraged, collective intelligence is the way: you can look at experiences in Kurdistan or Ukraine for examples. Let’s just not wait for life-and-death situations to react; there are collectives everywhere, and if you don’t find one, you can create it, you just need two people :)
To get back to the point, with all that said, I need to move to open source solutions and collective initiatives. I tried Claude Code, it was fine, and I thought that Anthropic was different from the others, but you cannot escape your business model: Claude Code was used inside Palantir, for example. I am not blaming them for that, I just want to be able to switch easily to another provider, to gain leverage as a consumer. My next step will be to try Open Code and to build an integrated and open ecosystem, to avoid lock-in to services or products. I am still convinced that the main issue with AI is a product one, not a technology one, for now: most people (me included) just don’t get how to use it efficiently for their problems. I am not forcing anybody to use it; I can also be really reluctant to change, especially with such a cognitive trade-off at stake. But this technology is here to stay, so if you need it, or want to control it as much as possible, it is better to choose how to use it.
Conclusion
This article may be a bit indigestible: we have gone from a technical point to a political question, through a neuroscience mess… Sorry about that, I am still new at blogging, but you can see at least that my concerns are not just technical. I try to promote a holistic approach more than a purely scientific one; I love science, but I think that in our agitated times we need to be a bit more ‘inclusivist’ than reductionist.