Low-code and no-code are misleading terms; neither is literally true. Whether code is generated from a visual model, or a no-code model is interpreted at runtime by a proprietary engine, in the end there is always code. No matter how high the level of abstraction, everything ends up as instructions that a machine executes, with layer upon layer adding its bits until we reach assembler and 0s and 1s. Everything else is fake news. If we manage to distance ourselves from these terms (which are, in fact, not that different from each other), we can talk about what they really are: descriptions of a creation process. They describe how an application or a piece of enterprise software is created, but in the end there is always code, and software that solves problems.
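To make the "in the end there is always code" point concrete, here is a minimal Python sketch (the language choice is mine; nothing here is specific to any low-code product): even a one-line function is translated into lower-level bytecode instructions before anything runs.

```python
import dis

def add(a, b):
    # A "high-level" abstraction: one readable line of Python.
    return a + b

# dis shows the bytecode instructions the interpreter actually executes;
# below this sit the interpreter's C code, assembler, and finally 0s and 1s.
dis.dis(add)
```

The same staircase of translations sits underneath every visual model or no-code spec; it is just hidden from the user.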

So if the space we really need to talk about is problem solving, then what these technologies have fundamentally addressed is the skillset problem. That is the real problem to solve, and the reason humanity creates tools in the first place. Software tools are just that: tools for a specialised worker to use. Abstraction and simplification widen the audience that has access to these tools, and mass production makes them affordable. If we can make the process of building applications even simpler, we can enable more people, less skilled people, to produce what previously only experts were able to achieve. Experts are expensive, though, and my grandmother is bored. So how can we turn her into a software developer?

With the rise of artificial intelligence, and especially natural language processing (NLP), in consumer electronics such as Alexa, Cortana and the like, the next step of evolution is exactly that: talk to a machine. No? Well, what integrated development environments (IDEs) such as Eclipse, Android Studio or Visual Studio and all their friends do is enable people experienced in those tools, and in software development, to build and release applications. They are assistive technologies that have improved the efficiency of software development, but they sit on the same evolutionary ladder as Notepad, just higher up. Low-code and no-code platforms and IDEs sit a little higher still, but they are a next-generation tool rather than a fundamental break. The problem is that, while they partially lower the skillset entry level into software development, they are still too complicated for most people. But hey, it's a starting point; evolution took forever to get us where we are, so let's not be ungrateful. They remain assistive tools that require a human to interact with a computer, using touch or a point-and-click device, to drive and ultimately execute instructions that end up as code, sooner or later.
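As an illustration of that last point, here is a toy sketch of how a no-code style model might be executed: a declarative description (the kind a visual editor could save) is walked by a small runtime engine and turned into ordinary function calls. The model format and the engine are entirely hypothetical, invented for this example.

```python
# A hypothetical "no-code" app model: a declarative description of steps
# rather than hand-written code.
app_model = [
    {"action": "ask", "prompt": "What is your name?", "store_as": "name"},
    {"action": "say", "text": "Hello, {name}!"},
]

def run(model):
    """A toy runtime engine: interprets the model step by step.

    Each declarative step still becomes plain code that gets executed,
    like any other instructions.
    """
    state = {}
    for step in model:
        if step["action"] == "ask":
            state[step["store_as"]] = input(step["prompt"] + " ")
        elif step["action"] == "say":
            print(step["text"].format(**state))

if __name__ == "__main__":
    run(app_model)
```

The person dragging boxes in the editor never sees `run()`, but it is there all the same.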

Let's look at my grandmothers now, and let's ignore the fact that they are both dead; let's assume they are not. My grandparents grew up in the 1920s, in a much more mechanical world than the one we were born into, but even in their time there were many things beyond their skillset to understand. Today it's a lot worse, in a good way: the majority of people understand very little of how the things they use actually work, yet they are able to use them. The reason is how the functionality is interfaced, and thus made accessible to interact with. My grandparents used TVs. They had to get up from the couch to turn one on. It had a few buttons for changing channels (1, 2, 3 or so ;)) and for the volume. Later came the remote control, so they could slouch on the couch and only get up for a beer or the like. Today a remote control looks like the controls of a rocket ship; we have all been Apple-ised when it comes to human-device interaction. But: make it simple, make it accessible, and you sell more, or allow more people to use it. Let's get back to the point:

To turn my grandparents into software developers, we need to lower the complexity bar far enough that they can do it, even though they have absolutely no idea how things work on the inside. And that's OK; they don't need to, as long as the thing does what we 'tell' it. We need to keep improving our interfaces with technology, as we have done for decades with the mouse, the touchscreen and gestures. We are working on mind and eye control and, more simply, on natural language processing. Why do we need point-and-click devices to instruct IDEs? And why do we need to be so precise in doing so, and to actually understand what we are doing? My grandmother will never understand that; neither will my mother, and even I am starting to have my doubts. The IDE of the future has an NLP interface backed by an AI. It will execute what we say and learn to understand us better as we explain what we meant when we said XYZ. It will turn our thoughts into software (until someone actually develops the technology to do that directly). Volunteers? Anyone? No, Grandma, not yet, unfortunately, and not just because you are dead.
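Just to sketch the direction, and only the direction, here is a deliberately naive toy: a spoken-style request mapped onto generated code. A real NLP interface with an AI would involve actual language understanding; everything below (the phrases, the matcher, the generated snippet) is made up for illustration.

```python
def interpret(utterance: str) -> str:
    """Map a spoken-style request onto generated code (all hypothetical)."""
    words = utterance.lower().split()
    if "greet" in words or "hello" in words:
        # Naively assume the last word is who to greet.
        name = words[-1].capitalize()
        return f'print("Hello, {name}!")'
    return '# Sorry, Grandma, I did not understand that yet.'

code = interpret("please greet grandma")
print(code)  # the generated code...
exec(code)   # ...which still gets executed, because in the end there is always code
```

Swap the keyword matcher for a genuine language model and you have the shape of the NLP-driven IDE this piece is arguing for.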

Let’s talk about #DigitalEvolution and make my grandma great again.