A developer learns a set of basic language constructs – variables, loops, classes, objects, functions, and so on – and combines them into more complex solutions. He learns about existing solutions and libraries that solve a specific problem, which he can use instead of having to implement them himself. As he gains experience he remembers solutions he has implemented before, starts to re-use some of his own code, and improves his skills as he reads and learns more about development and about the specific problems he is facing. He understands how to persist data, how to enable high-volume data processing and how to leverage physical or cloud-based hardware resources. He learns about security, presentation, maybe front-end development and design, and so on. He specialises in something that he is interested in or excited about, or that gives him certain outcomes based on his goals (money, career advancement options, fulfillment, etc.). Isn't there a way for us to replicate that?

What an artificial system would fundamentally need is information described in a way that it can consume: instructions in a language that it can understand, plus an evaluation function and feedback, so it can judge the quality of a result and improve.
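
As a rough, toy illustration of that loop, here is a minimal Python sketch (all names and the trivial "task" are invented for this example, not taken from any real system): a candidate is generated, scored by an evaluation function, and the score is fed back into the next attempt.

```python
import random

# Toy sketch of a generate/evaluate/feedback loop. The "task" is deliberately
# trivial (fit a small linear function); only the loop structure matters here.

def evaluate(candidate, test_cases):
    """Evaluation function: fraction of test cases the candidate satisfies."""
    passed = sum(1 for x, expected in test_cases if candidate(x) == expected)
    return passed / len(test_cases)

def generate_candidate(feedback):
    """Produce a candidate 'program' (here just two integer parameters).
    A real system would use the feedback far more intelligently."""
    if feedback is None or random.random() < 0.2:
        return random.randint(-5, 5), random.randint(-5, 5)
    a, b = feedback  # start from the best parameters seen so far
    return a + random.choice([-1, 0, 1]), b + random.choice([-1, 0, 1])

def run_loop(test_cases, attempts=500):
    best_params, best_score, feedback = None, 0.0, None
    for _ in range(attempts):
        a, b = generate_candidate(feedback)
        score = evaluate(lambda x: a * x + b, test_cases)
        if score > best_score:
            best_params, best_score = (a, b), score
            feedback = best_params  # feedback steers the next attempt
        if best_score == 1.0:
            break
    return best_params, best_score

# Target behaviour: f(x) = 3x + 2
tests = [(0, 2), (1, 5), (2, 8), (5, 17)]
print(run_loop(tests))
```

A real system would generate far richer candidates and use far richer feedback, but the generate, evaluate, feed-back structure stays the same.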

Containerization and optimised container descriptions will soon enable AI to combine those dedicated black-box building blocks into joint functionality. Let us look at some promising, yet old, theory about knowledge representation. A knowledge-representation-based development tool is one in which a developer describes an application in a high-level, mostly declarative language, from which native code is generated for multiple environments. Knowledge representation and reasoning (KR) is the field of artificial intelligence (AI) dedicated to representing information about the world in a form that a computer system can utilize to solve complex tasks such as diagnosing a medical condition or having a dialog in a natural language. In computer science, declarative programming is a programming paradigm – a style of building the structure and elements of computer programs – that expresses the logic of a computation without describing its control flow.
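
To make that last distinction concrete, here is a small comparison (the data and query are purely illustrative): the imperative version spells out the control flow step by step, while the declarative SQL version only states what result is wanted and leaves the "how" to the engine.

```python
import sqlite3

orders = [("alice", 120), ("bob", 40), ("alice", 75), ("carol", 200)]

# Imperative: we describe *how* to compute the result, step by step.
totals = {}
for customer, amount in orders:
    if amount >= 50:
        totals[customer] = totals.get(customer, 0) + amount
print(totals)

# Declarative (SQL): we describe *what* result we want; the engine
# decides the control flow and execution strategy.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE orders (customer TEXT, amount INTEGER)")
db.executemany("INSERT INTO orders VALUES (?, ?)", orders)
print(db.execute(
    "SELECT customer, SUM(amount) FROM orders "
    "WHERE amount >= 50 GROUP BY customer"
).fetchall())
```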

We need to develop a language that enables humans to interact with computers efficiently and move away from point-and-click devices, touch or typing, and we need a language and descriptions that allow an AI to understand what certain code is for, to be able to make it accessible for use. We of course need to give feedback to the system as it learns and give guidance and direction. Natural language processing will play an important role in the human-device interaction, and specialised AI could be trained to find code and describe what it does, which then communicates with the core AI. It would not have to be just one single AI for everything – way too complicated and not necessary. Specialist AIs, some of which might end up not even being AIs, would collaborate to solve the problem – classic divide and conquer; basic specialisation.
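
Purely as a sketch of that division of labour – every component name, interface and catalogue entry here is hypothetical – the collaboration could look like a pipeline of narrow specialists behind a thin coordinator:

```python
from dataclasses import dataclass

@dataclass
class Request:
    utterance: str  # natural-language input from the developer

@dataclass
class Intent:
    action: str
    topic: str

def nlp_frontend(request: Request) -> Intent:
    """Specialist 1: turn natural language into a structured intent."""
    words = request.utterance.lower().split()
    return Intent(action=words[0], topic=" ".join(words[1:]))

def code_finder(intent: Intent) -> list[str]:
    """Specialist 2: look up candidate building blocks for the topic."""
    catalogue = {
        "csv parsing": ["stdlib csv module", "pandas.read_csv"],
        "http client": ["urllib.request", "requests"],
    }
    return catalogue.get(intent.topic, [])

def core_coordinator(request: Request) -> str:
    """The 'core' component only orchestrates the specialists."""
    intent = nlp_frontend(request)
    candidates = code_finder(intent)
    if not candidates:
        return f"No known building block for '{intent.topic}'."
    return f"For '{intent.topic}', consider: {', '.join(candidates)}."

print(core_coordinator(Request("find csv parsing")))
```

Each specialist can be replaced or improved independently; some of them may be plain lookup tables rather than AIs, as noted above.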

Discussions on the web about speech input to drive development tend to make one fundamental mistake: they compare voice input to typing for writing code and come to the conclusion that you can't possibly speak faster than you type, so voice control only makes sense for additional actions such as ‘save’ or ‘open’, or to replace complex shortcuts that would require hand/finger combinations that are inefficient compared to the main code writing. Whilst those attempts can certainly get a few percent of efficiency out of a developer, they won't fundamentally change the landscape, especially because a lot of the time it's not just about writing the code but about how to solve the problem in the first place. Now we have a few options, all of which are interesting for increasing efficiency, for example:

– We can increase the level of abstraction, which means that rather than voice-driven, AI-guided coding, we control a higher-level language such as OutSystems or other visual modelling tools, as they basically place whole blocks of code with one voice command.
– We can use AI support to help the developer with search/find and with how to apply results technically: an intelligent suggestive system that recommends how to do certain things with code examples, access to information/training material, where to apply certain changes in the code (depending on the complexity and capabilities of the AI) and so on, as well as guided learning through the code (a rough sketch follows below).
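
What such a suggestive system hands back to the developer might look roughly like this – the fields, the knowledge-base entry and the file path are all invented for illustration:

```python
from dataclasses import dataclass, field

@dataclass
class Suggestion:
    summary: str                 # what to do
    code_example: str            # a snippet the developer can adapt
    apply_to: str                # where in the project the change belongs
    references: list[str] = field(default_factory=list)  # training material

def suggest(query: str) -> list[Suggestion]:
    """Toy lookup standing in for the 'intelligent suggestive system'."""
    knowledge_base = {
        "retry failed requests": Suggestion(
            summary="Wrap the HTTP call in a bounded retry loop with backoff.",
            code_example="for attempt in range(3): ...",
            apply_to="services/client.py",
            references=["internal wiki: resilient-http-calls"],
        ),
    }
    hit = knowledge_base.get(query.lower())
    return [hit] if hit else []

for s in suggest("Retry failed requests"):
    print(s.summary, "->", s.apply_to)
```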

If these systems learn from the interaction and are cloud-based/always-on, they can show the next developer how the problem was solved, possibly even share the “code”, and allow for a whole new level of collaboration. It will also enable the AI to apply certain code implementations itself in the future, gradually taking over basic implementation jobs – even in a non-visual 4GL language. You will be able to explain to a system what you want to do, and it will provide you with implementation templates where you just need to connect the dots.
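
A "connect the dots" template could be as simple as a skeleton with explicitly marked gaps; the sketch below is a hypothetical example of that idea (the pipeline shape and all names are assumptions made for illustration).

```python
from typing import Callable

def build_import_pipeline(
    read_source: Callable[[], list[dict]],       # dot 1: where the data comes from
    transform: Callable[[dict], dict],            # dot 2: how a record is reshaped
    write_target: Callable[[list[dict]], None],   # dot 3: where the result goes
) -> Callable[[], int]:
    """Template for a read-transform-write job; the three callables are the
    'dots' the developer still has to connect."""
    def run() -> int:
        records = [transform(r) for r in read_source()]
        write_target(records)
        return len(records)
    return run

# Connecting the dots with trivial stand-ins:
job = build_import_pipeline(
    read_source=lambda: [{"name": "Ada"}, {"name": "Grace"}],
    transform=lambda r: {**r, "name": r["name"].upper()},
    write_target=lambda rows: print(rows),
)
print("records processed:", job())
```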

Stay tuned as we explore these ideas further, and follow us on LinkedIn, Twitter or Facebook.