Unlike analysts at large companies, who need to specialize in narrow market segments in order to avoid stepping on each other's toes, we at Intellyx have the luxury of covering cross-cutting topics that align with business needs.
One of the tools of our trade: looking closely at how two separate markets interrelate, and thus how they provide value for the industry. In today's Cortex, I'll consider the relationship between low-code and cloud-native computing.
Defining the Terms
Low-code tools simplify and accelerate the work of professional developers by providing a visual, model-based environment for creating applications. Low-code frees developers from the responsibility of hand-coding integrations, authentication features, and other 'plumbing' technology so that they can concentrate on higher-value activities centered on business needs.
Cloud-native computing extends the cloud's best practices to all of enterprise IT, including horizontal scalability, elasticity, subscription-based delivery models, and more. Hybrid IT, edge computing, zero-trust security, and DevOps are all part of the cloud-native computing story.
Today, Kubernetes sits at the center of the cloud-native storm, because containers and microservices have driven much of the movement's momentum. Cloud-native, however, is much broader than Kubernetes, spanning the entire spectrum of environments from conventional on-premises virtualization to serverless computing.
Finally, we define microservices as cohesive, minimal units of execution. Cohesive means that each microservice does one thing and does it well. Minimal means that microservices are as small as they can be, but no smaller. And unit of execution refers to the fact that microservices are modular, executable chunks of code that interact with other microservices (and everything else) via APIs.
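To make the definition concrete, here is a minimal sketch of such a unit of execution: a hypothetical, single-purpose service that exposes one capability to other services over an HTTP API. The service name, endpoint, and data are illustrative assumptions only, not drawn from any particular product.

```python
# Minimal sketch of a microservice as a cohesive, minimal unit of execution.
# It does exactly one thing (quote a price for an item) and exposes that one
# capability to other microservices solely through an HTTP/JSON API.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

PRICES = {"widget": 9.99, "gadget": 19.99}  # stand-in for the service's own data

class PriceQuoteHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Expected path: /price/<item>
        parts = self.path.strip("/").split("/")
        if len(parts) == 2 and parts[0] == "price" and parts[1] in PRICES:
            body = json.dumps({"item": parts[1], "price": PRICES[parts[1]]}).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_error(404, "unknown item")

if __name__ == "__main__":
    # Everything outside this process interacts with it only via the API above.
    HTTPServer(("0.0.0.0", 8080), PriceQuoteHandler).serve_forever()
```

Anything beyond this single responsibility would belong in a separate microservice.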
The Other Side of Microservices
At first glance, low-code and cloud-native don't seem to have anything to do with one another, but several low-code vendors are making the connection anyway. Microservices, after all, are pieces of software code, right? So why hand-code them when you could take a low-code approach to building your microservices?
Not so fast. In general, microservices implement back-end functionality that simply doesn't lend itself to low-code's visual modeling approach. Furthermore, today's low-code solutions continue to focus on building front-end software (often for mobile apps) and on designing and automating business process workflows. Individual microservices are unlikely to join this list of low-code sweet spots.
From the definition of microservices above, it is clear that they are code-centric and therefore do not lend themselves to low-code development. How companies assemble microservices into applications, however, is a different story.
Many low-code vendors would have you believe that microservices are LEGO blocks that you can simply snap together into applications. The LEGO metaphor is superficially on the right track, but the devil is in the details.
When microservices first appeared on the tech scene, developers were excited to build applications with them. Because microservices are minimal, however, developers needed far more of them than they were used to with conventional object-oriented approaches in order to deliver any serious business functionality. Operations teams, in turn, had to manage, secure, and scale all of those microservices.
It soon became apparent that a free-for-all in which any microservice might communicate with any other was a road to unnecessary complexity, one that would hobble any effort to scale either the building of the application or the management of the resulting deployment.
For this reason, the LEGO block metaphor for microservices should be applied with care. Any vendor who suggests that its low-code tool can assemble microservices willy-nilly is engaging in extreme hand-waving.
The Rise of Cloud-Native Architecture
The challenges of assembling microservices willy-nilly helped drive the development of container orchestration platforms such as Kubernetes, as well as a broader set of best practices at the heart of cloud-native computing that we call cloud-native architecture. Cloud-native architecture both informs and leverages how Kubernetes orchestrates containers via pods and clusters, but that barely scratches the surface.
Like other architectural approaches, cloud-native architecture is essentially technology-neutral. Its most important aspect, rather, is how it delineates the coherent, comprehensive abstraction that defines how cloud-native computing operates.
Abstractions are intentional simplifications that hide the underlying complexity of a technology by giving its users useful representations of that technology. Typically, abstractions apply within specific technology contexts: compilers abstract object code, virtual machines abstract physical servers, and so on. With cloud-native architecture, in contrast, the abstraction extends across the entire IT landscape, from on-premises to edge, from cloud to serverless.
In essence, we have drawn a water line across everything we do. The infrastructure below the line supports the abstraction. Business users, customers, consumers, and anyone else who wants to build applications that leverage the abstracted IT resources sit above the line.
Once the abstraction is in place, the role low-code plays in the cloud-native world becomes clearer. Above the abstraction we have not only composable microservices, but stable, managed, scalable representations of software functionality that lend themselves to low-code application assembly. The fact that microservices provide that functionality doesn't matter to the people above the water line.
Cloud-native architecture, therefore, is the key to enabling low-code to work with microservices at scale, or with any other software capability for that matter.
Low-Code Below the Water Line
When they work properly, abstractions hide all manner of complexity from view, but that complexity still exists. If anything, supporting seamless abstractions generally requires additional complexity below the abstraction's water line.
That is the challenge of cloud-native computing. Anyone who has had the pleasure of working with Kubernetes (or any of the technologies in its ecosystem) understands just how complex this environment is.
Fortunately, core cloud-native architectural principles help tame this complexity: zero trust, statelessness, and codelessness.
Of these three, codelessness is the one that links low-code and cloud-native computing. Codelessness is essentially the 'cattle, not pets' notion that declarative definitions should drive all infrastructure configuration. When you want to change something in production, you update its definition (or recipe, or manifest, or chart) and redeploy it.
This declarative notion is at the heart of the infrastructure-as-code movement, and cloud-native codelessness takes that movement even further. After all, you don't really want to be coding your infrastructure at all. You want declarative representations of its desired behavior.
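To illustrate the idea (rather than any particular platform's API), here is a minimal sketch of the declarative pattern: the desired behavior is written down as data, and a reconciliation loop works out what has to change to make reality match it. The state fields and the reconcile function are hypothetical stand-ins.

```python
# Illustrative sketch of the declarative, 'codeless' pattern (hypothetical names).
# Desired behavior is declared as data; a reconciliation loop makes reality match it.

desired_state = {
    "service": "price-quote",
    "replicas": 3,              # how many instances we want running
    "image": "price-quote:1.4",
}

observed_state = {
    "service": "price-quote",
    "replicas": 1,
    "image": "price-quote:1.3",
}

def reconcile(desired: dict, observed: dict) -> list[str]:
    """Return the actions needed to bring observed state in line with desired state."""
    actions = []
    if observed["image"] != desired["image"]:
        actions.append(f"roll out image {desired['image']}")
    if observed["replicas"] < desired["replicas"]:
        actions.append(f"start {desired['replicas'] - observed['replicas']} more instance(s)")
    elif observed["replicas"] > desired["replicas"]:
        actions.append(f"stop {observed['replicas'] - desired['replicas']} instance(s)")
    return actions

# To change production, you change the declaration and reconcile again;
# you never script the individual steps by hand.
for action in reconcile(desired_state, observed_state):
    print(action)
```

The division of labor is the point: teams edit the declaration above the water line, while the reconciliation machinery below it handles the imperative details.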
So far, so good, but a static representation of desired behavior doesn't go far enough either. Think of a single YAML file, manifest, or recipe. How do you deploy, version, test, and manage those representations?
What we want instead are abstract models that represent the desired behavior. Such models don't simply capture the behavior itself; they also capture the need to change that behavior and the constraints on such changes, and they enable the people who work with them to make and manage those changes.
Now we've come full circle. Why? Remember what low-code tools do. They are built around, you guessed it, abstract models that represent the desired behavior of the application. In other words, the notion of codelessness in cloud-native computing and the low-code approach to tooling are made for each other.
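To see the parallel, consider a hypothetical low-code-style application model alongside the infrastructure sketch above. The format below is invented for illustration and does not correspond to any vendor's product; it simply shows an abstract model that captures desired behavior together with constraints on how that behavior may change.

```python
# Hypothetical low-code-style application model (invented format, for illustration).
# The desired behavior of the application is captured as a model that a tool can
# render, validate, and evolve, rather than as hand-written application code.

app_model = {
    "name": "order-status",
    "screens": [
        {
            "title": "Check your order",
            "fields": [{"label": "Order number", "type": "text", "required": True}],
            "actions": [{"label": "Look up", "calls": "GET /orders/{order_number}"}],
        }
    ],
    # Constraints on change travel with the model itself.
    "constraints": {"change_approval": "product-owner", "max_screens": 10},
}

def validate(model: dict) -> list[str]:
    """Check the model against its own declared constraints."""
    problems = []
    if len(model["screens"]) > model["constraints"]["max_screens"]:
        problems.append("too many screens for this application")
    for screen in model["screens"]:
        if not screen.get("fields"):
            problems.append(f"screen '{screen['title']}' has no fields")
    return problems

print(validate(app_model) or "model is valid")
```

Structurally, this is the same pattern as the infrastructure declaration: an abstract model of desired behavior that people change deliberately, under constraints, instead of editing code.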
Given how ingrained the idea of codelessness is in the cloud-native world (even though the term 'infrastructure as code' is far more popular), you might think that low-code would already have found a place among cloud-native infrastructure teams.
That is not the case. Instead, the professionals on such teams work with an entirely different set of abstracted models via the command-line interface (CLI).
Where application builders above the cloud-native water line find value in visual models, engineers below the water line would rather peck out abbreviated commands one character at a time.
I'm not going to wade into a debate over which approach is better; each serves its own purpose. Instead, I want to make two important points.
First, the CLI-based configuration common in cloud-native infrastructure circles is itself a low-code approach, or at least analogous to one.
Second, if vendors decide to build low-code, visual model-based tools for cloud-native infrastructure, they may find a willing audience. After all, even for cloud-native infrastructure engineers, the CLI isn't necessarily the best answer.