The qibb platform is fully cloud-native and runs on top of Kubernetes, the industry standard for container orchestration.

qibb is based on a microservice architecture characterized by lightweight, container-based services. All platform services are optimized for deployment on distributed infrastructure and for efficient use of system resources through container virtualization. For example, high resource utilization can be achieved by densely placing several containers on a node, thus minimizing idle capacity. The platform and its workloads can be rolled out and operated across several clusters, which in turn consist of several distributed nodes.
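To make the idea of dense placement concrete, the following is a minimal conceptual sketch of first-fit-decreasing bin packing, similar in spirit to how a container orchestrator packs several containers onto a node to reduce idle capacity. All service names and resource numbers are illustrative assumptions, not part of qibb or Kubernetes.

```python
# Conceptual sketch: pack containers densely onto nodes (first-fit-decreasing).
# Names and CPU requests are illustrative, not actual qibb services.

def place_containers(containers, node_capacity):
    """Assign each (name, cpu_request) container to a node, opening new nodes only when needed."""
    nodes = []  # each node: {"free": remaining capacity, "containers": [names]}
    for name, cpu in sorted(containers, key=lambda c: c[1], reverse=True):
        for node in nodes:
            if node["free"] >= cpu:          # reuse an existing node if it still fits
                node["free"] -= cpu
                node["containers"].append(name)
                break
        else:                                # otherwise start a new node
            nodes.append({"free": node_capacity - cpu, "containers": [name]})
    return nodes

workload = [("api", 2), ("auth", 1), ("flows", 3), ("metrics", 1), ("gateway", 2)]
nodes = place_containers(workload, node_capacity=4)
# 9 CPU of requests are packed onto 3 nodes of capacity 4 instead of one node per container
```

In a real cluster this decision is made by the orchestrator's scheduler based on resource requests and limits; the sketch only illustrates why dense placement minimizes idle capacity.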

What is a cluster?

A cluster is a group of distributed nodes. It combines their computing power, memory, and storage into a single logical unit.

Multi-cluster architecture


The qibb platform can run and manage its workloads across multiple clusters, which can be set up at different sites, locations, or infrastructure environments. This makes it possible to implement site-specific requirements.

For public cloud, we support deployment on top of managed Kubernetes services from the major cloud providers, including AWS EKS, Azure AKS, and Google Cloud GKE. For on-premises, we support deploying qibb into Kubernetes environments running on VMware vSphere, such as clusters provisioned and managed by Rancher RKE.

For example, services (e.g., qibb flows) can be placed at specific locations to meet availability, latency, or regional compliance requirements. qibb supports the following typical location-based multi-cluster scenarios:

  • Hybrid cloud by combining on-premises data centers and public cloud.

  • Multi-cloud by combining multiple public clouds.

  • Multi-region by combining several regions of a public cloud.

  • Multi-site by combining several on-premises data centers.
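Location-based placement amounts to matching a workload's requirements against the labels of the available clusters. Below is a minimal sketch of that matching step; the cluster names, label keys, and values are illustrative assumptions, not qibb configuration.

```python
# Conceptual sketch: pick a target cluster for a workload (e.g., a qibb flow)
# based on location/compliance labels. All names and labels are illustrative.

clusters = [
    {"name": "onprem-fra", "region": "eu", "env": "on-premises"},
    {"name": "aws-eu-central-1", "region": "eu", "env": "public-cloud"},
    {"name": "aws-us-east-1", "region": "us", "env": "public-cloud"},
]

def select_cluster(clusters, **requirements):
    """Return the first cluster whose labels satisfy all placement requirements."""
    for cluster in clusters:
        if all(cluster.get(key) == value for key, value in requirements.items()):
            return cluster["name"]
    raise LookupError(f"no cluster satisfies {requirements}")

# A flow bound to EU data residency lands on an EU cluster:
eu_target = select_cluster(clusters, region="eu")                      # -> "onprem-fra"
us_target = select_cluster(clusters, region="us", env="public-cloud")  # -> "aws-us-east-1"
```

The same pattern extends to any of the scenarios above: hybrid cloud, multi-cloud, multi-region, and multi-site differ only in which label combinations the cluster inventory exposes.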

At the same time, a high degree of isolation can be achieved by operating multiple clusters:

  • Separation of clients based on dedicated clusters.

  • Separation of responsibilities within the organization.

  • Separation of infrastructure environments with independent lifecycles.

  • Limiting the blast radius of failures to improve service reliability.

  • Security isolation of services according to their trustworthiness or the sensitivity of their data.

Learn more about deployments via qibb

Multi-cluster communication

Network connectivity must be ensured with suitable firewall rules to allow ingress and egress traffic between qibb clusters as well as for any communication between qibb workflows and connected third-party services.

The qibb platform distinguishes between different cluster types based on their purpose and the services they contain. Depending on the use case, clusters may also differ in characteristics such as instance types or node count.

Main Cluster

The Main Cluster of qibb represents the control plane of the platform. This cluster hosts the core components of qibb, which are responsible for the central management of all identities, deployed flows, and connected clusters.

Depending on the active user base and the number of attached clusters, the Main Cluster may require more or fewer resources to manage them. The cluster can be dynamically resized on demand according to usage.

App Cluster

The App Cluster is designed for running workloads (e.g., qibb flows). It serves as the foundation for creating spaces and rolling out application flows. For this purpose, it has an independent gateway that processes network traffic independently of the Main Cluster.

In addition, this cluster type hosts independent monitoring services, which collect logs and metrics from local workloads and forward them to external monitoring services.

Learn more about qibb clusters


Each microservice is responsible for a specific application domain or task and has minimal dependencies on other services, providing fault containment through isolated and independent components.

Critical services can be rolled out redundantly. Different rollout methods are supported:

  • Distributed rollout across multiple nodes, which ensures continued operation in the event of a node failure.

  • Distributed rollout across multiple zones in the public cloud or multiple on-premises data centers, which ensures continued operation in the event of a zone failure.

  • Optimized rollout of databases with sharding and replication.

Failures of individual components (containers) as well as entire nodes are automatically detected and compensated for by the container orchestrator. Recovery is fully automated and takes place within a very short time: typically, a failed container is restored within seconds, a failed node within minutes.
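The automatic detection and recovery described above follows a reconciliation pattern: the orchestrator continuously compares observed state against the desired state and derives corrective actions. A minimal sketch of one reconciliation pass, with illustrative container names (this is not qibb or Kubernetes code):

```python
# Conceptual sketch of a self-healing reconciliation pass: compare observed
# container states against the desired replica count and derive repair actions.

def reconcile(desired_replicas, observed):
    """Return the actions needed to bring observed state back to the desired count."""
    running = [c for c, state in observed.items() if state == "running"]
    failed = [c for c, state in observed.items() if state != "running"]
    actions = [f"restart {c}" for c in failed]          # restore crashed containers
    missing = desired_replicas - len(running) - len(failed)
    actions += [f"start replica-{i}" for i in range(missing)]  # replace lost replicas
    return actions

actions = reconcile(3, {"web-0": "running", "web-1": "crashed"})
# -> ["restart web-1", "start replica-0"]
```

Because this loop runs continuously, recovery needs no operator intervention: a crashed container is rescheduled within seconds, and replicas from a failed node are recreated on healthy nodes within minutes.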

Architecture details
