OpenShift Installation Process

Rohit Dalal
5 min read · Apr 13, 2024

The following sequence shows the high-level steps required to install OpenShift:

  1. Fulfill the installation prerequisites.
  2. Create the installation directory.
  3. Create the installer configuration file, install-config.yaml.
  4. Generate the Kubernetes manifests.
  5. Generate the ignition configuration files.
  6. Deploy the OpenShift cluster.
  7. Verify the OpenShift cluster health.
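Assuming the `openshift-install` binary is on the PATH, the sequence above maps roughly to the following commands (the directory name `mycluster` is illustrative):

```shell
# 2. Create the installation directory.
mkdir mycluster

# 3. Create install-config.yaml interactively; answers are written to the directory.
openshift-install create install-config --dir mycluster

# 4. Generate the Kubernetes manifests from install-config.yaml.
openshift-install create manifests --dir mycluster

# 5. Generate the ignition configuration files (bootstrap.ign, master.ign, worker.ign).
openshift-install create ignition-configs --dir mycluster

# 6. Deploy the cluster (installer-provisioned infrastructure); this target
#    regenerates manifests and ignition configs implicitly if they are missing.
openshift-install create cluster --dir mycluster

# 7. Verify cluster health once the installation finishes.
export KUBECONFIG=mycluster/auth/kubeconfig
oc get nodes
oc get clusteroperators
```

Note that each target consumes the output of the previous one: `create manifests` consumes install-config.yaml, and `create ignition-configs` consumes the manifests.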

The following sequence of detailed steps explains the OpenShift installation process:

Step 1: The user runs the OpenShift installer. The installer asks for the necessary cluster information and then creates the installation configuration file install-config.yaml accordingly.
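A minimal install-config.yaml for an installer-provisioned AWS cluster looks roughly like the sketch below; the cluster name, base domain, region, and credentials are placeholders:

```yaml
apiVersion: v1
baseDomain: example.com          # DNS base domain for the cluster
metadata:
  name: mycluster                # API endpoint becomes api.mycluster.example.com
controlPlane:
  name: master
  replicas: 3                    # control plane nodes
compute:
- name: worker
  replicas: 3                    # compute nodes
networking:
  networkType: OVNKubernetes
  clusterNetwork:
  - cidr: 10.128.0.0/14
    hostPrefix: 23
  serviceNetwork:
  - 172.30.0.0/16
platform:
  aws:
    region: us-east-1
pullSecret: '<pull-secret-json>'  # obtained from the Red Hat console
sshKey: '<ssh-public-key>'
```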

Step 2: From the install-config.yaml installation configuration file content, the OpenShift installer creates the Kubernetes manifests. The Kubernetes manifests contain the necessary instructions to build the resources for the OpenShift installation.

Step 3: From the manifest content, the OpenShift installation process creates the ignition configuration files for the bootstrap node (bootstrap.ign), the control plane nodes (master.ign), and the compute nodes (worker.ign).

OpenShift installation process — Ignition configuration files stage (Credit: Red Hat)

Step 4: The bootstrap node boots and fetches its remote resources (bootstrap.ign) from the initial ignition data source, and then finishes booting. At this stage, the Kubernetes API is running on the bootstrap node.

The bootstrap node hosts the remote resources required for the control plane nodes to boot (ignition configuration files) in the Machine Config Server (MCS). It also runs a single-member etcd cluster.

OpenShift installation process — Bootstrap (bootkube) stage (Credit: Red Hat)

During the OpenShift installation, the Kubernetes API Server runs first on the bootstrap node, and then it moves to the control plane nodes.

Step 5: The control plane nodes boot and fetch their remote resources (the master.ign ignition configuration file) from the bootstrap node, and then finish booting.

Step 6: The bootstrap node starts a temporary control plane and installs the etcd operator.

OpenShift installation process — Bootstrap (temporary control plane) stage (Credit: Red Hat)

During the control plane nodes installation, fetching the ignition configuration files happens in two stages: stage-1 and stage-2. At the beginning of the control plane nodes installation (stage-1), the control plane nodes fetch their ignition configuration files (master.ign) from the initial ignition data source.

These ignition configuration files only contain a redirect instruction to get the corresponding ignition files from the Kubernetes API MCS. Finally, the control plane nodes fetch their ignition configuration from the Kubernetes API MCS (stage-2) and finish the installation.
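The stage-1 master.ign produced by the installer is essentially just such a redirect. Assuming a recent OpenShift 4 release (Ignition spec v3), its structure looks roughly like this, with the cluster domain and CA data as placeholders:

```json
{
  "ignition": {
    "version": "3.2.0",
    "config": {
      "merge": [
        { "source": "https://api-int.mycluster.example.com:22623/config/master" }
      ]
    },
    "security": {
      "tls": {
        "certificateAuthorities": [
          { "source": "data:text/plain;charset=utf-8;base64,<root-ca>" }
        ]
      }
    }
  }
}
```

The `merge` source points at the MCS endpoint on port 22623, which serves the full rendered configuration for the node role.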

Step 7: The etcd operator running on the bootstrap node scales the etcd cluster up to three instances by joining two control plane nodes.

Step 8: The temporary control plane running on the bootstrap node schedules the production control plane to the control plane nodes. The OpenShift installation process transfers the etcd cluster to the control plane nodes.

OpenShift installation process — Production control plane schedule stage (Credit: Red Hat)

The temporary control plane is used only during the OpenShift installation. The OpenShift installation process hands control over from the temporary control plane to the production control plane running on the control plane nodes.

The production control plane is the definitive control plane that manages the OpenShift cluster.

Step 9: The temporary control plane shuts down, yielding to the production control plane. At this stage, the Kubernetes API is running on the production control plane.

Step 10: For full-stack automation installations, the installer shuts down the bootstrap node. From this stage on, the bootstrap node is no longer needed.
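On pre-existing infrastructure, by contrast, the installer does not remove the bootstrap node itself; you watch for the hand-off and then decommission it manually (the directory name is illustrative):

```shell
# Block until the production control plane is up and the bootstrap
# hand-off is complete.
openshift-install wait-for bootstrap-complete --dir mycluster --log-level=info

# It is now safe to delete the bootstrap machine and remove it from the
# load balancer backends, then wait for the installation to finish.
openshift-install wait-for install-complete --dir mycluster
```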

OpenShift installation process — Production control plane stage (Credit: Red Hat)

The etcd cluster runs an etcd pod on each control plane node.

The Kubernetes API MCS service runs a Machine Config Server pod on each control plane node. This service hosts the master.ign and worker.ign ignition files.

Step 11: At this stage, the production control plane hosts the cluster remote resources (ignition configuration files) for control plane nodes and compute nodes in its MCS. The compute nodes boot and fetch their remote resources (the worker.ign ignition configuration file) from the control plane nodes, finish booting, and join the cluster.
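On pre-existing infrastructure, each joining compute node raises certificate signing requests that an administrator must approve before the node reports Ready; a typical sequence looks like this (the CSR name is a placeholder):

```shell
export KUBECONFIG=mycluster/auth/kubeconfig

# List pending CSRs raised by booting compute nodes.
oc get csr

# Approve a pending request (each node raises two: client, then serving).
oc adm certificate approve <csr-name>

# Confirm the compute nodes have joined and report Ready.
oc get nodes
```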

If the pre-existing infrastructure installation method is used, the OpenShift installation process can also install the compute nodes with the RHEL 7 operating system instead of using the default RHCOS operating system.

Support for using RHEL 7 compute nodes is deprecated and will be removed in a future release of OpenShift 4.

OpenShift installation process — Compute nodes installation stage (Credit: Red Hat)

During the compute nodes installation, fetching the ignition configuration files happens in two stages: stage-1 and stage-2. At the beginning of the compute nodes installation (stage-1), the compute nodes fetch their ignition configuration files (worker.ign) from the initial ignition data source.

These ignition configuration files only contain a redirect instruction to get the corresponding ignition files from the Kubernetes API MCS. Finally, the compute nodes fetch their ignition configuration from the Kubernetes API MCS (stage-2) and finish the installation.

DNS Records Required for Installing OpenShift
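The exact record set depends on the installation method, but the core records for a cluster named mycluster under base domain example.com look roughly like this BIND-style sketch (IP addresses are placeholders):

```
; Kubernetes API, reachable from inside and outside the cluster
api.mycluster.example.com.      IN  A  <api-lb-ip>
; Internal API, used by cluster nodes (includes the MCS on port 22623)
api-int.mycluster.example.com.  IN  A  <api-lb-ip>
; Wildcard for application routes through the ingress load balancer
*.apps.mycluster.example.com.   IN  A  <ingress-lb-ip>
```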

Connectivity Requirements from All Cluster Nodes to All Cluster Nodes

Connectivity Requirements from All Cluster Nodes to Control Plane Nodes

Ensure that the network ports on both the front end (the load balancer itself) and the back end (the cluster nodes) of the load balancers used by the OpenShift cluster are accessible.

The API Load Balancer (API LB) provides a common endpoint to interact with and configure the platform. The Application Ingress Load Balancer (APP Ingress LB) provides an ingress point for application traffic flowing in from outside the cluster.

API Load Balancer Ports

Application Ingress Load Balancer Ports
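Concretely, the API load balancer must expose TCP 6443 (Kubernetes API) and TCP 22623 (Machine Config Server, reachable from cluster nodes only), and the application ingress load balancer must expose TCP 80 and 443. A minimal TCP-passthrough HAProxy sketch, with backend addresses as placeholders:

```
# API load balancer: Kubernetes API (6443) and Machine Config Server (22623)
frontend api
    bind *:6443
    mode tcp
    default_backend api-servers

frontend machine-config
    bind *:22623
    mode tcp
    default_backend mcs-servers

# Application ingress load balancer: HTTP (80) and HTTPS (443)
frontend ingress-http
    bind *:80
    mode tcp
    default_backend ingress-routers-http

frontend ingress-https
    bind *:443
    mode tcp
    default_backend ingress-routers-https

backend api-servers
    mode tcp
    server bootstrap <bootstrap-ip>:6443 check  # remove after bootstrap completes
    server master0 <master0-ip>:6443 check
    server master1 <master1-ip>:6443 check
    server master2 <master2-ip>:6443 check
```

The remaining backends follow the same pattern; the bootstrap node is also a backend for port 22623 until the hand-off completes.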
