Hey Kubernetes Devs, You're Probably an OpenShift Pro and Don't Even Know It!
Know Kubernetes? You're closer to mastering OpenShift than you think. It's the K8s you know, but with enterprise power-ups built-in. Learn the key differences—from its strict security model to traffic management—and use our cheat sheet to get your apps running on OpenShift today.

You've been deploying your application onto Kubernetes for your customers for a while. Now a new customer wants to deploy to OpenShift. What do you need to do?
The good news is that if you're a software engineer who speaks fluent kubectl, manages Deployments in your sleep, and crafts Ingress rules for breakfast, you're already 90% of the way to mastering Red Hat OpenShift. You might have heard of OpenShift, perhaps dismissing it as "just another Kubernetes distro." But what if I told you it's more like Kubernetes with a full suite of enterprise-grade power-ups, built right in?
Think of it this way: if Kubernetes is the powerful, unopinionated Linux kernel, OpenShift is a polished, feature-rich distribution like RHEL or Ubuntu. It takes the core of Kubernetes and bundles it with integrated tools and stricter security defaults designed for enterprise environments. For developers, this means less time spent cobbling together monitoring, logging, and CI/CD tools, and more time writing code.
But this "opinionated" nature comes with a few crucial differences. Your standard Kubernetes manifests might not work out of the box. Don't worry, the changes are straightforward. This post will walk you through the key differences and give you a practical cheat sheet to get your apps running on OpenShift in no time.
The Big Three: Security, Traffic, and Deployments
While OpenShift shares the same Kubernetes core, it has its own way of handling a few key areas. Understanding these is key to a smooth transition.
1. Security: The Biggest (and Best) Hurdle
Vanilla Kubernetes is permissive by default; you build up your security posture. OpenShift is the opposite: it's restrictive by default.
OpenShift uses Security Context Constraints (SCCs) to enforce a strict security model. The most significant policy is that containers are forbidden from running as the root user. Instead, OpenShift assigns your pod a random, high-numbered, non-root user ID that is unique to your project (its version of a Namespace). This is a massive security win. If an attacker breaks out of your container, they land on the host as an unprivileged user, not root, dramatically limiting the potential damage.
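If you're curious where that user ID comes from, OpenShift records the UID range allocated to each project as annotations on its namespace, and the arbitrary ID your pods receive is drawn from that range. A rough sketch of what that looks like (the project name and range values are illustrative):
# oc get namespace my-project -o yaml (abridged, illustrative values)
apiVersion: v1
kind: Namespace
metadata:
  name: my-project
  annotations:
    openshift.io/sa.scc.uid-range: "1008050000/10000"           # pods run with a UID from this block
    openshift.io/sa.scc.supplemental-groups: "1008050000/10000"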
2. Getting Traffic In: Ingress vs Route
In Kubernetes, you use an Ingress resource to expose your service to the outside world. OpenShift has its own, more mature version called a Route.
The Route object was created before Ingress was a standard in Kubernetes and, as a result, has more features built in, like native traffic splitting for canary deployments and advanced TLS options like re-encryption. The good news? OpenShift fully supports standard Kubernetes Ingress objects. Its built-in router simply converts your Ingress into a Route behind the scenes, so your existing manifests will often work.
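To make that concrete, here is a rough sketch of a Route that re-encrypts TLS traffic and splits traffic between two services for a canary rollout. The host, service names, and weights are hypothetical:
# route.yaml (illustrative)
apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: my-app
spec:
  host: my-app.apps.example.com
  port:
    targetPort: 8080
  tls:
    termination: reencrypt        # edge and passthrough are also supported
  to:
    kind: Service
    name: my-app-stable
    weight: 90                    # 90% of traffic to the stable release
  alternateBackends:
    - kind: Service
      name: my-app-canary
      weight: 10                  # 10% to the canary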
3. Deploying Code: Deployment vs DeploymentConfig
Similarly, OpenShift has its own DeploymentConfig (DC) object, which predates the standard Kubernetes Deployment.
The killer feature of a DC is its native support for triggers. An ImageChangeTrigger, for example, can automatically kick off a new deployment the moment a new version of your container image is pushed to a registry, enabling a simple and powerful CI/CD workflow right out of the box. There is also a ConfigChangeTrigger that redeploys on configuration updates.
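For reference, here is a rough sketch of what those triggers look like on a DeploymentConfig (the names are hypothetical, and note the legacy caveat below):
# deploymentconfig.yaml (legacy API, shown for reference only)
apiVersion: apps.openshift.io/v1
kind: DeploymentConfig
metadata:
  name: my-app
spec:
  replicas: 2
  selector:
    app: my-app
  template:
    #... your pod template
  triggers:
    - type: ConfigChange              # redeploy when the pod template changes
    - type: ImageChange               # redeploy when a new image lands in the image stream
      imageChangeParams:
        automatic: true
        containerNames:
          - my-app
        from:
          kind: ImageStreamTag
          name: my-app:latest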
Heads up: While powerful, DeploymentConfigs are now considered legacy. Red Hat officially recommends using standard Kubernetes Deployments for all new applications to ensure portability. Your Deployment YAMLs are the future, even on OpenShift.
The "Make it Work on OpenShift" Cheat Sheet
Ready to deploy? Here’s a checklist to make your application OpenShift-compatible.
✅ Step 1: Fix Your Container Image (This is Non-Negotiable)
If you do only one thing, do this. To work with OpenShift's security model, your container image must be built to run as an arbitrary non-root user.
Set Correct Filesystem Permissions: This is the magic trick to handle the random user ID. Since your container will run with a user ID like 1008050000, it won't have permission to write to directories owned by root or the 1001 user you specified. However, OpenShift ensures that this random user is always part of the root group (GID 0). Therefore, you must change the group ownership of any directory your application needs to write to (like /tmp, log directories, or data directories) to the root group and grant it write permissions.
# Example for a directory your app writes to at /app/data
RUN chgrp -R 0 /app/data && \
chmod -R g+w /app/data
Use a Non-Root User in your Containerfile: Explicitly switch to a non-root user.
# Use a high-numbered user ID. Don't use root (0).
USER 1001
Testing Locally
You can test these changes locally either by using podman/docker directly or by updating your compose file.
Podman Example
podman run -it --rm \
-p 8080:8080 \
--name my-openshift-test \
--user 1008050000:0 \
my-fixed-app:latest
Compose file Example
services:
  my-app:
    image: my-fixed-app:latest
    container_name: my-openshift-test
    # This is the key line to simulate OpenShift's random user
    user: "1008050000:0"
    ports:
      - "8080:8080"
✅ Step 2: Adapt Your Deployment Manifests (Helm/Terraform/YAML)
Make your deployment manifests flexible by adding a simple flag to toggle between Kubernetes and OpenShift configurations. Or add specific flags as you need them.
In a Helm values.yaml, this could look like:
# values.yaml
openshift:
  enabled: false
Now, use this flag to conditionally render the right resources.
Conditionally Create an Ingress or a Route: Create two template files, ingress.yaml and route.yaml, and wrap them in if blocks.
# templates/ingress.yaml
{{- if not .Values.openshift.enabled }}
apiVersion: networking.k8s.io/v1
kind: Ingress
#... your ingress spec
{{- end }}
# templates/route.yaml
{{- if .Values.openshift.enabled }}
apiVersion: route.openshift.io/v1
kind: Route
#... your route spec
{{- end }}
Apply a Conditional securityContext: In your Deployment template, apply a stricter security context when deploying to OpenShift.
# templates/deployment.yaml (pod-level securityContext under spec.template.spec)
spec:
  template:
    spec:
      securityContext:
        {{- if .Values.openshift.enabled }}
        runAsNonRoot: true
        seccompProfile:
          type: RuntimeDefault
        {{- else }}
        runAsUser: 1001
        runAsGroup: 1001
        {{- end }}
Secure Computing Mode (seccomp) is a Linux kernel feature that acts like a firewall for system calls (syscalls), which are requests an application makes to the kernel.
Setting seccompProfile to RuntimeDefault tells Kubernetes to apply the default seccomp profile provided by the container runtime (like containerd or CRI-O). This default profile blocks a list of historically dangerous syscalls, significantly reducing the container's attack surface without breaking most applications. This is a key best practice for hardening your workloads.
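Going one step further, OpenShift's default SCC on recent releases (restricted-v2) also expects containers to drop all Linux capabilities and disable privilege escalation. A container-level sketch that aligns with those expectations might look like this (the container name and image are hypothetical):
# container-level securityContext inside the pod template
containers:
  - name: my-app
    image: my-fixed-app:latest
    securityContext:
      allowPrivilegeEscalation: false
      capabilities:
        drop:
          - ALL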
✅ Step 3: Use the Right Tools
OpenShift's command-line tool, oc, is a superset of kubectl. This means every kubectl command is also a valid oc command.
You can keep using the tool you know and love, but oc adds convenient commands for OpenShift-specific features, like oc login, oc new-project, and oc get route.
Beyond Compatibility: Embracing the Platform
Once your app is running, you can start to leverage the integrated features that make OpenShift a powerful developer platform. Explore the web console for managing applications, visualize cluster topology, and dive into ImageStreams—an abstraction layer over container images that can help you govern which images are used and trigger deployments.
Quick Reference: Kubernetes vs. OpenShift
Here's a quick summary of the key differences to keep in mind:
Feature/Aspect | Vanilla Kubernetes | Red Hat OpenShift
---|---|---
Core Philosophy | Unopinionated framework; a "kernel" you build upon. | Opinionated platform; an integrated "distribution" with tools included.
Security Model | Permissive by default; you must configure security policies. | Restrictive by default; enforces non-root containers and other policies out of the box.
Container User | Often runs as root unless specified otherwise. | Must run as a non-root, arbitrary user ID. This is a critical change for your Containerfile.
Exposing Services | Standard Ingress object. Functionality depends on the Ingress Controller used. | Route object with advanced features like traffic splitting. Also supports Ingress.
Deployments | Standard Deployment object. | Supports Deployment, but also has a legacy DeploymentConfig with built-in triggers.
Image Management | Requires an external image registry (e.g., Docker Hub, GCR). | Includes a built-in registry and ImageStream objects for abstracting and triggering builds.
Developer Tools | kubectl CLI. Basic dashboard is an optional add-on. | oc CLI (a superset of kubectl). Rich, integrated web console with login is standard.
CI/CD | Requires external tools like Jenkins, GitLab CI, etc. | Integrated options like Source-to-Image (S2I) and OpenShift Pipelines (Tekton).
Further Reading for Developers
- Red Hat Developers - OpenShift Overview: A great starting point for developer-focused features.
- OpenShift Documentation - Building Images: Official docs on image creation best practices.
- Building Images for OpenShift: A detailed guide on best practices for creating certified container images.
- Rootless Containers: An article explaining the working and security benefits of the rootless approach that OpenShift champions by default.
- OpenShift Interactive Learning Portal: Hands-on labs to learn OpenShift concepts right in your browser.
- Source-to-Image (S2I) Documentation: Learn how S2I can simplify your life by building container images for you.