A first look into the Kubernetes Gateway API on OpenShift
This blog post summarizes our first look into the Kubernetes Gateway API and how it is integrated in OpenShift.
The Kubernetes Gateway API is a new standard for ingress, load balancing and service mesh APIs. See the upstream documentation for more information.
The OpenShift documentation also provides an overview of the Gateway API and its integration.
Things to consider when using Gateway API with OpenShift
Currently, UDN (User Defined Networks) are not supported in combination with the Gateway API.
Only TLS termination on the edge is supported (no passthrough or re-encrypt). This still needs to be confirmed, as we could not find the original source of this statement.
The standard OpenShift ingress controller manages Gateway API Resources
The Gateway API defines a standard for getting client traffic into a Kubernetes cluster, and vendors provide implementations of that API. OpenShift ships ONE possible implementation, but there could be more than one in a cluster.
We found the following sentence in the OpenShift documentation interesting:
Because OpenShift Container Platform uses a specific version of Gateway API CRDs, any use of third-party implementations of Gateway API must conform to the OpenShift Container Platform implementation to ensure that all fields work as expected
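If you want to see which Gateway API CRDs and versions are installed before adding a third-party implementation, something like the following should work (a sketch; the bundle-version annotation comes from the upstream CRD bundle and might not be present in every installation):

$ oc get crd | grep gateway.networking.k8s.io
$ oc get crd gatewayclasses.gateway.networking.k8s.io \
    -o jsonpath='{.metadata.annotations.gateway\.networking\.k8s\.io/bundle-version}{"\n"}'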
Setting up Gateway API on OpenShift
Before you begin, ensure you have the following:
OpenShift 4.19 or higher with cluster-admin access
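A quick, hedged way to verify both prerequisites (a sketch, not taken from the official docs):

$ oc get clusterversion
$ oc auth can-i create gatewayclasses.gateway.networking.k8s.io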
First you need to create a GatewayClass object.
Be aware that the GatewayClass object is NOT namespaced.
apiVersion: gateway.networking.k8s.io/v1
kind: GatewayClass
metadata:
  name: openshift-default
spec:
  controllerName: openshift.io/gateway-controller/v1 (1)

(1) The controller name needs to be exactly as shown. Otherwise the ingress controller will NOT manage the gateway and associated resources.
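Once applied, the GatewayClass should be accepted by the controller; a quick check (the ACCEPTED column should show True):

$ oc get gatewayclass openshift-default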
Applying the GatewayClass creates a new pod in the openshift-ingress namespace:
$ oc get po -n openshift-ingress
NAME                                        READY   STATUS    RESTARTS   AGE
istiod-openshift-gateway-7b567bc8b4-4lrt2   1/1     Running   0          12m (1)
router-default-6db958cbd-dlbwz              1/1     Running   12         14d

(1) This pod was created after applying the GatewayClass resource.
router-default is the default OpenShift ingress pod. The first difference seems to be the SCC (security context constraint) the pods are using.
$ oc get po -n openshift-ingress -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.metadata.annotations.openshift\.io/scc}{"\n"}{end}'
istiod-openshift-gateway-7b567bc8b4-4lrt2   restricted-v2
router-default-6db958cbd-dlbwz              hostnetwork

The standard router uses host networking to listen on ports 80 and 443 on the node where it is running. Our GatewayClass currently only provides a pod running Istiod, awaiting further configuration. To actually listen for client requests, additional configuration is required.
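To double-check which of the two pods actually runs with host networking, a jsonpath query analogous to the SCC one above should do (a sketch; pods without host networking simply print an empty value):

$ oc get po -n openshift-ingress -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.hostNetwork}{"\n"}{end}'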
A Gateway is required to listen for client requests. We create the
following gateway:
---
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: http-gateway
  namespace: openshift-ingress (1)
spec:
  gatewayClassName: openshift-default
  listeners:
  - name: http
    protocol: HTTP
    port: 80
    hostname: "*.apps.ocp.lan.stderr.at"

(1) We create this gateway in the same namespace as the istio deployment. This is required for OpenShift.
This creates an additional pod in the openshift-ingress namespace:
$ oc get po
NAME                                             READY   STATUS    RESTARTS   AGE
http-gateway-openshift-default-d476664f5-h87mp   1/1     Running   0          36s

We also got a new Service for our http-gateway:
$ oc get svc
NAME                             TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)                        AGE
http-gateway-openshift-default   LoadBalancer   172.30.183.48   10.0.0.150    15021:30251/TCP,80:30437/TCP   4m52s

The interesting thing is the TYPE of the service: it is of type LoadBalancer. We have MetalLB deployed in our cluster, which might be the reason for this. We will try to configure a gateway without MetalLB in an upcoming post.
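If you are curious where the external IP comes from, and assuming MetalLB is installed in the usual metallb-system namespace, you could inspect the configured address pools and the service itself (a sketch; resource and namespace names depend on your MetalLB setup):

$ oc get ipaddresspools.metallb.io -n metallb-system
$ oc describe svc http-gateway-openshift-default -n openshift-ingress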
Let's take a look at the Gateway resource:
$ oc get gtw
NAME           CLASS               ADDRESS      PROGRAMMED   AGE
http-gateway   openshift-default   10.0.0.150   True         3m

So this seems to be working. But now we have a problem: the *.apps domain that we used for our gateway already points to the default OpenShift Ingress. We could either

redeploy the gateway with a different wildcard domain (e.g. *.gtw…, a record sketch follows below), or
create a more specific DNS record that points to our new load balancer.
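For illustration, the wildcard record for the first option could look roughly like this in a BIND-style zone file (TTL and syntax are assumptions for our lab DNS; the IP is the gateway's LoadBalancer address):

; hypothetical wildcard record pointing at the gateway service
*.gtw.ocp.lan.stderr.at. 300 IN A 10.0.0.150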
Let’s try to confirm this with curl:
$ curl -I http://bla.apps.ocp.lan.stderr.at
HTTP/1.0 503 Service Unavailable
pragma: no-cache
cache-control: private, max-age=0, no-cache, no-store
content-type: text/html

The 503 is the response of the default OpenShift Ingress.
$ curl -I http://10.0.0.150
HTTP/1.1 404 Not Found
date: Fri, 29 Aug 2025 14:31:31 GMT
transfer-encoding: chunked

Our new gateway returns a 404 Not Found response. We chose the first option and created another wildcard DNS entry for *.gtw.ocp.lan.stderr.at. We then re-deployed our gateway with the new hostname:
---
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: http-gateway
  namespace: openshift-ingress
spec:
  gatewayClassName: openshift-default
  listeners:
  - name: http
    protocol: HTTP
    port: 80
    hostname: "*.gtw.ocp.lan.stderr.at" (1)

(1) New hostname for resources exposed via our gateway
$ oc apply -f gateway.yaml
gateway.gateway.networking.k8s.io/http-gateway created

This also creates a DNSRecord resource:
$ oc describe dnsrecords.ingress.operator.openshift.io -n openshift-ingress http-gateway-c8d7bfc67-wildcard
Name:         http-gateway-c8d7bfc67-wildcard
Namespace:    openshift-ingress
Labels:       gateway.istio.io/managed=openshift.io-gateway-controller-v1
              gateway.networking.k8s.io/gateway-name=http-gateway
              istio.io/rev=openshift-gateway
Annotations:  <none>
API Version:  ingress.operator.openshift.io/v1
Kind:         DNSRecord
Metadata:
  Creation Timestamp:  2025-08-29T14:49:45Z
  Finalizers:
    operator.openshift.io/ingress-dns
  Generation:  1
  Owner References:
    API Version:  v1
    Kind:         Service
    Name:         http-gateway-openshift-default
    UID:          a023de5d-c428-4249-a190-de3cbfeb6964
  Resource Version:  141150968
  UID:               7a61a867-216e-40b7-88f3-e3934493c477
Spec:
  Dns Management Policy:  Managed
  Dns Name:               *.gtw.ocp.lan.stderr.at.
  Record TTL:             30
  Record Type:            A
  Targets:
    10.0.0.150
Events:  <none>

This resource is only internally used by the OpenShift ingress operator (see oc explain dnsrecord for details).
Creating an HTTPRoute to expose our service
To actually expose an HTTP pod via our new gateway we need:
a Namespace to deploy an example pod (we will use Nginx for this)
a Service that exposes our Nginx pod
and finally an HTTPRoute resource
For the nginx deployment we used the following manifest:
---
apiVersion: v1
kind: Namespace
metadata:
  name: gateway-api-test
spec:
  finalizers:
  - kubernetes
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  namespace: gateway-api-test
  labels:
    app: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: quay.io/nginx/nginx-unprivileged:1.29.1
        ports:
        - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: nginx
  namespace: gateway-api-test
spec:
  selector:
    app: nginx
  ports:
  - protocol: TCP
    port: 8080
    targetPort: 8080

Let's see if our nginx pod got deployed successfully:
$ oc get po,svc -n gateway-api-test
NAME                                    READY   STATUS    RESTARTS   AGE
pod/nginx-deployment-796cdf7474-b7bqz   1/1     Running   0          20s

NAME            TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)    AGE
service/nginx   ClusterIP   172.30.42.36   <none>        8080/TCP   21s

And finally confirm our Service is working:
$ oc port-forward -n gateway-api-test svc/nginx 8080 &
Forwarding from 127.0.0.1:8080 -> 8080
Forwarding from [::1]:8080 -> 8080
$ curl -I localhost:8080
Handling connection for 8080
HTTP/1.1 200 OK
Server: nginx/1.29.1
Date: Fri, 29 Aug 2025 15:45:12 GMT
Content-Type: text/html
Content-Length: 615
Last-Modified: Wed, 13 Aug 2025 14:33:41 GMT
Connection: keep-alive
ETag: "689ca245-267"
Accept-Ranges: bytes

We received a response from our nginx pod, hurray!
So next let’s try to create a HTTPRoute to expose our nginx service to external clients:
---
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: nginx-route
spec:
  parentRefs:
  - name: http-gateway
    namespace: openshift-ingress
  hostnames: ["nginx.gtw.ocp.lan.stderr.at"]
  rules:
  - backendRefs:
    - name: nginx
      namespace: gateway-api-test
      port: 8080

One important point here: gateways actually come in two flavors:

dedicated gateways, which only accept HTTP routes from the same namespace (openshift-ingress in our case)
shared gateways, which also accept HTTPRoute objects from other namespaces (a sketch follows below)

See Gateway API deployment topologies in the OpenShift documentation for more information.
As this post is already rather long, we focus on the dedicated gateway topology for now.

The HTTPRoute must be deployed in the same namespace as the gateway if the dedicated topology is used.
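For completeness, a shared gateway would typically open up its listener via allowedRoutes, roughly like this (an untested sketch based on the upstream API; the gateway name is hypothetical):

---
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: shared-gateway
  namespace: openshift-ingress
spec:
  gatewayClassName: openshift-default
  listeners:
  - name: http
    protocol: HTTP
    port: 80
    hostname: "*.gtw.ocp.lan.stderr.at"
    allowedRoutes:
      namespaces:
        from: All   # or use a Selector to restrict which namespaces may attach routes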
So let’s deploy our HTTPRoute:
$ oc apply -f httproute.yaml

Verify we can reach our nginx pod:
$ curl -I http://nginx.gtw.ocp.lan.stderr.at
HTTP/1.1 500 Internal Server Error
date: Fri, 29 Aug 2025 15:57:34 GMT
transfer-encoding: chunked

This returns a 500 error, so something seems to be wrong with our route. Let's take a look at the status of the HTTPRoute:
$ oc describe httproute nginx-route -n openshift-ingress
.
. (output omitted)
.
Status:
  Parents:
    Conditions:
      Last Transition Time:  2025-08-29T15:54:43Z
      Message:               Route was valid
      Observed Generation:   1
      Reason:                Accepted
      Status:                True
      Type:                  Accepted
      Last Transition Time:  2025-08-29T15:54:43Z
      Message:               backendRef nginx/gateway-api-test not accessible to a HTTPRoute in namespace "openshift-ingress" (missing a ReferenceGrant?) (2)
      Observed Generation:   1
      Reason:                RefNotPermitted
      Status:                False (1)
      Type:                  ResolvedRefs
    Controller Name:         openshift.io/gateway-controller/v1
    Parent Ref:
      Group:      gateway.networking.k8s.io
      Kind:       Gateway
      Name:       http-gateway
      Namespace:  openshift-ingress

(1) Something seems to be wrong as the status is False
(2) Seems we are missing a ReferenceGrant
Looking at the upstream documentation reveals a security feature of the Gateway API: before an HTTPRoute can reach a Service in a different namespace, we must create a ReferenceGrant in the namespace providing the Service.
So let’s try to deploy following ReferenceGrant in the gateway-api-test namespace:
---
apiVersion: gateway.networking.k8s.io/v1beta1
kind: ReferenceGrant
metadata:
  name: nginx
  namespace: gateway-api-test
spec:
  from:
  - group: gateway.networking.k8s.io
    kind: HTTPRoute
    namespace: openshift-ingress
  to:
  - group: ""
    kind: Service

Checking the status field of our HTTPRoute again:
(output omitted)
Status:
  Addresses:
    Type:   IPAddress
    Value:  10.0.0.150
  Conditions:
    Last Transition Time:  2025-08-29T15:47:07Z
    Message:               Resource accepted
    Observed Generation:   1
    Reason:                Accepted
    Status:                True
    Type:                  Accepted
    Last Transition Time:  2025-08-29T15:47:08Z
    Message:               Resource programmed, assigned to service(s) http-gateway-openshift-default.openshift-ingress.svc.cluster.local:80
    Observed Generation:   1
    Reason:                Programmed
    Status:                True (1)
    Type:                  Programmed

(1) Status is now True
And finally, calling the nginx pod again via our gateway:
$ curl -I http://nginx.gtw.ocp.lan.stderr.at
HTTP/1.1 200 OK
server: nginx/1.29.1
date: Fri, 29 Aug 2025 16:01:33 GMT
content-type: text/html
content-length: 615
last-modified: Wed, 13 Aug 2025 14:33:41 GMT
etag: "689ca245-267"
accept-ranges: bytes

Finally everything seems to be in place and working.
Conclusion
In this blog post we took a first look at the Kubernetes Gateway API and its integration into OpenShift. We enabled the Gateway API via a GatewayClass resource, created a simple HTTP gateway via a Gateway resource, deployed an Nginx pod and a Service, and exposed the Service via an HTTPRoute and a ReferenceGrant.
Hopefully an upcoming blog post will cover how to:
deploy a Gateway without MetalLB
deploy a TLS-secured service
implement HTTP redirects (a teaser sketch follows below)
rewrite URLs (if possible)
and other possibilities of the Gateway API
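As a small teaser for the redirect topic: an HTTP-to-HTTPS redirect with the Gateway API might look something like this (an untested sketch based on the upstream RequestRedirect filter, not something we have verified on OpenShift yet):

---
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: redirect-to-https
  namespace: openshift-ingress
spec:
  parentRefs:
  - name: http-gateway
    namespace: openshift-ingress
  hostnames: ["nginx.gtw.ocp.lan.stderr.at"]
  rules:
  - filters:
    - type: RequestRedirect
      requestRedirect:
        scheme: https
        statusCode: 301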