Controlling Pod Placement

Many infrastructure-related pods in an OKD cluster are configured to run on master nodes. Examples include pods for the DNS operator, the OAuth operator, and the OpenShift API server.

In some cases, this is accomplished by using the node selector node-role.kubernetes.io/master: '' in the configuration of a daemon set or a replica set.

Similarly, some user applications might require running on a specific set of nodes. For example, certain nodes provide hardware acceleration for certain types of workloads, or the cluster administrator does not want to mix production applications with development applications. Use node labels and node selectors to implement these kinds of scenarios.
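For example, a cluster administrator can label a node and then query nodes by that label. The commands below are a sketch; the node name worker01 is a placeholder for an actual node in your cluster:

```
# Add the env=dev label to a node (node name is hypothetical).
oc label node worker01 env=dev

# List nodes, showing the value of the env label in an extra column.
oc get nodes -L env

# Remove the label again if needed (note the trailing dash).
oc label node worker01 env-
```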

A node selector is part of an individual pod definition. Define a node selector in either a deployment or a deployment configuration resource, so that any new pod generated from that resource will have the desired node selector. If your deployment or deployment configuration resource is under version control, then modify the resource file and apply the changes using the oc apply -f command.
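A deployment kept under version control might carry the node selector directly in its pod template. A minimal sketch (metadata names are illustrative) could look like:

```
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
spec:
  replicas: 1
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      nodeSelector:
        env: dev        # schedule pods only on nodes labeled env=dev
      containers:
      - name: myapp
        image: quay.io/redhattraining/scaling:v1.0
```

After editing the file, apply the change with oc apply -f deployment.yaml.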

Alternatively, a node selector can be added or modified using either the oc edit command or the oc patch command. For example, to configure the myapp deployment so that its pods only run on nodes that have the env=dev label, use the oc edit command:

oc edit deployment/myapp
...output omitted...
spec:
...output omitted...
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: myapp
    spec:
      nodeSelector:
        env: dev
      containers:
      - image: quay.io/redhattraining/scaling:v1.0
...output omitted...

The following oc patch command accomplishes the same thing:

oc patch deployment/myapp --patch \
> '{"spec":{"template":{"spec":{"nodeSelector":{"env":"dev"}}}}}'

Whether using the oc edit command or the oc patch command, the change triggers a new deployment and the new pods are scheduled according to the node selector.
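To confirm the rollout and the resulting placement, you can inspect the pods and the nodes they were scheduled on. This assumes the deployment's pods carry the app=myapp label, as in the example above:

```
# Wait for the new rollout to finish.
oc rollout status deployment/myapp

# The NODE column shows where each pod was scheduled.
oc get pods -l app=myapp -o wide
```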
