Pod placement
You can configure Pod placement for HBase nodes as described in the Pod placement concepts page.
Defaults
The default affinities created by the operator are:
- Co-locate all the HBase Pods (weight 20)
- Co-locate HBase regionservers with the underlying HDFS datanodes (weight 50)
- Distribute all Pods within the same role (masters, regionservers, rest servers) across nodes, so that multiple instances don't end up on the same Kubernetes node (weight 70)
All default affinities are only preferred, not enforced, since we cannot expect every setup to have multiple Kubernetes nodes. If you want them enforced, you need to specify your own requiredDuringSchedulingIgnoredDuringExecution affinities, as shown in the Use custom pod placement section below.
Default Pod placement constraints for master nodes:
affinity:
  podAffinity:
    preferredDuringSchedulingIgnoredDuringExecution:
      - podAffinityTerm:
          labelSelector:
            matchLabels:
              app.kubernetes.io/instance: cluster-name
              app.kubernetes.io/name: hbase
          topologyKey: kubernetes.io/hostname
        weight: 20
  podAntiAffinity:
    preferredDuringSchedulingIgnoredDuringExecution:
      - podAffinityTerm:
          labelSelector:
            matchLabels:
              app.kubernetes.io/component: master
              app.kubernetes.io/instance: cluster-name
              app.kubernetes.io/name: hbase
          topologyKey: kubernetes.io/hostname
        weight: 70
Default Pod placement constraints for region server nodes:
affinity:
  podAffinity:
    preferredDuringSchedulingIgnoredDuringExecution:
      - podAffinityTerm:
          labelSelector:
            matchLabels:
              app.kubernetes.io/instance: cluster-name
              app.kubernetes.io/name: hbase
          topologyKey: kubernetes.io/hostname
        weight: 20
      - podAffinityTerm:
          labelSelector:
            matchLabels:
              app.kubernetes.io/component: datanode
              app.kubernetes.io/instance: hdfs-cluster-name
              app.kubernetes.io/name: hdfs
          topologyKey: kubernetes.io/hostname
        weight: 50
  podAntiAffinity:
    preferredDuringSchedulingIgnoredDuringExecution:
      - podAffinityTerm:
          labelSelector:
            matchLabels:
              app.kubernetes.io/component: regionserver
              app.kubernetes.io/instance: cluster-name
              app.kubernetes.io/name: hbase
          topologyKey: kubernetes.io/hostname
        weight: 70
Default Pod placement constraints for rest server nodes:
affinity:
  podAffinity:
    preferredDuringSchedulingIgnoredDuringExecution:
      - podAffinityTerm:
          labelSelector:
            matchLabels:
              app.kubernetes.io/instance: cluster-name
              app.kubernetes.io/name: hbase
          topologyKey: kubernetes.io/hostname
        weight: 20
  podAntiAffinity:
    preferredDuringSchedulingIgnoredDuringExecution:
      - podAffinityTerm:
          labelSelector:
            matchLabels:
              app.kubernetes.io/component: restserver
              app.kubernetes.io/instance: cluster-name
              app.kubernetes.io/name: hbase
          topologyKey: kubernetes.io/hostname
        weight: 70
In the examples above, cluster-name is the name of the HBase custom resource that owns this Pod, and hdfs-cluster-name is the name of the HDFS cluster that was configured in the hdfsConfigMapName property.
It is important that the hdfsConfigMapName property contains the name of the whole HDFS cluster. You could instead configure the ConfigMap of a specific namenode or datanode role, but for the purpose of Pod placement this leads to faulty behavior.
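For illustration, a minimal sketch of where this property lives in an HbaseCluster resource; the apiVersion and the clusterConfig nesting are assumptions based on recent operator versions and may differ in yours:
apiVersion: hbase.stackable.tech/v1alpha1
kind: HbaseCluster
metadata:
  name: cluster-name
spec:
  clusterConfig:
    hdfsConfigMapName: hdfs-cluster-name # the whole HDFS cluster, not a role-group ConfigMap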
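To verify which affinities the operator actually applied, you can read them back from a running Pod. A minimal sketch; the Pod name cluster-name-master-default-0 assumes the usual <cluster>-<role>-<rolegroup> naming and may differ in your setup:
kubectl get pod cluster-name-master-default-0 -o jsonpath='{.spec.affinity}'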
Use custom pod placement
For general configuration of Pod placement, see the Pod placement concepts page. One example use-case for HBase would be to require the HBase masters to run on different Kubernetes nodes as follows:
spec:
  masters:
    config:
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            - labelSelector:
                matchLabels:
                  app.kubernetes.io/name: hbase
                  app.kubernetes.io/instance: cluster-name # Replace with your HbaseCluster name!
                  app.kubernetes.io/component: master
              topologyKey: kubernetes.io/hostname
    roleGroups:
      default:
        replicas: 2
Please note that the Pods will be stuck in Pending if your Kubernetes cluster does not have a node with sufficient compute resources that is not already running a master.
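If that happens, kubectl describe surfaces the scheduler's reasoning as FailedScheduling events; a sketch (the exact message wording varies with the Kubernetes version):
kubectl describe pod cluster-name-master-default-1
# Events typically contain a message like:
#   0/3 nodes are available: 3 node(s) didn't match pod anti-affinity rules.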