First steps
Having gone through the Installation section and installed all the operators, you will now deploy a Kafka cluster and the required dependencies. Afterwards you can verify that it works by producing test data into a topic and consuming it.
Setup
Two things need to be installed to create a Kafka cluster:
- A ZooKeeper instance for internal use by Kafka
- The Kafka cluster itself
We will create them in this order; each one is created by applying a manifest file. The operators you just installed will then create the resources according to those manifests.
ZooKeeper
Create a file named zookeeper.yaml with the following content:
---
apiVersion: zookeeper.stackable.tech/v1alpha1
kind: ZookeeperCluster
metadata:
  name: simple-zk
spec:
  image:
    productVersion: 3.8.0
    stackableVersion: 23.1.0
  servers:
    roleGroups:
      default:
        replicas: 1
and apply it:
kubectl apply -f zookeeper.yaml
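The ZooKeeper operator will now pick up this resource and create the corresponding Pods. As an optional check, you can look at the custom resource you just created (assuming the default CRD naming, the kind can be queried as zookeepercluster):
kubectl get zookeepercluster simple-zk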
Create a file kafka-znode.yaml with the following content:
---
apiVersion: zookeeper.stackable.tech/v1alpha1
kind: ZookeeperZnode
metadata:
  name: simple-kafka-znode
spec:
  clusterRef:
    name: simple-zk
and apply it:
kubectl apply -f kafka-znode.yaml
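The name of this ZNode resource matches the zookeeperConfigMapName used in the Kafka manifest below; the ZooKeeper operator should publish the connection details for it under that name. As an optional check, you can look at the resource you just created:
kubectl get zookeeperznode simple-kafka-znode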
Kafka
Create a file named kafka.yaml with the following contents:
---
apiVersion: kafka.stackable.tech/v1alpha1
kind: KafkaCluster
metadata:
  name: simple-kafka
spec:
  image:
    productVersion: 3.3.1
    stackableVersion: 23.1.0
  clusterConfig:
    tls:
      serverSecretClass: null
    zookeeperConfigMapName: simple-kafka-znode
  brokers:
    roleGroups:
      default:
        replicas: 3
and apply it:
kubectl apply -f kafka.yaml
This will create the actual Kafka instance.
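Creating the brokers can take a short while. If you want to follow the rollout, you can watch the Pods come up (press Ctrl+C to stop watching):
kubectl get pods -w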
Verify that it works
Next you will produce data into a topic and read it via kcat. Depending on your platform you may need to replace kafkacat in the commands below with kcat.
First, make sure that all the Pods in the StatefulSets are ready:
kubectl get statefulset
The output should show all pods ready:
NAME                          READY   AGE
simple-kafka-broker-default   3/3     5m
simple-zk-server-default      1/1     7m
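If one of the StatefulSets is not ready yet, you can block until it has finished rolling out, for example (the timeout value is just a suggestion):
kubectl rollout status statefulset/simple-kafka-broker-default --timeout=300s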
Then, create a port-forward for the Kafka Broker:
kubectl port-forward svc/simple-kafka 9092 2>&1 >/dev/null &
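To check that the port-forward is up before producing data, you can ask the broker for its metadata (again, replace kafkacat with kcat if needed):
kafkacat -b localhost:9092 -L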
Create a file containing some data:
echo "some test data" > data
Write that data:
kafkacat -b localhost:9092 -t test-data-topic -P data
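This sends the contents of the data file as a message to the test-data-topic topic. Alternatively, kcat can read messages from standard input in producer mode, one message per line, for example:
echo "some test data" | kafkacat -b localhost:9092 -t test-data-topic -P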
Read that data:
kafkacat -b localhost:9092 -t test-data-topic -C -e > read-data
Check the content:
cat read-data | grep "some test data"
And clean up:
rm data
rm read-data
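The port-forward started earlier is still running in the background. If you want to stop it too, you can list your shell's background jobs and terminate it (assuming it is the only background job, it will be job %1):
jobs
kill %1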
You successfully created a Kafka cluster and produced and consumed data.
What’s next
Have a look at the Usage page to find out more about the features of the Kafka Operator.