Step 7 — Test the Installation
Let us now publish and consume a "Hello, World" message to make sure that the Kafka server is behaving correctly.
To publish messages, you should create a Kafka producer. You can easily create one from the command line using the kafka-console-producer.sh script. It expects the Kafka server's hostname and port, along with a topic name as its arguments.
Publish the string "Hello, World" to a topic called TutorialTopic by typing in the following:
echo "Hello, World" | ~/kafka/bin/kafka-console-producer.sh --broker-list localhost:9092 --topic TutorialTopic > /dev/null
As the topic doesn't exist, Kafka will create it automatically.
To consume messages, you can create a Kafka consumer using the kafka-console-consumer.sh script. It expects the ZooKeeper server's hostname and port, along with a topic name as its arguments.
The following command consumes messages from the topic we published to. Note the use of the --from-beginning flag, which is present because we want to consume a message that was published before the consumer was started.
~/kafka/bin/kafka-console-consumer.sh --zookeeper localhost:2181 --topic TutorialTopic --from-beginning
If there are no configuration issues, you should see Hello, World in the output now.
The script will continue to run, waiting for more messages to be published to the topic. Feel free to open a new terminal and start a producer to publish a few more messages. You should be able to see them all in the consumer's output instantly.
When you are done testing, press CTRL+C to stop the consumer script.
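The steps above can also be scripted as a single round-trip check. The sketch below assumes Kafka is installed under ~/kafka and listening on the default ports, as in this tutorial; the helper functions only build the command lines used in this step, so you can inspect them before running. The `--max-messages 1` flag makes the consumer exit after reading one message instead of waiting for CTRL+C, and `timeout` guards against a hung consumer.

```shell
#!/usr/bin/env bash
# A minimal sketch of the round-trip test from this step as one script.
KAFKA_BIN="${KAFKA_BIN:-$HOME/kafka/bin}"
TOPIC="${TOPIC:-TutorialTopic}"

producer_cmd() {
  # Same producer invocation as in this step.
  echo "$KAFKA_BIN/kafka-console-producer.sh --broker-list localhost:9092 --topic $TOPIC"
}

consumer_cmd() {
  # --max-messages 1 makes the consumer exit after one message rather
  # than running until interrupted.
  echo "$KAFKA_BIN/kafka-console-consumer.sh --zookeeper localhost:2181 --topic $TOPIC --from-beginning --max-messages 1"
}

roundtrip() {
  # Publish a test message, then read it back; timeout prevents hanging
  # if the broker is not reachable.
  echo "Hello, World" | $(producer_cmd) > /dev/null
  timeout 10 $(consumer_cmd)
}
```

If the broker is healthy, calling `roundtrip` should print Hello, World and exit.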
Step 8 — Install KafkaT (Optional)
KafkaT is a handy little tool from Airbnb which makes it easier for you to view details about your Kafka cluster and also perform a few administrative tasks from the command line. As it is a Ruby gem, you will need Ruby to use it. You will also need the build-essential package to be able to build the other gems it depends on. Install them using apt-get:
sudo apt-get install ruby ruby-dev build-essential
You can now install KafkaT using the gem command:
sudo gem install kafkat --source https://rubygems.org
Use vi to create a new file called .kafkatcfg in your home directory.
This is a configuration file which KafkaT uses to determine the installation and log directories of your Kafka server. It should also point KafkaT to your ZooKeeper instance. Accordingly, add the following lines to it:
{
  "kafka_path": "~/kafka",
  "log_path": "/tmp/kafka-logs",
  "zk_path": "localhost:2181"
}
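If you would rather not open an editor, the same file can be written non-interactively with a heredoc. This is just the three settings above wrapped in a JSON object; the quoted 'EOF' delimiter prevents the shell from expanding anything inside it.

```shell
# Write ~/.kafkatcfg without opening an editor.
cat > ~/.kafkatcfg <<'EOF'
{
  "kafka_path": "~/kafka",
  "log_path": "/tmp/kafka-logs",
  "zk_path": "localhost:2181"
}
EOF
```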
You are now ready to use KafkaT. For a start, here's how you would use it to view details about all Kafka partitions:

kafkat partitions

You should see the following output:
output of kafkat partitions
Topic           Partition   Leader      Replicas        ISRs
TutorialTopic   0           0           [0]             [0]
To learn more about KafkaT, refer to its GitHub repository.
Step 9 — Set Up a Multi-Node Cluster (Optional)
If you want to create a multi-broker cluster using more Ubuntu 14.04 machines, you should repeat Step 1, Step 3, Step 4 and Step 5 on each of the new machines. Additionally, you should make the following changes in the server.properties file in each of them:
- the value of the broker.id property should be changed so that it is unique throughout the cluster
- the value of the zookeeper.connect property should be changed so that all nodes point to the same ZooKeeper instance

If you want to have multiple ZooKeeper instances for your cluster, the value of the zookeeper.connect property on each node should be an identical, comma-separated string listing the IP addresses and port numbers of all the ZooKeeper instances.
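As an example, here is what these two properties might look like in a second broker's server.properties. The IP addresses below are hypothetical; substitute the actual address of the machine running your ZooKeeper instance:

```ini
# Unique within the cluster (the first broker keeps the default broker.id=0)
broker.id=1
# Shared ZooKeeper instance; with multiple ZooKeeper nodes this becomes a
# comma-separated list, e.g. 10.0.0.1:2181,10.0.0.2:2181,10.0.0.3:2181
zookeeper.connect=10.0.0.1:2181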
Step 10 — Restrict the Kafka User
Now that all installations are done, you can remove the kafka user's admin privileges. Before you do so, log out and log back in as any other non-root sudo user. If you are still running the same shell session you started this tutorial with, simply type exit.
To remove the kafka user's admin privileges, remove it from the sudo group.
sudo deluser kafka sudo
To further improve your Kafka server's security, lock the kafka user's password using the passwd command. This ensures that nobody can log in to the account directly.
sudo passwd kafka -l
At this point, only root or a sudo user can log in as kafka by typing in the following command:
sudo su - kafka
In the future, if you want to unlock it, use passwd with the -u option:
sudo passwd kafka -u