Publishing a message to Kafka running inside docker

I am running Kafka inside docker container. I start my container using the following command

docker run --rm -p 2181:2181 -p 9092:9092 -p 8081:8081 \
  --env ADVERTISED_HOST=`docker-machine ip \`docker-machine active\`` \
  --env ADVERTISED_PORT=9092 \
  -it --name kafka spotify/kafka bash

I have written a simple program which I can copy into the container and execute, and it works perfectly.

import java.util.Properties
import java.util.concurrent.TimeUnit.SECONDS

import org.apache.kafka.clients.producer.{KafkaProducer, ProducerRecord}

object KafkaProducerString {

  def SendStringMessage(msg: String): Unit = {
    val inputRecord = new ProducerRecord[String, String]("test", null, msg)
    val producer: KafkaProducer[String, String] = CreateProducerString
    val rm = producer.send(inputRecord).get(10, SECONDS)
    println(s"offset: ${rm.offset()} partition: ${rm.partition()} topic: ${rm.topic()}")
    producer.close()
  }

  private def CreateProducerString: KafkaProducer[String, String] = {
    val props = new Properties()
    props.put("bootstrap.servers", "localhost:9092")
    props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer")
    props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer")
    props.put("batch.size", "0")
    props.put("acks", "1") // the property name was lost in the original paste; "acks" is a guess
    new KafkaProducer[String, String](props)
  }
}

But if I run the same program from outside the container (from my Mac), replacing "localhost" with the output of docker-machine ip, I get this error:

[error] (run-main-0) java.util.concurrent.TimeoutException: Timeout after waiting for 10000 ms.
java.util.concurrent.TimeoutException: Timeout after waiting for 10000 ms.
    at org.apache.kafka.clients.producer.internals.FutureRecordMetadata.get(
    at org.apache.kafka.clients.producer.internals.FutureRecordMetadata.get(
    at com.abhi.KafkaProducerString$.SendStringMessage(KafkaProducerString.scala:23)
    at com.abhi.KafkaMain$$anonfun$main$1.apply$mcVI$sp(KafkaMain.scala:19)
    at scala.collection.immutable.Range.foreach$mVc$sp(Range.scala:160)
    at com.abhi.KafkaMain$.main(KafkaMain.scala:17)
    at com.abhi.KafkaMain.main(KafkaMain.scala)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(
    at java.lang.reflect.Method.invoke(

My understanding was that for a remote Kafka producer, the only ports I need to open are 2181 (ZooKeeper) and 9092 (Kafka), and you can see that I have opened these.

But the same program still times out when executed outside the container, while it works inside the container (with localhost).
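As a generic diagnostic (not from the thread itself): a plain TCP probe can confirm that the published 9092 port is reachable from the host, which narrows the timeout down to the broker's advertised metadata rather than the port mapping. A minimal sketch in Python; the docker-machine IP in the usage comment is an assumption:

```python
import socket

def port_open(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# e.g. port_open("192.168.99.100", 9092) -- if this returns True while the
# producer still times out, the port mapping is fine and the problem lies
# in the host the broker advertises back to the client.
```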

Edit:: Based on the suggestions below, I tried the following

docker run --rm -p 0.0.0.0:2181:2181 -p 0.0.0.0:9092:9092 -p 0.0.0.0:8081:8081 \
  --env ADVERTISED_HOST=`docker-machine ip \`docker-machine active\`` \
  --env ADVERTISED_PORT=9092 \
  -v /Users/abhishek.srivastava/MyProjects/KafkaTest/target/scala-2.11:/app \
  -it --name kafka kafka_9.0 bash


docker run --rm -p 127.0.0.1:2181:2181 -p 127.0.0.1:9092:9092 -p 127.0.0.1:8081:8081 \
  --env ADVERTISED_HOST=`docker-machine ip \`docker-machine active\`` \
  --env ADVERTISED_PORT=9092 \
  -v /Users/abhishek.srivastava/MyProjects/KafkaTest/target/scala-2.11:/app \
  -it --name kafka kafka_9.0 bash

But neither solved the problem; I get exactly the same error.



answered 2 years ago t6nand #1

You will have to bind your Docker container's ports to the local machine. This can be done with docker run as:

docker run --rm -p 0.0.0.0:2181:2181 -p 0.0.0.0:9092:9092 -p 0.0.0.0:8081:8081 ....

Alternatively, you can use docker run with a bind IP:

docker run --rm -p 127.0.0.1:2181:2181 -p 127.0.0.1:9092:9092 -p 127.0.0.1:8081:8081 .....

If you want to make docker container routable on your network you can use:

docker run --rm -p <private-IP>:2181:2181 -p <private-IP>:9092:9092 -p <private-IP>:8081:8081 ....

Or finally you can go for not containerising your network interface by using:

docker run --rm -p 2181:2181 -p 9092:9092 -p 8081:8081 --net host ....

answered 1 year ago sutanu dalui #2

While I am facing a similar problem myself, I can try to explain this behavior.

The Kafka producer looks up the partition leader in ZooKeeper before publishing a record to the topic. ZooKeeper holds the leader host entry as registered by the Kafka server, which is running inside a Docker container.

Because of this, the IP the server registers is the Docker-internal IP rather than the host IP. That address is of course not resolvable from the client machine, hence the timeout.
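The lookup behavior described above can be sketched as a toy model (plain Python, no real Kafka involved; the host values are illustrative): the client bootstraps successfully, but every subsequent request goes to whatever host the broker advertised.

```python
def broker_metadata(advertised_host: str) -> dict:
    # What the broker registers in ZooKeeper and returns in its metadata
    # response: the *advertised* host, not the address the client dialed.
    return {"leader": {"host": advertised_host, "port": 9092}}

def produce(bootstrap: str, metadata: dict, reachable: set) -> str:
    # Step 1: the bootstrap connection succeeds (the port is published).
    if bootstrap not in reachable:
        raise TimeoutError(f"cannot reach bootstrap {bootstrap}")
    # Step 2: the record goes to the leader named in the metadata.
    leader = metadata["leader"]["host"]
    if leader not in reachable:
        raise TimeoutError(f"cannot reach advertised leader {leader}")
    return f"sent via {leader}"

# Inside the container: the Docker-internal IP resolves, so the send works.
meta = broker_metadata("172.17.0.2")
print(produce("localhost", meta, reachable={"localhost", "172.17.0.2"}))

# From the host: the bootstrap address is reachable, but the advertised
# Docker-internal IP is not -- so the send times out.
try:
    produce("192.168.99.100", meta, reachable={"192.168.99.100"})
except TimeoutError as e:
    print(e)
```

This is why opening ports 2181 and 9092 is necessary but not sufficient: the advertised host must also be resolvable from wherever the client runs.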

A probable solution is to set the advertised host (ADVERTISED_HOST in this image) to the host IP of the Docker machine. However, this introduces another problem (as I found!).

Metadata fetches from inside the container will now start failing, because the ZooKeeper entry now holds the host IP, which is not resolvable from inside the container. As a consequence, any consumer application will start getting LEADER_NOT_AVAILABLE warnings.

This is a deadlock situation, and the solution depends mainly on the host-resolution strategy employed. I would like to know how people suggest going about this.
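For what it's worth, later Kafka versions (0.10.2+, which postdate this thread) break exactly this deadlock with per-listener advertising: the broker advertises one address to clients inside the Docker network and a different one to external clients. A sketch of the relevant server.properties; the container hostname and the docker-machine IP are placeholders:

```properties
# Two listeners: one for traffic inside the Docker network, one for the host.
listeners=INTERNAL://0.0.0.0:9093,EXTERNAL://0.0.0.0:9092
# Advertise the container hostname internally, the docker-machine IP externally.
advertised.listeners=INTERNAL://kafka:9093,EXTERNAL://<docker-machine-ip>:9092
listener.security.protocol.map=INTERNAL:PLAINTEXT,EXTERNAL:PLAINTEXT
inter.broker.listener.name=INTERNAL
```

With this, consumers inside the network resolve "kafka" while external producers resolve the docker-machine IP, so neither side is handed an unreachable address.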
