A Review Of 100% Correct CCDAK Braindumps
NEW QUESTION 1
There are 3 brokers in the cluster. You want to create a topic with a single partition that is resilient to one broker failure and one broker maintenance. What replication factor will you specify when creating the topic?
- A. 6
- B. 3
- C. 2
- D. 1
Answer: B
Explanation:
A replication factor of 1 provides no resilience to failure; 2 is not enough, because if we take one broker down for maintenance we can no longer tolerate a broker failure; and 6 is impossible, as we only have 3 brokers (the replication factor cannot be greater than the number of brokers). The correct answer is therefore 3.
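For illustration, a minimal Java AdminClient sketch that creates such a topic, assuming a broker reachable at localhost:9092 and an illustrative topic name:

import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.NewTopic;
import java.util.Collections;
import java.util.Properties;

Properties props = new Properties();
props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
try (AdminClient admin = AdminClient.create(props)) {
    // 1 partition, replication factor 3: tolerates one failure even during one maintenance
    NewTopic topic = new NewTopic("my-topic", 1, (short) 3);
    admin.createTopics(Collections.singletonList(topic)).all().get();
}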
NEW QUESTION 2
How will you find out all the partitions without a leader?
- A. kafka-topics.sh --broker-list localhost:9092 --describe --under-replicated-partitions
- B. kafka-topics.sh --bootstrap-server localhost:2181 --describe --unavailable-partitions
- C. kafka-topics.sh --zookeeper localhost:2181 --describe --unavailable-partitions
- D. kafka-topics.sh --zookeeper localhost:2181 --describe --under-replicated-partitions
Answer: C
Explanation:
Please note that as of Kafka 2.2, the --zookeeper option is deprecated; you can now use kafka-topics.sh --bootstrap-server localhost:9092 --describe --unavailable-partitions
NEW QUESTION 3
There are 3 producers writing to a topic with 5 partitions. There are 5 consumers consuming from the topic. How many Controllers will be present in the cluster?
- A. 3
- B. 5
- C. 2
- D. 1
Answer: D
Explanation:
There is only one controller in a cluster at all times.
NEW QUESTION 4
A Zookeeper ensemble contains 3 servers. Over which ports should the members of the ensemble be able to communicate in the default configuration? (select three)
- A. 2181
- B. 3888
- C. 443
- D. 2888
- E. 9092
- F. 80
Answer: ABD
Explanation:
2181 is the client port, 2888 is the peer port (followers connect to the leader on it), and 3888 is the leader-election port.
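An illustrative zoo.cfg fragment showing where these ports appear (hostnames are placeholders):

clientPort=2181
# server.X=hostname:peerPort:leaderElectionPort
server.1=zk1:2888:3888
server.2=zk2:2888:3888
server.3=zk3:2888:3888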
NEW QUESTION 5
Which of the following statements are true regarding the number of partitions of a topic?
- A. The number of partitions in a topic cannot be altered
- B. We can add partitions in a topic by adding a broker to the cluster
- C. We can add partitions in a topic using the kafka-topics.sh command
- D. We can remove partitions in a topic by removing a broker
- E. We can remove partitions in a topic using the kafka-topics.sh command
Answer: C
Explanation:
We can only add partitions to an existing topic, and it must be done using the kafka-topics.sh command
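For example, assuming a broker at localhost:9092 and a topic named my-topic, the partition count can be increased (never decreased) with:

kafka-topics.sh --bootstrap-server localhost:9092 --alter --topic my-topic --partitions 6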
NEW QUESTION 6
In Kafka, every broker... (select three)
- A. contains all the topics and all the partitions
- B. knows all the metadata for all topics and partitions
- C. is a controller
- D. knows the metadata for the topics and partitions it has on its disk
- E. is a bootstrap broker
- F. contains only a subset of the topics and the partitions
Answer: BEF
Explanation:
Kafka topics are divided into partitions and spread across brokers. Each broker knows the metadata for all topics and partitions, and each broker can serve as a bootstrap broker, but only one of them is elected controller.
NEW QUESTION 7
To get acknowledgement of writes from the partition leader only, we need to use the config...
- A. acks=1
- B. acks=0
- C. acks=all
Answer: A
Explanation:
Producers can set acks=1 to get an acknowledgement from the partition leader only.
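A minimal producer configuration sketch in Java, assuming string keys and values and a broker at localhost:9092:

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.common.serialization.StringSerializer;
import java.util.Properties;

Properties props = new Properties();
props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
// acks=1: wait for the partition leader's acknowledgement only
props.put(ProducerConfig.ACKS_CONFIG, "1");
KafkaProducer<String, String> producer = new KafkaProducer<>(props);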
NEW QUESTION 8
A Zookeeper ensemble contains 5 servers. What is the maximum number of servers that can go missing and the ensemble still run?
- A. 3
- B. 4
- C. 2
- D. 1
Answer: C
Explanation:
A 5-node Zookeeper ensemble needs a majority (quorum) of floor(5/2) + 1 = 3 nodes to keep running, so up to 2 servers can go missing.
NEW QUESTION 9
Compaction is enabled for a topic in Kafka by setting log.cleanup.policy=compact. What is true about log compaction?
- A. After cleanup, only one message per key is retained with the first value
- B. Each message stored in the topic is compressed
- C. Kafka automatically de-duplicates incoming messages based on key hashes
- D. After cleanup, only one message per key is retained with the latest value
- E. Compaction changes the offset of messages
Answer: D
Explanation:
Log compaction retains at least the last known value for each record key within a single topic partition. All compacted log offsets remain valid: even if the record at an offset has been compacted away, a consumer will simply get the next highest offset.
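For example, cleanup.policy can be set per topic at creation time (topic name and broker address are illustrative):

kafka-topics.sh --bootstrap-server localhost:9092 --create --topic my-compacted-topic \
  --partitions 1 --replication-factor 3 --config cleanup.policy=compact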
NEW QUESTION 10
If I want to send binary data through the REST proxy to topic "test_binary", it needs to be base64 encoded. A consumer connecting directly to the Kafka topic "test_binary" will receive...
- A. binary data
- B. avro data
- C. json data
- D. base64 encoded data, it will need to decode it
Answer: A
Explanation:
On the producer side, after receiving the base64 data, the REST Proxy decodes it into bytes and sends that byte payload to Kafka. Therefore consumers reading directly from Kafka will receive binary data.
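A sketch of such a produce request against the v2 REST Proxy, assuming it listens on localhost:8082 ("S2Fma2E=" is base64 for "Kafka"):

curl -X POST http://localhost:8082/topics/test_binary \
  -H "Content-Type: application/vnd.kafka.binary.v2+json" \
  -d '{"records": [{"value": "S2Fma2E="}]}'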
NEW QUESTION 11
If you enable an SSL endpoint in Kafka, what feature of Kafka will be lost?
- A. Cross-cluster mirroring
- B. Support for Avro format
- C. Zero copy
- D. Exactly-once delivery
Answer: C
Explanation:
With SSL, messages need to be encrypted and decrypted in user space, by first being loaded into the JVM, so you lose the zero-copy optimization. See more information here: https://twitter.com/ijuma/status/1161303431501324293?s=09
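An illustrative server.properties fragment that enables an SSL endpoint (paths and passwords are placeholders):

listeners=PLAINTEXT://:9092,SSL://:9093
ssl.keystore.location=/path/to/kafka.server.keystore.jks
ssl.keystore.password=changeit
ssl.key.password=changeit
ssl.truststore.location=/path/to/kafka.server.truststore.jks
ssl.truststore.password=changeit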
NEW QUESTION 12
A Zookeeper configuration has tickTime of 2000, initLimit of 20 and syncLimit of 5. What's the timeout value for followers to connect to Zookeeper?
- A. 20 sec
- B. 10 sec
- C. 2000 ms
- D. 40 sec
Answer: D
Explanation:
tickTime is 2000 ms, and initLimit (expressed in ticks) is the config that bounds how long followers have to connect and sync to the leader, so the timeout is 2000 * 20 = 40000 ms = 40 s.
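The settings from the question map onto zoo.cfg like this:

# length of one tick in milliseconds
tickTime=2000
# ticks a follower may take to connect and sync to the leader (20 * 2000 ms = 40 s)
initLimit=20
# ticks a follower may lag behind the leader before being dropped (5 * 2000 ms = 10 s)
syncLimit=5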
NEW QUESTION 13
A Kafka producer application wants to send log messages that do not include any key to a topic. Which properties are mandatory in the producer configuration? (select three)
- A. bootstrap.servers
- B. partition
- C. key.serializer
- D. value.serializer
- E. key
- F. value
Answer: ACD
Explanation:
bootstrap.servers, key.serializer and value.serializer are all mandatory: both serializers must be set even if the messages carry no key.
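Assuming a producer configured as in the earlier sketch, a keyless record uses the two-argument ProducerRecord constructor (topic name illustrative):

import org.apache.kafka.clients.producer.ProducerRecord;

// The key is left null; a key serializer must still be configured
ProducerRecord<String, String> record = new ProducerRecord<>("application-logs", "app started");
producer.send(record);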
NEW QUESTION 14
Which of the following Kafka Streams operators are stateless? (select all that apply)
- A. map
- B. filter
- C. flatmap
- D. branch
- E. groupBy
- F. aggregate
Answer: ABCDE
Explanation:
See https://kafka.apache.org/20/documentation/streams/developer-guide/dsl-api.html#stateless-transformations
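A brief Streams DSL sketch chaining stateless operators, assuming string keys and values and illustrative topic names:

import java.util.Arrays;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.kstream.KStream;

StreamsBuilder builder = new StreamsBuilder();
KStream<String, String> stream = builder.stream("input-topic");
stream.filter((key, value) -> value != null)                    // stateless: drops records
      .mapValues(value -> value.toUpperCase())                  // stateless: one-to-one transform
      .flatMapValues(value -> Arrays.asList(value.split(" ")))  // stateless: one-to-many
      .to("output-topic");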
NEW QUESTION 15
How will you set the retention for the topic named "my-topic" to 1 hour?
- A. Set the broker config log.retention.ms to 3600000
- B. Set the consumer config retention.ms to 3600000
- C. Set the topic config retention.ms to 3600000
- D. Set the producer config retention.ms to 3600000
Answer: C
Explanation:
retention.ms can be configured at the topic level, either when creating the topic or by altering it afterwards. It shouldn't be set at the broker level (log.retention.ms), as that would impact all the topics in the cluster, not just the one we are interested in.
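For example, on recent Kafka versions (broker address and topic name are illustrative):

kafka-configs.sh --bootstrap-server localhost:9092 --alter --entity-type topics \
  --entity-name my-topic --add-config retention.ms=3600000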
NEW QUESTION 16
A consumer application is using KafkaAvroDeserializer to deserialize Avro messages. What happens if the message schema is not present in the AvroDeserializer's local cache?
- A. Throws SerializationException
- B. Fails silently
- C. Throws DeserializationException
- D. Fetches schema from Schema Registry
Answer: D
Explanation:
First, the local cache is checked for the message schema. On a cache miss, the schema is fetched from the Schema Registry. An exception is thrown if the Schema Registry does not have the schema (which should never happen if it is set up properly).
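A consumer configuration sketch using the Confluent Avro deserializer; the broker and Schema Registry addresses are illustrative:

import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.KafkaConsumer;

Properties props = new Properties();
props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
props.put(ConsumerConfig.GROUP_ID_CONFIG, "avro-consumer");
props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, "io.confluent.kafka.serializers.KafkaAvroDeserializer");
props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, "io.confluent.kafka.serializers.KafkaAvroDeserializer");
// schemas are cached locally; on a cache miss the deserializer fetches from this registry
props.put("schema.registry.url", "http://localhost:8081");
KafkaConsumer<Object, Object> consumer = new KafkaConsumer<>(props);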
NEW QUESTION 17
You are using the JDBC source connector to copy data from 3 tables to three Kafka topics. There is one connector created with tasks.max equal to 2, deployed on a cluster of 3 workers. How many tasks are launched?
- A. 2
- B. 1
- C. 3
- D. 6
Answer: A
Explanation:
Here we have three tables, but tasks.max is 2, so that is the maximum number of tasks that will be created.
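An illustrative connector configuration (connection URL and table names are placeholders); with three tables but tasks.max set to 2, Connect launches only 2 tasks:

{
  "name": "jdbc-source",
  "config": {
    "connector.class": "io.confluent.connect.jdbc.JdbcSourceConnector",
    "connection.url": "jdbc:postgresql://localhost:5432/mydb",
    "table.whitelist": "table_a,table_b,table_c",
    "mode": "incrementing",
    "incrementing.column.name": "id",
    "tasks.max": "2"
  }
}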
NEW QUESTION 18
while (true) {
    ConsumerRecords<String, String> records = consumer.poll(100);
    try {
        // Offsets are committed BEFORE the records are processed
        consumer.commitSync();
    } catch (CommitFailedException e) {
        log.error("commit failed", e);
    }
    for (ConsumerRecord<String, String> record : records) {
        System.out.printf("topic = %s, partition = %s, offset = %d, customer = %s, country = %s%n",
                record.topic(), record.partition(), record.offset(), record.key(), record.value());
    }
}
What kind of delivery guarantee does this consumer offer?
- A. Exactly-once
- B. At-least-once
- C. At-most-once
Answer: C
Explanation:
Here the offsets are committed before the messages are processed. If the consumer crashes after committing but before processing, those messages will be lost when it comes back up, so the guarantee is at-most-once.
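For contrast, moving commitSync() after the processing loop yields at-least-once delivery (process() is a hypothetical handler):

while (true) {
    ConsumerRecords<String, String> records = consumer.poll(100);
    for (ConsumerRecord<String, String> record : records) {
        process(record); // hypothetical: handle the record before committing
    }
    consumer.commitSync(); // committed AFTER processing: records may be reprocessed, never lost
}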
NEW QUESTION 19
......