All Practice Exams

100+ Free EX482 Practice Questions

Pass your Red Hat Certified Specialist in Event-Driven Application Development (EX482) exam on the first try — instant access, no signup required.

✓ No registration ✓ No credit card ✓ No hidden fees ✓ Start practicing immediately
~55-65% Pass Rate
100+ Questions
100% Free

Key Facts: EX482 Exam

  • Passing score: 210/300 (70%) (Red Hat)
  • Exam length: ~3 hours, single section (Red Hat)
  • Exam fee: $400 USD (Red Hat)
  • Study time: 100-150 hours (recommended)
  • Streaming engineer salary: $140-190K (Glassdoor 2024)
  • Certification validity: 3 years (Red Hat renewal)

EX482 is Red Hat's flagship event-driven development specialist certification — a roughly 3-hour, hands-on, performance-based exam with no multiple-choice questions. Passing score is 210/300 (70%) and the exam fee is approximately $400 USD. EX482 holders are typically senior platform/streaming engineers earning $140,000-190,000. The credential is valid for 3 years.

Sample EX482 Practice Questions

Try these sample questions to test your EX482 exam readiness. Each question includes a detailed explanation. Start the interactive quiz above for the full 100+ question experience with AI tutoring.

1. Which Kafka concept divides a topic's log so consumers in a consumer group can scale out by reading different partitions in parallel?
A. Replicas
B. Partitions
C. Segments
D. Brokers
Explanation: A Kafka topic is divided into partitions, and each partition is an independent ordered log. Consumers within a consumer group are assigned partitions, so the maximum parallelism of a single group equals the number of partitions on the topic.
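The key-to-partition mapping can be sketched in a few lines. Real Kafka's default partitioner hashes the key bytes with murmur2; this illustrative Python version substitutes a stdlib hash just to show the deterministic mapping that preserves per-key ordering.

```python
import hashlib

def choose_partition(key: bytes, num_partitions: int) -> int:
    # Kafka's default partitioner uses murmur2 on the key bytes;
    # md5 is used here only because it is in the stdlib. The point
    # is the deterministic key -> partition mapping.
    digest = hashlib.md5(key).digest()
    return int.from_bytes(digest[:4], "big") % num_partitions

# The same key always lands in the same partition, which is what
# gives Kafka per-key ordering within a topic.
assert choose_partition(b"order-42", 4) == choose_partition(b"order-42", 4)
```

Records with different keys spread across partitions, which is where the parallelism comes from.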
2. What does the replication factor of a Kafka topic determine?
A. The number of partitions in the topic
B. The number of brokers that hold a copy of each partition
C. The number of consumer groups allowed to read the topic
D. The number of producer instances allowed to write to the topic
Explanation: The replication factor sets how many brokers hold a copy of each partition in the topic. With replication factor 3, each partition has one leader and two follower replicas, so the cluster can tolerate up to two broker failures without data loss (write availability also depends on min.insync.replicas).
3. Which producer configuration provides the strongest durability guarantee for an acknowledged write?
A. acks=0
B. acks=1
C. acks=all
D. acks=leader
Explanation: With acks=all (also written as acks=-1) the leader waits until all in-sync replicas have written the record before acknowledging the producer. Combined with min.insync.replicas, this is the strongest durability setting Kafka offers.
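A minimal producer configuration for the strongest durability might look like the following (these are standard Kafka producer property names; the min.insync.replicas note applies on the topic or broker side, not the producer):

```properties
# Kafka producer configuration (producer.properties)
acks=all                     # leader waits for all in-sync replicas
enable.idempotence=true      # broker deduplicates retried sends

# Pair with a topic- or broker-level setting (NOT a producer property):
#   min.insync.replicas=2    # acks=all writes fail if the ISR shrinks below 2
```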
4. Which producer setting must be true to enable Kafka's idempotent producer behavior?
A. enable.idempotence=true
B. idempotent.producer=true
C. producer.idempotent=enabled
D. transactional.idempotence=true
Explanation: enable.idempotence=true turns on the idempotent producer. Kafka assigns the producer a PID and sequence numbers per partition so the broker can deduplicate retries, eliminating duplicates from network retries. Modern Kafka versions default this to true.
5. When two consumers in the same consumer group subscribe to a topic with four partitions, how does Kafka assign the partitions by default?
A. Each consumer reads all four partitions
B. Each consumer is assigned two partitions
C. Only one consumer is active and reads all partitions; the other is idle
D. Kafka rejects the second consumer because the partition count is not equal to the consumer count
Explanation: Within a consumer group each partition is consumed by exactly one consumer. With four partitions and two consumers the default assignor (RangeAssignor or CooperativeStickyAssignor) distributes two partitions to each consumer.
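The range-style assignment can be modeled in a short sketch. This is an illustration of how contiguous partition ranges get split across sorted consumers, not the broker's exact RangeAssignor implementation.

```python
def range_assign(consumers, partitions):
    # Rough model of Kafka's RangeAssignor for a single topic:
    # sorted consumers each take a contiguous range of partitions,
    # with any remainder going to the first consumers.
    consumers = sorted(consumers)
    per, extra = divmod(len(partitions), len(consumers))
    out, start = {}, 0
    for i, c in enumerate(consumers):
        count = per + (1 if i < extra else 0)
        out[c] = partitions[start:start + count]
        start += count
    return out

# Four partitions, two consumers: each consumer gets two partitions.
assert range_assign(["c1", "c2"], [0, 1, 2, 3]) == {"c1": [0, 1], "c2": [2, 3]}
```

Adding a third consumer would leave one consumer with two partitions and two consumers with one each; a fifth consumer on a four-partition topic would sit idle.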
6. What is the offset of a Kafka record?
A. Its byte position in the segment file
B. Its sequential ID within a single partition
C. Its global ID across the entire topic
D. Its insertion timestamp in milliseconds
Explanation: An offset is a 64-bit integer that uniquely identifies a record within a single partition; it is monotonically increasing within that partition only. Records in different partitions can share the same offset.
7. A consumer starts and there is no committed offset for its consumer group. With auto.offset.reset=earliest, what does the consumer do?
A. Throws an OffsetOutOfRangeException and stops
B. Starts reading from the beginning of each assigned partition
C. Starts reading from the latest record (end of log)
D. Pauses until an administrator manually commits an offset
Explanation: auto.offset.reset=earliest tells the consumer to begin from the smallest available offset (start of the retained log) when no committed offset exists or the existing one is out of range. Use latest for tail-read behavior instead.
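A consumer configuration sketch showing the reset behavior in context (the group name is hypothetical; property names are standard Kafka consumer configs):

```properties
# Kafka consumer configuration
group.id=orders-service          # hypothetical group name
auto.offset.reset=earliest       # no committed offset -> read from start of the log
enable.auto.commit=false         # commit manually after processing for at-least-once
```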
8. Which retention policy keeps only the most recent value per key for a topic, making it suitable for compacted KTable change-logs?
A. cleanup.policy=delete
B. cleanup.policy=compact
C. cleanup.policy=time
D. cleanup.policy=key-only
Explanation: cleanup.policy=compact enables log compaction: Kafka guarantees that the latest value for each key is retained indefinitely (subject to delete.retention.ms for tombstones). This is the storage backing Kafka Streams KTables and many CDC use cases.
9. Which special record value is used in a compacted topic to instruct Kafka to delete a key during compaction?
A. An empty string
B. The literal string DELETE
C. A null value (tombstone)
D. The value -1
Explanation: Producing a record with the same key and a null value writes a tombstone. Compaction will eventually remove all earlier values for that key and, after delete.retention.ms passes, remove the tombstone itself.
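The end state of compaction plus tombstones can be simulated in a few lines. This is a simplified model: real compaction works segment by segment and keeps tombstones around for delete.retention.ms before removing them.

```python
def compact(log):
    # Simplified model of log compaction: keep only the latest value
    # per key, then drop keys whose latest value is a tombstone (None).
    latest = {}
    for key, value in log:
        latest[key] = value
    return {k: v for k, v in latest.items() if v is not None}

log = [("user-1", "alice"), ("user-2", "bob"),
       ("user-1", "alice-v2"), ("user-2", None)]   # None = tombstone
assert compact(log) == {"user-1": "alice-v2"}
```

After compaction, user-1 keeps only its newest value and user-2 disappears entirely, which is exactly the behavior a KTable changelog relies on.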
10. Which broker configuration enforces a minimum number of in-sync replicas before an acks=all produce can succeed?
A. min.replicas
B. min.insync.replicas
C. replica.min.in.sync
D. isr.min.size
Explanation: min.insync.replicas is the broker- or topic-level setting paired with acks=all. If the partition's ISR drops below this number, producers using acks=all receive NotEnoughReplicasException instead of writing under-replicated data.

About the EX482 Exam

Performance-based certification for event-driven application developers. EX482 validates hands-on skills in Apache Kafka, Red Hat AMQ Streams on OpenShift, Quarkus reactive messaging, Kafka Streams API, Apicurio schema registry, Debezium CDC, error handling and exactly-once semantics, securing Kafka with TLS/SASL/OAuth, and monitoring with JMX and Prometheus.

Assessment

Single performance-based hands-on section on a live OpenShift cluster running AMQ Streams

Time Limit

~3 hours

Passing Score

210/300 (70%)

Exam Fee

$400 USD (Red Hat)

EX482 Exam Content Outline

18%

Apache Kafka Core

Topics, partitions, replication factor, brokers, producer and consumer APIs, consumer groups, offsets, log compaction, retention policies

16%

Quarkus Reactive Messaging with Kafka

SmallRye Reactive Messaging, @Channel, @Incoming, @Outgoing, @Broadcast, smallrye-kafka connector, mp.messaging.* configuration
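A SmallRye Reactive Messaging configuration sketch for one outgoing and one incoming channel, using the mp.messaging.* pattern this section covers (channel and topic names are hypothetical):

```properties
# application.properties -- "orders-out"/"orders-in" are hypothetical channel names
mp.messaging.outgoing.orders-out.connector=smallrye-kafka
mp.messaging.outgoing.orders-out.topic=orders
mp.messaging.incoming.orders-in.connector=smallrye-kafka
mp.messaging.incoming.orders-in.topic=orders
mp.messaging.incoming.orders-in.auto.offset.reset=earliest
```

These channels would then be bound to methods annotated with @Outgoing("orders-out") and @Incoming("orders-in").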

12%

AMQ Streams on OpenShift

Strimzi/AMQ Streams Operator, Kafka, KafkaTopic, KafkaUser, KafkaConnect, KafkaConnector, KafkaMirrorMaker2, KafkaBridge CRDs
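A minimal KafkaTopic CRD of the kind this section covers might look like the following sketch (topic and cluster names are illustrative):

```yaml
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaTopic
metadata:
  name: orders                        # hypothetical topic name
  labels:
    strimzi.io/cluster: my-cluster    # must match the Kafka CR's name
spec:
  partitions: 4
  replicas: 3
  config:
    retention.ms: 604800000           # 7 days
```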

12%

Kafka Streams API

KStream and KTable, joins, windowing, state stores, topology, groupByKey, aggregate, windowedBy, materialized views

10%

Error Handling and Delivery Semantics

Idempotent producers, acks=all, transactional.id, isolation.level, exactly-once semantics, DLQ patterns, retry topics
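A configuration sketch pairing the producer and consumer sides of exactly-once delivery (the transactional.id value is hypothetical and must be stable per producer instance):

```properties
# Producer side (transactional writes)
transactional.id=orders-processor-1   # hypothetical; stable identity across restarts
enable.idempotence=true
acks=all

# Consumer side (separate client configuration)
isolation.level=read_committed        # skip records from aborted transactions
```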

10%

Securing Kafka

TLS encryption, mTLS authentication, SASL/SCRAM, OAuth via Keycloak, KafkaUser ACLs, KafkaListener configuration
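A sketch of two secured listeners inside a Strimzi/AMQ Streams Kafka CR, assuming the spec.kafka.listeners layout (listener names and ports are illustrative):

```yaml
# Fragment of a Kafka CR: spec.kafka.listeners
listeners:
  - name: mtls
    port: 9093
    type: internal
    tls: true
    authentication:
      type: tls                 # mTLS client certificates
  - name: scram
    port: 9094
    type: internal
    tls: true
    authentication:
      type: scram-sha-512       # SASL/SCRAM username + password
```

A KafkaUser CR with matching authentication type and ACLs would then grant a client access through the corresponding listener.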

8%

Schema Management with Apicurio Registry

Avro, JSON Schema, Protobuf serdes, schema evolution, compatibility levels, Apicurio Avro serializer/deserializer

7%

Debezium Change Data Capture

Debezium connectors for PostgreSQL and MySQL, KafkaConnect deployment, snapshot modes, CDC event structure

4%

Monitoring and Operations

JMX metrics, Prometheus integration, Kafka Exporter, lag monitoring, Cruise Control rebalancing

3%

MirrorMaker 2 and KafkaBridge

Cluster mirroring with MirrorMaker 2, KafkaBridge HTTP gateway for non-JVM clients

How to Pass the EX482 Exam

What You Need to Know

  • Passing score: 210/300 (70%)
  • Assessment: Single performance-based hands-on section on a live OpenShift cluster running AMQ Streams
  • Time limit: ~3 hours
  • Exam fee: $400 USD

Keys to Passing

  • Complete 500+ practice questions
  • Score 80%+ consistently before scheduling
  • Focus on highest-weighted sections
  • Use our AI tutor for tough concepts

EX482 Study Tips from Top Performers

1. Stand up a real OpenShift cluster with the AMQ Streams Operator — CodeReady Containers or Developer Sandbox both work; you need to be able to reproduce the CRD workflow end-to-end
2. Drill the Kafka mental model: a topic is a partitioned, replicated log; partitions guarantee ordering only within themselves; consumer groups split partitions across consumers
3. Memorize the Kafka core configs: acks=all, enable.idempotence=true, transactional.id, isolation.level=read_committed, auto.offset.reset, enable.auto.commit
4. Practice authoring every AMQ Streams CRD: Kafka, KafkaTopic, KafkaUser, KafkaConnect, KafkaConnector, KafkaMirrorMaker2, KafkaBridge — and read the spec docs
5. Get Quarkus reactive messaging into your fingers: mp.messaging.outgoing.<channel>.connector=smallrye-kafka, @Outgoing, @Incoming, @Channel, @Broadcast
6. Build at least one Kafka Streams topology with a windowed aggregation and a KStream-KTable join — use groupByKey, aggregate, windowedBy(TimeWindows.of(...))
7. Wire Apicurio Registry serdes (io.apicurio.registry.serde.avro.AvroKafkaSerializer / Deserializer) and practice schema evolution under BACKWARD compatibility
8. Deploy Debezium for Postgres and MySQL through KafkaConnect — understand snapshot.mode, before/after event payloads, and the DDL change event format
9. Implement an exactly-once pipeline: idempotent producer + transactional.id on the producer + isolation.level=read_committed on the consumer
10. Configure all three listener authentication modes on a single Kafka CR: TLS mTLS, SASL/SCRAM-SHA-512, and OAuth — write a KafkaUser with ACLs to test each
11. Practice DLQ patterns with smallrye-kafka: failure-strategy=dead-letter-queue with dead-letter-queue.topic=<dlq-topic> on @Incoming channels
12. Run a full 3-hour timed mock lab in the last two weeks — speed comes only from repetition, and EX482 punishes time spent debugging YAML typos
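The DLQ pattern from the tips above can be sketched as a smallrye-kafka channel configuration (channel and topic names are hypothetical):

```properties
# application.properties -- "orders-in" is a hypothetical channel name
mp.messaging.incoming.orders-in.connector=smallrye-kafka
mp.messaging.incoming.orders-in.topic=orders
mp.messaging.incoming.orders-in.failure-strategy=dead-letter-queue
mp.messaging.incoming.orders-in.dead-letter-queue.topic=orders-dlq
```

With this strategy, records whose processing fails are published to the DLQ topic instead of blocking or being silently dropped.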

Frequently Asked Questions

What is the EX482 pass rate?

Red Hat does not officially publish pass rates. Industry estimates suggest approximately 55-65% of candidates pass on their first attempt. The passing score is 210/300 (70%). Most candidates need 100-150 hours of focused practice with Kafka, AMQ Streams on OpenShift, and Quarkus reactive messaging before they reliably hit the threshold.

What versions does EX482 cover?

EX482 currently aligns to Red Hat AMQ Streams 2.x (Apache Kafka 3.x), Quarkus, Apicurio Registry 2.x, and OpenShift 4.x. Always verify the active exam objectives on the official Red Hat exam page before scheduling. Practice on a real OpenShift cluster (CodeReady Containers or Developer Sandbox) so CRDs, Operator versions, and command flags match what you'll see.

How is EX482 different from EX378?

EX378 (Cloud-Native Developer with Quarkus) focuses on Quarkus REST services, Kubernetes-native packaging, and reactive programming basics. EX482 builds on Quarkus and adds deep Kafka work: AMQ Streams Operator CRDs, Kafka Streams topologies, Apicurio Registry serdes, Debezium CDC, mTLS/SASL/OAuth listeners, and exactly-once delivery with transactional.id. EX378 is recommended before EX482.

What are the EX482 prerequisites?

There is no enforced prerequisite — anyone can register. Red Hat strongly recommends EX378 (Cloud-Native Developer with Quarkus) plus the AD482 Developing Event-Driven Applications training or equivalent experience. Solid Java skills, OpenShift familiarity, and microservices experience are essential — without those, plan extra ramp time.

Does EX482 expire?

Yes — EX482 is valid for 3 years from the date you pass. You can recertify by passing the current version of EX482 again, passing a higher-level Red Hat exam, or maintaining other Red Hat credentials that count toward an architect track. Red Hat sends renewal notifications before expiration.

How long should I study for EX482?

Plan for 100-150 hours of hands-on study over 8-12 weeks if you already have Quarkus and OpenShift experience. If Kafka is new to you, double that. Build a real OpenShift cluster with the AMQ Streams Operator, write Quarkus apps with @Incoming/@Outgoing channels, build a Kafka Streams topology with joins and windowing, and run timed mock labs to develop exam-pace muscle memory.

What jobs can I get with EX482?

EX482 qualifies you for: Senior Streaming Platform Engineer ($140-190K), Event-Driven Architect ($150-210K), Kafka Site Reliability Engineer ($140-200K), Real-Time Data Engineer ($130-180K), and Microservices Engineer ($130-170K). Demand is strongest in financial services, e-commerce, IoT, telecom, and any organization running event-driven systems on Kafka.