
100+ Free GCP Database Engineer Practice Questions

Pass your Google Cloud Professional Cloud Database Engineer exam on the first try — instant access, no signup required.

✓ No registration ✓ No credit card ✓ No hidden fees ✓ Start practicing immediately

~55-65% Pass Rate · 100+ Questions · 100% Free

Key Facts: GCP Database Engineer Exam

  • Est. Pass Rate: ~55-65% (industry estimate)
  • Scoring: Pass/fail, scaled
  • Recommended Study Time: 90-120 hours
  • Exam Duration: 120 minutes (Google Cloud)
  • Exam Fee: $200 (Google Cloud)
  • Certification Valid: 2 years (Google Cloud)

The GCP PCDE exam has approximately 50-60 questions in 120 minutes. The estimated pass rate is 55-65%. The exam covers database design, Cloud SQL/Spanner/Bigtable/AlloyDB/Firestore, migration with DMS, performance tuning, HA/DR, and security.

Sample GCP Database Engineer Practice Questions

Try these sample questions to test your GCP Database Engineer exam readiness. Each question includes a detailed explanation. Start the interactive quiz above for the full 100+ question experience with AI tutoring.

1. A company needs a fully managed relational database on Google Cloud that supports automatic failover with a regional high-availability configuration. Which service should they use?
A. Cloud SQL with HA configuration
B. Cloud Spanner
C. AlloyDB
D. Bare Metal Solution for Oracle
Explanation: Cloud SQL with high-availability (HA) configuration provides a fully managed relational database with automatic failover using a primary and standby instance in different zones within the same region. It supports MySQL, PostgreSQL, and SQL Server. Cloud Spanner is globally distributed and more expensive, AlloyDB is PostgreSQL-compatible but a different service, and Bare Metal Solution is for lift-and-shift Oracle workloads.
2. Which Google Cloud database service provides horizontal scalability with strong global consistency for relational workloads?
A. Cloud SQL
B. AlloyDB
C. Cloud Spanner
D. Firestore
Explanation: Cloud Spanner is a globally distributed, horizontally scalable relational database that provides strong external consistency (strict serializability) across regions. It supports SQL queries, schemas, and ACID transactions at global scale. Cloud SQL is regional and vertically scaled, AlloyDB is regional, and Firestore is a NoSQL document database. Spanner is ideal for workloads requiring both global distribution and relational consistency.
3. What is the maximum storage capacity for a single Cloud SQL instance?
A. 10 TB
B. 30 TB
C. 64 TB
D. Unlimited
Explanation: Cloud SQL supports up to 64 TB of storage per instance. Storage can be configured to auto-increase when the available space drops below a threshold. The storage type can be SSD (recommended for most workloads) or HDD (for lower-cost, lower-performance needs). Storage size affects I/O performance, with larger storage providing higher sustained IOPS and throughput.
4. A data analyst needs to run complex analytical queries on an operational PostgreSQL database without impacting production performance. Which feature should they use?
A. Cloud SQL read replicas
B. AlloyDB Analytics with columnar engine
C. Cloud Spanner change streams
D. BigQuery federated queries
Explanation: AlloyDB's Analytics feature includes a built-in columnar engine that accelerates analytical queries up to 100x without requiring data export. The columnar engine automatically maintains a column-store cache of frequently accessed data alongside the row-store, enabling mixed OLTP/OLAP workloads on the same database. Read replicas reduce load but don't optimize analytical query patterns. BigQuery federated queries work but add data movement latency.
5. Which NoSQL database service on Google Cloud is best suited for storing and querying time-series data at massive scale, such as IoT sensor readings?
A. Firestore
B. Cloud Bigtable
C. Memorystore
D. Cloud SQL
Explanation: Cloud Bigtable is a wide-column NoSQL database optimized for high-throughput, low-latency workloads including time-series data, IoT sensor readings, and analytics. It can handle millions of rows per second with single-digit millisecond latency. Its row key design supports efficient range scans for time-series queries. Firestore is better for mobile/web applications, Memorystore is an in-memory cache, and Cloud SQL is relational.
6. What is the primary difference between Firestore in Native mode and Firestore in Datastore mode?
A. Native mode supports SQL queries, Datastore mode does not
B. Native mode provides real-time listeners and offline support, Datastore mode offers Datastore API compatibility
C. Datastore mode is faster than Native mode
D. Native mode is only available in a single region
Explanation: Firestore in Native mode provides real-time listeners for live data synchronization, offline support for mobile/web clients, and a document-collection data model. Datastore mode provides backward compatibility with the Datastore API while running on the Firestore infrastructure. Datastore mode does not support real-time listeners or offline capabilities. The choice between modes is made at project creation and cannot be changed later.
7. A company is migrating a 5 TB Oracle database to Google Cloud. They want minimal code changes and need PostgreSQL compatibility. Which target database should they choose?
A. Cloud SQL for MySQL
B. Cloud Spanner
C. AlloyDB for PostgreSQL
D. Cloud SQL for SQL Server
Explanation: AlloyDB for PostgreSQL is a fully managed, PostgreSQL-compatible database that offers high performance for both transactional and analytical workloads. For Oracle migrations, AlloyDB provides strong PostgreSQL compatibility with performance characteristics closer to Oracle. It supports up to 128 vCPUs, 864 GB RAM, and 64 TB storage. The Database Migration Service can automate the migration from Oracle to AlloyDB with schema and data conversion.
8. Which Google Cloud service provides a managed, serverless data migration pipeline for database migrations?
A. Dataflow
B. Database Migration Service (DMS)
C. Transfer Appliance
D. Cloud Data Fusion
Explanation: Database Migration Service (DMS) is a fully managed, serverless service that simplifies database migrations to Google Cloud. It supports continuous replication (CDC) for minimal-downtime migrations, automated schema conversion, and supports migrations to Cloud SQL, AlloyDB, and Cloud Spanner. Dataflow is for data processing pipelines, Transfer Appliance is for physical data transfer, and Cloud Data Fusion is for ETL/ELT pipelines.
9. You are experiencing slow query performance on a Cloud SQL PostgreSQL instance. Which tool should you use first to identify problematic queries?
A. Cloud Monitoring CPU metrics
B. Query Insights dashboard
C. EXPLAIN ANALYZE on individual queries
D. Cloud Trace
Explanation: Query Insights is a built-in Cloud SQL feature that provides a dashboard for identifying and analyzing problematic queries. It shows query execution times, wait events, and resource consumption without requiring manual query analysis. It can identify top queries by execution time, lock waits, and rows examined. EXPLAIN ANALYZE is useful for individual query optimization but requires knowing which queries to investigate. Cloud Monitoring shows system-level metrics.
10. What is the recommended approach for designing a Bigtable row key for a time-series workload to avoid hotspotting?
A. Use a monotonically increasing timestamp as the row key
B. Use a reversed timestamp as the row key prefix
C. Use a hash of the timestamp as the row key prefix
D. Prefix the row key with a unique identifier (e.g., device ID) followed by a reversed timestamp
Explanation: The recommended row key design for time-series data in Bigtable combines a unique identifier prefix (like device ID or sensor ID) with a reversed timestamp. This distributes writes across multiple tablet servers (avoiding hotspots from sequential timestamps) while enabling efficient range scans for a specific device's time range. Monotonically increasing timestamps cause all writes to go to a single tablet, creating a hotspot.
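The row-key pattern from question 10 can be sketched in a few lines of Python. The `make_row_key` helper, the `#` separator, and the 19-digit zero padding are illustrative choices, not a Bigtable API; the reversed timestamp is computed against the maximum 64-bit signed value so newer readings sort first.

```python
# Sketch of a hotspot-avoiding Bigtable row key for time-series data:
# a device-ID prefix spreads writes across tablets, and a reversed
# timestamp makes the newest readings sort first within each device.
MAX_MICROS = 2**63 - 1  # sentinel for reversing microsecond timestamps

def make_row_key(device_id: str, ts_micros: int) -> bytes:
    reversed_ts = MAX_MICROS - ts_micros
    # Zero-pad to a fixed width so lexicographic (byte) order matches
    # numeric order, since Bigtable sorts row keys as raw bytes.
    return f"{device_id}#{reversed_ts:019d}".encode()

older = make_row_key("sensor-42", 1_700_000_000_000_000)
newer = make_row_key("sensor-42", 1_700_000_001_000_000)
assert newer < older  # newer readings scan first for this device
```

A range scan over the prefix `sensor-42#` then returns that device's readings newest-first, while writes from many devices land on different tablets.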

About the GCP Database Engineer Exam

The Google Cloud Professional Cloud Database Engineer certification validates the ability to design, manage, and troubleshoot databases on Google Cloud including Cloud SQL, Cloud Spanner, Bigtable, AlloyDB, and Firestore.

  • Questions: approximately 50-60 scored questions
  • Time Limit: 120 minutes
  • Passing Score: Scaled (pass/fail)
  • Exam Fee: $200 (Google Cloud / Kryterion)

GCP Database Engineer Exam Content Outline

  • Designing Scalable Databases (30%): Database selection, schema design, HA/DR configuration, replication, and capacity planning
  • Managing Multi-Database Solutions (25%): Cross-database integration, connection management, and multi-database architectures
  • Migrating Data Solutions (20%): DMS, continuous replication, schema conversion, validation, and migration strategies
  • Deploying Cloud Databases (15%): Cloud SQL, AlloyDB, Spanner, Bigtable, Firestore deployment and configuration
  • Database Security (10%): IAM authentication, CMEK encryption, audit logging, SSL enforcement, and access control

How to Pass the GCP Database Engineer Exam

What You Need to Know

  • Passing score: Scaled (pass/fail)
  • Exam length: approximately 50-60 questions
  • Time limit: 120 minutes
  • Exam fee: $200

Keys to Passing

  • Complete 500+ practice questions
  • Score 80%+ consistently before scheduling
  • Focus on highest-weighted sections
  • Use our AI tutor for tough concepts

GCP Database Engineer Study Tips from Top Performers

1. Know when to use each database: Cloud SQL for relational, Spanner for global consistency, Bigtable for high-throughput, Firestore for mobile/web, AlloyDB for HTAP
2. Master Cloud SQL HA, read replicas, PITR, and cross-region DR configurations
3. Understand Bigtable row key design patterns and how to avoid hotspots
4. Practice DMS migration workflows: initial load, continuous replication, and cutover
5. Study database security: IAM authentication, CMEK, SSL enforcement, and audit logging
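Tip 2 can be sketched as a gcloud provisioning fragment. Assuming the current gcloud CLI, the commands below create a regional (HA) Cloud SQL PostgreSQL instance with automated backups and point-in-time recovery, then add a cross-region read replica for DR; the instance names, tier, and regions are placeholders, so verify the flags against the gcloud reference before running.

```shell
# Sketch only: placeholder names, tier, and regions.

# Regional HA primary with automated backups and PITR enabled
gcloud sql instances create prod-pg \
  --database-version=POSTGRES_15 \
  --tier=db-custom-4-16384 \
  --region=us-central1 \
  --availability-type=REGIONAL \
  --storage-auto-increase \
  --backup-start-time=23:00 \
  --enable-point-in-time-recovery

# Cross-region read replica for disaster recovery
gcloud sql instances create prod-pg-dr \
  --master-instance-name=prod-pg \
  --region=europe-west1
```

The REGIONAL availability type gives the in-region standby and automatic failover tested in question 1, while the cross-region replica covers regional outages.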

Frequently Asked Questions

How hard is the GCP Database Engineer exam?

It is considered challenging with a 55-65% estimated pass rate. The exam tests deep knowledge of multiple Google Cloud database services and when to use each one.

Which databases are covered on the exam?

Cloud SQL (MySQL, PostgreSQL, SQL Server), Cloud Spanner, Cloud Bigtable, Firestore, AlloyDB, Memorystore, and BigQuery. Know when to use each and their trade-offs.

How long should I study?

Most candidates study 8-12 weeks, investing 90-120 hours. Focus on hands-on experience with Cloud SQL, Spanner, and Bigtable, plus DMS migration workflows.

Is database migration heavily tested?

Yes, DMS (Database Migration Service), schema conversion, continuous replication, and migration validation are significant topics. Practice both homogeneous and heterogeneous migrations.