MICROSOFT CERTIFICATION

DP-420 Azure Cosmos DB Developer Specialty Practice Exam

Exam Number: 3116 | Last updated 16-Apr-26 | 787+ questions across 4 vendor-aligned objectives

The DP-420 Azure Cosmos DB Developer Specialty certification validates the skills of developers who design, implement, and monitor data solutions using Azure Cosmos DB. The exam measures your ability to work with the Azure Cosmos DB APIs (NoSQL, MongoDB, Cassandra, Gremlin, and Table), the change feed, partition key design, and the SDKs, demonstrating both the conceptual understanding and the practical implementation skills required in today’s enterprise environments.

The heaviest exam domains include Design and Implement Data Models (35–40%), Maintain an Azure Cosmos DB Solution (25–30%), and Optimize an Azure Cosmos DB Solution (15–20%). These areas collectively represent the majority of exam content and require focused preparation across their respective subtopics.

Additional domains tested include Design and Implement Data Distribution (5–10%), and Integrate an Azure Cosmos DB Solution (5–10%). Together, these areas round out the full exam blueprint and ensure candidates possess well-rounded expertise across the certification scope.

Data modeling dominates at 35–40% of the exam weight. Deep-dive into partition key selection strategies, hierarchical partition keys, and Request Unit (RU) cost optimization, and know the trade-offs between the Cosmos DB API types.

Every answer links to the source. Each explanation below includes a hyperlink to the exact Microsoft documentation page the question was derived from. PowerKram is the only practice platform with source-verified explanations. Learn about our methodology →

106 practice exam users · 94.7% satisfied users · 90.6% passed the exam · 4.9/5 quality rating

Test your DP-420 Azure Cosmos DB Developer Specialty knowledge

10 of 787+ questions

Question #1 - Design and Implement Data Models

An e-commerce platform stores products in Cosmos DB. Products are queried by category, with 500-50,000 products per category.

Which partition key provides the best query performance?

A) The categoryId field grouping products by category for efficient single-partition queries
B) A single fixed partition key value shared by every product document in the container
C) The product creation timestamp distributing documents chronologically across partitions
D) A randomly generated GUID assigned to each product ensuring maximum partition distribution

 

Correct answer: A – Explanation:
categoryId groups related products together for efficient single-partition category queries while distributing data across enough partitions to avoid hotspots. A single key creates one hot partition containing all data. Random GUIDs distribute data maximally but force category queries to scan cross-partition. Timestamp-based keys create hot partitions for recent data and distribute queries for older data inefficiently. Source: Check Source
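The hot-partition argument above can be made concrete with a toy model (this is an illustration, not the Cosmos DB service itself): logical partition key values are hashed onto a fixed number of physical partitions, so a single shared key concentrates every document on one partition while `categoryId` spreads them out. All names here are illustrative.

```python
# Toy model of logical-to-physical partition mapping via hashing.
import hashlib
from collections import Counter

PHYSICAL_PARTITIONS = 8  # assumed fixed partition count for the sketch

def physical_partition(partition_key: str) -> int:
    """Map a logical partition key to a physical partition by hashing."""
    digest = hashlib.md5(partition_key.encode()).hexdigest()
    return int(digest, 16) % PHYSICAL_PARTITIONS

def count_per_partition(partition_keys):
    """Count how many documents land on each physical partition."""
    return Counter(physical_partition(k) for k in partition_keys)

# 10,000 products spread over 100 categories vs. one shared key.
by_category = count_per_partition(f"category-{i % 100}" for i in range(10_000))
single_key = count_per_partition("all-products" for _ in range(10_000))

print(len(by_category))  # several physical partitions carry data
print(len(single_key))   # 1 — every document lands on one hot partition
```

Random GUID keys would spread writes even more evenly than `categoryId`, but every category query would then fan out across all partitions, which is exactly the RU cost the question is asking you to avoid.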

A social media app stores user posts in Cosmos DB. The most common query retrieves a user’s recent posts. Posts also appear on follower feeds.

Which data modeling approach minimizes RU consumption for these queries?

A) Store posts in a separate container requiring cross-container lookups for each user query
B) Normalize all entities into separate containers connected by foreign key reference fields
C) Store each post as an independent document with no structural relationship to users
D) Embed recent posts within the user document and use change feed to propagate to feeds

 

Correct answer: D – Explanation:
Embedding recent posts in the user document satisfies the most common query with a single-document read operation. Change feed efficiently propagates new posts to follower feed containers. Separate containers require cross-partition queries joining data. Full normalization is an anti-pattern in Cosmos DB that increases RU cost. Independent documents without relationships make user-specific queries expensive and cross-partition. Source: Check Source
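A minimal sketch of the embedded shape described above (field names such as `recentPosts` are assumptions for illustration, not a fixed schema): the user document carries a bounded array of its newest posts, so the common "user's recent posts" query is a single point read.

```python
# Illustrative user document with an embedded, capped array of recent posts.
user_doc = {
    "id": "user-123",
    "partitionKey": "user-123",
    "displayName": "Avery",
    "recentPosts": [  # newest first, bounded so the document stays small
        {"postId": "p-9", "text": "Hello", "ts": "2026-04-16T10:00:00Z"},
        {"postId": "p-8", "text": "World", "ts": "2026-04-15T09:00:00Z"},
    ],
}

MAX_EMBEDDED = 50  # assumed cap to keep the document well under size limits

def add_post(doc, post):
    """Prepend a new post and trim the embedded list to its cap."""
    doc["recentPosts"] = [post] + doc["recentPosts"][: MAX_EMBEDDED - 1]
    return doc

add_post(user_doc, {"postId": "p-10", "text": "New", "ts": "2026-04-17T08:00:00Z"})
print(user_doc["recentPosts"][0]["postId"])  # p-10
```

In the full design, writing this document would also surface on the change feed, which downstream feed containers consume to fan the post out to followers.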

A healthcare platform stores patient records that grow over time with lab results and prescriptions. Documents frequently exceed 2 MB.

How should the data model handle these growing documents?

A) Migrate all data to a relational database service to avoid document size limitations
B) Split into a reference pattern with a compact patient document linking to sub-documents
C) Continue embedding all data in a single document and increase provisioned throughput capacity
D) Compress all document content before writing to reduce the storage footprint per item

 

Correct answer: B – Explanation:
The reference pattern keeps a compact patient document with ID references to sub-documents for labs, visits, and prescriptions, avoiding the 2 MB item size limit while reducing per-query RU cost. Increasing throughput does not solve the document size limit. Relational migration loses Cosmos DB’s global distribution and latency benefits. Compression adds application complexity and does not address the fundamental item size constraint. Source: Check Source
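The reference pattern can be sketched as two item shapes (field names are illustrative): a compact patient document holding only the IDs of its sub-documents, and the sub-documents stored as separate items that share the patient's partition key so they remain co-located.

```python
# Compact patient document: references instead of embedded lab data.
patient = {
    "id": "patient-42",
    "partitionKey": "patient-42",
    "name": "Jordan Doe",
    "labResultIds": ["lab-1", "lab-2"],   # references, not payloads
    "prescriptionIds": ["rx-7"],
}

# A sub-document stored as its own item, well under the 2 MB limit,
# on the same logical partition as the patient it belongs to.
lab_result = {
    "id": "lab-2",
    "partitionKey": "patient-42",
    "type": "labResult",
    "panel": "CBC",
}

# Same-partition placement keeps "patient plus selected labs"
# retrievable without a cross-partition query.
print(lab_result["partitionKey"] == patient["partitionKey"])  # True
```

Each lab or prescription can now grow independently, and reading the patient summary no longer pays the RU cost of the full history.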

A real-time gaming platform needs player leaderboard reads with single-digit millisecond latency across 5 global regions.

Which Cosmos DB configuration meets these requirements?

A) Azure SQL Database configured with read replicas deployed across the five global regions
B) A single-region deployment with geo-redundant backup enabled for disaster recovery purposes
C) A single-region deployment with Azure CDN edge caching placed in front for global access
D) A multi-region account with multi-region reads enabled serving data from the closest replica

 

Correct answer: D – Explanation:
Multi-region Cosmos DB with reads enabled in all regions serves data from the nearest replica for single-digit ms latency globally. CDN caches static content but cannot cache dynamic database queries with real-time consistency. Geo-redundant backup is for disaster recovery, not read performance optimization. Azure SQL read replicas do not provide the guaranteed single-digit ms latency that Cosmos DB offers natively. Source: Check Source
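With multi-region reads enabled on the account, the SDK can be told which replicas to prefer. The fragment below is a configuration sketch only (it needs a real account; the endpoint, key, and region names are placeholders) using the `azure-cosmos` Python SDK's `preferred_locations` option.

```python
# Configuration fragment — placeholders, not a runnable example.
from azure.cosmos import CosmosClient

client = CosmosClient(
    url="https://<account>.documents.azure.com:443/",
    credential="<account-key>",
    # Reads are served from the first available region in this list,
    # so clients in each geography list their nearest replica first.
    preferred_locations=["West Europe", "East US", "Southeast Asia"],
)
```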

Cost analysis reveals 70% of RU consumption comes from queries scanning entire partitions. The team needs to reduce costs.

Which optimization should be implemented first?

A) Increase the provisioned throughput capacity to accommodate the high RU consumption rate
B) Enable time-to-live on old documents to reduce the total volume of stored data
C) Add composite indexes aligned with the query filter and sort column patterns being used
D) Switch the container to serverless billing mode regardless of the workload consistency

 

Correct answer: C – Explanation:
Composite indexes enable efficient query execution by matching the filter and sort patterns, directly reducing RU consumption without adding capacity. Increasing throughput accommodates cost rather than reducing it. Serverless billing mode may be more expensive for consistent workloads than provisioned throughput. TTL reduces storage volume but does not address query efficiency on the remaining documents. Source: Check Source
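As a sketch, this is the indexing-policy shape Cosmos DB accepts for a composite index, here aligned with a hypothetical "filter on category, sort by price descending" query (the `/category` and `/price` paths are assumptions for illustration):

```python
# Indexing policy with one composite index covering filter + sort order.
indexing_policy = {
    "indexingMode": "consistent",
    "compositeIndexes": [
        [
            {"path": "/category", "order": "ascending"},
            {"path": "/price", "order": "descending"},
        ]
    ],
}

# A query shaped like the index can use it instead of scanning:
#   SELECT * FROM c WHERE c.category = @cat ORDER BY c.price DESC
print(len(indexing_policy["compositeIndexes"][0]))  # 2 paths in the index
```

The order of paths and their sort directions must match the query's filter and ORDER BY pattern for the composite index to apply.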

A financial app writes 10,000 transactions per second but experiences throttling (429 errors) during peak trading hours.

Which approach reduces throttling?

A) Enable autoscale throughput and verify that the partition key distributes writes evenly
B) Retry all throttled requests immediately without any exponential backoff delay strategy
C) Lower the provisioned throughput to reduce costs during the high-write peak period
D) Switch to strong consistency level to ensure all replicas acknowledge each write operation

 

Correct answer: A – Explanation:
Autoscale dynamically adjusts provisioned throughput based on demand, and even partition key distribution prevents hot-partition throttling where one partition exhausts its share. Immediate retry without backoff creates retry storms that worsen throttling. Lowering throughput during peaks directly increases throttling frequency. Strong consistency adds cross-replica coordination latency and increases RU cost per write, worsening the problem. Source: Check Source
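The "no immediate retry" point can be illustrated with a minimal backoff sketch. `do_request` is a stand-in for an SDK call, not a real API; the Cosmos DB SDKs apply a similar retry policy automatically and honor the server's retry-after hint.

```python
# Minimal sketch of retry-with-exponential-backoff for 429 responses.
import random

def with_backoff(do_request, max_retries=5, base_delay=0.1):
    """Retry throttled calls, doubling the wait (with jitter) each time."""
    delays = []
    for attempt in range(max_retries + 1):
        status = do_request()
        if status != 429:
            return status, delays
        # Exponential backoff with jitter instead of an immediate retry,
        # which would only create a retry storm.
        delay = base_delay * (2 ** attempt) * (1 + random.random())
        delays.append(delay)
        # time.sleep(delay) would go here in real code
    return 429, delays

# Simulated service: throttles twice, then succeeds.
responses = iter([429, 429, 200])
status, delays = with_backoff(lambda: next(responses))
print(status)       # 200
print(len(delays))  # 2 backoff waits before success
```

Backoff smooths the client side, but if one partition key value absorbs most writes, that partition still throttles regardless of total provisioned throughput, which is why the answer pairs autoscale with even key distribution.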

An order processing system needs downstream inventory updates and notification emails triggered whenever a new order is written to Cosmos DB.

Which Cosmos DB feature enables this event-driven processing?

A) Change feed processing with Azure Functions automatically reacting to insert and update events
B) Scheduled SQL queries polling the container at regular intervals to detect new order documents
C) Stored procedures executing custom JavaScript logic within the scope of a single partition
D) Manual application-level polling checking for new documents using a timestamp-based filter

 

Correct answer: A – Explanation:
Change feed automatically captures insert and update operations as a persistent ordered log, and Azure Functions process these events to trigger downstream systems in near-real-time. Stored procedures execute within transactions but cannot trigger external systems asynchronously. Scheduled polling introduces latency between order creation and downstream processing. Manual polling adds application complexity and resource consumption compared to the push-based change feed model. Source: Check Source
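A toy model of the consumption pattern described above (this mimics the change feed's behavior, it is not the SDK): an ordered log of insert/update events is read from a continuation point, and a handler stands in for the Azure Function.

```python
# Toy change-feed model: ordered log + continuation token + handler.
events = [
    {"op": "insert", "id": "order-1"},
    {"op": "insert", "id": "order-2"},
    {"op": "update", "id": "order-1"},
]

def process_changes(log, continuation, handler):
    """Deliver every event after `continuation`; return the new token."""
    for index in range(continuation, len(log)):
        handler(log[index])
    return len(log)

handled = []
token = process_changes(events, 0, handled.append)
token = process_changes(events, token, handled.append)  # nothing new yet

print(token)         # 3
print(len(handled))  # 3 — each event delivered exactly once
```

The continuation token is what lets processing resume where it left off, so downstream inventory updates and emails are triggered once per change rather than by repeated polling.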

Compliance requires customer-managed encryption keys and point-in-time restore capability for all Cosmos DB data.

Which configurations should be enabled?

A) Default Microsoft-managed encryption with periodic backup providing daily restore granularity
B) Customer-managed keys via Azure Key Vault with continuous backup enabling point-in-time restore
C) Third-party encryption library integration with manual backup scripts executed on a schedule
D) Client-side application encryption with no platform backup configuration or key management

 

Correct answer: B – Explanation:
Customer-managed keys via Key Vault provide customer-controlled encryption, and continuous backup enables point-in-time restore at any second within the retention window. Microsoft-managed keys do not provide customer key control for compliance. Client-side encryption without platform backup lacks integrated key management and restore capability. Third-party solutions add unnecessary complexity when native platform features meet the requirements. Source: Check Source
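Both settings are account-level options chosen at creation time. A configuration fragment (account, resource group, and Key Vault names are placeholders; an Azure subscription is required):

```shell
# Create a Cosmos DB account with a customer-managed key from Key Vault
# and continuous backup for point-in-time restore.
az cosmosdb create \
  --name mycosmosaccount \
  --resource-group my-rg \
  --key-uri "https://my-keyvault.vault.azure.net/keys/my-cmk" \
  --backup-policy-type Continuous
```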

A DevOps team needs to monitor Cosmos DB performance, identify hot partitions, and alert when RU consumption exceeds 80% of provisioned throughput.

Which monitoring solution should be configured?

A) Application error logs capturing only failed request details from the client SDK library
B) Periodic manual checks of partition metrics through the Azure portal Cosmos DB data explorer
C) Azure Monitor with Cosmos DB Insights workbook and configurable metric alert threshold rules
D) A third-party APM tool deployed independently without native Azure Cosmos DB integration

 

Correct answer: C – Explanation:
Azure Monitor with Cosmos DB Insights provides partition-level heat maps and RU consumption metrics, and metric alerts trigger on configurable thresholds. Application error logs capture failures but miss performance trends on successful requests. Manual portal checks are not proactive and miss issues outside of observation periods. Third-party tools without native integration lack the partition-level insights that Cosmos DB Insights provides natively. Source: Check Source
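The 80% alert can be expressed as an Azure Monitor metric alert on the account's `NormalizedRUConsumption` metric. A configuration fragment (subscription ID, resource group, and account name are placeholders):

```shell
# Alert when normalized RU consumption exceeds 80% of provisioned throughput.
az monitor metrics alert create \
  --name cosmos-ru-80 \
  --resource-group my-rg \
  --scopes "/subscriptions/<sub-id>/resourceGroups/my-rg/providers/Microsoft.DocumentDB/databaseAccounts/mycosmosaccount" \
  --condition "avg NormalizedRUConsumption > 80" \
  --window-size 5m \
  --evaluation-frequency 1m
```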

Get 787+ more questions with source-linked explanations

Every answer traces to the exact Microsoft documentation page — so you learn from the source, not just memorize answers.

Exam mode & learn mode · Score by objective · Updated 16-Apr-26

Learn more...

What the DP-420 Azure Cosmos DB Developer Specialty exam measures

Each domain evaluates real-world job skills through scenario-based problem solving:

  • Design and Implement Data Models (35–40%)
  • Design and Implement Data Distribution (5–10%)
  • Integrate an Azure Cosmos DB Solution (5–10%)
  • Optimize an Azure Cosmos DB Solution (15–20%)
  • Maintain an Azure Cosmos DB Solution (25–30%)

  • Review the official exam guide to understand every objective and domain weight before you begin studying
  • Complete the relevant Microsoft Learn learning path to build a structured foundation across all exam topics
  • Get hands-on practice in an Azure free-tier sandbox or trial environment to reinforce what you have studied with real configurations
  • Apply your knowledge through real-world project experience — whether at work, in volunteer roles, or contributing to open-source initiatives
  • Master one objective at a time, starting with the highest-weighted domain to maximize your score potential early
  • Use PowerKram learn mode to study by individual objective and review detailed explanations for every question
  • Switch to PowerKram exam mode to simulate the real test experience with randomized questions and timed conditions

Earning this certification can open doors to several in-demand roles.

Microsoft provides comprehensive free training to prepare for the DP-420 Azure Cosmos DB Developer Specialty exam. Start with the official Microsoft Learn learning path for structured, self-paced modules covering every exam domain. Review the exam study guide for the complete skills outline and recent updates.

Related certifications to explore

Related reading from our Learning Hub