MICROSOFT CERTIFICATION

DP-900 Azure Data Fundamentals Practice Exam

Exam Number: 3119 | Last updated 16-Apr-26 | 790+ questions across 4 vendor-aligned objectives

The DP-900 Azure Data Fundamentals certification validates the skills of anyone seeking foundational knowledge of core data concepts and Azure data services. This exam measures your ability to work with Azure SQL Database, Azure Cosmos DB, Azure Data Lake Storage, Azure Synapse Analytics, and Power BI, demonstrating the conceptual understanding of data workloads and services required in today's enterprise environments.

The heaviest exam domains include Describe Core Data Concepts (25–30%), Describe an Analytics Workload on Azure (25–30%), and Identify Considerations for Relational Data on Azure (20–25%). These areas collectively represent the majority of exam content and require focused preparation across their respective subtopics.

Additional domains tested include Describe Considerations for Working with Non-Relational Data on Azure (15–20%). Together, these areas round out the full exam blueprint and ensure candidates possess well-rounded expertise across the certification scope.

Purely conceptual — no coding or hands-on configuration. Focus on differentiating relational vs. non-relational data models, understanding ETL/ELT patterns, and knowing when to recommend Azure Synapse Analytics vs. Azure Databricks.

Every answer links to the source. Each explanation below includes a hyperlink to the exact Microsoft documentation page the question was derived from. PowerKram is the only practice platform with source-verified explanations. Learn about our methodology →

870

practice exam users

93.5%

satisfied users

91.8%

passed the exam

4.8/5

quality rating

Test your DP-900 Azure Data Fundamentals knowledge

10 of 790+ questions

Question #1 - Describe Core Data Concepts

A bakery chain tracks daily sales across 30 locations. Each sale record includes store ID, date, product, quantity, and price. Management needs weekly summary reports.

Which type of data processing best describes this reporting requirement?

A) Real-time stream processing
B) Transactional processing
C) Edge computing
D) Batch processing

 

Correct answers: D – Explanation:
Batch processing collects data over a period and processes it in bulk for scheduled reports — exactly matching weekly summaries from daily records. Stream processing handles data in real time. Edge computing runs processing on devices near the data source. Transactional processing handles individual operations as they occur. Source: Check Source
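Although the exam itself is purely conceptual, the batch-vs-stream distinction is easy to see in a few lines of code. Below is a minimal Python sketch (the sale records and field names are hypothetical, not from any exam question): daily records accumulate first, then a scheduled job summarizes them per ISO week in bulk, rather than handling each sale as it arrives.

```python
from collections import defaultdict
from datetime import date

# Hypothetical daily sale records: (store_id, sold_on, product, quantity, price)
sales = [
    (1, date(2026, 4, 6), "croissant", 10, 2.50),
    (2, date(2026, 4, 7), "baguette", 5, 3.00),
    (1, date(2026, 4, 13), "croissant", 8, 2.50),
]

# Batch processing: accumulate a period's records, then process them in bulk.
weekly_revenue = defaultdict(float)
for store_id, sold_on, product, qty, price in sales:
    iso_year, iso_week, _ = sold_on.isocalendar()
    weekly_revenue[(iso_year, iso_week)] += qty * price

# One scheduled pass produces the weekly summary report.
for week, revenue in sorted(weekly_revenue.items()):
    print(week, round(revenue, 2))
```

A stream-processing version would instead update the totals inside the loop and emit results continuously; the batch version defers all output to the scheduled summarization step.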

A bakery chain tracks daily sales across 30 locations. Each record has store ID, date, product, quantity, and price. Management needs weekly reports.

Which type of data processing best describes this reporting requirement?

A) Edge computing running sales analysis directly on each store’s local point-of-sale terminal
B) Real-time stream processing analyzing each sale transaction as it occurs at the register
C) Batch processing collecting data over a period and processing it in bulk for scheduled reports
D) Transactional processing handling each individual sale as an isolated database operation

 

Correct answers: C – Explanation:
Batch processing collects data over a period and processes it in bulk for periodic reports — matching weekly summaries from daily accumulated records. Stream processing handles data immediately as it arrives in real time. Edge computing runs processing on local devices near the data source. Transactional processing manages individual atomic database operations as they happen. Source: Check Source

A hospital stores patient records with name, DOB, diagnosis, medications, and attending physician in a fixed format.

Which data model best describes this structured patient data?

A) A time-series database optimized for storing and querying sequential temporal data points
B) A graph database model representing entities as nodes connected by relationship edges
C) An unstructured document store holding free-form content without any predefined schema
D) A relational model organizing data into tables with defined rows, columns, and constraints

 

Correct answers: D – Explanation:
Fixed-format records with defined fields map naturally to a relational model with tables, typed columns, and referential integrity constraints. Graph models represent relationships between entities as traversable edges. Unstructured document stores hold free-form data without enforced schemas. Time-series databases optimize for temporal sequences, not general structured records. Source: Check Source

A logistics company needs to understand the difference between OLTP and OLAP to decide which Azure services to use.

Which statement correctly distinguishes OLTP from OLAP?

A) OLTP handles high-volume transactional operations while OLAP optimizes analytical queries on history
B) OLTP systems are optimized for running complex analytical queries across massive historical datasets
C) OLAP systems are designed to handle high volumes of individual transaction inserts and updates
D) OLTP and OLAP systems are identical in architecture and serve exactly the same workload patterns

 

Correct answers: A – Explanation:
OLTP systems process high volumes of individual transactions with fast reads and writes optimized for operational use. OLAP systems aggregate and analyze large historical datasets for business intelligence and reporting. They serve fundamentally different purposes with different data structures and query patterns. Source: Check Source
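The two workload styles can be illustrated with an in-memory SQLite database standing in for both (the `orders` table and its data are hypothetical): many small, individual inserts are the OLTP pattern, while a single aggregate query scanning accumulated history is the OLAP pattern.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE orders (order_id INTEGER PRIMARY KEY, region TEXT, amount REAL)")

# OLTP-style workload: a high volume of small individual writes as transactions occur.
for order_id, region, amount in [(1, "west", 20.0), (2, "east", 35.0), (3, "west", 10.0)]:
    con.execute("INSERT INTO orders VALUES (?, ?, ?)", (order_id, region, amount))

# OLAP-style workload: one aggregate query over the full history for reporting.
totals = con.execute(
    "SELECT region, SUM(amount) FROM orders GROUP BY region ORDER BY region"
).fetchall()
print(totals)
```

In practice the two workloads also run against differently structured stores (normalized operational tables vs. a warehouse schema); this sketch only contrasts the query patterns.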

A startup needs a fully managed relational database with automatic backups, patching, and independent compute/storage scaling.

Which Azure service should they choose?

A) SQL Server installed on an Azure Virtual Machine requiring manual OS and database management
B) Azure Table Storage providing a simple key-value NoSQL store for semi-structured data
C) Azure Cosmos DB offering a globally distributed multi-model NoSQL database platform
D) Azure SQL Database providing a fully managed PaaS relational database with built-in automation

 

Correct answers: D – Explanation:
Azure SQL Database is a fully managed PaaS offering with automatic backups, patching, and independent compute/storage scaling requiring no infrastructure management. SQL on a VM requires manual OS patching, backup configuration, and maintenance. Cosmos DB is a NoSQL database, not relational. Table Storage is a basic key-value store without relational database capabilities. Source: Check Source

A university needs every enrollment record to reference a valid student. Invalid student references should be rejected by the database.

Which relational database concept enforces this data integrity?

A) A stored procedure that manually validates student existence before each enrollment insert
B) A foreign key constraint ensuring every enrollment references an existing student record
C) A database view that joins enrollments with students to present only valid combinations
D) A non-clustered index created on the enrollment table to accelerate student lookup queries

 

Correct answers: B – Explanation:
A foreign key constraint enforces referential integrity at the database engine level, rejecting any enrollment insert that references a non-existent student. Indexes improve query performance but do not enforce data integrity rules. Stored procedures can validate data but are bypassable through direct table access. Views present combined data but do not prevent invalid data from being inserted. Source: Check Source
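Here is a runnable illustration of engine-level enforcement, using SQLite as a stand-in for any relational engine (the `students`/`enrollments` tables are hypothetical). Note that SQLite requires foreign-key enforcement to be switched on per connection:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("PRAGMA foreign_keys = ON")  # SQLite enforces FKs only when enabled

con.execute("CREATE TABLE students (student_id INTEGER PRIMARY KEY, name TEXT)")
con.execute("""
    CREATE TABLE enrollments (
        enrollment_id INTEGER PRIMARY KEY,
        student_id INTEGER NOT NULL REFERENCES students(student_id),
        course TEXT
    )
""")

con.execute("INSERT INTO students VALUES (1, 'Ada')")
con.execute("INSERT INTO enrollments VALUES (100, 1, 'Databases')")  # valid reference

try:
    # student_id 999 does not exist, so the engine itself rejects the insert
    con.execute("INSERT INTO enrollments VALUES (101, 999, 'Databases')")
except sqlite3.IntegrityError as err:
    print("rejected:", err)
```

Because the constraint lives in the engine, the invalid row is rejected no matter which application or tool attempts the insert, which is exactly what a stored-procedure check cannot guarantee.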

A social platform stores user profiles with varying attributes — some users have addresses, some have phone numbers, and new fields are added frequently.

Which Azure service handles this flexible schema requirement?

A) Azure Cosmos DB storing JSON documents without requiring a predefined column structure
B) Azure SQL Database requiring all columns defined upfront in a fixed relational table schema
C) Azure Cache for Redis providing an in-memory key-value cache for transient session data
D) Azure Blob Storage holding unstructured binary files without any query or indexing capability

 

Correct answers: A – Explanation:
Cosmos DB stores JSON documents without a predefined schema, accommodating varying attributes per user and frequent field additions without migration. SQL Database requires fixed column definitions that must be altered for new attributes. Blob Storage stores raw files without document-level querying or indexing. Redis is a transient cache layer, not a primary persistent document store. Source: Check Source
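The flexible-schema idea can be sketched without any cloud service at all: in a document model, each record carries only the fields it actually has, so adding a new attribute requires no schema migration. The profiles below are hypothetical:

```python
# Documents in the same collection need not share a column structure:
profiles = [
    {"id": "u1", "name": "Mei", "address": "12 Elm St"},
    {"id": "u2", "name": "Omar", "phone": "+1-555-0100"},
    {"id": "u3", "name": "Lena", "phone": "+1-555-0101", "pronouns": "she/her"},  # new field, no migration
]

# A relational table would need every column defined upfront; here each
# document simply includes or omits fields as appropriate.
with_phone = [p["id"] for p in profiles if "phone" in p]
print(with_phone)
```

A relational equivalent would force an `ALTER TABLE` (or a sparse, mostly-NULL column) every time a field like `pronouns` appeared.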

An IoT company stores billions of telemetry readings with timestamps. Queries are almost exclusively by device ID and time range.

Which non-relational data model is best suited?

A) A column-family or time-series store optimized for high-volume sequential writes and ranges
B) A document database storing each telemetry reading as a self-contained JSON document
C) A key-value store optimized for single-key lookups without support for range-based queries
D) A graph database modeling device-to-device relationships as traversable network edges

 

Correct answers: A – Explanation:
Column-family and time-series stores optimize for high-volume sequential writes and efficient time-range queries by device, matching IoT telemetry access patterns precisely. Graph databases model entity relationships, not time-series data. Key-value stores excel at single-key lookups but lack efficient range query support. Document databases add schema overhead for simple timestamp-value telemetry records. Source: Check Source

A retailer collects data from POS, website clickstreams, and social media. They need a platform to ingest, transform, store, and analyze it all.

Which Azure service provides this unified analytics experience?

A) Azure SQL Database providing relational data storage for structured transactional workloads
B) Azure Synapse Analytics combining data ingestion, big data processing, and warehousing
C) Azure DevOps providing CI/CD pipelines for application build, test, and deployment workflows
D) Azure App Service hosting web applications and APIs on a managed platform infrastructure

 

Correct answers: B – Explanation:
Azure Synapse Analytics combines data ingestion, Spark-based big data processing, dedicated SQL data warehousing, and integrated BI in a single unified analytics platform. SQL Database handles relational transactional workloads, not unified analytics. App Service hosts web applications, not analytics platforms. DevOps manages software development pipelines, not data analytics. Source: Check Source

A company loads raw data from multiple sources, cleanses and transforms it in a central store, then analysts query the results.

What is this data processing pattern called?

A) OLTP processing individual business transactions in real time as they occur
B) Database mirroring replicating an entire database to a secondary server for failover
C) ELT extracting data from sources, loading it into a central store, then transforming it there
D) Real-time streaming processing events continuously as they arrive from source systems

 

Correct answers: C – Explanation:
ELT extracts data from sources, loads it into a central store like a data lake or warehouse, then transforms it using the destination platform’s compute resources. OLTP handles individual business transactions in operational databases. Real-time streaming processes events as they arrive, not in a load-then-transform pattern. Database mirroring replicates databases for high availability, not for analytics transformation. Source: Check Source
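The three ELT stages can be sketched as plain functions over a hypothetical in-memory "central store" (all names and data are illustrative). The key point is that `load` lands the data unchanged, and `transform` runs inside the store rather than before loading:

```python
# ELT sketch: extract raw records, load them untransformed into a central
# store, then transform them *inside* the store. All data is hypothetical.

central_store = {"raw": [], "curated": []}

def extract():
    # Pull raw rows from source systems as-is (no cleansing yet).
    return [{"amount": "19.99", "region": " west "}, {"amount": "5.00", "region": "EAST"}]

def load(rows):
    # Land the raw data unchanged -- transformation happens later, in place.
    central_store["raw"].extend(rows)

def transform():
    # Use the destination platform's compute to cleanse the loaded data.
    central_store["curated"] = [
        {"amount": float(r["amount"]), "region": r["region"].strip().lower()}
        for r in central_store["raw"]
    ]

load(extract())
transform()
print(central_store["curated"])
```

In classic ETL the `transform` step would run before `load`, on a separate processing tier; ELT reorders the stages to exploit the warehouse's or lake's own compute.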

Get 790+ more questions with source-linked explanations

Every answer traces to the exact Microsoft documentation page — so you learn from the source, not just memorize answers.

Exam mode & learn mode · Score by objective · Updated 16-Apr-26

Learn more...

What the DP-900 Azure Data Fundamentals exam measures

  • Describe Core Data Concepts (25–30%) — ways to represent data, common data workloads (transactional vs. analytical), and the roles, responsibilities, and services in the data world.
  • Identify Considerations for Relational Data on Azure (20–25%) — relational concepts such as tables, normalization, and SQL, plus Azure's relational database services.
  • Describe Considerations for Working with Non-Relational Data on Azure (15–20%) — Azure Storage capabilities and Azure Cosmos DB for non-relational workloads.
  • Describe an Analytics Workload on Azure (25–30%) — large-scale analytics, real-time analytics, and data visualization with Power BI.

  • Review the official exam guide to understand every objective and domain weight before you begin studying
  • Complete the relevant Microsoft Learn learning path to build a structured foundation across all exam topics
  • Get hands-on practice in an Azure free-tier sandbox or trial environment to reinforce what you have studied with real configurations
  • Apply your knowledge through real-world project experience — whether at work, in volunteer roles, or contributing to open-source initiatives
  • Master one objective at a time, starting with the highest-weighted domain to maximize your score potential early
  • Use PowerKram learn mode to study by individual objective and review detailed explanations for every question
  • Switch to PowerKram exam mode to simulate the real test experience with randomized questions and timed conditions

Earning this certification can open doors to several in-demand data-focused roles.

Microsoft provides comprehensive free training to prepare for the DP-900 Azure Data Fundamentals exam. Start with the official Microsoft Learn learning path for structured, self-paced modules covering every exam domain. Review the exam study guide for the complete skills outline and recent updates.
