MICROSOFT CERTIFICATION
PL-300 Power BI Data Analyst Associate Practice Exam
Exam Number: 3155 | Last updated 16-Apr-26 | 810+ questions across 4 vendor-aligned objectives
The PL-300 Power BI Data Analyst Associate certification validates the skills of data analysts who design and build scalable data models, clean and transform data, and create visualizations using Power BI. This exam measures your ability to work with Power BI Desktop, the Power BI service, Power Query, DAX, dataflows, and row-level security, demonstrating both the conceptual understanding and the practical implementation skills required in today's enterprise environments.
The most heavily weighted exam domains are Model the Data (30–35%), Prepare the Data (25–30%), and Visualize and Analyze the Data (25–30%). These areas collectively represent the majority of exam content and require focused preparation across their respective subtopics.
The remaining domain, Deploy and Maintain Assets (10–15%), rounds out the full exam blueprint and ensures candidates possess well-rounded expertise across the certification scope.
Every answer links to the source. Each explanation below includes a hyperlink to the exact Microsoft documentation page the question was derived from. PowerKram is the only practice platform with source-verified explanations. Learn about our methodology →
642
practice exam users
91%
satisfied users
89.8%
passed the exam
4.1/5
quality rating
Test your PL-300 Power BI Data Analyst knowledge
10 of 810+ questions
Question #1 - Prepare the Data
A data analyst receives sales data from five regional CSV files with inconsistent column names, date formats, and currency symbols. They need a repeatable process to clean and combine this data.
Which Power BI tool should be used for this repeatable data preparation?
A) Clean each file manually in Excel first
B) DAX formulas in the data model
C) DirectQuery mode
D) Power Query Editor with transformation steps applied to each source and an Append Queries operation to combine them
Show solution
Correct answer: D – Explanation:
Power Query records each transformation step (rename, format, clean) and replays them on refresh. Append combines the cleaned sources into one table. Excel cleanup is not repeatable. DAX operates on loaded data, not raw sources. DirectQuery does not transform data. Source: Check Source
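For illustration, the Append Queries command corresponds to the M function Table.Combine. A minimal sketch, assuming the five cleaned regional queries carry hypothetical names:

```
// Each regional query (names illustrative) already has its rename,
// date-format, and currency-cleanup steps recorded in Applied Steps.
let
    // Table.Combine is the function the Append Queries command generates
    Combined = Table.Combine({
        RegionNorth, RegionSouth, RegionEast, RegionWest, RegionCentral
    })
in
    Combined
```

On refresh, Power Query replays every recorded step in each source query before this append runs, so the combined table is always rebuilt from freshly cleaned data.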
Question #2 - Prepare the Data
A data analyst receives sales data from five regional CSV files with inconsistent column names, date formats, and currency symbols.
Which Power BI tool should be used for this repeatable data preparation?
A) DAX formulas in the data model which operate on already-loaded data not raw source transformation
B) DirectQuery mode which sends live queries without any data transformation or cleansing step
C) Power Query Editor with transformation steps applied to each source and Append to combine them
D) Clean each of the five files manually in Excel before importing which is not repeatable on refresh
Show solution
Correct answer: C – Explanation:
Power Query records each transformation step and replays them automatically on refresh. Append combines the cleaned sources into one unified table for analysis. Excel cleanup must be repeated manually every time new data files arrive and is not automated. DAX operates on data after loading and cannot perform source-level cleaning like column renaming or type conversion. DirectQuery sends live queries to the source without any intermediate transformation or cleansing capability. Source: Check Source
Question #3 - Prepare the Data
A Power Query column contains dates formatted as text strings like “15-Apr-2026”. The model needs proper Date type for time intelligence.
Which Power Query transformation converts this?
A) Import the dates into a separate table which adds unnecessary complexity to the data model
B) Leave the column as text and use DAX to parse date strings which adds runtime processing overhead
C) Change the column type to Date in Power Query using the locale-aware type conversion feature
D) Delete the column entirely and create a calculated column losing the original source data flow
Show solution
Correct answer: C – Explanation:
Power Query type conversion with locale settings properly parses text date strings into typed Date values before loading, ensuring the model has clean data for time intelligence. DAX parsing adds runtime calculation overhead for every query and is less efficient than pre-load conversion. Deleting the column loses the original data flow, requiring reconstruction of the date transformation logic. Separate tables add model complexity and relationship management for what is a simple column type conversion. Source: Check Source
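A minimal M sketch of the locale-aware conversion, assuming the column is named OrderDate and the prior step is called Source (both names illustrative):

```
// "Change Type with Locale" generates Table.TransformColumnTypes
// with a culture argument so "15-Apr-2026" parses as a true Date.
let
    Typed = Table.TransformColumnTypes(
        Source,
        {{"OrderDate", type date}},
        "en-GB"  // a culture that understands day-month-year text dates
    )
in
    Typed
```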
Question #4 - Prepare the Data
Sales data needs enrichment with product category information from a SQL Server database. Sales data is in a CSV loaded in Power Query.
Which Power Query operation joins these two sources?
A) Append Queries stacking rows from both tables vertically without any column-level joining logic
B) Merge Queries joining CSV sales data with SQL product categories on the shared Product ID column
C) Copy and paste between tables manually in the worksheet matching rows by visual inspection
D) VLOOKUP formulas in the Excel worksheet performing row-by-row lookups after data has loaded
Show solution
Correct answer: B – Explanation:
Merge Queries performs a relational join in Power Query, combining columns from both sources based on a shared key before loading into the model. Append stacks rows vertically from multiple tables without the column-level joining needed to add product names to sales rows. Manual copy-paste is error-prone, not repeatable, and does not create a refreshable transformation step. VLOOKUP is an Excel worksheet function and does not operate within the Power Query transformation pipeline. Source: Check Source
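In M, the Merge Queries dialog generates a Table.NestedJoin followed by a column expansion. A sketch with illustrative query and column names:

```
let
    // Left outer join keeps every sales row even if a category is missing
    Merged = Table.NestedJoin(
        SalesCsv, {"Product ID"},
        ProductCategories, {"Product ID"},
        "Products", JoinKind.LeftOuter
    ),
    // Pull the Category column out of the nested table column
    Expanded = Table.ExpandTableColumn(Merged, "Products", {"Category"})
in
    Expanded
```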
Question #5 - Model the Data
A data analyst builds a model with Sales fact and separate Product, Customer, and Date dimension tables.
Which modeling pattern should be implemented?
A) A star schema with one-to-many relationships from dimension tables to the central fact table
B) Many-to-many relationships between all tables which introduces aggregation ambiguity issues
C) A single flat denormalized table combining all fields which creates massive data redundancy
D) A snowflake schema with multiple normalized dimension levels adding unnecessary join complexity
Show solution
Correct answer: A – Explanation:
A star schema with one-to-many relationships from dimensions to the fact table is the optimal pattern for Power BI. The VertiPaq engine compresses and queries this structure most efficiently. Flat tables create massive redundancy by repeating dimension values for every fact row. Many-to-many relationships introduce filter propagation ambiguity that complicates DAX calculations. A snowflake schema adds extra joins between normalized dimension levels, reducing query performance without significant benefit. Source: Check Source
Question #6 - Model the Data
A sales report needs a measure showing total revenue that ignores all slicer filters, always displaying the grand total for use in percentage-of-total calculations.
Which DAX function removes filter context?
A) SUM alone which respects all current filter context from slicers without removing any filters
B) FILTER which adds additional conditions to the evaluation context rather than removing them
C) AVERAGE which calculates the arithmetic mean of filtered values rather than the unfiltered total
D) CALCULATE with ALL removing filter context from the specified table for grand total calculation
Show solution
Correct answer: D – Explanation:
CALCULATE with ALL removes filter context from the specified table, returning the grand total regardless of active slicer selections and enabling percentage-of-total calculations. SUM alone respects all current filters, returning a filtered subtotal that changes with every slicer selection. FILTER adds conditions that narrow the evaluation context further rather than broadening it by removing filters. AVERAGE computes the mean of filtered values, which is neither the filtered total nor the unfiltered grand total needed. Source: Check Source
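A DAX sketch of the pattern, assuming a Sales table with a Revenue column (names illustrative):

```
Total Revenue = SUM ( Sales[Revenue] )

-- ALL ( Sales ) removes every filter on the Sales table, so this
-- measure returns the grand total under any slicer selection
All Revenue = CALCULATE ( [Total Revenue], ALL ( Sales ) )

-- DIVIDE handles the divide-by-zero case safely
Revenue % of Total = DIVIDE ( [Total Revenue], [All Revenue] )
```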
Question #7 - Model the Data
An analyst creates a year-to-date revenue measure accumulating from January 1 through the current date selection.
Which DAX time intelligence function should be used?
A) FIRSTDATE returning the earliest date in the current filter context without any accumulation
B) DATEADD shifting dates by an interval without accumulating values from the beginning of year
C) PREVIOUSYEAR returning the corresponding period in the prior year without year-to-date accumulation
D) TOTALYTD accumulating the expression from year start through the current date filter context
Show solution
Correct answer: D – Explanation:
TOTALYTD accumulates the specified expression from the start of the year through the current date context, dynamically adjusting as users change date filter selections. DATEADD shifts dates by a specified interval for period-over-period comparison without year-to-date accumulation logic. PREVIOUSYEAR returns the prior year equivalent period for comparison rather than accumulating the current year values. FIRSTDATE returns the earliest date value in the current filter context without performing any accumulation calculation. Source: Check Source
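A minimal DAX sketch, assuming a base [Total Revenue] measure and a marked date table named 'Date' (names illustrative):

```
-- Accumulates Total Revenue from January 1 through the latest
-- date in the current filter context
Revenue YTD = TOTALYTD ( [Total Revenue], 'Date'[Date] )
```

TOTALYTD also accepts an optional year-end argument for fiscal years, e.g. TOTALYTD ( [Total Revenue], 'Date'[Date], "6-30" ).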
Question #8 - Visualize and Analyze the Data
A retail dashboard needs a summary page where clicking a region in one visual filters all other visuals on the page.
Which Power BI interaction behavior enables this?
A) Cross-filtering where clicking a value in one visual automatically filters related visuals on page
B) No interaction between visuals requiring users to manually set filters on each visual independently
C) Export each visual to PDF separately losing all interactive filtering and exploration capability
D) Use separate report pages per region requiring navigation between pages rather than filtering
Show solution
Correct answer: A – Explanation:
Power BI visuals cross-filter each other by default. Selecting a region in a bar chart automatically filters the table, map, and card visuals to show only that region's data. No-interaction mode must be explicitly configured and eliminates the interactive exploration experience. PDF export creates static snapshots without any interactive filtering, drilling, or cross-highlighting capability. Separate pages per region duplicate effort and eliminate the ability to compare regions through interactive visual filtering. Source: Check Source
Question #9 - Visualize and Analyze the Data
Stakeholders need a what-if slider adjusting discount percentage to see projected impact on revenue.
Which Power BI feature creates this interactive parameter?
A) A text box for manual input which is a static label element without any interactive calculation
B) A What-If parameter creating a calculated table with slicer and DAX measure for dynamic scenarios
C) A standard slicer filtering existing data values without creating new calculated scenario values
D) Edit the source data for each scenario which requires rebuilding the model for every comparison
Show solution
Correct answer: B – Explanation:
What-If parameters generate a calculated table with a slicer for user selection and a DAX measure referencing the selected value, enabling dynamic scenario analysis. Standard slicers filter existing data values but cannot create new calculated values based on user-adjustable parameters. Text boxes display static labels on report pages without any interactive calculation or parameter adjustment capability. Source data editing requires data refresh and model rebuilding for each scenario rather than enabling interactive real-time comparison. Source: Check Source
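The What-If dialog generates roughly the following DAX objects; a sketch with illustrative names, assuming an existing [Total Revenue] measure:

```
-- Calculated table of candidate discount rates (0% to 50% in 1% steps)
Discount = GENERATESERIES ( 0, 0.5, 0.01 )

-- Reads the value the user picked in the slicer; defaults to 0
Discount Value = SELECTEDVALUE ( 'Discount'[Discount], 0 )

-- Scenario measure driven by the slider
Projected Revenue = [Total Revenue] * ( 1 - [Discount Value] )
```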
Question #10 - Visualize and Analyze the Data
An analyst wants users to explore root causes of revenue decline by drilling into hierarchical categories.
Which Power BI visual is purpose-built for this root cause exploration?
A) A Decomposition Tree letting users expand hierarchy levels interactively with AI suggestions
B) A standard table showing all columns in a flat grid without hierarchical drill-down navigation
C) A card visual displaying a single aggregate value without any drill-down or exploration options
D) A pie chart showing proportional composition at a single level without multi-level exploration
Show solution
Correct answer: A – Explanation:
Decomposition Trees allow interactive hierarchical drill-down, with AI-powered suggestions identifying which dimension explains the most variance at each expansion step. Standard tables show flat data rows without hierarchical visual exploration or drill-down navigation. Pie charts display proportional composition at one level, without the multi-level hierarchical analysis needed for root cause investigation. Card visuals show single summary numbers without any drill-down, hierarchy, or variance exploration capability. Source: Check Source
Get 810+ more questions with source-linked explanations
Every answer traces to the exact Microsoft documentation page — so you learn from the source, not just memorize answers.
Exam mode & learn mode · Score by objective · Updated 16-Apr-26
Learn more...
What the PL-300 Power BI Data Analyst exam measures
Each domain evaluates your ability to implement and manage its tasks, including real-world job skills and scenario-based problem solving:
- Prepare the Data (25–30%)
- Model the Data (30–35%)
- Visualize and Analyze the Data (25–30%)
- Deploy and Maintain Assets (10–15%)
How to prepare for this exam
- Review the official exam guide to understand every objective and domain weight before you begin studying
- Complete the relevant Microsoft Learn learning path to build a structured foundation across all exam topics
- Get hands-on practice in Power BI Desktop (a free download) or a Power BI service trial to reinforce what you have studied with real reports and models
- Apply your knowledge through real-world project experience — whether at work, in volunteer roles, or contributing to open-source initiatives
- Master one objective at a time, starting with the highest-weighted domain to maximize your score potential early
- Use PowerKram learn mode to study by individual objective and review detailed explanations for every question
- Switch to PowerKram exam mode to simulate the real test experience with randomized questions and timed conditions
Career paths and salary outlook
Earning this certification can open doors to several in-demand roles:
- Power BI Analyst: $85,000–$120,000 per year
- Business Intelligence Developer: $90,000–$130,000 per year
- Data Visualization Specialist: $80,000–$115,000 per year
Salary ranges are based on Glassdoor and ZipRecruiter data.
Official resources
Microsoft provides comprehensive free training to prepare for the PL-300 Power BI Data Analyst Associate exam. Start with the official Microsoft Learn learning path for structured, self-paced modules covering every exam domain. Review the exam study guide for the complete skills outline and recent updates.
