SALESFORCE CERTIFICATION
Certified Platform Data Architect Practice Exam
Exam Number: 3706 | Last updated 14-Apr-26 | 1688+ questions across 6 vendor-aligned objectives
The Certified Platform Data Architect credential is for professionals who design and implement data solutions on the Salesforce platform at an enterprise scale. It covers data modeling, large data volume management, data migration strategies, and master data management principles within multi-cloud Salesforce environments.
A full 25% of the exam targets Data Modeling and Design, making it the single largest section; it covers object relationships, schema design, external objects, and big objects. Large Data Volume Management follows at 20%, covering indexing, skinny tables, data skew, and query optimization. The exam allocates another 20% to Data Migration, covering ETL strategies, Bulk API, data transformation, and validation. Candidates who master these top-weighted areas position themselves well for the majority of exam questions.
Additional sections test your breadth across the platform. Nearly 15% of questions cover Data Governance, spanning master data management, duplicate management, and data quality frameworks. Salesforce Data Solutions weighs in at 10%, covering Salesforce Connect, Heroku Connect, and Data Cloud integration. The remaining 10% goes to Data Security: field-level security, encryption, data classification, and compliance. Do not overlook these sections — the exam regularly weaves them into multi-concept scenarios.
Every answer links to the source. Each explanation below includes a hyperlink to the exact Salesforce documentation page the question was derived from. PowerKram is the only practice platform with source-verified explanations. Learn about our methodology →
363
practice exam users
97.5%
satisfied users
97.2%
passed the exam
4.2/5
quality rating
Test your Certified Platform Data Architect knowledge
10 of 1688+ questions
Question #1 - Structure and govern object relationships, schema design, and external objects to ensure clean, scalable data structures that power accurate reporting and integrations
A global retail company is migrating 500 million historical transaction records to Salesforce. The records are rarely accessed but must be available for compliance reporting.
Which Salesforce data storage solution should the data architect recommend?
A) Big Objects to store the historical transactions with async SOQL for reporting
B) External Objects connected to the legacy system via Salesforce Connect
C) A custom Heroku database with OData integration
D) Custom objects with archival record types and a scheduled purge job
Correct answer: A – Explanation:
Big Objects store massive data volumes without consuming standard data storage, and Async SOQL enables reporting queries over those large datasets. Custom objects would blow past standard storage limits at 500 million records. External Objects require the legacy system to remain operational. Heroku adds infrastructure complexity. Source: Salesforce Docs: Big Objects Guide
Question #2 - Model and optimize indexing, skinny tables, and data skew to ensure clean, scalable data structures that power accurate reporting and integrations
A financial institution is experiencing slow SOQL query performance on an Account object with 20 million records. Most queries filter by Account Type and Region.
What should the data architect recommend to improve query performance?
A) Partition the Account object into separate custom objects by region
B) Add skinny tables through Salesforce Support
C) Convert the Account object to a Big Object
D) Create a custom composite index on Account Type and Region by requesting Salesforce Support
Correct answer: D – Explanation:
Custom composite indexes on frequently filtered field combinations significantly improve SOQL query performance on large data volumes. Partitioning breaks the data model. Big Objects are for archival storage. Skinny tables are complementary but not the primary solution for filter performance. Source: Salesforce Docs: Large Data Volumes Guide
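To make the scenario concrete, a query like the following sketch (the custom field API names are hypothetical) only performs well against 20 million rows if the filtered fields are indexed; a two-column custom index requested from Salesforce Support lets the query optimizer satisfy both filters at once:

```sql
SELECT Id, Name
FROM Account
WHERE Account_Type__c = 'Retail Banking'
  AND Region__c = 'EMEA'
```

Without a selective index on these fields, a filter like this degenerates into a full scan of the Account table.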
Question #3 - Transform and validate ETL strategies, Bulk API, and data transformation to move data and metadata between environments with zero data loss and minimal downtime
A company is planning a data migration from SAP to Salesforce involving Accounts, Contacts, and Opportunities with complex lookup relationships and 5 million records.
What is the correct approach for maintaining referential integrity?
A) Use Data Loader with auto-matching to resolve relationships after all data is loaded
B) Load all objects simultaneously using parallel Bulk API jobs
C) Load parent objects first, then children using external ID fields for relationship mapping
D) Create temporary text fields for relationship keys and use a post-migration flow
Correct answer: C – Explanation:
Parent records must be loaded first so children can reference them. External ID fields allow the Bulk API to resolve relationships by matching legacy identifiers. Parallel loading risks orphaned records. Auto-matching does not exist. Temporary fields and post-migration flows are error-prone. Source: Salesforce Docs: Bulk API Guide
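As a sketch of the external ID pattern (the field name Legacy_Account_ID__c and the sample values are hypothetical), a Contact load file can reference its already-loaded parent Account through a relationship header of the form RelationshipName.ExternalIdField, so the Bulk API resolves each lookup without the migration team ever handling Salesforce record IDs:

```csv
LastName,Email,Account.Legacy_Account_ID__c
Nguyen,t.nguyen@example.com,SAP-000481
Okafor,c.okafor@example.com,SAP-000507
```

Upserting Accounts first with Legacy_Account_ID__c populated as an external ID field makes these references resolvable.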
Question #4 - Optimize and accelerate Salesforce Connect, Heroku Connect, and Data Cloud integration to shorten sales cycles, improve forecast accuracy, and maximize revenue capture
An e-commerce company wants to display real-time product inventory data from their warehouse management system on Salesforce record pages without storing the data in Salesforce.
Which solution should the data architect recommend?
A) A scheduled batch job that syncs inventory data every 5 minutes
B) A Lightning Web Component that calls the warehouse API directly from the browser
C) Platform Events to stream inventory updates into a custom Salesforce object
D) External Objects using Salesforce Connect with an OData adapter
Correct answer: D – Explanation:
External Objects with Salesforce Connect provide real-time, on-demand access to external data without storing it in Salesforce. Batch jobs create stale data. Browser-side API calls have CORS and security issues. Platform Events still require storing data in Salesforce. Source: Trailhead: Salesforce Connect
Question #5 - Monitor and report master data management, duplicate management, and data quality frameworks to meet regulatory requirements and maintain auditable records of system changes and access
A data architect is designing duplicate management rules for a healthcare organization where patient Contact records must be deduplicated based on Last Name, Date of Birth, and a custom National ID field.
Which configuration should the data architect implement?
A) A matching rule with fuzzy match on Last Name and exact match on other fields, paired with a duplicate rule that alerts users
B) A before-save flow that searches for duplicates and blocks the save
C) A Batch Apex job that runs nightly to merge duplicate Contacts
D) A validation rule that queries for existing records with the same field values
Correct answer: A – Explanation:
Salesforce Duplicate Management uses matching rules (supporting fuzzy and exact matching) and duplicate rules for real-time deduplication. Validation rules cannot query other records. Flows lack the fuzzy matching engine. Batch jobs only catch duplicates after the fact. Source: Trailhead: Data Quality
Question #6 - Model and optimize indexing, skinny tables, and data skew to ensure clean, scalable data structures that power accurate reporting and integrations
A manufacturing company has an Account ownership skew problem where a single integration user owns 3 million Account records.
What should the data architect recommend?
A) Increase the sharing recalculation timeout threshold
B) Convert the Account OWD to Public Read/Write
C) Redistribute ownership of the records based on region or business unit
D) Move the 3 million records to a Big Object
Correct answer: C – Explanation:
Redistributing ownership eliminates skew that causes lock contention and slow sharing recalculations. Salesforce recommends no single user own more than 10,000 records. There is no configurable timeout. Big Objects cannot replace active data. Changing OWD removes security controls. Source: Salesforce Docs: Large Data Volumes Guide
Question #7 - Structure and govern object relationships, schema design, and external objects to ensure clean, scalable data structures that power accurate reporting and integrations
A telecommunications company needs to store Customer, Contract, and Service records. Each Customer has multiple Contracts, each Contract includes multiple Services. Roll-up summaries of Service revenue are needed at both levels.
How should the data architect design the data model?
A) Create master-detail relationships: Service to Contract to Customer, leveraging native roll-up summary fields
B) Use lookup relationships between all three objects and calculate rollups with Apex
C) Create external objects with Salesforce Connect for aggregation
D) Use a single flat object with record types for each entity
Correct answer: A – Explanation:
Master-detail relationships enable native roll-up summary fields at each level without code. Lookup relationships do not support native roll-ups. A flat object violates normalization. External objects do not support roll-up summaries. Source: Trailhead: Data Modeling
Question #8 - Transform and validate ETL strategies, Bulk API, and data transformation to move data and metadata between environments with zero data loss and minimal downtime
A data architect is planning a weekend migration of 50 million records using the Bulk API within a 48-hour window.
Which Bulk API approach should maximize throughput?
A) Bulk API 1.0 with serial mode
B) REST API with individual record inserts
C) Data Loader with maximum batch size
D) Bulk API 2.0 with parallel processing and optimally sized CSV files
Correct answer: D – Explanation:
Bulk API 2.0 automatically manages batching, parallel processing, and retries. Bulk API 1.0 in serial mode processes one batch at a time and is far slower. Individual REST API inserts cannot approach the required throughput. Data Loader wraps the Bulk API but offers less control over job sizing than calling Bulk API 2.0 directly. Source: Salesforce Docs: Bulk API Guide
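A minimal sketch of the "optimally sized CSV files" step, assuming the documented per-job data cap for Bulk API 2.0 ingest (150 MB of base64-encoded CSV, so roughly 100 MB of raw text is a conservative chunk target); this is illustrative tooling, not an official utility:

```python
# Hedged sketch (not official tooling): split a large export into CSV
# payloads sized for separate Bulk API 2.0 ingest jobs. Salesforce caps
# the CSV data per job (150 MB base64-encoded), so ~100 MB of raw text
# per chunk is a conservative default.
import csv
import io

def chunk_csv(rows, header, max_bytes=100 * 1024 * 1024):
    """Yield CSV payload strings, each starting with the header row and
    staying under max_bytes (approximate: measured on the raw text)."""
    def fresh():
        buf = io.StringIO()
        csv.writer(buf).writerow(header)
        return buf

    buf, data_rows = fresh(), 0
    for row in rows:
        line = io.StringIO()
        csv.writer(line).writerow(row)
        # Start a new chunk when adding this row would exceed the cap,
        # but never emit a chunk that contains only the header.
        if data_rows and buf.tell() + line.tell() > max_bytes:
            yield buf.getvalue()
            buf, data_rows = fresh(), 0
        buf.write(line.getvalue())
        data_rows += 1
    if data_rows:
        yield buf.getvalue()
```

Each yielded string can then be submitted as the data for its own ingest job, letting the platform parallelize processing across jobs.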
Question #9 - Monitor and report master data management, duplicate management, and data quality frameworks to meet regulatory requirements and maintain auditable records of system changes and access
A company’s data governance team found that 30% of Account records have missing or inconsistent Industry field values. They want to enforce quality going forward and clean existing data.
What strategy should the data architect implement?
A) A screen flow for data entry that requires Industry and Data Loader for existing records
B) A validation rule to enforce Industry and a Batch Apex cleanup of existing records
C) A page layout required field constraint
D) A report to identify bad records with emails to Account owners
Correct answer: B – Explanation:
A validation rule enforces quality at the point of entry across all interfaces. Batch Apex cleans existing records systematically. Screen flows only enforce on specific pages. Page layout requirements are not enforced via API. Reports and emails rely on manual compliance. Source: Trailhead: Data Quality
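For the enforcement half, a minimal validation rule sketch on Account (assuming the standard Industry picklist) raises an error whenever the field is blank, regardless of whether the record arrives through the UI, the API, or an integration:

```
/* Error condition: formula evaluates to true when Industry is blank */
ISBLANK(TEXT(Industry))
```

TEXT() converts the picklist value to a string so ISBLANK can evaluate it; the rule's error message and error location are configured alongside the formula.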
Question #10 - Monitor and report master data management, duplicate management, and data quality frameworks to meet regulatory requirements and maintain auditable records of system changes and access
A data architect needs to design an archival strategy for an org approaching its storage limit. Historical Case records older than two years should be archived but still accessible.
Which archival approach should the data architect recommend?
A) Compress the Case records by removing attachment data
B) Move old Cases to a secondary Salesforce org
C) Delete old Cases and rely on the Recycle Bin
D) Export old Cases to Big Objects and create a custom Lightning component for lookup access
Correct answer: D – Explanation:
Big Objects provide scalable storage for archived data. A custom component can query Big Objects for on-demand lookup. The Recycle Bin only retains records for 15 days. A secondary org adds complexity. Removing attachments may violate data retention requirements. Source: Salesforce Docs: Big Objects Guide
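As a hedged sketch (the big object Case_Archive__b and its fields are hypothetical), queries against a big object must filter on its index fields in the order they are defined, which is the kind of query the custom Lightning component would issue for on-demand lookups:

```sql
SELECT Case_Number__c, Closed_Date__c, Subject__c
FROM Case_Archive__b
WHERE Account_Name__c = 'Acme Corp'
```

Leading index fields take equality filters, only the last filtered index field may use a range, and non-index fields cannot appear in the WHERE clause, so the index definition must be designed around the lookup patterns up front.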
Get the full bank of 1688+ questions with source-linked explanations
Every answer traces to the exact Salesforce documentation page — so you learn from the source, not just memorize answers.
Exam mode & learn mode · Score by objective · Updated 14-Apr-26
Learn more...
What the Certified Platform Data Architect exam measures
- Structure and govern object relationships, schema design, and external objects to ensure clean, scalable data structures that power accurate reporting and integrations
- Model and optimize indexing, skinny tables, and data skew to ensure clean, scalable data structures that power accurate reporting and integrations
- Transform and validate ETL strategies, Bulk API, and data transformation to move data and metadata between environments with zero data loss and minimal downtime
- Monitor and report master data management, duplicate management, and data quality frameworks to meet regulatory requirements and maintain auditable records of system changes and access
- Optimize and accelerate Salesforce Connect, Heroku Connect, and Data Cloud integration to shorten sales cycles, improve forecast accuracy, and maximize revenue capture
- Enforce and audit field-level security, encryption, and data classification to safeguard sensitive data and enforce least-privilege access across the organization
How to prepare for this exam
- Review the official exam guide for final gap analysis
- Study Salesforce’s data architecture Trailhead modules — focus on Large Data Volumes, Data Modeling, and External Data patterns
- Practice designing data models for a fictional enterprise scenario — include custom objects, junction objects, external objects, and rollup summaries
- Participate in a data migration project or build a mock migration using Data Loader and Bulk API in a sandbox environment
- Master one section at a time — Data Modeling and Large Data Volumes carry the most weight
- Use PowerKram’s learn mode to deepen your understanding of architectural trade-offs
- Run full-length practice exams in PowerKram’s exam mode to prepare for the real test experience
Career paths and salary outlook
Data architects command premium compensation and are essential for large Salesforce implementations:
- Salesforce Data Architect — $140,000–$185,000 per year, designing enterprise data strategies (Glassdoor salary data)
- Salesforce Technical Architect — $165,000–$220,000 per year, leading multi-cloud architecture decisions (Indeed salary data)
- Enterprise Data Architect — $155,000–$200,000 per year, managing data strategy across platforms (Glassdoor salary data)
Official resources
Follow the Data Architect Learning Path on Trailhead. The official exam guide outlines every tested objective.
