SALESFORCE CERTIFICATION
Certified Heroku Architect Practice Exam
Exam Number: 3724 | Last updated 14-Apr-26 | 2167+ questions across 5 vendor-aligned objectives
The Certified Heroku Architect exam validates your ability to design and deploy scalable applications on Heroku — Salesforce’s cloud platform-as-a-service (PaaS). It covers application architecture, dyno management, add-on ecosystems, data services, and integration patterns between Heroku apps and Salesforce CRM.
The Heroku Platform Architecture domain weighs in at 25%, covering dynos, buildpacks, add-ons, and platform capabilities. With another 25% of the exam, Application Design demands serious preparation, covering the twelve-factor app methodology, scaling patterns, and microservices architecture. Questions on data services make up 20% of the test, covering Heroku Postgres, Redis, Kafka, and data management strategies. Combined, these three sections account for 70% of the exam and reflect the skills employers value most.
The remaining sections balance the blueprint. Salesforce Integration accounts for 20%, spanning Heroku Connect, APIs, platform events, and data synchronization. The final 10% of the exam targets security and compliance, spanning Private Spaces, Shield, SSL, and enterprise security features. Do not overlook these sections; the exam regularly weaves them into multi-concept scenarios.
Every answer links to the source. Each explanation below includes a hyperlink to the exact Salesforce documentation page the question was derived from. PowerKram is the only practice platform with source-verified explanations. Learn about our methodology →
382
practice exam users
91.1%
satisfied users
99.7%
passed the exam
4.5/5
quality rating
Test your Certified Heroku Architect knowledge
10 of 2167+ questions
Question #1 - Integrate and monitor Heroku Connect, APIs, and platform events to keep data flowing reliably between Salesforce and external systems with minimal latency
A company needs to build a customer-facing web application that handles 100,000 concurrent users with unpredictable traffic spikes. The application must integrate with their Salesforce CRM data.
Why is Heroku a suitable platform for this application?
A) Heroku requires no code changes from the development team
B) Heroku offers auto-scaling dynos that handle traffic spikes, managed data services, and native Salesforce integration through Heroku Connect
C) Heroku is the only platform that supports Salesforce integration
D) Heroku provides unlimited free hosting for any application
Correct answer: B – Explanation:
Heroku’s auto-scaling adjusts dyno count based on traffic, ensuring performance during spikes. Managed services (Postgres, Redis) reduce operational overhead. Heroku Connect provides bidirectional data synchronization with Salesforce without custom API code. Heroku is not free at scale. Many platforms integrate with Salesforce. Code may need adaptation for the twelve-factor methodology. Source: Trailhead: Heroku Enterprise Basics
Question #2 - Design applications around twelve-factor principles, scaling patterns, and microservices architecture so they remain reliable and responsive under load
A Heroku architect is designing an application that must follow the twelve-factor app methodology. The development team currently stores configuration values like database URLs and API keys in the application code.
How should the architect address this according to twelve-factor principles?
A) Store configuration in a separate configuration file committed to Git
B) Move all configuration to Heroku config vars (environment variables), ensuring strict separation of config from code
C) Continue storing configuration in code for simplicity
D) Create a configuration database that the application queries at startup
Correct answer: B – Explanation:
The twelve-factor app methodology requires strict separation of configuration from code. Heroku config vars store environment-specific values (database URLs, API keys, feature flags) as environment variables, accessible to the application at runtime. This enables the same codebase to run across different environments. Code-embedded config violates twelve-factor. Git-committed config files mix config with code. Configuration databases add complexity for simple key-value config. Source: The Twelve-Factor App
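The config-from-environment rule above can be sketched in a few lines. This is a minimal illustration, not Heroku tooling; the config var names DATABASE_URL and API_KEY follow common Heroku conventions but are assumptions here.

```python
import os

# Twelve-factor config sketch: every environment-specific value comes from
# environment variables (Heroku config vars), never from code or Git.
def load_config() -> dict:
    """Read required settings from the environment, failing fast if any are missing."""
    required = ["DATABASE_URL", "API_KEY"]  # illustrative var names
    missing = [name for name in required if name not in os.environ]
    if missing:
        raise RuntimeError(f"Missing config vars: {', '.join(missing)}")
    return {name: os.environ[name] for name in required}
```

On Heroku the values would be set with `heroku config:set DATABASE_URL=...`, so the same codebase runs unchanged in staging and production.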
Question #3 - Integrate and monitor Heroku Connect, APIs, and platform events to keep data flowing reliably between Salesforce and external systems with minimal latency
An architect needs to design a Heroku application that processes large CSV file uploads (up to 500MB) from customers, transforms the data, and loads it into Salesforce.
What Heroku architecture should the architect design?
A) Store the file on the web dyno’s local filesystem for later processing
B) Use web dynos to accept the upload, queue the processing job with a worker dyno using a Redis-backed job queue, and use Heroku Connect or Bulk API for Salesforce loading
C) Process the file synchronously in the web dyno during the HTTP request
D) Limit uploads to 10MB to avoid processing complexity
Correct answer: B – Explanation:
Web dynos should handle the upload and immediately queue a background job. Worker dynos process the CSV asynchronously, avoiding HTTP timeout limits. Redis-backed queues manage job distribution. Heroku Connect or the Bulk API handles high-volume Salesforce data loading. Synchronous processing blocks the web dyno and risks timeouts. Local filesystem storage is ephemeral on Heroku. Limiting file size ignores the business requirement. Source: Heroku Dev Center: Background Jobs
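The web/worker split above can be sketched as follows, with Python's in-process `queue.Queue` standing in for a Redis-backed job queue (e.g. an RQ-style worker); the function names and job shape are illustrative.

```python
import queue

# In-process stand-in for a Redis-backed job queue shared by web and worker dynos.
job_queue: "queue.Queue[dict]" = queue.Queue()

def handle_upload(file_id: str, size_bytes: int) -> str:
    """Web dyno: accept the upload, enqueue a processing job, return immediately
    so the HTTP request finishes well inside Heroku's 30-second router timeout."""
    job_queue.put({"file_id": file_id, "size_bytes": size_bytes})
    return f"queued:{file_id}"

def worker_drain() -> list:
    """Worker dyno: consume queued CSV jobs asynchronously (processing stubbed)."""
    processed = []
    while not job_queue.empty():
        job = job_queue.get()
        # Real code would stream the CSV from object storage, transform rows,
        # and load them into Salesforce via Heroku Connect or the Bulk API.
        processed.append(job["file_id"])
        job_queue.task_done()
    return processed
```

The key property is that the web dyno never touches the 500MB file during the request; it only records that work exists.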
Question #4 - Select and operate Heroku Postgres, Redis, and Kafka to meet relational, caching, and streaming data requirements
A Heroku architect is selecting the data services for an application that needs a relational database, a caching layer for session management, and a message queue for asynchronous processing.
Which Heroku data services should the architect use?
A) A single Heroku Postgres database for all three needs
B) Heroku Postgres for relational data, Heroku Redis for session caching, and Heroku Kafka or a Redis-based queue for message queuing
C) MongoDB for all data storage and messaging
D) File-based SQLite for the database and local memory for caching
Correct answer: B – Explanation:
Each data service is purpose-built: Postgres handles relational data with ACID compliance, Redis provides sub-millisecond caching for sessions, and Kafka (or Redis-based queues) handles message queuing with durability. Using Postgres for caching adds latency. MongoDB is a document store, not ideal for all three. SQLite and local memory are ephemeral on Heroku and do not persist across dyno restarts. Source: Heroku Dev Center
Question #5 - Integrate and monitor Heroku Connect, APIs, and platform events to keep data flowing reliably between Salesforce and external systems with minimal latency
A company wants to display Salesforce Account and Contact data in their Heroku-hosted customer portal with real-time synchronization. Changes in Salesforce should appear in the portal within seconds.
Which integration approach should the architect use?
A) A nightly batch export from Salesforce to a Heroku Postgres database
B) Heroku Connect with the Salesforce Connector configured for the Account and Contact objects with low-latency polling or Streaming API for near-real-time sync
C) Direct Salesforce REST API queries from the Heroku app on every page load
D) Embed Salesforce Lightning pages in an iframe within the Heroku app
Correct answer: B – Explanation:
Heroku Connect synchronizes Salesforce data to Heroku Postgres bidirectionally, providing the portal with local database performance. Streaming API or low-latency polling options achieve near-real-time sync. Direct API queries add latency and consume API limits. Nightly batches create stale data. iframes provide poor user experience and limited integration. Source: Heroku Dev Center: Heroku Connect
Question #6 - Design applications around twelve-factor principles, scaling patterns, and microservices architecture so they remain reliable and responsive under load
A Heroku application needs to handle a sudden 10x traffic increase during a marketing event without pre-provisioning resources.
What scaling strategy should the architect implement?
A) Accept degraded performance during traffic spikes as unavoidable
B) Configure Heroku Autoscaling for web dynos based on response time and throughput metrics, with a defined maximum dyno limit
C) Scale vertically to the largest available dyno size
D) Deploy 10x the normal dyno count permanently to handle potential spikes
Correct answer: B – Explanation:
Heroku Autoscaling automatically adjusts web dyno count based on defined metrics (response time, throughput), scaling up during spikes and down during lulls. A maximum limit controls costs. Permanent over-provisioning wastes resources. Accepting degradation loses customers. Vertical scaling has a ceiling and does not handle concurrent request distribution as well as horizontal scaling. Source: Heroku Dev Center: Scaling
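As a rough illustration of the scale-out/scale-in decision Heroku Autoscaling makes for you, here is a toy policy function; the thresholds, step sizes, and function itself are invented for this sketch and are not Heroku's actual algorithm.

```python
# Toy autoscaling policy: derive a desired web dyno count from a p95 latency
# metric, bounded by a floor and a cost-controlling ceiling.
def desired_dynos(current: int, p95_ms: float, target_ms: float = 500,
                  min_dynos: int = 2, max_dynos: int = 20) -> int:
    """Scale out when p95 latency exceeds the target, in when well under it."""
    if p95_ms > target_ms:
        proposed = current + max(1, current // 2)  # scale out aggressively on spikes
    elif p95_ms < target_ms * 0.5:
        proposed = current - 1                     # scale in gently during lulls
    else:
        proposed = current                         # within band: hold steady
    return max(min_dynos, min(max_dynos, proposed))
```

The ceiling (`max_dynos`) is what keeps a marketing-event spike from becoming a billing spike.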
Question #7 - Lock down and govern Private Spaces, Shield, and SSL to safeguard sensitive data and enforce least-privilege access across the organization
A Heroku architect needs to ensure that an application processing sensitive financial data meets enterprise security requirements including network isolation and compliance certifications.
What Heroku feature should the architect use?
A) Heroku Private Spaces with dedicated network isolation, private data services, and compliance certifications (SOC, ISO, PCI)
B) A VPN tunnel from the client’s data center to standard Heroku
C) Standard Heroku Common Runtime with SSL enabled
D) Host the application on a personal Heroku account for simplicity
Correct answer: A – Explanation:
Heroku Private Spaces provide dedicated, isolated network environments with compliance certifications. Data services run in the private network, and traffic is isolated from the shared runtime. Standard Runtime with SSL provides encryption but not network isolation. VPN to standard Heroku does not provide the same isolation level. Personal accounts lack enterprise security features. Source: Heroku Dev Center: Private Spaces
Question #8 - Select and operate Heroku Postgres, Redis, and Kafka to meet relational, caching, and streaming data requirements
An architect is designing a Heroku application with multiple microservices that need to communicate with each other asynchronously without tight coupling.
What communication pattern should the architect implement?
A) A shared Postgres database that all microservices read and write to
B) Event-driven communication using Heroku Kafka (Apache Kafka) as a message broker between microservices
C) File-based communication through a shared filesystem
D) Direct HTTP calls between each microservice pair
Correct answer: B – Explanation:
Apache Kafka on Heroku provides event-driven, asynchronous communication that decouples microservices. Each service publishes and subscribes to topics without knowing about other services. Direct HTTP creates tight coupling and cascading failures. Shared databases violate microservice data isolation principles. Heroku’s ephemeral filesystem does not support shared file-based communication. Source: Heroku Dev Center: Apache Kafka
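The decoupling described above can be illustrated with a toy in-process event bus; real microservices would use a Kafka client against the add-on's brokers, and the topic names below are invented for the sketch.

```python
from collections import defaultdict
from typing import Callable

# Minimal in-process stand-in for Kafka-style pub/sub: producers publish to
# named topics, consumers subscribe without knowing who produces.
class EventBus:
    def __init__(self) -> None:
        self._subscribers: dict = defaultdict(list)

    def subscribe(self, topic: str, handler: Callable[[dict], None]) -> None:
        """Register a consumer for a topic."""
        self._subscribers[topic].append(handler)

    def publish(self, topic: str, event: dict) -> None:
        # Fan out to every subscriber; the producer knows only the topic name,
        # never the services consuming it.
        for handler in self._subscribers[topic]:
            handler(event)
```

With this shape, adding a shipping service means adding one subscriber; the orders service that publishes "order.created" is untouched.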
Question #9 - Establish and manage dynos, buildpacks, and add-ons to support daily platform operations and evolving business requirements
A Heroku architect needs to implement a CI/CD pipeline for a Node.js application with automated testing, staging review, and production deployment.
What Heroku features should the architect configure?
A) Direct pushes to the production app from each developer’s machine
B) A custom Jenkins server deployed on a separate Heroku dyno
C) Manual deployments from the Heroku Dashboard after local testing
D) Heroku Pipelines with GitHub integration, review apps for pull requests, automated CI testing, and promotion from staging to production
Correct answer: D – Explanation:
Heroku Pipelines provide a structured CI/CD workflow: GitHub integration triggers builds, Review Apps create ephemeral environments for pull request testing, Heroku CI runs automated tests, and the pipeline promotes validated builds from staging to production. Manual deployments skip validation. Custom Jenkins adds infrastructure management. Direct production pushes skip all quality gates. Source: Heroku Dev Center: Pipelines
Question #10 - Select and operate Heroku Postgres, Redis, and Kafka to meet relational, caching, and streaming data requirements
A Heroku application is experiencing increased Postgres database latency. The architect suspects the database connection pool is exhausted during peak traffic.
What should the architect do to diagnose and resolve this?
A) Switch from Postgres to a NoSQL database
B) Restart the application dynos to clear connections
C) Upgrade to the largest Postgres plan available
D) Monitor connection pool usage with Heroku Postgres metrics, implement connection pooling (PgBouncer), optimize long-running queries, and configure appropriate pool size limits
Correct answer: D – Explanation:
Connection pool exhaustion requires monitoring current usage, implementing connection pooling middleware (PgBouncer) to multiplex connections, identifying and optimizing long-running queries that hold connections, and setting pool sizes appropriate for the dyno count and Postgres plan. Upgrading may increase connection limits but does not fix pool management. Restarts are temporary. Switching databases is a disproportionate response. Source: Heroku Dev Center: Postgres Connection Pooling
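What PgBouncer-style pooling buys you can be sketched with a toy fixed-size pool; the `Connection` class below is a stub standing in for real driver connections, not any library's API.

```python
import queue

class Connection:
    """Stub for a database connection (a real pool would hand out driver objects)."""
    def __init__(self, conn_id: int) -> None:
        self.conn_id = conn_id

class ConnectionPool:
    """Fixed-size pool: connections are reused, never opened per request."""
    def __init__(self, size: int) -> None:
        self._pool: "queue.Queue[Connection]" = queue.Queue()
        for i in range(size):
            self._pool.put(Connection(i))

    def acquire(self, timeout: float = 1.0) -> Connection:
        # Blocks until a connection frees up; raises queue.Empty on exhaustion
        # instead of opening a new connection and overloading Postgres.
        return self._pool.get(timeout=timeout)

    def release(self, conn: Connection) -> None:
        self._pool.put(conn)
```

The diagnostic value is the timeout: under peak load, acquire-time waits surface pool exhaustion explicitly, which is exactly what Heroku Postgres connection metrics help you spot.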
Get 2167+ more questions with source-linked explanations
Every answer traces to the exact Salesforce documentation page — so you learn from the source, not just memorize answers.
Exam mode & learn mode · Score by objective · Updated 14-Apr-26
Learn more...
What the Certified Heroku Architect exam measures
- Establish and manage dynos, buildpacks, and add-ons to support daily platform operations and evolving business requirements
- Design applications around twelve-factor principles, scaling patterns, and microservices architecture so they remain reliable and responsive under load
- Select and operate Heroku Postgres, Redis, and Kafka to meet relational, caching, and streaming data requirements
- Integrate and monitor Heroku Connect, APIs, and platform events to keep data flowing reliably between Salesforce and external systems with minimal latency
- Lock down and govern Private Spaces, Shield, and SSL to safeguard sensitive data and enforce least-privilege access across the organization
How to prepare for this exam
- Review the official exam guide
- Complete the Heroku Architect trail on Trailhead and study Heroku’s developer documentation on scaling and data services
- Deploy a multi-dyno application on Heroku that integrates with a Salesforce Developer Org using Heroku Connect
- Build a side project that exercises Heroku Postgres, Redis caching, and background worker dynos for real-world architecture experience
- Focus on Platform Architecture and Application Design — they combine for 50% of the exam
- Use PowerKram’s learn mode for Heroku-specific architecture questions
- Test readiness in PowerKram’s exam mode
Career paths and salary outlook
Heroku architects serve organizations building custom apps that extend the Salesforce ecosystem:
- Heroku Architect — $140,000–$190,000 per year, designing cloud-native applications on Heroku (Glassdoor salary data)
- Cloud Platform Engineer — $125,000–$170,000 per year, building and operating PaaS-based application infrastructure (Indeed salary data)
- Full-Stack Architect — $150,000–$200,000 per year, designing end-to-end application solutions spanning frontend, backend, and data (Glassdoor salary data)
Official resources
Follow the Heroku Architect Learning Path on Trailhead. The official exam guide provides the full objective list.
