SALESFORCE CERTIFICATION
Certified MuleSoft Developer II Practice Exam
Exam Number: 3735 | Last updated 14-Apr-26 | 1196+ questions across 5 vendor-aligned objectives
The Certified MuleSoft Developer II exam is the advanced developer credential for the MuleSoft ecosystem. It targets experienced developers who design and implement complex integration solutions involving custom connectors, advanced DataWeave, and enterprise-grade error handling and reliability patterns on Anypoint Platform.
Expect about 30% of exam content to cover advanced Mule development: batch processing, transactions, custom connectors, and advanced patterns. Advanced DataWeave commands 20% of the blueprint, spanning custom modules, streaming, recursive operations, and optimization. Nearly one-fifth of questions test reliability and performance: error handling, idempotency, caching, and connection management. Combined, these sections account for the lion’s share of the exam and reflect the skills employers value most.
Beyond the core areas, the exam also evaluates complementary skills. The API Governance and Policies domain weighs in at 15%, spanning custom policies, SLA tiers, and API versioning strategies. Testing and Operations, also at 15%, demands serious preparation across advanced MUnit, monitoring, and production troubleshooting. Although individually lighter, these topics frequently appear in scenario-based questions that blend multiple skill areas.
Every answer links to the source. Each explanation below includes a hyperlink to the exact Salesforce documentation page the question was derived from. PowerKram is the only practice platform with source-verified explanations. Learn about our methodology →
292
practice exam users
94.9%
satisfied users
94.2%
passed the exam
4.1/5
quality rating
Test your Certified MuleSoft Developer II knowledge
10 of 1196+ questions
Question #1 - Implement and maintain batch processing, transactions, and custom connectors to deliver reliable platform solutions that meet real-world business demands
A senior developer is building a Mule application that processes order transactions requiring atomicity — if the database insert succeeds but the external API call fails, the database insert must be rolled back.
What pattern should the developer implement?
A) Process the database and API operations asynchronously without coordination
B) Ignore the failure and let the database retain the partial data
C) Use separate Mule applications for the database and API operations
D) Implement a Try scope with transactional configuration using XA or local transactions that span the database operation, with compensating logic in the error handler to reverse the insert if the API call fails
Show solution
Correct answers: D – Explanation:
Transactional Try scopes ensure atomicity across multiple operations. XA transactions coordinate distributed resources, while local transactions handle single-resource rollback. Compensating transactions (explicit reversal logic) handle cross-system scenarios where XA is not available. Ignoring failures creates data inconsistency. Separate applications lose transactional context. Asynchronous processing without coordination risks partial completion. Source: MuleSoft Docs: Documentation
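The compensating-transaction idea can be sketched in plain Java, with hypothetical `insertOrder`, `callApi`, and `deleteOrder` helpers standing in for the real database and HTTP operations (this mirrors the error-handler logic, not Mule syntax):

```java
import java.util.ArrayList;
import java.util.List;

public class CompensationDemo {
    static List<String> db = new ArrayList<>();

    static void insertOrder(String id) { db.add(id); }    // stand-in for the database insert
    static void deleteOrder(String id) { db.remove(id); } // compensating action: reverse the insert

    // Stand-in for the external API call; 'fail' simulates an outage.
    static void callApi(String id, boolean fail) {
        if (fail) throw new RuntimeException("API call failed");
    }

    // Mirrors a Try scope: on API failure the error handler reverses the insert.
    static boolean processOrder(String id, boolean apiFails) {
        insertOrder(id);
        try {
            callApi(id, apiFails);
            return true;
        } catch (RuntimeException e) {
            deleteOrder(id); // compensation keeps the two systems consistent
            return false;
        }
    }

    public static void main(String[] args) {
        processOrder("o-1", false);
        processOrder("o-2", true); // API fails, so the insert is rolled back
        System.out.println(db);    // only o-1 remains
    }
}
```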
Question #2 - Implement and maintain batch processing, transactions, and custom connectors to deliver reliable platform solutions that meet real-world business demands
A developer needs to build a custom connector for a proprietary internal API that does not have a pre-built MuleSoft connector. The connector should be reusable across multiple Mule projects.
What approach should the developer take?
A) Use HTTP Request connectors with hardcoded configurations in every project
B) Build a custom connector using the MuleSoft XML SDK or Java SDK, package it as a reusable asset, and publish it to Anypoint Exchange for team-wide consumption
C) Use Groovy scripting components to call the API
D) Write custom Java code directly in each Mule application
Show solution
Correct answers: B – Explanation:
Custom connectors built with the MuleSoft SDK provide a clean, reusable abstraction of the proprietary API. Publishing to Anypoint Exchange makes the connector discoverable and installable in any Mule project via the palette. Hardcoded HTTP requests in every project create duplication. Inline Java code is not reusable across projects. Groovy scripting lacks the connector framework’s configuration UI and documentation. Source: MuleSoft Docs: Connector Devkit
Question #3 - Optimize error handling, idempotency, and caching to maintain fast response times and high availability even under peak traffic loads
A developer is implementing a caching strategy in a Mule application to reduce redundant calls to a slow external reference data API. The cache should expire after 30 minutes and be invalidated when the reference data changes.
What MuleSoft component should the developer use?
A) Call the external API every time regardless of previous responses
B) Store API responses in a file and read from the file
C) Use a static DataWeave variable to hold the cached data
D) Use the Object Store connector with a TTL (time-to-live) of 30 minutes for cache entries, and implement cache invalidation logic triggered by a notification from the reference data system
Show solution
Correct answers: D – Explanation:
The Object Store provides a persistent key-value cache with configurable TTL. Entries expire after 30 minutes automatically, and explicit invalidation (deleting the cache key) handles data change events. File storage is not designed for caching and lacks TTL. Calling the API every time defeats the caching purpose. DataWeave variables are request-scoped and do not persist across requests. Source: MuleSoft Docs: Object Store Connector
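The behavior in this scenario (TTL expiry plus explicit invalidation) can be sketched as an in-memory cache in plain Java; this is an illustrative stand-in, not the Object Store connector itself:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// In-memory sketch of the scenario's cache semantics: put with TTL, get, explicit invalidation.
public class TtlCache {
    private static class Entry {
        final Object value; final long expiresAt;
        Entry(Object v, long e) { value = v; expiresAt = e; }
    }

    private final Map<String, Entry> store = new ConcurrentHashMap<>();

    public void put(String key, Object value, long ttlMillis) {
        store.put(key, new Entry(value, System.currentTimeMillis() + ttlMillis));
    }

    public Object get(String key) {
        Entry e = store.get(key);
        if (e == null) return null;
        if (System.currentTimeMillis() > e.expiresAt) { // TTL expiry
            store.remove(key);
            return null;
        }
        return e.value;
    }

    // Triggered by a change notification from the reference data system.
    public void invalidate(String key) { store.remove(key); }

    public static void main(String[] args) throws InterruptedException {
        TtlCache cache = new TtlCache();
        cache.put("rates", "v1", 50);           // 50 ms TTL for the demo (30 min in the scenario)
        System.out.println(cache.get("rates")); // v1
        Thread.sleep(60);
        System.out.println(cache.get("rates")); // null (expired)
    }
}
```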
Question #4 - Implement and maintain batch processing, transactions, and custom connectors to deliver reliable platform solutions that meet real-world business demands
A developer needs to implement an idempotent message processor in a Mule application that prevents duplicate order processing when the same order message is received multiple times.
What pattern should the developer implement?
A) Delete messages from the source queue immediately without processing
B) Process every message regardless of duplicates and rely on the downstream system to handle deduplication
C) Log duplicate messages and continue processing them
D) Use the Idempotent Message Validator with an Object Store that tracks processed message IDs, rejecting messages with IDs that have already been processed
Show solution
Correct answers: D – Explanation:
The Idempotent Message Validator checks each message’s unique identifier against a store of previously processed IDs. Duplicates are rejected without reprocessing. The Object Store persists processed IDs with configurable TTL. Relying on downstream deduplication shifts responsibility. Deleting without processing loses messages. Processing duplicates creates data integrity issues. Source: MuleSoft Docs: Documentation
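The validator's core check can be sketched in a few lines of Java, with an in-memory set standing in for the Object Store of processed IDs (a pattern sketch, not the Mule component):

```java
import java.util.HashSet;
import java.util.Set;

public class IdempotentValidator {
    // Stands in for the persistent Object Store of processed message IDs.
    private final Set<String> processedIds = new HashSet<>();

    // Returns true only the first time a message ID is seen; duplicates are rejected.
    public boolean accept(String messageId) {
        return processedIds.add(messageId);
    }

    public static void main(String[] args) {
        IdempotentValidator v = new IdempotentValidator();
        System.out.println(v.accept("order-42")); // true: first delivery, process it
        System.out.println(v.accept("order-42")); // false: duplicate, reject it
    }
}
```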
Question #5 - Structure and govern custom modules, streaming, and recursive operations to ensure clean, scalable data structures that power accurate reporting and integrations
A developer is building a complex integration where multiple API calls must be orchestrated: call API A, use the result to call API B, then merge both results and call API C.
What advanced DataWeave technique should the developer use to merge the results?
A) Use DataWeave variables and multi-input transformations to combine payloads from multiple sources, leveraging the ++ operator for object merging and map/flatMap for nested data combination
B) Concatenate the JSON strings manually in a Set Payload component
C) Use a Java class to merge the data structures
D) Store intermediate results in global variables and combine them in the final transformation
Show solution
Correct answers: A – Explanation:
DataWeave handles complex data merging natively. Variables store intermediate results, the ++ operator merges objects, and functions like map and flatMap combine nested structures. Multi-input transformations can reference variables, attributes, and payload simultaneously. Global variables break encapsulation. String concatenation is fragile and error-prone. Java adds unnecessary complexity for what DataWeave handles declaratively. Source: MuleSoft Docs: Documentation
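The merge step can be approximated outside DataWeave as a plain Java sketch. Note one semantic difference: DataWeave's ++ concatenates object entries and can repeat keys, while a Java map overwrites duplicates; the `merge` helper below is an illustrative stand-in, not MuleSoft API:

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class MergeDemo {
    // Approximates DataWeave's ++ on objects: entries from b are appended to a.
    // In a Java map a duplicate key is overwritten rather than repeated.
    static Map<String, Object> merge(Map<String, ?> a, Map<String, ?> b) {
        Map<String, Object> out = new LinkedHashMap<>(a);
        out.putAll(b);
        return out;
    }

    public static void main(String[] args) {
        // Hypothetical intermediate results from API A and API B.
        Map<String, String> fromApiA = Map.of("orderId", "o-1", "status", "NEW");
        Map<String, String> fromApiB = Map.of("status", "ENRICHED", "customer", "c-9");
        System.out.println(merge(fromApiA, fromApiB)); // combined payload for the call to API C
    }
}
```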
Question #6 - Optimize error handling, idempotency, and caching to maintain fast response times and high availability even under peak traffic loads
A developer needs to implement a circuit breaker pattern for an external API call that has been experiencing intermittent failures, to prevent cascading failures in the integration.
How should the developer implement this in Mule?
A) Implement a circuit breaker using the Until Successful scope with failure thresholds, or use a custom implementation with Object Store tracking failure counts and states (closed, open, half-open)
B) Add a Try scope that catches all errors and returns a default response without tracking failure patterns
C) Remove the external API call entirely to avoid failures
D) Increase the HTTP timeout to infinity to wait for the API to respond
Show solution
Correct answers: A – Explanation:
Circuit breakers track failure patterns and open the circuit (stop calling the failing service) after a threshold is reached, allowing the service time to recover. Object Store tracks failure counts and circuit state. Half-open state tests if the service has recovered. Removing the API call loses functionality. Infinite timeouts block resources. Silent error swallowing masks systemic issues and does not prevent cascading failures. Source: MuleSoft Docs: Documentation
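The closed/open/half-open state machine described above can be sketched in plain Java; this is an illustrative pattern sketch (threshold and cool-down values are arbitrary), not a Mule component:

```java
// Minimal circuit breaker with the three states from the explanation:
// CLOSED (calls flow), OPEN (calls rejected), HALF_OPEN (a trial call probes recovery).
public class CircuitBreaker {
    enum State { CLOSED, OPEN, HALF_OPEN }

    private State state = State.CLOSED;
    private int failures = 0;
    private final int threshold;   // consecutive failures before opening
    private final long openMillis; // cool-down before probing again
    private long openedAt;

    public CircuitBreaker(int threshold, long openMillis) {
        this.threshold = threshold;
        this.openMillis = openMillis;
    }

    public synchronized boolean allowRequest() {
        if (state == State.OPEN && System.currentTimeMillis() - openedAt >= openMillis) {
            state = State.HALF_OPEN; // cool-down elapsed: let one probe through
        }
        return state != State.OPEN;
    }

    public synchronized void recordSuccess() {
        failures = 0;
        state = State.CLOSED; // service recovered
    }

    public synchronized void recordFailure() {
        failures++;
        if (state == State.HALF_OPEN || failures >= threshold) {
            state = State.OPEN; // stop calling the failing service
            openedAt = System.currentTimeMillis();
        }
    }

    public static void main(String[] args) {
        CircuitBreaker cb = new CircuitBreaker(3, 1000);
        for (int i = 0; i < 3; i++) cb.recordFailure();
        System.out.println(cb.allowRequest()); // false: circuit is open
    }
}
```

In a Mule flow, the equivalent state and counters would live in an Object Store so they survive across events and workers.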
Question #7 - Connect and synchronize custom policies, SLA tiers, and API versioning strategies to keep data flowing reliably between enterprise systems and APIs with minimal latency
A developer is creating a custom API policy for Anypoint API Manager that validates JWT tokens against a specific identity provider and extracts custom claims for downstream processing.
How should the developer build this custom policy?
A) Modify the built-in JWT validation policy source code
B) Use a third-party API gateway for JWT handling instead of Anypoint
C) Create a custom policy using the Mule Policy SDK, defining the policy YAML specification and implementing the JWT validation and claims extraction logic in a Mule configuration
D) Add JWT validation logic to every Mule application individually
Show solution
Correct answers: C – Explanation:
Custom policies built with the Mule Policy SDK are reusable across APIs in API Manager. The YAML specification defines configuration parameters, and the Mule configuration implements validation logic. Publishing to Exchange makes the policy available to all APIs. Modifying built-in policies is not supported. Per-application logic duplicates effort. Third-party gateways miss the Anypoint ecosystem integration. Source: MuleSoft Docs: Documentation
Question #8 - Implement and maintain batch processing, transactions, and custom connectors to deliver reliable platform solutions that meet real-world business demands
A developer is optimizing a Mule application that processes 10,000 messages per minute from a Kafka topic. The application is experiencing memory pressure and slow processing.
What performance optimizations should the developer implement?
A) Reduce the Kafka topic partitions to simplify processing
B) Increase the JVM heap size to the maximum available memory
C) Switch from Kafka to file-based message processing
D) Optimize by configuring Kafka consumer batch sizes, implementing streaming DataWeave transformations to avoid loading entire payloads into memory, and tuning thread pool sizes for the processing flows
Show solution
Correct answers: D – Explanation:
Performance optimization requires multiple approaches: batch size tuning controls memory per fetch, streaming DataWeave processes data without full payload buffering, and thread pool tuning balances parallelism with resource consumption. Simply increasing heap delays the problem. Reducing partitions reduces throughput capacity. File-based processing loses Kafka’s real-time streaming benefits. Source: MuleSoft Docs: Documentation
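The consumer-side tuning mentioned above can be illustrated with standard Kafka consumer configuration keys (`max.poll.records`, `max.partition.fetch.bytes`, and `fetch.max.bytes` are real Kafka client settings; the values shown are illustrative starting points to load-test, not recommendations):

```java
import java.util.Properties;

// Sketch of consumer batch-size tuning: capping records and bytes per fetch bounds
// the memory held per poll, which pairs with streaming transformations downstream.
public class KafkaTuning {
    static Properties consumerProps() {
        Properties p = new Properties();
        p.setProperty("max.poll.records", "200");              // cap records returned per poll
        p.setProperty("max.partition.fetch.bytes", "1048576"); // 1 MiB per partition per fetch
        p.setProperty("fetch.max.bytes", "5242880");           // 5 MiB total per fetch
        return p;
    }

    public static void main(String[] args) {
        System.out.println(consumerProps());
    }
}
```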
Question #9 - Apply advanced MUnit testing, monitoring, and production troubleshooting to catch issues before they reach production and maintain code quality across releases
A developer needs to implement advanced MUnit tests that verify a Mule flow’s behavior when an external service returns different HTTP status codes (200, 400, 404, 500) and various response payloads.
What MUnit testing strategy should the developer use?
A) Create parameterized MUnit tests with multiple mock configurations for each HTTP status code scenario, asserting different flow behaviors (payload transformation, error handling, routing) for each case
B) Test each scenario manually in Postman instead of MUnit
C) Write a single test that only validates the happy path
D) Mock the HTTP connector to always return 200 and skip error scenario testing
Show solution
Correct answers: A – Explanation:
Comprehensive MUnit testing covers multiple scenarios with different mock configurations. Each test case configures the HTTP mock to return a specific status code and payload, then asserts the expected flow behavior for that scenario. This ensures error handling and edge cases are verified. Single happy-path tests miss error scenarios. Manual Postman testing is not automated for CI/CD. Mocking only 200 responses leaves error handling untested. Source: MuleSoft Docs: Documentation
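In MUnit the parameterization lives in XML test configuration; the underlying table-driven idea can be sketched generically in Java, with a hypothetical `handleResponse` standing in for the flow under test:

```java
import java.util.Map;

public class TableDrivenTestDemo {
    // Stand-in for the flow under test: maps an HTTP status to a routing decision.
    static String handleResponse(int status) {
        if (status == 200) return "transform";
        if (status == 400 || status == 404) return "client-error-handler";
        return "retry";
    }

    public static void main(String[] args) {
        // One expected behavior per mocked status code, mirroring one MUnit case per mock.
        Map<Integer, String> cases = Map.of(
            200, "transform",
            400, "client-error-handler",
            404, "client-error-handler",
            500, "retry");
        cases.forEach((status, expected) -> {
            String actual = handleResponse(status);
            if (!actual.equals(expected))
                throw new AssertionError(status + ": expected " + expected + " got " + actual);
        });
        System.out.println("all cases passed");
    }
}
```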
Question #10 - Apply advanced MUnit testing, monitoring, and production troubleshooting to catch issues before they reach production and maintain code quality across releases
A developer is troubleshooting a production Mule application using Anypoint Monitoring. The application has intermittent latency spikes that correlate with specific API endpoints.
What monitoring approach should the developer use to identify the root cause?
A) Review application logs from the last year to find patterns
B) Add System.out.println statements to the application code and redeploy
C) Restart the application to clear the latency issue
D) Use Anypoint Monitoring dashboards to analyze API response times by endpoint, enable distributed tracing to identify slow components within flows, and set up alerts for latency threshold breaches
Show solution
Correct answers: D – Explanation:
Anypoint Monitoring provides API-level performance dashboards that break down response times by endpoint. Distributed tracing reveals which flow components (connectors, transformers) contribute to latency. Alerts provide proactive notification. Print statements require redeployment and produce noisy output. Restarts are temporary fixes. Year-long log review is impractical for intermittent issues. Source: MuleSoft Docs: Monitoring
Get 1196+ more questions with source-linked explanations
Every answer traces to the exact Salesforce documentation page — so you learn from the source, not just memorize answers.
Exam mode & learn mode · Score by objective · Updated 14-Apr-26
Learn more...
What the Certified MuleSoft Developer II exam measures
- Implement and maintain batch processing, transactions, and custom connectors to deliver reliable platform solutions that meet real-world business demands
- Structure and govern custom modules, streaming, and recursive operations to ensure clean, scalable data structures that power accurate reporting and integrations
- Connect and synchronize custom policies, SLA tiers, and API versioning strategies to keep data flowing reliably between enterprise systems and APIs with minimal latency
- Optimize error handling, idempotency, and caching to maintain fast response times and high availability even under peak traffic loads
- Apply advanced MUnit testing, monitoring, and production troubleshooting to catch issues before they reach production and maintain code quality across releases
How to prepare for this exam
- Review the official exam guide
- Complete the MuleSoft Developer II trail and study the advanced Anypoint Platform documentation
- Build a batch processing application that handles large data sets with error handling, retries, and dead letter queue patterns
- Work on a production MuleSoft deployment and gain experience troubleshooting performance and reliability issues
- Focus on Advanced Development and Reliability — they combine for 50% of the exam
- Use PowerKram’s learn mode for advanced MuleSoft scenarios
- Run timed exams in PowerKram’s exam mode
Career paths and salary outlook
Advanced MuleSoft developers command premium compensation for complex integration work:
- Senior MuleSoft Developer — $130,000–$175,000 per year, building enterprise-grade integrations (Glassdoor salary data)
- MuleSoft Architect — $145,000–$195,000 per year, designing integration strategy and platform architecture (Indeed salary data)
- Integration Platform Lead — $140,000–$185,000 per year, leading middleware teams and API governance (Glassdoor salary data)
Official resources
Follow the MuleSoft Certified Developer II Learning Path. The official exam guide provides the full objective breakdown.
