Salesforce developer procedures at PWG

Status: draft. These procedures describe what is possible and what is observable. The actual branching model, deploy pipeline, and approval flow need to be confirmed with the engineering team.

TODO — document the current deploy path end-to-end:

  • Source of truth for Apex/LWC/metadata — which repo?
  • Branching model — trunk-based? GitFlow?
  • Deploy tool — SFDX source deploy, Copado, Gearset, change sets?
  • Approvals — who signs off, where is the gate?
  • Rollback — metadata redeploy, or Own for Salesforce restore?
Sandboxes

  • Full Copy refresh cadence: TODO.
  • Partial Copy and Developer sandboxes: TODO.
  • Post-refresh checklist (suggested):
    • Repoint Named Credentials to sandbox endpoints where applicable
    • Disable or repoint the aad.provision SCIM user
    • Disable or mute email deliverability
    • Rebuild integration user JWT certs if they are environment-scoped

Data loading

  • Bulk inserts/updates run through dataloader.io or the Salesforce Data Loader. Both register as Connected Apps.
  • Always authenticate as an integration user, not a human adviser.
  • For data that needs to survive a refresh cycle, capture the load script or config file in a repo — single-shot changes executed against a live org are not reproducible.
  • Per the PWG global rules: data-mart and schema changes ship through a pipeline, not through live SQL. For Salesforce equivalents, prefer SFDX unpackaged metadata over click-based DML for post-deploy seed data.
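A repo-captured seed load might look like the sketch below. This is a TypeScript illustration, not an existing PWG script: the object shape, names, and the commented-out jsforce calls are placeholders. The point is that the record set is built by reviewable code under version control, not clicked into a live org.

```typescript
// Build the seed records in code so the load is reproducible after
// every sandbox refresh. Pure function: easy to review and to test.
interface SeedAccount {
  Name: string;
  Type: string;
}

function buildSeedAccounts(count: number): SeedAccount[] {
  return Array.from({ length: count }, (_, i) => ({
    Name: `Seed Account ${String(i + 1).padStart(3, "0")}`,
    Type: "Test",
  }));
}

// The actual load would authenticate as an integration user and insert
// via an API client, e.g. with jsforce (not run here, shown as a sketch):
//
//   const conn = new jsforce.Connection({ loginUrl: SANDBOX_LOGIN_URL });
//   await conn.login(INTEGRATION_USER, PASSWORD_PLUS_TOKEN);
//   await conn.sobject("Account").create(buildSeedAccounts(50));

console.log(buildSeedAccounts(2).map(a => a.Name).join(", "));
// Seed Account 001, Seed Account 002
```

Because the record-building step is pure, the same script can be rerun against every refreshed sandbox and produces the same data each time.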

Internal Apex follows a three-layer trigger pattern:

  • SObjectTrigger (dispatcher)
  • SObjectTriggerHandler (dispatch by trigger context)
  • Service / helper classes
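The layering above can be sketched outside Apex. The TypeScript model below mirrors the dispatch-by-context idea: one entry point per object forwards every trigger context to a single handler, which is why no object ever needs a second trigger. The class and context names are illustrative, not the actual PWG classes.

```typescript
// Trigger contexts, mirroring the Apex Trigger context variables.
type TriggerContext =
  | "beforeInsert" | "afterInsert"
  | "beforeUpdate" | "afterUpdate"
  | "beforeDelete" | "afterDelete";

interface TriggerHandler {
  handle(context: TriggerContext, records: object[]): void;
}

// "SObjectTriggerHandler" equivalent: routes by context; unimplemented
// contexts are deliberate no-ops.
class AccountTriggerHandler implements TriggerHandler {
  public log: string[] = [];

  handle(context: TriggerContext, records: object[]): void {
    switch (context) {
      case "beforeInsert":
        this.log.push(`beforeInsert:${records.length}`);
        break;
      case "afterUpdate":
        this.log.push(`afterUpdate:${records.length}`);
        break;
      default:
        break;
    }
  }
}

// "SObjectTrigger" equivalent: the single dispatcher. New automation
// extends the handler rather than adding a second trigger.
function dispatch(handler: TriggerHandler, context: TriggerContext, records: object[]): void {
  handler.handle(context, records);
}

const handler = new AccountTriggerHandler();
dispatch(handler, "beforeInsert", [{}, {}]);
dispatch(handler, "afterUpdate", [{}]);
console.log(handler.log.join(","));
// beforeInsert:2,afterUpdate:1
```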

When adding new automation:

  1. Prefer Flow for declarative work — data updates based on field changes, simple branching, record creation.
  2. Use Apex for anything that is transactional, high-volume, needs platform-event publishing, or needs callouts on insert.
  3. Never add a second trigger to an object that already has one unless it is a managed-package trigger. If you find yourself needing to, extend the existing handler.
Limits and API usage

  • API usage report: analysis/api-usage-{date}.md.
  • Bulk API 2.0 is the default for any data load over 10k records, to avoid hitting the synchronous governor limits.
  • SOQL row limit: 50k rows per transaction. Apex CPU limit: 10s synchronous / 60s asynchronous. If a Flow fails with “CPU limit exceeded”, start with record-triggered Flow optimisation before moving logic to Apex.
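The thresholds above can be encoded in load tooling rather than remembered. A minimal TypeScript sketch, where the constants come from this doc and the function names are hypothetical:

```typescript
const BULK_THRESHOLD = 10_000;  // per the rule above: >10k records => Bulk API 2.0
const SOQL_ROW_LIMIT = 50_000;  // SOQL rows per transaction

// Hypothetical helper: decide which load path a job should take.
function chooseLoadPath(recordCount: number): "bulk-api-2" | "synchronous" {
  return recordCount > BULK_THRESHOLD ? "bulk-api-2" : "synchronous";
}

// Split records into chunks so no single synchronous call approaches
// the per-transaction limits.
function chunk<T>(records: T[], size: number): T[][] {
  const out: T[][] = [];
  for (let i = 0; i < records.length; i += size) {
    out.push(records.slice(i, i + size));
  }
  return out;
}

console.log(chooseLoadPath(25_000)); // bulk-api-2
console.log(chunk([1, 2, 3, 4, 5], 2).length); // 3
```

Note that Bulk API 2.0 handles batching server-side; client-side chunking like this mainly matters for synchronous loads that must stay under the transaction limits.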

Querying org data

If the data you need is already in docs/generated/, use it. Otherwise:

  1. Try mcp-server tools (soql_query, tooling_query, describe_object) via Claude Code.
  2. If it belongs in the standing audit set, extend mcp-server/scripts/run-audits.ts.
  3. One-off queries: mcp-server/scripts/smoke-query.ts is the scratch pad.
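One-off SOQL built by hand in a scratch pad is easy to break with quoting. A small helper like the following (hypothetical, not part of smoke-query.ts) keeps string literals safe:

```typescript
// Escape a user-supplied value for use inside a SOQL string literal.
// SOQL treats backslash and single quote as special inside '...'.
function soqlQuote(value: string): string {
  const escaped = value.replace(/\\/g, "\\\\").replace(/'/g, "\\'");
  return `'${escaped}'`;
}

// Hypothetical usage in a scratch-pad query:
const name = "O'Brien";
const soql = `SELECT Id FROM Account WHERE Name = ${soqlQuote(name)} LIMIT 10`;
console.log(soql);
// SELECT Id FROM Account WHERE Name = 'O\'Brien' LIMIT 10
```

Parameterised query support in the client library, where available, is still preferable to string building.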

Code review checklist (internal Apex / Flow)

  • Bulk-safe? No SOQL / DML inside loops.
  • Selector pattern? Or at least field-list-aware queries.
  • Error handling: catches sensible exceptions, logs to a diagnostic object or platform event, rethrows or suppresses deliberately.
  • Tests: at least one bulk (200 records) path.
  • Flow equivalent: check whether a Flow would do this more simply.
Related docs

  • docs/human/admin-procedures.md — business-facing operations
  • docs/dev/architecture.md — trigger and Flow layout
  • docs/dev/known-issues.md — platform gotchas