Salesforce developer procedures at PWG
Status: draft. These procedures describe what is possible and what is observable. The actual branching model, deploy pipeline, and approval flow need to be confirmed with the engineering team.
Deployment
TODO — document the current deploy path end-to-end:
- Source of truth for Apex/LWC/metadata — which repo?
- Branching model — trunk-based? GitFlow?
- Deploy tool — SFDX source deploy, Copado, Gearset, change sets?
- Approvals — who signs off, where is the gate?
- Rollback — metadata redeploy, or Own for Salesforce restore?
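Once the deploy tool is confirmed, a thin script wrapper keeps the deploy path reproducible instead of living in shell history. A minimal sketch, assuming the modern `sf` CLI turns out to be the tool (unconfirmed above); `buildDeployCommand`, the org alias, and the source path are all placeholders:

```typescript
// Hypothetical deploy wrapper: builds the SFDX CLI invocation so the exact
// command is captured in the repo. Assumes the `sf` CLI; alias/path are
// placeholders, and the actual approval gate still applies before running it.
function buildDeployCommand(sourceDir: string, orgAlias: string, checkOnly: boolean): string[] {
  const args = ["project", "deploy", "start", "--source-dir", sourceDir, "--target-org", orgAlias];
  if (checkOnly) {
    args.push("--dry-run"); // validate the deploy without committing changes
  }
  return args;
}

const cmd = buildDeployCommand("force-app", "pwg-uat", true);
console.log(["sf", ...cmd].join(" "));
```

A `--dry-run` validation against a sandbox first gives a cheap rollback story: nothing is committed until the real run.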
Sandbox management
- Full Copy refresh cadence: TODO.
- Partial Copy and Developer sandboxes: TODO.
- Post-refresh checklist (suggested):
- Repoint Named Credentials to sandbox endpoints where applicable
- Disable or repoint the aad.provision SCIM user
- Disable or mute email deliverability
- Rebuild integration user JWT certs if they are environment-scoped
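The checklist above lends itself to a small tracker so no step is silently skipped after a refresh. A hypothetical sketch: the step names mirror the suggested list, and nothing here talks to an org.

```typescript
// Hypothetical post-refresh checklist tracker. Step names mirror the
// suggested list above; ticking them off is manual until each is automated.
interface RefreshStep {
  name: string;
  done: boolean;
}

function pendingSteps(steps: RefreshStep[]): string[] {
  return steps.filter((s) => !s.done).map((s) => s.name);
}

const checklist: RefreshStep[] = [
  { name: "Repoint Named Credentials to sandbox endpoints", done: true },
  { name: "Disable or repoint the aad.provision SCIM user", done: false },
  { name: "Disable or mute email deliverability", done: false },
  { name: "Rebuild environment-scoped integration JWT certs", done: false },
];

console.log(pendingSteps(checklist));
```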
Data loads
- Bulk inserts/updates run through dataloader.io or the Salesforce Data Loader. Both register as Connected Apps.
- Always authenticate as an integration user, not a human adviser.
- For data that needs to survive a refresh cycle, capture the load script or config file in a repo — single-shot changes executed against a live org are not reproducible.
- Per the PWG global rules: data-mart and schema changes ship through a pipeline, not through live SQL. For Salesforce equivalents, prefer SFDX unpackaged metadata over click-based DML for post-deploy seed data.
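Keeping the load as code makes the refresh-survival rule above literal: re-running the script reproduces the data. A minimal sketch of generating a seed file the Data Loader or a Bulk API 2.0 job could consume; `toCsv`, the `Region__c` field, and the record values are illustrative, not a real PWG schema.

```typescript
// Hypothetical seed-data generator: emits a CSV that the Data Loader (or a
// Bulk API 2.0 job) can ingest, so the load is reproducible from the repo.
// Field names are illustrative only.
type SeedRecord = Record<string, string>;

function toCsv(records: SeedRecord[]): string {
  if (records.length === 0) return "";
  const headers = Object.keys(records[0]);
  // Quote values containing commas, quotes, or newlines per CSV convention.
  const escape = (v: string) => (/[",\n]/.test(v) ? `"${v.replace(/"/g, '""')}"` : v);
  const rows = records.map((r) => headers.map((h) => escape(r[h] ?? "")).join(","));
  return [headers.join(","), ...rows].join("\n");
}

const seed: SeedRecord[] = [
  { Name: "Test Account, Ltd", Region__c: "UK" },
  { Name: "Smoke Account", Region__c: "EU" },
];

console.log(toCsv(seed));
```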
Trigger handler pattern
Internal code follows:

SObjectTrigger (dispatcher)
  ↓
SObjectTriggerHandler (dispatch by trigger context)
  ↓
Service / helper classes

When adding new automation:
- Prefer Flow for declarative work — data updates based on field changes, simple branching, record creation.
- Use Apex for anything that is transactional, high-volume, needs platform-event publishing, or needs callouts on insert.
- Never add a second trigger to an object that already has one unless it is a managed-package trigger. If you find yourself needing to, extend the existing handler.
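The dispatcher layout above is Apex, but the shape is language-agnostic: one entry point per object, one handler that switches on trigger context. A sketch of that structure in TypeScript for illustration; the class and context names follow the convention above, not real code from the org.

```typescript
// Sketch of the single-trigger dispatcher pattern described above, written
// in TypeScript for illustration (the real implementation is Apex).
type TriggerContext = "beforeInsert" | "afterInsert" | "beforeUpdate" | "afterUpdate";

class AccountTriggerHandler {
  handled: string[] = [];

  // One dispatch point: the trigger calls this with its context, and the
  // handler routes to the right service logic. Adding automation means
  // extending this handler, never adding a second trigger to the object.
  dispatch(context: TriggerContext, records: object[]): void {
    switch (context) {
      case "beforeInsert":
        this.handled.push(`beforeInsert:${records.length}`);
        break;
      case "afterUpdate":
        this.handled.push(`afterUpdate:${records.length}`);
        break;
      default:
        // Contexts with no automation fall through deliberately.
        break;
    }
  }
}

const handler = new AccountTriggerHandler();
handler.dispatch("beforeInsert", [{}, {}]);
console.log(handler.handled);
```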
Governor limits and platform capacity
- API usage report: analysis/api-usage-{date}.md.
- Bulk API 2.0 is the default for any data load over 10k records — avoid hitting the synchronous governor limits.
- SOQL row limit: 50k per transaction. Apex CPU: 10s sync / 60s async. If a Flow fails with “CPU limit exceeded”, start with record-triggered Flow optimisation before moving logic to Apex.
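The 10k rule above can be encoded as a simple guard in load scripts rather than left to memory. A hypothetical helper: the threshold mirrors the rule, and the return values are descriptive labels, not endpoints.

```typescript
// Hypothetical guard encoding the rule above: loads over 10k records go
// through Bulk API 2.0; smaller loads can stay on the synchronous REST API.
const BULK_THRESHOLD = 10_000;

function chooseApi(recordCount: number): "bulk-api-2.0" | "rest" {
  return recordCount > BULK_THRESHOLD ? "bulk-api-2.0" : "rest";
}

console.log(chooseApi(250_000)); // large backfill
console.log(chooseApi(500)); // small corrective load
```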
Running a new report / extract
If the data you need is already in docs/generated/ — use it.
Otherwise:
- Try mcp-server tools (soql_query, tooling_query, describe_object) via Claude Code.
- If it belongs in the standing audit set, extend mcp-server/scripts/run-audits.ts.
- One-off queries: mcp-server/scripts/smoke-query.ts is the scratch pad.
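For one-off extracts it helps to assemble the SOQL string in one place before handing it to whichever tool runs it. A hypothetical helper in the spirit of the scratch-pad script; `buildSoql` only builds the string and executes nothing.

```typescript
// Hypothetical SOQL builder for one-off extracts: assembles the query string
// only. Execution goes through the mcp-server tools or an existing script.
function buildSoql(object: string, fields: string[], where?: string, limit?: number): string {
  let q = `SELECT ${fields.join(", ")} FROM ${object}`;
  if (where) q += ` WHERE ${where}`;
  if (limit !== undefined) q += ` LIMIT ${limit}`;
  return q;
}

console.log(buildSoql("Account", ["Id", "Name"], "CreatedDate = LAST_N_DAYS:7", 200));
```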
Code review checklist (internal Apex / Flow)
- Bulk-safe? No SOQL / DML inside loops.
- Selector pattern? Or at least field-list-aware queries.
- Error handling: catches sensible exceptions, logs to a diagnostic object or platform event, rethrows or suppresses deliberately.
- Tests: at least one bulk (200 records) path.
- Flow equivalent: check whether a Flow would do this more simply.
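The bulk-safe check is the one that catches most reviews. The shape to look for, sketched in TypeScript since the real code is Apex: collect keys in one pass, do one batched lookup, then act per record, with no query or DML inside the loop. `regionByContact` and the field names are hypothetical.

```typescript
// Bulk-safe shape sketched in TypeScript for illustration. In Apex the
// "lookup" would be a single SOQL query outside the loop, sized for a
// 200-record trigger chunk.
interface Contact { id: string; accountId: string; }
interface Account { id: string; region: string; }

function regionByContact(
  contacts: Contact[],
  fetchAccounts: (ids: string[]) => Account[],
): Map<string, string> {
  // 1. Collect keys in one pass (no lookups yet), deduplicated.
  const accountIds = [...new Set(contacts.map((c) => c.accountId))];
  // 2. One batched lookup for the whole chunk, not one per record.
  const regions = new Map(fetchAccounts(accountIds).map((a) => [a.id, a.region]));
  // 3. Per-record work runs against the in-memory map only.
  return new Map(contacts.map((c) => [c.id, regions.get(c.accountId) ?? "unknown"]));
}

const fakeFetch = (ids: string[]): Account[] => ids.map((id) => ({ id, region: "UK" }));
const result = regionByContact(
  [{ id: "c1", accountId: "a1" }, { id: "c2", accountId: "a1" }],
  fakeFetch,
);
console.log(result.get("c1"));
```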
Further reading
- docs/human/admin-procedures.md — business-facing operations
- docs/dev/architecture.md — trigger and Flow layout
- docs/dev/known-issues.md — platform gotchas