11.2 Trailhead Playground Hands-On Drill Plan
Key Takeaways
- Hands-on review should mirror administrator workflows: inspect evidence, make a scoped change, test with representative users, and document the result.
- A Trailhead Playground or Developer Edition org is the safest place to practice Setup navigation, security diagnosis, object changes, reports, flows, and Agentforce concepts.
- Drills should cover both successful configuration and failure analysis, such as missing access, bad data, overbroad automation, or report visibility surprises.
- The goal is not to memorize clicks but to build a repeatable admin diagnostic routine under closed-reference conditions.
Practicing Like an Administrator
A hands-on drill is different from clicking through a Trailhead step and forgetting it. In final review, every Playground session should answer four questions: What business problem am I solving? Where in Setup would I inspect it? What change would I make? How would I prove the result is correct? This turns the Trailhead Playground or Developer Edition org into a controlled lab instead of a badge-completion space.
Use one practice org consistently if possible. Create a simple naming convention for test users, permission sets, fields, flows, reports, and dashboards. Add a date or initials if multiple learners use similar material. The goal is not production-grade change management, but the habit matters. Platform administrators work in environments where unclear names and undocumented test changes become support debt.
| Drill area | Hands-on task | What to explain afterward |
|---|---|---|
| User access | Create or inspect a user, profile, permission set, and permission set assignment | Which layer grants app, object, field, and record access. |
| Object Manager | Add a custom field, page layout change, or compact layout change | Who sees it, where it appears, and what data quality risk exists. |
| Lightning App Builder | Compare an app page, record page, and home page | Which audience each page serves and why a page must be activated before users see it. |
| Data management | Build an import checklist and test a small CSV if safe in the lab | How IDs, validation, duplicates, and rollback exports affect the job. |
| Reports | Create a report with filters, grouping, and a folder | Why rows or fields may differ by user. |
| Flow | Inspect or build a small record-triggered or screen flow | Entry conditions, user impact, testing, and fault handling. |
| Agentforce | Review available Agentforce Setup concepts and trusted-AI learning material | Use case, permissions, grounding, testing, monitoring, and when not to use AI. |
Start each drill with inspection before configuration. For access problems, do not immediately assign a broad permission. Inspect the user record, profile, permission sets, permission set groups if present, role, public groups, queues, sharing settings, and field-level security. For report problems, inspect the report type, filters, folder access, dashboard running user if relevant, and sharing. This evidence-first pattern is the difference between an admin who fixes the symptom and an admin who understands the system.
Use a "make it fail" step. Create a test user who lacks access to a field, then observe what changes when you grant field permission. Build a report that excludes records because the report type requires a child object, then change the report approach. Create a validation rule in a lab, test an invalid save, and then ask how that same rule would affect Data Loader, integrations, and flows. Failure drills expose exam traps better than perfect demos.
Hands-On Drill Workflow
- Name the scenario in one sentence, such as "sales user cannot edit a custom field on accounts."
- Identify the possible layers before opening Setup.
- Inspect current metadata and record evidence.
- Make the smallest lab change that should solve the scenario.
- Test as the affected user or with the most realistic available substitute.
- Record the Setup path, result, side effect, and rollback step.
- Close references and explain the solution from memory.
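The workflow above can also be captured as a written drill log so nothing gets skipped. A minimal sketch in Python, where the field names are illustrative and not part of any Salesforce tool:

```python
from dataclasses import dataclass, field

@dataclass
class DrillLog:
    """One Playground drill, recorded step by step (illustrative structure)."""
    scenario: str                    # one-sentence problem statement
    layers: list                     # access layers suspected before opening Setup
    evidence: list = field(default_factory=list)  # what inspection actually showed
    change: str = ""                 # smallest lab change made
    test_result: str = ""            # observed result as the affected test user
    rollback: str = ""               # how to undo the change afterward

    def is_complete(self):
        # A drill counts as done only when every step has an entry.
        return all([self.scenario, self.layers, self.evidence,
                    self.change, self.test_result, self.rollback])

log = DrillLog(
    scenario="Sales user cannot edit a custom field on accounts",
    layers=["profile", "permission set", "field-level security"],
)
log.evidence.append("Field is read-only in the test user's field-level security")
log.change = "Granted edit access via a scoped permission set in the lab org"
log.test_result = "Test user can now edit the field"
log.rollback = "Remove the permission set assignment"
print(log.is_complete())  # True once every step is recorded
```

Filling in `rollback` before closing the session mirrors the "record the Setup path, result, side effect, and rollback step" habit from the list above.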
Data drills deserve extra care because the current outline gives Data and Analytics Management the highest weight. In a lab, prepare a tiny CSV with clean rows and flawed rows. Practice mapping fields, spotting required fields, checking picklist values, and thinking about duplicate rules before import. Even if you do not run a risky operation, write the plan: export first, include record IDs, test a sample, review success and error files, and preserve the mapping. The exam often rewards the planning control more than the button name.
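The pre-import checks described above can be rehearsed outside Salesforce as well. A hedged Python sketch that screens a tiny CSV for missing required fields and inactive picklist values before any load; the object, field names, and picklist values are made up for the drill:

```python
import csv
import io

REQUIRED = ["Name", "Industry"]  # assumed required fields for this drill
PICKLISTS = {"Industry": {"Technology", "Finance", "Healthcare"}}  # assumed active values

def pre_import_check(csv_text):
    """Split rows into clean and flagged, so flawed rows never reach the import job."""
    clean, errors = [], []
    # start=2 because row 1 of the file is the header row
    for line_no, row in enumerate(csv.DictReader(io.StringIO(csv_text)), start=2):
        problems = [f"missing {f}" for f in REQUIRED if not row.get(f, "").strip()]
        for field_name, allowed in PICKLISTS.items():
            value = row.get(field_name, "").strip()
            if value and value not in allowed:
                problems.append(f"unexpected picklist value {value!r} in {field_name}")
        if problems:
            errors.append((line_no, row, problems))
        else:
            clean.append(row)
    return clean, errors

sample = """Name,Industry
Acme,Technology
,Finance
Globex,Retail
"""
clean, errors = pre_import_check(sample)
print(len(clean), len(errors))  # 1 clean row, 2 flagged rows
```

The same review-before-load discipline applies in the org itself: the script stands in for checking required fields and picklist values by hand, not for duplicate rules or the export-first rollback plan, which still belong in the written checklist.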
Reports and dashboards should be practiced with visibility in mind. Build a simple report, save it to a folder, and ask which users can open the folder, see the records, and see the fields. Then consider a dashboard. Who is the running user, and what does the viewer actually see? If a manager wants a dashboard but individual users should not see all underlying records, the answer must respect analytics security and business intent.
Automation drills should teach caution. A Flow can solve many admin problems, but a final-review candidate should know when not to automate. If a requirement is unclear, the data is unreliable, or users need a policy decision first, automation can multiply errors. Practice reading entry criteria, record-trigger timing, update targets, fault paths, and testing plans. For the exam, the strongest answer often uses Flow when repeatable logic is needed, but it also includes testing and governance.
Agentforce practice may be more conceptual depending on what is available in the org. Keep it at Platform Administrator depth. Review use cases, permission and data access implications, prompt or agent configuration boundaries, grounding in approved data, testing tools, deployment readiness, channels, monitoring, feedback, and audit information. Also practice saying no. If the use case requires exposing sensitive data without a clear access model, or if deterministic workflow is required, AI may not be the right first choice.
End each Playground session with cleanup notes. Which metadata did you create? Which users or permissions did you alter? Which reports or flows would confuse future practice if left unnamed? In production, cleanup and documentation protect the org. In final review, they protect your mental model. You should be able to return a week later, understand what you built, and repeat the diagnostic lesson without relying on open notes during the exam.
Review Questions
- A learner completes Trailhead readings but wants a better hands-on final review routine. Which drill pattern is strongest?
- Why should final Playground drills include failure cases?
- A team asks the candidate to practice Agentforce for the Platform Administrator exam. What is the right depth?