How to Write STAR Stories That Score 5/5 in APS Selection Panels
Most candidates describe what happened. The ones who score 5/5 show what they personally drove. Here's the exact formula APS panels are looking for — with level-by-level guidance from APS5 to EL2 and a worked example at APS5.
Every APS candidate knows the STAR method. Situation, Task, Action, Result — it's in every application guide, every interview prep checklist. And yet most STAR stories that reach selection panels score 2 or 3 out of 5.
The problem isn't that candidates don't know the structure. The problem is that they use STAR as a reporting framework instead of an evidence framework.
A 2/5 STAR story tells the panel what happened. A 5/5 STAR story proves what the candidate personally drove.
The Anatomy of a 5/5 STAR Story
Situation — 15–20% of your total word count
Set the context — but be ruthless about brevity. You need the panel to understand why this situation was challenging, not the full project history.
What to include:
- The business problem or risk that existed
- Why it mattered (scale, urgency, stakeholder impact)
- Who else was involved (briefly)
What to cut:
- Background that doesn't affect the story
- Vague openers like "In my current role, I regularly..."
- Anything that starts with "we" before you've established context
APS5 example:
"In mid-2024, our team of six faced a 40% spike in casework volume following a machinery-of-government change that merged two agencies' client registers. The merged register contained 12,000 duplicate records that were creating incorrect payment advices and generating around 80 ministerial complaints per month."
That's two sentences. The panel knows the scale (12,000 duplicates, 80 ministerials), the trigger (MoG change), and why it mattered (incorrect payments, ministerial pressure). That's enough.
Task — 10–15%
State your specific responsibility. The panel is assessing you, not your team. They need to understand exactly what you were accountable for — not what the broader project aimed to achieve.
Good: "I was tasked with designing and implementing a data-matching protocol to identify and resolve duplicate records before the payment cycle on 1 September."
Bad: "Our team was responsible for the data quality improvement project."
One uses "I" and names a specific deliverable with a deadline. The other is a team description with no individual accountability.
Action — 50–60% — This is where the score is determined
The Action is the heart of your STAR story. Panels read thousands of applications. They are specifically looking for judgment, initiative, and specific decisions made by the individual.
What they want to see:
- The specific steps you took (not a general description)
- Decisions you made and why
- Obstacles you hit and how you navigated them
- Stakeholders you engaged and what you asked of them
- Methods or approaches that were distinctive
What "good" looks like at each level:
- APS5: Show independent judgment within a defined scope. You make decisions about how to do the work, not whether the work should be done.
- EL1: Show that you made strategic decisions about what work to prioritise and which approach to take. Your decisions had downstream consequences for others.
- EL2 and above: Show that you set direction, allocated resources, and managed risk at a branch or division level.
The "I" discipline:
Read your Action section out loud. Every sentence with "we" needs to be examined. "We held a workshop" tells the panel nothing about you. "I designed and facilitated a workshop with representatives from three branches to reach consensus on the deduplication rules, which resolved a two-week impasse between the data governance and operations teams" — that's evidence.
Result — 15–20%
Quantify. If you can't quantify exactly, estimate and label it as such.
The panel is looking for:
- Numbers: "reduced processing time by 30%", "resolved 9,400 of 12,000 duplicate records"
- Scale: "affected 4,200 clients", "delivered a $2.4M program on time"
- Formal recognition: "commended by SES at the division all-staff", "adopted as business-as-usual process agency-wide"
- Secondary impact: "the protocol is now used quarterly to maintain data quality"
If you have no numbers, the story falls back on vague claims like "the project was successful" — which every candidate says, regardless of whether it was true.
A Worked Example: APS5
The brief: A story about improving a team process.
Version A — scores 2/5:
"In my role as APS5, I identified an inefficiency in our document processing workflow. I worked with the team to redesign the process and this improved our turnaround times significantly. My manager was pleased with the result and it became standard practice."
Version B — scores 4–5/5:
"When I joined the complaints team in January 2024, our average document processing time was 8.3 business days against a 5-day target — a gap that was generating around 45 escalation calls per week to the Director's office. I analysed six months of processing records and identified that 60% of delays were caused by a single manual re-keying step that had been added as a temporary workaround in 2021 and never removed. I designed a revised workflow that eliminated the step using an existing system feature, documented it, and ran two training sessions with the eight-person team. By the end of February, average processing time had dropped to 4.6 days — 0.4 days inside target — and escalation calls had fallen to fewer than 10 per week. The revised workflow was formally adopted by the section and presented at the branch quality forum."
The second version has a specific problem (8.3 days vs 5-day target), an identified root cause (manual re-keying step from 2021), clear personal action (analysis, workflow design, documentation, training), and a quantified result (4.6 days, escalation calls from 45 to 10).
Common Mistakes That Cost Points
1. Describing the team's work, not yours
"We developed a stakeholder engagement strategy" — what was your contribution specifically? Did you draft the plan? Facilitate the workshops? Manage the resistant stakeholder? Say so.
2. Padding the Situation
Long context paragraphs don't impress panels — they frustrate them. If your Situation is more than 20% of your total word count, cut it.
3. Action verbs without substance
"I led the project", "I managed stakeholders", "I developed the solution" — these are category descriptions, not evidence. What specifically did you do? What did "lead" mean in practice?
4. Results without numbers
"The project was delivered on time and under budget" — every candidate says this. "The project was delivered 11 days ahead of the August milestone and $180,000 under the approved budget" — that's evidence.
5. Matching the wrong ILS capability
Each STAR story maps best to one of the five ILS capabilities. Submitting a story about process improvement for a "Communicates with Influence" criterion is a mismatch that costs you points — even if the story is well-written.
What Role Ascent Does
Role Ascent's STAR Story Builder reads your raw notes and restructures them into polished STAR responses calibrated to your target band. It handles the word count calibration (APS3 targets ~150 words; EL1 targets ~300), maps the story to the right ILS capability, and scores your story for quality, panel likelihood, and distinctiveness.
If you're preparing for a selection process, it's the fastest way to turn your career experience into evidence-ready stories.
Ready to put this into practice?
Role Ascent optimises your resume, builds STAR stories, and prepares you for panel interviews — tailored to the exact job description.
Get started free

Role Ascent Team
Writing about APS careers, interview preparation, and resume strategy for Australian Public Service applicants.