You can’t stop what you can’t see
One of the most consistent things I’ve seen looking at overtime data is that organisations don’t actually have a clear way of separating overtime that genuinely adds value from overtime that just… happens.
Claims that get approved, get paid, and sit there looking completely reasonable.
And that’s the problem - because if you look at any individual claim, you can usually justify it: someone stayed back because something needed to get done.
There’s always a story that makes sense, but when you step back and look across the full dataset, over months and years, a different picture shows up. You start to see patterns where overtime is being used in ways that don’t really move anything forward.
It’s consistent and repeatable, and it doesn’t trigger anything because nothing looks out of place and no metric ever says "we didn't get much value from this" - so the pattern repeats, month after month after month.
That’s how you end up with organisations losing more than 10% of their overtime spend every month to work that isn’t really delivering value. Not because anyone is trying to game the system in a blatant way, but because there’s no mechanism that actually asks the question: was this worth it?
Most reporting doesn’t help here. You’ll see totals, trends, maybe spikes. What you won’t see is whether the spend was justified in any meaningful sense. So the organisation keeps paying for both: the overtime that matters, and the overtime that doesn’t.
All you need to do to make the difference is separate the low-value claims from the high-value ones; once you can do that, you can see where the problems are.
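In code, that separation can be as simple as scoring each claim and splitting on a threshold. This is a minimal sketch only - the claim fields (`hours`, `outcome_hours`) and the value heuristic are hypothetical stand-ins; in practice the score would come from whatever outcome data your organisation actually tracks.

```python
def classify_claims(claims, threshold=0.5):
    """Split claims into high- and low-value lists by a simple value score.

    Hypothetical heuristic: the fraction of claimed hours that produced
    a tracked outcome. Anything at or above the threshold counts as
    high-value; everything else is flagged for review.
    """
    high, low = [], []
    for claim in claims:
        hours = claim["hours"]
        score = claim["outcome_hours"] / hours if hours else 0.0
        (high if score >= threshold else low).append(claim)
    return high, low


# Illustrative data - not real claims.
claims = [
    {"id": 1, "hours": 4, "outcome_hours": 4},  # clearly productive
    {"id": 2, "hours": 6, "outcome_hours": 1},  # mostly habitual
    {"id": 3, "hours": 3, "outcome_hours": 2},  # productive enough
]

high, low = classify_claims(claims)
```

The heuristic itself matters far less than the mechanism: once every claim carries a value score, the low-value pattern stops being invisible.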
That's what we do - and when we do it, the money stops just going out the door.