The job that got rescheduled four times

One installation. Four engineer visits. Eighteen days. Five separate system failures, none of them catastrophic on their own, each one compounding the last. This is what a bad week looks like when everything goes slightly wrong.

This is a true story in the way that all composite stories are true: it didn’t happen to one customer, but it has happened to many. The details have been assembled from the kinds of failures we see repeatedly. If you manage field operations, you will recognise it.

Monday — the first visit

A customer orders a new fibre broadband service. The order goes through the website and validates cleanly; an installation appointment is booked for the following Monday, between 8am and 1pm. The customer takes a morning off work.

The engineer arrives at 9:15. The address is correct. The customer is home. But the job sheet specifies cabinet 4, port 22. Cabinet 4, port 22 is occupied — has been for at least two years. The engineer calls the office. The office checks the inventory system. Nobody can quickly identify an available port. The engineer cannot complete the job. He marks it as an abortive visit — equipment not available — and leaves. The customer is promised a call back.

Root cause: inventory record incorrect. Port 22 had been reallocated during a cabinet capacity upgrade and never updated in the provisioning system.

Wednesday — the callback that wasn’t

The promised callback doesn’t come on Monday or Tuesday. The customer calls on Wednesday. The agent who takes the call can see the abortive visit note but doesn’t have visibility of what caused it — the note says “port unavailable”, not why. She reschedules the appointment for Friday and flags it for investigation. The investigation request goes into a queue.

The inventory correction is made on Thursday, the day before the rescheduled visit. Port 18 is assigned. The job is updated. Nobody tells the engineer who has been assigned Friday’s job that the port assignment has changed from the original job sheet — the update is in the system, but the engineer is looking at a cached version of the job on his mobile app.

Friday — the second visit

The engineer arrives. Checks the job sheet on his app. Goes to cabinet 4, port 22 — the original assignment, still showing on his device because the app hasn’t synced since Thursday’s update. Port 22 is occupied. He calls the office. This time they can tell him port 18 is the correct assignment. He goes to port 18. Connects successfully.

But the customer’s property has an internal wiring issue — the master socket is in a difficult location, and the installation requires internal work the engineer hasn’t been told about and doesn’t have the right equipment for. He completes the external connection and leaves the internal work for a specialist visit. The customer has a working connection but a dangling cable.

The following Tuesday — the third visit

A specialist engineer is dispatched for the internal wiring. He arrives, assesses the situation and realises the internal wiring job is larger than the brief suggested — the previous engineer’s notes were minimal. He can complete part of it but needs a specific cable run that requires a second person. He makes a start, documents what’s needed, and leaves. Third visit, job still incomplete.

The Friday after — the fourth visit

Two engineers. The internal wiring is completed. The service is tested and confirmed working. The customer has taken a third day off work. The total elapsed time from the original order to completion: eighteen days. The SLA was ten working days. Miss.

Total engineer visits: four. A job that should have been one.

What caused each failure

It’s tempting to look at this story and identify a single moment where it went wrong. There wasn’t one. There were five separate failures, each independent, each preventable.

The inventory error that sent the first engineer to an occupied port was a records maintenance failure. The port had been reallocated and never updated. Basic inventory hygiene.
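
The fix is a gate, not a heroic data cleanse: check the assigned port against live inventory before anyone is dispatched. A minimal sketch of that gate in Python (every name in it is hypothetical, and a real system would query live records rather than an in-memory set):

    from dataclasses import dataclass

    @dataclass
    class Job:
        job_id: str
        cabinet: int
        port: int

    class DispatchBlocked(Exception):
        pass

    class Inventory:
        """Toy in-memory stand-in for live inventory records."""
        def __init__(self, occupied, ports_per_cabinet=24):
            self.occupied = occupied  # set of (cabinet, port) pairs in use
            self.ports = range(1, ports_per_cabinet + 1)

        def port_is_free(self, cabinet, port):
            return (cabinet, port) not in self.occupied

        def free_ports(self, cabinet):
            return [p for p in self.ports if (cabinet, p) not in self.occupied]

    def validate_before_dispatch(job, inv):
        """Gate: no engineer is dispatched against a port that is not free."""
        if inv.port_is_free(job.cabinet, job.port):
            return job
        free = inv.free_ports(job.cabinet)
        if free:
            job.port = free[0]  # reassign before anyone drives anywhere
            return job
        raise DispatchBlocked(f"{job.job_id}: no free port in cabinet {job.cabinet}")

    # The Monday visit, replayed: port 22 is occupied, so the job is moved
    # to a free port (or blocked for review) before the engineer sets off.
    job = validate_before_dispatch(Job("JOB-1041", cabinet=4, port=22),
                                   Inventory(occupied={(4, 22)}))
    print(job.cabinet, job.port)  # prints a free port, not the occupied 22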

The callback that didn’t happen was a follow-up process failure. The commitment was made but not tracked in a way that would have flagged it as unresolved.
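
What “tracked” might mean in practice: the promise becomes a record with a deadline the moment it is made, and a routine sweep surfaces anything still open past its due time. A minimal sketch with invented names, not a description of any particular system:

    from dataclasses import dataclass
    from datetime import datetime, timedelta

    @dataclass
    class Commitment:
        customer_id: str
        promise: str
        due: datetime
        resolved: bool = False

    class CommitmentLog:
        def __init__(self):
            self.items = []

        def record(self, customer_id, promise, hours=24):
            self.items.append(Commitment(customer_id, promise,
                                         datetime.now() + timedelta(hours=hours)))

        def overdue(self, now):
            return [c for c in self.items if not c.resolved and c.due < now]

    # Monday: the promise is logged the moment it is made.
    log = CommitmentLog()
    log.record("CUST-381", "call back about aborted install", hours=24)

    # Tuesday's sweep flags it before the customer has to chase on Wednesday.
    for c in log.overdue(datetime.now() + timedelta(hours=36)):
        print("OVERDUE:", c.customer_id, c.promise)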

The app sync failure that sent the second engineer to the wrong port was a mobile data architecture problem. Job updates weren’t being pushed to field devices in real time.
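
Real-time push is the proper fix, but even a staleness check at the point of use would have caught this one: when the engineer opens a job, a cheap version lookup decides whether the cached sheet is safe to work from. A sketch under those assumptions (the names and the dict-backed “server” are stand-ins):

    from dataclasses import dataclass

    @dataclass
    class JobSheet:
        job_id: str
        version: int
        cabinet: int
        port: int

    def open_job(cached, server):
        """Refuse to work from a stale cache: compare versions, refetch if behind."""
        if server["versions"][cached.job_id] > cached.version:
            return server["jobs"][cached.job_id]  # cache is stale: refresh first
        return cached

    # Friday morning: the cached sheet still says port 22, but Thursday's
    # correction bumped the server copy to version 2 with port 18.
    cached = JobSheet("JOB-1041", version=1, cabinet=4, port=22)
    server = {
        "versions": {"JOB-1041": 2},
        "jobs": {"JOB-1041": JobSheet("JOB-1041", version=2, cabinet=4, port=18)},
    }
    print(open_job(cached, server).port)  # 18, not the stale 22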

The internal wiring surprise was a site survey failure. A job known to carry a risk of internal work should have been surveyed, or at minimum flagged, before dispatch.
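
Flagging is a rule, not a project. Something like the sketch below, run at booking time, would have been enough; the order attributes are invented for illustration:

    RISK_FLAGS = ("internal_wiring_likely", "no_existing_master_socket", "listed_building")

    def needs_survey(order):
        """True when an order hints at internal work and no survey is on record."""
        return any(order.get(f) for f in RISK_FLAGS) and not order.get("survey_completed")

    order = {"order_id": "ORD-2210", "internal_wiring_likely": True}
    if needs_survey(order):
        print("Book a survey, or at least a longer two-person appointment, first")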

The incomplete notes that meant the third engineer had insufficient context were a job documentation failure. Engineers completing partial jobs need to record enough for the next engineer to continue without a phone call.
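
The cheapest enforcement point is the close-out screen: a partial completion simply cannot be saved until the handover fields are filled in. A sketch, with the field names as assumptions:

    REQUIRED_HANDOVER = ("work_done", "work_remaining", "equipment_needed", "access_notes")

    def close_partial(job_id, handover):
        """Refuse a partial close-out until every handover field has content."""
        missing = [f for f in REQUIRED_HANDOVER if not handover.get(f, "").strip()]
        if missing:
            raise ValueError(f"{job_id}: handover incomplete, missing {missing}")
        print(f"{job_id}: closed as partial with a usable handover")

    # The second visit, replayed: the job cannot close on minimal notes,
    # so the specialist arrives already knowing the shape of the work.
    close_partial("JOB-1041", {
        "work_done": "external connection complete on cabinet 4, port 18",
        "work_remaining": "internal cable run to master socket (awkward location)",
        "equipment_needed": "internal wiring kit; two-person cable run",
        "access_notes": "master socket behind built-in unit in hallway",
    })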

None of these failures required a bad engineer or a careless process owner. They were structural — gaps in systems and integrations that, in combination, turned a one-visit job into a four-visit saga.

Stop the cascade before it starts

Confideo OSS validates inventory before dispatch, pushes job updates in real time and tracks every commitment through to resolution.