What came first? The chicken, the egg, or the bodge to make everything work?
Friday, 10 December 2021 19:30

On Call This week's On Call brings another tale from the world of telephony, where everything that can go wrong duly does.

Today's story comes from a Register reader Regomised as "Greg". Greg was the applications manager for a now-defunct telco, and was the throat to choke for everything on the customer-facing side of the business. The network side, where telephone calls were actually connected, was a whole other ball game for which he was most definitely not responsible.

"On my side of the world," he told us, not at all highlighting a worryingly siloed side to the business, "we had a customer service application, which is where the sales clerks entered the details of new customers.

"The clerk would type up the new customer's details, the application would save them and tell the network side to set up the customer on the telephone exchanges. When done, the network side would tell the customer front end that this was all complete."

The system hadn't been so much designed as "evolved" over time, the dependencies likely lost in the mists of poorly commented code and unhelpful documentation that had accumulated over the years like crud in a cutlery drawer.

"It worked, most of the time," he said, "but that was all you could say."

And as much as one might hope for.

However, there came a time when a total power-down was needed. Everything in the building had to be turned off – computers, servers, everything. There was to be no UPS, no battery backup. All kit was to be shut down.

This shouldn't have presented a problem in itself. The remote telephone switches would deal with network calls, but customer service was to be totally offline.

"It took weeks of planning," said Greg. "We had never tested a total power down before, because we had redundant systems, so a total outage was unthinkable."

And so all manner of engineers were put on standby. On Call? Well, best get on-site. Just in case. Although everything had been planned. Nothing could go wrong. Right?

Everything was carefully shut down.

All OK.

The procedure to bring the systems back up was started ...

... and didn't work.

It transpired that the team had missed a dependency: "The customer front end system couldn't boot if it couldn't find the network system. And vice versa, the network system wouldn't play ball unless it could talk to the customer front end system."

Deadlock.

As is so often the case, management began the headless chicken dance, seeking someone to blame for why all this expensive hardware was unresponsive despite all those weeks of preparation.

Greg, however, donned his thinking cap and began to work through the problem. It transpired that the network system didn't need to see much more than a glorified heartbeat to confirm the customer service system was active. A bit of C and some shell scripts would be enough to spoof things.

The spoofing worked, and the network side of the business was brought up successfully.

Then Greg's bodge was carefully unpicked and the proper customer system was brought online. "Happy days! Everything worked!" he recalled gleefully. The hero of the hour.

Or maybe not.

"I never let it be known that I might just have missed this during some of our technical meetings," until now, that is.

"But I didn't fail to take credit for the solution!"

What's your most creative solution when called on to sort out that thing that could never happen? Or, considering the time of year, tell us your festive call-out moments. All it takes is an email to On Call. ®

Source: https://bit.ly/338AS4k