Eventually the problem was "solved" both by increases in search depth and by improvements in search techniques (e.g. capture extensions).
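Capture extensions, and the closely related idea of quiescence search, attack the problem by refusing to evaluate a position in the middle of a capture sequence: the search keeps following captures until the position is "quiet". Here is a minimal sketch in Python of that idea. The `ToyPosition` class is purely illustrative (a stack of capturable piece values standing in for real move generation), not code from any actual engine:

```python
class ToyPosition:
    """Toy stand-in for a chess position, for illustration only.

    `material` is the evaluation from the side to move's point of view;
    `captures` is the sequence of piece values capturable in turn."""

    def __init__(self, material, captures):
        self.material = material
        self.captures = captures

    def evaluate(self):
        # Static evaluation: just the material count here.
        return self.material

    def capture_moves(self):
        # At most one capture available (a forced exchange sequence).
        return self.captures[:1]

    def make(self, captured_value):
        # After the capture it is the opponent's move, so the
        # evaluation is negated for their point of view.
        return ToyPosition(-(self.material + captured_value), self.captures[1:])


def quiescence(pos, alpha, beta):
    # "Stand pat": the side to move can usually do at least as well
    # as the static evaluation by choosing some quiet move instead.
    stand_pat = pos.evaluate()
    if stand_pat >= beta:
        return beta
    alpha = max(alpha, stand_pat)

    # Only captures are searched, so the recursion terminates once
    # no favourable exchanges remain and the position is "quiet".
    for move in pos.capture_moves():
        score = -quiescence(pos.make(move), -beta, -alpha)
        if score >= beta:
            return beta
        alpha = max(alpha, score)
    return alpha
```

As a usage example: suppose the main search stops right after one side grabs a defended pawn with its queen. From the opponent's point of view the static evaluation says they are a pawn down (`-1`), but they can recapture the queen (worth `9`):

```python
pos = ToyPosition(material=-1, captures=[9])
pos.evaluate()                                       # naive eval: -1
quiescence(pos, float("-inf"), float("inf"))         # quiescence: 8
```

The naive evaluation is off by a whole queen; quiescence search resolves the exchange before trusting the number, which is exactly the horizon-effect blunder it was invented to prevent.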
It still happens these days, though often in another way. There are a number of positions where even strong engines get the evaluation wrong because they cannot see far enough ahead. It isn't that the engine is pushing the threat further down the road with bad moves; rather, the winning idea simply takes a large number of moves to execute. For humans, spotting the key idea can be quite simple, and our more abstract thought processes allow us to recognise the eventual outcome well before a computer does.
In fact an example of this occurred last night at the ANU Chess Club. Mark Hummel (someone who has written his own chess engine, btw) decided to be a little adventurous and sacrifice his queen in return for a charging passed pawn. Eventually the pawn was halted at d2, and the material balance crystallised as R+B v Q. The problem for White was that the pawn was blockaded by the queen, and any move away simply allowed Black to queen. Feeding this into the computer, the engine kept returning 0.00 as the evaluation for quite a while. It was only when the program spotted the idea of protecting the pawn with the bishop (freeing up the rook) that it finally realised Black was simply winning, something that Hummel had spotted a number of moves earlier.
Hathiramani,Dillon - Hummel,Mark [C56]
ANU Challengers, 04.03.2015