Most of the best modern methods for large-scale optimization involve making a local quadratic approximation to the objective function, moving towards the critical point of that approximation, then repeating. This includes Newton's method, L-BFGS, and so on.
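The iteration described above can be sketched in a few lines. This is a minimal illustration of the pure Newton step (not L-BFGS or any library routine), applied to a made-up smooth objective f(x, y) = x⁴ + y² chosen only for demonstration:

```python
import numpy as np

# Minimal sketch of Newton's method: repeatedly build the local quadratic
# model f(x + p) ≈ f(x) + gᵀp + ½ pᵀHp and step to its critical point
# by solving H p = -g. The objective f(x, y) = x**4 + y**2 is an
# illustrative example, not from any particular library.

def grad(x):
    # gradient of f(x, y) = x**4 + y**2
    return np.array([4 * x[0]**3, 2 * x[1]])

def hess(x):
    # Hessian of the same function
    return np.array([[12 * x[0]**2, 0.0],
                     [0.0, 2.0]])

x = np.array([1.0, 1.0])
for _ in range(20):
    p = np.linalg.solve(hess(x), -grad(x))  # step to the model's critical point
    x = x + p

print(x)  # approaches the minimizer at the origin
```

Because this objective is convex, the Hessian is positive semidefinite everywhere and each quadratic model is a bowl with a genuine minimum, so the iteration is well-behaved.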
A function can only be locally well-approximated by a quadratic with a minimum if the Hessian at the current point is positive definite. If the Hessian is indefinite, then either:

1. the local quadratic approximation is a good local approximation to the objective function, in which case it is a saddle surface, so using it would suggest moving towards a saddle point, which is likely the wrong direction; or
2. the local quadratic approximation is forced to have a minimum by construction, in which case it is likely a poor approximation to the original objective function.
(The same sort of issues arise if the Hessian is negative definite, in which case the function locally looks like an upside-down bowl.)
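The first failure mode is easy to see on a concrete example. For f(x, y) = x² − y² the quadratic model is exact, the Hessian is diag(2, −2) (indefinite), and the Newton step jumps straight to the model's critical point, which is a saddle:

```python
import numpy as np

# For f(x, y) = x**2 - y**2 the Hessian is constant and indefinite,
# and the quadratic model is exact. The Newton step therefore moves
# directly to the model's critical point (0, 0): a saddle, not a
# minimum (f decreases without bound as |y| grows).

H = np.array([[2.0, 0.0],
              [0.0, -2.0]])

x = np.array([0.5, 1.0])
g = np.array([2 * x[0], -2 * x[1]])       # gradient at x
step = np.linalg.solve(H, -g)             # Newton step

print(x + step)               # [0. 0.], the saddle point
print(np.linalg.eigvalsh(H))  # mixed-sign eigenvalues: indefinite
```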
So, these methods will work best if the Hessian is positive definite everywhere, which is equivalent to convexity for smooth functions.
Of course, all good modern methods have safeguards in place to ensure convergence when moving through regions where the Hessian is indefinite: e.g., line searches, trust regions, terminating a linear solve when a direction of negative curvature is encountered, and so on. However, in such indefinite regions convergence is generally much slower, since full curvature information about the objective function cannot be used.
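One such safeguard, sketched here under my own assumptions rather than as any specific library's routine, is Hessian modification: add a multiple of the identity until a Cholesky factorization succeeds, then use the resulting system to compute a descent direction. The damping discards some curvature information, which is exactly why progress through indefinite regions slows down:

```python
import numpy as np

# Sketch of a "modified Newton" safeguard (illustrative, not a
# particular library's implementation): if H is not positive definite,
# damp it with tau * I until Cholesky succeeds, then solve
# (H + tau I) p = -g. The result is guaranteed to be a descent
# direction, at the cost of distorting the curvature information.

def modified_newton_step(H, g, tau=1e-3, max_tries=60):
    n = H.shape[0]
    for _ in range(max_tries):
        try:
            L = np.linalg.cholesky(H + tau * np.eye(n))
            # solve (H + tau I) p = -g via the Cholesky factor
            y = np.linalg.solve(L, -g)
            return np.linalg.solve(L.T, y)
        except np.linalg.LinAlgError:
            tau *= 10.0  # not positive definite yet; damp harder
    raise RuntimeError("no positive definite modification found")

H = np.array([[2.0, 0.0], [0.0, -2.0]])   # indefinite Hessian
g = np.array([1.0, -2.0])
p = modified_newton_step(H, g)
print(g @ p)  # negative: the modified step is a descent direction
```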