I would like to push back on several misconceptions in this question.
> Life is tough, deadlines are strict, and failures are costly. But reality usually is not black and white. One can try again and one can improve oneself in real life.
The first part of your premise contradicts the last: life is tough and deadlines are strict, which immediately implies that in many cases you will not get a chance at a re-do. Clients impose project deadlines, products need to be rolled out to market, and collaborations often depend on you completing your tasks on time.
An academic career will fill your life with strict deadlines: papers have to be submitted by a certain time, grant proposals have deadlines, your grad students will need to propose and graduate on schedule, your tenure case will be evaluated at a fixed time, and so on.
> If I pass an exam with a low but passing grade, then this grade sticks for the rest of my life and I can do nothing about it. It can ruin my prospects of an academic career.
Again, not necessarily true, on two counts. First, there is almost always something you can do about it: you can often talk to the instructor about options for extra credit, many universities offer retakes (as others have mentioned), and you can also simply retake the class (or even request that it be dropped from your transcript).
These things take time and effort on your part, but that is exactly the point: you are an adult and need to take ownership of your failures.
Second, a single bad grade usually will not keep you out of graduate school, especially if you show excellence elsewhere: research projects, strong extracurriculars, and community outreach activities are all taken into account.
Finally, this may be unpleasant to hear, but if you have consistently bad grades in some field, this is usually a sign that you may not be a good fit for an academic career in that field.
> Why is such a strict rule common?
There are several practical reasons. Allowing students to retake exams takes time, effort, and money, and it interferes with many downstream university processes:

- Someone needs to invigilate these exams, and those people need to be paid.
- Exams occupy halls that could otherwise be used for other things (e.g. rented out for events, which often take place during semester breaks, or used for winter/summer classes).
- Instructors need to write additional exams, and their difficulty needs to be calibrated against that of the first exam. This is not easy to do fairly.
One could question whether final exams are an effective method of assessment at all. That is an excellent question in its own right, with many different answers. I personally try to avoid them, but only because I usually teach small classes. In large classes, final exams remain one of the few impartial methods of large-scale assessment with relatively low potential for cheating.

There is some movement toward more continuous assessment methods, but these are also problematic: they can be biased by how much an instructor likes or dislikes a student, they are time-consuming, and (at least from my limited observations) they tend to disadvantage first-generation college students.