The most common analyses I've seen are generated automatically by scantron grading software, and are thus typically available only for multiple-choice questions. But a motivated person could perform the analyses themselves for any exam.
I've read a few articles on test analysis over the years, and this summary of a particular scantron software's results seems like a good overview of test analysis in general. A quick summary to improve searchability:
Item statistics:
Item difficulty - the percentage of students answering the item correctly. Desirable = above chance (e.g., above 25% for a four-option multiple-choice item).
Item discrimination - how strongly an item correlates with the test as a whole (students who did well on the exam overall get the item right more often than low-scoring students). Uses a statistic like the Pearson product-moment correlation; a "good" question scores above 0.2. Both item statistics are computed in the sketch after this list.
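Both item statistics are easy to compute yourself. A minimal sketch in Python, assuming a 0/1 scoring matrix with one row per student and one column per item (the matrix and variable names are illustrative, not any scantron vendor's format):

```python
import numpy as np

# Toy 0/1 response matrix: rows are students, columns are items,
# 1 = answered correctly. Real exams have far more of both.
responses = np.array([
    [1, 1, 0, 1],
    [1, 0, 0, 1],
    [1, 1, 1, 1],
    [0, 0, 0, 1],
    [1, 1, 0, 0],
])

# Item difficulty: fraction of students who got each item right.
difficulty = responses.mean(axis=0)

# Item discrimination: Pearson correlation of each item with the
# total score on the *other* items (excluding the item itself keeps
# the correlation from being inflated, which matters on short tests).
total = responses.sum(axis=1)
discrimination = np.array([
    np.corrcoef(responses[:, j], total - responses[:, j])[0, 1]
    for j in range(responses.shape[1])
])

print("difficulty:    ", np.round(difficulty, 2))
print("discrimination:", np.round(discrimination, 2))
```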
Test statistics:
Reliability coefficient - a measure of the test's internal consistency, which reflects test length, breadth, and the intercorrelations among items. Scores above 0.8 are considered excellent for a classroom test.
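For right/wrong items, the reliability coefficient reported is typically KR-20, which is Cronbach's alpha specialized to dichotomous items. A minimal sketch, assuming the same kind of 0/1 response matrix as above:

```python
import numpy as np

responses = np.array([  # toy 0/1 matrix: rows = students, cols = items
    [1, 1, 0, 1],
    [1, 0, 0, 1],
    [1, 1, 1, 1],
    [0, 0, 0, 1],
    [1, 1, 0, 0],
])

def cronbach_alpha(x):
    """alpha = k/(k-1) * (1 - sum of item variances / variance of totals)."""
    k = x.shape[1]
    item_var = x.var(axis=0, ddof=1).sum()
    total_var = x.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_var / total_var)

print(round(cronbach_alpha(responses), 3))  # above 0.8 = excellent
```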
Some other interesting measurements are used more in standardized testing than in classroom testing. These definitions are taken from this overview.
Construct validity - whether the exam actually measures the subject matter or some other variable, like reading skill. Generally assessed with a panel of "experts" or feedback from students.
Split-half reliability - measures whether different test items that purport to test the same concept produce similar results within a single exam (see the sketch after this list).
Criterion-related validity - measures how well the new test correlates with a known exam, like an ETS field test or GRE subject test.
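Of these, split-half reliability and criterion-related validity are plain correlations you can compute yourself. A minimal sketch, again with a toy 0/1 response matrix; the odd/even split and the Spearman-Brown correction are standard, but the data and the hypothetical `external_scores` vector are made up for illustration:

```python
import numpy as np

responses = np.array([  # toy 0/1 matrix: rows = students, cols = items
    [1, 1, 0, 1],
    [1, 0, 0, 1],
    [1, 1, 1, 1],
    [0, 0, 0, 1],
    [1, 1, 0, 0],
])

def split_half_reliability(x):
    # Score odd- and even-numbered items separately, correlate the two
    # half-scores across students, then apply the Spearman-Brown
    # correction because each half is only half the test's length.
    odd = x[:, 0::2].sum(axis=1)
    even = x[:, 1::2].sum(axis=1)
    r = np.corrcoef(odd, even)[0, 1]
    return 2 * r / (1 + r)

print(round(split_half_reliability(responses), 3))

# Criterion-related validity: correlate exam totals with scores on an
# established exam (hypothetical numbers, one per student).
external_scores = np.array([610, 540, 700, 480, 590])
print(round(np.corrcoef(responses.sum(axis=1), external_scores)[0, 1], 3))
```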
If you don't get this basic output from your scantron software, is there a department on your campus that will analyze your exam for you?