Tests are supposed to be tightly coupled to the application.
The fact that many tests fail when changes are made is a good thing. Far better than throwing it over the wall and having your customers/users find the errors.
However, the state of your code base aside, this rather assumes that you have good tests in place.
Some questions to ask when tests fail:
Is the test valid?
Does the logic under test still apply? If not, remove the test.
Are the same tests failing?
If the same sorts of tests are failing, this highlights where your code is particularly brittle. That may sound like a flippant remark, but check through your source control history and you may find the same tests have been repaired again and again and again.
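As a minimal sketch of mining that history, assuming a git repository with tests under a `tests/` directory (adjust the path to your layout):

```python
import subprocess
from collections import Counter

# List every file touched by each commit under tests/; the empty
# --pretty format leaves only the file names in the output.
log = subprocess.run(
    ["git", "log", "--name-only", "--pretty=format:", "--", "tests/"],
    capture_output=True, text=True, check=True,
).stdout

# Tests that keep turning up near the top have been repaired repeatedly.
counts = Counter(line for line in log.splitlines() if line.strip())
for path, hits in counts.most_common(10):
    print(f"{hits:4d}  {path}")
```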
Is the test in a reasonable area?
How close to the action is the failing test? If tests are failing in areas where you wouldn't expect them to, again, you have an issue you need to address.
Do you have to massage a test through?
Do you have tests that require a number of changes just to get them through? If so, re-engineer the test so that it covers a smaller area of functionality.
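As a rough illustration (the `Order` class and its behaviour are invented purely for the example), compare one sprawling test with the focused tests it could be split into:

```python
import pytest

class Order:
    # Deliberately tiny stand-in for whatever your real code does.
    def __init__(self):
        self.total, self.status = 0.0, "OPEN"
    def add_item(self, price, quantity=1):
        self.total += price * quantity
    def apply_discount(self, fraction):
        self.total *= 1 - fraction
    def checkout(self):
        self.status = "PAID"

@pytest.fixture
def order():
    return Order()

# Before: one sprawling test that needs massaging whenever any step changes.
def test_order_lifecycle(order):
    order.add_item(10.0, quantity=2)
    order.apply_discount(0.10)
    order.checkout()
    assert order.total == pytest.approx(18.0)
    assert order.status == "PAID"

# After: focused tests, so a failure points straight at the broken piece.
def test_adding_items_updates_total(order):
    order.add_item(10.0, quantity=2)
    assert order.total == pytest.approx(20.0)

def test_discount_reduces_total(order):
    order.add_item(10.0, quantity=2)
    order.apply_discount(0.10)
    assert order.total == pytest.approx(18.0)

def test_checkout_marks_order_paid(order):
    order.checkout()
    assert order.status == "PAID"
```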
Do the same fixes have to be applied to different tests?
If you're applying the same fix to multiple tests, this indicates a code smell. Either the test is too large, or there is common code that can be refactored out.
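A minimal sketch of pulling that common code into one place, here as a pytest fixture (the `make_user` builder and its fields are hypothetical):

```python
import pytest

def make_user(name="alice", admin=False):
    # One place to build test data; when the shape of a user changes,
    # only this builder needs the fix, not every test.
    return {"name": name, "admin": admin}

@pytest.fixture
def admin_user():
    return make_user(admin=True)

def test_admin_has_admin_flag(admin_user):
    assert admin_user["admin"] is True

def test_admin_keeps_default_name(admin_user):
    assert admin_user["name"] == "alice"
```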
Do the tests follow the FIRST principles?
Ensure the tests follow the basic rules: Fast, Independent, Repeatable, Self-validating, and Timely.
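As one small illustration of the Repeatable rule, the first test below depends on the wall clock while the second injects a fixed date (`is_weekend` is invented for the example):

```python
import datetime

def is_weekend(day: datetime.date) -> bool:
    return day.weekday() >= 5  # Monday is 0, so 5 and 6 are the weekend

# Not repeatable: the outcome depends on when the suite happens to run.
def test_weekend_flaky():
    assert is_weekend(datetime.date.today()) is False  # fails every weekend

# Repeatable: a fixed date gives the same answer on every run.
def test_weekend_deterministic():
    assert is_weekend(datetime.date(2024, 1, 6)) is True  # a Saturday
```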
As for how you can improve the code without making wholesale changes, that is difficult to advise on without knowing the code base. As a starting point, I'd be inclined to target your changes on areas where tests are consistently failing.