With some clarifications obtained from the comments, I will now attempt an answer. Brace yourself for a long and perhaps somewhat controversial post.
Some efforts are being made to make science more reproducible and transparent. But I believe that what you are hoping to achieve here is fundamentally incompatible with science as we know it. The key reason is that (applied) science is not the most rigorous activity on the face of the planet. Perhaps that is a bit ironic, given that science is held up as the standard of rigor and thoroughness, but industry might actually have an edge over it here.
Scientists' job is to find new things that might be useful. It is unrealistic to expect them to make their work ISO-compliant, provide installation instructions for half a dozen distros, and fix every bug arising from an update to an external dependency. Some fields do better in this regard, some worse, but without serious industry backing, the resources required for that level of coverage and continuous support are simply not there. The same argument can be made for open source software. The way the whole system is currently set up, scientists do the curation and the heuristics, and only then is serious engineering effort poured in. I would expect the baseline to rise over time, but to remain consistently behind industry. It is, and will be for the foreseeable future, a tooling problem: most scientific software can produce, say, r^2 values, but that falls far short of what full reproducibility actually requires. As soon as you know the requirements and the specification in enough detail, it is not science anymore; it is engineering, and not the most creative part of engineering, either.

We will never have nearly enough people and money to do bleeding-edge research with full reproducibility. Instead, the best studies will eventually stand out and grow, while the worse ones will be buried under the sands of time. As in many things in life, it is often more efficient to invest $1000 in one idea than $1 in each of 1000 ideas, and the balance generally lands somewhere in between: some ideas get $500, some $100, some $1. Scientists generate lots of ideas, so naturally most of them get no more than that $1, just enough to keep going. What that $1 buys you changes over time, but it is never much.
As an anecdote, I once sat down with people from a big IT company who were giving us a product presentation for a joint project, describing the roles of a team of 7-10 people who were supposedly doing the technical part. It felt utterly out of touch, almost surreal, as I was literally the only person covering all of those roles on our side. And every time I configure iptables or Docker containers, or figure out JWT authorization, I am hurting my research career (and, arguably, wasting taxpayers' money). There are better-known examples of this gap between academia and industry: look at some of the foundational works in machine learning, or some branches of computer science, and you will find very poorly written code. There are no automated tests, which is another huge problem, by the way: to write tests, you need to know the answer already (a minimal illustration follows below). The early Internet name resolution system, a plain text file (HOSTS.TXT), was maintained by just one amazing gal before someone finally decided to invest the resources in a proper overhaul. All of these things were eventually solved, just not by academia alone.
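To make that testing point concrete, here is a minimal sketch in Python. Everything in it (the names, the numbers, the toy fit) is hypothetical and mine, not taken from any particular scientific codebase; the contrast is the point. An engineering routine has a specification that tells you the expected answer in advance, while for a research routine the "correct" output is exactly what the study is trying to discover, so the best you can assert are weak invariants.

```python
import math

def monthly_payment(principal: float, annual_rate: float, months: int) -> float:
    """Engineering: a fully specified amortization formula."""
    r = annual_rate / 12
    return principal * r / (1 - (1 + r) ** -months)

def test_monthly_payment():
    # The specification pins down the answer ahead of time, so the test is easy.
    assert math.isclose(monthly_payment(100_000, 0.06, 360), 599.55, abs_tol=0.01)

def fit_model(points: list[tuple[float, float]]) -> float:
    """Research stand-in: ordinary least squares fit, returning r^2."""
    n = len(points)
    mx = sum(x for x, _ in points) / n
    my = sum(y for _, y in points) / n
    slope = (sum((x - mx) * (y - my) for x, y in points)
             / sum((x - mx) ** 2 for x, _ in points))
    intercept = my - slope * mx
    ss_res = sum((y - (intercept + slope * x)) ** 2 for x, y in points)
    ss_tot = sum((y - my) ** 2 for _, y in points)
    return 1 - ss_res / ss_tot

def test_fit_model():
    r_squared = fit_model([(0, 1.1), (1, 1.9), (2, 3.2), (3, 3.8)])
    # Nobody knows the "true" r^2 in advance -- that is the research question.
    # All we can check are weak invariants, which catch almost no real bugs.
    assert 0.0 <= r_squared <= 1.0

if __name__ == "__main__":
    test_monthly_payment()
    test_fit_model()
    print("Both tests pass, but only the first one actually verifies an answer.")
```

Note the asymmetry: the first test would catch a real regression, while the second would happily pass on a subtly broken analysis.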
Another problem to consider is a socioeconomic one. You have citizens paying taxes, and they are ultimately interested in consumer goods and in the well-being of their industry. The average Joe can't do much rocket science himself, but, you know, he comes to the factory five days a week and produces parts for the Quantum Superiority™ 3000. Prestige is all fine and well, but the reason many people care about it is not having to pay triple for a Quantum Leap™ 2 produced in some other country because their own country lacks comparable technology. Scientists are generally more liberal and cooperative than average, but for a government, 100%-reproducible science with minimal effort in all but the most fundamental fields means committing to international cooperation and playing the centipede game on a global scale (each round of mutual cooperation grows the shared payoff, yet each player is tempted to defect first and grab the larger share). Scientists may strive to benefit all of humanity, but funding agencies operate differently, and there is a lot of politics involved in these decisions (say, the NSF would strive to reduce friction and waste domestically while benefiting scientists from Russia and China as little as possible).
You list web dev tooling as an example, but it only became open, and any good at all, when companies started competing on something else. At first the concept of the wheel is a closely guarded secret; then it is out and everyone is building cars; and finally you may agree to come together and standardize wheels and roads.
All that is to say that we are not getting zero-effort full reproducibility anytime soon, if ever. One could define some arbitrary standards and develop tooling that makes them easy to meet, but that would just move the goalposts. Science keeps self-correcting to stay one step ahead of the flock, and paradoxically that means striving to keep our lives just hard enough to stay busy. As soon as something "just works", we move on to the next thing that still does not. This is just the human condition.
paperswithcode exists in the field of ML, but it is just a collection of GitHub repos - running the code is a massive, hard-to-automate effort. Or did you mean something like Zenodo, just with more granular tags? – Lodinn Jan 04 '23 at 16:43