9

I am currently doing research on an algorithm that may (or may not) turn out to be slower than other similar algorithms. The idea behind it seems solid, although it will probably not be as fast as already known and widely used solutions. Finishing and testing the work is of course part of the science, so the research will be carried out regardless.
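For concreteness, the kind of testing I mean can be set up with a simple timing harness. This is a minimal sketch: `candidate_sort` is a placeholder stand-in (an insertion sort), not the actual algorithm under study, and the baseline is just Python's built-in `sorted`.

```python
import random
import timeit

def candidate_sort(xs):
    """Hypothetical stand-in for the new algorithm: plain insertion sort."""
    xs = list(xs)
    for i in range(1, len(xs)):
        key = xs[i]
        j = i - 1
        while j >= 0 and xs[j] > key:
            xs[j + 1] = xs[j]
            j -= 1
        xs[j + 1] = key
    return xs

def baseline_sort(xs):
    """The established solution to compare against."""
    return sorted(xs)

# Fixed seed so repeated runs time the same input.
rng = random.Random(0)
data = [rng.random() for _ in range(2000)]

# Establish correctness before measuring speed.
assert candidate_sort(data) == baseline_sort(data)

t_new = timeit.timeit(lambda: candidate_sort(data), number=5)
t_old = timeit.timeit(lambda: baseline_sort(data), number=5)
print(f"candidate: {t_new:.4f}s, baseline: {t_old:.4f}s")
```

Even if the candidate loses this comparison, the measured gap (and where it comes from) is exactly the material a negative-results write-up would need.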

So here is my question: is writing an article about conducted, but failed work a good thing?

My thinking is that publishing such an article would help people conducting similar research, who would otherwise waste their own time trying to do the same thing. Saving their time is, in my opinion, crucial for the scientific community, but I am unsure how others perceive the problem, and I do not want others to have a negative opinion of my work.

There are many similar questions on this site, but none seem to duplicate this exact one. This question, for instance, covers confirming or denying others' research carried out and described elsewhere, while I am talking strictly about inventing and disproving the idea myself. Many others discuss the rationale behind not publishing negative results, but that is also a slightly different subject (although undoubtedly related).

  • @Corvus I think the linked topic is similar, but not a duplicate. That topic covers confirming or denying others' research carried out and described elsewhere, while I am talking strictly about inventing and disproving the idea myself. Thus it only partly answers my question, unless I were to separate my own work into two articles (one describing the idea, and the other showing negative results). – Paweł Stawarz Mar 15 '15 at 02:50
  • That question is just one of many. Try searching the site for "negative results". – Corvus Mar 15 '15 at 02:52
  • @Corvus I just did, and I cannot see any strict duplicates. On StackOverflow asking similar questions is acceptable, so I thought the same applies here. If I am wrong and this question is too similar to others, I guess I will have to get along with it. – Paweł Stawarz Mar 15 '15 at 02:58
  • Searching "failures" also produced some interesting hits, for example: Why don't people publish failures? – Mad Jack Mar 15 '15 at 03:16
  • I think it's close, but not a duplicate. Other questions are on incentives or the rationale for behavior, this one is asking for a good practice (and these are not the same things). – Piotr Migdal Mar 15 '15 at 03:23
  • These kinds of things happen in more numerical sciences. For instance, finite difference schemes are pretty useful and are very fast but there are different ways to compute derivatives numerically which can also be useful and these have garnered attention as well. Different techniques can be enlightening and lead to new research for sure. I clearly can't comment on your situation but I would see what people in your field think. – Cameron Williams Mar 15 '15 at 03:24
  • Try to find something positive to say about it: identify some circumstances (however obscure) in which the algorithm performs better, or some obstacles which if removed would make it perform better, or some variants that merit further research which could improve the performance. At the very least be very clear about the insights gained: why is it that an idea that on the surface looks so promising turned out not to work in practice? – Michael Kay Apr 27 '18 at 17:17

3 Answers

6

In order to give your work sufficient rigour and novelty for publication, you might need to set out the theoretical basis for why you thought your new algorithm would be quicker; and an analysis of why, despite that theoretical underpinning, it was not quicker: i.e. how did reality interfere with the theory?

In theory, the test for whether it is a good idea to publish failed attempts is: would it help other researchers advance? In practice, even if it would, you might struggle to find a journal that considers it of sufficient interest to be worth publishing.

Some fields do now have a journal of negative results, which could be a suitable venue. That seems, for now, to be mostly restricted to fields within the life sciences, which might not be your field.

So you might want to consider going to one of the new mega-journals, which select only on the basis of whether your work is methodologically sound and on-topic, and whether you've paid their publishing fee.

And there you can see one risk: you have to be careful to avoid predatory publishers.

Another, less obvious, risk is that because some mega-journals have an incredibly wide scope (e.g. all of science), they are not always able to find suitable peer reviewers. Your work could therefore appear alongside some genuinely shoddy papers, leaving readers doubting its quality even if the journal is widely held to be non-predatory.

410 gone
  • For what it is worth, the blog post to which you link might not be considered unbiased. I find the Scholarly Kitchen blog to have a strong anti-open access bent, and author Kent Anderson (now the publisher of Science) has a long history of criticizing open access in general and the PLoS family of journals in particular. This is not to say that bad articles don't get through the editorial process at PLoS One; if you publish 30,000+ articles a year it will happen. But I think there is less reason to fear the open access mega-journals than Anderson would have you believe. – Corvus Mar 15 '15 at 08:20
  • @Corvus I find Scholarly Kitchen and Beall's list of predatory publishers to be invaluable perspectives to give some relief from the flood of uncritical open-access promotion. – 410 gone Mar 15 '15 at 09:52
3

The key question to analyze for this decision is whether you just have a "failure" or a "negative result." In this case, I am characterizing the distinction as follows:

  • A "failure" is any case where you didn't get what you wanted in the study. This might be a negative result, but it might also be due to error, mistakes, design problems, management problems, etc.
  • A "negative result" is a special type of failure, which clearly establishes that the system that you are dealing with could not produce the result you wanted or expected.

Negative results are often harder to establish than positive results, because there are generally more alternatives to rule out. They are, however, legitimate contributions to knowledge and should be published. Thus, if you have a project that is a failure, you should analyze it and ask: how hard would it be to turn this failure into a publishable negative result? If it is not too difficult, you should definitely publish your negative result.

For some cases, such as medical studies, the answer is obvious and publication may even be required. For others, such as certain complex software systems projects, it may be effectively impossible to actually establish a negative result. For your own case, of investigating an algorithm, it could go either way, depending on the nature of the algorithm and your investigation thereof.

jakebeal
-2

It's a very interesting idea, showing what you call failed attempts. However, we need to realize that many problems, especially the more complicated ones, may need many attempts or different approaches. These are more steps toward a goal than failed attempts. I think the word "fail" actually implies giving up; maybe give yourself a D grade, or a C. But yes, this would show the process, which is very important. For example, the Ford Model T was the most successful, but there were many predecessors in development before it (the Model A, Model B, Model C, etc.), leading toward, perhaps, the goal of the Model T, which proved most profitable.

mike J
  • Can you please try to clarify what you are saying a bit? I can't understand how this answers the question as currently written... – jakebeal Apr 15 '15 at 19:09