We recently submitted https://www.medrxiv.org/content/10.1101/2025.08.14.25333653v1.full to PLOS Computational Biology. Unfortunately, we got desk rejected (not the end of the world, it happens), but it's left me feeling a little confused, as I was fairly confident this was in scope and we had tuned the paper with this in mind. The only feedback we've had so far is that it was outside the journal's policies.
My guess is that it was judged not novel enough and too software focussed? I'd be very keen to hear what people think, as I have a few upcoming projects that I thought were a good fit for PLOS Comp B, and I'm now wondering whether it's a better use of time to look elsewhere.
I guess there are two parts to this:
1. Actual scope mismatch
2. Poor communication on our part of the scope fit (i.e. in the abstract), though I'm in two minds about whether this is the sort of thing we should be doing.
So feedback split along those lines would be awesome!
Below is an extract from us looking for some feedback (sadly, we've been moved into the appeals process, so I'm not sure we're going to learn more), which might give some useful context on how we're thinking about these things:
Thank you for considering our submission, "Baseline nowcasting methods for handling delays in epidemiological data". We would appreciate clarification on why the manuscript was deemed out of scope.
We felt PLOS Computational Biology was a good match as our work directly extends two papers previously published in the journal (Wolffram et al., 2023: https://journals.plos.org/ploscompbiol/article?id=10.1371/journal.pcbi.1011394; Mellor et al., 2024: https://journals.plos.org/ploscompbiol/article?id=10.1371/journal.pcbi.1012849). The work aligns with the Epidemiology & Public Health section by enabling better use of surveillance data to understand and model the distribution of diseases in human populations.
Our paper significantly extends Wolffram et al.'s method by addressing the major limitations they identified: we add day-of-week specific delay estimation to account for weekday effects, and support for zero counts, both of which are important in many real-world settings. We also identified an issue in their method's implementation and highlighted the impact this had on their findings. Fixing this bug and adding our extensions improved nowcast accuracy when validated on the same German COVID-19 data. When applied to the norovirus data from Mellor et al., our method substantially outperformed their baseline and provided a better-motivated comparison. Our baseline revealed when and why their evaluated methods performed better or worse, providing new computational insights into relative model performance. Across both case studies, we found that tuning baseline methods to their context improves performance. We provide novel methods and tools, including multiple observation models and schemes for sharing information across strata, so that others can perform this tuning in their own applications.
Beyond methodological improvements, we propose our implementation as a computational benchmark for nowcasting method development and justify why it meets the criteria for a benchmark approach. While the journal's 2018 editorial, "Putting benchmarks in their rightful place" (https://journals.plos.org/ploscompbiol/article?id=10.1371/journal.pcbi.1006494), focused on benchmarks that compare multiple existing methods, our work provides a complementary type of benchmark: a standardised baseline implementation that serves as a reference point against which new methods can be evaluated. This addresses the same core need for unbiased performance comparisons in computational biology. Methodological advances in nowcasting are hampered by the absence of standardised comparisons. We expect future nowcasting papers, including those submitted to PLOS Computational Biology, to use our method as a starting point for performance comparisons.
The software implementation in our R package _baselinenowcast_, which supports this work, represents a significant advance as an open-source tool of broad utility. The package enables new biological insights through better use of incomplete surveillance data, supporting both further methodological development and practical application, in line with the journal's scope for software that provides new biological insights through improved data analysis.
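For anyone not familiar with this style of method, here is a toy R sketch of the general idea behind a baseline nowcast: scale up partially observed counts using reporting proportions estimated from the reporting triangle, stratified by weekday to mimic the day-of-week extension. To be clear, this is not the baselinenowcast API; all names, data, and structure below are made up for illustration.

```r
# Toy sketch of a reporting-triangle baseline nowcast with day-of-week
# stratification. Illustrative only: not the baselinenowcast interface.
set.seed(1)
D <- 7   # maximum reporting delay (days)
n <- 60  # number of reference dates

# Fully observed counts by delay: each day's total is split across delays
delay_probs <- c(0.4, 0.25, 0.15, 0.1, 0.05, 0.03, 0.01, 0.01)
counts <- t(sapply(seq_len(n), function(i) {
  rmultinom(1, rpois(1, 20), delay_probs)[, 1]
}))

# Build the reporting triangle: a report for reference date i at delay d
# arrives on day i + d, so anything arriving after "today" (day n) is NA
triangle <- counts
for (i in seq_len(n)) {
  triangle[i, (i + 0:D) > n] <- NA
}

weekday <- (seq_len(n) - 1) %% 7  # toy weekday index per reference date

# Nowcast a reference date's total by inverse-probability scaling, using
# delay proportions estimated from complete rows with the same weekday
nowcast_total <- function(row, wd) {
  complete <- !apply(is.na(triangle), 1, any) & weekday == wd
  p_delay <- colSums(triangle[complete, , drop = FALSE])
  p_cum <- cumsum(p_delay) / sum(p_delay)  # P(reported within delay d)
  d_obs <- max(which(!is.na(row))) - 1     # largest delay observed so far
  sum(row, na.rm = TRUE) / p_cum[d_obs + 1]
}

# Nowcast the most recent (most incomplete) reference date
nowcast_total(triangle[n, ], weekday[n])
```

The day-of-week stratification here is the crude version: each weekday gets its own delay distribution, which discards a lot of data, hence the schemes for sharing information across strata mentioned above.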