24 September 2010

python and peer review

devoted a questionable amount of time posting to, or editing invisible parts of, astropython.org. yes, the time-averaged posting rate to astropython is up thanks to two of us; still, there are few commenters and no new contributors.

then i went back to kindergarten -- cutting figures out of (printed) papers to compare with other figures, all as part of being a journal referee. besides making me print something out, this effort typifies for me the actual barrier to meaningful refereeing in the current publishing paradigm -- papers (sans data) as slim advertisements of research: if the closest thing to data i have are cut-out paper figures, then how exactly am i supposed to provide the high-quality / blue-ribbon value that journals claim peer review provides for arbitrating scientific progress?

By calling that barrier "actual," I mean to contrast it with the "imaginary" problem of a referee having too much data to review.
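
For contrast, here is a minimal sketch of what refereeing could look like if papers shipped their data: overlay the authors' points on mine in a few lines of Python, no scissors required. The file names and two-column (x, y) layout are hypothetical, just to make the point concrete.

    import numpy as np
    import matplotlib.pyplot as plt

    # hypothetical files: two columns (x, y) each -- the paper's
    # published measurements and my own, for direct comparison
    theirs = np.loadtxt("their_fig3_data.txt")
    mine = np.loadtxt("my_measurements.txt")

    plt.plot(theirs[:, 0], theirs[:, 1], "o", label="paper, fig. 3")
    plt.plot(mine[:, 0], mine[:, 1], "s", label="my data")
    plt.legend()
    plt.savefig("comparison.png")

Ten lines, and the comparison would be quantitative instead of a collage.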

Nevertheless, the review remains unfinished, an editor is likely peeved, and my own papers -- hypothetically much more tightly written and conceptually woven than the one I'm reviewing -- languish.
