And kind of upset. I thought the reviews were rather harsh. They seemed to misunderstand and downplay the humanistic payoff of the paper, instead discounting the piece for not following an (IMO stale) intro-methods-results-conclusion rubric with heavy citation of related methods, as if I were only developing new tools rather than deriving new historical insights from them. I’m not familiar with this event, but I thought this was more computational humanities than computational social science or computer science. What disciplinary backgrounds do the reviewers hail from?
Congratulations, though, to those accepted, and thanks to the organisers for putting this together. Sorry for being a bit petty: I’ve just been rejected from ADHO a few times now, I think for being pitched a bit too much at a computation-heavy audience, and now rejected from CHR for being pitched a bit too much at a humanistic audience, and I’m wondering where my work even belongs anymore. But I will work to improve the piece and submit elsewhere.
Hi Ryan! It was also my impression that CHR was on the “more computational humanities” side, but I’ve heard people have been having somewhat similar and discouraging experiences with the reviews.
The mismatch in expectations could be worth discussing, I think. To me, CHR is trying to strike a balance between domain-driven knowledge and technical nuance; perhaps more clarity in the guidelines or the mission statement would recalibrate reviewers’ expectations a bit?
Sorry to hear you were rejected; I’d gladly read your work!
I’m sorry you got rejected, and I absolutely don’t want to invalidate your experience, but I wanted to say that I have been accepted with (what I think is) a similar approach (although I’m not sure I understood you correctly):
I did a short paper which isn’t crazy technical and definitely has a strong Humanities application.
I’ve also had what was, in my opinion, a very nice experience with the reviews. I’m no stranger to mean comments, but my reviewers were really sweet and super helpful/thoughtful, even though I didn’t get full points by any means. I just wanted to add this experience to say that it doesn’t seem to be a general problem, and depends more on the individual reviewers.
However, I guess guidelines can never hurt as a way to counter the inclination towards harsh reviews.
Maybe you were also unlucky because your topic is a bit off the beaten path (that has happened to me a lot with other conferences in the past, and I know I’ve felt very misunderstood too).
Not having a good fit with the reviewers often leads to this result. One of my reviewers at CHR, for example, prefaced the review by noting that my topic wasn’t really their thing. Maybe asking reviewers to declare their familiarity with, or (dis)interest in, the topic would help the author understand where a comment is coming from and judge better what to make of it (or how seriously to take it) on the receiving end.
Best of luck for the future; I’m sure your paper is a valuable contribution!
Sarah
I’m going to echo Sarah’s comments. We got really helpful comments on our paper that have made it much better, both at the computational level and also concerning the theory and framing. I was actually going to tweet about this when I announced the paper because it was such a positive experience.
I think finding the right balance between humanistic analysis and computational foundations in any paper, in any format, is really hard. It’s one of the challenges facing the field. I always try to model work on past papers, and if they tend to look a little more computationally grounded, then that is a clear sign of the venue’s disposition. I always go off past work to understand what to expect from reviews.
I suspect it will take some time before the field finds a balance between more technically oriented venues and ones that are more open to humanistic speculation with less technically grounded work. I think there should definitely be room for all of that, but no one publication can do it all.
Also, rejection always sucks, no matter what form it takes. It’s the crappiest part of our profession. And it never goes away…
Well, I’m glad you guys had different experiences. I don’t want to out them here, but there was a discussion on Twitter where three others also felt their reviews were harsh when their papers were rejected. I guess it’s partly the luck of the draw with reviewers.

I also wouldn’t characterise my paper as “more humanistic speculation with less technically grounded work”. It was an examination of Koselleck’s “Sattelzeit” hypothesis (that modern sociopolitical concepts underwent revolutionary change between 1770 and 1830) using word embedding models, lexical semantic change, and novelty metrics. It cited a handful of previous studies in those fields but by no means gave an exhaustive account of the literature, which I suppose is a fault of the piece that I will correct in future submissions. But I thought it was doing interesting and even innovative methodological and historical work. It was by no means flawless, and the reviewers did point out many things that could be improved, but on the whole I did not feel their criticisms outweighed the contributions of the paper. But then: I’m biased! Congratulations to you and Sarah, and good luck with your presentations.
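(For anyone curious, here is a very rough sketch of the kind of measurement I mean. It is not the paper’s actual code: the corpus loading, time slices, and model parameters are placeholders for illustration only.)

```python
# Rough sketch (gensim >= 4): train one word2vec model per time slice, align the
# two embedding spaces with orthogonal Procrustes, and rank shared words by the
# cosine distance between their aligned vectors as a crude "semantic change" score.
import numpy as np
from gensim.models import Word2Vec

def train(sentences):
    # sentences: a list of token lists for one time slice (loader not shown)
    return Word2Vec(sentences, vector_size=100, window=5, min_count=20, workers=4)

def procrustes_rotation(base, other):
    # Find the orthogonal matrix R that best maps `other`'s vectors into `base`'s space
    shared = [w for w in base.wv.index_to_key if w in other.wv.key_to_index]
    A = np.vstack([base.wv[w] for w in shared])
    B = np.vstack([other.wv[w] for w in shared])
    U, _, Vt = np.linalg.svd(B.T @ A)
    return shared, U @ Vt

def change_scores(base, other):
    shared, R = procrustes_rotation(base, other)
    scores = {}
    for w in shared:
        a = base.wv[w] / np.linalg.norm(base.wv[w])
        b = other.wv[w] @ R
        b = b / np.linalg.norm(b)
        scores[w] = 1.0 - float(a @ b)  # cosine distance: higher = more change
    return scores

# Usage (corpus names are made up):
# early = train(load_tokenised("corpus_pre_1770"))
# late  = train(load_tokenised("corpus_post_1830"))
# most_changed = sorted(change_scores(early, late).items(), key=lambda kv: -kv[1])[:50]
```

The intuition is simply that words whose aligned vectors drift furthest between the two periods become candidates for the kind of conceptual change Koselleck describes, and those candidates then still need to be read closely in their historical contexts.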
The last thing I’ll say is that I’ve noticed the organisers and program committee for the conference seem to be made up almost entirely of Britons and Europeans. I know the EU and US DH scenes have different styles and place different emphases on computational method versus historical outcome, and it may be in CHR’s interest to diversify its organising body.
This may just be a case of the randomness of peer review. It does sound interesting, so you should think about submitting it to the Journal of Cultural Analytics.