Where to publish Computational Humanities research?

Hi everyone,

Having a venue to publish our research is an important part of building a new scholarly community. As we know, there is no dedicated journal for Computational Humanities, and I would like to hear your thoughts on this topic.

For the Computational Humanities Research Workshop we’ve decided to publish the accepted papers in a separate conference proceedings venue. The idea is then to publish selected papers in a special issue of the Journal of Open Humanities Data (https://openhumanitiesdata.metajnl.com/). JOHD (of which I’m the editor-in-chief) is a data journal, but it also publishes full-length research articles on various aspects of the creation, collection, management, access, processing, or analysis of data in humanities research (https://openhumanitiesdata.metajnl.com/about/). JOHD is fully open access, and its publication fees are reasonably low (£100 + VAT for short data papers, and £300 + VAT, where applicable, for full-length research papers). I’m working to expand the scope of the journal to cover more computational research.

What are your thoughts on this journal and other options for publishing Computational Humanities research?


Hi Barbara,

Here is a list of some journals/venues I have compiled over time. It’s incomplete, but maybe we can start a community effort to expand it and add more info (OA or not, fees, submission format, etc.). The aim is to have a comprehensive list, so I am including everything here, even venues that might not especially welcome “computationally complex” papers. Maybe we can then mark those in a different colour, or something similar.

@everyone: please reply to this post with additions and I’ll edit this post (unless the admins prefer to turn this into a wiki, or something else)

changelog: list last edited 2020-05-15

Journals:

Workshops/conferences:

Misc:


Hi there,

I would also add:


A couple more to add here. Some of these are discipline specific but have published computationally driven scholarship.

  1. Post45 (https://post45.org/contact-us/)
  2. Publicbooks.org (https://www.publicbooks.org/submissions/)
  3. The Programming Historian (peer reviewed lessons, https://programminghistorian.org/en/contribute)
  4. Code4Lib Journal (https://journal.code4lib.org/call-for-submissions)
  5. MLQ (https://depts.washington.edu/mlq/submissions/)

Thanks, I have edited the list. I’ve also created a “Misc” category for The Programming Historian and other entries that weren’t entirely clear to me after a quick look at their websites; let me know if anything is wrong (and if you have the info on OA, LaTeX support, etc.).


@simon The Programming Historian does everything in Markdown, if that’s helpful. They use GitHub Pages/Jekyll to power the site.


Perhaps a bit CS / CV heavy, but:


Also JOSS, the Journal of Open Source Software. “Papers (paper.md and BibTeX files, plus any figures) must be hosted in a Git-based repository, ideally together with your software.”


Dear all,
How important is it, actually, to place your publications in venues indexed in WoS/Scopus? Should DH conferences publish all accepted papers in indexed proceedings or in a special issue of a well-indexed journal, say DSH, or rather not? Do your grant providers and employers care?

This has been a burning issue for me, since my national research evaluation system (Czechia) hardly considers other types of publications, in particular conference papers. This means that I cannot even report the two major DH conferences (ADHO, EADH) in my project reports. Is this a country- or discipline-specific problem, or a common one that would deserve being raised with the conference organizers? Please help me find out by filling out an anonymous Google form here: https://forms.gle/LB8NmgKFXegWfZVCA. It neither collects e-mail addresses nor requires logging in to Google. If you support this initiative, please spread the form link among your colleagues. It will be open until the end of June 2020. And, of course, I’d be glad if anyone else is interested and feels like discussing this topic right here.


Thanks for this! Just completed the questionnaire!

Filled in as well.

This is a fair point. In Finland, there is the JUFO index, where every journal/venue is given a rating per field. Researchers are expected to report research outputs to their universities (using Pure), and the universities report them to the Ministry of Education. Researchers are allowed to submit journals for consideration by the JUFO panels, but new journals are always rated low. (See the history of Glossa, for example, although it’s a very special case.)

The DH conference abstracts don’t count for us either:

[screenshot, 2020-05-19]

Here’s a blog post with some interesting, alternative thoughts: https://grasshoppermouse.github.io/2019/07/12/should-scientific-publishing-move-to-github-and-friends/

Curious if anyone has experience with the International Conference on Culture and Computing. It’s conducted as part of HCI International, which only requires extended abstracts for acceptance decisions (not sure if that is a bad sign or completely normal).

I have no experience with this particular conference, but in the humanities and digital humanities it’s quite common to base acceptance decisions on abstracts. As a reviewer, I find it quite difficult to assess a paper from an abstract alone, and as an author, feedback based on an abstract is often not very helpful.


Thanks for this list!

I recommend submitting to an ACL or ACL-like conference if your paper uses text data: COLING, ACL, LREC, NAACL, EMNLP, and so on. The publications are open access and the reviews are generally very helpful. Several of these venues have also placed special emphasis on reproducibility (e.g., code and data availability), which I think is valuable.

EMNLP also has a new experiment which I think will be of interest to folks here: “Findings of EMNLP”. It’s a publication parallel to the conference that accepts papers deemed publishable by reviewers, which allows EMNLP to accept more papers than there are presentation slots at the conference. (They still maintain a 38% acceptance rate.) Their criteria for “publishability” are taken from PLOS ONE.

Here’s a blog post describing the experiment: https://2020.emnlp.org/blog/2020-04-19-findings-of-emnlp/


Aside from LREC (where publishing is “not too hard”), I’ve had some very disappointing experiences with the xACL events, to be honest. Some of the issues with those venues are addressed in this recent paper (https://arxiv.org/abs/2010.03863); a few that I’ve run into:

  • Results not surpassing SOTA
  • Narrow topics
  • Work not on English

Quite often we “solve tasks” that aren’t established tasks, meaning there’s no SOTA, no benchmark, etc. Evaluation is thus intrinsic and ‘qualitative, by one (or several) expert(s)’, which quite often fails to convince NLP reviewers. The narrow topics go hand in hand with the “work not on English”: frankly, no one really cares about spelling normalisation of 19th-century Finnish, etc.

There are NLP venues specifically for these topics, it’s true, but quite often they’re not considered to be as good as the xACL venues, which in turn makes it harder to get recognition from the NLP field.
