The “mega-journal” trend, which has arrived in the humanities (including linguistics), may turn out to seriously disadvantage junior researchers, independent researchers, and researchers from low-income countries. This is not good for science.
In the 20th century, scientific publication served two purposes simultaneously: dissemination and reputation-building. The first is important for scientific progress, the second is important for the careers of scientists. Both are needed by the system, and they happily coexisted because there was no conflict.
Publishers needed paying customers, so they adopted restrictive, exclusive, selective policies: Only the best works were published in their journals and book imprints. High-profile publishers were able to sell more copies and charge less for their books (and journals), so authors had a double incentive to aim for a good brand.
In the 21st century, two things happened that destroyed this system: (1) Globalization (and the expansion of science budgets) turned scientific publication from a niche occupation into a global market with a few major international players, and (2) the internet decoupled dissemination from reputation-building. The first meant that publishers tried to extract the maximum amount of money from science budgets, and libraries revolted (leading to pro-Open Access declarations). The second means that anyone can put their papers on the web, either on their personal web page or via “blue open access” (Academia.edu, ResearchGate). Dissemination is now effectively possible at zero cost.
But what about reputation-building? Most open-access advocates seem to have forgotten about this aspect of (20th-century style) scientific publication. But can one simply transform high-profile journals into author-pays journals? Will a high-profile publisher like Oxford University Press charge authors for publishing freely available books, and everything else will remain the same?
I think that the answer is no. If the author pays for publication, there is no particular reason why publishers should adopt an exclusive, selective approach. The more papers and books they publish, the greater their income. The number of readers is irrelevant to their income, so their incentive is to attract authors in great numbers. And indeed, there is an increasing tendency to abandon selectiveness and to publish whatever is submitted – this trend goes by the name of “mega-journals”.
The best-known mega-journal is PLOS ONE, which publishes tens of thousands of articles each year (and charges USD 1350 per article). But more and more such journals are springing up, also in areas relevant to linguistics:
Brill Open Humanities (Brill)
Modern Languages Open (Liverpool University Press, GBP 500)
Open Linguistics (De Gruyter Open)
Open Library of the Humanities (not yet launched)
Frontiers in (50+ journals, e.g. Frontiers in Psychology with linguistics papers, EUR 575-2000)
The journal websites do not always say explicitly that they have abandoned selectiveness, but in the absence of a limit on the number of papers they will publish, they are clearly moving in the direction of “significance-neutral peer review”.
With the established mega-journals, the idea that evaluation is not for significance (or “impact”) but merely for “soundness” is an explicit policy. PLOS ONE says on its website that it does not have “subjective” acceptance criteria, and advocates of mega-journals argue that this is good, because it also allows the publication of negative results.
There seems to be a strong trend in many fields of science toward mega-journals (see this article by Peter Binfield, a former editor of PLOS ONE). Suppose that it also takes off in linguistics, the field where I work – what would it mean?
It would mean that linguists from low-income countries or early-career researchers would no longer have good access to mainstream publication modes, because they do not have the funds to pay the publication fees. The rich would get richer. But of course, if a journal publishes without any selectiveness, then publishing in it would not be a sign of scientific excellence (only of “soundness”). So how can scientific excellence be measured? Advocates of mega-journals commonly cite Article-Level Metrics (ALM) or “altmetrics”, i.e. social-network impact.
Yes, it is indeed possible to imagine that in the future we may no longer associate the place of publication with any kind of reputation. “Blue open access” is open to everyone, and the platforms themselves measure a scholar’s impact: my ResearchGate profile tells everyone that my RG score is 16.47. But I have two questions:
(1) Why do we need specific mega-journals if we are going to be evaluated by article-level metrics anyway? Isn’t it sufficient to upload everything to Academia.edu and count the views and downloads? Does “peer-review for soundness” really matter that much?
(2) If reputation is based on article-level metrics, then what will scholars do in order to enhance their reputation?
I have no answer to the first question, but I can imagine some creative solutions in the second case. For instance, if an altmetrics system counts the tweets in which a paper is mentioned, then I might set up a couple of new Twitter accounts, or order the tweets from a specialized company (as is well-known, it’s relatively cheap to buy a couple of hundred Facebook “likes” for your company’s Facebook page).
More seriously: People attach reputation to labels, i.e. names – that’s an anthropological fact that is not going to change. Just as a paper published in a high-profile journal gets more tweets nowadays, in the future the number of tweets will depend not only on a paper’s content, but also on the names attached to it. Hence, if journal names are no longer associated with reputation because one can buy oneself into a journal, other names will take up that space. Reputation may instead be attached to names of universities, or to names of individuals. If you’re at a prestigious university, your chances of being read will increase greatly, and if you’re at a low-prestige university, nobody will read your papers. If you already have an established name, you will have many readers, regardless of how good your work is (it will take readers a while to realize that a senior author has become lazy and no longer produces excellent work). By contrast, the work of junior scholars will hardly be read. At present, linguists do not put their names on papers written by their students, but in the future, students may be begging their renowned professors to become their coauthors. Even the names of cities and countries may become much more relevant than they are now, because they, too, will be associated with publications.
Thus, if we give up the selectiveness/exclusiveness that is associated with traditional publication models, a completely new dynamic will probably unfold. Mega-journals are typically praised by open-access advocates, and one can indeed hope that they will bring the costs of the system down. But they may also damage science, because they remove a key ingredient of the old system, one that has to be replaced by something else. Exclusive journal and book publication creates a kind of level playing field among all researchers: even senior researchers and researchers from high-profile institutions need to work hard to get into the good labels. (Thus, I’m not so sure that the current system is “a grading method from hell”, as Colin Phillips has recently called it.)
Thus, before we all jump happily on the new bandwagon, we should think hard about the possible consequences. My own favourite model is still the publisher-pays model, where universities pay for journals because they want to profit from the prestige generated by the journal. Such publisher-pays journals (and book imprints) would still be exclusive, but they would be open-access at the same time. (And in fact, most open-access journals are publisher-pays journals, as Stuart Shieber has observed.)
Interesting post, Martin. I agree that researchers will continue to want to find ways to attach grades (or prestige) to their work, and I agree that the mega-journals that publish for “soundness” don’t solve that problem. Article level metrics based on usage etc. won’t solve that problem, as you point out. Traditional journals do provide a way to attach prestige, and you are right that they do so in an egalitarian fashion (well, somewhat egalitarian). But they do so at an immense cost, one that dwarfs author fees etc. When articles go through many rounds of (re-)review and revision at multiple venues, or when they fail to see the light of day because authors give up the fight, that costs an enormous amount of researcher time. And somebody is paying for that time. At a steep price.
I’m not a big fan of the publisher-pays model, since that even more transparently places the power in the hands of the wealthy. In the form of institutions. And I suspect that the prestige associated with being the publisher might decline in the internet age. I would prefer a scenario in which the publisher is collectively owned by the researchers, i.e., professional societies, who are accountable to all of their members, via membership fees etc., and who are motivated to ensure transparent practices and regular turnover of control.
We already have relatively efficient review processes that we use for grant submissions, conference submissions etc. You submit once, you get reviewed once (preferably by multiple people), and you get graded once. The system isn’t perfect, for sure, but it’s probably the best-functioning part of the prestige-generation machine currently.
What I find amazing is that there are so many platinum journals (open access without fees) in the less rich countries: all the linguistics journals in Croatia, Finland, Taiwan, Mexico, Brazil etc. are open access, and they don’t charge fees. Running a journal is apparently not all that expensive, and doing it in a rich country like Switzerland or the Netherlands may simply be the wrong decision. Of course, these journals aren’t very prestigious, but that’s mainly because linguists from rich countries don’t submit their work there.
On the publisher-pays model, submission to a journal is a bit like submitting a grant proposal: You ask the publisher for a small grant that covers the publication costs.
But I agree that many rounds of (re-)submission are counterproductive. Journals should accept or reject a paper (no “revise and resubmit”), and they should do so within eight weeks. If they can’t find enough reviewers within such a time span, the paper should count as rejected. Such a commitment would put quite a bit of pressure on editors to work efficiently.
From what I have seen of platinum OA journals, the cost is not much different from that of other formats. Semantics and Pragmatics is a high-prestige platinum OA journal, now jointly funded by the LSA and the editors’ institutions. It costs well above $1000 per published article for that journal, and I’m sure there’s a lot of additional editorial time that is being ‘donated’ by the participating institutions.
I agree with your characterization of the publisher-pays model as being like a “small grant”. But that’s also why I’m squeamish about having the control in the hands of a small number of wealthy institutions.
I wonder how those costs arise. Part of the problem may be that “Semantics & Pragmatics” has a very detailed stylesheet that the editors apparently meticulously adhere to (and maybe the authors are not so meticulous). Other journals are far less pedantic, so maybe if one were more tolerant of style differences (or if one could get the authors to be more meticulous), one could cut costs. In any event, journals like “Semantics & Pragmatics” can be easily produced all over the world, so there is no danger of leaving the “control in the hands of a small number of wealthy institutions” (by contrast, “Frontiers” journals are apparently owned by Holtzbrinck, a German media giant).
I’m afraid I don’t know enough to comment knowledgeably on the breakdown of costs, but I suspect that the costs are not mostly tied to production and hosting, but rather to the people involved in the editorial and management process. And even if there’s little explicit payment, the fact that the editors’ institutions pay the editors for their time means that they are effectively paying for the journal (so the $1000-$2000 per article is perhaps a low estimate of the real costs). A key reason behind the success of S&P is that the editors are very good at what they do, academically and as editors. That’s not something that one can easily transfer to the lowest bidder.
While I would *not* want to defend all aspects of the Frontiers model (that was not the aim of my original post), being a topic editor has been interesting. Compared to my other editing experiences, it is an impressively easy system to work with, meaning that my time is almost entirely devoted to the substance of the review process. I don’t need to pay any attention to other stuff, because their staff and software take care of it. I have no idea what it costs them to maintain such infrastructure, but I bet it isn’t cheap.
One of the key things that I take away from your post is that people want to find ways to attach prestige to their work, under whatever publishing model we use. I think that’s right. And as far as I can tell, that is always going to be costly — for somebody. I wish there was a way around that.
But are the costs of the editing process (organizing reviews and so on) always carried by some academic institution? Or do you get paid by Frontiers? So the question is: what does MIT (or whoever) pay for the typesetting and for organizing the interaction with authors that belongs to this step of the publication process?
@ Stefan. As a special theme editor, I get $0 from Frontiers. So my institution, in effect, supports the time that I spend on it. But it’s not a great deal of time; and I’m working with good co-editors. For me, a big attraction is that the process is streamlined, which makes it easier for me to take on this role. Persuading people to take on editorial roles is one of the key challenges in publishing. Folks are not, in general, fighting to take on those jobs.
Here are the cases that I know of, all involving journals with some prestige in their field. #1 is platinum open access, and the sponsoring organizations jointly pay $15,000/year. I’m guessing that this is mostly for the time of people involved in managing communications with reviewers/authors. #2 is traditional format, and the publisher pays for a 25% editorial assistant (~$20,000), and a small honorarium for the editor (less than the value of the time devoted to it). The time mostly goes to interacting w/ authors/reviewers. #3 is traditional format, and the editor’s institution pays for one PhD student to serve as editorial assistant (~$25,000). #3 uses lower-paid staff than #2, but that requires rapid turnover of staff, and hence more time from the editor. What I find striking is that the costs are in the same general area on any model. And nobody comes close to producing a prestigious journal for less than $1000/article. Simple reason: it requires expertise, and that is costly. And most of the cost goes into the articles that do not get published.
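To make the per-article arithmetic concrete, here is a minimal sketch of how annual figures like those above translate into per-article costs; the articles-per-year numbers are assumptions chosen only for illustration, not figures from the three cases:

```python
# Hypothetical back-of-the-envelope calculation. The annual costs come from the
# three cases described above; the articles-per-year figures are ASSUMPTIONS
# chosen only to illustrate how quickly one reaches ~$1000 per article.
annual_cost_usd = {
    "#1 (platinum OA)": 15_000,   # sponsoring organizations' joint payment
    "#2 (traditional)": 20_000,   # 25% editorial assistant (honorarium excluded)
    "#3 (traditional)": 25_000,   # PhD student as editorial assistant
}
assumed_articles_per_year = {
    "#1 (platinum OA)": 15,
    "#2 (traditional)": 20,
    "#3 (traditional)": 25,
}

for journal, cost in annual_cost_usd.items():
    per_article = cost / assumed_articles_per_year[journal]
    print(f"{journal}: ~${per_article:,.0f} per published article")
# Under these assumed volumes, every model lands at roughly $1,000 per article,
# consistent with the claim that nobody produces a prestigious journal for much less.
```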
@Colin: I kept thinking about this. I think peer-reviewed journals have a very valuable function: they act as a filter. I no longer have the time to read everything on a particular topic. I get the TOC of several journals and check whether new interesting stuff is coming that way; I basically ignore everything else. I may look around more carefully once I start to work on a certain topic, but I do not surf the net and search for something on UG and innateness (to pick just an arbitrary topic about which people write a lot). So if somebody writes a paper on this topic and pays $1000 to have somebody check whether it is consistent, I will ignore it. If it is relevant, it may get into the citation circles after some time, and then people will become aware of it and maybe read and cite it. This may take several years, and of course this is the time span you need to go through one or two revise-and-resubmit cycles. The advantage of doing it the traditional way is that you get feedback that improves your paper: a paper can be consistent but nevertheless wrong. I have almost always benefited from reviews from Language, Theoretical Linguistics, and the Journal of Linguistics. If people get rejected by an A-level journal, they try level B. If they took the reviews seriously, the paper has already improved. If B rejects them and they try C, we get a C paper, and we somehow know what to expect.