### Editorial Departures at Elsevier’s *Journal of Human Evolution*: A Window into the Challenges of Academic Publishing
In a dramatic holiday-weekend upheaval, all but one member of the editorial board of Elsevier’s *Journal of Human Evolution* (JHE) resigned en masse. Their decision, announced with “deep sadness and significant regret,” highlights the escalating conflicts within academic publishing. According to *Retraction Watch*, this is the 20th mass resignation from a scientific journal since 2023. The departures stem from a combination of high publication fees, concerns about editorial autonomy, and the disputed use of artificial intelligence (AI) in editorial processes.
This event reflects broader discontent within the academic sphere over the changing methodologies of profit-oriented publishers. The departing editors highlighted several modifications made by Elsevier that they perceive as detrimental to the journal’s quality, editorial ethics, and dedication to the academic community.
---
### Core Reasons for the Resignation
#### 1. **Editorial Autonomy and Reorganization**
The editorial board raised alarms about Elsevier’s unilateral decisions regarding the journal’s organization. In 2023, Elsevier mandated that all associate editors renew their contracts annually, a step the board saw as compromising their autonomy. The company also set in motion a major reorganization that would cut the number of associate editors by more than half, obliging the remaining editors to shoulder a considerably heavier workload, often on topics outside their areas of expertise.
The board also took issue with Elsevier’s intention to implement a third-tier editorial board, which they characterized as mainly symbolic and devoid of substantial editorial duties. They contended that these changes weakened the journal’s capacity to uphold its high academic standards.
#### 2. **AI in Editorial Functions**
One of the most contentious issues was Elsevier’s adoption of AI in the journal’s production workflow, rolled out in 2023 without prior notice to the editorial board or authors. The resigning editors reported that the AI frequently introduced errors in style, formatting, and even the content of manuscripts. In one particularly problematic case, the AI altered the formatting of already-accepted papers, a problem that took six months to resolve.
The editors pointed out that AI processing is still in use, necessitating considerable oversight from both authors and editors during the proofreading phase. They argued that this practice not only diminishes the journal’s quality but also breaches transparency and trust. Authors, including anthropologist John Hawks, lamented not being informed about AI’s involvement in the editorial process, indicating that they would have opted to submit their work elsewhere if they had been aware.
#### 3. **High Costs and Accessibility**
The resigning editors also objected to the journal’s author page charges, which are considerably higher than those of other Elsevier journals and of open-access alternatives like *Scientific Reports*. They contended that these fees are prohibitive for many researchers, especially those at underfunded institutions or in developing nations, and that they contradict the journal’s stated commitment to equity and inclusiveness in academic publishing.
#### 4. **Reduction of Editorial Assistance**
In the past ten years, Elsevier has eliminated crucial support roles, such as copy editors and special issues editors, placing these tasks upon the editorial board. When the board asked for a copy editor, Elsevier reportedly dismissed the necessity, stating that editors should not be concerned with language, grammar, or formatting. This lack of support further burdened the editorial team and undermined the journal’s quality.
---
### The Function of AI in Academic Publishing: A Double-Edged Tool
The integration of AI into academic publishing is a growing trend, but its execution has drawn mixed reactions. While AI can streamline processes and boost efficiency, its careless use can lead to serious errors and ethical problems.
For instance, earlier this year, a peer-reviewed article in *Frontiers* included AI-generated illustrations, one depicting a rat with grotesquely exaggerated genitalia. The paper faced widespread mockery and was ultimately retracted, illustrating the dangers of relying on AI without adequate oversight.
Conversely, AI has been employed effectively for tasks such as detecting manipulated images in the scientific literature. The journal *Science*, for example, recently adopted AI-powered software to identify fraudulent imagery, a decision praised for bolstering the integrity of published research. As Ars Science Editor John Timmer noted, however, even these tools have limits and can be evaded by skilled fraudsters.
Anthropologist John Hawks recognized the inevitability of AI in scientific publishing but stressed the importance of transparency and responsible application. He criticized Elsevier for employing AI to lessen human oversight while demanding transparency from authors. Hawks proposed that authors should post preprints of their work to ensure that original, unaltered versions remain available to readers.
---
### Wider Implications for Academic Publishing
The resignations at JHE are part of a larger movement within the academic community to confront the practices of profit-driven publishers. In recent