Tag: poor review

Nearly one-third of social media research has undisclosed ties to industry, preprint claims

 


“The team then looked through information from OpenAlex—an open-access catalog of scientific papers—and industry announcements to check for ties between authors and social media companies during the period when the journals required disclosure. They found about half of all papers had some such tie, suggesting authors in 30% of all studies were not declaring potential conflicts of interest. For the subset of papers that revealed the identity of the editors and peer reviewers, the authors also examined whether they, too, had any conflicts of interest. Adding those who did to the analysis pushed up the proportion of papers that had some industry tie to 66%. Extrapolating to cases where the reviewers were anonymous, the researchers estimated that only one in five papers were likely to have remained fully independent of industry throughout the publication process.”

Source : Nearly one-third of social media research has undisclosed ties to industry, preprint claims | Science | AAAS

A new preprint server welcomes papers written and reviewed by AI

“At most scientific publications, papers co-authored by artificial intelligence (AI) are not welcome. At a new open platform called aiXiv, they are embraced. The platform goes all in on AI: It accepts both AI- and human-authored work, uses built-in AI reviewers for baseline quality screening, and guides authors through revisions based on the chatbots’ feedback. “AI-generated knowledge shouldn’t be treated differently,” says Guowei Huang, one of aiXiv’s creators and a Ph.D. candidate specializing in AI and business at the University of Manchester. “We should only care about quality—not who produced it.” The platform is still at an early stage; after a mid-November update, it hosts just a few dozen papers and early-stage proposals. But many researchers say it promises a welcome reprieve for the overloaded human peer-review system, which has been forced to shoulder the ongoing surge of papers driven by both legitimate and banned use of AI. “It’s extremely important that the automated science community take responsibility for how they are going to evaluate their own research,” says Thomas Dietterich, an emeritus professor of computer science at Oregon State University.”

Source : A new preprint server welcomes papers written and reviewed by AI | Science | AAAS


“A number of researchers have analysed various selection methods and suggested that incorporating randomness has advantages over the current system, such as reducing the bias that research routinely shows plagues grant-giving, and improving diversity among grantees.”

Source : Science funders gamble on grant lotteries

“Contributions to open science and open access were considered the least important aspects of academic work for researcher assessment among a list of aspects presented to respondents by the EUA, the association reported on 22 October. Only 38 per cent of 197 institutional respondents considered open science and open access “important” or “very important” to their evaluations. Thirty-six per cent attributed little or no importance to these aspects of academic work.”

Source : Open science not a priority in evaluation, survey finds – Research Professional News

“Networking, attending conferences and delivering lectures should give your ideas an edge, help you to disseminate your research, and result in higher quality papers that get more citations. And the fastest way to do all of these things in person is to fly. But even when accounting for department, position and gender, we found no relationship between how much academics travel and their total citation count or their hIa (a version of h-index adjusted for academic age).”

Source : Do the best academics fly more? | Impact of Social Sciences

“Last week, development studies journal Third World Quarterly published an article that, by many common metrics used in academia today, will be the most successful in its 38-year history. The paper has, in a few days, already achieved a higher Altmetric Attention Score than any other TWQ paper. By the rules of modern academia, this is a triumph. The problem is, the paper is not. The article, “The case for colonialism”, is a travesty, the academic equivalent of a Trump tweet, clickbait with footnotes.”

Source : Impact of Social Sciences – Clickbait and impact: how academia has been hacked
