Predatory Journals and Final Projects – A Guide

A few days ago, while reviewing projects and supervising Master’s theses, I came across something that concerns me more and more: not just the now-classic AI-generated texts with no real references, but also students citing articles that appear to be academic yet are published in journals of very questionable quality. Journals that, at first glance, seem to have everything needed to look “reliable”: a DOI, professional formatting, “peer review,” impact badges (of unknown provenance). In reality, these are predatory journals, designed precisely to look scientific without actually being so. In one paper, I found no fewer than three articles from journals called LATAM and Ciencia Latina.

What worries me most is not that students cite these articles, but that they have no real way of knowing something is wrong. Many have little or no research training, are unfamiliar with indexing systems, or are unaware of the existence of fake metrics. And in today’s search environments—especially when using AI—these journals show up easily. Some even appear in superficial academic searches, making this not just an individual problem, but a structural one.

What’s especially concerning is that, in undergraduate programs, there are fewer and fewer opportunities to teach students how to conduct a proper literature review. Yet in their final projects, we expect them to already know how to do it… when in reality, few have ever been taught properly. As supervisors, we support the process, of course, but we can’t (and shouldn’t) do the review for them. And often, we realise too late that what they’ve read and cited doesn’t hold up, even when it’s carefully written and impeccably referenced (though I should say this isn’t the norm).

That’s why—after several tutorials that ended up being more about journals than about the articles themselves—I decided to pause and put together a guide. It’s not definitive, not exhaustive, not extensive. It’s a practical tool, created to help. It was made in a rush, like many things that emerge in the middle of an overflowing calendar, but with real care in its core intention. The goal is to give students a quick way to filter and decide whether a source is worth citing. Without unnecessary jargon, and with real examples.

The guide includes basic checking criteria (Is it indexed? Is the metric legitimate? Do the editors exist? Does the DOI work?), a table of free tools, a couple of obvious case examples, and a small “quick check card” they can use before citing an article. All designed so they can make an informed decision in less than ten minutes. Not to be perfect, but to make fewer mistakes.
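As a concrete illustration of the “does the DOI work?” check, here is a minimal Python sketch (not part of the guide itself) that asks the public Crossref API for a DOI’s registered metadata. Two caveats: Crossref is only one of several DOI registration agencies, so a miss there isn’t conclusive on its own; and predatory journals can and do register real DOIs, so a hit proves only that the DOI resolves, not that the journal is trustworthy. The DOI in the example is a placeholder.

```python
# Minimal sketch: look up a DOI's registered metadata via the public
# Crossref REST API. A 404 means Crossref has no record of the DOI,
# which is a red flag worth a closer look (though not proof by itself,
# since other agencies such as DataCite also register DOIs).
import json
import urllib.error
import urllib.request

def check_doi(doi: str) -> None:
    url = f"https://api.crossref.org/works/{doi}"
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            record = json.load(resp)["message"]
    except urllib.error.HTTPError as err:
        print(f"No Crossref record for this DOI (HTTP {err.code}): {doi}")
        return
    # Crossref returns "title" and "container-title" as lists.
    title = record.get("title") or ["<no title>"]
    journal = record.get("container-title") or ["<no journal>"]
    publisher = record.get("publisher", "<no publisher>")
    print(f"Title:     {title[0]}")
    print(f"Journal:   {journal[0]}")
    print(f"Publisher: {publisher}")

check_doi("10.1000/example-doi")  # hypothetical DOI, for illustration only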
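```

Even a resolving DOI only tells you that a record exists; the point of the guide is to combine this with the other checks (indexing, metrics, editorial board) before deciding whether to cite.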

This is not a crusade, nor is it trying to define what is or isn’t valid science—and the conversation about predatory journals is neither simple nor quick. It’s an invitation to care. To teach students how to take a second look, more calmly and critically. To avoid reproducing, even unintentionally, a publishing model that thrives on negligence. And to do so from the classroom, from tutorials, from the margins of our own limited time.

If it helps someone avoid a questionable citation in their final project—or even sparks a more critical conversation about what it means to cite well—then it has done its job. In the meantime, it remains open, like all tools born from a desire to improve things together.

You can find it here: https://www.lindacastaneda.com/wp-content/uploads/2025/08/antipredadoras.pdf. It’s only available in Spanish for now, but I hope it’s still useful.
