
Phishing, Be Gone! 🎣🚫 How Small AI Models Are Learning to Outsmart Big Email Scammers


Boosting phishing email 📧 detection ⚠️ using small large language models (LLMs) with prompt engineering, fine-tuning, and ensemble techniques 🕵️‍♂️

Published May 13, 2025 by EngiSphere Research Editors
AI Robot Analyzing an Email © AI Illustration

The Main Idea

This research shows that small language models, when enhanced with prompt engineering, explanation-augmented fine-tuning, and ensemble methods, can detect phishing emails with high accuracy—rivaling or surpassing larger, more expensive models.


The R&D

🛡️ Phishing emails—those sneaky, scammy messages that try to trick you into clicking bad links or sharing personal info—are still a big problem in the digital world. And while massive AI models like GPT-4 or LLaMA-70B can detect these traps with high accuracy, they come with an equally massive cost in computing power 💸.

But what if we could train smaller AI models—ones that run on your regular consumer-grade GPU—to sniff out phishing attacks just as well? 🤔 That’s exactly what researchers Zijie Lin, Zikang Liu, and Hanbo Fan set out to explore. Their study is all about making phishing detection smarter, faster, and cheaper.

Let’s dive in! 🏊‍♂️

📧 The Problem: Catching Scams Without Breaking the Bank

Phishing detection has come a long way:

  • From rule-based filters 🧱
  • To machine learning classifiers like Naive Bayes and SVM ⚙️
  • To deep learning models like CNNs and RNNs 🧠
  • And now to transformer-based giants like GPT and BERT 🚀

These big models are incredibly powerful, but they’re also resource hogs. Imagine needing a supercomputer just to check your inbox! 🤯

That’s where small LLMs (around 3 billion parameters) come in. They're lighter, cheaper, and can run on GPUs like an RTX 3090. The catch: out of the box, they don’t perform well at phishing detection.

So how do we teach these smaller models to punch above their weight? 🥊

🛠️ The Toolkit: 3 Simple Techniques That Make a Big Difference

The researchers introduced three powerful tricks to turn small LLMs into phishing-busting pros:

1️⃣ Prompt Engineering 🧾

Instead of asking models to just say "Phishing" or "Safe", they prompted the models to explain their reasoning first, then give a verdict. Like this:

```
Reason: ###This email contains suspicious links and urgent language###
Answer: ###Phishing###
```

💡 Why it works: LLMs are trained for natural, open-ended text generation. Forcing them into single-word answers makes them weirdly biased and inconsistent. Letting them "talk it out" feels more natural and yields better results.
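
To make this concrete, here's a minimal Python sketch of the reason-then-verdict pattern. Only the Reason/Answer format with ### delimiters comes from the study; the prompt wording and the parse_verdict helper are illustrative assumptions:

```python
import re

# Illustrative prompt following the reason-then-verdict format.
# The exact instruction wording is an assumption; only the
# Reason/Answer structure with ### delimiters is from the paper.
PROMPT_TEMPLATE = """Read the email below and decide whether it is phishing.
Explain your reasoning first, then give a one-word verdict, exactly like this:
Reason: ###<your reasoning>###
Answer: ###Phishing### or ###Safe###

Subject: {subject}
Body: {body}"""

def parse_verdict(model_output: str) -> str | None:
    """Extract the final verdict from a Reason/Answer style response."""
    match = re.search(r"Answer:\s*###\s*(Phishing|Safe)\s*###",
                      model_output, re.IGNORECASE)
    return match.group(1).capitalize() if match else None

# Usage: send PROMPT_TEMPLATE.format(subject=..., body=...) to any small
# instruct model, then run parse_verdict() on its reply.
```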

2️⃣ Explanation-Augmented Fine-Tuning 🧠

Fine-tuning a model usually means feeding it emails and telling it whether they’re phishing or not.

But here's the twist: Instead of just using labels, the researchers added explanations to each training example—generated by GPT-4o-mini. So the models weren’t just learning what the answer was—they were learning why.

🧑‍🏫 Training Data Example:

  • Subject + Body
  • Explanation of why it's phishing
  • Final Label

This made training more aligned with how LLMs like to learn—through text generation—not rigid classification.
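
As a rough sketch of how such a training example might be assembled (the JSONL layout and field names here are assumptions, not the paper's exact schema):

```python
import json

def build_training_record(subject: str, body: str,
                          explanation: str, label: str) -> dict:
    """Pack one email into an explanation-augmented fine-tuning example.

    The model is trained to generate the explanation and then the label,
    mirroring the Reason/Answer prompt format above.
    """
    prompt = f"Subject: {subject}\nBody: {body}\n\nIs this email phishing?"
    target = f"Reason: ###{explanation}###\nAnswer: ###{label}###"
    return {"prompt": prompt, "response": target}

# Write one made-up example to JSONL for a standard fine-tuning pipeline.
record = build_training_record(
    subject="Your account will be suspended!",
    body="Click here to verify your password within 24 hours...",
    explanation="This email contains suspicious links and urgent language",
    label="Phishing",
)
with open("phishing_sft.jsonl", "w") as f:
    f.write(json.dumps(record) + "\n")
```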

3️⃣ Model Ensemble 🧑‍🤝‍🧑

Why rely on one model when you can ask a few and take the best answer?

The team fine-tuned three small LLMs:

🦙 LLaMA-3.2-3B-Instruct
🧠 Phi-4-mini-Instruct
🐉 Qwen-2.5-1.5B-Instruct

Then they used ensemble methods:

  • Majority vote: Two or more agree = final answer
  • Confidence-based: Pick the answer with the highest statistical confidence

This helped even more in borderline cases where a single model might get confused.
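
Here's a minimal sketch of both strategies, assuming each model returns a verdict plus a confidence score (for instance, the probability of its answer token); the function names and data shapes are illustrative:

```python
from collections import Counter

def majority_vote(verdicts: list[str]) -> str:
    """Final answer = the label that most of the three models agree on."""
    return Counter(verdicts).most_common(1)[0][0]

def confidence_pick(predictions: list[tuple[str, float]]) -> str:
    """Final answer = the single verdict with the highest confidence."""
    return max(predictions, key=lambda p: p[1])[0]

# Example with three hypothetical model outputs: (verdict, confidence)
preds = [("Phishing", 0.91), ("Safe", 0.55), ("Phishing", 0.78)]
print(majority_vote([label for label, _ in preds]))  # -> Phishing
print(confidence_pick(preds))                        # -> Phishing
```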

🧪 The Experiments: Testing the Mini-Models in the Wild

The researchers tested everything on two classic datasets: SpamAssassin and CEAS_08.

⚙️ Setup:

  • Fine-tuned on just 1,000 samples
  • Used LoRA (Low-Rank Adaptation) for efficient training (see the sketch after this list)
  • Ran everything on a single RTX 3090 GPU
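
A LoRA setup along these lines might look like the sketch below, using Hugging Face's peft library; the rank, alpha, and target modules shown are common defaults, not the paper's reported hyperparameters:

```python
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM, AutoTokenizer

# One of the three small models from the study.
model_name = "meta-llama/Llama-3.2-3B-Instruct"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, device_map="auto")

# Assumed LoRA hyperparameters: only these low-rank adapter weights
# are trained, which is what makes a single RTX 3090 sufficient.
lora_config = LoraConfig(
    r=16,                                 # low-rank dimension
    lora_alpha=32,                        # scaling factor
    target_modules=["q_proj", "v_proj"],  # attention projections
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only a tiny fraction is trainable
```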

📊 Key Metrics (sketched in code below):

  • Accuracy
  • Precision
  • Recall
  • F1 Score
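
All four boil down to counts of true and false positives; here's a quick illustrative check with scikit-learn and made-up labels:

```python
from sklearn.metrics import (accuracy_score, precision_score,
                             recall_score, f1_score)

# Made-up ground truth and predictions (1 = phishing, 0 = safe).
y_true = [1, 0, 1, 1, 0, 1]
y_pred = [1, 0, 1, 0, 0, 1]

print("Accuracy :", accuracy_score(y_true, y_pred))   # overall correctness
print("Precision:", precision_score(y_true, y_pred))  # flagged emails that really are phishing
print("Recall   :", recall_score(y_true, y_pred))     # phishing emails actually caught
print("F1 Score :", f1_score(y_true, y_pred))         # harmonic mean of precision and recall
```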

🔍 Results: Small But Mighty

Here’s where it gets exciting 📈

✅ Without Fine-Tuning (Vanilla Prompting)

Performance was… not great 😬

  • F1 Scores hovered around 0.5
  • Models struggled with consistency and bias

✅ With Explanation-Augmented Fine-Tuning

🚀 Huge improvements!

Model           Accuracy   F1 Score
LLaMA-3.2-3B    96.3%      92.8%
Phi-4-mini      96.8%      94.4%
Qwen-2.5-1.5B   86.0%      67.3%

Some of these even outperformed GPT-3.5-Turbo, GPT-4o-mini, and LLaMA-70B on certain benchmarks 🤯.

🤝 Teamwork Makes the Dream Work: Ensemble Results

Combining models gave an extra boost:

Ensemble              Accuracy   F1 Score
Confidence Ensemble   97.5%      95.3%
Majority Vote         97.6%      95.9%

While these didn’t always beat the best solo model, they offered more robustness and consistency—especially across tricky emails.

🌍 Real-World Testing: Can It Generalize?

They also tested how well the models worked on unseen datasets like Enron and Ling.

🎯 Even when trained on one dataset, the small fine-tuned LLMs performed better than traditional ML models and often beat large LLMs too.

For example:

Dataset   Phi-4-mini Accuracy
Enron     90.8%
Ling      97.2%

This shows that these models aren’t just good students—they’re flexible thinkers too 🧘‍♂️.

🤔 Limitations

Even though the results are super promising, the researchers acknowledge a few gaps:

📉 Only two datasets were used for training
🧪 Transferability could still be optimized further
💰 No detailed analysis of cost savings (but it’s definitely cheaper than GPT-4!)
🧮 Ensemble strategies were basic—there’s room for smarter combinations

Still, this study proves you don’t need a billion-dollar supercomputer to fight email scams 💪.

🔮 What’s Next? The Future of Lightweight Email Defenders

This research opens a whole new door for affordable and accessible cybersecurity:

✅ Startups and small businesses could deploy their own phishing detectors on modest hardware
📩 Email clients might one day offer built-in AI-based phishing protection without cloud-based tools
🧠 Educational platforms could teach students to build interpretable LLMs with real-world utility

And since these models give explanations for their judgments, users can actually understand why something was flagged—not just blindly trust the AI. That’s a big step toward transparency and trust in AI.

🧵 TL;DR – The Key Takeaways

✨ Small LLMs CAN detect phishing emails effectively
🧩 Prompt engineering + explanation-based fine-tuning = powerful combo
🤖 Ensembles add an extra edge for reliability
💡 Interpretability matters—and this method gives it
💻 You don’t need GPT-4 to stay secure!

🔐 Final Thoughts

Cybersecurity shouldn’t be a luxury—it should be scalable, accessible, and smart. Thanks to creative approaches like explanation-augmented fine-tuning, even small AI models can become email watchdogs that protect users from digital traps 🐕📩.

And that’s a win for everyone on the web 🌐💚


Concepts to Know

🧠 LLM (Large Language Model) - A powerful AI trained on tons of text to understand and generate human-like language—like ChatGPT or GPT-4. - More about this concept in the article "Smarter Skies ✈️ How AI and Math Are Revolutionizing Urban Drone Swarms".

💌 Phishing Email - A fake or scammy email that tries to trick you into sharing private info or clicking harmful links—like a digital con artist.

🧾 Prompt Engineering - The art of writing smart instructions (prompts) to get AI to give better, more accurate answers—think of it as talking to AI in its favorite language. - More about this concept in the article "AI Takes Flight: How Claude 3.5 is Revolutionizing Aviation Safety 🛫🤖".

🛠️ Fine-Tuning - A way to teach a pre-trained AI model new tricks by training it on a specific task—like giving a general-purpose robot a crash course in email safety. - More about this concept in the article "The Illusion of Role Separation in LLMs 🎭 Why AI Struggles to Distinguish Between System and User Roles (And How to Fix It!)".

📚 Explanation-Augmented Fine-Tuning - An upgraded fine-tuning method where the model learns not just the right answers, but the reasons behind them—like showing your work in math class.

🧠 LoRA (Low-Rank Adaptation) - A technique that fine-tunes AI models efficiently by adjusting only small parts—making training faster and cheaper without using tons of computing power. - More about this concept in the article "AuditWen 🕵️‍♀️ How AI is Revolutionizing the Future of Auditing".

🗳️ Model Ensemble - Combining multiple models to make decisions as a team—like a panel of judges voting on the best answer.

📈 F1 Score - A score that balances how well a model catches phishing emails (recall) and avoids false alarms (precision)—the higher, the better! - More about this concept in the article "AI-Powered Nursing: Transforming Elderly Care with Large Language Models ❤️ 🧓 👵🏽".

🧪 Transferability - A model’s ability to work well on new, unseen data—not just the stuff it was trained on. Think: "Can it handle surprises?"


Source: Zijie Lin, Zikang Liu, Hanbo Fan. Improving Phishing Email Detection Performance of Small Large Language Models. https://doi.org/10.48550/arXiv.2505.00034

From: National University of Singapore.

© 2025 EngiSphere.com