From Consultant to Founder: A Pragmatic Approach to Fraud Prevention and Detection

Author: Jodson Santos, DigiF9 CTO

After more than a decade working as an information security consultant, I’ve spent the past 18 months focused entirely on building DigiF9. My background spans software architecture, AI, and data analysis—but what’s always driven my work is a mix of curiosity and scepticism, especially when it comes to tackling fraud.

Most of what I’ve learned about fraud prevention hasn’t come from books. I tried that route early on but found the material outdated—focused on forged documents, card cloning, chargebacks, and other legacy threats. In reality, what works tends to come from teams developing domain-specific knowledge within the business, aligned with its operations and risks.

This article is a practical overview of the principles, metrics, and tools I’ve used and built. It’s aimed at technical and fraud-prevention teams in companies that are serious about reducing fraud without compromising customer experience.

The Two Core Functions: Prevention and Detection

When it comes to fraud, everything breaks down into two primary functions: prevention and detection.

  • Prevention is what happens before an interaction occurs. This is about proactively stopping bad actors before they complete an action, such as creating an account, making a payment, or logging in with stolen credentials.
  • Detection kicks in when prevention isn’t enough. It’s about identifying fraud either during or after it happens, so that you can flag, investigate, or remediate the incident.

Both are essential. Prevention tends to rely more on deterministic models (e.g. “this device is on a blocklist”), while detection often involves probabilistic or behavioural analysis (e.g. “this is the third high-value transaction in 10 minutes, which deviates from the user’s typical behaviour”).
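To make that distinction concrete, here is a minimal Python sketch contrasting the two styles of check. The blocklist, thresholds, and time window are illustrative assumptions, not a description of any specific system:

```python
from datetime import datetime, timedelta

# Hypothetical blocklist of device fingerprints previously tied to fraud.
BLOCKED_DEVICES = {"device-abc123"}

def prevent(device_id: str) -> bool:
    """Deterministic prevention: refuse the interaction outright
    if the device is on a known blocklist."""
    return device_id in BLOCKED_DEVICES

def detect(txn_times: list[datetime], amounts: list[float],
           high_value: float = 1_000.0, window_minutes: int = 10) -> bool:
    """Behavioural detection: flag a burst of high-value transactions
    inside a short window, which deviates from typical user behaviour."""
    cutoff = datetime.utcnow() - timedelta(minutes=window_minutes)
    recent_high = [t for t, amount in zip(txn_times, amounts)
                   if t >= cutoff and amount >= high_value]
    return len(recent_high) >= 3
```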

Core Metrics That Matter

To evaluate your fraud strategy, these are the metrics I focus on most:

  • Detection Rate (True Positive Rate): Of all fraud cases that actually occurred, how many did you catch?
  • False Positive Rate: How many legitimate actions were incorrectly flagged as fraud?
  • False Negative Rate: How many fraud cases slipped through undetected?
  • Detection Latency: How long does it take to detect fraud after it occurs? Hours? Days? Weeks?

Most teams track these at the aggregate level, but that’s rarely enough. It’s critical to break them down by fraud type (e.g. credential stuffing, account takeover, payment fraud) and channel (mobile, web, etc.) to get an accurate picture.
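As a rough sketch of what that breakdown could look like, the snippet below computes the four metrics per (fraud type, channel) segment from labelled case records. The schema (`fraud_type`, `flagged`, `detection_hours`, and so on) is an assumption made for illustration:

```python
from collections import defaultdict

def fraud_metrics(cases):
    """Compute detection metrics per (fraud_type, channel) segment.

    Each case is assumed to be a dict with: fraud_type, channel,
    is_fraud (ground truth), flagged (system decision), and
    detection_hours (None if the case was never detected)."""
    segments = defaultdict(lambda: {"tp": 0, "fp": 0, "fn": 0, "tn": 0, "latency": []})
    for c in cases:
        s = segments[(c["fraud_type"], c["channel"])]
        if c["is_fraud"] and c["flagged"]:
            s["tp"] += 1
            if c["detection_hours"] is not None:
                s["latency"].append(c["detection_hours"])
        elif c["is_fraud"]:
            s["fn"] += 1
        elif c["flagged"]:
            s["fp"] += 1
        else:
            s["tn"] += 1

    report = {}
    for key, s in segments.items():
        fraud = s["tp"] + s["fn"]
        legit = s["fp"] + s["tn"]
        report[key] = {
            "detection_rate": s["tp"] / fraud if fraud else None,
            "false_negative_rate": s["fn"] / fraud if fraud else None,
            "false_positive_rate": s["fp"] / legit if legit else None,
            "avg_detection_hours": (sum(s["latency"]) / len(s["latency"])
                                    if s["latency"] else None),
        }
    return report
```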

Choosing the Right Tools and Data Points

There’s no one-size-fits-all solution. A good fraud strategy adapts to the context of your product, customers, and threat model. That said, these are the areas I prioritise when building a detection or prevention layer:

1. Device Intelligence

Collecting signals such as:

  • Device fingerprint
  • App integrity
  • Emulators or rooted devices
  • Network anomalies (e.g. VPN, TOR, proxy usage)
  • Geolocation inconsistencies

Use case: If a device has previously been linked to fraudulent activity—either internally or via an intelligence provider—it can be blocked or flagged for further checks.
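A minimal sketch of that gate, assuming a device-signal payload roughly like what an SDK or intelligence provider might return (all field names here are hypothetical):

```python
def assess_device(signal: dict, blocklist: set) -> str:
    """Illustrative device-intelligence gate: block known-bad devices,
    step up checks for risky environments, otherwise allow."""
    # Block outright if the fingerprint was previously linked to fraud,
    # whether internally or via an intelligence provider.
    if signal["fingerprint"] in blocklist:
        return "block"

    # Count weaker environment signals and escalate if several coincide.
    risk_flags = sum([
        signal.get("is_emulator", False),
        signal.get("is_rooted", False),
        signal.get("uses_vpn_or_proxy", False),
        signal.get("geo_mismatch", False),
        not signal.get("app_integrity_ok", True),
    ])
    return "step_up" if risk_flags >= 2 else "allow"

decision = assess_device(
    {"fingerprint": "fp-77e1", "is_emulator": True, "uses_vpn_or_proxy": True,
     "app_integrity_ok": True},
    blocklist={"fp-0001"},
)  # -> "step_up"
```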

2. Behavioural Analysis

How a user interacts with your product can reveal a lot. Common indicators include:

  • Session patterns (speed, navigation, interaction timing)
  • Typing cadence
  • Mouse or gesture dynamics

These can be embedded in machine learning models to assess whether a session aligns with known user behaviour or looks suspicious.
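As one possible (and deliberately simplified) way to do this, the sketch below trains an unsupervised anomaly detector on session-level features from known-good users and scores new sessions against it. The feature set, the values, and the choice of scikit-learn's IsolationForest are assumptions for illustration, not the specific models we run:

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Hypothetical per-session features:
# [avg_seconds_between_actions, pages_per_minute,
#  mean_keystroke_interval_ms, mean_gesture_speed_px_per_s]
known_good_sessions = np.array([
    [2.4, 5.0, 180.0, 320.0],
    [3.1, 4.2, 210.0, 290.0],
    [2.8, 4.8, 195.0, 305.0],
    # ...many more sessions from legitimate users
])

# Train on legitimate behaviour; sessions that look unlike it score as anomalies.
model = IsolationForest(contamination=0.01, random_state=42)
model.fit(known_good_sessions)

new_session = np.array([[0.2, 40.0, 15.0, 2400.0]])  # implausibly fast interaction
is_suspicious = model.predict(new_session)[0] == -1   # -1 means anomalous
anomaly_score = model.decision_function(new_session)[0]  # lower = more suspicious
```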

3. Real-Time Risk Scoring

Create composite risk scores based on weighted signals across various layers (device, network, user behaviour, transaction context). Don’t rely on a single indicator—combine multiple sources to increase confidence before taking action.
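A minimal sketch of such a composite score, with layer weights and decision thresholds chosen purely for illustration:

```python
def composite_risk_score(signals: dict, weights: dict) -> float:
    """Combine per-layer risk signals (each normalised to 0-1) into a
    single weighted score between 0 and 1."""
    total = sum(weights.values())
    return sum(signals.get(layer, 0.0) * w for layer, w in weights.items()) / total

weights = {"device": 0.30, "network": 0.20, "behaviour": 0.30, "transaction": 0.20}
signals = {"device": 0.9, "network": 0.4, "behaviour": 0.7, "transaction": 0.2}

score = composite_risk_score(signals, weights)                              # 0.60
action = "block" if score > 0.8 else "review" if score > 0.5 else "allow"  # "review"
```

Note that no single layer is decisive on its own here; the strong device signal only triggers a review because the behavioural signal corroborates it.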

4. Feedback Loops

A detection system is only as good as its feedback mechanism. Every confirmed fraud case should be fed back into the system to retrain models, improve rule accuracy, and update blocklists. Similarly, every false positive should be analysed to reduce friction for legitimate users.
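A sketch of what that feedback step could look like when a case is resolved (the outcome labels and fields are assumptions for illustration):

```python
def apply_feedback(case: dict, blocklist: set, training_rows: list) -> None:
    """Fold a resolved case back into the system: confirmed fraud updates
    the blocklist and becomes a positive training example; a false positive
    becomes a negative example so future rules and models cause less friction."""
    if case["outcome"] == "confirmed_fraud":
        blocklist.add(case["device_fingerprint"])
        training_rows.append({**case["features"], "label": 1})
    elif case["outcome"] == "false_positive":
        training_rows.append({**case["features"], "label": 0})
    # Periodically: retrain models and re-tune rule thresholds on training_rows.
```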

Business Impact vs Technical Accuracy

One lesson I’ve learned repeatedly: technical precision means nothing if it doesn’t translate to business outcomes. The goal isn’t just to detect fraud—it’s to reduce losses while maintaining customer trust and experience.

Here’s what that balance looks like in practice:

| Decision | Metric Impact | Business Impact |
| --- | --- | --- |
| Stricter prevention rules | ↑ Detection Rate, ↑ False Positives | More blocked fraud, but higher customer complaints |
| Looser detection rules | ↓ Detection Rate, ↓ False Positives | Smoother UX, but potentially increased fraud losses |
| Delayed intervention | ↑ Detection Latency, ↓ False Positives | Reduced disruption, but less control over live fraud |

The point is to build a strategy that suits your business risk tolerance. High-ticket B2B software will have different thresholds compared to a consumer fintech app.
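One practical way to ground that conversation is to sweep the risk-score threshold over labelled historical data and show the business the resulting trade-off, rather than debating rules in the abstract. A sketch, assuming you have per-event scores and confirmed outcomes:

```python
def sweep_thresholds(scores, labels, thresholds=(0.3, 0.5, 0.7, 0.9)):
    """For each candidate threshold, report detection rate and false-positive
    rate so the business can pick the point that matches its risk tolerance.
    `scores` are per-event risk scores; `labels` are 1 for confirmed fraud."""
    fraud = sum(labels)
    legit = len(labels) - fraud
    results = []
    for t in thresholds:
        tp = sum(1 for s, y in zip(scores, labels) if s >= t and y == 1)
        fp = sum(1 for s, y in zip(scores, labels) if s >= t and y == 0)
        results.append({
            "threshold": t,
            "detection_rate": tp / fraud if fraud else None,
            "false_positive_rate": fp / legit if legit else None,
        })
    return results
```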

Key Takeaways

  1. Context matters: There’s no universal approach. Your strategy must reflect your users, products, and fraud landscape.
  2. Metrics drive maturity: You can’t improve what you don’t measure. Define and track the right metrics per fraud type.
  3. Prevention and detection must coexist: Treat them as complementary, not interchangeable.
  4. Feedback is critical: Systems must evolve based on real outcomes—fraud confirmed, users lost, friction created.
  5. Tech without clarity is dangerous: Build models and tools that are explainable, auditable, and practical.

At DigiF9, we work with companies that are serious about building smart fraud defences—especially those that want to go beyond plug-and-play tools. If you’re looking for a more strategic, technical approach to fraud prevention, get in touch.

Original article: https://www.linkedin.com/pulse/equipe-de-prevenção-e-detecção-à-fraudes-jodson-santos-l7v4f/?trackingId=0Osy82iF188Nw%2BGpBGDOtw%3D%3D
