Local File Inclusion (LFI): When One “../” Wrecks Everything

Tuesday, August 19, 2025

Brad Geesaman

Principal Security Engineer

Why path traversal still owns apps in 2025 — and how to stop it

Local File Inclusion sounds old-school. Like a relic from the days of CGI scripts and bare-metal servers. But here’s the thing: LFI is very much alive — and it's showing up in modern codebases more than you'd think.

With microservices, serverless functions, and cloud-based file handlers, a single unsanitized path can still expose configs, credentials, or source code. And in some cases, LFI is just one step away from remote code execution (RCE).

Let’s talk about what LFI is, why it’s still a threat, and how to bulletproof your app against it.

What is LFI?

LFI (Local File Inclusion) happens when your application includes or reads a file whose path is built from user input, without properly sanitizing that input.

For example, imagine this endpoint in PHP:

// user input flows straight into the include path, no validation
$page = $_GET['page'];
include("/templates/" . $page . ".php");

Looks harmless… until someone sends:

?page=../../../../etc/passwd

The server ends up including a system file like /etc/passwd. (On legacy PHP, a trailing %00 null byte stripped the appended .php suffix; modern variants lean on wrappers or on endpoints that don't append one.) The pattern isn't PHP-specific: you'll see it in Node, Python, Go, anywhere user input touches the filesystem.
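
For instance, a hypothetical Python (Flask) handler with the same flaw might look like this (the route and paths are illustrative, not from any real app):

# a hypothetical Flask handler with the same flaw
from flask import Flask, request

app = Flask(__name__)

@app.route("/page")
def page():
    name = request.args.get("page", "")
    # user input flows straight into the filesystem path
    with open("templates/" + name) as f:
        return f.read()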

That’s a file disclosure. But LFI can go further. Attackers can use it to:

  • Read source code or internal configs

  • Dump environment variables

  • Pull tokens or credentials from .env files

  • Hit log files or temp files to extract session data

  • In some cases, escalate to RCE by injecting code into an included file

Real-world LFI examples

  • Magento (2023): An LFI in a templating path allowed attackers to access sensitive system files and config settings, leading to a full admin takeover after pulling API keys.

  • Bug bounty bonanza: LFI is a favorite on HackerOne and Bugcrowd. Why? It's often missed in internal audits but dead simple to exploit, especially with encoded traversal strings or upload-based tricks.

Even in 2025, the common mistake is this:
Dev trusts the path input because “users would never do that.”

Spoiler: they would. And they do.

How attackers abuse LFI

Let's say you let users request files: avatars, logs, templates. If your code doesn't strictly validate the requested path, attackers can use the tricks below (example requests follow the list):

  • ../ sequences to traverse directories

  • URL-encoded versions like %2e%2e%2f

  • Double encoding to bypass filters (..%252f)

  • Null byte injections (.php%00) to truncate an appended extension (fixed in modern PHP, still seen on legacy stacks)

  • Language-specific wrappers (like php://filter) to read source code as base64
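
In practice, those tricks turn into requests like these (targets are illustrative; the php://filter payload assumes an include where user input forms the start of the path):

?page=../../../../etc/passwd                              (plain traversal)
?page=%2e%2e%2f%2e%2e%2fetc%2fpasswd                      (URL-encoded)
?page=....//....//....//etc/passwd                        (survives one pass of stripping "../")
?page=php://filter/convert.base64-encode/resource=config  (dumps config.php as base64)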

Some apps even let attackers upload a file (like a profile image) and then request that file later. Combine that with LFI and you've got a basic remote shell.

How to prevent LFI (the right way)

You don’t need a dozen filters — you need a few hard rules.

1. Never include user-controlled paths directly.
If your app dynamically includes files, don’t let the filename come straight from user input. Better: map allowed values to safe paths on the backend.

# safer: user input selects a key, never a path
templates = {
    "home": "home.html",
    "about": "about.html"
}
filename = templates.get(user_input)
if filename is None:
    reject_request()  # unknown key: reject, don't guess

2. Canonicalize the path before using it.
Resolve the full path and compare it to a safe root. If the resolved path is outside your expected directory — block it.

# Python example
import os

safe_dir = os.path.realpath("/app/templates")
requested = os.path.realpath(os.path.join(safe_dir, user_input))
# compare against the directory plus a separator: a bare startswith()
# would also accept a sibling like /app/templates_evil
if not requested.startswith(safe_dir + os.sep):
    reject_request()

3. Reject traversal patterns explicitly.
If you must allow some file flexibility (like logs or user uploads), block traversal patterns like ../, %2e%2e, etc. Don’t just rely on regex — actually resolve and validate the path.
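
Here's a minimal sketch of the decode-then-check step in Python (the helper name is ours; it complements path resolution, it doesn't replace it):

# unwrap layered percent-encoding before looking for traversal tokens
from urllib.parse import unquote

def looks_like_traversal(value: str) -> bool:
    decoded = value
    for _ in range(3):  # peel double/triple encoding like ..%252f
        decoded = unquote(decoded)
    return ".." in decoded or decoded.startswith(("/", "\\"))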

4. Use allowlists, not blocklists.
Only permit files with known safe extensions (.txt, .html, etc.). Blocklists get bypassed. Allowlists force intent.
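
A minimal extension allowlist might look like this (the set is an example; tune it to what your app actually serves):

# only known-safe extensions get through
import os

ALLOWED_EXTENSIONS = {".txt", ".html"}

def extension_allowed(filename: str) -> bool:
    ext = os.path.splitext(filename)[1].lower()
    return ext in ALLOWED_EXTENSIONS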

5. Never let users specify full paths.
Make sure your app controls the file root. Avoid user-controlled absolute paths, and especially never combine user input with sensitive functions like include(), open(), or require().
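
One way to keep the app in control of the root is to never accept a filename at all: hand out opaque IDs and resolve them server-side. A sketch, with storage details that are purely illustrative:

# clients only ever see opaque IDs; the server picks every path
import os
import uuid

UPLOAD_DIR = "/app/uploads"
file_index = {}  # id -> server-chosen path

def save_upload(data: bytes) -> str:
    file_id = uuid.uuid4().hex          # server picks the name
    path = os.path.join(UPLOAD_DIR, file_id)
    with open(path, "wb") as f:
        f.write(data)
    file_index[file_id] = path
    return file_id

def fetch_upload(file_id: str) -> bytes:
    path = file_index.get(file_id)      # unknown ID means no file access at all
    if path is None:
        raise KeyError("unknown file id")
    with open(path, "rb") as f:
        return f.read()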

6. Disable dangerous stream wrappers or eval-based includes.
In languages like PHP, disable wrappers like php://input or data:// that allow file reads from weird sources. In general: if you see eval(file_get_contents()), run.
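
For PHP in particular, part of that hardening lives in php.ini (a sketch; check directive behavior against your PHP version, since allow_url_include is deprecated in newer releases):

; php.ini: shut off URL-based file reads and includes
allow_url_fopen = Off     ; no http:// reads via file functions
allow_url_include = Off   ; no http:// or data:// in include/require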

Ghost's approach to finding LFI

Some bugs don’t show up with basic pattern matching. A well-obfuscated LFI might involve:

  • A helper function that sanitizes nothing

  • A wrapper that hides the raw input

  • File names built via template strings in another module

Ghost’s AI doesn’t just look at one file — it maps the whole app.
It traces file operations, emulates user input, and flags risky file accesses — including things buried under abstractions or obfuscated code. It also correlates path traversal inputs with runtime file permissions and business logic to give you accurate severity scores.

You won’t just get a “maybe LFI” warning. You’ll see:

  • The line of code that’s vulnerable

  • The exact input that could exploit it

  • A recommended, hardened code fix

And you can scan one repo for free to see how it works. That includes LFI — but also SSRF, CSRF, auth issues, and more.

Final thoughts

LFI isn’t a relic. It’s just... patient.
Waiting for one helper function to skip a check.
One file path to go unchecked.
One developer to assume the input is “safe.”

If you’re dealing with user-supplied filenames or file operations of any kind — treat them as high risk. And scan early.

Because one misplaced “../” can turn into a resume-generating event real fast.
