Null cache entries have size 0, preventing LRU eviction #89033
Description
Link to the code that reproduces this issue
https://github.com/iMUngHee/nextjs-lru-memory-leak
To Reproduce
- `npm install --ignore-scripts`
- `npm run build && npm start`
- `node test-memory-leak.mjs` (sends 50k requests with unique IDs)
- Take a heap snapshot of the server process
- Search for `LRUNode`; the instance count matches the request count
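The real load script lives in the linked repro repo; as a rough, hypothetical sketch of what it does, it generates a unique dynamic-route URL per request so each request produces a distinct cache key (the base URL and `/item/:id` route here are assumptions, not taken from the repo):

```javascript
// Hypothetical sketch of a test-memory-leak.mjs-style load script.
// Every URL is unique, so each request should map to a distinct
// LRUNode entry in the server's cache.
const BASE = 'http://localhost:3000'; // assumed `next start` default port

function buildUrls(count) {
  return Array.from({ length: count }, (_, i) => `${BASE}/item/${i}`);
}

async function run() {
  for (const url of buildUrls(50_000)) {
    await fetch(url); // global fetch is available in Node 18+
  }
}
// run();
```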
Current vs. Expected behavior
Current:
LRUNode instances grow unbounded. Each unique dynamic route path creates a new entry that never gets evicted.
Expected:
The LRU cache should evict old entries once its total size reaches maxSize (1 MB).
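To illustrate why size-0 entries defeat a size-based eviction policy, here is a minimal model (hypothetical class and field names, not Next.js's actual implementation): eviction only fires when accumulated size exceeds `maxSize`, so entries that report size 0 never contribute to the total and are never evicted.

```javascript
// Minimal model of a size-based LRU cache (sketch, not Next.js code).
class SizeLRU {
  constructor(maxSize) {
    this.maxSize = maxSize;
    this.totalSize = 0;
    this.map = new Map(); // Map iteration order doubles as LRU order
  }
  set(key, value, size) {
    if (this.map.has(key)) {
      this.totalSize -= this.map.get(key).size;
      this.map.delete(key);
    }
    this.map.set(key, { value, size });
    this.totalSize += size;
    // Evict least-recently-inserted entries until we fit again.
    while (this.totalSize > this.maxSize) {
      const [oldestKey, oldest] = this.map.entries().next().value;
      this.map.delete(oldestKey);
      this.totalSize -= oldest.size;
    }
  }
}

// Null results stored with size 0: totalSize stays 0, nothing evicts.
const cache = new SizeLRU(1024 * 1024); // 1 MB budget
for (let i = 0; i < 50_000; i++) cache.set(`/item/${i}`, null, 0);
console.log(cache.map.size); // 50000 entries, unbounded growth

// Entries with a real size are evicted once the budget is exceeded.
const bounded = new SizeLRU(1024 * 1024);
for (let i = 0; i < 50_000; i++) bounded.set(`/item/${i}`, 'x', 1024);
console.log(bounded.map.size); // 1024 entries, capped by maxSize
```

This matches the heap-snapshot symptom: zero-size entries keep the byte total flat while the entry count climbs with every unique path.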
Provide environment information
Operating System:
Platform: darwin
Arch: arm64
Version: Darwin Kernel Version 24.6.0: Mon Aug 11 21:16:31 PDT 2025; root:xnu-11417.140.69.701.11~1/RELEASE_ARM64_T6030
Available memory (MB): 36864
Available CPU cores: 11
Binaries:
Node: 23.4.0
npm: 10.9.2
Yarn: 1.22.22
pnpm: 9.12.3
Relevant Packages:
next: 16.2.0-canary.8 // Latest available version is detected (16.2.0-canary.8).
eslint-config-next: N/A
react: 19.2.3
react-dom: 19.2.3
typescript: N/A
Next.js Config:
output: N/A
Which area(s) are affected? (Select all that apply)
Route Handlers, Performance
Which stage(s) are affected? (Select all that apply)
Other (Deployed), next start (local)
Additional context
I am self-hosting version 15.5.9, and while investigating a memory leak I found LRUNode instances accumulating abnormally in heap dumps.
Metadata
Labels: Performance, Route Handlers, locked
Activity
Related issue: #85914
github-actions commented on Feb 10, 2026:
This closed issue has been automatically locked because it had no new activity for 2 weeks. If you are running into a similar issue, please create a new issue with the steps to reproduce. Thank you.