fix: limit decompressed request body size to prevent memory exhaustion #1091

Open

vnykmshr wants to merge 2 commits into node-formidable:master from vnykmshr:fix/decompression-body-limit

Conversation

@vnykmshr
Contributor

The JSON and querystring parsers (this.chunks[], this.buffer) have no size check on the decompressed body. A 50 KB gzipped request can decompress to ~50 MB; maxTotalFileSize only fires in the multipart path.

This adds a bytesReceived check in write(): when the total decompressed byte count exceeds maxTotalFileSize, parsing stops with a 413.
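The guard described above could be sketched roughly like this. This is a hypothetical illustration, not the actual patch; the class name BodySizeGuard and the err.httpCode convention are assumptions, though the field names (bytesReceived, chunks, maxTotalFileSize) mirror the ones mentioned in the PR description.

```javascript
// Hypothetical sketch of a decompressed-body size guard, modeled on the
// check the PR describes. Not formidable's real code.
class BodySizeGuard {
  constructor(maxTotalFileSize) {
    this.maxTotalFileSize = maxTotalFileSize; // byte cap, as in formidable's option
    this.bytesReceived = 0;                   // running decompressed byte count
    this.chunks = [];                         // buffered body chunks
  }

  // Called once per decompressed chunk, as in the parser's write() path.
  write(chunk) {
    this.bytesReceived += chunk.length;
    if (this.bytesReceived > this.maxTotalFileSize) {
      // In the real parser this would surface as an HTTP 413 response.
      const err = new Error(
        `body exceeds maxTotalFileSize (${this.maxTotalFileSize} bytes)`,
      );
      err.httpCode = 413;
      throw err;
    }
    this.chunks.push(chunk);
    return chunk.length;
  }
}
```

The point of checking inside write() is that the limit applies to decompressed bytes as they arrive, so a small compressed payload cannot buffer an unbounded amount of memory before any size check runs.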

My earlier suggestion in #1063 about maxOutputLength was wrong - that option only applies to sync zlib (gunzipSync), not streaming transforms.

Refs #1063 (remaining item after #1069)

JSON and urlencoded parsers buffer the entire decompressed body with no
size check. A gzip-compressed request can force unbounded memory
allocation (50KB compressed, 200MB+ RSS with default settings).

Add bytesReceived check in write() against maxTotalFileSize. Multipart
was already protected via _handlePart; this closes the gap for JSON and
urlencoded paths.

Refs node-formidable#1063
@tunnckoCore
Member

will check out when i get up.

thanks.

we'll also land a maxRequestBodySize option soon.

i'm planning the migration to a monorepo setup

@tunnckoCore tunnckoCore added the do not merge When something is not yet finished. Could be used with Status:blocked label Apr 16, 2026
