
nftables: validate csumOffset bounds in payloadSet evaluate #13161

Open
ibondarenko1 wants to merge 1 commit into
google:master from
ibondarenko1:hardening/nftables-csum-offset-bounds-check


Conversation

@ibondarenko1

Summary

payloadSet.evaluate in pkg/tcpip/nftables/nft_payload_set.go
reads the existing IP checksum via
binary.BigEndian.Uint16(payload[op.csumOffset:]) at line 154 and
writes the updated checksum via checksum.Put at line 160, without
validating op.csumOffset against len(payload). When the rule's
NFTA_PAYLOAD_CSUM_OFFSET netlink attribute is set so that the
checksum field falls outside the packet payload, the indexed slice
access panics at runtime with "slice bounds out of range" or
"index out of range".
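The failing pattern can be demonstrated outside the sentry in a few lines (a minimal sketch of the line 154 access, not the actual gVisor code path):

```go
package main

import (
	"encoding/binary"
	"fmt"
)

func main() {
	payload := make([]byte, 5) // short packet that matched the rule
	csumOffset := uint32(250)  // attacker-chosen NFTA_PAYLOAD_CSUM_OFFSET

	defer func() {
		// The sentry has no recover() on this path; the panic kills the sandbox.
		fmt.Println("recovered:", recover())
		// prints: recovered: runtime error: slice bounds out of range [250:5]
	}()

	// Unguarded read, as at nft_payload_set.go:154.
	_ = binary.BigEndian.Uint16(payload[csumOffset:])
}
```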

Background

op.csumOffset is sourced from the optional netlink attribute
NFTA_PAYLOAD_CSUM_OFFSET in initPayloadSet (line 227). The
comment at line 225 reads, verbatim: "Optional attributes;
validation is not required." newPayloadSet does not validate it
either.

The data bounds check at line 105 already covers op.offset+op.blen
(the data being written), but the checksum reads at lines 154 and
160 use a separate attacker-controlled offset attribute that skips
the equivalent check.

Linux behavior

Linux kernel net/netfilter/nft_payload.c::nft_payload_csum_inet
performs the equivalent bounds check via the skb_copy_bits and
skb_ensure_writable helpers, which fail (return a negative value)
when the requested offset+size exceeds the skb length:

if (skb_copy_bits(skb, csum_offset, &sum, sizeof(sum)) < 0)
    return -1;
...
if (skb_ensure_writable(skb, csum_offset + sizeof(sum)) ||
    skb_store_bits(skb, csum_offset, &sum, sizeof(sum)) < 0)
    return -1;

The -1 return propagates to nft_payload_set_eval, which then
jumps to its err label: Linux gracefully fails the rule
evaluation. gVisor's Go port uses raw slice indexing, which
panics on out-of-bounds access instead of returning an error.

Reachability

Reachable from a process holding CAP_NET_ADMIN in the network
namespace. The nftables netlink handler gates rule installation
on CAP_NET_ADMIN at
pkg/sentry/socket/netlink/netfilter/protocol.go:76:

if !s.NetworkNamespace().HasCapability(ctx, linux.CAP_NET_ADMIN) {
    return syserr.ErrNotPermittedNet
}

Once the rule is installed, a sufficiently short packet that
matches the rule triggers the panic at line 154 via the malformed
csumOffset value. Severity is localized to sandbox availability:
the sentry panic kills the sandbox. This is not CVE-grade given
the privilege requirement, but it matches the hardening-class
pattern of recent panic fixes such as 1985b831b9, 25a1144119,
and c6c0e528b9.

Change

Add an early NFT_BREAK return at the entry of the
NFT_PAYLOAD_CSUM_INET branch (a csumType accepted as valid by
validateChecksumType at line 63) when int(op.csumOffset)+2 >
len(payload). This returns the same NFT_BREAK verdict that the
existing offset+blen guard at line 105 uses.
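A self-contained sketch of the guarded logic (the verdict constants and function shape here are illustrative stand-ins, not gVisor's actual identifiers):

```go
package main

import (
	"encoding/binary"
	"fmt"
)

// Illustrative stand-ins for the NFT verdict codes.
const (
	verdictContinue = 0
	verdictBreak    = 1 // stands in for NFT_BREAK
)

// setInetChecksum mirrors the fixed NFT_PAYLOAD_CSUM_INET branch: bail
// out with the break verdict before touching payload[csumOffset:] when
// the 2-byte checksum field would fall outside the payload.
func setInetChecksum(payload []byte, csumOffset uint32, sum uint16) int {
	if int(csumOffset)+2 > len(payload) {
		return verdictBreak // previously: panic at the read below
	}
	old := binary.BigEndian.Uint16(payload[csumOffset:]) // read, as at line 154
	_ = old                                              // checksum folding elided in this sketch
	binary.BigEndian.PutUint16(payload[csumOffset:], sum) // write, as at line 160
	return verdictContinue
}

func main() {
	fmt.Println(setInetChecksum(make([]byte, 5), 5, 0))  // prints 1 (break)
	fmt.Println(setInetChecksum(make([]byte, 10), 5, 0)) // prints 0 (continue)
}
```

The guard deliberately runs before any access to payload[csumOffset:], so both the read and the write are covered by a single check.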

5 added lines, 0 deletions in nft_payload_set.go. Two new test cases in
nftables_test.go exercise the boundary (csumOffset == len and
csumOffset > len). No behavior change for well-formed rules.

Test

Verified at runtime via a standalone Go reproducer of the line 154
pattern. Output:

[csumOffset_eq_len]         payloadLen=5  csumOffset=5    -> PANIC: runtime error: index out of range [1] with length 0
[csumOffset_gt_len]         payloadLen=5  csumOffset=250  -> PANIC: runtime error: slice bounds out of range [250:5]
[csumOffset_eq_len_minus_1] payloadLen=5  csumOffset=4    -> PANIC: runtime error: index out of range [1] with length 1
[csumOffset_safe]           payloadLen=10 csumOffset=5    -> OK
[empty_payload]             payloadLen=0  csumOffset=0    -> PANIC: runtime error: index out of range [1] with length 0
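The harness behind that output is essentially the following (a reconstruction sketch; readCsum and the case names mirror the table above but are not the exact reproducer source):

```go
package main

import (
	"encoding/binary"
	"fmt"
)

// readCsum performs the unguarded read from nft_payload_set.go:154 and
// converts any runtime panic into an error so every case can be tabulated.
func readCsum(payload []byte, csumOffset uint32) (sum uint16, err error) {
	defer func() {
		if r := recover(); r != nil {
			err = fmt.Errorf("PANIC: %v", r)
		}
	}()
	return binary.BigEndian.Uint16(payload[csumOffset:]), nil
}

func main() {
	cases := []struct {
		name       string
		payloadLen int
		csumOffset uint32
	}{
		{"csumOffset_eq_len", 5, 5},
		{"csumOffset_gt_len", 5, 250},
		{"csumOffset_eq_len_minus_1", 5, 4},
		{"csumOffset_safe", 10, 5},
		{"empty_payload", 0, 0},
	}
	for _, c := range cases {
		result := "OK"
		if _, err := readCsum(make([]byte, c.payloadLen), c.csumOffset); err != nil {
			result = err.Error()
		}
		fmt.Printf("[%s] payloadLen=%d csumOffset=%d -> %s\n",
			c.name, c.payloadLen, c.csumOffset, result)
	}
}
```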

Added two TestEvaluatePayloadSet cases that exercise boundary
conditions (csumOffset equal to len, csumOffset > len). Pre-fix
these panic; post-fix they return NFT_BREAK.

