From: Michael Tremer <michael.tremer@ipfire.org>
To: development@lists.ipfire.org
Subject: Re: [PATCH] zlib: Pick up upstream patch for memory corruption fix
Date: Thu, 24 Mar 2022 08:42:36 +0000
Message-ID: <B19D7B5D-4D92-4DB6-8FF0-3E4C494BA977@ipfire.org>
In-Reply-To: <f3fcf602-bbcd-1cb6-6de4-63426ca0051a@ipfire.org>
Very interesting find.
Reviewed-by: Michael Tremer <michael.tremer@ipfire.org>
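For anyone skimming the rather long diff below: the core of the upstream change is that the old overlaid d_buf/l_buf pair is replaced by a single byte-oriented sym_buf holding three bytes per symbol (distance low byte, distance high byte, literal/length). Roughly, as a plain-C paraphrase for illustration only (my own names and signature, not the actual zlib code):

    void tally_sketch(unsigned char *sym_buf, unsigned int *sym_next,
                      unsigned int dist, unsigned int lc)
    {
        sym_buf[(*sym_next)++] = (unsigned char)(dist & 0xff); /* distance, low byte  */
        sym_buf[(*sym_next)++] = (unsigned char)(dist >> 8);   /* distance, high byte */
        sym_buf[(*sym_next)++] = (unsigned char)lc;            /* literal or length   */
    }

Since sym_buf starts one quarter of the way into pending_buf and a fixed-code length/distance pair emits at most 31 bits, the compressed output can no longer catch up with the unread symbols - that is the invariant the long comment added to deflate.c spells out.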
> On 24 Mar 2022, at 08:37, Peter Müller <peter.mueller@ipfire.org> wrote:
>
> See: https://www.openwall.com/lists/oss-security/2022/03/24/1
>
> Signed-off-by: Peter Müller <peter.mueller@ipfire.org>
> ---
> lfs/zlib | 6 +-
> ...ate-on-some-input-when-using-Z_FIXED.patch | 338 ++++++++++++++++++
> 2 files changed, 343 insertions(+), 1 deletion(-)
> create mode 100644 src/patches/zlib-fix-a-bug-that-can-crash-deflate-on-some-input-when-using-Z_FIXED.patch
>
> diff --git a/lfs/zlib b/lfs/zlib
> index 2c89b4803..c2386778d 100644
> --- a/lfs/zlib
> +++ b/lfs/zlib
> @@ -1,7 +1,7 @@
> ###############################################################################
> # #
> # IPFire.org - A linux based firewall #
> -# Copyright (C) 2007-2021 IPFire Team <info(a)ipfire.org> #
> +# Copyright (C) 2007-2022 IPFire Team <info(a)ipfire.org> #
> # #
> # This program is free software: you can redistribute it and/or modify #
> # it under the terms of the GNU General Public License as published by #
> @@ -78,6 +78,10 @@ $(TARGET) : $(patsubst %,$(DIR_DL)/%,$(objects))
> @$(PREBUILD)
> @rm -rf $(DIR_APP) && cd $(DIR_SRC) && tar axf $(DIR_DL)/$(DL_FILE)
> cd $(DIR_APP) && CROSS_PREFIX=$(CROSS_PREFIX) ./configure --prefix=$(PREFIX) --shared
> +
> + # https://www.openwall.com/lists/oss-security/2022/03/24/1
> + cd $(DIR_APP) && patch -Np1 -i $(DIR_SRC)/src/patches/zlib-fix-a-bug-that-can-crash-deflate-on-some-input-when-using-Z_FIXED.patch
> +
> cd $(DIR_APP) && make $(MAKETUNING)
> cd $(DIR_APP) && make install
>
> diff --git a/src/patches/zlib-fix-a-bug-that-can-crash-deflate-on-some-input-when-using-Z_FIXED.patch b/src/patches/zlib-fix-a-bug-that-can-crash-deflate-on-some-input-when-using-Z_FIXED.patch
> new file mode 100644
> index 000000000..2b46bcce5
> --- /dev/null
> +++ b/src/patches/zlib-fix-a-bug-that-can-crash-deflate-on-some-input-when-using-Z_FIXED.patch
> @@ -0,0 +1,338 @@
> +commit 5c44459c3b28a9bd3283aaceab7c615f8020c531
> +Author: Mark Adler <madler@alumni.caltech.edu>
> +Date: Tue Apr 17 22:09:22 2018 -0700
> +
> + Fix a bug that can crash deflate on some input when using Z_FIXED.
> +
> + This bug was reported by Danilo Ramos of Eideticom, Inc. It has
> + lain in wait 13 years before being found! The bug was introduced
> + in zlib 1.2.2.2, with the addition of the Z_FIXED option. That
> + option forces the use of fixed Huffman codes. For rare inputs with
> + a large number of distant matches, the pending buffer into which
> + the compressed data is written can overwrite the distance symbol
> + table which it overlays. That results in corrupted output due to
> + invalid distances, and can result in out-of-bound accesses,
> + crashing the application.
> +
> + The fix here combines the distance buffer and literal/length
> + buffers into a single symbol buffer. Now three bytes of pending
> + buffer space are opened up for each literal or length/distance
> + pair consumed, instead of the previous two bytes. This assures
> + that the pending buffer cannot overwrite the symbol table, since
> + the maximum fixed code compressed length/distance is 31 bits, and
> + since there are four bytes of pending space for every three bytes
> + of symbol space.
> +
> +diff --git a/deflate.c b/deflate.c
> +index 425babc..19cba87 100644
> +--- a/deflate.c
> ++++ b/deflate.c
> +@@ -255,11 +255,6 @@ int ZEXPORT deflateInit2_(strm, level, method, windowBits, memLevel, strategy,
> + int wrap = 1;
> + static const char my_version[] = ZLIB_VERSION;
> +
> +- ushf *overlay;
> +- /* We overlay pending_buf and d_buf+l_buf. This works since the average
> +- * output size for (length,distance) codes is <= 24 bits.
> +- */
> +-
> + if (version == Z_NULL || version[0] != my_version[0] ||
> + stream_size != sizeof(z_stream)) {
> + return Z_VERSION_ERROR;
> +@@ -329,9 +324,47 @@ int ZEXPORT deflateInit2_(strm, level, method, windowBits, memLevel, strategy,
> +
> + s->lit_bufsize = 1 << (memLevel + 6); /* 16K elements by default */
> +
> +- overlay = (ushf *) ZALLOC(strm, s->lit_bufsize, sizeof(ush)+2);
> +- s->pending_buf = (uchf *) overlay;
> +- s->pending_buf_size = (ulg)s->lit_bufsize * (sizeof(ush)+2L);
> ++ /* We overlay pending_buf and sym_buf. This works since the average size
> ++ * for length/distance pairs over any compressed block is assured to be 31
> ++ * bits or less.
> ++ *
> ++ * Analysis: The longest fixed codes are a length code of 8 bits plus 5
> ++ * extra bits, for lengths 131 to 257. The longest fixed distance codes are
> ++ * 5 bits plus 13 extra bits, for distances 16385 to 32768. The longest
> ++ * possible fixed-codes length/distance pair is then 31 bits total.
> ++ *
> ++ * sym_buf starts one-fourth of the way into pending_buf. So there are
> ++ * three bytes in sym_buf for every four bytes in pending_buf. Each symbol
> ++ * in sym_buf is three bytes -- two for the distance and one for the
> ++ * literal/length. As each symbol is consumed, the pointer to the next
> ++ * sym_buf value to read moves forward three bytes. From that symbol, up to
> ++ * 31 bits are written to pending_buf. The closest the written pending_buf
> ++ * bits gets to the next sym_buf symbol to read is just before the last
> ++ * code is written. At that time, 31*(n-2) bits have been written, just
> ++ * after 24*(n-2) bits have been consumed from sym_buf. sym_buf starts at
> ++ * 8*n bits into pending_buf. (Note that the symbol buffer fills when n-1
> ++ * symbols are written.) The closest the writing gets to what is unread is
> ++ * then n+14 bits. Here n is lit_bufsize, which is 16384 by default, and
> ++ * can range from 128 to 32768.
> ++ *
> ++ * Therefore, at a minimum, there are 142 bits of space between what is
> ++ * written and what is read in the overlain buffers, so the symbols cannot
> ++ * be overwritten by the compressed data. That space is actually 139 bits,
> ++ * due to the three-bit fixed-code block header.
> ++ *
> ++ * That covers the case where either Z_FIXED is specified, forcing fixed
> ++ * codes, or when the use of fixed codes is chosen, because that choice
> ++ * results in a smaller compressed block than dynamic codes. That latter
> ++ * condition then assures that the above analysis also covers all dynamic
> ++ * blocks. A dynamic-code block will only be chosen to be emitted if it has
> ++ * fewer bits than a fixed-code block would for the same set of symbols.
> ++ * Therefore its average symbol length is assured to be less than 31. So
> ++ * the compressed data for a dynamic block also cannot overwrite the
> ++ * symbols from which it is being constructed.
> ++ */
> ++
> ++ s->pending_buf = (uchf *) ZALLOC(strm, s->lit_bufsize, 4);
> ++ s->pending_buf_size = (ulg)s->lit_bufsize * 4;
> +
> + if (s->window == Z_NULL || s->prev == Z_NULL || s->head == Z_NULL ||
> + s->pending_buf == Z_NULL) {
> +@@ -340,8 +373,12 @@ int ZEXPORT deflateInit2_(strm, level, method, windowBits, memLevel, strategy,
> + deflateEnd (strm);
> + return Z_MEM_ERROR;
> + }
> +- s->d_buf = overlay + s->lit_bufsize/sizeof(ush);
> +- s->l_buf = s->pending_buf + (1+sizeof(ush))*s->lit_bufsize;
> ++ s->sym_buf = s->pending_buf + s->lit_bufsize;
> ++ s->sym_end = (s->lit_bufsize - 1) * 3;
> ++ /* We avoid equality with lit_bufsize*3 because of wraparound at 64K
> ++ * on 16 bit machines and because stored blocks are restricted to
> ++ * 64K-1 bytes.
> ++ */
> +
> + s->level = level;
> + s->strategy = strategy;
> +@@ -552,7 +589,7 @@ int ZEXPORT deflatePrime (strm, bits, value)
> +
> + if (deflateStateCheck(strm)) return Z_STREAM_ERROR;
> + s = strm->state;
> +- if ((Bytef *)(s->d_buf) < s->pending_out + ((Buf_size + 7) >> 3))
> ++ if (s->sym_buf < s->pending_out + ((Buf_size + 7) >> 3))
> + return Z_BUF_ERROR;
> + do {
> + put = Buf_size - s->bi_valid;
> +@@ -1113,7 +1150,6 @@ int ZEXPORT deflateCopy (dest, source)
> + #else
> + deflate_state *ds;
> + deflate_state *ss;
> +- ushf *overlay;
> +
> +
> + if (deflateStateCheck(source) || dest == Z_NULL) {
> +@@ -1133,8 +1169,7 @@ int ZEXPORT deflateCopy (dest, source)
> + ds->window = (Bytef *) ZALLOC(dest, ds->w_size, 2*sizeof(Byte));
> + ds->prev = (Posf *) ZALLOC(dest, ds->w_size, sizeof(Pos));
> + ds->head = (Posf *) ZALLOC(dest, ds->hash_size, sizeof(Pos));
> +- overlay = (ushf *) ZALLOC(dest, ds->lit_bufsize, sizeof(ush)+2);
> +- ds->pending_buf = (uchf *) overlay;
> ++ ds->pending_buf = (uchf *) ZALLOC(dest, ds->lit_bufsize, 4);
> +
> + if (ds->window == Z_NULL || ds->prev == Z_NULL || ds->head == Z_NULL ||
> + ds->pending_buf == Z_NULL) {
> +@@ -1148,8 +1183,7 @@ int ZEXPORT deflateCopy (dest, source)
> + zmemcpy(ds->pending_buf, ss->pending_buf, (uInt)ds->pending_buf_size);
> +
> + ds->pending_out = ds->pending_buf + (ss->pending_out - ss->pending_buf);
> +- ds->d_buf = overlay + ds->lit_bufsize/sizeof(ush);
> +- ds->l_buf = ds->pending_buf + (1+sizeof(ush))*ds->lit_bufsize;
> ++ ds->sym_buf = ds->pending_buf + ds->lit_bufsize;
> +
> + ds->l_desc.dyn_tree = ds->dyn_ltree;
> + ds->d_desc.dyn_tree = ds->dyn_dtree;
> +@@ -1925,7 +1959,7 @@ local block_state deflate_fast(s, flush)
> + FLUSH_BLOCK(s, 1);
> + return finish_done;
> + }
> +- if (s->last_lit)
> ++ if (s->sym_next)
> + FLUSH_BLOCK(s, 0);
> + return block_done;
> + }
> +@@ -2056,7 +2090,7 @@ local block_state deflate_slow(s, flush)
> + FLUSH_BLOCK(s, 1);
> + return finish_done;
> + }
> +- if (s->last_lit)
> ++ if (s->sym_next)
> + FLUSH_BLOCK(s, 0);
> + return block_done;
> + }
> +@@ -2131,7 +2165,7 @@ local block_state deflate_rle(s, flush)
> + FLUSH_BLOCK(s, 1);
> + return finish_done;
> + }
> +- if (s->last_lit)
> ++ if (s->sym_next)
> + FLUSH_BLOCK(s, 0);
> + return block_done;
> + }
> +@@ -2170,7 +2204,7 @@ local block_state deflate_huff(s, flush)
> + FLUSH_BLOCK(s, 1);
> + return finish_done;
> + }
> +- if (s->last_lit)
> ++ if (s->sym_next)
> + FLUSH_BLOCK(s, 0);
> + return block_done;
> + }
> +diff --git a/deflate.h b/deflate.h
> +index 23ecdd3..d4cf1a9 100644
> +--- a/deflate.h
> ++++ b/deflate.h
> +@@ -217,7 +217,7 @@ typedef struct internal_state {
> + /* Depth of each subtree used as tie breaker for trees of equal frequency
> + */
> +
> +- uchf *l_buf; /* buffer for literals or lengths */
> ++ uchf *sym_buf; /* buffer for distances and literals/lengths */
> +
> + uInt lit_bufsize;
> + /* Size of match buffer for literals/lengths. There are 4 reasons for
> +@@ -239,13 +239,8 @@ typedef struct internal_state {
> + * - I can't count above 4
> + */
> +
> +- uInt last_lit; /* running index in l_buf */
> +-
> +- ushf *d_buf;
> +- /* Buffer for distances. To simplify the code, d_buf and l_buf have
> +- * the same number of elements. To use different lengths, an extra flag
> +- * array would be necessary.
> +- */
> ++ uInt sym_next; /* running index in sym_buf */
> ++ uInt sym_end; /* symbol table full when sym_next reaches this */
> +
> + ulg opt_len; /* bit length of current block with optimal trees */
> + ulg static_len; /* bit length of current block with static trees */
> +@@ -325,20 +320,22 @@ void ZLIB_INTERNAL _tr_stored_block OF((deflate_state *s, charf *buf,
> +
> + # define _tr_tally_lit(s, c, flush) \
> + { uch cc = (c); \
> +- s->d_buf[s->last_lit] = 0; \
> +- s->l_buf[s->last_lit++] = cc; \
> ++ s->sym_buf[s->sym_next++] = 0; \
> ++ s->sym_buf[s->sym_next++] = 0; \
> ++ s->sym_buf[s->sym_next++] = cc; \
> + s->dyn_ltree[cc].Freq++; \
> +- flush = (s->last_lit == s->lit_bufsize-1); \
> ++ flush = (s->sym_next == s->sym_end); \
> + }
> + # define _tr_tally_dist(s, distance, length, flush) \
> + { uch len = (uch)(length); \
> + ush dist = (ush)(distance); \
> +- s->d_buf[s->last_lit] = dist; \
> +- s->l_buf[s->last_lit++] = len; \
> ++ s->sym_buf[s->sym_next++] = dist; \
> ++ s->sym_buf[s->sym_next++] = dist >> 8; \
> ++ s->sym_buf[s->sym_next++] = len; \
> + dist--; \
> + s->dyn_ltree[_length_code[len]+LITERALS+1].Freq++; \
> + s->dyn_dtree[d_code(dist)].Freq++; \
> +- flush = (s->last_lit == s->lit_bufsize-1); \
> ++ flush = (s->sym_next == s->sym_end); \
> + }
> + #else
> + # define _tr_tally_lit(s, c, flush) flush = _tr_tally(s, 0, c)
> +diff --git a/trees.c b/trees.c
> +index 4f4a650..decaeb7 100644
> +--- a/trees.c
> ++++ b/trees.c
> +@@ -416,7 +416,7 @@ local void init_block(s)
> +
> + s->dyn_ltree[END_BLOCK].Freq = 1;
> + s->opt_len = s->static_len = 0L;
> +- s->last_lit = s->matches = 0;
> ++ s->sym_next = s->matches = 0;
> + }
> +
> + #define SMALLEST 1
> +@@ -948,7 +948,7 @@ void ZLIB_INTERNAL _tr_flush_block(s, buf, stored_len, last)
> +
> + Tracev((stderr, "\nopt %lu(%lu) stat %lu(%lu) stored %lu lit %u ",
> + opt_lenb, s->opt_len, static_lenb, s->static_len, stored_len,
> +- s->last_lit));
> ++ s->sym_next / 3));
> +
> + if (static_lenb <= opt_lenb) opt_lenb = static_lenb;
> +
> +@@ -1017,8 +1017,9 @@ int ZLIB_INTERNAL _tr_tally (s, dist, lc)
> + unsigned dist; /* distance of matched string */
> + unsigned lc; /* match length-MIN_MATCH or unmatched char (if dist==0) */
> + {
> +- s->d_buf[s->last_lit] = (ush)dist;
> +- s->l_buf[s->last_lit++] = (uch)lc;
> ++ s->sym_buf[s->sym_next++] = dist;
> ++ s->sym_buf[s->sym_next++] = dist >> 8;
> ++ s->sym_buf[s->sym_next++] = lc;
> + if (dist == 0) {
> + /* lc is the unmatched char */
> + s->dyn_ltree[lc].Freq++;
> +@@ -1033,30 +1034,7 @@ int ZLIB_INTERNAL _tr_tally (s, dist, lc)
> + s->dyn_ltree[_length_code[lc]+LITERALS+1].Freq++;
> + s->dyn_dtree[d_code(dist)].Freq++;
> + }
> +-
> +-#ifdef TRUNCATE_BLOCK
> +- /* Try to guess if it is profitable to stop the current block here */
> +- if ((s->last_lit & 0x1fff) == 0 && s->level > 2) {
> +- /* Compute an upper bound for the compressed length */
> +- ulg out_length = (ulg)s->last_lit*8L;
> +- ulg in_length = (ulg)((long)s->strstart - s->block_start);
> +- int dcode;
> +- for (dcode = 0; dcode < D_CODES; dcode++) {
> +- out_length += (ulg)s->dyn_dtree[dcode].Freq *
> +- (5L+extra_dbits[dcode]);
> +- }
> +- out_length >>= 3;
> +- Tracev((stderr,"\nlast_lit %u, in %ld, out ~%ld(%ld%%) ",
> +- s->last_lit, in_length, out_length,
> +- 100L - out_length*100L/in_length));
> +- if (s->matches < s->last_lit/2 && out_length < in_length/2) return 1;
> +- }
> +-#endif
> +- return (s->last_lit == s->lit_bufsize-1);
> +- /* We avoid equality with lit_bufsize because of wraparound at 64K
> +- * on 16 bit machines and because stored blocks are restricted to
> +- * 64K-1 bytes.
> +- */
> ++ return (s->sym_next == s->sym_end);
> + }
> +
> + /* ===========================================================================
> +@@ -1069,13 +1047,14 @@ local void compress_block(s, ltree, dtree)
> + {
> + unsigned dist; /* distance of matched string */
> + int lc; /* match length or unmatched char (if dist == 0) */
> +- unsigned lx = 0; /* running index in l_buf */
> ++ unsigned sx = 0; /* running index in sym_buf */
> + unsigned code; /* the code to send */
> + int extra; /* number of extra bits to send */
> +
> +- if (s->last_lit != 0) do {
> +- dist = s->d_buf[lx];
> +- lc = s->l_buf[lx++];
> ++ if (s->sym_next != 0) do {
> ++ dist = s->sym_buf[sx++] & 0xff;
> ++ dist += (unsigned)(s->sym_buf[sx++] & 0xff) << 8;
> ++ lc = s->sym_buf[sx++];
> + if (dist == 0) {
> + send_code(s, lc, ltree); /* send a literal byte */
> + Tracecv(isgraph(lc), (stderr," '%c' ", lc));
> +@@ -1100,11 +1079,10 @@ local void compress_block(s, ltree, dtree)
> + }
> + } /* literal or match pair ? */
> +
> +- /* Check that the overlay between pending_buf and d_buf+l_buf is ok: */
> +- Assert((uInt)(s->pending) < s->lit_bufsize + 2*lx,
> +- "pendingBuf overflow");
> ++ /* Check that the overlay between pending_buf and sym_buf is ok: */
> ++ Assert(s->pending < s->lit_bufsize + sx, "pendingBuf overflow");
> +
> +- } while (lx < s->last_lit);
> ++ } while (sx < s->sym_next);
> +
> + send_code(s, END_BLOCK, ltree);
> + }
> --
> 2.34.1
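
Unrelated to the review itself: if anyone wants to smoke-test the patched library on the Z_FIXED code path, something along these lines should do (plain zlib API, nothing IPFire-specific, and of course not a reproducer for the corruption, which needs carefully crafted input with many distant matches):

    #include <stdio.h>
    #include <string.h>
    #include <zlib.h>

    int main(void)
    {
        unsigned char in[4096];
        unsigned char out[8192];
        z_stream strm;

        memset(in, 'A', sizeof(in));    /* trivial input, not the crashing pattern */
        memset(&strm, 0, sizeof(strm));

        /* Z_FIXED forces fixed Huffman codes - the path the patch hardens */
        if (deflateInit2(&strm, Z_DEFAULT_COMPRESSION, Z_DEFLATED,
                         15, 8, Z_FIXED) != Z_OK)
            return 1;

        strm.next_in = in;
        strm.avail_in = sizeof(in);
        strm.next_out = out;
        strm.avail_out = sizeof(out);

        if (deflate(&strm, Z_FINISH) != Z_STREAM_END) {
            deflateEnd(&strm);
            return 1;
        }
        printf("compressed %lu -> %lu bytes with Z_FIXED\n",
               (unsigned long)sizeof(in), (unsigned long)strm.total_out);

        return deflateEnd(&strm) == Z_OK ? 0 : 1;
    }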