How to decode Huffman codes quickly in C++?
Disclaimer: this page is a Chinese-English translation of a popular StackOverflow question, provided under the CC BY-SA 4.0 license. If you reuse it, you must follow the same CC BY-SA license, link to the original, and attribute it to the original authors (not me): StackOverflow
Original URL: http://stackoverflow.com/questions/2235208/
How to decode Huffman code quickly?
Asked by Jichao
I have implemented a simple compressor using pure Huffman coding on Windows, but I do not know much about how to decode the compressed file quickly. My bad algorithm is:
Enumerate all the Huffman codes in the code table, then compare each one with the bits in the compressed file. This gives a horrible result: decompressing a 3 MB file would need 6 hours.
Could you provide a much more efficient algorithm? Should I use a hash or something?
Update: I have implemented the decoder with a state table, based on my friend Lin's advice. This method should be better than traversing the Huffman tree: 3 MB within 6 s.
Thanks.
Answer by Steve314
One way to optimise the binary-tree approach is to use a lookup table. You arrange the table so that you can look up a particular encoded bit-pattern directly, allowing for the maximum possible bit-width of any code.
Since most codes don't use the full maximum width, they are included at multiple locations in the table - one location for each combination of the unused bits. The table indicates how many bits to discard from the input as well as the decoded output.
If the longest code is so long that the table is impractical, a compromise is to use a tree of smaller fixed-width-subscript lookups. For example, you can use a 256-item table to handle a byte. If the input code is more than 8 bits, the table entry indicates that decoding is incomplete and directs you to a table that handles the next up-to 8 bits. Larger tables trade memory for speed - 256 items is probably too small.
I believe this general approach is called "prefix tables", and is what BobMcGee's quoted code is doing. A likely difference is that some compression algorithms require the prefix table to be updated during decompression - this is not needed for simple Huffman. IIRC, I first saw it in a book about bitmapped graphics file formats which included GIF, some time before the patent panic.
It should be easy to precalculate either a full lookup table, a hashtable equivalent, or a tree-of-small-tables from a binary tree model. The binary tree is still the key representation of the code - the lookup table is just an optimisation.
Answer by BobMcGee
Why not take a look at how the GZIP source does it, specifically the Huffman decompression code in unpack.c? It's doing exactly what you are, except it's doing it much, much faster.
From what I can tell, it's using a lookup array and whole-word shift/mask operations to run faster. Pretty dense code, though.
EDIT: here is the complete source
/* unpack.c -- decompress files in pack format.
 * Copyright (C) 1992-1993 Jean-loup Gailly
 * This is free software; you can redistribute it and/or modify it under the
 * terms of the GNU General Public License, see the file COPYING.
 */

#ifdef RCSID
static char rcsid[] = "$Id: unpack.c,v 1.4 1993/06/11 19:25:36 jloup Exp $";
#endif

#include "tailor.h"
#include "gzip.h"
#include "crypt.h"

#define MIN(a,b) ((a) <= (b) ? (a) : (b))
/* The arguments must not have side effects. */

#define MAX_BITLEN 25
/* Maximum length of Huffman codes. (Minor modifications to the code
 * would be needed to support 32 bits codes, but pack never generates
 * more than 24 bits anyway.)
 */

#define LITERALS 256
/* Number of literals, excluding the End of Block (EOB) code */

#define MAX_PEEK 12
/* Maximum number of 'peek' bits used to optimize traversal of the
 * Huffman tree.
 */

local ulg orig_len;  /* original uncompressed length */
local int max_len;   /* maximum bit length of Huffman codes */

local uch literal[LITERALS];
/* The literal bytes present in the Huffman tree. The EOB code is not
 * represented.
 */

local int lit_base[MAX_BITLEN+1];
/* All literals of a given bit length are contiguous in literal[] and
 * have contiguous codes. literal[code+lit_base[len]] is the literal
 * for a code of len bits.
 */

local int leaves [MAX_BITLEN+1]; /* Number of leaves for each bit length */
local int parents[MAX_BITLEN+1]; /* Number of parents for each bit length */

local int peek_bits; /* Number of peek bits currently used */

/* local uch prefix_len[1 << MAX_PEEK]; */
#define prefix_len outbuf
/* For each bit pattern b of peek_bits bits, prefix_len[b] is the length
 * of the Huffman code starting with a prefix of b (upper bits), or 0
 * if all codes of prefix b have more than peek_bits bits. It is not
 * necessary to have a huge table (large MAX_PEEK) because most of the
 * codes encountered in the input stream are short codes (by construction).
 * So for most codes a single lookup will be necessary.
 */
#if (1<<MAX_PEEK) > OUTBUFSIZ
    error cannot overlay prefix_len and outbuf
#endif

local ulg bitbuf;
/* Bits are added on the low part of bitbuf and read from the high part. */

local int valid; /* number of valid bits in bitbuf */
/* all bits above the last valid bit are always zero */

/* Set code to the next 'bits' input bits without skipping them. code
 * must be the name of a simple variable and bits must not have side effects.
 * IN assertions: bits <= 25 (so that we still have room for an extra byte
 * when valid is only 24), and mask = (1<<bits)-1.
 */
#define look_bits(code,bits,mask) \
{ \
  while (valid < (bits)) bitbuf = (bitbuf<<8) | (ulg)get_byte(), valid += 8; \
  code = (bitbuf >> (valid-(bits))) & (mask); \
}

/* Skip the given number of bits (after having peeked at them): */
#define skip_bits(bits) (valid -= (bits))

#define clear_bitbuf() (valid = 0, bitbuf = 0)

/* Local functions */
local void read_tree  OF((void));
local void build_tree OF((void));

/* ===========================================================================
 * Read the Huffman tree.
 */
local void read_tree()
{
    int len;  /* bit length */
    int base; /* base offset for a sequence of leaves */
    int n;

    /* Read the original input size, MSB first */
    orig_len = 0;
    for (n = 1; n <= 4; n++) orig_len = (orig_len << 8) | (ulg)get_byte();

    max_len = (int)get_byte(); /* maximum bit length of Huffman codes */
    if (max_len > MAX_BITLEN) {
        error("invalid compressed data -- Huffman code > 32 bits");
    }

    /* Get the number of leaves at each bit length */
    n = 0;
    for (len = 1; len <= max_len; len++) {
        leaves[len] = (int)get_byte();
        n += leaves[len];
    }
    if (n > LITERALS) {
        error("too many leaves in Huffman tree");
    }
    Trace((stderr, "orig_len %ld, max_len %d, leaves %d\n",
           orig_len, max_len, n));
    /* There are at least 2 and at most 256 leaves of length max_len.
     * (Pack arbitrarily rejects empty files and files consisting of
     * a single byte even repeated.) To fit the last leaf count in a
     * byte, it is offset by 2. However, the last literal is the EOB
     * code, and is not transmitted explicitly in the tree, so we must
     * adjust here by one only.
     */
    leaves[max_len]++;

    /* Now read the leaves themselves */
    base = 0;
    for (len = 1; len <= max_len; len++) {
        /* Remember where the literals of this length start in literal[] : */
        lit_base[len] = base;
        /* And read the literals: */
        for (n = leaves[len]; n > 0; n--) {
            literal[base++] = (uch)get_byte();
        }
    }
    leaves[max_len]++; /* Now include the EOB code in the Huffman tree */
}

/* ===========================================================================
 * Build the Huffman tree and the prefix table.
 */
local void build_tree()
{
    int nodes = 0; /* number of nodes (parents+leaves) at current bit length */
    int len;       /* current bit length */
    uch *prefixp;  /* pointer in prefix_len */

    for (len = max_len; len >= 1; len--) {
        /* The number of parent nodes at this level is half the total
         * number of nodes at parent level:
         */
        nodes >>= 1;
        parents[len] = nodes;
        /* Update lit_base by the appropriate bias to skip the parent nodes
         * (which are not represented in the literal array):
         */
        lit_base[len] -= nodes;
        /* Restore nodes to be parents+leaves: */
        nodes += leaves[len];
    }
    /* Construct the prefix table, from shortest leaves to longest ones.
     * The shortest code is all ones, so we start at the end of the table.
     */
    peek_bits = MIN(max_len, MAX_PEEK);
    prefixp = &prefix_len[1<<peek_bits];
    for (len = 1; len <= peek_bits; len++) {
        int prefixes = leaves[len] << (peek_bits-len); /* may be 0 */
        while (prefixes--) *--prefixp = (uch)len;
    }
    /* The length of all other codes is unknown: */
    while (prefixp > prefix_len) *--prefixp = 0;
}

/* ===========================================================================
 * Unpack in to out. This routine does not support the old pack format
 * with magic header 77.
 *
 * IN assertions: the buffer inbuf contains already the beginning of
 * the compressed data, from offsets inptr to insize-1 included.
 * The magic header has already been checked. The output buffer is cleared.
 */
int unpack(in, out)
    int in, out; /* input and output file descriptors */
{
    int len;                /* Bit length of current code */
    unsigned eob;           /* End Of Block code */
    register unsigned peek; /* lookahead bits */
    unsigned peek_mask;     /* Mask for peek_bits bits */

    ifd = in;
    ofd = out;

    read_tree();    /* Read the Huffman tree */
    build_tree();   /* Build the prefix table */
    clear_bitbuf(); /* Initialize bit input */
    peek_mask = (1<<peek_bits)-1;

    /* The eob code is the largest code among all leaves of maximal length: */
    eob = leaves[max_len]-1;
    Trace((stderr, "eob %d %x\n", max_len, eob));

    /* Decode the input data: */
    for (;;) {
        /* Since eob is the longest code and not shorter than max_len,
         * we can peek at max_len bits without having the risk of reading
         * beyond the end of file.
         */
        look_bits(peek, peek_bits, peek_mask);
        len = prefix_len[peek];
        if (len > 0) {
            peek >>= peek_bits - len; /* discard the extra bits */
        } else {
            /* Code of more than peek_bits bits, we must traverse the tree */
            ulg mask = peek_mask;
            len = peek_bits;
            do {
                len++, mask = (mask<<1)+1;
                look_bits(peek, len, mask);
            } while (peek < (unsigned)parents[len]);
            /* loop as long as peek is a parent node */
        }
        /* At this point, peek is the next complete code, of len bits */
        if (peek == eob && len == max_len) break; /* end of file? */
        put_ubyte(literal[peek+lit_base[len]]);
        Tracev((stderr,"%02d %04x %c\n", len, peek,
                literal[peek+lit_base[len]]));
        skip_bits(len);
    } /* for (;;) */
    flush_window();
    Trace((stderr, "bytes_out %ld\n", bytes_out));
    if (orig_len != (ulg)bytes_out) {
        error("invalid compressed data--length error");
    }
    return OK;
}
Answer by unwind
The typical way to decompress a Huffman code is using a binary tree. You insert your codes in the tree, so that each bit in a code represents a branch either to the left (0) or right (1), with decoded bytes (or whatever values you have) in the leaves.
Decoding is then just a matter of reading bits from the coded content, walking the tree for each bit. When you reach a leaf, emit that decoded value, and keep reading until the input is exhausted.
Update: this page describes the technique, and has fancy graphics.
Answer by Charles Stewart
You can perform a kind of batch lookup on the usual Huffman tree lookup:
- Choose a bit depth (call it depth n); this is a trade-off between speed, memory, and the time invested to construct tables;
- Build a lookup table for all 2^n bit strings of length n. Each entry may encode several complete tokens; there will commonly also be some bits left over that are only a prefix of Huffman codes: for each of these, make a link to a further lookup table for that code;
- Build the further lookup tables. The total number of tables is at most one less than the number of entries coded in the Huffman tree.
Choosing a depth that is a multiple of four, e.g., depth 8, is a good fit for bit shifting operations.
Postscript: This differs from the idea in potatoswatter's comment on unwind's answer, and from Steve314's answer, in using multiple tables: this means that all of the n-bit lookup is put to use, so it should be faster, but it makes table construction and lookup significantly trickier, and will consume much more space for a given depth.
Answer by wallyk
Why not use the decompress algorithm in the same source module? It appears to be a decent algorithm.