/*
 *  LZO1X Decompressor from MiniLZO
 *
 *  Copyright (C) 1996-2005 Markus F.X.J. Oberhumer <markus@oberhumer.com>
 *
 *  The full LZO package can be found at:
 *  http://www.oberhumer.com/opensource/lzo/
 *
 *  Changed for kernel use by:
 *  Nitin Gupta <nitingupta910@gmail.com>
 *  Richard Purdie <rpurdie@openedhand.com>
 */

#ifndef STATIC
#include <linux/module.h>
#include <linux/kernel.h>
#endif

#include <asm/unaligned.h>
#include <linux/lzo.h>
#include "lzodefs.h"
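
/*
 * Bounds-check helpers: HAVE_IP()/HAVE_OP() are true when fewer than x
 * bytes are left in the input/output buffer, and HAVE_LB() is true when a
 * match position lies outside the output produced so far.
 */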
#define HAVE_IP(x, ip_end, ip) ((size_t)(ip_end - ip) < (x))
#define HAVE_OP(x, op_end, op) ((size_t)(op_end - op) < (x))
#define HAVE_LB(m_pos, out, op) (m_pos < out || m_pos >= op)

#define COPY4(dst, src)	\
		put_unaligned(get_unaligned((const u32 *)(src)), (u32 *)(dst))

/*
 * lzo1x_decompress_safe - decompress one LZO1X block, checking input and
 * output buffer bounds on every access.
 * @in:      compressed data
 * @in_len:  number of compressed bytes available at @in
 * @out:     buffer for the decompressed data
 * @out_len: on entry the size of @out, on exit the number of bytes written
 *
 * Returns LZO_E_OK on success, otherwise an LZO_E_* error code.  *out_len
 * is updated on every exit path, including the error paths.
 */
int lzo1x_decompress_safe(const unsigned char *in, size_t in_len,
			unsigned char *out, size_t *out_len)
{
	const unsigned char * const ip_end = in + in_len;
	unsigned char * const op_end = out + *out_len;
	const unsigned char *ip = in, *m_pos;
	unsigned char *op = out;
	size_t t;

	*out_len = 0;

	/*
	 * A first byte above 17 encodes an initial literal run of
	 * (*ip - 17) bytes.
	 */
	if (*ip > 17) {
		t = *ip++ - 17;
		if (t < 4)
			goto match_next;
		if (HAVE_OP(t, op_end, op))
			goto output_overrun;
		if (HAVE_IP(t + 1, ip_end, ip))
			goto input_overrun;
		do {
			*op++ = *ip++;
		} while (--t > 0);
		goto first_literal_run;
	}

	while ((ip < ip_end)) {
		t = *ip++;
		if (t >= 16)
			goto match;
		/*
		 * Literal run: t == 0 means the run length continues in the
		 * following bytes, 255 per extra zero byte.
		 */
		if (t == 0) {
			if (HAVE_IP(1, ip_end, ip))
				goto input_overrun;
			while (*ip == 0) {
				t += 255;
				ip++;
				if (HAVE_IP(1, ip_end, ip))
					goto input_overrun;
			}
			t += 15 + *ip++;
		}
		if (HAVE_OP(t + 3, op_end, op))
			goto output_overrun;
		if (HAVE_IP(t + 4, ip_end, ip))
			goto input_overrun;

		COPY4(op, ip);
		op += 4;
		ip += 4;
		if (--t > 0) {
			if (t >= 4) {
				do {
					COPY4(op, ip);
					op += 4;
					ip += 4;
					t -= 4;
				} while (t >= 4);
				if (t > 0) {
					do {
						*op++ = *ip++;
					} while (--t > 0);
				}
			} else {
				do {
					*op++ = *ip++;
				} while (--t > 0);
			}
		}

first_literal_run:
		t = *ip++;
		if (t >= 16)
			goto match;
		m_pos = op - (1 + M2_MAX_OFFSET);
		m_pos -= t >> 2;
		m_pos -= *ip++ << 2;

		if (HAVE_LB(m_pos, out, op))
			goto lookbehind_overrun;

		if (HAVE_OP(3, op_end, op))
			goto output_overrun;
		*op++ = *m_pos++;
		*op++ = *m_pos++;
		*op++ = *m_pos;

		goto match_done;

		/*
		 * Match decoding: the upper bits of t select the match
		 * format (and therefore how the distance and length are
		 * read); in the t >= 16 format a zero distance marks the
		 * end of the stream.
		 */
		do {
match:
			if (t >= 64) {
				m_pos = op - 1;
				m_pos -= (t >> 2) & 7;
				m_pos -= *ip++ << 3;
				t = (t >> 5) - 1;
				if (HAVE_LB(m_pos, out, op))
					goto lookbehind_overrun;
				if (HAVE_OP(t + 3 - 1, op_end, op))
					goto output_overrun;
				goto copy_match;
			} else if (t >= 32) {
				t &= 31;
				if (t == 0) {
					if (HAVE_IP(1, ip_end, ip))
						goto input_overrun;
					while (*ip == 0) {
						t += 255;
						ip++;
						if (HAVE_IP(1, ip_end, ip))
							goto input_overrun;
					}
					t += 31 + *ip++;
				}
				m_pos = op - 1;
				m_pos -= get_unaligned_le16(ip) >> 2;
				ip += 2;
			} else if (t >= 16) {
				m_pos = op;
				m_pos -= (t & 8) << 11;

				t &= 7;
				if (t == 0) {
					if (HAVE_IP(1, ip_end, ip))
						goto input_overrun;
					while (*ip == 0) {
						t += 255;
						ip++;
						if (HAVE_IP(1, ip_end, ip))
							goto input_overrun;
					}
					t += 7 + *ip++;
				}
				m_pos -= get_unaligned_le16(ip) >> 2;
				ip += 2;
				if (m_pos == op)
					goto eof_found;
				m_pos -= 0x4000;
			} else {
				m_pos = op - 1;
				m_pos -= t >> 2;
				m_pos -= *ip++ << 2;

				if (HAVE_LB(m_pos, out, op))
					goto lookbehind_overrun;
				if (HAVE_OP(2, op_end, op))
					goto output_overrun;

				*op++ = *m_pos++;
				*op++ = *m_pos;
				goto match_done;
			}

			if (HAVE_LB(m_pos, out, op))
				goto lookbehind_overrun;
			if (HAVE_OP(t + 3 - 1, op_end, op))
				goto output_overrun;

			if (t >= 2 * 4 - (3 - 1) && (op - m_pos) >= 4) {
				COPY4(op, m_pos);
				op += 4;
				m_pos += 4;
				t -= 4 - (3 - 1);
				do {
					COPY4(op, m_pos);
					op += 4;
					m_pos += 4;
					t -= 4;
				} while (t >= 4);
				if (t > 0)
					do {
						*op++ = *m_pos++;
					} while (--t > 0);
			} else {
copy_match:
				*op++ = *m_pos++;
				*op++ = *m_pos++;
				do {
					*op++ = *m_pos++;
				} while (--t > 0);
			}
match_done:
			/*
			 * The low two bits of ip[-2] give the number of
			 * trailing literals (0..3) after the match; zero
			 * returns control to the outer loop.
			 */
			t = ip[-2] & 3;
			if (t == 0)
				break;
match_next:
			if (HAVE_OP(t, op_end, op))
				goto output_overrun;
			if (HAVE_IP(t + 1, ip_end, ip))
				goto input_overrun;

			*op++ = *ip++;
			if (t > 1) {
				*op++ = *ip++;
				if (t > 2)
					*op++ = *ip++;
			}

			t = *ip++;
		} while (ip < ip_end);
	}

	/* The input ran out before the end-of-stream marker was seen. */
	*out_len = op - out;
	return LZO_E_EOF_NOT_FOUND;

eof_found:
	*out_len = op - out;
	return (ip == ip_end ? LZO_E_OK :
		(ip < ip_end ? LZO_E_INPUT_NOT_CONSUMED : LZO_E_INPUT_OVERRUN));
input_overrun:
	*out_len = op - out;
	return LZO_E_INPUT_OVERRUN;

output_overrun:
	*out_len = op - out;
	return LZO_E_OUTPUT_OVERRUN;

lookbehind_overrun:
	*out_len = op - out;
	return LZO_E_LOOKBEHIND_OVERRUN;
}
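
/*
 * Example call, given only as an illustrative sketch (the buffer names
 * and OUT_BUF_SIZE below are hypothetical, not defined in this file):
 *
 *	unsigned char dst[OUT_BUF_SIZE];
 *	size_t dst_len = sizeof(dst);
 *	int err = lzo1x_decompress_safe(src, src_len, dst, &dst_len);
 *
 * On LZO_E_OK, dst_len holds the number of decompressed bytes; on any
 * error it holds the number of bytes written before the error was found.
 */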
					
						
#ifndef STATIC
EXPORT_SYMBOL_GPL(lzo1x_decompress_safe);

MODULE_LICENSE("GPL");
MODULE_DESCRIPTION("LZO1X Decompressor");

#endif