
Conference 7.286::digital

Title:The Digital way of working
Moderator:QUARK::LIONELON
Created:Fri Feb 14 1986
Last Modified:Fri Jun 06 1997
Last Successful Update:Fri Jun 06 1997
Number of topics:5321
Total number of notes:139771

3441.0. "Data compression : where to go for help" by DUGROS::ROSS (Living large, earning small) Thu Oct 13 1994 13:23

I have an acquaintance who seems to be a potential "boy-wonder" in
software development.   He claims to have a working program that can
compress 10MB of data to 10 bytes.    I have no idea how he does it,
but right now the performance is the biggest issue (i.e. it takes hours
of compute time).

Questions:

	a) Is this "interesting" or "unbelievable" or "who cares?"

	b) If it is interesting, who in Digital would be a good contact
	   for this person to talk to?

Thanks.
T.R  Title  User  Personal Name  Date  Lines
3441.1QUARK::LIONELFree advice is worth every centThu Oct 13 1994 13:266
Unbelievable, unless one can be extremely selective as to what the contents
of the 10MB are.  (I can represent 10MB of zeroes in four bytes.)

I wouldn't waste any effort on this.

				Steve
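
A minimal sketch of the "10MB of zeroes in four bytes" point, written as a
degenerate run-length scheme (the constants and variable names here are purely
illustrative; it says nothing about compressing arbitrary data):

#include <stdio.h>

int main(void)
{
    unsigned long run = 10UL * 1024 * 1024;   /* ten megabytes of zero bytes */
    unsigned char compressed[4];

    /* four bytes are enough to hold any repeat count up to 2^32 - 1 */
    compressed[0] = (unsigned char)(run >> 24);
    compressed[1] = (unsigned char)(run >> 16);
    compressed[2] = (unsigned char)(run >>  8);
    compressed[3] = (unsigned char)(run);

    printf("%lu zero bytes represented in %u bytes\n",
           run, (unsigned)sizeof compressed);
    return 0;
}

The trick only works because the decompressor already knows the one byte value
being repeated; that is exactly the kind of selectivity Steve means.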
3441.2I do not take thatBIS1::WAUTERSThu Oct 13 1994 13:314
    I feel like a super "boy-wonder" today : I can compress 10MB down to 0
    bytes if you want. But don't ask me to uncompress it ...
    
    For me what you say is "unbelievable" !
3441.3see comp.compression.research ZIGLAR::FOXWELLOSF/ULTRIX/UNIX SpecialistThu Oct 13 1994 13:393
    See the extensive discussions on the (Internet) newsgroup,
    comp.compression.research.
    
3441.4Magic beans, anyone?DPDMAI::EYSTERFluoride&Prozac/NoCavities/No prob!Thu Oct 13 1994 13:397
>	b) If it is interesting, who in Digital would be a good contact
>	   for this person to talk to?
    
    It IS interesting!...but the best person would be the company
    psychiatrist.
    
    								Tex
3441.5Sure!ANARKY::BREWERnevermind....Thu Oct 13 1994 14:194
    
    	This is easily accomplished using a device called a Write Only
    Memory...
    	/john
3441.6ThanksMSDOA::MCLEODThu Oct 13 1994 17:342
    Thanks guys, I needed a good laugh today!
    
3441.7TLE::REAGANAll of this chaos makes perfect senseThu Oct 13 1994 17:504
    Drat!  and I just finished my "10MB to 11 bytes" algorithm!  Boy am I
    depressed!
    
    				-John
3441.8Glad to provide some laughs :-)DUGROS::ROSSLiving large, earning smallThu Oct 13 1994 18:104
I must be a glutton for punishment, but can someone explain why this
is so ludicrous?   This person has run his algorithms by math 
professors at Clemson and apparently they  didn't laugh   at him.

3441.9OK, I'll byteKELVIN::SCHMIDTCynical OptimistThu Oct 13 1994 18:319
    
        Think about it.  How can you reduce information content by 
        a factor of a million and come up with anything that resembles 
        the original?  Unless only one part per million or so in the 
        original contains any information.
    
    
        Peter
    
3441.10SDOS01::WALDO_IRThu Oct 13 1994 18:398
    I can envision such compression; it would require many passes to get
    that ratio.  I do have a problem with decompression: path selection for
    recreating the data would have to be a nightmare.  The real question
    becomes "Is that magnitude of compression worth the compute time in
    both directions?"
    
    Being on the left coast I would not have any idea whom to contact, but I
    have met some really brilliant people from DEC Palo Alto.
3441.11Wait a minute - this could be usefulBBRDGE::LOVELL� l'eau; c'est l'heureThu Oct 13 1994 18:4612
�	b) If it is interesting, who in Digital would be a good contact
�	   for this person to talk to?
    
    This algorithm (if commercially realisable) would appear
    to have a number of immediate practical applications within
    Digital in the area of corporate infrastructure (e.g. VP pyramids,
    stock option consolidation, etc.).
    
    The fact that he hasn't yet mastered the decompression 
    stage should not necessarily be viewed as a disadvantage.  :-)
    
    /Chris.
3441.12QUARK::LIONELFree advice is worth every centThu Oct 13 1994 18:5016
    It's just not possible to do a lossless compression which even comes
    close to approaching a million-to-one ratio on average data.  If
    you don't mind losing most of the information, then anything is
    possible.
    
    Consider that 10 bytes represents at most 2**80 combinations.
    10MB is 2**80000000 combinations.  Something has to give.
    
    The best lossless compression methods give ratios of 3:1, but you
    can get this only when the data has inherent redundancy (ASCII text
    tends to contain repetitions which can be compactly represented in
    a dictionary - this is the basis of the Lempel-Ziv algorithm.)  If
    you're talking random binary data, 2:1 seems to be about the best
    you reasonably get to.
    
    						Steve
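
The counting argument above, spelled out as a tiny program (illustrative only;
it follows the reply's convention of treating 10MB as 10,000,000 bytes):

#include <stdio.h>

int main(void)
{
    double output_bits = 10.0 * 8.0;         /* 10 bytes                   */
    double input_bits  = 10000000.0 * 8.0;   /* 10 MB, taken as 10^7 bytes */

    printf("distinct 10-byte outputs : 2^%.0f\n", output_bits);
    printf("distinct 10MB inputs     : 2^%.0f\n", input_bits);
    printf("inputs forced to share each output: 2^%.0f\n",
           input_bits - output_bits);
    return 0;
}

Since a lossless scheme must map each output back to exactly one input, all but
a vanishing fraction of 10MB files simply cannot be represented in 10 bytes.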
3441.13You DEFINITELY provided some chucklesDPDMAI::EYSTERFluoride&Prozac/NoCavities/No prob!Thu Oct 13 1994 19:2826
>I must be a glutton for punishment, but can someone explain why this
>is so ludicrous?   This person has run his algorithms by math 
>professors at Clemson and apparently they  didn't laugh   at him.
    
    This might explain why so few professors leave Academia for Reality. :^]
    
    OK, OK, I envision it went something like this...
    
    "Vell, Johnny, dat ist veeerrrrry interestink vhat you have dere, ja? 
    Holt on, I do dis snuff to clleearrr up mine sinuses (snooooooooort). 
    Ah, ja, dat is better, ja?  Vere ver ve, Johnny?  Oh, ja, ja, der
    compressionk alkorithm, ja, I remember now."
    
    "Vell, Johnny, have you ever made it verk?  No?  Good, good!  Dis is a
    COLLEGE, ja?  Not vun of Bill Gate's shtinky factories, nien!  Ve
    search vor knowledge here, not truth!  Scusen, please
    (snoooooooooort....sniff, snifff....snooooooooooort)  Ah, ja, much
    better!"
    
    "Yes, Johnny, I tink I begin to see it now!  I must aks for und biiig
    research grant mach-schnell, nien?  Dis ist no laughink matter, ja
    wohl? (Sniff)  Johnny, you are a verrrrry bright liebchkin.  You
    (sniff) vill be my assistink ven der grant comes in, nien?  Here,
    Johnny, try this snuff...."
    
    					One Texan's Guess :^]
3441.14Run away from this guySTAR::BUDAI am the NRAThu Oct 13 1994 19:2916
RE: Note 3441.0 by DUGROS::ROSS

>-< Data compression : where to go for help >-

Best place for help for this guy is the shrink...

>I have an acquaintance who seems to be a potential "boy-wonder" in
>software development.   He claims to have a working program that can
>compress 10MB of data to 10 bytes.    I have no idea how he does it,
>but right now the performance is the biggest issue (i.e. it takes hours
>of compute time).

The guy is SO FULL OF IT...  Take anything this guy says with an ocean of salt.
It is IMPOSSIBLE to do with random data -- 10mb to 80 bits...

	- mark
3441.15WRKSYS::SEILERLarry SeilerThu Oct 13 1994 19:355
    But Mark!  If you reduce the data by 2:1 when you compress it,
    you just have to compress it 20 times to get from 10MB to 10B!
    
    	:-)   (that's a smiley, for the humor-impaired)
    	Larry
3441.16.0001% JPEG?KLUSTR::SOUTHY::GardnerSouthie MudsharkThu Oct 13 1994 22:2712
	compression of this magnitude would (have to) be *extremely*
	lossy (that's not lousy misspelled ;-)....as such, meaningful
	decompression would be incredibly content dependent......such
	an algorithm would be useful only to an extremely small subset
	of applications requiring compression techniques (like really
	close to none)..........

	I can't help but wonder if .0 has misposted/misinterpreted
	his "boy-wonder" especially with the stated lack of understanding
	of compression..........

	_kelley
3441.17RotN, Part IVDUGROS::ROSSLiving large, earning smallThu Oct 13 1994 23:3714
>	I can't help but wonder if .0 has misposted/misinterpreted
>	his "boy-wonder" especially with the stated lack of understanding
>	of compression..........

You're right... I misinterpreted... ignorance is truly not bliss. 

The correct claimed compression ratio is 1000:1 ... he has done 32MB to 32K,
not theoretically, but on an Alpha under Windows NT.  When I asked for further
clarification, he talked about searching for patterns and shifting series of
bits in 2-, 3-, and 4-dimensional matrices.  As an earlier reply mentioned, it
is a time-consuming operation.

I'm glad some people got a good laugh out of this... hope nobody broke 
their slide rule in the process.   
3441.18How big is the program?PEKING::RICKETTSKIt sucks - change it!Fri Oct 14 1994 05:136
      If he can compress the size of the program as well, NASA would be
    extremely interested. The Galileo probe is going to be sending piccies
    of Jupiter back at 40 bits/second; even somewhat lossy 1000:1 compression
    would be vastly better than what they have now.
    
    Ken
3441.1910 years too late for us !BONNET::WLODEKNetwork pathologist.Fri Oct 14 1994 05:2219
    
    The guy actually made a very intelligent choice in approaching Digital.
    We are not specialists in compression ( but he is ), while we are best in
    class in decompression. Look at any business/marketing plan for things
    like DECmcc, the 9000, etc. ( just to take the old and obvious examples).
    The 10 bytes of reality were easily decompressed into Mbytes.

    Only the timing is not good; he arrives too late. There aren't billions
    left to spend on 10 bytes anymore.

    On the other hand, given a specific 10 Mbyte file, one can calculate
    exactly the minimum number of bits needed to represent it
    without losing a bit of data, and this without actually proposing any
    compression scheme. This is the subject of "information theory",
    developed by a well known bicycle collector and mathematician in the '50s.
    If my memory is right the relevant quantity is something like "information
    entropy". One thing is obvious though: to represent a random 10 Mbyte
    stream ( truly random!) one needs ... 10 Mbytes of data!

3441.20Lets stay civilised.BONNET::WLODEKNetwork pathologist.Fri Oct 14 1994 05:2911
    
    I'm bit upset by a tone of some of the replies , that directly question
    the young engineers mental health. Dear moderators, please don't delete
    these notes, the best punishment is to let stay notes that witness of
    somebodies primitive and vulgar habits.

    I have always had lots of "bright" ideas and one of the great things
    about Digital culture is that many really competent people took time, 
    patience and humour to explain problems with these ideas and thus educate 
    me and other listeners. 

3441.21humor is politer than scornWRKSYS::SEILERLarry SeilerFri Oct 14 1994 07:5923
    re .20:  No one has posted humorous responses about the claimed
    1000:1 compression.  This is an astounding number but not completely
    absurd.  The 1M:1 number *is* completely absurd -- so much so that
    humor is the politest and most civilized possible response!  People
    should respect revolutionary ideas but people who correctly identify
    something as absurd should not be labeled primitive and vulgar.
    
    About the 32M file compressed to 32K, though -- just what sort of
    data is the boy-wonder compressing?  Is this image data?  Is it 
    random ascii text?  There's really no way to comment on the
    plausibility of the claim without knowing something about the
    data that is being compressed.  For example (humor alert) existing
    image-compression algorithms achieve this ratio if the picture is
    of a black cat in a coal cellar at midnight.  Lossless, too!
    
    I doubt that this would be of value for space probes -- they have
    limited bandwidth to transmit pictures, but they also have limited
    computing power on-board.  Is the decompression as compute-intensive
    as the compression?  Is it loss-less or lossy?  If lossy, is it good
    quality or lousy?  :-)  Inquiring minds want to know!
    
    	Enjoy,
    	Larry
3441.22worksGVPROD::DOIGTE::ChisholmFri Oct 14 1994 08:129
re -.1

>    For example (humor alert) existing
>    image-compression algorithms achieve this ratio if the picture is
>    of a black cat in a coal cellar at midnight.  Lossless, too!

This even works when the cat moves...


3441.23Secrets of the Universe, Chapter 4AIMHI::TINIUSIt's always something.Fri Oct 14 1994 08:379
In 3441.13, DPDMAI::EYSTER writes:

>    This might explain why so few professors leave Academia for Reality. :^]
    
During my time at Ohio State I learned why there is so much knowledge to be
found at colleges and universities: each freshman (freshperson?) brings in
a little, and each graduate leaves with none...the knowledge accumulates!

-stephen
3441.24LANDO::CANSLERFri Oct 14 1994 09:497
    
    it is called vision........... or is the earth still flat.
    
    sorry I almost fell off the edge.
    
    bc
    
3441.25Cheeky get this one, 'eh?DPDMAI::EYSTERFluoride&Prozac/NoCavities/No prob!Fri Oct 14 1994 10:2516
    In 3441.20 WLODEK writes:
    
>    I'm bit upset by a tone of some of the replies , that directly question
>    the young engineers mental health. Dear moderators, please don't delete
>    these notes, the best punishment is to let stay notes that witness of
>    somebodies primitive and vulgar habits.
    
>    I have always had lots of "bright" ideas...
    
    And humility...
    
    and a sense of humor...
    
    but no grammar checker.
    
    						:^]  Tex
3441.26An attempt at a useful reply.PASTIS::MONAHANhumanity is a trojan horseFri Oct 14 1994 11:5737
    	The group in DEC that he should go to is the group in DEC that is
    interested in the type of data for which he is getting such compression
    ratios.
    
    	Not all data can be usefully compressed. The theory about this is
    based on Shannon's work on information theory (published in 1948). The
    fallacy that all data can be compressed can be exposed, as already
    mentioned, by the example of repeated compression - if an algorithm
    *always* reduced the amount of data, then repeated application of the
    algorithm would eventually reduce any data to a single bit.
    
    	Given data of particular characteristics, various algorithms and
    degrees of compression are possible. This note contains nothing much
    apart from alphabetic data, so it would be trivial to reduce it from
    its current 8 bits per character down to approximately  5 bits per
    character.
    
    Basic interpreters as found on some PCs often reduce Basic keywords to
    a single byte, and constants such as "3.1415926" to 4 bytes floating
    point representation. Note that they would also "compress" constants
    such as ".1" to 4 bytes. Cobol source code would be particularly
    susceptible to this type of compression technique.
    
    Pictures are sometimes compressed using two-dimensional Walsh
    functions. I think this is the technique that was used for the Mariner
    spacecraft pictures.
    
    If encrypted data can be compressed by some algorithm, then you have
    found a flaw in the encryption algorithm, not a good compression
    algorithm.
    
    	If your friend can characterise the type of data for which he is
    getting that compression ratio then there may be some group in DEC that
    is interested in compressing that type of data.
    
    Dave, who was given a floppy disk to UNZIP and discovered he needed 50
    megabytes on a hard disk...
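
A minimal sketch of the "approximately 5 bits per character" point above,
assuming a fixed 32-symbol alphabet (the alphabet, the message, and the helper
name pack5 are invented for the example):

#include <stdio.h>
#include <string.h>

static const char alphabet[] = "abcdefghijklmnopqrstuvwxyz .,'\n-"; /* 32 symbols */

/* Pack "in" into "out" at 5 bits per character; returns output length in bytes. */
size_t pack5(const char *in, unsigned char *out)
{
    size_t nbits = 0;
    memset(out, 0, (strlen(in) * 5 + 7) / 8);
    for (; *in; in++) {
        const char *p = strchr(alphabet, *in);
        int code = p ? (int)(p - alphabet) : 0;   /* unknown characters -> 'a' */
        int b;
        for (b = 4; b >= 0; b--, nbits++)
            if (code & (1 << b))
                out[nbits / 8] |= (unsigned char)(0x80 >> (nbits % 8));
    }
    return (nbits + 7) / 8;
}

int main(void)
{
    unsigned char buf[64];
    const char *msg = "data compression is not magic";
    size_t n = pack5(msg, buf);

    printf("%lu characters packed into %lu bytes (8 bits -> 5 bits each)\n",
           (unsigned long)strlen(msg), (unsigned long)n);
    return 0;
}

This gives a fixed 8:5 saving on text drawn from that alphabet; dictionary
methods such as Lempel-Ziv do better because they also exploit repetition.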
3441.27BHAJEE::JAERVINENOra, the Old Rural AmateurFri Oct 14 1994 13:1811
    Basically, it is not difficult to calculate the entropy of a piece of
    data with a fairly good accuracy. If the data is really random, the
    information content is exactly one bit per bit; i.e. compression is not
    possible without loss of accuracy. Most real data is not random, and
    can be compressed with varying degrees of compression ratio (but note,
    the more you compress, the more 'random' the data gets, so repeated
    compression gains little).
    
    In other words, if the entropy of this reply is about 1 bit / byte, the
    most you can compress it is about 8:1 without loss of information, and
    the compressed form cannot be re-compressed.
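
A rough order-0 entropy estimator along the lines described above (a sketch
only: it models byte frequencies, so the figure is a limit for memoryless
coding of this data, not the true entropy):

#include <stdio.h>
#include <math.h>

int main(int argc, char **argv)
{
    unsigned long count[256], total = 0;
    double bits = 0.0, p;
    int c, i;
    FILE *f;

    for (i = 0; i < 256; i++) count[i] = 0;
    f = (argc > 1) ? fopen(argv[1], "rb") : stdin;
    if (f == NULL) { perror("fopen"); return 1; }

    while ((c = getc(f)) != EOF) { count[c]++; total++; }
    if (total == 0) { printf("empty input\n"); return 0; }

    for (i = 0; i < 256; i++)
        if (count[i] != 0) {
            p = (double)count[i] / (double)total;
            bits -= p * log(p) / log(2.0);
        }

    printf("order-0 entropy: %.3f bits/byte (order-0 coding limit about %.1f:1)\n",
           bits, bits > 0.0 ? 8.0 / bits : 0.0);
    return 0;
}

Link with the math library (-lm on most Unix compilers).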
3441.28Tsk, tsk! CAPO::GOLDMAN_MABlondes have more Brains!Fri Oct 14 1994 14:4020
    Even though it turns out that the basenoter had his numbers jumbled a
    bit, I must say that all of you Doubting Thomases will someday 
    eat your words!  Today's impossibility is tomorrow's breakthrough
    technology.
    
    Think back to 1984 -- could you have even imagined 64-bit chips, 150mhz
    speeds, and tower PC's that have the ability to run an entire small
    business?  A 50 lb. mini-computer (4100s, Sables, etc.) you can carry
    under your arm replacing that huge, 1800 lb. VAX 8800?  Or even 3:1 
    data compression ratios?
    
    Bite your tongues, people.  Someday, somewhere, some young upstart in a
    quiet lab at midnight, will get that 1M:1 compression.  Just you
    wait.  
    
    Of course, by then, we'll probably be looking at 160 bits and
    1000mhz or something, but that's not the point.  Never say never...
    
    
    M.
3441.29.HLDE01::VUURBOOM_RRoelof Vuurboom @ APD, DTN 829 4066Fri Oct 14 1994 15:032
    Indeed, and to all you doubting Thomases I have put my (compressed)
    reply in the heading.
3441.30y 2cPOBOX::CORSONHigher, and a bit more to the rightFri Oct 14 1994 15:194
    
    	And it will all fit nicely on a 1K EPROM.
    
    		t gyk
3441.31Practical testKELVIN::SCHMIDTCynical OptimistFri Oct 14 1994 16:4414
    
    
        Love the idea about compressing the VP population; how 
        soon can we try it?  Never mind a million to one, I'd 
        settle for a 10:1 compression, even 2:1 .
    
        But for a practical test, how's about putting the original 
        question in .0 through the algorithm to see what emerges?
        I suspect the result will be self-answering.
    
    
        Peter
    
    
3441.32a test on 'known data' would be nice..,ICS::MORRISEYFri Oct 14 1994 17:479
     
    	I know absolutely nothing about data compression ...
    
    	   but - just because a data stream is 'random' doesn't mean that
    	   there aren't patterns there -- and variations on patterns.
    
           By using multi-dimensional arrays, I'd buy that one might be
           able to do some compression.
    
3441.33A good laugh a day... :-)STAR::BUDAI am the NRAFri Oct 14 1994 17:5429
RE: Note 3441.17 by DUGROS::ROSS

>You're right... I misinterpreted... ignorance is truly not bliss. 

>The correct claimed compression ratio is 1000:1 ... he has done 32MB to 32K,
>not theoretically, but on an Alpha under Windows NT.  When I asked for further
>clarification, he talked about searching for patterns and shifting series of
>bits in 2-, 3-, and 4-dimensional matrices.  As an earlier reply mentioned, it
>is a time-consuming operation.

1000:1?  Compressing this depends on the type of data.  I have no doubt
that it is a VERY specific type of data he is compressing.  I would ask
you to ask your friend to give us a few paragraphs of the TYPE of data
he is compressing.

I am skeptical, but now curious.  For standard ASCII files, I would be
surprised if he gets 1000:1 compression, but I would be willing to
listen/learn...

I know of someone at WRL who has been working on compression and has
come up with a better-than-ZIP algorithm...  Will see if I can dig the
person's name up (Mike something...)

>I'm glad some people got a good laugh out of this... hope nobody broke 
>their slide rule in the process.   

I did not even get my slide rule out, just had to stop laughing so hard. 
You made my night, when I read that!!!! :-)

3441.34Hey, Beavis. Wanna hear me compress? hunh-huhDPDMAI::EYSTERFluoride&Prozac/NoCavities/No prob!Fri Oct 14 1994 18:149
> I am skeptical, but now curious.  
    
    Me, too.  Maybe he's got 1000:1 compression, but the key takes up a gig
    or two? :^]  Keep us posted on the boy wonder, ja?  -.1's right...it
    ain't valid unless it works on random data.  One of Dan Quayle's
    speeches could be compressed to a single byte due to the mental white
    space, as could an infinitely sized ASCII file containing just the word
    "yo", but random data can't.
    							Tex
3441.35a dose of (visual) realityKLUSTR::GARDNERThe secret word is Mudshark.Fri Oct 14 1994 18:2222
	those of you who have responded with the "doubting Thomas" theme
	are clearly not compression experts (and neither am I ;-)...you
	should be aware, however, that the science of data compression
	is hardly new vs say the science of CMOS.....

	certainly it is plain that the "boy-wonder's" technique must be
	a lossy compression, so data entropy is only part of the puzzle...
	lossy JPEG for example (JPEG = Joint Photographic Experts Group), the
	current rage in color image compression, uses a process known
	as Discrete Cosine Transform (DCT) which has entropy coding as
	only a small part of the technique....I have seen lossy JPEG
	produce very acceptable (i.e. imperceptible to doctors) results 
	on 1500x1000 truecolor medical images, with compression ratios of
	10:1 or more.....

	however, this is nowhere close to 1K:1 (much less 10M:1 ;-)...
	there is some pretty heavy math involved just to get to that...
	and JPEG compression starts to become perceptible (e.g. useless)
	at much more than that...so let's just say I have good reason
	to be doubtful........

	_kelley
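
For reference, the DCT mentioned above is short to write down; a
one-dimensional 8-point DCT-II looks like this (a sketch of the math only,
not a JPEG implementation, and the sample row of pixel values is invented):

#include <stdio.h>
#include <math.h>

#ifndef M_PI
#define M_PI 3.14159265358979323846
#endif

#define N 8

/* Orthonormal DCT-II; JPEG applies the 2-D version to 8x8 pixel blocks. */
void dct8(const double in[N], double out[N])
{
    int k, n;
    for (k = 0; k < N; k++) {
        double sum = 0.0;
        double ck = (k == 0) ? sqrt(0.5) : 1.0;
        for (n = 0; n < N; n++)
            sum += in[n] * cos(M_PI * (2 * n + 1) * k / (2.0 * N));
        out[k] = ck * sqrt(2.0 / N) * sum;
    }
}

int main(void)
{
    double x[N] = {52, 55, 61, 66, 70, 61, 64, 73};   /* one row of pixels */
    double X[N];
    int k;

    dct8(x, X);
    for (k = 0; k < N; k++)
        printf("X[%d] = %7.2f\n", k, X[k]);   /* most of the energy lands in X[0] */
    return 0;
}

The compression comes from quantizing the small high-frequency coefficients
toward zero and entropy-coding the result, which is where the loss (and the
ratio) is controlled.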
3441.36I'm very compressed by the whole thingDPDMAI::EYSTERFluoride&Prozac/NoCavities/No prob!Fri Oct 14 1994 18:257
    
    This is all beginning to sound a little far-fetched, kinda like....
    
    
    
    
    			"Compression Park"
3441.37Closed chapter, that's why Shannon collects bikes.BONNET::WLODEKNetwork pathologist.Fri Oct 14 1994 18:3821
    
    Well, "random" in my reply means something very specific: not "random"
    in the casual sense, but in the mathematical one, that is "NO patterns" or
    something like "white noise".
    One can test the randomness of a data set by running a correlation
    operation or "convolution function" ( boy, it has been 20 years since I
    took an information theory class ...).

    Analysing a data set for "information content" tells you how much it is
    possible to reduce the data without losing anything.
    This is the theoretical limit against which you can measure an actual
    compression algorithm. It's a hard limit; one can't do much about it.
    Something like the speed of light.

    --
    re: 64 bit computing, well, "general purpose microcomputing" is new; back
    (boy, yes, 15 years ago) I installed an RT-11 system at a company
    that was designing 128 bit computers for real time analysis of military
    aircraft radar data...
    Progress in computer technology is rather predictable.
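
A small lag-1 autocorrelation check in the spirit of the "correlation
operation" above (a sketch only: a low value does not prove randomness, but
obvious structure shows up immediately; the two test buffers are invented):

#include <stdio.h>
#include <stdlib.h>

#define N 65536

static double autocorr(const unsigned char *x, int n, int lag)
{
    double mean = 0.0, num = 0.0, den = 0.0;
    int i;

    for (i = 0; i < n; i++) mean += x[i];
    mean /= n;
    for (i = 0; i < n; i++) den += (x[i] - mean) * (x[i] - mean);
    for (i = 0; i + lag < n; i++) num += (x[i] - mean) * (x[i + lag] - mean);
    return num / den;
}

int main(void)
{
    static unsigned char noise[N], ramp[N];
    int i;

    for (i = 0; i < N; i++) {
        noise[i] = (unsigned char)(rand() & 0xff);   /* pseudo-random bytes */
        ramp[i]  = (unsigned char)(i & 0xff);        /* an obvious pattern  */
    }
    printf("lag-1 autocorrelation, noise: %+.3f\n", autocorr(noise, N, 1));
    printf("lag-1 autocorrelation, ramp : %+.3f\n", autocorr(ramp,  N, 1));
    return 0;
}

The noise buffer comes out near zero and the ramp near one; structure of the
second kind is exactly what a compressor can exploit.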

3441.38TLE::REAGANAll of this chaos makes perfect senseFri Oct 14 1994 19:114
    An example of random would be the number PI.  It has an infinite number 
    of decimal places with no repeating pattern.
    
    				-John
3441.39GEMGRP::gemnt3.zko.dec.com::WinalskiCareful with that AXP, EugeneFri Oct 14 1994 19:2415
RE: .28

    Bite your tongues, people.  Someday, somewhere, some young upstart in a
    quiet lab at midnight, will get that 1M:1 compression.  Just you
    wait.  
    
    Of course, by then, we'll probably be looking at 160 bits and
    1000mhz or something, but that's not the point.  Never say never...

One can say "never" when mathematical theory says it's impossible.  To think 
otherwise is to fall into the same morass of ignorance that led the Ohio 
legislature to believe that they could pass a law setting the value of pi to 
3.

--PSW
3441.40QUARK::LIONELFree advice is worth every centFri Oct 14 1994 21:446
    Re: .39
    
    That was Indiana.  But the other state (I forget which), which wanted
    it set to 4, was even worse...
    
    					Steve
3441.41fwiwKLUSTR::SOUTHY::GardnerSouthie MudsharkFri Oct 14 1994 22:1915
	re: .38

	umm...PI is not a "random" number...it is a transcendental
	number with no finite representation......although its "pattern"
	may appear random to you, it is actually quite easily expressed
	with fair accuracy using a simple series.....

	but I digress ;-)

	it has occurred to me that the "boy-wonder" may be playing with
	"fractal compression", a technique that seems to be generating
	some press lately....although I don't know any details, I do
	know that this method is extremely compute intensive............

	_k
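
For what it's worth, here is pi from one such simple series (the
Gregory-Leibniz series; convergence is slow, but it makes the point that the
digits follow a short rule rather than being lawless):

#include <stdio.h>

int main(void)
{
    double sum = 0.0;
    long k;

    /* pi/4 = 1 - 1/3 + 1/5 - 1/7 + ... */
    for (k = 0; k < 1000000; k++)
        sum += (k % 2 == 0 ? 1.0 : -1.0) / (2 * k + 1);

    printf("4 * sum = %.6f\n", 4.0 * sum);   /* approximately 3.14159 */
    return 0;
}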
3441.42No problem if you have a sharp knifeHGOVC::JOELBERMANSat Oct 15 1994 01:0019
    A group of Aliens are on a five year mission to gather all of the
    knowledge on planets with sentient beings.  When they get to Earth they
    take all written knowledge and turn it into ASCII codes, then put a decimal
    point in front of that number.  It is now a fraction.
    
    Then they pick up a stick, take out their Swiss Army knife, and make
    a mark on that stick exactly in the same ratio as the number they had.
    
    When they get back home, they measure the cut, turn it into a fraction
    and convert it back to characters.  And have all of Earth's written
    information.
    
    And the stick has plenty of room to hold all of the other planets'
    knowledge too!
    
    Of course they have better rulers than we do, and sharper knives.
    
    /joel
    
3441.43They lost the stick...HLDE01::VUURBOOM_RRoelof Vuurboom @ APD, DTN 829 4066Sat Oct 15 1994 04:024
    So what they do now is just shoot off two laser pulses with the
    calculated time interval.
    
    Of course, they have better clocks than we do.
3441.44And then they lost the clock...HLDE01::VUURBOOM_RRoelof Vuurboom @ APD, DTN 829 4066Sat Oct 15 1994 04:033
    So now they just send a single pulse with a calculated color.
    
    Of course, they have better eyes than we do.
3441.45PASTIS::MONAHANhumanity is a trojan horseSat Oct 15 1994 08:1710
    	There is no need to postulate analogue methods of encoding data
    (even if you could assume they were completely analogue - can your
    Swiss army knife actually cut the stick to a precision less than the
    width of an electron) since what is being discussed here is clearly
    digital in nature, both initially and in its compressed representation.
    
    	Ignoring the doubting Thomases that don't believe Shannon's
    mathematics, what type of data is he compressing, and how much
    information loss (if any) does his technique give with that type of
    data?
3441.46just the FAQsKLUSTR::GARDNERThe secret word is Mudshark.Sat Oct 15 1994 12:325298
	I should have followed .3's advice earlier....there is an extensive
	FAQ at comp.compression.research and mucho discussion at
	comp.compression....I have included the FAQ below for the
	Internet Impaired........

	anyway, this "boy-wonder's" name wouldn't happen to be
	Barnsley, would it? Barnsley and his "followers" have formed
	a company that is said to be working on (you guessed it)
	1K:1 compression! however, most knowledgeable compression experts
	are extremely pessimistic about whether they will achieve it...
	below is the summary from the FAQ's Introduction to Fractal
	compression (yes, my hunch was right):

"Seven things you should know about Fractal Image Compression (assuming that 
you want to know about it).
 
   1. It is a promising new technology, arguably superior to JPEG -- 
      but only with an argument.
   2. It is a lossy compression method.
   3. The fractals in Fractal Image Compression are Iterated Function
      Systems.
   4. It is a form of Vector Quantization, one that employs a virtual
      codebook.
   5. Resolution enhancement is a powerful feature but is not some
      magical way of achieving 1000:1 compression.
   6. Compression is slow, decompression is fast.
   7. The technology is patented."

	so there you have it...salt shakers appear totally appropriate
	for this conversation....

	_kelley
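
For the curious, the Iterated Function Systems in point 3 can be sampled with
a tiny "chaos game" program (illustrative only: it merely iterates the three
affine maps of the Sierpinski triangle, whereas a fractal compressor has to
search for maps that reproduce a given image, which is why compression is
slow and decompression fast):

#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    double x = 0.0, y = 0.0;
    /* fixed points of the three Sierpinski contractions */
    static const double vx[3] = {0.0, 1.0, 0.5};
    static const double vy[3] = {0.0, 0.0, 1.0};
    int i;

    for (i = 0; i < 10000; i++) {
        int k = rand() % 3;
        x = (x + vx[k]) / 2.0;   /* move halfway toward a random vertex */
        y = (y + vy[k]) / 2.0;
        if (i > 100)             /* skip the transient, print the attractor */
            printf("%f %f\n", x, y);
    }
    return 0;
}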

	FAQ follows...

Article: 1434
Path: nntpd.lkg.dec.com!crl.dec.com!crl.dec.com!decwrl!hookup!swrinde!howland.reston.ans.net!EU.net!julienas!chorus!chorus.fr
From: [email protected] (Jean-loup Gailly)
Newsgroups: comp.compression,comp.compression.research,news.answers,comp.answers
Subject: comp.compression Frequently Asked Questions (part 1/3)
Summary: *** READ THIS BEFORE POSTING ***
Keywords: data compression, FAQ
Message-ID: <[email protected]>
Date: 12 Oct 94 16:53:44 GMT
Expires: 30 Nov 94 16:17:20 GMT
Sender: [email protected]
Reply-To: [email protected]
Followup-To: comp.compression
Lines: 2814
Approved: [email protected]
Supersedes: <[email protected]>
Xref: nntpd.lkg.dec.com comp.compression:15050 comp.compression.research:1434 news.answers:30151 comp.answers:7745
 
Archive-name: compression-faq/part1
Last-modified: Oct 12th, 1994
 
		"I've already explained this once, but repetition is
		the very soul of the net."		(from alt.config)
 
This file is part 1 of a set of Frequently Asked Questions (FAQ) for
the groups comp.compression and comp.compression.research.  If you
can't find part 2 or 3, see item 53 below. A copy of this FAQ is available
by ftp in rtfm.mit.edu:/pub/usenet/news.answers/compression-faq/part[1-3].
This FAQ is also accessible in the World Wide Web at
http://www.cis.ohio-state.edu/hypertext/faq/usenet/compression-faq/top.html.
 
Certain questions get asked time and again, and this is an attempt to
reduce the bandwidth taken up by these posts and their associated
replies.  If you have a question, *please* check this file before you
post.  It may save a lot of peoples time.
 
If you have not already read the overall Usenet introductory material
posted to "news.announce.newusers", please do. It is also available by
ftp in garbo.uwasa.fi:/pc/doc-net/usenews.zip (see item 2 below about .zip).
 
If you don't want to see this FAQ regularly, please add the subject
line to your kill file. (If you don't know what a kill file is, get
by ftp the file rtfm.mit.edu:/pub/usenet/news.answers/killfile-faq.)
If you have corrections or suggestions for this FAQ, send them to
Jean-loup Gailly <[email protected]>.  Thank you.
 
Part 1 is oriented towards practical usage of compression programs.
Part 2 is more intended for people who want to know how compression works.
Part 3 is a long list of image compression hardware.
 
Main changes relative to the previous version: 
 
- added new binhex for MSDOS [item 2]
- new version of JPEG sofware [item 15]
- new version of vmpeg [item 15]
 
 
Contents
========
 
General questions:
 
[1]  What are these newsgroups about?
[2]  What is this .xxx file type?
     Where can I find the corresponding compression program?
[3]  What is the latest pkzip version?
[4]  What is an archiver?
[5]  What is the best general purpose compression program?
[7]  Which books should I read?
[8]  What about patents on data compression algorithms?
[9]  The WEB 16:1 compressor.
[11] What is the V.42bis standard?
[12] I need source for the winners of the Dr Dobbs compression contest
[13] I need source for arithmetic coding
 
Image and audio compression:
 
[15] Where can I get image compression programs?
[16] What is the state of the art in lossless image compression?
[17] What is the state of fractal compression?
[18] I need specs and source for TIFF and CCITT group 4 Fax.
[19] What is JPEG?
[20] I am looking for source of an H.261 codec and MPEG
[25] Fast DCT (Discrete Cosine Transform) algorithms
[26] Are there algorithms and standards for audio compression?
 
Common problems:
 
[30] My archive is corrupted!
[31] pkunzip reports a CRC error!
[32] VMS zip is not compatible with pkzip!
[33] I have a problem with Stacker or DoubleSpace!
 
Questions which do not really belong to comp.compression:
 
[50] What is this 'tar' compression program?
[51] I need a CRC algorithm
[52] What about those people who continue to ask frequently asked questions?
[53] Where are FAQ lists archived?
[54] I need specs for graphics formats
[55] Where can I find Lenna and other images?
[56] I am looking for a message digest algorithm
 
 
Part 2: (Long) introductions to data compression techniques
 
[70] Introduction to data compression (long)
       Huffman and Related Compression Techniques
       Arithmetic Coding
       Substitutional Compressors
          The LZ78 family of compressors
          The LZ77 family of compressors
 
[71] Introduction to MPEG (long)
       What is MPEG?
       Does it have anything to do with JPEG?
       Then what's JBIG and MHEG?
       What has MPEG accomplished?
       So how does MPEG I work?
       What about the audio compression?
       So how much does it compress?
       What's phase II?
       When will all this be finished?
       How do I join MPEG?
       How do I get the documents, like the MPEG I draft?
 
[72] What is wavelet theory?
[73] What is the theoretical compression limit?
[74] Introduction to JBIG
[75] Introduction to JPEG
[76] What is Vector Quantization?
[77] Introduction to Fractal compression
 
 
Part 3: (Long) list of image compression hardware
 
[85] Image compression hardware
[99] Acknowledgments
 
 
Search for "Subject: [#]" to get to question number # quickly. Some news
readers can also take advantage of the message digest format used here.
 
If you know very little about data compression, read question 70 in
part 2 first.
 
------------------------------------------------------------------------------
 
Subject: [1] What are these newsgroups about?
 
 
comp.compression is the place to discuss about data compression, both
lossless (for text or data) and lossy (for images, sound, etc..).
comp.compression.research was created later to provide a forum for
current research on data compression and data compression algorithms;
this group is now moderated. If you are not experienced in data compression,
please post in comp.compression only.
 
There is no archive for comp.compression, the volume is too high.
(If you know an ftp site archiving all of comp.compression, tell me
at [email protected]).
 
If you only want to find a particular compression program for a
particular operating system, please read first this FAQ and the
article "How to find sources" which is regularly posted in
news.answers.
 
If you can't resist posting such a request, other groups are probably
more appropriate (comp.binaries.ibm.pc.wanted, comp.os.msdos.apps,
comp.sources.wanted, comp.sys.mac.wanted, comp.archives.msdos.d, comp.dsp,
alt.graphics.pixutils). Please post your request in comp.compression
only as a last resource.
 
If your question is about graphics only (no compression), please
post to comp.graphics, *after* reading the comp.graphics FAQ (see
item 54 below). For some unknown reason, many questions about
graphics are incorrectly posted to comp.compression.
For questions related to audio compression, check also comp.dsp.
 
Please do not post any program in binary form to comp.compression.
Very short sources can be posted, but long sources should be be posted
to the specialized source groups, such as comp.sources.* or alt.sources.
If the program is already available by ftp, just give the name of the
ftp site and the full path name of the file.
 
As for any newsgroups, do not post the same message separately to
comp.compression and comp.compression.research.
 
------------------------------------------------------------------------------
 
Subject: [2] What is this .xxx file type?
             Where can I find the corresponding compression program?
 
 
All the programs mentioned in this section are lossless.
For most programs, one US and one European ftp site are given.
(oak.oakland.edu: 141.210.10.117, garbo.uwasa.fi: 128.214.87.1)
Many other sites (in particular wuarchive.wustl.edu: 128.152.135.4)
have the same programs.
 
To keep this list to a reasonable size, many programs are not
mentioned here. Additional information can be found in the file
ftp.cso.uiuc.edu:/doc/pcnet/compression [128.174.5.61] maintained by
David Lemson ([email protected]). When several programs can handle
the same archive format, only one of them is given.
 
Sources for additional lossless data compressors can be found in
garbo.uwasa.fi:/pc/programming/lds_11.zip and
oak.oakland.edu:/pub/msdos/archivers/lz-comp2.zip.
See also turing.imag.fr:/pub/compression/codecs.dir.tar.gz
for sources of RLE, Huffman and LZW.
Sources in Pascal are in garbo.uwasa.fi:/pc/turbopas/preskit2.zip.
Splay tree compression code is available on ftp.cs.uiowa.edu in
directory /pub/jones/compress.
 
For Macintosh programs, look on sumex-aim.stanford.edu:/info-mac [36.44.0.6].
For VM/CMS, look on vmd.cso.uiuc.edu:public.477 [128.174.5.98].
For Atari, look on atari.archive.umich.edu [141.211.165.41]
For Amiga, look on ftp.cso.uiuc.edu:/pub/amiga [128.174.5.59]
	
If you don't know how to use ftp or don't have ftp access, read the
article "How to find sources" which is regularly posted in news.answers.
 
If you can't find a program given below, it is likely that a newer
version exists in the same directory. (Tell me <[email protected]>)
 
A very short description of the compression algorithm is given for
most programs. For the meaning of LZ77, LZ78 and LZW, see question 70
in part 2 of the FAQ.) If you are looking for the file format of a
specific compression program, get the sources of the decompressor.
For the format of uuencode, do "man 5 uuencode" on a Unix box.
 
ext:  produced by or read by
 
..arc, .ark: arc, pkarc for MSDOS. (LZW algorithm)
	wuarchive.wustl.edu:/mirrors/msdos/starter/pk361.exe
	garbo.uwasa.fi:/pc/arcers/pk361.exe
 
      arc for Unix
        wuarchive.wustl.edu:/mirrors/misc/unix/arc521e.tar-z
	garbo.uwasa.fi:/unix/arcers/arc.tar.Z
        Contact: Howard Chu <[email protected]>
 
      arc for VMS
	wuarchive.wustl.edu:/packages/compression/vax-vms/arc.exe
 
      arcmac for Mac
	mac.archive.umich.edu:/mac/utilities/compressionapps/arcmac.hqx
 
      arc for Amiga
	ftp.funet.fi:pub/amiga/fish/001-100/ff070/arc.lha
 
..arj: arj for MSDOS (LZ77 with hashing, plus secondary static Huffman
          encoding on a block basis)
        Contact: Robert K Jung <[email protected]>
        wuarchive.wustl.edu:/mirrors/msdos/archivers/arj241a.exe
	garbo.uwasa.fi:/pc/arcers/arj241a.exe
 
      unarj for Unix. Decompresses only. (There is no arj compressor for Unix.
        Don't post a request.)
        wuarchive.wustl.edu:/mirrors/misc/unix/unarj241.tar-z
	garbo.uwasa.fi:/unix/arcers/unarj241.tar.Z
 
      unarj for Mac
        mac.archive.umich.edu:/mac/util/compression/unarjmac.cpt.hqx      
 
      unarj for Amiga
	ftp.funet.fi:pub/amiga/utilities/archivers/unarj-0.5.lha
 
..bck: VMS BACKUP. BACKUP is *not* a compression program. Do "help backup".
 
..cpt: Compact Pro for Mac and Power PC
        sumex-aim.stanford.edu:/info-mac/cmp/compact-pro-134.hqx (Mac version)
        sumex-aim.stanford.edu:/info-mac/cmp/compact-pro-135-ppc.hqx (Power PC)
 
      For Unix:
	sumex-aim.stanford.edu:/info-mac/unix/macutil-20b1.shar
        ftp.cwi.nl:/pub/macutil2.0b3.shar.Z
 
      For DOS:
	garbo.uwasa.fi:/pc/arcers/ext-pc.zip
 
..exe: self-extracting MSDOS executable (creates files on disk when run)
      Run the file, or try unzip, lha or arj on it.
 
..exe: compressed MSDOS executable (decompresses itself in memory then runs
      the decompressed code). To get the original uncompressed .exe:
        oak.oakland.edu:/pub/msdos/execomp/unp330.zip
      To create such files:
        oak.oakland.edu:/pub/msdos/execomp/lzexe91e.zip
	nic.funet.fi:/pub/msdos/windows/util/winlite1.zip (for Windows exe)
 
..gif: gif files are images compressed with the LZW algorithm. See the
      comp.graphics FAQ list for programs manipulating .gif files. See
      suffix .Z below for source of LZW.
 
..gz, .z: gzip (or pack, see .z below). gzip uses the same algorithm as
         zip 2.0x (see below); it can also extract packed and compressed files.
      For Unix, MSDOS, OS/2, VMS, Atari, Amiga, Primos:
	prep.ai.mit.edu:/pub/gnu/gzip-1.2.4.tar  (or .shar or .tar.gz : source)
	prep.ai.mit.edu:/pub/gnu/gzip-msdos-1.2.4.exe (MSDOS, lha self-extract)
	garbo.uwasa.fi:/unix/arcers/gzip-1.2.4.tar.Z  (source)
	garbo.uwasa.fi:/pc/unix/gzip124.zip           (MSDOS exe)
	ftp.uu.net:/pub/archiving/zip/VMS/gzip124x.vax_exe (VMS exe)
        mac.archive.umich.edu:/mac/util/compression/macgzip0.22.cpt.hqx (Mac)
        src.doc.ic.ac.uk:/computing/systems/mac/info-mac/cmp/mac-gzip-02.hqx
        mac.archive.umich.edu:/mac/development/source/macgzip0.2src.cpt.hqx
 
..ha:  ha 0.98 for MSDOS (improved PPMC - 4th order Markov modeling)
	garbo.uwasa.fi:/pc/arcers/ha098.zip
 
..hap: Hamarsoft HAP 3.00 archiver. Contact: [email protected]
        garbo.uwasa.fi:/pc/arcers/hap300re.zip
 
..hpk: hpack
        garbo.uwasa.fi:/unix/arcers/hpack78src.tar.Z
 
..hqx: Macintosh BinHex format.. (BinHex is *not* a compression program,
	it is similar to uuencode but handles multiple forks.)
       for Mac:
	 mac.archive.umich.edu:/mac/utilities/compressionapps/binhex4.0.bin
 
       for Unix:
         sumex-aim.stanford.edu:/info-mac/cmp/mcvert-215.shar.gz
         garbo.uwasa.fi:/mac/arcers/mcvert-215.shar
 
       for MSDOS:
	 oak.oakland.edu:/pub/msdos/mac/binhex13.zip
	 oak.oakland.edu:/pub/msdos/mac/xbin23.zip
         garbo.uwasa.fi:/pc/unix/xbin23.zip
 
..lha:
..lzh: lha for MSDOS (LZ77 with a trie data structure, plus secondary static
          Huffman coding on a block basis)
	oak.oakland.edu:/pub/msdos/archiver/lha213.exe   (exe)
	oak.oakland.edu:/pub/msdos/archiver/lha211sr.zip (sources)
	garbo.uwasa.fi:/pc/arcers/lha255b.exe
 
      lharc for Unix. (LZ77 with hash table and binary trees, plus secondary
          Huffman coding)
           Warning: lharc can extract .lzh files created by
           lharc 1.xx but not those created by lha. See lha for Unix below.
        wuarchive.wustl.edu:/mirrors/misc/unix/lharc102a.tar-z
	garbo.uwasa.fi:/unix/arcers/lha101u.tar.Z
 
      lharc for VMS. Same warning as for Unix lharc.
	wuarchive.wustl.edu:/packages/compression/vax-vms/lharc.exe
 
      lha for Unix. Warning: all doc is in Japanese.
	wuarchive.wustl.edu:/mirrors/misc/unix/lha101u.tar-z
	garbo.uwasa.fi:/unix/arcers/lha-1.00.tar.Z
          Contact: [email protected] or [email protected]
      lha for Mac
        mac.archive.umich.edu:/mac/utilities/compressionapps/maclha2.0.cpt.hqx
 
      lha for Amiga
	ftp.funet.fi:pub/amiga/utilities/archivers/LhA_e138.run
 
 
..pak: pak for MSDOS (LZW algorithm)
        wuarchive.wustl.edu:/mirrors/msdos/archivers/pak251.exe
	garbo.uwasa.fi:/pc/arcers/pak251.exe
 
..pit: PackIt (Macintosh)
       for Mac:
         sumex-aim.stanford.edu:/info-mac/util/stuffit-151.hqx  [36.44.0.6]
 
       for Unix:
         sumex-aim.stanford.edu:/info-mac/cmp/mcvert-215.shar.gz
         garbo.uwasa.fi:/mac/arcers/mcvert-215.shar
 
..pp: PowerPacker (Amiga)
	ftp.funet.fi:pub/amiga/fish/501-600/ff561/PPLib.lha
 
..rar: RAR (MSDOS) Contact: [email protected]
        oak.oakland.edu:/SimTel/msdos/archiver/rar1_52.exe
        garbo.uwasa.fi:/pc/arcers/rar1_52.exe
        ftp.kiae.su:/msdos/arcers/rar*.exe
 
..sea: self-extracting archive (Macintosh)
         Run the file to extract it. The self-extraction code can be
         removed with:
 	 mac.archive.umich.edu:/mac/utilities/compressionapps/desea1.11.cpt.hqx
 
..sdn: used by the Shareware Distribution Network.
      Try the decompressors for .pak or .arj (see above)
 
..shar:  Shell archive. This is not a compression program. Use "sh foo.shar"
        to extract.
 
..sit: Stuffit for Macintosh
       for Mac:
         sumex-aim.stanford.edu:/info-mac/cmp/stuffit-lite-307.hqx [36.44.0.6]
 
       for Unix:
         sumex-aim.stanford.edu:/info-mac/cmp/unsit-15-unix.shar [36.44.0.6]
 
       for Amiga:
	 ftp.funet.fi:pub/amiga/utilities/archivers/unsit-1.5c2.lha
 
       for MSDOS:
	 garbo.uwasa.fi:/pc/arcers/unsit30.zip
 
..?q?: Squeeze for MSDOS (do not confuse with other 'squeeze' below).
      Static Huffman coding.
	oak.oakland.edu:/pub/msdos/starter/sqpc12a.com (squeeze)
	oak.oakland.edu:/pub/msdos/starter/nusq110.com (unsqueeze)
 
..sqz: Squeeze for MSDOS (do not confuse with other 'squeeze' above)
      LZ77 with hashing.
	wuarchive.wustl.edu:/mirrors/msdos/archivers/sqz1083e.exe
        garbo.uwasa.fi:/pc/arcers/sqz1083e.exe
 
..tar: tar is *not* a compression program. However, to be kind for you:
      for MSDOS
	oak.oakland.edu:/pub/msdos/starter/tarread.exe
	garbo.uwasa.fi:/pc/unix/tar4dos.zoo
 
      for Unix
        tar (you have it already. To extract: tar xvf file.tar)
 
      for VMS
	wuarchive.wustl.edu:/packages/compression/vax-vms/tar.exe
 
      for Macintosh
        sumex-aim.stanford.edu:/info-mac/util/tar-30.hqx
 
      for Amiga:
	ftp.funet.fi:pub/amiga/fish/401-500/ff445/Tar.lha
 
..tar.Z, .tar-z, .taz: tar + compress
      For Unix:     zcat file.tar.Z | tar xvf -
      with GNU tar: tar xvzf file.tar.Z
      for MSDOS:
	garbo.uwasa.fi:/pc/unix/tar315us.zip (MSDOS exe)
        ftp.kiae.su:/msdos/arcers/tar315sr.zip (sources)
      Other OS: first uncompress (see .Z below) then untar (see .tar above)
 
..tar.gz, tar.z, .tgz: tar + gzip
      For Unix: gzip -cd file.tar.gz | tar xvf -
      with GNU tar: tar xvzf file.tar.gz
      for MSDOS: oak.oakland.edu:/pub/msdos/archiver/tar315us.zip
                 garbo.uwasa.fi:/pc/unix/tar315us.zip
      Other OS: first uncompress (see .gz above) then untar (see .tar above)
 
..td0: (compressed MS-DOS floppy image produced by TeleDisk)
      oak.oakland.edu:/pub/msdos/diskutil/teled212.zip
 
..uc2: UC2 for MSDOS and OS/2. (LZ77 with secondary static Huffman encoding on
	 a block basis, and dynamic dictionaries shared among files.)
	Contact: [email protected]
      garbo.uwasa.fi:/pc/arcers/uc2r2.exe
 
..z:   pack or gzip (see .gz above). pack uses static Huffman coding.
      To extract, see .gz above.
 
..zip: pkzip 1.10 for MSDOS. (LZ77 with hashing, plus secondary static
          Shannon-Fano encoding on whole file)
           Contact: [email protected]
         wuarchive.wustl.edu:/mirrors/msdos/zip/pkz110eu.exe.
         garbo.uwasa.fi:/pc/goldies/pkz110eu.exe.
           Note: pkz110eu.exe is an 'export' version without encryption.
 
      zip 1.1 for Unix, MSDOS, VMS, OS/2, ... (compatible with pkzip 1.10.
	 For corresponding unzip, see unzip 5.12 below).
	 ftp.uu.net:/pub/archiving/zip/zip11.zip
 
      arcutil 2.0 for VM/CMS (unzip only, not yet compatible with pkzip 2.04)
         vmd.cso.uiuc.edu:/public.477/arcutil.* [128.174.5.98].
 
      pkzip 2.04g for MSDOS. (LZ77 with hashing, plus secondary static
           Huffman coding on a block basis)
	 oak.oakland.edu:/pub/msdos/zip/pkz204g.exe
         garbo.uwasa.fi:/pc/arcers/pkz204g.exe
 
      zip 2.0.1 and unzip 5.12 for Unix, MSDOS, VMS, OS/2, Amiga, ...
          Compatible with pkzip 2.04g (LZ77 with hashing, plus secondary static
          Huffman coding on a block basis). Contact: [email protected]
          (On SGI, do not confuse with the editor also named 'zip'.)
            
	 ftp.uu.net:/pub/archiving/zip/zip201.zip          (source)
	 ftp.uu.net:/pub/archiving/zip/unzip512.tar.Z      (source)
	 ftp.uu.net:/pub/archiving/zip/MSDOS/zip20x.zip    (MSDOS exe)
	 ftp.uu.net:/pub/archiving/zip/MSDOS/unz512x*.exe (MSDOS exe)
	 ftp.uu.net:/pub/archiving/zip/VMS/zip20x-vms.zip  (Vax/VMS exe)
	 ftp.uu.net:/pub/archiving/zip/VMS/unz512x-vax.exe (Vax/VMS exe)
	 ftp.uu.net:/pub/archiving/zip/OS2/*               (OS/2 exe 16&32 bit)
           See also AMIGA, ATARI, MAC, NT, SCO_UNIX, LINUX, Ultrix-MIPS,
	   WINDOWS and SOLARIS2 subdirectories.
	 ftp.uu.net:/pub/archiving/zip/zcrypt24.zip        (encryption source)
         (Non US residents must get the crypt versions from garbo,see below)
 
	 garbo.uwasa.fi:/unix/arcers/zip201.zip     (source)
	 garbo.uwasa.fi:/unix/arcers/unzip512.tar.Z (source)
	 garbo.uwasa.fi:/pc/arcers/zip20x.zip       (MSDOS exe)
	 garbo.uwasa.fi:/pc/arcers/unz512x3.exe     (MSDOS exe)
         garbo.uwasa.fi:/unix/arcers/zcrypt23.zip   (encryption source)
 
       for Macintosh:
         mac.archive.umich.edu:/mac/util/compression/unzip2.01.cpt.hqx
	 mac.archive.umich.edu:/mac/util/compression/zipit1.2.cpt.hqx
	 ftp.uu.net:/pub/archiving/zip/MAC/unz512.hqx
 
..zoo: zoo 2.10 for MSDOS (algorithm copied from that of lha, see lha above)
      Contact: Rahul Dhesi <[email protected]>
	 wuarchive.wustl.edu:/mirrors/msdos/zoo/zoo210.exe
	 garbo.uwasa.fi:/pc/arcers/zoo210.exe
 
      zoo 2.10 for Unix, VMS
         oak.oakland.edu:/pub/misc/unix/zoo210.tar.Z
	 garbo.uwasa.fi:/unix/arcers/zoo210.tar.Z
 
      zoo for Mac
 	 mac.archive.umich.edu:/mac/utilities/compressionapps/maczoo.sit.hqx
 
      zoo for Amiga
	ftp.funet.fi:pub/amiga/utilities/archivers/Zoo-2.1.lha
 
..F: freeze for Unix (LZ77 with hashing, plus secondary dynamic Huffman
         encoding)
     wuarchive.wustl.edu:/usenet/comp.sources.misc/volume35/freeze/part0[1-3].Z
     ftp.inria.fr:/system/arch-compr/freeze-2.5.tar.Z
     Contact: Leonid A. Broukhis <[email protected]>
 
..Y: yabba for Unix, VMS, ... (Y coding, a variant of LZ78)
  wuarchive.wustl.edu:/usenet/comp.sources.unix/volume24/yabbawhap/part0[1-4].Z
  ftp.inria.fr:/system/arch-compr/yabba.tar.Z
  Contact: Dan Bernstein <[email protected]>
 
..Z: compress for Unix ('the' LZW algorithm)
      It is likely that your Unix system has 'compress' already. Otherwise:
	wuarchive.wustl.edu:/packages/compression/compress-4.1.tar
        (not in .Z format to avoid chicken and egg problem)
 
    compress for MSDOS
	oak.oakland.edu:/pub/msdos/compress/comp430[ds].zip
	garbo.uwasa.fi:/pc/unix/comp430d.zip
	garbo.uwasa.fi:/pc/source/comp430s.zip
 
    compress for Macintosh
        sumex-aim.stanford.edu:/info-mac/cmp/maccompress-32.hqx
 
    compress for Amiga
	ftp.funet.fi:pub/amiga/utilities/archivers/compress-4.1.lha
 
    compress for Vax/VMS
	wuarchive.wustl.edu:/packages/compression/vax-vms/lzcomp.exe
	wuarchive.wustl.edu:/packages/compression/vax-vms/lzdcmp.exe
 
------------------------------------------------------------------------------
 
Subject: [3]  What is the latest PKZIP version?
 
The latest official version is 2.04g. Release 2.04c had serious bugs,
corrected in 2.04e and 2.04g.
 
Be warned that there are countless bogus PKZIP 1.20, 2.0, 2.02,
3.05 and whatever scams floating around.  They usually are hacks of
PKZIP 1.93A beta test version.  Some of them are trojans and / or
carry computer virii.
 
Note about pkzip 2.06 from a PKware employee:
 
    Version 2.06 was released as an INTERNAL use only IBM version.
    It is identical to 2.04G, but it has IBM names in the help
    screens and such. That release is meant for IBM only.
 
If pkunzip indicates that you need version 2.8 to extract an
archive, your archive has been corrupted by a transfer not
made in binary mode (see item 30 below).
------------------------------------------------------------------------------
 
Subject: [4] What is an archiver?
 
 
There is a distinction between archivers and other compression
programs:
 
- an archiver takes several input files, compresses them and produces
  a single archive file. Examples are arc, arj, lha, zip, zoo.
 
- other compression programs create one compressed file for each
  input file. Examples are freeze, yabba, compress, gzip. Such programs
  are often combined with tar to create compressed archives (see
  question 50: "What is this tar compression program?").
 
For a comparison of zip and gzip, see the gzip README file. (In short:
zip is an archiver, gzip is not; only zip is compatible with pkzip.)
 
------------------------------------------------------------------------------
 
Subject: [5] What is the best general purpose compression program?
 
 
The answer is: it depends. (You did not expect a definitive answer,
did you?)
 
It depends whether you favor speed, compression ratio, a standard and
widely used archive format, the number of features, etc...  Just as
for text editors, personal taste plays an important role. compress has
4 options, arj 2.30 has about 130 options; different people like
different programs. *Please* do not start or continue flame wars on
such matters of taste.
 
The only objective comparisons are speed and compression ratio. Here
is a short table comparing various programs on a 33Mhz Compaq 386.
All programs have been run on Unix SVR4, except pkzip and arj which
only run on MSDOS. (MSDOS benchmarks are available by ftp on
oak.oakland.edu:/pub/msdos/info/arctst*.zip.)
 
*Please* do not post your own benchmarks made on your own files that
nobody else can access. If you think that you must absolutely post yet
another benchmark, make sure that your test files are available by
anonymous ftp.
 
The programs compared here were chosen because they are the most
popular or because they run on Unix and source is available.  For ftp
information, see above. Three programs (hpack, comp-2 and ha) have
been added because they achieve better compression (at the expense of
speed) and one program (lzrw3-a) has been added because it favors
speed at the expense of compression:
 
- comp-2 is in wuarchive.wustl.edu:/mirrors/msdos/ddjmag/ddj9102.zip
  (inner zip file nelson.zip),
- hpack is in garbo.uwasa.fi:/unix/arcers/hpack78src.tar.Z
- ha 0.98 is in garbo.uwasa.fi:/pc/arcers/ha098.zip
- ftp.adelaide.edu.au:/pub/compression/lzrw3-a.c  [129.127.40.3]
 
The 14 files used in the comparison are from the standard Calgary
Text Compression Corpus, available by ftp on ftp.cpsc.ucalgary.ca 
in /pub/projects/text.compression.corpus/text.compression.corpus.tar.Z.
 
The whole corpus includes 18 files, but the 4 files paper[3-6] are
generally omitted in benchmarks. It contains several kinds of file
(ascii, binary, image, etc...) but has a bias towards large files.
You may well get different ratings on the typical mix of files that
you use daily, so keep in mind that the comparisons given below are
only indicative.
 
The programs are ordered by decreasing total compressed size. For a
fair comparison between archivers and other programs, this size is
only the size of the compressed data, not the archive size.
 
The programs were run on an idle machine, so the elapsed time
is significant and can be used to compare Unix and MSDOS programs.
 
[Note: I did not have time to run again all benchmarks with more
recent versions of zip, freeze, arj and hpack. To be done for some
future revision of this FAQ.]
 
       size     lzrw3a   compress    lharc    yabba     pkzip    freeze 
version:                   4.0       1.02      1.0       1.10     2.3.5
options:                                    -m300000                    
       ------    -----    ------    ------    ------    ------   ------
bib    111261    49040     46528     46502     40456     41354    41515 
book1  768771   416131    332056    369479    306813    350560   344793 
book2  610856   274371    250759    252540    229851    232589   230861 
geo    102400    84214     77777     70955     76695     76172    68626 
news   377109   191291    182121    166048    168287    157326   155783 
obj1    21504    12647     14048     10748     13859     10546    10453 
obj2   246814   108040    128659     90848    114323     90130    85500 
paper1  53161    24522     25077     21748     22453     20041    20021 
paper2  82199    39479     36161     35275     32733     32867    32693 
pic    513216   111000     62215     61394     65377     63805    53291 
progc   39611    17919     19143     15399     17064     14164    14143 
progl   71646    24358     27148     18760     23512     17255    17064 
progp   49379    16801     19209     12792     16617     11877    11686 
trans   93695    30292     38240     28092     31300     23135    22861 
    3,141,622  1,400,105 1,259,141 1,200,580 1,159,340 1,141,821 1,109,290
real             0m35s     0m59s     5m03s     2m40s              5m27s
user             0m25s     0m29s     4m29s     1m46s              4m58s
sys              0m05s     0m10s     0m07s     0m18s              0m08s
MSDOS:                                                   1m39s
                        
 
         zoo       lha       arj      pkzip    zip      hpack   comp-2    ha
        2.10    1.0(Unix)   2.30      2.04g    1.9      0.75a            0.98
         ah    2.13(MSDOS)   -jm       -ex      -6                        a2
       ------    ------    ------    ------  -------   ------   ------  ------
bib     40742     40740     36090     35186    34950    35619    29840   26927
book1  339076    339074    318382    313566   312619   306876   237380  235733
book2  228444    228442    210521    207204   206306   208486   174085  163535
geo     68576     68574     69209     68698    68418    58976    64590   59356
news   155086    155084    146855    144954   144395   141608   128047  123335
obj1    10312     10310     10333     10307    10295    10572    10819    9799
obj2    84983     84981     82052     81213    81336    80806    85465   80381
paper1  19678     19676     18710     18519    18525    18607    16895   15675
paper2  32098     32096     30034     29566    29674    29825    25453   23956
pic     52223     52221     53578     52777    55051    51778    55461   51639
progc   13943     13941     13408     13363    13238    13475    12896   11795
progl   16916     16914     16408     16148    16175    16586    17354   15298
progp   11509     11507     11308     11214    11182    11647    11668   10498
trans   22580     22578     20046     19808    18879    20506    21023   17927
    1,096,166 1,096,138 1,036,934 1,019,451 1,021,043 1,005,367 890,976 845,854
real    4m07s     6m03s                        1m49s   1h22m17s  27m05s
user    3m47s     4m23s                        1m43s   1h20m46s  19m27s
sys     0m04s     0m08s                        0m02s      0m12s   2m03s
MSDOS:            1m49s     2m41s     1m43s                              14m43s
 
Notes:
 
- the compressed data for 'zoo ah' is always two bytes longer than for
  lha. This is simply because both programs are derived from the same
  source (ar002, written by Haruhiko Okumura, available by ftp in
  oak.oakland.edu:/pub/msdos/archiver/ar002.zip).
 
- hpack 0.75a gives slightly different results on SunOS. (To be checked
  with latest version of hpack).
 
- the MSDOS versions are all optimized with assembler code and were run
  on a RAM disk. So it is not surprising that they often go faster than
  their Unix equivalents.
 
------------------------------------------------------------------------------
 
Subject: [7] Which books should I read?
 
 
[BWC 1989] Bell, T.C., Cleary, J.G. and Witten, I.H., "Text Compression",
    Prentice-Hall 1989. ISBN: 0-13-911991-4. Price: approx. US$60
    The reference on text data compression.
 
[Nel 1991] Mark Nelson, "The Data Compression Book"
    M&T Books, Redwood City, CA, 1991.  ISBN 1-55851-216-0.
    Price $36.95 including two 5" PC-compatible disks bearing
    all the source code printed in the book.
    A practical introduction to data compression.
    The book is targeted at a person who is comfortable reading C code but
    doesn't know anything about data compression.  Its stated goal is to get
    you up to the point where you are competent to program standard
    compression algorithms.
 
[Will 1990] Williams, R.  "Adaptive Data Compression", Kluwer Books, 1990.
    ISBN: 0-7923-9085-7. Price: US$75.
    Reviews the field of text data compression and then addresses the
    problem of compressing rapidly changing data streams.
 
[Stor 1988] Storer, J.A.  "Data Compression: Methods and Theory", Computer
    Science Press, Rockville, MD. ISBN: 0-88175-161-8.
    A survey of various compression techniques, mainly statistical
    non-arithmetic compression and LZSS compression.  Includes complete Pascal
    code for a series of LZ78 variants.
 
[Stor 1992] Storer, J.A. "Image and Text Compression", Kluwer Academic
    Publishers, 1992, ISBN 0-7923-9243-4
 
[ACG 1991] Advances in Speech Coding, edited by Atal, Cuperman, and Gersho,
    Kluwer Academic Press, 1991.
 
[GG 1991] Vector Quantization and Signal Compression, by Gersho and Gray,
    Kluwer Acad. Press, 1991, ISBN 0-7923-9181-0.
 
[CT 1991] Elements of Information Theory, by T.M.Cover and J.A.Thomas
     John Wiley & Sons, 1991. ISBN 0-471-06259-6.
 
Review papers:
 
[BWC 1989] Bell, T.C., Witten, I.H. and Cleary, J.G.  "Modeling for Text
    Compression", ACM Computing Surveys, Vol.21, No.4 (December 1989), p.557
    A good general overview of compression techniques (as well as modeling for
    text compression); the condensed version of "Text Compression".
 
[Lele 1987] Lelewer, D.A, and Hirschberg, D.S.  "Data Compression", ACM
    Computing Surveys, Vol.19, No.3 (September 1987), p.261.
    A survey of data compression techniques which concentrates on Huffman
    compression and makes only passing mention of other techniques.
 
 
------------------------------------------------------------------------------
 
Subject: [8] What about patents on data compression algorithms?
 
 
[Note: the appropriate group for discussing software patents is
comp.patents (or misc.legal.computing), not comp.compression.]
 
All patents mentioned here are US patents, and thus probably
not applicable outside the US. See item 70, "Introduction to data
compression" for the meaning of LZ77, LZ78 or LZW.
 
 
(a) Run length encoding
 
- Tsukiyama has two patents on run length encoding: 4,586,027 and 4,872,009
  granted in 1986 and 1989 respectively. The first one covers run length
  encoding in its most primitive form: a length byte followed by the
  repeated byte (a minimal sketch of this scheme is given after this list).
  The second patent covers the 'invention' of limiting the run length to 16
  bytes and thus the encoding of the length on 4 bits.
  Here is the start of claim 1 of patent 4,872,009, just for pleasure:
 
    1. A method of transforming an input data string comprising a plurality
    of data bytes, said plurality including portions of a plurality of
    consecutive data bytes identical to one another, wherein said data
    bytes may be of a plurality of types, each type representing different
    information, said method comprising the steps of: [...]
 
- O'Brien has patented (4,988,998) run length encoding followed by LZ77.
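
The following is a minimal C sketch of run length encoding in the primitive
form described in the first Tsukiyama patent above: each run is emitted as a
count byte (1..255) followed by the repeated byte. The framing is purely
illustrative; it is not the code or the exact format of any patented product.

    #include <stdio.h>

    /* Emit each run as a count byte (1..255) followed by the repeated byte. */
    void rle_encode(FILE *in, FILE *out)
    {
        int c, prev = EOF;
        int run = 0;

        while ((c = getc(in)) != EOF) {
            if (c == prev && run < 255) {
                run++;
            } else {
                if (run > 0) {              /* flush the previous run */
                    putc(run, out);
                    putc(prev, out);
                }
                prev = c;
                run = 1;
            }
        }
        if (run > 0) {                      /* flush the final run */
            putc(run, out);
            putc(prev, out);
        }
    }

Decoding simply reads (count, byte) pairs and writes 'count' copies of 'byte'.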
 
 
(b) LZ77
 
- Waterworth patented (4,701,745) the algorithm now known as LZRW1,
  because Ross Williams reinvented it later and posted it on
  comp.compression on April 22, 1991. (See item 5 for the ftp site
  with all LZRW derivatives.) The *same* algorithm has later been
  patented by Gibson & Graybill (see below). The patent office failed
  to recognize that the same algorithm was patented twice, even though
  the wording used in the two patents is very similar.
 
  The Waterworth patent is now owned by Stac Inc, which won a lawsuit
  against Microsoft, concerning the compression feature of MSDOS 6.0.
  Damages awarded were $120 million.
 
- Fiala and Greene obtained in 1990 a patent (4,906,991) on all
  implementations of LZ77 using a tree data structure. Claim 1 of the
  patent is much broader than the algorithms published by Fiala and
  Greene in Comm.ACM, April 89. The patent covers the algorithm
  published by Rodeh and Pratt in 1981 (J. of the ACM, vol 28, no 1,
  pp 16-24).  It also covers the algorithm previously patented by
  Eastman-Lempel-Ziv (4,464,650), and the algorithms used in lharc,
  lha and zoo.
 
- Notenboom (from Microsoft) 4,955,066 uses three levels of
  compression, starting with run length encoding.
 
- The Gibson & Graybill patent 5,049,881 covers the LZRW1 algorithm
  previously patented by Waterworth and reinvented by Ross Williams.
  Claims 4 and 12 are very general and could be interpreted as
  applying to any LZ algorithm using hashing (including all variants
  of LZ78):
 
     4. A compression method for compressing a stream of input data into
     a compressed stream of output data based on a minimum number of
     characters in each input data string to be compressed, said
     compression method comprising the creation of a hash table, hashing
     each occurrence of a string of input data and subsequently searching
     for identical strings of input data and if such an identical string
     of input data is located whose string size is at least equal to the
     minimum compression size selected, compressing the second and all
     subsequent occurrences of such identical string of data, if a string
     of data is located which does not match to a previously compressed
     string of data, storing such data as uncompressed data, and for each
     input strings after each hash is used to find a possible previous
     match location of the string, the location of the string is stored
     in the hash table, thereby using the previously processed data to
     act as a compression dictionary.
 
  Claim 12 is identical, with 'method' replaced with 'apparatus'.  Since
  the 'minimal compression size' can be as small as 2, the claim could
  cover any dictionary technique of the LZ family. However the text of the
  patent and the other claims make clear that the patent should cover the
  LZRW1 algorithm only. (In any case the Gibson & Graybill patent is likely
  to be invalid because of the prior art in the Waterworth patent.)
 
- Phil Katz, author of pkzip, also has a patent on LZ77 (5,051,745)
  but the claims only apply to sorted hash tables, and when the hash
  table is substantially smaller than the window size.
 
- IBM patented (5,001,478) the idea of combining a history buffer (the
  LZ77 technique) and a lexicon (as in LZ78).
 
- Stac Inc patented (5,016,009 and 5,126,739) yet another variation of LZ77
  with hashing. The '009 patent was used in the lawsuit against Microsoft
  (see above). Stac also has patents on LZ77 with parallel lookup in
  hardware (4,841,092 and 5,003,307).
 
- Robert Jung, author of 'arj', has been granted patent 5,140,321
  for one variation of LZ77 with hashing.  This patent covers the LZRW3-A
  algorithm, also previously discovered by Ross Williams. LZRW3-A was posted
  on comp.compression on July 15, 1991. The patent was filed two months later
  on Sept 4, 1991. (The US patent system allows this because of the
  'invention date' rule.)
 
- Chambers 5,155,484 is yet another variation of LZ77 with hashing.
  The hash function is just the juxtaposition of two input bytes;
  this is the 'invention' being patented. The hash table is named
  'direct lookup table'.
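
To illustrate the generic technique at issue in several of the patents above
(LZ77 match searching driven by a table indexed directly by the next two
input bytes), here is a rough C sketch. It shows the general idea only; it is
not the algorithm claimed in any particular patent, and the constants are
arbitrary.

    #define TABLE_SIZE 65536          /* one slot per possible byte pair */
    #define MIN_MATCH  3
    #define MAX_MATCH  255

    static long table[TABLE_SIZE];    /* last position of each byte pair */

    void init_table(void)
    {
        long i;
        for (i = 0; i < TABLE_SIZE; i++)
            table[i] = -1;            /* -1 means "pair not seen yet" */
    }

    /* Look for a match starting at buf[pos]; returns its length (0 if none)
       and stores the position of the earlier occurrence in *match_pos. */
    int find_match(const unsigned char *buf, long pos, long len, long *match_pos)
    {
        long prev;
        int idx, mlen = 0;

        if (pos + 1 >= len)
            return 0;
        idx = (buf[pos] << 8) | buf[pos + 1];  /* "hash" = the two bytes themselves */
        prev = table[idx];
        table[idx] = pos;                      /* remember this occurrence */
        if (prev < 0)
            return 0;
        while (mlen < MAX_MATCH && pos + mlen < len &&
               buf[prev + mlen] == buf[pos + mlen])
            mlen++;
        if (mlen < MIN_MATCH)
            return 0;
        *match_pos = prev;
        return mlen;
    }

A compressor built on this emits a literal byte when find_match() returns 0
and an (offset, length) pair otherwise, advancing the input position by 1 or
by the match length respectively.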
 
 
(c) LZ78
 
- One form of the original LZ78 algorithm was patented (4,464,650) by
  its authors Lempel, Ziv, Cohn and Eastman.
 
- The LZW algorithm used in 'compress' is patented by IBM (4,814,746)
  and Unisys (4,558,302). It is also used in the V.42bis compression
  standard (see question 11 on V.42bis below) and in Postscript Level 2.
  (Unisys sells the license to modem manufacturers for a onetime
  $25,000 fee.) The IBM patent application was filed three weeks
  before that of Unisys, but the US patent office failed to recognize
  that they covered the same algorithm. (The IBM patent is more
  general, but its claim 7 is exactly LZW.)
 
- AP coding is patented by Storer (4,876,541). (Get the yabba package
  for source code, see question 2 above, file type .Y)
 
 
(d) arithmetic coding
 
- IBM holds many patents on arithmetic coding (4,286,256 4,295,125
  4,463,342 4,467,317 4,633,490 4,652,856 4,891,643 4,905,297 4,935,882).
  It has patented in particular the Q-coder implementation of arithmetic
  coding.  The arithmetic coding option of the JPEG standard requires
  use of the patented algorithm.  No JPEG-compatible method is
  possible without infringing the patent, because what IBM actually
  claims rights to is the underlying probability model (the heart of
  an arithmetic coder). (See the JPEG FAQ for details.)
 
  AT&T has 3 patents on arithmetic coding (4,973,961, 5,023,611, 5,025,258).
 
(e) predictor
 
- The 'predictor' algorithm was first described in the paper
 
    Raita, T. and Teuhola, J. (1987), "Predictive text compression by hashing",
    ACM Conference on Information Retrieval
 
  This algorithm has been patented (5,229,768) by K. Thomas in 1993. It
  is used in the Internet Draft "PPP Predictor Compression Protocol" (see
  venera.isi.edu:/internet-drafts/draft-ietf-pppext-predictor-00.txt).
 
 
As can be seen from the above list, some of the most popular compression
programs (compress, pkzip, zoo, lha, arj) are now covered by patents.
(This says nothing about the validity of these patents.)
 
Here are some references on data compression patents. A number of them are
taken from the list prep.ai.mit.edu:/pub/lpf/patent-list.
 
3,914,586
Data compression method and apparatus
filed 10/25/73, granted 10/21/75
General Motors Corporation, Detroit MI
Duane E. McIntosh, Santa Ynez CA
Data compression apparatus is disclosed which is operable in either a bit
pair coding mode or a word coding mode, depending on the degree of
redundancy of the data to be encoded.
 
3,976,844
Data communication system for transmitting data in compressed form
filed Apr. 4, 1975, granted Aug. 24, 1976
inventor  Bernard K. Betz, assignee Honeywell Information Systems, Inc.
[encode differences with previous line]
 
4,021,782
Data compaction system and apparatus
inventor Hoerning
filed 04/30/1975, granted 05/03/1977
[A primitive form of LZ77 with implicit offsets (compare with previous record)]
 
4,054,951
Data expansion apparatus
inventor R.D. Jackson, assignee IBM
filed Jun. 30, 1976, granted Oct. 18, 1977
[Covers only decompression of data compressed with a variant of LZ77.]
 
4,087,788
Data compression system
filed 1/14/77, granted 5/2/78
NCR Canada LTD - NCR Canada Ltee, Mississauga CA
Brian J. Johannesson, Waterloo CA
A data compression system is disclosed in which the left hand boundary
of a character is developed in the form of a sequence of Freeman
direction codes, the codes being stored in digital form within a
processor.
 
4,286,256
Method and means for arithmetic coding using a reduced number of operations.
granted Aug 25, 1981
assignee IBM
 
4,295,125
A method and means for pipeline decoding of the high to low order pairwise
combined digits of a decodable set of relatively shifted finite number of
strings
granted Oct 13, 1981
assignee IBM
 
4,412,306
System for minimizing space requirements for storage and transmission of
digital signals
filed May 14, 1981, granted Oct. 25, 1983
inventor  Edward W. Moll
 
4,463,342
A method and means for carry-over control in a high order to low order
combining of digits of a decodable set of relatively shifted finite number
strings.
granted Jul 31, 1984
assignee IBM
 
4,491,934
Data compression process
filed May 12, 1982, granted Jan. 1, 1985
inventor  Karl E. Heinz
 
4,464,650
Apparatus and method for compressing data signals and restoring the
compressed data signals
inventors Lempel, Ziv, Cohn, Eastman
assignees Sperry Corporation and At&T Bell Laboratories
filed 8/10/81, granted 8/7/84
A compressor parses the input data stream into segments where each
segment comprises a prefix and the next symbol in the data stream
following the prefix.
 
4,467,317
High-speed arithmetic compression using concurrent value updating.
granted Aug 21, 1984
assignee IBM
 
4,494,108
Adaptive source modeling for data file compression within bounded memory
filed Jun. 5, 1984, granted Jan. 15, 1985
inventors Glen G. Langdon, Jorma J. Rissanen
assignee IBM
order 1 Markov modeling
 
4,558,302
High speed data compression and decompression apparatus and method
inventor Welch
assignee Sperry Corporation (now Unisys)
filed 6/20/83, granted 12/10/85
The text for this patent can be ftped from rusmv1.rus.uni-stuttgart.de
(129.69.1.12) in /info/comp.patents/US4558302.Z.
 
4,560,976
Data compression
filed 6/5/84, granted 12/24/85
Codex Corporation, Mansfield MA
Steven G. Finn, Framingham, MA
A stream of source characters, which occur with varying relative
frequencies, is encoded into a compressed stream of codewords, each
having one, two or three subwords, by ranking the source characters by
their current frequency of appearance, encoding the source characters
having ranks no higher than a first number as one subword codewords,
source characters having ranks higher than the first number but no
higher than a second number as two subword codewords, and the
remaining source characters as three subword codewords.
 
4,586,027
Method and system for data compression and restoration
inventor Tsukiyama et al.
assignee Hitachi
filed 08/07/84, granted 04/29/86
patents run length encoding
 
4,597,057
System for compressed storage of 8-bit ascii bytes using coded strings
of 4-bit nibbles.
inventor Snow, assignee System Development Corporation.
filed 12/31/1981, granted 06/24/1986.
Compression using static dictionary of common words, prefixes and suffixes.
 
4,612,532
Data compression apparatus and method
inventor Bacon, assignee Telebyte Corporation
filed Jun. 19, 1984, granted Sep. 16, 1986
[Uses followsets as in the pkzip 0.92 'reduce' algorithm, but the
followsets are dynamically updated. This is in effect a sort of order-1
Markov modeling.]
 
4,622,545
Method and apparatus for image compression and manipulation
inventor William D. Atkinson
assignee Apple Computer Inc.
filed 9/30/82
granted 11/11/86
 
4,633,490
Symmetrical adaptive data compression/decompression system.
granted Dec 30, 1985
assignee IBM
 
4,652,856
A multiplication-free multi-alphabet arithmetic code.
granted Feb  4, 1986
assignee IBM
 
4,667,649
Data receiving apparatus
filed 4/18/84, granted 6/30/87
inventors Kunishi et al.
assignee Canon Kabushiki Kaisha, Tokyo Japan
compression of Fax images.
 
4,682,150
Data compression method and apparatus
inventors Mathes and Protheroe, 
assignee NCR Corporation, Dayton OH
A system and apparatus for compressing redundant and nonredundant
binary data generated as part of an operation of a time and attendance
terminal in which the data represents the time an employee is present
during working hours.
 
4,701,745
Data compression system
inventor Waterworth John R
assignee Ferranti PLC GB, patent rights now acquired by Stac Inc.
filed 03/03/1986 (03/06/1985 in GB), granted 10/20/1987
Algorithm now known as LZRW1 (see above)
I claim:
1. A data compression system comprising an input store for receiving
and storing a plurality of bytes of uncompressed data from an outside
source, and data processing means for processing successive bytes of
data from the input store;
the data processing means including circuit means operable to check
whether a sequence of successive bytes to be processed is identical with
a sequence of bytes already processed, and including hash generating
means responsive to the application of a predetermined number of
bytes in sequence to derive a hash code appropriate to those bytes, a
temporary store in which the hash code may represent the address of a
storage location, and a pointer counter operable to store in the
temporary store at said address a pointer indicative of the position
in the input store of one of the predetermined number of bytes;
output means operable to apply to a transfer medium each byte of data
not forming part of such an identical sequence; and
encoding means responsive to the identification of such a sequence to
apply to the transfer medium an identification signal which identifies
both the location in the input store of the previous occurrence of the
sequence of bytes and the number of bytes contained in the sequence.
 
4,730,348
Adaptive data compression system
inventor MacCrisken, assignee Adaptive Computer Technologies
filed Sep. 19, 1986, granted Mar. 8, 1988
[order-1 Markov modeling + Huffman coding + LZ77]
 
4,758,899
Data compression control device
inventor Tsukiyama, assignee Hitachi
filed 11/20/1985, granted 07/19/1988
Limits compression to ensure that tape drive stays busy.
 
4,809,350
Data compression system
filed Jan. 30, 1987, granted Feb. 28, 1989
inventor Yair Shimoni & Ron Niv
assignee Elscint Ltd., Haifa, Israel
[Image compression via variable length encoding of differences with
predicted data.]
 
4,814,746
Data compression method
inventors Victor S. Miller, Mark N. Wegman
assignee IBM
filed 8/11/86, granted 3/21/89
A previous application was filed on 6/1/83, three weeks before the
application by Welch (4,558,302)
Communications between a Host Computing System and a number of remote
terminals is enhanced by a data compression method which modifies the
data compression method of Lempel and Ziv by addition of new character
and new string extensions to improve the compression ratio, and
deletion of a least recently used routine to limit the encoding tables
to a fixed size to significantly improve data transmission efficiency.
 
4,841,092
continued in 5,003,307
 
4,853,696
Code converter for data compression/decompression
filed 4/13/87, granted 8/1/89
inventor Amar Mukherjee, Maitland FL
assignee University of Central Florida, Orlando FL
Another hardware Huffman encoder:
A code converter has a network of logic circuits connected in reverse
binary tree fashion with logic paths between leaf nodes and a common
root node.
 
4,872,009
Method and apparatus for data compression and restoration
inventor Tsukiyama et al.
assignee Hitachi
filed 12/07/87, granted 10/03/89
This patent on run length encoding covers the 'invention' of limiting
the run length to 16 bytes and thus the encoding of the length on 4 bits.
 
4,876,541
Stem [sic] for dynamically compressing and decompressing electronic data
filed 10/15/87, granted 10/24/89
inventor James A. Storer
assignee Data Compression Corporation
A data compression system for encoding and decoding textual data,
including an encoder for encoding the data and for a decoder for
decoding the encoded data.
 
4,891,643
Arithmetic coding data compression/de-compression by selectively
employed, diverse arithmetic encoders and decoders.
granted Jan  2, 1990
assignee IBM
 
4,905,297
granted Feb 27, 1990
assignee IBM
Arithmetic coding encoder and decoder system.
 
4,906,991
Textual substitution data compression with finite length search window
filed 4/29/1988, granted 3/6/1990
inventors Fiala,E.R., and Greene,D.H.
assignee Xerox Corporation
 
4,935,882
Probability adaptation for arithmetic coders.
granted Jun 19, 1990
assignee IBM
 
4,941,193
Barnsley, fractal compression.
 
4,943,869
Compression Method for Dot Image Data
filed 1988-05-04, granted 1990-07-24
assignee Fuji Photo Film Co.
Lossy and lossless image compression schemes.
 
4,955,066
Compressing and Decompressing Text Files
filed  10/13/89, granted 09/04/90
inventor Notenboom, L.A.
assignee Microsoft
Now extended as 5,109,433
[Noted in signon screen of Word 5.5 and on the outside of the MS-DOS 5.0
Upgrade.]
A method of compressing a text file in digital form is disclosed.
A full text file having characters formed into phrases is provided by an
author.  The characters are digitally represented by bytes.  A first pass
compression is sequentially followed by a second pass compression of the
text which has previously been compressed.  A third or fourth level of
compression is serially performed on the compressed text.  For example, in
a first pass, the text is run-length compressed.  In a second pass, the
compressed text is further compressed with key phrase compression.  In a
third pass, the compressed text is further compressed with Huffman
compression.  The compressed text is stored in a text file having a Huffman
decode tree, a key phrase table, and a topic index.  The data is
decompressed in a single pass and provided one line at a time as an output.
Sequential compressing of the text minimizes the storage space required for
the file.  Decompressing of the text is performed in a single pass.  As a
complete line is decompressed, it is output rapidly, providing full text to
the user.
 
4,973,961
Method and apparatus for carry-over control in arithmetic coding.
granted Nov 27, 1990
assignee AT&T
 
4,988,998
Data compression system for successively applying at least two data
compression methods to an input data stream.
inventor O'Brien
assignee Storage Technology Corporation, Louisville, Colorado
filed Sep 5, 1989, granted Jan 29, 1991.
Run length encoding followed by LZ77.
 
5,001,478
Method of Encoding Compressed Data
filed 12/28/89, granted 03/19/91
inventor Michael E. Nagy
assignee IBM
1. A method of encoding a compressed data stream made up of a sequence of
literal references, lexicon references and history references, which
comprises the steps of:
assigning to each literal reference a literal identifier;
assigning to each history reference a history identifier;
assigning to each lexicon reference a lexicon identifier;
and emitting a data stream with said identifiers assigned to said references.
Gordon Irlam <[email protected]> says:
The invention can probably be best understood by considering the
decompressor.  It consists of a history buffer, and a lexicon buffer, both
of which are initially empty.  The history buffer contains the last n
symbols emitted.  Whenever a history buffer reference is to be output the
string so referenced is subsequently moved to the lexicon buffer for future
reference.  Thus the history buffer keeps track of strings that may be
repeated on a very short term basis, while the lexicon buffer stores items
for a longer time.  Furthermore a history reference involves specifying
both the offset and length within the history buffer, whereas a lexicon
reference simply specifies a number denoting the string.  Both buffers have
a finite size.
 
5,003,307
Data compression apparatus with shift register search means
filed Oct. 6, 1989, granted Mar. 26, 1991
inventors George Glen A, Ivey Glen E, Whiting Douglas L
assignee Stac Inc
continuation of 4,841,092
 
5,016,009
Data compression apparatus and method
filed 01/13/1989, granted 05/14/1991
inventors George Glen A, Ivey Glen E, Whiting Douglas L
assignee Stac Inc
LZ77 with offset hash table (extended in 5,126,739)
 
5,023,611
Entropy encoder/decoder including a context extractor.
granted Jun 11, 1991
assignee AT&T
 
5,025,258
Adaptive probability estimator for entropy encoder/decoder.
granted Jun 18, 1991
assignee AT&T
 
5,049,881
Apparatus and method for very high data rate-compression incorporating
lossless data compression and expansion utilizing a hashing technique
inventors Dean K. Gibson, Mark D. Graybill
assignee Intersecting Concepts, Inc.
filed 6/18/90, granted 9/17/91
[covers lzrw1, almost identical with Waterworth 4,701,745]
 
5,051,745
String searcher, and compressor using same
filed  8/21/90, granted 9/24/91
inventor  Phillip W. Katz (author of pkzip)
In the string search method and apparatus pointers to the string to be
searched are indexed via a hashing function and organized according to the
hashing values of the string elements pointed to. The hashing function is
also run on the string desired to be found, and the resulting hashing value
is used to access the index. If the resulting hashing value is not in the
index, it is known that the target string does not appear in the string
being searched. Otherwise the index is used to determine the pointers which
correspond to the target hashing value, these pointers pointing to likely
candidates for matching the target string. The pointers are then used to
sequentially compare each of the locations in the string being searched to
the target string, to determine whether each location contains a match to
the target string.
In the method and apparatus for compressing a stream of data symbols, a
fixed length search window, comprising a predetermined contiguous portion
of the symbol stream, is selected as the string to be searched by the
string searcher. If a string to be compressed is found in the symbol
stream, a code is output designating the location within the search window
of the matching string and the length of the matching string.
 
5,065,447
Barnsley, fractal compression
 
5,109,433
Compressing and decompressing text files
inventor Notenboom
assignee Microsoft
extension of 4,955,066
 
5,126,739
Data Compression Apparatus and Method
filed Nov. 27, 1990, granted June 30, 1992.
inventor Whiting et. al
assignee Stac Inc
LZ77 with offset hash table (extension of 5,016,009)
 
5,140,321
Data compression/decompression method and apparatus
filed 9/4/91, granted 8/18/92
inventor Robert Jung
assignee Prime Computer
 
5,155,484
Fast data compressor with direct lookup table indexing into history buffer
filed 9/13/1991, granted 10/13/1992
inventor Chambers, IV, Lloyd L., Menlo Park, California
assignee Salient Software, Inc., Palo Alto, California (02)
Uses a 64K hash table indexed by the first two characters of
the input string. Includes several claims on the LZ77 file format
(literal or pair offset,length).
 
5,179,378
filed Jul. 30, 1991, granted Jan. 12, 1993
inventor Ranganathan
assignee University of South Florida
Method and apparatus for the compression and decompression of data
using Lempel-Ziv based techniques.
[This covers LZ77 hardware compression with a systolic array of
processors working in parallel.]
 
5,229,768
Adaptive data compression system
granted Jul. 20, 1993
inventor Kasman E. Thomas
assignee Traveling Software, Inc. 
  A system for data compression and decompression is disclosed. A series of
fixed length overlapping segments, called hash strings, are formed from an
input data sequence. A retrieved character is the next character in the input
data sequence after a particular hash string. A hash function relates a
particular hash string to a unique address in a look-up table (LUT). An
associated character for the particular hash string is stored in the LUT at
the address. When a particular hash string is considered, the content of the
LUT address associated with the hash string is checked to determine whether
the associated character matches the retrieved character following the hash
string. If there is a match, a Boolean TRUE is output; if there is no match,
a Boolean FALSE along with the retrieved character is output. Furthermore, if
there is no match, then the LUT is updated by replacing the associated
character in the LUT with the retrieved character. [...]
[This algorithm is used in the Internet Draft "PPP Predictor
Compression Protocol".]
 
Japan 2-46275
Coding system
granted Feb 26, 1990
[Patents one form of arithmetic coding.]
 
------------------------------------------------------------------------------
 
Subject: [9]  The WEB 16:1 compressor.
 
 
[WARNING: this topic has generated the greatest volume of news in the
history of comp.compression. Read this before posting on this subject.]
 
[WARNING 2: it is quite possible that the story is repeating itself
with a compressor called MINC by Premier Research Corporation, Ltd.
They claim a breakthrough in lossless data compression using a recursive
method, that is, applying the compressor to the compressed output of
the previous run.]
 
[WARNING 3: the OWS program, which also claims incredible compression
ratios, is a hoax. It just remembers the clusters which contained
the data. The data can be recovered only if those clusters are not
used again for another file. Needless to say, never trust such a
lossy program.]
 
(a) What the press says
 
April 20, 1992  Byte Week Vol 4. No. 25:
 
   "In an announcement that has generated high interest - and more than a
   bit of skepticism - WEB Technologies (Smyrna, GA) says it has
   developed a utility that will compress files of greater than 64KB in
   size to about 1/16th their original length.  Furthermore, WEB says its
   DataFiles/16 program can shrink files it has already compressed."
   [...]
   "A week after our preliminary test, WEB showed us the program successfully
   compressing a file without losing any data.  But we have not been able
   to test this latest beta release ourselves."
   [...]
   "WEB, in fact, says that virtually any amount of data can be squeezed 
   to under 1024 bytes by using DataFiles/16 to compress its own output
   multiple times."
 
June 1992 Byte, Vol 17 No 6:
 
   [...] According to Earl Bradley, WEB Technologies' vice president of
   sales and marketing, the compression algorithm used by DataFiles/16
   is not subject to the laws of information theory. [...]
 
 
(b) First details, by John Wallace <[email protected]>:
 
I called WEB at (404)514-8000 and they sent me some product
literature as well as chatting for a few minutes with me on the phone.
Their product is called DataFiles/16, and their claims for it are
roughly those heard on the net.
 
According to their flier:
 
"DataFiles/16 will compress all types of binary files to approximately
one-sixteenth of their original size ... regardless of the type of
file (word processing document, spreadsheet file, image file,
executable file, etc.), NO DATA WILL BE LOST by DataFiles/16."
(Their capitalizations; 16:1 compression only promised for files >64K
bytes in length.)
 
"Performed on a 386/25 machine, the program can complete a
compression/decompression cycle on one megabyte of data in less than
thirty seconds"
 
"The compressed output file created by DataFiles/16 can be used as the 
input file to subsequent executions of the program.  This feature of 
the utility is known as recursive or iterative compression, and will 
enable you to compress your data files to a tiny fraction of the 
original size.  In fact, virtually any amount of computer data can 
be compressed to under 1024 bytes using DataFiles/16 to compress its 
own output files multiple times.  Then, by repeating in reverse the 
steps taken to perform the recursive compression, all original data 
can be decompressed to its original form without the loss of a single 
bit."
 
Their flier also claims: 
 
"Constant levels of compression across ALL TYPES of FILES"
"Convenient, single floppy DATA TRANSPORTATION"
 
From my telephone conversation, I was assured that this is an
actual compression program.  Decompression is done by using only the 
data in the compressed file; there are no hidden or extra files.
 
 
(c) More information, by Rafael Ramirez <[email protected]>:
 
   Today (Tuesday, 28th) I got a call from Earl Bradley of Web
who now says that they have put off releasing a software version of
the algorithm because they are close to signing a major contract with
a big company to put the algorithm in silicon.  He said he could not
name the company due to non-disclosure agreements, but that they had
run extensive independent tests of their own and verified that the
algorithm works. [...]
 
He said the algorithm is so simple that he doesn't want anybody
getting their hands on it and copying it even though he said they
have filed a patent on it. [...] Mr. Bradley said the silicon version
would hold up much better to patent enforcement and be harder to copy.
 
   He claimed that the algorithm takes up about 4K of code, uses only
integer math, and the current software implementation only uses a 65K
buffer.  He said the silicon version would likely use a parallel
version and work in real-time. [...]
 
 
(d) The impossibility proofs.
 
It is impossible for a given program to compress without loss *all*
files greater than a certain size by at least one bit. This can be
proven by a simple counting argument. (Many other proofs have been
posted on comp.compression, *please* do not post yet another one.)
 
Assume that the program can compress without loss all files of size >= N
bits.  Compress with this program all the 2^N files which have
exactly N bits.  All compressed files have at most N-1 bits, so there
are at most (2^N)-1 different compressed files [2^(N-1) files of size
N-1, 2^(N-2) of size N-2, and so on, down to 1 file of size 0]. So at
least two different input files must compress to the same output file.
Hence the compression program cannot be lossless. (Stronger results
about the number of incompressible files can be obtained, but the
proofs are a little more complex.)
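
As a tiny concrete instance, take N = 3: there are 2^3 = 8 distinct 3-bit
files, but only 2^2 + 2^1 + 2^0 = 7 possible outputs shorter than 3 bits, so
at least two of the 8 inputs must compress to the same output and cannot both
be recovered. The same arithmetic works for any N.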
 
This argument applies of course to WEB's case (take N = 64K*8 bits).
Note that no assumption is made about the compression algorithm.
The proof applies to *any* algorithm, including those using an
external dictionary, or repeated application of another algorithm,
or combination of different algorithms, or representation of the
data as formulas, etc... All schemes are subject to the counting argument.
There is no need to use information theory to provide a proof, just
basic mathematics.
 
This assumes of course that the information available to the decompressor
is only the bit sequence of the compressed data. If external information
such as a file name or a number of iterations is necessary to decompress
the data, the bits providing the extra information must be included in
the bit count of the compressed data. (Otherwise, it would be sufficient
to consider any input data as a number, use this as the iteration
count or file name, and pretend that the compressed size is zero.)
For an example of storing information in the file name, see the program
lmfjyh in the 1993 International Obfuscated C Code Contest, available
on all comp.sources.misc archives (Volume 39, Issue 104).
 
[See also question 73 "What is the theoretical compression limit?" in
part 2 of this FAQ.]
 
 
(e) No software version
 
Appeared on BIX, reposted by Bruce Hoult <[email protected]>:
 
tojerry/chaos #673, from abailey, 562 chars, Tue Jun 16 20:40:34 1992
Comment(s). 
----------
TITLE: WEB Technology
I promised everyone a report when I finally got the poop on WEB's
16:1 data compression. After talking back and forth for a year
and being put off for the past month by un-returned phone calls,
I finally got hold of Marc Spindler who is their sales manager.
 
_No_ software product is forthcoming, period!
 
He began talking about hardware they are designing for delivery
at the end of the year. [...]
 
 
(f) Product cancelled
 
Posted by John Toebes <[email protected]> on Aug 10th, 1992:
 
[Long story omitted, confirming the reports made above about the
original WEB claims.]
 
10JUL92 - Called to Check Status.  Was told that testing had uncovered a
          new problem where 'four numbers in a matrix were the same
          value' and that the programmers were off attempting to code a
          preprocessor to eliminate this rare case.  I indicated that he
          had told me this story before.  He told me that the
          programmers were still working on the problem.
 
31JUL92 - Final Call to Check Status.  Called Earl in the morning and
          was told that he still had not heard from the programmers. [...]
          Stated that if they could not resolve the problem then there would
          probably not be a product.
 
03AUG92 - Final Call.  Earl claims that the programmers are unable to
          resolve the problem.  I asked if this meant that there would
          not be a product as a result and he said yes.
 
 
(g) Conclusion
 
The last report given above should put an end to the WEB story.
 
[Note from the FAQ maintainer: I intended to remove this story from
the FAQ, but the recent announcement of the MINC compressor has some
similarities with the WEB story so it is useful to keep it a little
longer.]
 
------------------------------------------------------------------------------
 
Subject: [11] What is the V.42bis standard?
 
 
A description of the V.42bis standard is given in "The V.42bis
standard for data-compressing modems," by Clark Thomborson
<[email protected]>, IEEE Micro, Oct 1992, pp. 41-53. 
 
Short introduction, by Alejo Hausner <[email protected]>:
 
The V.42bis Compression Standard was proposed by the International
Consultative Committee on Telephony and Telegraphy (CCITT) as an
addition to the V.42 error-correction protocol for modems. Its purpose
is to increase data throughput; it uses a variant of the
Lempel-Ziv-Welch (LZW) compression method.  It is meant to be
implemented in the modem hardware, but can also be built into the
software that interfaces to an ordinary non-compressing modem.
 
V.42bis can send data compressed or not, depending on the
data.  There are some types of data that cannot be
compressed.  For example, if a file was compressed first,
and then sent through a V.42bis modem, the modem would not
likely reduce the number of bits sent.  Indeed it is likely
that the amount of data would increase somewhat.
 
To avoid this problem, the algorithm constantly monitors the
compressibility of the data, and if it finds that fewer bits would
be needed to send the data uncompressed, it switches to
transparent mode.  The sender informs the receiver of this
transition through a reserved code word.  Henceforth the
data is passed as plain bytes.
 
While transmitting in transparent mode, the sender maintains
the LZW trees of strings, and expects the receiver to do
likewise.  If it finds an advantage in returning to
compressed mode, it will do so, first informing the receiver
by a special escape code.  Thus the method allows the
hardware to adapt to the compressibility of the data.
 
The choice of escape code is clever.  Initially, it is a
zero byte.  Any occurrence of the escape code is replaced,
as is customary, by two escape codes.  In order to prevent a
string of escape codes from temporarily cutting throughput
in half, the escape code is redefined by adding 51 mod 256
each time it is used.
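
A minimal C sketch of the transparent-mode escape handling just described;
the framing is invented for illustration and this is of course not the full
V.42bis procedure:

    static unsigned char escape = 0;          /* initial escape code is zero */

    /* Send one data byte while in transparent mode, following the
       description above: an occurrence of the escape code is doubled and
       the escape code is then cycled by adding 51 modulo 256. */
    void send_transparent_byte(unsigned char c, void (*put_byte)(unsigned char))
    {
        if (c == escape) {
            put_byte(escape);
            put_byte(escape);
            escape = (escape + 51) & 0xff;    /* receiver applies the same rule */
        } else {
            put_byte(c);
        }
    }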
 
The CCITT standards documents used to be available by ftp on
ftp.uu.net in /doc/standards/ccitt, but this service has been
discontinued. If you ftp to digital.resource.org, in directory pub/standards
there is a file that says that making the standards available in the
first place was just an experiment.
 
The documents are now on src.doc.ic.ac.uk, but the directory name
keeps changing. Check one of:
  /computing/ccitt/ccitt-standards/ccitt/
  /computing/ccitt/standards/ccitt
  /doc/ccitt-standards/ccitt
in this order. The v42bis standard is in *standards/ccitt/1992/v/v42bis.asc.Z.
 
 
A mail server for CCITT documents is available at [email protected]
or [email protected]. A Gopher server is also available:
 
Name=International Telecommunication Union (ITU)
Host=info.itu.ch
Port=70
 
For more information, contact Robert Shaw <[email protected]> or
Antoinette Bautista <[email protected]>.  Warning by John Levine
<[email protected]> (probably obsolete by now):
 
  This teledoc thing is much less than meets the eye.  What it
  actually has is one-page abstracts of some but not all CCITT
  recommendations, along with junk like lists of the national
  representatives to CCITT.  If you want the actual text of a
  recommendation, you have to send large amounts of money to
  Switzerland, same as ever. However, a closer reading of the Teledoc
  announcement shows that they say they're planning to make the actual
  text of some CCITT recommendations available on-line sometime in 1993.
 
 
See also the Standards FAQ posted to news.answers or get it by ftp in
rtfm.mit.edu:/pub/usenet/news.answers/standards-faq.
 
------------------------------------------------------------------------------
 
Subject: [12] I need source for the winners of the Dr Dobbs compression contest
 
 
The sources of the top 6 programs of the Feb 91 Dr Dobbs data compression
contest are available by ftp on
  oak.oakland.edu:/pub/msdos/compress/ddjcompr.zip
  garbo.uwasa.fi:/pc/source/ddjcompr.zip [128.214.87.1]
 
The sources are in MSDOS end-of-line format, one directory per
program.  Unix or VMS users, use "unzip -a ddjcompr" to get correct
end-of-lines (add -d to recreate the directory structure if you are
using an obsolete version of unzip such as 4.1). Three of the 6
programs are not portable and only run on MSDOS. compact and urban
work on Unix; sixpack only requires minor modifications.
 
------------------------------------------------------------------------------
 
Subject: [13] I need source for arithmetic coding
 
 
(See question 70 for an introduction to arithmetic coding.)
 
The source for the arithmetic coder described in Chap.5 of Bell,
Cleary, and Witten's book "Text Compression" (see question 7 above)
(or, equivalently, in: Witten, Neal, and Cleary's article "Arithmetic
Coding for data Compression" from Communications of the Association
for Computing Machinery, 30 (6), pp.520-540, June, 1987) is available
via anonymous ftp from ftp.cpsc.ucalgary.ca (136.159.7.18) in directory
/pub/projects/arithmetic.coding.  It only comes with a simple order-0 model
but it's set up so that adding your own more sophisticated one is
straightforward.
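
For readers wondering what a 'model' means here, the following is a minimal
sketch of an adaptive order-0 model: a table of symbol counts from which the
coder derives cumulative frequencies. The names and interface are invented
for illustration and do not match the CACM code.

    #define NSYM 257                     /* 256 byte values + end-of-stream */

    static unsigned long freq[NSYM];

    void model_init(void)
    {
        int i;
        for (i = 0; i < NSYM; i++)
            freq[i] = 1;                 /* every symbol starts off possible */
    }

    /* Cumulative count of all symbols below 'sym'; the coder maps the
       interval [cum_below(sym), cum_below(sym)+freq[sym]) / total onto a
       subinterval of its current range.  (Real code keeps the cumulative
       counts incrementally and rescales them when the total gets too large
       for the coder's arithmetic precision.) */
    unsigned long cum_below(int sym)
    {
        unsigned long c = 0;
        int i;
        for (i = 0; i < sym; i++)
            c += freq[i];
        return c;
    }

    void model_update(int sym)
    {
        freq[sym]++;                     /* adapt after coding each symbol */
    }

A higher order model only changes how the counts are selected (for example,
one table per preceding context) while the coder itself stays the same.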
 
A low precision arithmetic coding implementation avoiding hardware
division is available on the same site (ftp.cpsc.ucalgary.ca) in
/pub/projects/arithmetic.coding/low.precision.version, file
low.precision.version.shar.
 
Kris Popat <[email protected]> has worked on "Scalar Quantization
with Arithmetic Coding."  It describes an arithmetic coding technique
which is quite general and computationally inexpensive.  The
documentation and example C code are available via anonymous ftp from
media-lab.media.mit.edu (18.85.0.2), in /pub/k-arith-code.
 
The program 'urban' in ddjcompr.zip (see item 12 above) is a high order
arithmetic coder working at the bit level. It is written by Urban Koistinen
<[email protected]>.
 
------------------------------------------------------------------------------
 
Subject: [15] Where can I get image compression programs?
 
 
JPEG:
    Source code for most any machine:
    ftp.uu.net:/graphics/jpeg/jpegsrc.v5.tar.Z [137.39.1.9]
    nic.funet.fi:/pub/graphics/packages/jpeg/jpegsrc.v5.tar.Z [128.214.6.100]
    Contact: [email protected] (Independent JPEG Group)
 
    havefun.stanford.edu:pub/jpeg/JPEGv1.2.tar.Z (supports lossless mode)
    Contact: Andy Hung <[email protected]> (see item 20 below)
 
    xv, an image viewer which can read JPEG pictures, is available in
    export.lcs.mit.edu: contrib/xv-2.21.tar.Z [18.24.0.12]
 
MPEG:
    havefun.stanford.edu:/pub/mpeg/MPEGv1.2.alpha.tar.Z
    Contact: Andy Hung <[email protected]> (see item 20 below)
 
    s2k-ftp.cs.berkeley.edu:/pub/multimedia/mpeg/mpeg_play-2.0.tar.Z
    s2k-ftp.cs.berkeley.edu:/pub/multimedia/mpeg/mpeg_encode-1.3.tar.Z.
    Contact: [email protected]
 
    flash.bu.edu:/pub/code/mpeg_system/mpeg_system_source_v1.0.tar.gz
    (MPEG-I Multi-Stream System Layer encoder/player; includes an
     enhanced version of mpeg_play)
    Contact: Jim Boucher <[email protected]> or Ziv Yaar <[email protected]>
 
    yorick.mni.mcgill.ca:/pub/mpeg/mpeg_lib-1.0.tar.gz [MPEG library]
    Contact: Gregory Ward <[email protected]>
 
    ftp.netcom.com:/pub/cfogg/mpeg/vmpeg12a.zip
      Contact: Stefan Eckart <[email protected]>
    decel.ecel.uwa.edu.au:/users/michael/mpegw32f.zip (for Windows and NT)
 
    nvr.com:/pub/NVR-software/Product-1.0.4.tar.Z   (192.82.231.50)
    (free demo copy of NVR's software toolkit for SPARCstations)
    Contact: Todd Brunhoff <[email protected]>
 
    ftp.netcom.com:/pub/cfogg/mpeg2/* (MPEG-2 encoder and decoder)
    Contact: [email protected] (MPEG Software Simulation Group)
 
H.261(P*64):
    havefun.stanford.edu:pub/p64/P64v1.2.tar.Z
    Contact: Andy Hung <[email protected]> (see item 20 below)
 
    zenon.inria.fr:/rodeo/ivs/last_version/ivs*-src.tar.gz
    (Inria videoconference system)
    Contact: Thierry Turletti <[email protected]> (see item 20 below).
 
JBIG:
    nic.funet.fi:/pub/graphics/misc/test-images/jbig.tar.gz.
 
mg: (the MG system for compressing and indexing text and images, see item 16)
    munnari.oz.au:/pub/mg/*
    Contact: Stuart Inglis <[email protected]>
 
epic: (pyramid wavelet coder, see item 72)
    whitechapel.media.mit.edu:/pub/epic.tar.Z [18.85.0.125]
    Contact: Eero P. Simoncelli <[email protected]>
    The "Lenna" test image is available as part of the EPIC package,
    where it is named "test_image".
 
hcompress: (wavelet image compression, see item 72)
    stsci.edu:/software/hcompress/hcompress.tar.Z
 
wavethresh: (wavelet software for the language S)
    gdr.bath.ac.uk:/pub/masgpn/wavethresh2.2.Z
    Contact: [email protected]
 
rice-wlet: (wavelet software, see item 72)
    cml.rice.edu:/pub/dsp/software/rice-wlet-tools.tar.Z
 
scalable: (2 & 3 dimensional subband transformation)
    robotics.eecs.berkeley.edu:/pub/multimedia/scalable2.tar.Z
    Contact: [email protected]
 
compfits:
    uwila.cfht.hawaii.edu:/pub/compfits/compfits.tar.Z  [128.171.80.50]
    Contact: Jim Wright <[email protected]>
 
fitspress:
    cfata4.harvard.edu:/pub/fitspress08.tar.Z [128.103.40.79]
 
tiff:
    For source and sample images, see question 18 below.
 
DCT algorithms:
    etro.vub.ac.be:/pub/DCT_ALGORITHMS/*
    Contact: Charilos Christopoulos <[email protected]>
 
xanim: (X11 animation viewer, supports Quicktime and several other formats)
    ftp.x.org:/contrib/applications/xanim2683.tar.Z
 
A demo of image compression using neural networks is available in
http://www.ee.duke.edu/~cec/index.html.
 
For fractal compression programs, see item 17 below.
For image compression hardware, see item 85 in part 3 of this FAQ.
 
------------------------------------------------------------------------------
 
Subject: [16] What is the state of the art in lossless image compression?
 
 
The current state-of-the-art is the JBIG algorithm.  For an
introduction to JBIG, see question 74 in part 2.
 
JBIG works best on bi-level images (like faxes) and also works well on
Gray-coded grey scale images up to about six or so bits per pixel.  You
just apply JBIG to the bit planes individually.  For more bits/pixel,
lossless JPEG sometimes provides better performance. (For JPEG, see
question 19 below.)
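
A small C sketch of the bit-plane approach just mentioned: convert each pixel
to its Gray code, peel off one bit plane at a time and hand each plane to a
bi-level coder. Here 'code_bilevel_plane' is only a placeholder for such a
coder (for instance a JBIG implementation), not a real API.

    #include <stdlib.h>

    static unsigned char to_gray(unsigned char v)
    {
        return v ^ (v >> 1);                  /* binary -> Gray code */
    }

    void compress_gray_planes(const unsigned char *image, long npixels, int bits,
                              void (*code_bilevel_plane)(const unsigned char *plane,
                                                         long npixels, int bit))
    {
        unsigned char *plane = malloc(npixels);
        long i;
        int b;

        if (plane == NULL)
            return;
        for (b = bits - 1; b >= 0; b--) {     /* one pass per bit plane */
            for (i = 0; i < npixels; i++)
                plane[i] = (to_gray(image[i]) >> b) & 1;
            code_bilevel_plane(plane, npixels, b);
        }
        free(plane);
    }

Gray coding helps because a change of one grey level then flips only a single
bit, so the individual planes stay smoother than with plain binary.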
 
You can find a description of JBIG in ISO/IEC CD 11544, contained in
document ISO/IEC JTC1/SC2/N2285.  The only way to get it is to ask
your National Standards Body for a copy. In the USA, call ANSI at
(212) 642-4900.
 
See also the MG system containing 'FELICS' which is reported to be
better than JBIG. (See item 15 above for ftp location). From the MG
README file:
 
  The MG system is a suite of programs for compressing and
  indexing text and images. Most of the functionality implemented
  in the suite is as described in the book ``Managing Gigabytes:
  Compressing and Indexing Documents and Images'', I.H. Witten, A.
  Moffat, and T.C. Bell; Van Nostrand Reinhold, New York, 1994, ISBN
  0-442-01863-0; US $54.95; call 1 (800) 544-0550 to order.
 
  These features include:
 
  -- text compression using a Huffman-coded semi-static word-based
     scheme
  -- two-level context-based compression of bi-level images
  -- FELICS lossless compression of gray-scale images
  -- combined lossy/lossless compression for textual images
  -- indexing algorithms for large volumes of text in limited main
     memory
  -- index compression
  -- a retrieval system that processes Boolean and ranked queries
  -- an X windows interface to the retrieval system
 
------------------------------------------------------------------------------
 
Subject: [17] What is the state of fractal compression?
 
It is recommended to read first item 77 in part 2 of this FAQ:
"Introduction to Fractal compression".
 
 
from Tal Kubo <[email protected]>:
 
According to Barnsley's book 'Fractals Everywhere', this method is
based on a measure of deviation between a given image and its
approximation by an IFS code.  The Collage Theorem states that there is
a convergent process to minimize this deviation.  Unfortunately,
according to an article Barnsley wrote for BYTE a few years ago, this
convergence was rather slow, about 100 hours on a Cray, unless assisted by
a person.
 
Barnsley et al are not divulging any technical information beyond the
meager bit in 'Fractals Everywhere'.  The book explains the idea of IFS
codes at length, but is vague about the application of the Collage theorem
to specific compression problems.
 
There is reason to believe that Barnsley's company has
*no algorithm* which takes a given reasonable image and achieves
the compression ratios initially claimed for their fractal methods.
The 1000-to-1 compression advertised was achieved only for a 'rigged'
class of images, with human assistance. The best unaided
performance I've heard of is good lossy compression of about 80-1.
 
Steve Tate <[email protected]> confirms:
 
Compression ratios (unzoomed) seem to range from 20:1 to 60:1...  The
quality is considerably worse than wavelets or JPEG on most of the
non-contrived images I have seen.
 
But Yuval Fisher <[email protected]> disagrees:
 
Their performance has improved dramatically beyond what they were
talking about in BYTE a few years ago.  Human assistance to the
compression is no longer needed and the compression time is
reasonable, although the more time and compute power you throw at the
compression, the smaller the resulting file for the same level of
quality.
 
Geoffrey A Stephenson <[email protected]> adds:
 
Iterated systems are shipping a general purpose compressor at about
300 Pounds in the UK that claims "640x480 24 bit colour compression of
about 1 min at 922k -> 10k on a 486/50 software only, decomp. to 8
bits in 3 secs, etc." At a recent multimedia conference in London they
handed out free demo disks that show the decomp. in action. The
package runs under both DOS and WIN (DLLs provided for use in
applications). They also sell a board to speed up compression and
offer versions supporting full motion video (but not apparently at all
SVGA sizes like the static picture version). I have not yet got my
hands on a full version to test different types of pictures, but
friends have one and claim it looks good.
 
 
Thomas W. Colthurst <[email protected]> clarifies the distinction
between IFS and the Fractal Transform:
 
It is time, once and for all, to put to death the Barnsley myth that
IFSs are good for image compression.  They are not.  Various algorithms
have been proposed for this "inverse problem" ranging from the trendy
(genetic algorithms) to the deep (moment methods) to the ad hoc (the
hungry algorithm) to the absurd (the so-called "graduate student
algorithm", consisting of locking up a grad student in a tiny office
with a SGI workstation and not letting them out until they come up
with a good IFS for your image).  They are all useless for practical
image compression.
 
In fact, there are even good theoretical reasons for believing that
IFSs will never be useful for image compression.  For example, even
if you have an IFS for object A and an IFS for object B, there is no
way to combine these IFSs to get an IFS for object A union B or
object A intersect B.
 
Even Barnsley himself admits, in his latest book, that he doesn't use
IFS image compression.  Instead, he uses the so-called "fractal
transform," which is really just a variant of vector quantization
where you use the image itself, sampled at a higher scale, as the
VQ codebook.  To be fair, the fractal transform can be analyzed using
local IFSs, but local IFSs are immensely more complicated and general
than normal IFSs, to the point where one feels suspect even using the
word "IFS" to describe them.
 
It should be emphasized that the fractal transform is a real, working
method that performs about as well as other existing methods like VQ
or the discrete cosine transform. The fractal transform will probably
never beat vector quantization (VQ) for the size of the compressed
image, but does have the advantage that you don't need to carry your
codebook around.  The latest results have it slightly winning over
the discrete cosine transform; only time and more research will tell
if this advantage persists.  Just like VQ, the fractal transform
takes a while to compress, but is quick at decompression (Barnsley's
company has hardware to do this in realtime).
 
In short, IFSs are good for just about everything fractals are (and
more!), but are absolutely horrid for image compression.
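
To make the 'vector quantization with the image as its own codebook'
characterization concrete, here is a rough C sketch of the matching step at
the heart of such a scheme: each R x R range block is approximated by a
2R x 2R domain block taken from the same image, shrunk by averaging and
fitted with a brightness scale and offset. The block sizes, the exhaustive
search and the plain least-squares fit are illustrative choices, not
Barnsley's actual method.

    #define R 8                              /* range block size */

    /* Average the 2R x 2R domain block at (dx,dy) down to R x R. */
    static void shrink(const unsigned char *img, int w, int dx, int dy,
                       double out[R][R])
    {
        int i, j;
        for (i = 0; i < R; i++)
            for (j = 0; j < R; j++)
                out[i][j] = (img[(dy+2*i)*w   + dx+2*j] + img[(dy+2*i)*w   + dx+2*j+1] +
                             img[(dy+2*i+1)*w + dx+2*j] + img[(dy+2*i+1)*w + dx+2*j+1]) / 4.0;
    }

    /* Least-squares fit range ~ s*domain + o; returns the squared error. */
    static double fit(double d[R][R], double r[R][R], double *s, double *o)
    {
        double sd = 0, sr = 0, sdd = 0, sdr = 0, err = 0, det, n = R * R;
        int i, j;
        for (i = 0; i < R; i++)
            for (j = 0; j < R; j++) {
                sd += d[i][j]; sr += r[i][j];
                sdd += d[i][j] * d[i][j]; sdr += d[i][j] * r[i][j];
            }
        det = n * sdd - sd * sd;
        *s = (det != 0.0) ? (n * sdr - sd * sr) / det : 0.0;
        *o = (sr - *s * sd) / n;
        for (i = 0; i < R; i++)
            for (j = 0; j < R; j++) {
                double e = *s * d[i][j] + *o - r[i][j];
                err += e * e;
            }
        return err;
    }

    /* For the range block at (rx,ry), search the domain blocks on a coarse
       grid and keep the best (position, scale, offset); these triples, one
       per range block, are the compressed image. */
    void best_map(const unsigned char *img, int w, int h, int rx, int ry,
                  int *bx, int *by, double *bs, double *bo)
    {
        double range[R][R], domain[R][R], s, o, err, best = 1e30;
        int i, j, dx, dy;

        for (i = 0; i < R; i++)
            for (j = 0; j < R; j++)
                range[i][j] = img[(ry+i)*w + rx+j];
        for (dy = 0; dy + 2*R <= h; dy += R)
            for (dx = 0; dx + 2*R <= w; dx += R) {
                shrink(img, w, dx, dy, domain);
                err = fit(domain, range, &s, &o);
                if (err < best) {
                    best = err; *bx = dx; *by = dy; *bs = s; *bo = o;
                }
            }
    }

Decompression iterates the stored maps, starting from an arbitrary image,
until the result converges; the contractivity of the maps is what makes the
scheme 'fractal' rather than plain vector quantization.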
 
 
Programs:
 
A fractal image compression program is available by ftp in
lyapunov.ucsd.edu:/pub/young-fractal/unifs10.zip. (Unix users, see
item 2 above for unzip.) Note the file size before you ftp it:
1.2 MB. The package contains source for compression and decompression,
source for X-windows decompression, MSDOS executables and images.
A newer version of the program is in yuvpak20.zip.
 
A fractal image decompression program (note: decompression only) is
available in /pub/inls-ucsd/fractal-2.0.tar on the same ftp site
(lyapunov.ucsd.edu). Note the file size before you ftp it: 1.3 MB.
This file also contains a paper by Yuval Fisher (see reference below),
and some executables and sample images. Reading this paper is required
to understand how the Young compression programs (see above) work.
 
A note from Yuval Fisher <[email protected]>:
 
    Ftp to legendre.ucsd.edu and look in pub/Research/Fisher.  There
    is information there on my new book of contributed articles on
    fractal image compression, as well as the book's table of
    contents, some C code to encode and decode raw byte files of any
    size using a quadtree method, a manual explaining the use of the
    code, a fractal image compression bibliography (not guaranteed to
    be complete or close to it), some better executable code with
    sample encodings, and the SIGGRAPH '92 course notes on fractal
    image compression (these are based on appendix A of Chaos and
    Fractals by Peitgen et al., Springer Verlag). (See also
    http://legendre.ucsd.edu/Research/Fisher/fractal.html.)
 
 
The source code for the program published in the Oct 93 issue of
Byte is in ftp.uu.net:/published/byte/93oct/fractal.exe. This is a
self-extracting zip file (use "unzip fractal.exe" to extract on
non-MSDOS systems). The source code is for a TARGA video board.
 
Another fractal compression program is available by ftp in
vision.auc.dk:/pub/Limbo/Limbo*.tar.Z.
 
Several papers on fractal image compression are available by ftp
on ftp.informatik.uni-freiburg.de in directory /documents/papers/fractal.
A bibliography is in schmance.uwaterloo.ca:/pub/Fractal/fractal.biblio.ps.Z.
 
References:
  A. Jacquin, 'Fractal image coding based on a theory of iterated
    contractive image transformations', Visual Comm. and Image
    Processing, vol SPIE-1360, 1990. (The best paper that explains
    the concept in a simple way.)
 
  A. Jacquin, "A Fractal Theory of Iterated Markov Operators with
    Applications to Digital Image Coding", PhD Thesis, Georgia Tech, 1989.
  It can be obtained from university microfilms for $35, phone 1-800-521-0600.
 
  M. Barnsley, L. Anson, "Graphics Compression Technology", SunWorld,
    October 1991, pp. 42-52.
  M.F. Barnsley, A. Jacquin, F. Malassenet, L. Reuter & A.D. Sloan,
    'Harnessing chaos for image synthesis', Computer Graphics,
    vol 22 no 4 pp 131-140, 1988.
  M.F. Barnsley, A.E. Jacquin, 'Application of recurrent iterated
    function systems to images', Visual Comm. and Image Processing,
    vol SPIE-1001, 1988.
  A. Jacquin, "Image Coding Based on a Fractal Theory of Iterated Contractive
    Image Transformations" p.18, January 1992 (Vol 1 Issue 1) of IEEE Trans
    on Image Processing.
  A.E. Jacquin, 'A novel fractal block-coding technique for digital
    images', Proc. ICASSP 1990.
  G.E. Oien, S. Lepsoy & T.A. Ramstad, 'An inner product space
    approach to image coding by contractive transformations',
    Proc. ICASSP 1991, pp 2773-2776.
  D.S. Mazel, Fractal Modeling of Time-Series Data, PhD Thesis,
    Georgia Tech, 1991.    (One dimensional, not pictures)
  S. A. Hollatz, "Digital image compression with two-dimensional affine
    fractal interpolation functions", Department of Mathematics and
    Statistics, University of Minnesota-Duluth, Technical Report 91-2.
    (a nuts-and-bolts how-to-do-it paper on the technique)
  Stark, J., "Iterated function systems as neural networks",
    Neural Networks, Vol 4, pp 679-690, Pergamon Press, 1991.
  Monro D M and Dudbridge F, "Fractal block coding of images",
    Electronics Letters 28(11):1053-1054 (1992)
  Beaumont J M, "Image data compression using fractal techniques",
    British Telecom Technological Journal 9(4):93-108 (1991)
  Fisher Y, "Fractal image compression", Siggraph 92
  Graf S, "Barnsley's Scheme for the Fractal Encoding of Images",
    Journal Of Complexity, V8, 72-78 (1992).
  Monro D.M. 'A hybrid fractal transform', Proc ICASSP 93, pp. V: 169-72
  Monro D.M. & Dudbridge F. 'Fractal approximation of image blocks',
    Proc ICASSP 92, pp. III: 485-488
  Monro D.M., Wilson D., Nicholls J.A. 'High speed image coding with the Bath
    Fractal Transform', IEEE International Symposium on Multimedia Technologies
    Southampton, April 1993
  Jacobs, E.W., Y. Fisher and R.D. Boss.  "Image Compression:  A study
    of the Iterated Transform Method."  _Signal Processing 29_  (1992) 25-263
  Vrscay, Edward R.  "Iterated Function Systems:  Theory, Applications,
    and the Inverse Problem."  _Fractal Geometry and Analysis_,
    J. Belair and S. Dubuc (eds.)  Kluwer Academic, 1991.  405-468.
 
Book:
    Fractal Image Compression
    Michael F. Barnsley and Lyman P. Hurd
    ISBN 0-86720-457-5, ca. 250 pp., $49.95
    Copies can be ordered directly from the publisher by sending a message
    to [email protected] with name, address and a Mastercard or
    Visa card number with expiration date.
 
Barnsley's company is:
 
Iterated Systems, Inc.
5550A Peachtree Parkway, Suite 650
Norcross, GA  30092
tel: 404-840-0310 or 1-800-4FRACTL
fax: 404-840-0806
In UK: Phone (0734) 880261, Fax (0734) 880360
 
------------------------------------------------------------------------------
 
Subject: [18] I need specs and source for TIFF and CCITT group 4 Fax
 
 
Specs for Group 3 and 4 image coding (group 3 is very similar to group 4)
are in CCITT (1988) volume VII fascicle VII.3. They are recommendations
T.4 and T.6 respectively. There is also an updated spec contained in 1992
recommendations T.1 to T.6.
 
CCITT specs are available by anonymous ftp (see above answer on
V.42bis).  The T.4 and T.6 specs are on src.doc.ic.ac.uk in directory
/computing/ccitt/ccitt-standards/ccitt/1988/ascii, files 7_3_01.txt.Z and
7_3_02.txt.Z respectively.
 
The following paper covers T.4, T.6 and JBIG:
 
  "Review of standards for electronic imaging for facsimile systems"
  in Journal of Electronic Imaging, Vol. 1, No. 1, pp. 5-21, January 1992.
 
 
Source code can be obtained as part of a TIFF toolkit - TIFF image
compression techniques for binary images include CCITT T.4 and T.6:
 
    sgi.com:/graphics/tiff/v3.2.tar.Z    [192.48.153.1]
    Contact: [email protected]
 
There is also a companion compressed tar file (v3.0pics.tar.Z) that
has sample TIFF image files. A draft of TIFF 6.0 is in TIFF6.ps.Z.
Concerning JPEG compression in TIFF 6.0, Tom Lane <[email protected]> adds:
 
  The TIFF document won't do you much good unless you also have the official
  JPEG standard. You can buy it from ANSI or your national ISO member
  organization (DIN over there, I suppose). [See also the book by Pennebaker
  and Mitchell referenced in item 75 of this FAQ.]
 
  Worse, the TIFF 6.0 spec has serious problems in its JPEG features.  It is
  probable that section 22 (JPEG) will be rewritten from scratch.  If you are
  considering implementing TIFF/JPEG, please contact me at [email protected] for
  the latest word.
 
Software for reading and writing CCITT Group 3 and 4 images is
also available in directory merry.cs.monash.edu.au:/pub/alanf/TIFF_FAX
(130.194.67.101). Contact: Alan Finlay <[email protected]>.
 
 
See also question 54 below.
 
------------------------------------------------------------------------------
 
Subject: [19] What is JPEG?
 
 
JPEG (pronounced "jay-peg") is a standardized image compression mechanism.
JPEG stands for Joint Photographic Experts Group, the original name of the
committee that wrote the standard.  JPEG is designed for compressing either
full-color or gray-scale digital images of "natural", real-world scenes.
It does not work so well on non-realistic images, such as cartoons or line
drawings.
 
JPEG does not handle black-and-white (1-bit-per-pixel) images, nor does it
handle motion picture compression.  Standards for compressing those types
of images are being worked on by other committees, named JBIG and MPEG
respectively.
 
Regular JPEG is "lossy", meaning that the image you get out of decompression
isn't quite identical to what you originally put in.  The algorithm achieves
much of its compression by exploiting known limitations of the human eye,
notably the fact that small color details aren't perceived as well as small
details of light-and-dark.  Thus, JPEG is intended for compressing images that
will be looked at by humans.  If you plan to machine-analyze your images, the
small errors introduced by JPEG may be a problem for you, even if they are
invisible to the eye.  The JPEG standard includes a separate lossless mode,
but it is not widely used and does not give nearly as much compression as the
lossy mode.
 
Question 75 "Introduction to JPEG" (in part 2 of this FAQ) gives an overview
of how JPEG works and provides references for further reading.  Also see the
JPEG FAQ article, which covers JPEG software and usage hints.  The JPEG FAQ is
posted regularly in news.answers by Tom Lane <[email protected]>.  (See question
53 "Where are FAQ lists archived" if this posting has expired at your site.)
 
For JPEG software, see item 15 above.
For JPEG hardware, see item 85 in part 3 of this FAQ.
 
------------------------------------------------------------------------------
 
Subject: [20] I am looking for source of an H.261 codec and MPEG
 
 
The H.261 spec is available on src.doc.ic.ac.uk in
/computing/ccitt/standards/ccitt/1992/h/h261.doc.Z (or h261.rtf.Z).
 
 
For H.261 hardware, see item 85 in part 3 of this FAQ.
 
from Thierry TURLETTI <[email protected]>:
 
     IVS (INRIA VIDEOCONFERENCING SYSTEM)
      - X11-based videoconferencing tool for SPARC, HP,  DEC  and
     Silicon Graphic workstations.
 
     ivs allows users  to  conduct  multi-host  audio  and  video
     conferences  over  the  Internet. ivs requires a workstation
     with a screen with 1, 4, 8 or  24  bits  depth.   Multi-host
     conferences  require  that  the  kernel support multicast IP
     extensions (RFC 1112).
 
     On video input, video frames are grabbed by the VideoPix,
     SunVideo or Parallax boards for SparcStations, the Raster Rops
     board for HP stations, the IndigoVideo board for SGI IRIS
     Indigo workstations, or the VIDEOTX board for DEC stations.
     No special hardware apart from the workstation's built-in
     audio hardware is required for audio conferences.
 
     Video encoding is done according to the H.261 standard.
     The video stream can be encoded in either Super CIF 
     (704x576 pixels) format or  CIF  (352x288  pixels) format or 
     QCIF (176x144 pixels). Default format is CIF.
 
     Sources, binaries & manuals are freely available by anonymous 
     ftp from zenon.inria.fr in the rodeo/ivs directory. An INRIA
     report describing this application is also available in the 
     same directory.
 
     If you ftp & use this package, please send all remarks or 
     modifications made to <[email protected]>. If you want 
     to be added or deleted to the ivs-users mailing list, please send 
     e-mail to [email protected].
 
 
from Andy Hung <[email protected]>:
 
Public domain UNIX C source code to do both image and image sequence
compression and decompression is available by anonymous ftp:
 
MPEG-I			havefun.stanford.edu:pub/mpeg/MPEGv1.2.alpha.tar.Z
CCITT H.261(P*64)	havefun.stanford.edu:pub/p64/P64v1.2.alpha.tar.Z
JPEG			havefun.stanford.edu:pub/jpeg/JPEGv1.2.beta.tar.Z
 
These codecs operate on raw raster scanned images.
 
A software program to display raw raster-scanned YUV images and image
sequences on X grayscale or color monitors is available by anonymous
ftp in havefun.stanford.edu:pub/cv/CVv1.1.tar.Z.
If you are using the codecs above, we recommend that you ftp this file
over as well.
 
The source code has been compiled on DEC and SUN workstations.
Caution: the P64 codec has not been tested compliant (any available
p64 video streams would be much appreciated - please let us know at
[email protected]).  The other codecs have been tested with
streams from other encoders.
 
We also have some IPB MPEG-I video coded streams in pub/mpeg/*.mpg;
and P64 video streams in pub/p64/*.p64 that we have generated using
our codecs.
 
For a more complete description see the file
havefun.stanford.edu:pub/README.
 
------------------------------------------------------------------------------
 
Subject: [25] Fast DCT (Discrete Cosine Transform) algorithms
 
 
Many image compression methods, including the JPEG, MPEG, and H.261 standards,
are based on the discrete cosine transform.  A good overall introduction to
DCT is the book "Discrete Cosine Transform---Algorithms, Advantages,
Applications" by K.R. Rao and P. Yip (Academic Press, London, 1990),
ISBN 0-12-580203-X. This has an extensive, though already dated, bibliography.
 
Here are some newer references provided by Tom Lane <[email protected]>.
Most of these are in IEEE journals or conference proceedings, notably
ICASSP = IEEE Intl. Conf. on Acoustics, Speech, and Signal Processing.
ICCAS = IEEE Intl. Conf. on Circuits and Systems.
DCC = Data Compression Conference.
 
Polynomial Transform Computation of the 2-D DCT, Duhamel & Guillemot,
  ICASSP '90 p. 1515.
A Forward-Mapping Realization of the Inverse DCT, McMillan & Westover,
  DCC '92 p. 219.
A Fast Algorithm for 2-D DCT, Cho, Yun & Lee, ICASSP '91 p. 2197.
Fast Algorithm and Implementation of 2-D DCT, Cho & Lee, Tr. CAS v38 p. 297.
A DCT Chip based on a new Structured and Computationally Efficient DCT
  Algorithm, Duhamel, Guillemot & Carlach, ICCAS '90 p. 77.
Trade-offs in the Computation of Mono- and Multi-dimensional DCTs,
  Vetterli, Duhamel & Guillemot, ICASSP '89 p. 999.
Practical Fast 1-D DCT Algorithms with 11 Multiplications,
  Loeffler, Ligtenberg & Moschytz, ICASSP '89 p. 988.
New Scaled DCT Algorithms for Fused Multiply/Add Architectures,
  Linzer & Feig, ICASSP '91 p. 2201.
Fast Algorithms for the 2-D Discrete Cosine Transform, Kamangar & Rao,
  IEEE Tr. Computers, v C-31 p. 899.
Fast 2-D Discrete Cosine Transform, Vetterli, ICASSP '85 p. 1538.
A Two-Dimensional Fast Cosine Transform, Haque, Tr. ASSP v ASSP-33 p. 1532.
Real-Time Parallel and Fully Pipelined 2-D DCT Lattice Structures with
  Application to HDTV Systems, Chiu & Liu, Tr. CAS for Video Tech, v 2 p. 25.
J.F. Blinn, "What's the Deal with the DCT", IEEE Computer Graphics and
  Applications, July 1993, pp.78-83.
 
The free JPEG code (jpegsrc.v5.tar.Z) has one of the fastest implementations
of the DCT code.  It's all in the files jfwddct.c and jrevdct.c (which do
the dct and idct, respectively). See item 15 for ftp locations.
 
------------------------------------------------------------------------------
 
Subject: [26] Are there algorithms and standards for audio compression?
 
 
Yes. See the introduction to MPEG given in part 2 of this FAQ.
 
A lossless compressor for 8bit and 16bit audio data (.au) is available by
anonymous ftp at svr-ftp.eng.cam.ac.uk:/comp.speech/sources/shorten.tar.Z.
(An MSDOS executable used to be in shn109.exe but has disappeared.)
Shorten works by using Huffman coding of prediction residuals.
Compression is generally better than that obtained by applying general
purpose compression utilities to audio files. Also supports lossy
compression.  Contact: Tony Robinson <[email protected]>.
 
Audio software is available on sunsite.unc.edu in subdirectories of
/pub/electronic-publications/IUMA/audio_utils:
- An MPEG audio player is in mpeg_players/Workstations/maplay1_2.tar.Z.
- The sources of the XING MPEG audio player for Windows is in
  mpeg_players/Windows/mpgaudio.zip.
- An encoder/decoder is in converters/source/mpegaudio.tar.Z.
 
 
Copied from the comp.dsp FAQ posted by [email protected] (Guido van Rossum):
 
  Strange though it seems, audio data is remarkably hard to compress
  effectively.  For 8-bit data, a Huffman encoding of the deltas between
  successive samples is relatively successful.  For 16-bit data,
  companies like Sony and Philips have spent millions to develop
  proprietary schemes.
 
  Public standards for voice compression are slowly gaining popularity,
  e.g. CCITT G.721 and G.723 (ADPCM at 32 and 24 kbits/sec).  (ADPCM ==
  Adaptive Differential Pulse Code Modulation.)  Free source code for a *fast*
  32 kbits/sec ADPCM (lossy) algorithm is available by ftp from ftp.cwi.nl
  as /pub/adpcm.shar.  (** NOTE: if you are using v1.0, you should get
  v1.1, released 17-Dec-1992, which fixes a serious bug -- the quality
  of v1.1 is claimed to be better than uLAW **)
 
  (Note that U-LAW and silence detection can also be considered
  compression schemes.)
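 
To make the "code the deltas" idea quoted above concrete, here is a small
illustrative C sketch (the sample values are made up, and this is not
Shorten's actual predictor): first-order differences of neighbouring
samples cluster around zero, which is what makes them cheap to
Huffman-code afterwards.

#include <stdio.h>

int main(void)
{
    /* made-up 8-bit samples of a slowly varying waveform */
    unsigned char samples[] = {128, 130, 133, 133, 131, 128, 126, 125};
    int n = sizeof(samples) / sizeof(samples[0]);
    int i, prev = samples[0];

    printf("first sample: %d\ndeltas:", prev);
    for (i = 1; i < n; i++) {
        /* the residuals are small integers near zero, so a Huffman
         * coder spends far fewer bits on them than on raw samples */
        printf(" %d", samples[i] - prev);
        prev = samples[i];
    }
    printf("\n");
    return 0;
}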
 
 
You can get a G.721/722/723 package by email to [email protected], with
 
GET ITU-3022
 
as the *only* line in the body of the message.
 
 
A note on u-law from Markus Kuhn <[email protected]>:
 
  u-law (more precisely (greek mu)-law, or µ-law if you have an 8-bit
  ISO terminal) is more an encoding than a compression method,
  although a 12 to 8 bit reduction is normally part of the encoding.
  The official definition is CCITT recommendation G.711. If you want
  to know how to get CCITT documents, check the Standards FAQ
  posted to news.answers or get the file standards-faq by ftp in
  directory rtfm.mit.edu:/pub/usenet/news.answers.
 
 
See also the comp.dsp FAQ for more information on:
 
- The U.S. DoD's Federal-Standard-1016 based 4800 bps code excited linear
  prediction voice coder version 3.2a (CELP 3.2a)
- The U.S. DoD's Federal-Standard-1015/NATO-STANAG-4198 based 2400 bps
  linear prediction coder version 53 (LPC-10e v53)
- Realtime DSP code and hardware for FS-1015 and FS-1016
 
You can find the comp.dsp FAQ in comp.dsp or news.answers with subject:
"FAQ: Audio File Formats" or by ftp on rtfm.mit.edu
in /pub/usenet/news.answers/audio-fmts/part1.
 
 
CELP C code for Sun SPARCs is available for anonymous ftp at
furmint.nectar.cs.cmu.edu, in directory celp.audio.compression.
Version 3.2a is also in super.org:/pub/celp_3.2a.tar.Z.
 
 
Recommended reading:
  Digital Coding of Waveforms: Principles and Applications to Speech and
  Video.  N. S. Jayant and Peter Noll.  Prentice-Hall, 1984, ISBN
  0-13-211913-7.
 
 
from Markus Kuhn <[email protected]>:
 
  One of the highest quality sound compression formats is called ASPEC; it
  has been developed by a team at the Fraunhofer Institut in Erlangen
  (Germany) and others.
 
  ASPEC produces CD like quality and offers several bitrates, one is
  128 kbit/s. It is a lossy algorithm that throws away frequencies that
  aren't registered in the human cochlea in addition to sophisticated
  entropy coding. The 64 kbit/s ASPEC variant might soon bring hifi
  quality ISDN phone connections. It has been implemented on standard DSPs.
 
  The Layer 3 MPEG audio compression standard now contains what is officially
  called the best parts of the ASPEC and MUSICAM algorithms. A reference is:
 
    K.Brandenburg, G.Stoll, Y.F.Dehery, J.D.Johnston, L.v.d.Kerkhof,
    E.F.Schroeder: "The ISO/MPEG-Audio Codec: A Generic Standard for Coding
    of High Quality Digital Audio",
    92nd. AES-convention, Vienna 1992, preprint 3336
 
 
from Jutta Degener <[email protected]> and Carsten Bormann
<[email protected]>:
 
  GSM 06.10 13 kbit/s RPE/LTP speech compression available
  --------------------------------------------------------
 
  The Communications and Operating Systems Research Group (KBS) at the
  Technische Universitaet Berlin is currently working on a set of
  UNIX-based tools for computer-mediated telecooperation that will be
  made freely available.
 
  As part of this effort we are publishing an implementation of the
  European GSM 06.10 provisional standard for full-rate speech
  transcoding, prI-ETS 300 036, which uses RPE/LTP (residual pulse
  excitation/long term prediction) coding at 13 kbit/s.
 
  GSM 06.10 compresses frames of 160 13-bit samples (8 kHz sampling
  rate, i.e. a frame rate of 50 Hz) into 260 bits; for compatibility
  with typical UNIX applications, our implementation turns frames of 160
  16-bit linear samples into 33-byte frames (1650 Bytes/s).
  The quality of the algorithm is good enough for reliable speaker
  recognition; even music often survives transcoding in recognizable 
  form (given the bandwidth limitations of 8 kHz sampling rate).
 
  Version 1.0 of the implementation is available per anonymous ftp from
  tub.cs.tu-berlin.de as /pub/tubmik/gsm-1.0.tar.Z.  Questions and bug
  reports should be directed to [email protected].  
  Note that the distribution is not available via E-mail (please use one
  of the ftp-via-E-mail servers).
 
 
from Bob Kimball <[email protected]>:
 
  I work for Qualcomm Inc. and we are designing a digital cellular telephone
  system.  Our phone uses our variable rate vocoder (QCELP) which is designed
  for speech and compresses 64Kb/s speech to 8Kb/s through 1Kb/s with 8Kb/s
  being full rate and 1Kb/s for 1/8 rate speech.  It works great for speech.
 
  The QCELP process is documented in our Common Air Interface (CAI) which is
  available for anonymous ftp from lorien.qualcomm.com in /pub/cdma;
  each chapter is a PostScript file.  The vocoder is described in appendix A.
  The whole document is quite large.  This is the document which is currently
  going through the TIA standard committee so it is not a final version.  The
  appendix on the vocoder should be almost identical to the final version...
  whenever that comes out.
 
 
from Nicola Ferioli <[email protected]>:
 
  oak.oakland.edu:/pub/msdos/sound/vocpak20.zip
    Lossless 8-bit sound file compressor
 
  VOCPACK is a compressor/decompressor for 8-bit digital sound using a
  lossless algorithm; it is useful to save disk space without degrading
  sound quality.  It can compress signed and unsigned data, sampled at any
  rate, mono or stereo.  Since the method used is not lossy, it isn't
  necessary to strip file headers before compressing.
 
  VOCPACK was developed for use with .VOC (SoundBlaster) and .WAV (Windows)
  files, but any 8-bit sound can be compressed since the program makes no
  assumptions about the file structure.
 
  The typical compression ratio obtained goes from 0.8 for files sampled at
  11 kHz to 0.4 for 44 kHz files.  The best results are obtained with 44 kHz
  sounds (mono or stereo): general-purpose archivers create files that can be
  twice as long as the output of VOCPACK.  You can obtain smaller values
  using lossy compressors but if your goal is to keep the sound quality
  unaltered you should use a lossless program like VOCPACK.
 
from Harald Popp <[email protected]>:
 
  new version 1.0 of ISO/MPEG1 Audio Layer 3 Shareware available
 
  major improvements of the new version:
       - encoder works twice as fast
       - improved file handling for encoder including .WAV files
 
  You may download the shareware from fhginfo.fhg.de (153.96.1.4)
  from the directory /pub/layer3
 
  The source code for the MPEG1 audio decoder layer 1, 2 and 3 is
  now available on fhginfo.fhg.de (153.96.1.4) in /pub/layer3/public_c.
 
  There are two files:
      mpeg1_iis.tar.Z     (Unix: lines separated by line feed only)
      mpeg1iis.zip        (PC: lines separated by carriage return and line feed)
 
from Monty <[email protected]>:
 
  A *beta* release of the SQUISH audio compression/decompression utility is
  now available via anonymous ftp from DeskFish.mit.edu in the
  /pub/compression_apps directory.  Binaries are available for Linux, Dec
  Ultrix, RS6k AIX and Sun4.
 
  SQUISH is a compression package designed to reduce the file size of
  digitized 8 and 16 bit audio samples (or samples of any periodic
  data).  SQUISH will operate on files sampled at any speed, but it is
  designed to work with very high quality samples, for example, CD
  quality samples.
 
------------------------------------------------------------------------------
 
Subject: [30] My archive is corrupted!
 
 
The two most common reasons for this are
 
(1) failing to use the magic word "tenex" (when connected to SIMTEL20 and
    other TOPS20 systems) or "binary" (when connected to UNIX systems) when
    transferring the file from an ftp site to your host machine.  The
    reasons for this are technical and boring.  A synonym for "tenex" is
    "type L 8", in case your ftp doesn't know what "tenex" means.
 
(2) failing to use an eight-bit binary transfer protocol when transferring
    the file from the host to your PC.  Make sure to set the transfer type
    to "binary" on both your host machine and your PC.
 
gopher is also known to corrupt binary files. In particular, if gzip
complains about a multi-part file, it's likely that the .gz file
has been corrupted by gopher. Use ftp in binary mode instead.
 
------------------------------------------------------------------------------
 
Subject: [31] pkunzip reports a CRC error!
 
 
The portable zip 1.1 contains many workarounds for undocumented restrictions
in pkunzip. Compatibility is ensured for pkunzip 1.10 only. All previous
versions (pkunzip 1.0x) have too many bugs and cannot be supported. This
includes Borland unzip.
 
So if your pkunzip reports a CRC error, check that you are not using
an obsolete version. Get either pkzip 2.04g or unzip 5.12 (see question
2 above for ftp sites). To generate zip files compatible with pkunzip 1.10,
use zip 1.1 (see item 2 above for ftp site).
 
------------------------------------------------------------------------------
 
Subject: [32] VMS zip is not compatible with pkzip!
 
 
The problem is most likely in the file transfer program.
 
Many use kermit to transfer zipped files between PC and VMS VAX.  The
following VMS kermit settings make VMS-ZIP compatible with PKZIP:
 
                                             VMS kermit        PC kermit
                                           ---------------   --------------
 
Uploading PKZIPped file to be UNZIPped:    set fi ty fixed    set fi ty bi
Downloading ZIPped file to be PKUNZIPped:  set fi ty block    set fi ty bi
 
If you are not using kermit, transfer a file created by pkzip on MSDOS
to VMS, transfer it back to your PC and check that pkunzip can extract it.
 
------------------------------------------------------------------------------
 
Subject: [33] I have a problem with Stacker or DoubleSpace!
 
 
The newsgroup comp.compression is *not* the appropriate place to
discuss one specific program on one specific operating system.
Since you have bought a legal copy of Stacker or MSDOS 6.x, you have
the documentation of your product; please read it. If you can't find
the answer in the documentation, please report the problem to the Stac
or Microsoft customer support. (For Stac: [email protected] or
[email protected].)  If you really feel that the net has to know
about your problem, please post in one of the MSDOS newsgroups, such
as comp.os.msdos.apps or comp.binaries.ibm.pc.d.
 
------------------------------------------------------------------------------
 
Subject: [50] What is this 'tar' compression program?
 
 
tar is not a compression program. It just combines several files
into one, without compressing them. tar files are often compressed with
'compress', resulting in a .tar.Z file. See question 2, file type .tar.Z.
GNU tar has the capability to (de)compress files as well.
 
When you have to archive a lot of very small files, it is often
preferable to create a single .tar file and compress it, rather than to
compress the individual files separately. The compression program can
thus take advantage of redundancy between separate files.  The
disadvantage is that you must uncompress the whole .tar file to
extract any member. You can also improve compression by grouping
files by type, as in:
 
  tar cvf - `ls | sort -t. +1` | gzip > file.tar.gz
 
------------------------------------------------------------------------------
 
Subject: [51] I need a CRC algorithm
 
 
As its name implies (Cyclic Redundancy Check), a CRC adds redundancy, whereas
the topic of this group is to remove it. Yet this question comes up often in
comp.compression.
 
The file ftp.adelaide.edu.au:/pub/rocksoft/crc_v3.txt is a pretty comprehensive
description of the whole CRC concept, including a C program.
 
See also:
- Schwaderer W.D., "CRC Calculation", April 85 PC Tech Journal, pp.118-133.
- "Calculating CRCs by Bits and Bytes", BYTE Magazine, September 1986
- Ramabadran T.V., Gaitonde S.S., "A tutorial on CRC computations", IEEE
  Micro, Aug 1988.
- ftp.uni-erlangen.de:/pub/doc/ISO/english/async-HDLC
- the source of all archivers, such as the file makecrc.c in the sources of
  zip 2.0.1 (see item 2).
 
 
The following C code (by Rob Warnock <[email protected]>) does CRC-32 in
BigEndian/BigEndian byte/bit order.  That is, the data is sent most
significant byte first, and each of the bits within a byte is sent most
significant bit first, as in FDDI. You will need to twiddle with it to do
Ethernet CRC, i.e., BigEndian/LittleEndian byte/bit order. [Left as an
exercise for the reader.]
 
The CRCs this code generates agree with the vendor-supplied Verilog models
of several of the popular FDDI "MAC" chips.
 
#include <sys/types.h>    /* for the BSD u_char and u_long typedefs */

/* Note: this code assumes u_long is exactly 32 bits wide, as it was on
 * the systems of the day; on a platform with 64-bit longs you would have
 * to mask intermediate values with 0xffffffff (or use a 32-bit type).
 */
u_long crc32_table[256];
/* Initialized first time "crc32()" is called. If you prefer, you can
 * statically initialize it at compile time. [Another exercise.]
 */
 
u_long crc32(u_char *buf, int len)
{
        u_char *p;
        u_long  crc;
 
        if (!crc32_table[1])    /* if not already done, */
                init_crc32();   /* build table */
        crc = 0xffffffff;       /* preload shift register, per CRC-32 spec */
        for (p = buf; len > 0; ++p, --len)
                crc = (crc << 8) ^ crc32_table[(crc >> 24) ^ *p];
        return ~crc;            /* transmit complement, per CRC-32 spec */
}
 
/*
 * Build auxiliary table for parallel byte-at-a-time CRC-32.
 */
#define CRC32_POLY 0x04c11db7     /* AUTODIN II, Ethernet, & FDDI */
 
init_crc32()
{
        int i, j;
        u_long c;
 
        for (i = 0; i < 256; ++i) {
                for (c = i << 24, j = 8; j > 0; --j)
                        c = c & 0x80000000 ? (c << 1) ^ CRC32_POLY : (c << 1);
                crc32_table[i] = c;
        }
}
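 
A minimal usage sketch for the routines above (this main() is not part of
Rob Warnock's original posting; compile it in the same file, after the
code above):

#include <stdio.h>
#include <string.h>

int main(void)
{
    u_char msg[] = "123456789";

    /* CRC over the 9 data bytes; the trailing NUL is not included */
    u_long crc = crc32(msg, (int)strlen((char *)msg));

    printf("CRC-32 (BigEndian/BigEndian) = %08lx\n", (unsigned long)crc);
    return 0;
}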
 
------------------------------------------------------------------------------
 
Subject: [52] What about those people who continue to ask frequently asked
              questions in spite of the frequently asked questions document?
 
 
Just send them a polite mail message, referring them to this document.
There is no need to flame them on comp.compression.  That would just
add more noise to this group.  Posted answers that are in the FAQ are
just as annoying as posted questions that are in the FAQ.
 
------------------------------------------------------------------------------
 
Subject: [53] Where are FAQ lists archived?
 
 
Many are crossposted to news.answers.  That newsgroup should have a
long expiry time at your site; if not, talk to your sysadmin.
 
FAQ lists are available by anonymous FTP from rtfm.mit.edu.
The comp.compression FAQ that you are reading is in directory
   /pub/usenet/news.answers/compression-faq
 
This FAQ is also accessible in the World Wide Web at
http://www.cis.ohio-state.edu/hypertext/faq/usenet/compression-faq/top.html.
 
If you don't have FTP access, you can access the archives by mail
server.  Send an email message to [email protected]
containing the commands
    send usenet/news.answers/compression-faq/part1
    send usenet/news.answers/compression-faq/part2
    send usenet/news.answers/compression-faq/part3
For instructions, send an email message to the same address with the
words "help" and "index" (no quotes) on separate lines. If you don't
get a reply, check your return address, or add a line such as
    path [email protected]
 
------------------------------------------------------------------------------
 
Subject: [54] I need specs for graphics formats
 
Get the book by Murray & vanRyper "Encyclopedia of graphics file formats",
O'Reilly & associates, ISBN 1-56592-058-9. Or have a look in directory
/pub/graphics.formats on zamenhof.cs.rice.edu; it contains descriptions of
gif, tiff, fits, etc...
 
See also the FAQ list for comp.graphics. See item 53 for an ftp site.
 
------------------------------------------------------------------------------
 
Subject: [55] Where can I find Lenna and other images?
 
 
A bunch of standard images (lenna, baboon, cameraman, crowd, moon
etc..) are on ftp site eedsp.gatech.edu (130.207.226.2) in directory
/database/images. The images are in 256-level grayshades (256x256
pixels, 256 "colors").
 
[Note: the site ipl.rpi.edu mentioned below keeps changing. Images
stay there for a while then disappear. They are again available at
the time of writing (27 Dec 93).]
 
The site ipl.rpi.edu (128.113.14.50) has standard images in two
directories:
   ipl.rpi.edu:/pub/image/still/usc 
   ipl.rpi.edu:/pub/image/still/canon
 
(The directory /pub/image/sequence was taken offline because of
possible copyright problems, but has come back again. In particular,
Miss America is in subdirectories of /pub/image/sequence/missa.)
 
In each of those directories are the following directories:
   bgr     - 24 bit blue, green, red
   color   - 24 bit red, green, blue
   gray    - 8 bit grayscale uniform weighted
   gray601 - 8 bit grayscale CCIR-601 weighted
 
And in these directories are the actual images.  
 
For example, the popular lena image is in
   ipl.rpi.edu:/pub/image/still/usc/color/lena  # 24 bit RGB
   ipl.rpi.edu:/pub/image/still/usc/bgr/lena    # 24 bit BGR
   ipl.rpi.edu:/pub/image/still/usc/gray/lena   # 8 bit gray
 
All of the images are in Sun rasterfile format.  You can use the pbm
utilities to convert them to whatever format is most convenient.
[pbm is available in ftp.ee.lbl.gov:/pbmplus*.tar.Z].
Questions about the ipl archive should be sent to [email protected].
 
 
There are few gray-scale still images and some raw data of test results
available in directory nic.funet.fi:/pub/graphics/misc/test-images.
There are lots of .gif images in nic.funet.fi:/pub/pics.
 
Medical images can be found in decaf.stanford.edu:/pub/images/medical/mri,
eedsp.gatech.edu:/database/images/wchung/medical, and
omicron.cs.unc.edu:/pub/softlab/CHVRTD.
 
 
Rodney Peck <[email protected]> is interested in some method
of establishing a canonical ftp database of images but does not have
the resources to provide an ftp site for that database. Send suggestions to
[email protected].
 
 
Beware: the same image often comes in many different forms, at
different resolutions, etc... The original lenna image is 512 wide,
512 high, 8 bits per pel, red, green and blue fields.  Gray-scale
versions of Lenna have been obtained in two different ways from the
original:
 (1) Using the green field as a gray-scale image, and
 (2) Doing an RGB->YUV transformation and saving the Y component.
Method (1) makes it easier to compare different people's results since
everyone's version should be the same using that method.  Method (2)
produces a more correct image.
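 
As a concrete note on method (2): the Y component in question is the
CCIR-601 luma, Y = 0.299 R + 0.587 G + 0.114 B.  A minimal sketch
(rounding and gamma handling are simplified away):

#include <stdio.h>

/* CCIR-601 luma weighting of an RGB pixel */
unsigned char rgb_to_y601(unsigned char r, unsigned char g, unsigned char b)
{
    double y = 0.299 * r + 0.587 * g + 0.114 * b;
    return (unsigned char)(y + 0.5);        /* round to nearest */
}

int main(void)
{
    printf("Y of pure green (0,255,0) = %d\n", rgb_to_y601(0, 255, 0));
    return 0;
}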
 
For the curious: 'lena' or 'lenna' is a digitized Playboy centerfold,
from November 1972. (Lenna is the spelling in Playboy, Lena is the
Swedish spelling of the name.) Lena Soderberg (née Sjooblom) was last
reported living in her native Sweden, happily married with three kids
and a job with the state liquor monopoly.  In 1988, she was
interviewed by some Swedish computer-related publication, and she was
pleasantly amused by what had happened to her picture.  That was the
first she knew of the use of that picture in the computer business.
 
The editorial in the January 1992 issue of Optical Engineering (v. 31
no. 1) details how Playboy has finally caught on to the fact that
their copyright on Lenna Sjooblom's photo is being widely infringed.
It sounds as if you will have to get permission from Playboy to
publish it in the future.
 
The CCITT test images are available on nic.funet.fi in directory
pub/graphics/misc/test-images, files ccitt1.tif to ccitt8.tif.
However Stuart Inglis <[email protected]> indicates that
there are problems with these images, and provides alternate versions
in http://www.cs.waikato.ac.nz/~singlis/ccitt.html.
 
 
Note on the CCITT test images, by Robert Estes <[email protected]>:
 
The ccitt files are in ipl.rpi.edu:/image-archive/bitmap/ccitt
(128.113.14.50). [Note from FAQ maintainer: this directory has
now disappeared; ipl.rpi.edu is a very volatile ftp site :-).]
They are named ccitt-n.ras.Z where n goes from 1 to 8.
Each file has an accompanying doc file called ccitt-n.ras.doc which
describes the image file. Here's the doc file for ccitt-1.ras:
 
Name ccitt-1.ras
Size 1728 x 2376 x 1
Type 1 bit standard format sun rasterfile 
Keywords binary standard image 1 bit fax
Description
One of eight images from the standard binary CCITT test image set.
 
This set is commonly used to compare binary image compression
techniques. The images are 1728x2376 pixels.
 
------------------------------------------------------------------------------
 
Subject: [56] I am looking for a message digest algorithm
 
 
Look on the ftp site rsa.com, in directory /pub. MD4 and MD5 are there.
This question would be more appropriate on sci.crypt.
 
 
 
          End of part 1 of the comp.compression faq.

Article: 1432
Path: nntpd.lkg.dec.com!crl.dec.com!crl.dec.com!decwrl!hookup!swrinde!howland.reston.ans.net!EU.net!julienas!chorus!chorus.fr
From: [email protected] (Jean-loup Gailly)
Newsgroups: comp.compression,comp.compression.research,news.answers,comp.answers
Subject: comp.compression Frequently Asked Questions (part 2/3)
Summary: *** READ THIS BEFORE POSTING ***
Keywords: data compression, FAQ
Message-ID: <[email protected]>
Date: 12 Oct 94 16:50:10 GMT
Expires: 30 Nov 94 16:17:20 GMT
References: <[email protected]>
Sender: [email protected]
Reply-To: [email protected]
Followup-To: comp.compression
Lines: 1788
Approved: [email protected]
Supersedes: <[email protected]>
Xref: nntpd.lkg.dec.com comp.compression:15048 comp.compression.research:1432 news.answers:30149 comp.answers:7743
 
Archive-name: compression-faq/part2
Last-modified: Sept 15th, 1994
 
This file is part 2 of a set of Frequently Asked Questions for the
groups comp.compression and comp.compression.research.
If you did not get part 1 or 3, you can get them by ftp
on rtfm.mit.edu in directory
   /pub/usenet/news.answers/compression-faq
 
If you don't want to see this FAQ regularly, please add the subject
line to your kill file. If you have corrections or suggestions for
this FAQ, send them to Jean-loup Gailly <[email protected]>.  Thank you.
 
Contents
========
 
Part 2: (Long) introductions to data compression techniques
 
[70] Introduction to data compression (long)
       Huffman and Related Compression Techniques
       Arithmetic Coding
       Substitutional Compressors
          The LZ78 family of compressors
          The LZ77 family of compressors
 
[71] Introduction to MPEG (long)
       What is MPEG?
       Does it have anything to do with JPEG?
       Then what's JBIG and MHEG?
       What has MPEG accomplished?
       So how does MPEG I work?
       What about the audio compression?
       So how much does it compress?
       What's phase II?
       When will all this be finished?
       How do I join MPEG?
       How do I get the documents, like the MPEG I draft?
 
[72] What is wavelet theory?
[73] What is the theoretical compression limit?
[74] Introduction to JBIG
[75] Introduction to JPEG
[76] What is Vector Quantization?
[77] Introduction to Fractal compression
 
Part 3: (Long) list of image compression hardware
 
[85] Image compression hardware
[99] Acknowledgments
 
 
Search for "Subject: [#]" to get to question number # quickly. Some news
readers can also take advantage of the message digest format used here.
 
------------------------------------------------------------------------------
 
Subject: [70] Introduction to data compression (long)
 
 
Written by Peter Gutmann <[email protected]>.
 
 Huffman and Related Compression Techniques
 ------------------------------------------
 
  *Huffman compression* is a statistical data compression technique which 
gives a reduction in the average code length used to represent the symbols of 
an alphabet.  The Huffman code is an example of a code which is optimal in the 
case where all symbol probabilities are integral powers of 1/2.  A Huffman 
code can be built in the following manner:
 
  (1) Rank all symbols in order of probability of occurrence.
    
  (2) Successively combine the two symbols of the lowest probability to form
      a new composite symbol; eventually we will build a binary tree where
      each node is the probability of all nodes beneath it.
 
  (3) Trace a path to each leaf, noticing the direction at each node.
 
  For a given frequency distribution, there are many possible Huffman codes,
but the total compressed length will be the same. It is possible to
define a 'canonical' Huffman tree, that is, pick one of these alternative
trees. Such a canonical tree can then be represented very compactly, by
transmitting only the bit length of each code. This technique is used
in most archivers (pkzip, lha, zoo, arj, ...).
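 
  As a concrete illustration of the "transmit only the bit lengths" trick,
here is a minimal C sketch that rebuilds a canonical code from the code
lengths alone, using the convention of RFC 1951 (the deflate format used
by zip and gzip); the four-symbol alphabet and its lengths are just an
example:

#include <stdio.h>

#define MAX_BITS 15

/* Assign canonical codes given the bit length of each symbol's code. */
void canonical_codes(const int *len, int nsym, unsigned *code)
{
    int bl_count[MAX_BITS + 1] = {0};
    unsigned next_code[MAX_BITS + 1];
    unsigned c = 0;
    int bits, n;

    for (n = 0; n < nsym; n++)            /* count codes of each length     */
        bl_count[len[n]]++;
    bl_count[0] = 0;

    for (bits = 1; bits <= MAX_BITS; bits++) {  /* smallest code per length */
        c = (c + bl_count[bits - 1]) << 1;
        next_code[bits] = c;
    }
    for (n = 0; n < nsym; n++)            /* assign codes in symbol order   */
        if (len[n] != 0)
            code[n] = next_code[len[n]]++;
}

int main(void)
{
    /* code lengths for symbols A..D, e.g. probabilities 1/2, 1/4, 1/8, 1/8 */
    int len[4] = {1, 2, 3, 3};
    unsigned code[4];
    int i;

    canonical_codes(len, 4, code);
    for (i = 0; i < 4; i++)
        printf("symbol %c: length %d, code %u\n", 'A' + i, len[i], code[i]);
    return 0;
}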
 
 
  A technique related to Huffman coding is *Shannon-Fano coding*, which
works as follows:
 
  (1) Divide the set of symbols into two equal or almost equal subsets
      based on the probability of occurrence of characters in each
      subset.  The first subset is assigned a binary zero, the second
      a binary one.
 
  (2) Repeat step (1) until all subsets have a single element.
 
The algorithm used to create the Huffman codes is bottom-up, and the
one for the Shannon-Fano codes is top-down. Huffman encoding always
generates optimal codes, Shannon-Fano sometimes uses a few more bits.
 
 
 Arithmetic Coding
 -----------------
 
  It would appear that Huffman or Shannon-Fano coding is the perfect
means of compressing data.  However, this is *not* the case.  As
mentioned above, these coding methods are optimal when and only when
the symbol probabilities are integral powers of 1/2, which is usually
not the case.
 
  The technique of *arithmetic coding* does not have this restriction:
It achieves the same effect as treating the message as one single unit
(a technique which would, for Huffman coding, require enumeration of
every single possible message), and thus attains the theoretical
entropy bound to compression efficiency for any source.
 
  Arithmetic coding works by representing a number by an interval of real 
numbers between 0 and 1.  As the message becomes longer, the interval needed 
to represent it becomes smaller and smaller, and the number of bits needed to 
specify that interval increases.  Successive symbols in the message reduce 
this interval in accordance with the probability of that symbol. The more
likely symbols reduce the range by less, and thus add fewer bits to the   
message.
 
     1                                             Codewords
    +-----------+-----------+-----------+           /-----\
    |           |8/9 YY     |  Detail   |<- 31/32    .11111
    |           +-----------+-----------+<- 15/16    .1111
    |    Y      |           | too small |<- 14/16    .1110
    |2/3        |    YX     | for text  |<- 6/8      .110
    +-----------+-----------+-----------+
    |           |           |16/27 XYY  |<- 10/16    .1010
    |           |           +-----------+
    |           |    XY     |           |
    |           |           |   XYX     |<- 4/8      .100
    |           |4/9        |           |
    |           +-----------+-----------+
    |           |           |           |
    |    X      |           |   XXY     |<- 3/8      .011
    |           |           |8/27       |
    |           |           +-----------+
    |           |    XX     |           |
    |           |           |           |<- 1/4      .01
    |           |           |   XXX     |
    |           |           |           |
    |0          |           |           |
    +-----------+-----------+-----------+
 
  As an example of arithmetic coding, let's consider the example of two
symbols X and Y, of probabilities 2/3 and 1/3. To encode a message, we
examine the first symbol: If it is an X, we choose the lower partition; if
it is a Y, we choose the upper partition.  Continuing in this manner for
three symbols, we get the codewords shown to the right of the diagram above
- they can be found by simply taking an appropriate location in the
interval for that particular set of symbols and turning it into a binary
fraction. In practice, it is also necessary to add a special end-of-data
symbol, which is not represented in this simple example.
        
  In this case the arithmetic code is not completely efficient, which is due 
to the shortness of the message - with longer messages the coding efficiency 
does indeed approach 100%.
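 
  The interval arithmetic behind the diagram can be reproduced in a few
lines of C.  This sketch (illustrative only; a real coder works with
integer arithmetic and an adaptive model) encodes the message XXY with
p(X) = 2/3 and then picks a short binary fraction inside the final
interval; it prints the codeword .011 from the table above.

#include <stdio.h>
#include <string.h>

int main(void)
{
    const char *msg = "XXY";          /* message to encode                  */
    double px = 2.0 / 3.0;            /* X occupies the lower part [0, 2/3) */
    double lo = 0.0, hi = 1.0;
    double v = 0.0, p = 0.5;
    size_t i;

    /* narrow the interval symbol by symbol */
    for (i = 0; i < strlen(msg); i++) {
        double range = hi - lo;
        if (msg[i] == 'X')
            hi = lo + range * px;     /* keep the lower partition */
        else
            lo = lo + range * px;     /* keep the upper partition */
    }
    printf("interval for %s: [%f, %f)\n", msg, lo, hi);

    /* emit bits until the binary fraction .b1b2... falls inside [lo, hi) */
    printf("codeword: .");
    while (v < lo) {
        if (v + p < hi) { v += p; printf("1"); }
        else            {         printf("0"); }
        p *= 0.5;
    }
    printf("\n");
    return 0;
}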
 
  Now that we have an efficient encoding technique, what can we do with it? 
What we need is a technique for building a model of the data which we can 
then use with the encoder.  The simplest model is a fixed one, for example a 
table of standard letter frequencies for English text which we can then use 
to get letter probabilities.  An improvement on this technique is to use an 
*adaptive model*, in other words a model which adjusts itself to the data 
which is being compressed as the data is compressed.  We can convert the 
fixed model into an adaptive one by adjusting the symbol frequencies after 
each new symbol is encoded, allowing the model to track the data being 
transmitted.  However, we can do much better than that.
 
Using the symbol probabilities by themselves is not a particularly good
estimate of the true entropy of the data: We can take into account
intersymbol probabilities as well.  The best compressors available today
take this approach: DMC (Dynamic Markov Coding) starts with a zero-order
Markov model and gradually extends this initial model as compression
progresses; PPM (Prediction by Partial Matching) looks for a match of the
text to be compressed in an order-n context.  If no match is found, it
drops to an order n-1 context, until it reaches order 0.  Both these
techniques thus obtain a much better model of the data to be compressed,
which, combined with the use of arithmetic coding, results in superior
compression performance.
 
  So if arithmetic coding-based compressors are so powerful, why are they not 
used universally?  Apart from the fact that they are relatively new and 
haven't come into general use too much yet, there is also one major concern:  
The fact that they consume rather large amounts of computing resources, both 
in terms of CPU power and memory.  The building of sophisticated models for 
the compression can chew through a fair amount of memory (especially in the 
case of DMC, where the model can grow without bounds); and the arithmetic 
coding itself involves a fair amount of number crunching.
There is however an alternative approach, a class of compressors generally 
referred to as *substitutional* or *dictionary-based compressors*.
 
 Substitutional Compressors
 --------------------------
 
  The basic idea behind a substitutional compressor is to replace an 
occurrence of a particular phrase or group of bytes in a piece of data with a 
reference to a previous occurrence of that phrase.  There are two main 
classes of schemes, named after Jakob Ziv and Abraham Lempel, who first 
proposed them in 1977 and 1978.
 
<The LZ78 family of compressors>
 
  LZ78-based schemes work by entering phrases into a *dictionary* and then, 
when a repeat occurrence of that particular phrase is found, outputting the 
dictionary index instead of the phrase.  There exist several compression 
algorithms based on this principle, differing mainly in the manner in which 
they manage the dictionary.  The most well-known scheme (in fact the most 
well-known of all the Lempel-Ziv compressors, the one which is generally (and 
mistakenly) referred to as "Lempel-Ziv Compression"), is Terry Welch's LZW 
scheme, which he designed in 1984 for implementation in hardware for
high-performance disk controllers.
 
Input string: /WED/WE/WEE/WEB
 
Character input:    Code output:    New code value and associated string:
    /W                  /                   256 = /W
    E                   W                   257 = WE
    D                   E                   258 = ED
    /                   D                   259 = D/
    WE                  256                 260 = /WE
    /                   E                   261 = E/
    WEE                 260                 262 = /WEE
    /W                  261                 263 = E/W
    EB                  257                 264 = WEB
    <END>               B
    
  LZW starts with a 4K dictionary, of which entries 0-255 refer to individual 
bytes, and entries 256-4095 refer to substrings.  Each time a new code is 
generated it means a new string has been parsed.  New strings are generated 
by appending the current character K to the end of an existing string w.  The 
algorithm for LZW compression is as follows:
 
  set w = NIL
  loop
      read a character K
      if wK exists in the dictionary
          w = wK
      else
          output the code for w
          add wK to the string table
          w = K
  endloop
 
  A sample run of LZW over a (highly redundant) input string can be seen in 
the diagram above.  The strings are built up character-by-character starting 
with a code value of 256.  LZW decompression takes the stream of codes and 
uses it to exactly recreate the original input data.  Just like the 
compression algorithm, the decompressor adds a new string to the dictionary 
each time it reads in a new code.  All it needs to do in addition is to 
translate each incoming code into a string and send it to the output.  A 
sample run of the LZW decompressor is shown in below.
 
Input code: /WED<256>E<260><261><257>B
 
Input code:        Output string:     New code value and associated string:
    /                  /            
    W                  W                      256 = /W
    E                  E                      257 = WE
    D                  D                      258 = ED
    256                /W                     259 = D/
    E                  E                      260 = /WE
    260                /WE                    261 = E/
    261                E/                     262 = /WEE
    257                WE                     263 = E/W
    B                  B                      264 = WEB
           
  The most remarkable feature of this type of compression is that the entire 
dictionary has been transmitted to the decoder without actually explicitly 
transmitting the dictionary.  At the end of the run, the decoder will have a 
dictionary identical to the one the encoder has, built up entirely as part of 
the decoding process.
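 
  To make the encoding loop above concrete, here is a minimal C sketch of
an LZW encoder (linear dictionary search, no code packing, dictionary never
reset -- all simplifications).  Run on the sample string it reproduces the
code sequence shown in the first table above.

#include <stdio.h>

#define DICT_SIZE 4096

static int prefix_of[DICT_SIZE];   /* code of the prefix string w       */
static int last_char[DICT_SIZE];   /* appended character K              */
static int next_code = 256;        /* first free multi-character code   */

/* return the code for the string wK if it is in the dictionary, else -1 */
static int find(int w, int K)
{
    int i;
    for (i = 256; i < next_code; i++)
        if (prefix_of[i] == w && last_char[i] == K)
            return i;
    return -1;
}

static void emit(int code)         /* print single chars as themselves */
{
    if (code < 256) printf("%c ", code);
    else            printf("<%d> ", code);
}

int main(void)
{
    const char *input = "/WED/WE/WEE/WEB";    /* the sample string above  */
    const char *p;
    int w = -1;                               /* -1 plays the role of NIL */

    for (p = input; *p; p++) {
        int K = (unsigned char)*p;
        int wK = (w == -1) ? K : find(w, K);  /* single chars always exist */
        if (wK != -1) {
            w = wK;                           /* wK is in the dictionary    */
        } else {
            emit(w);                          /* output the code for w      */
            if (next_code < DICT_SIZE) {      /* add wK to the string table */
                prefix_of[next_code] = w;
                last_char[next_code] = K;
                next_code++;
            }
            w = K;
        }
    }
    emit(w);                                  /* flush the final string     */
    printf("\n");
    return 0;
}
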
    LZW is more commonly encountered today in a variant known as LZC, after 
its use in the UNIX "compress" program.  In this variant, pointers do not 
have a fixed length.  Rather, they start with a length of 9 bits, and then 
slowly grow to their maximum possible length once all the pointers of a 
particular size have been used up.  Furthermore, the dictionary is not frozen 
once it is full as for LZW - the program continually monitors compression 
performance, and once this starts decreasing the entire dictionary is 
discarded and rebuilt from scratch.  More recent schemes use some sort of 
least-recently-used algorithm to discard little-used phrases once the 
dictionary becomes full rather than throwing away the entire dictionary.  
 
Finally, not all schemes build up the dictionary by adding a single new 
character to the end of the current phrase. An alternative technique is to 
concatenate the previous two phrases (LZMW), which results in a faster 
buildup of longer phrases than the character-by-character buildup of the 
other methods.  The disadvantage of this method is that a more sophisticated 
data structure is needed to handle the dictionary.
 
[A good introduction to LZW, MW, AP and Y coding is given in the yabba
package. For ftp information, see question 2 in part one, file type .Y]
 
 
<The LZ77 family of compressors>
 
  LZ77-based schemes keep track of the last n bytes of data seen, and when a 
phrase is encountered that has already been seen, they output a pair of 
values corresponding to the position of the phrase in the previously-seen 
buffer of data, and the length of the phrase.  In effect the compressor moves 
a fixed-size *window* over the data (generally referred to as a *sliding 
window*), with the position part of the (position, length) pair referring to 
the position of the phrase within the window.  The most commonly used 
algorithms are derived from the LZSS scheme described by James Storer and 
Thomas Szymanski in 1982.  In this the compressor maintains a window of size 
N bytes and a *lookahead buffer* the contents of which it tries to find a 
match for in the window:
 
  while( lookAheadBuffer not empty )
      {
      get a pointer ( position, length ) to the longest match in the window
          for the lookahead buffer;
 
      if( length > MINIMUM_MATCH_LENGTH )
          {
          output a ( position, length ) pair;
          shift the window length characters along;
          }
      else
          {
          output the first character in the lookahead buffer;
          shift the window 1 character along;
          }
      }
        
  Decompression is simple and fast:  Whenever a ( position, length ) pair is 
encountered, go to that ( position ) in the window and copy ( length ) bytes 
to the output.
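 
  The decoding step can be captured in a few lines of C.  The token layout
below (one struct per literal or pair, with the position counted from the
start of the decoded output) is purely illustrative -- real formats pack
flags, positions and lengths into a bit stream -- but the copy loop is the
whole trick:

#include <stdio.h>

typedef struct {
    int is_pair;            /* 1 = (position, length) pair, 0 = literal */
    int position;           /* offset into the already-decoded output   */
    int length;
    unsigned char literal;
} Token;

size_t lzss_decode(const Token *tok, size_t ntok, unsigned char *out)
{
    size_t n = 0, i;

    for (i = 0; i < ntok; i++) {
        if (tok[i].is_pair) {
            int j;
            /* copy 'length' bytes starting at 'position'; byte-by-byte
             * copying handles overlapping matches correctly */
            for (j = 0; j < tok[i].length; j++)
                out[n + j] = out[tok[i].position + j];
            n += tok[i].length;
        } else {
            out[n++] = tok[i].literal;
        }
    }
    return n;
}

int main(void)
{
    /* "banana": literals 'b','a','n', then (position 1, length 3) = "ana" */
    Token toks[] = { {0,0,0,'b'}, {0,0,0,'a'}, {0,0,0,'n'}, {1,1,3,0} };
    unsigned char out[64];
    size_t n = lzss_decode(toks, 4, out);

    printf("%.*s\n", (int)n, (char *)out);     /* prints "banana" */
    return 0;
}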
 
  Sliding-window-based schemes can be simplified by numbering the input text
characters mod N, in effect creating a circular buffer.  The sliding window
approach automatically creates the LRU effect which must be done explicitly in
LZ78 schemes.  Variants of this method apply additional compression to the
output of the LZSS compressor; these include a simple variable-length code
(LZB), dynamic Huffman coding (LZH), and Shannon-Fano coding (ZIP 1.x), all
of which result in a certain degree of improvement over the basic scheme,
especially when the data are rather random and the LZSS compressor has little
effect.
  Recently an algorithm was developed which combines the ideas behind LZ77 and
LZ78 to produce a hybrid called LZFG.  LZFG uses the standard sliding window,
but stores the data in a modified trie data structure and produces as output
the position of the text in the trie.  Since LZFG only inserts complete
*phrases* into the dictionary, it should run faster than other LZ77-based
compressors.
 
All popular archivers (arj, lha, zip, zoo) are variations on the LZ77 theme.
 
------------------------------------------------------------------------------
 
Subject: [71] Introduction to MPEG (long)
 
 
For MPEG players, see item 15 in part 1 of the FAQ.  Frank Gadegast
<[email protected]> also posts a FAQ specialized in MPEG, available in
ftp.cs.tu-berlin.de:/pub/msdos/dos/graphics/mpegfa*.zip.
Chad Fogg <[email protected]> also has another FAQ in preparation.
The site ftp.crs4.it is dedicated to the MPEG compression standard;
see the directory mpeg and its subdirectories. The MPEG FAQ is also available
on the World Wide Web at http://www.crs4.it/HTML/LUIGI/MPEG/mpegfaq.html.
 
A description of MPEG can be found in: "MPEG: A Video Compression
Standard for Multimedia Applications" Didier Le Gall, Communications
of the ACM, April 1991, Vol 34. No.4, pp.46-58.
 
MPEG-2 bitstreams are available on wuarchive.wustl.edu in directory
/graphics/x3l3/pub/bitstreams.
 
Introduction to MPEG originally written by Mark Adler 
<[email protected]> around January 1992; modified and updated by 
Harald Popp <[email protected]> in March 94:
 
Q: What is MPEG, exactly?
 
A: MPEG is the "Moving Picture Experts Group", working under the 
   joint direction of the International Standards Organization (ISO) 
   and the International Electro-Technical Commission (IEC). This 
   group works on standards for the coding of moving pictures and 
   associated audio.
 
Q: What is the status of MPEG's work, then? What about MPEG-1, -2, 
   and so on?
 
A: MPEG approaches the growing need for multimedia standards step-by-
   step. Today, three "phases" are defined:
   
   MPEG-1: "Coding of Moving Pictures and Associated Audio for 
           Digital Storage Media at up to about 1.5 MBit/s"  
 
   Status: International Standard IS-11172, completed in 10.92
   
   MPEG-2: "Generic Coding of Moving Pictures and Associated Audio"
   
   Status: Committee Draft CD 13818 as found in documents MPEG93 / 
           N601, N602, N603 (11.93)   
 
   MPEG-3: no longer exists (has been merged into MPEG-2)
   
   MPEG-4: "Very Low Bitrate Audio-Visual Coding"
   
   Status: Call for Proposals 11.94, Working Draft in 11.96 
 
Q: MPEG-1 is ready for use. What does the standard look like?
 
A: MPEG-1 consists of 4 parts:
 
   IS 11172-1: System
   describes synchronization and multiplexing of video and audio
 
   IS 11172-2: Video
   describes compression of non-interlaced video signals
   
   IS 11172-3: Audio
   describes compression of audio signals 
   
   CD 11172-4: Compliance Testing
   describes procedures for determining the characteristics of coded 
   bitstreams and the decoding process and for testing compliance 
   with the requirements stated in the other parts
 
Q. Does MPEG have anything to do with JPEG? 
 
A. Well, it sounds the same, and they are part of the same 
   subcommittee of ISO along with JBIG and MHEG, and they usually meet 
   at the same place at the same time.  However, they are different 
   sets of people with few or no common individual members, and they 
   have different charters and requirements.  JPEG is for still image 
   compression.
 
Q. Then what's JBIG and MHEG?
 
A. Sorry I mentioned them. Ok, I'll simply say that JBIG is for binary
   image compression (like faxes), and MHEG is for multi-media data
   standards (like integrating stills, video, audio, text, etc.).
   For an introduction to JBIG, see question 74 below.
 
Q. So how does MPEG-1 work? Tell me about video coding!
 
A. First off, it starts with a relatively low resolution video
   sequence (possibly decimated from the original) of about 352 by
   240 pixels at 30 frames/s (US--different numbers for Europe),
   but original high (CD) quality audio.  The images are in color,
   but converted to YUV space, and the two chrominance channels
   (U and V) are decimated further to 176 by 120 pixels.  It turns
   out that you can get away with a lot less resolution in those
   channels and not notice it, at least in "natural" (not computer
   generated) images.
 
   The basic scheme is to predict motion from frame to frame in the
   temporal direction, and then to use DCT's (discrete cosine
   transforms) to organize the redundancy in the spatial directions.
   The DCT's are done on 8x8 blocks, and the motion prediction is
   done in the luminance (Y) channel on 16x16 blocks.  In other words,
   given the 16x16 block in the current frame that you are trying to
   code, you look for a close match to that block in a previous or
   future frame (there are backward prediction modes where later
   frames are sent first to allow interpolating between frames).
   The DCT coefficients (of either the actual data, or the difference
   between this block and the close match) are "quantized", which
   means that you divide them by some value to drop bits off the
   bottom end.  Hopefully, many of the coefficients will then end up
   being zero.  The quantization can change for every "macroblock"
   (a macroblock is 16x16 of Y and the corresponding 8x8's in both
   U and V).  The results of all of this, which include the DCT
   coefficients, the motion vectors, and the quantization parameters
   (and other stuff) is Huffman coded using fixed tables.  The DCT
   coefficients have a special Huffman table that is "two-dimensional"
   in that one code specifies a run-length of zeros and the non-zero
   value that ended the run.  Also, the motion vectors and the DC
   DCT components are DPCM (subtracted from the last one) coded.
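
   To make the run-length idea concrete, here is a small illustrative C
   fragment (leaving out the zig-zag scan, the real Huffman tables and
   the DPCM of the DC term): walk a list of quantized coefficients and
   emit one (run, value) pair per non-zero coefficient.

      #include <stdio.h>

      int main(void)
      {
          int coeff[16] = {42, 0, 0, -3, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0};
          int i, run = 0;

          for (i = 0; i < 16; i++) {
              if (coeff[i] == 0) {
                  run++;                      /* count zeros between values */
              } else {
                  printf("(run %d, value %d)\n", run, coeff[i]);
                  run = 0;
              }
          }
          /* a real coder would emit an end-of-block code for the tail of zeros */
          return 0;
      }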
 
Q. So is each frame predicted from the last frame?
 
A. No.  The scheme is a little more complicated than that.  There are
   three types of coded frames.  There are "I" or intra frames.  They
   are simply a frame coded as a still image, not using any past
   history.  You have to start somewhere.  Then there are "P" or
   predicted frames.  They are predicted from the most recently
   reconstructed I or P frame.  (I'm describing this from the point
   of view of the decompressor.)  Each macroblock in a P frame can
   either come with a vector and difference DCT coefficients for a
   close match in the last I or P, or it can just be "intra" coded
   (like in the I frames) if there was no good match.
 
   Lastly, there are "B" or bidirectional frames.  They are predicted
   from the closest two I or P frames, one in the past and one in the
   future.  You search for matching blocks in those frames, and try
   three different things to see which works best.  (Now I have the
   point of view of the compressor, just to confuse you.)  You try 
   using the forward vector, the backward vector, and you try 
   averaging the two blocks from the future and past frames, and 
   subtracting that from the block being coded.  If none of those work 
   well, you can intracode the block.
 
   The sequence of decoded frames usually goes like:
 
   IBBPBBPBBPBBIBBPBBPB...
 
   Where there are 12 frames from I to I (for US and Japan anyway.)
   This is based on a random access requirement that you need a
   starting point at least once every 0.4 seconds or so.  The ratio
   of P's to B's is based on experience.
 
   Of course, for the decoder to work, you have to send that first
   P *before* the first two B's, so the compressed data stream ends
   up looking like:
 
   0xx312645...
 
   where those are frame numbers.  xx might be nothing (if this is
   the true starting point), or it might be the B's of frames -2 and
   -1 if we're in the middle of the stream somewhere.
 
   You have to decode the I, then decode the P, keep both of those
   in memory, and then decode the two B's.  You probably display the
   I while you're decoding the P, and display the B's as you're
   decoding them, and then display the P as you're decoding the next
   P, and so on.
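
   Here is a toy C program (illustration only, not taken from any MPEG
   encoder) that turns a display-order pattern like the one above into
   the order the frames appear in the bitstream: each I or P "anchor"
   frame is sent before the B frames that depend on it.

      #include <stdio.h>
      #include <string.h>

      int main(void)
      {
          const char *display = "IBBPBBPBBPBBI";   /* display order */
          int i, j, n = (int)strlen(display);
          int held[16], nheld = 0;                 /* B frames waiting */

          for (i = 0; i < n; i++) {
              if (display[i] == 'B') {
                  held[nheld++] = i;               /* hold until next anchor */
              } else {
                  printf("%c%d ", display[i], i);  /* send the anchor first  */
                  for (j = 0; j < nheld; j++)
                      printf("B%d ", held[j]);     /* then the held B frames */
                  nheld = 0;
              }
          }
          printf("\n");   /* prints: I0 P3 B1 B2 P6 B4 B5 ... */
          return 0;
      }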
 
Q. You've got to be kidding.
 
A. No, really!
 
Q. Hmm.  Where did they get 352x240?
 
A. That derives from the CCIR-601 digital television standard which
   is used by professional digital video equipment.  It is (in the US)
   720 by 243 by 60 fields (not frames) per second, where the fields
   are interlaced when displayed.  (It is important to note though
   that fields are actually acquired and displayed a 60th of a second
   apart.)  The chrominance channels are 360 by 243 by 60 fields a
   second, again interlaced.  This degree of chrominance decimation
   (2:1 in the horizontal direction) is called 4:2:2.  The source
   input format for MPEG I, called SIF, is CCIR-601 decimated by 2:1
   in the horizontal direction, 2:1 in the time direction, and an
   additional 2:1 in the chrominance vertical direction.  And some
   lines are cut off to make sure things divide by 8 or 16 where
   needed.
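
   As a worked example of that arithmetic (US numbers, for illustration
   only): the 720 luminance samples per line are decimated 2:1 to 360
   and trimmed to 352 so that the width divides by 16; the 243 lines per
   field are trimmed to 240 for the same reason; decimating 2:1 in time
   turns 60 fields/s into 30 frames/s.  The chrominance, already only
   360 samples wide in CCIR-601, is halved again to 180 and trimmed to
   176, and the extra vertical 2:1 brings 240 lines down to 120, giving
   the 176 by 120 chrominance mentioned earlier.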
 
Q. What if I'm in Europe?
 
A. For 50 Hz display standards (PAL, SECAM) change the number of lines
   in a field from 243 or 240 to 288, and change the display rate to
   50 fields/s or 25 frames/s.  Similarly, change the 120 lines in
   the decimated chrominance channels to 144 lines.  Since 288*50 is
   exactly equal to 240*60, the two formats have the same source data
   rate.
 
Q. What will MPEG-2 do for video coding?
 
A. As I said, there is a considerable loss of quality in going from
   CCIR-601 to SIF resolution.  For entertainment video, it's simply
   not acceptable.  You want to use more bits and code all or almost
   all the CCIR-601 data.  From subjective testing at the Japan
   meeting in November 1991, it seems that 4 MBits/s can give very
   good quality compared to the original CCIR-601 material.  The
   objective of MPEG-2 is to define a bit stream optimized for 
   these resolutions and bit rates.
 
Q. Why not just scale up what you're doing with MPEG-1?
 
A. The main difficulty is the interlacing.  The simplest way to extend
   MPEG-1 to interlaced material is to put the fields together into
   frames (720x486x30/s).  This results in bad motion artifacts that
   stem from the fact that moving objects are in different places
   in the two fields, and so don't line up in the frames.  Compressing
   and decompressing without taking that into account somehow tends to
   muddle the objects in the two different fields.
 
   The other thing you might try is to code the even and odd field
   streams separately.  This avoids the motion artifacts, but as you
   might imagine, doesn't get very good compression since you are not
   using the redundancy between the even and odd fields where there
   is not much motion (which is typically most of the image).
 
   Or you can code it as a single stream of fields.  Or you can
   interpolate lines.  Or, etc. etc.  There are many things you can
   try, and the point of MPEG-2 is to figure out what works well.
   MPEG-2 is not limited to considering only derivations of MPEG-1.
   There were several non-MPEG-1-like schemes in the competition in
   November, and some aspects of those algorithms may or may not
   make it into the final standard for entertainment video 
   compression.
 
Q. So what works?
 
A. Basically, derivations of MPEG-1 worked quite well, with one that
   used wavelet subband coding instead of DCT's that also worked very
   well.  Also among the worked-very-well's was a scheme that did not
   use B frames at all, just I and P's.  All of them, except maybe 
   one, did some sort of adaptive frame/field coding, where a decision 
   is made on a macroblock basis as to whether to code that one as 
   one frame macroblock or as two field macroblocks.  Some other 
   aspects are how to code I-frames--some suggest predicting the even 
   field from the odd field.  Or you can predict evens from evens and 
   odds or odds from evens and odds or any field from any other field, 
   etc.
 
Q. So what works?
 
A. Ok, we're not really sure what works best yet.  The next step is
   to define a "test model" to start from, that incorporates most of
   the salient features of the worked-very-well proposals in a
   simple way.  Then experiments will be done on that test model,
   making a mod at a time, and seeing what makes it better and what
   makes it worse.  Example experiments are, B's or no B's, DCT vs.
   wavelets, various field prediction modes, etc.  The requirements,
   such as implementation cost, quality, random access, etc. will all
   feed into this process as well.
 
Q. When will all this be finished?
 
A. I don't know.  I'd hope it will be in about a year or less.
 
Q: Talking about MPEG audio coding, I heard a lot about "Layer 1, 2 
   and 3". What does it mean, exactly?   
 
A: MPEG-1, IS 11172-3, describes the compression of audio signals 
   using high performance perceptual coding schemes. It specifies a 
   family of three audio coding schemes, simply called Layer-1,-2,-3, 
   with increasing encoder complexity and performance (sound quality 
   per bitrate). The three codecs are compatible in a hierarchical 
   way, i.e. a Layer-N decoder is able to decode bitstream data 
   encoded in Layer-N and all Layers below N (e.g., a Layer-3 
   decoder may accept Layer-1,-2 and -3, whereas a Layer-2 decoder 
   may accept only Layer-1 and -2.)
 
Q: So we have a family of three audio coding schemes. What does the 
   MPEG standard define, exactly?
   
A: For each Layer, the standard specifies the bitstream format and 
   the decoder. To allow for future improvements, it does *not* 
   specify the encoder, but an informative chapter gives an example 
   of an encoder for each Layer.
 
Q: What have the three audio Layers in common?
 
A: All Layers use the same basic structure. The coding scheme can be  
   described as "perceptual noise shaping" or "perceptual subband / 
   transform coding". 
 
   The encoder analyzes the spectral components of the audio signal 
   by calculating a filterbank or transform and applies a 
   psychoacoustic model to estimate the just noticeable noise-
   level. In its quantization and coding stage, the encoder tries 
   to allocate the available number of data bits in a way to meet 
   both the bitrate and masking requirements.
 
   The decoder is much less complex. Its only task is to synthesize 
   an audio signal out of the coded spectral components.
   
   All Layers use the same analysis filterbank (polyphase with 32 
   subbands). Layer-3 adds a MDCT transform to increase the frequency 
   resolution.
   
   All Layers use the same "header information" in their bitstream, 
   to support the hierarchical structure of the standard.
   
   All Layers use a bitstream structure that contains parts that are 
   more sensitive to biterrors ("header", "bit allocation", 
   "scalefactors", "side information") and parts that are less 
   sensitive ("data of spectral components").  
 
   All Layers may use 32, 44.1 or 48 kHz sampling frequency.
   
   All Layers are allowed to work with similar bitrates:
   Layer-1: from 32 kbps to 448 kbps
   Layer-2: from 32 kbps to 384 kbps
   Layer-3: from 32 kbps to 320 kbps
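
   As a very rough sketch of the "perceptual" part of the quantization
   and coding stage described above (this is NOT the normative
   Layer-1/2/3 allocation procedure; the signal-to-mask ratios would
   come from a psychoacoustic model that is not shown):

      #define NBANDS 32

      /* Greedy bit allocation: spend bits where the signal-to-mask ratio
         is largest, one bit at a time, until the frame's bit budget is
         gone or all quantization noise is below the masking threshold. */
      void allocate_bits(const double smr[NBANDS],  /* signal-to-mask, dB  */
                         int budget,                /* bits for this frame */
                         int bits[NBANDS])
      {
          int b;

          for (b = 0; b < NBANDS; b++)
              bits[b] = 0;
          while (budget > 0) {
              int best = 0;
              double need, best_need = -1.0e9;
              for (b = 0; b < NBANDS; b++) {
                  /* each extra bit buys roughly 6 dB of noise margin */
                  need = smr[b] - 6.0 * bits[b];
                  if (need > best_need) { best_need = need; best = b; }
              }
              if (best_need <= 0.0)
                  break;            /* everything already below the mask */
              bits[best]++;
              budget--;
          }
      }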
 
Q: What are the main differences between the three Layers, from a 
   global view?
 
A: From Layer-1 to Layer-3,
   complexity increases (mainly true for the encoder),
   overall codec delay increases, and
   performance increases (sound quality per bitrate).
 
Q: Which Layer should I use for my application?
 
A: Good Question. Of course, it depends on all your requirements. But 
   as a first approach, you should consider the available bitrate of 
   your application as the Layers have been designed to support 
   certain areas of bitrates most efficiently, i.e. with a minimum 
   drop of sound quality.
 
   Let us look a little closer at the strong domains of each Layer.
    
   Layer-1: Its ISO target bitrate is 192 kbps per audio channel.
 
   Layer-1 is a simplified version of Layer-2. It is most useful at 
   "high" bitrates around or above 192 kbps. A version of Layer-1 is 
   used as "PASC" in the DCC recorder.
 
   Layer-2: Its ISO target bitrate is 128 kbps per audio channel.
   
   Layer-2 is identical to MUSICAM. It has been designed as a trade-
   off between sound quality per bitrate and encoder complexity. It 
   is most useful for bitrates around the "medium" bitrates of 128 or 
   even 96 kbps per audio channel. The DAB (EU 147) proponents have 
   decided to use Layer-2 in the future Digital Audio Broadcasting 
   network.      
 
   Layer-3: Its ISO target bitrate is 64 kbps per audio channel.
   
   Layer-3 merges the best ideas of MUSICAM and ASPEC. It has been 
   designed for best performance at "low" bitrates around 64 kbps or 
   even below. The Layer-3 format specifies a set of advanced 
   features that all address one goal: to preserve as much sound 
   quality as possible even at rather low bitrates. Today, Layer-3 is 
   already in use in various telecommunication networks (ISDN, 
   satellite links, and so on) and speech announcement systems. 
 
Q: Tell me more about sound quality. How do you assess that?
 
A: Today, there is no alternative to expensive listening tests. 
   During the ISO-MPEG-1 process, 3 international listening tests 
   have been performed, with a lot of trained listeners, supervised 
   by Swedish Radio. They took place in 7.90, 3.91 and 11.91. Another 
   international listening test was performed by CCIR, now ITU-R, in 
   92.      
   
   All these tests used the "triple stimulus, hidden reference" 
   method and the CCIR impairment scale to assess the audio quality.
   The listening sequence is "ABC", with A = original, BC = pair of 
   original / coded signal with random sequence, and the listener has 
   to evaluate both B and C with a number between 1.0 and 5.0. The 
   meaning of these values is:
   
   5.0 = transparent (this should be the original signal)
    4.0 = perceptible, but not annoying (first differences noticeable)
   3.0 = slightly annoying   
   2.0 = annoying
   1.0 = very annoying
 
   With perceptual codecs (like MPEG audio), the traditional 
   parameters (like SNR, THD+N, bandwidth) are of little use. 
   Fraunhofer-IIS is also working on objective quality assessment 
   tools, like the NMR meter (Noise-to-Mask-Ratio). BTW: If you need 
   more information about NMR, please contact [email protected].
 
Q: Now that I know how to assess quality, come on, tell me the 
   results of these tests.
   
A: Well, for low bitrates, the main result is that at 60 or 64 kbps 
   per channel, Layer-2 always scored between 2.1 and 2.6, whereas 
   Layer-3 scored between 3.6 and 3.8. This is a significant increase 
   in sound quality, indeed! Furthermore, the selection process for 
   critical sound material showed that it was rather difficult to 
   find worst-case material for Layer-3 whereas it was not so hard to 
   find such items for Layer-2.
  
Q: OK, a Layer-2 codec at low bitrates may sound poor today, but 
   couldn't that be improved in the future? I guess you just told me 
   before that the encoder is not fixed in the standard.
   
A: Good thinking! As the sound quality mainly depends on the encoder 
   implementation, it is true that there is no such thing as a fixed 
   "Layer-N" quality. So we definitely only know the performance of the 
   reference codecs during the international tests. Who knows what 
   will happen in the future? What we do know now, is:
   
   Today, Layer-3 already provides a sound quality that comes very 
   near to CD quality at 64 kbps per channel. Layer-2 is far away 
   from that.
   
   Tomorrow, both Layers may improve. Layer-2 has been designed as a 
   trade-off between quality and complexity, so the bitstream format 
   allows only limited innovations. In contrast, even the current
   reference Layer-3-codec exploits only a small part of the powerful 
   mechanisms inside the Layer-3 bitstream format.  
 
Q: All in all, you sound as if anybody should use Layer-3 for low 
   bitrates. Why on earth do some vendors still offer only Layer-2 
   equipment for these applications?
   
A: Well, maybe because they started to design and develop their 
   system rather early, e.g. in 1990. As Layer-2 is identical to 
   MUSICAM, it has been available since the summer of 1990 at the 
   latest. In that year, Layer-3 development started, and it was 
   successfully finished in the spring of 1992. So, for a certain 
   time, vendors could only 
   exploit the existing part of the new MPEG standard.   
   
   Now the situation has changed. All Layers are available, the 
   standard is completed, and new systems need not limit themselves, 
   but may capitalize on the full features of MPEG audio.
 
Q: How do I get the MPEG documents?
 
A: You can order them from your national standards body.
 
   E.g., in Germany, please contact:
   DIN-Beuth Verlag, Auslandsnormen
   Mrs. Niehoff, Burggrafenstr. 6, D-10772 Berlin, Germany
   Phone: 030-2601-2757, Fax: 030-2601-1231
 
   E.g., in the USA, you may order them from ANSI or 
   buy them from companies like OMNICOM, phone +44 438 742424
                                      FAX   +44 438 740154
 
Q. How do I join MPEG?
 
A. You don't join MPEG.  You have to participate in ISO as part of a
   national delegation.  How you get to be part of the national
   delegation is up to each nation.  I only know the U.S., where you
   have to attend the corresponding ANSI meetings to be able to
   attend the ISO meetings.  Your company or institution has to be
   willing to sink some bucks into travel since, naturally, these
   meetings are held all over the world.  (For example, Paris,
   Santa Clara, Kurihama Japan, Singapore, Haifa Israel, Rio de
   Janeiro, London, etc.)
 
------------------------------------------------------------------------------
 
Subject: [72] What is wavelet theory?
 
 
Preprints and software are available by anonymous ftp from the
Yale Mathematics Department computer ceres.math.yale.edu[130.132.23.22],
in pub/wavelets and pub/software.
 
epic and hcompress are wavelet coders. (For source code, see item 15
in part one).
 
Bill Press of Harvard/CfA has made some things available for anonymous
ftp on cfata4.harvard.edu [128.103.40.79] in directory /pub. There is
a short TeX article on wavelet theory (wavelet.tex, to be included in
a future edition of Numerical Recipes), some sample wavelet code
(wavelet.f, in FORTRAN - sigh), and a beta version of an astronomical
image compression program which he is currently developing (FITS
format data files only, in fitspress08.tar.Z).
 
The Rice Wavelet Toolbox Release 2.0 is available on cml.rice.edu in
directories /pub/dsp/software and /pub/dsp/papers.  This is a
collection of MATLAB "mfiles" and "mex" files for two-band and
M-band filter bank/wavelet analysis from the DSP group and
Computational Mathematics Laboratory (CML) at Rice University,
Houston, TX.  This release includes application code for Synthetic
Aperture Radar despeckling and for deblocking of JPEG decompressed
Images.  Contact: Ramesh Gopinath <[email protected]>.
 
 
A mailing list dedicated to research on wavelets has been set up at the
University of South Carolina. To subscribe to this mailing list, send a
message with "subscribe" as the subject to [email protected].
 
 
A 5 minute course in wavelet transforms, by Richard Kirk <[email protected]>:
 
Do you know what a Haar transform is? It's a transform to another orthonormal
space (like the DFT), but the basis functions are a set of square wave bursts
like this...
 
   +--+                         +------+
   +  |  +------------------    +      |      +--------------
      +--+                             +------+
 
         +--+                                 +------+
   ------+  |  +------------    --------------+      |      +
            +--+                                     +------+
 
               +--+             +-------------+
   ------------+  |  +------    +             |             +
                  +--+                        +-------------+
 
                     +--+       +---------------------------+
   ------------------+  |  +    +                           +
                        +--+
 
This is the set of functions for an 8-element 1-D Haar transform. You
can probably see how to extend this to higher orders and higher dimensions
yourself. This is dead easy to calculate, but it is not what is usually
understood by a wavelet transform.
 
If you look at the eight Haar functions you see we have four functions
that code the highest resolution detail, two functions that code the
coarser detail, one function that codes the coarser detail still, and the 
top function that codes the average value for the whole `image'.
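
Here is a minimal C sketch of an (unnormalized) 8-element Haar transform:
repeatedly replace the current low-pass half with pairwise averages and
differences. A real implementation would scale the basis functions to make
them orthonormal.

    #include <stdio.h>

    void haar8(const double in[8], double out[8])
    {
        double buf[8], tmp[8];
        int i, len;

        for (i = 0; i < 8; i++) buf[i] = in[i];
        for (len = 8; len > 1; len /= 2) {
            for (i = 0; i < len / 2; i++) {
                tmp[i]         = (buf[2*i] + buf[2*i + 1]) / 2.0; /* average */
                tmp[len/2 + i] = (buf[2*i] - buf[2*i + 1]) / 2.0; /* detail  */
            }
            for (i = 0; i < len; i++) buf[i] = tmp[i];
        }
        /* buf[0] is the overall average, buf[1] the coarsest detail,
           buf[2..3] the next level, buf[4..7] the finest detail. */
        for (i = 0; i < 8; i++) out[i] = buf[i];
    }

    int main(void)
    {
        double x[8] = {1, 2, 3, 4, 5, 6, 7, 8}, y[8];
        int i;

        haar8(x, y);
        for (i = 0; i < 8; i++) printf("%g ", y[i]);
        printf("\n");
        return 0;
    }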
 
Haar functions can be used to code images instead of the DFT. With bilevel
images (such as text) the result can look better, and it is quicker to code.
Flattish regions, textures, and soft edges in scanned images get a nasty
`blocking' feel to them. This is obvious on hardcopy, but can be disguised on
color CRTs by the effects of the shadow mask. The DCT gives more consistent
results.
 
This connects up with another bit of maths sometimes called Multispectral
Image Analysis, sometimes called Image Pyramids.
 
Suppose you want to produce a discretely sampled image from a continuous 
function. You would do this by effectively `scanning' the function using a
sinc function [ sin(x)/x ] `aperture'. This was proved by Shannon in the 
`forties. You can do the same thing starting with a high resolution
discretely sampled image. You can then get a whole set of images showing 
the edges at different resolutions by differencing the image at one
resolution with another version at another resolution. If you have made this
set of images properly they ought to all add together to give the original 
image.
 
This is an expansion of data. Suppose you started off with a 1K*1K image.
You may now have a 64*64 low resolution image plus difference images at 128*128,
256*256, 512*512 and 1K*1K. 
 
Where has this extra data come from? If you look at the difference images you 
will see there is obviously some redundancy as most of the values are near 
zero. From the way we constructed the levels we know that locally the average
must approach zero in all levels but the top. We could then construct a set of
functions out of the sinc functions at any level so that their total value 
at all higher levels is zero. This gives us an orthonormal set of basis 
functions for a transform. The transform resembles the Haar transform a bit,
but has symmetric wave pulses that decay away continuously in either direction
rather than square waves that cut off sharply. This transform is the
wavelet transform ( got to the point at last!! ).
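
A toy 1-D illustration of the pyramid idea above (plain pairwise averaging
rather than the sinc aperture, just to show where the redundancy goes): the
signal is stored as a half-resolution version plus a difference signal, and
the difference values cluster near zero.

    #include <stdio.h>

    #define N 8

    int main(void)
    {
        double x[N] = {10, 11, 12, 14, 13, 13, 12, 11};
        double low[N / 2], diff[N];
        int i;

        for (i = 0; i < N / 2; i++)
            low[i] = (x[2*i] + x[2*i + 1]) / 2.0;   /* decimate 2:1 */

        for (i = 0; i < N; i++)
            diff[i] = x[i] - low[i / 2];            /* original minus upsampled */

        for (i = 0; i < N; i++)
            printf("%g ", diff[i]);                 /* mostly small values */
        printf("\n");
        return 0;
    }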
 
These wavelet functions have been likened to the edge detecting functions
believed to be present in the human retina.
 
 
Loren I. Petrich <[email protected]> adds that order 2 or 3 Daubechies
discrete wavelet transforms have a speed comparable to DCT's, and
usually achieve compression a factor of 2 better for the same image
quality than the JPEG 8*8 DCT. (See item 25 in part 1 of this FAQ for
references on fast DCT algorithms.)
 
------------------------------------------------------------------------------
 
Subject: [73] What is the theoretical compression limit?
 
 
This question can be understood in two different ways:
 
(a) For a given compressor/decompressor, what is the best possible
    lossless compression for an arbitrary string (byte sequence)
    given as input?
 
(b) For a given string, what is the best possible lossless
    compressor/decompressor?
 
For case (a), the question is generally meaningless, because a
specific compressor may compress one very large input file down to a
single bit, and enlarge all other files by only one bit.  There is no
lossless compressor that is guaranteed to compress all possible input
files. If it compresses some files, then it must enlarge some others.
This can be proven by a simple counting argument (see item 9).  In
case (a), the size of the decompressor is not taken into account for
the determination of the compression ratio since the decompressor is
fixed and it may decompress an arbitrary number of files of arbitrary
length.
 
For case (b), it is of course necessary to take into account the size
of the decompressor. The problem may be restated as "What is the
shortest program p which, when executed, produces the input string s?".
The size of this program is known as the Kolmogorov complexity
of the string s.  Strings that are truly random are not compressible:
the smallest representation of the string is the string itself.
On the other hand, the output of a pseudo-random number generator
can be extremely compressible, since it is sufficient to know the
parameters and seed of the generator to reproduce an arbitrarily
long sequence.
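
As a concrete example of the last point, this little C program is a few
hundred bytes of source, yet it writes out the same 10MB byte sequence every
time it is run; the whole sequence is therefore describable by the program
itself (the constants are those of an ordinary linear congruential generator,
used purely for illustration):

    #include <stdio.h>

    int main(void)
    {
        unsigned long seed = 12345UL;
        long i;

        for (i = 0; i < 10L * 1024 * 1024; i++) {
            seed = seed * 1103515245UL + 12345UL;   /* LCG step */
            putchar((int)((seed >> 16) & 0xff));    /* one "random" byte */
        }
        return 0;
    }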
 
References: "An Introduction to Kolmogorov Complexity and its Applications",
      Ming Li and Paul Vitanyi, Springer-Verlag
 
------------------------------------------------------------------------------
 
Subject: [74] Introduction to JBIG
 
 
JBIG software and the JBIG specification are available on nic.funet.fi
in /pub/graphics/misc/test-images/jbig.tar.gz.
 
 
A short introduction to JBIG, written by Mark Adler <[email protected]>:
 
  JBIG losslessly compresses binary (one-bit/pixel) images.  (The B stands
  for bi-level.)  Basically it models the redundancy in the image as the
  correlations of the pixel currently being coded with a set of nearby
  pixels called the template.  An example template might be the two
  pixels preceding this one on the same line, and the five pixels centered
  above this pixel on the previous line.  Note that this choice only
  involves pixels that have already been seen from a scanner.
 
  The current pixel is then arithmetically coded based on the eight-bit
  (including the pixel being coded) state so formed.  So there are (in this
  case) 256 contexts to be coded.  The arithmetic coder and probability
  estimator for the contexts are actually IBM's (patented) Q-coder.  The
  Q-coder uses low precision, rapidly adaptable (those two are related)
  probability estimation combined with a multiply-less arithmetic coder.
  The probability estimation is intimately tied to the interval calculations
  necessary for the arithmetic coding.
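
  As a rough sketch of how such a context is formed (the template, image
  layout and border handling here are made up for illustration; the real
  JBIG templates and the Q-coder itself are defined in the standard):

      /* Pack the two pixels to the left and the five pixels centered
         above into a single context index, which then selects the
         probability estimate used by the arithmetic coder.  Assumes one
         byte per pixel (0 or 1) and that (x,y) is away from the border. */
      int context_index(const unsigned char *image, int width, int x, int y)
      {
          int ctx = 0, dx;

          ctx = (ctx << 1) | image[y * width + (x - 2)];  /* two to the left */
          ctx = (ctx << 1) | image[y * width + (x - 1)];
          for (dx = -2; dx <= 2; dx++)                    /* five above      */
              ctx = (ctx << 1) | image[(y - 1) * width + (x + dx)];
          return ctx;                                     /* 7 bits: 0..127  */
      }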
 
  JBIG actually goes beyond this and has adaptive templates, and probably
  some other bells and whistles I don't know about.  You can find a
  description of the Q-coder as well as the ancestor of JBIG in the Nov 88
  issue of the IBM Journal of Research and Development.  This is a very
  complete and well written set of five articles that describe the Q-coder
  and a bi-level image coder that uses the Q-coder.
 
  You can use JBIG on grey-scale or even color images by simply applying
  the algorithm one bit-plane at a time.  You would want to recode the
  grey or color levels first though, so that adjacent levels differ in
  only one bit (called Gray-coding).  I hear that this works well up to
  about six bits per pixel, beyond which JPEG's lossless mode works better.
  You need to use the Q-coder with JPEG also to get this performance.
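
  Gray coding itself is a one-liner; the point is that adjacent levels then
  differ in exactly one bit, so a small change in grey level flips only one
  bit-plane:

      /* Standard binary-reflected Gray code and its inverse. */
      unsigned gray_encode(unsigned b)
      {
          return b ^ (b >> 1);
      }

      unsigned gray_decode(unsigned g)
      {
          unsigned b = 0;
          while (g) {          /* undo the xor cascade */
              b ^= g;
              g >>= 1;
          }
          return b;
      }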
 
  Actually no lossless mode works well beyond six bits per pixel, since
  those low bits tend to be noise, which doesn't compress at all.
 
  Anyway, the intent of JBIG is to replace the current, less effective
  group 3 and 4 fax algorithms.
 
 
Another introduction to JBIG, written by Hank van Bekkem <[email protected]>:
 
  The following description of the JBIG algorithm is derived from
  experiences with a software implementation I wrote following the
  specifications in the revision 4.1 draft of September 16, 1991. The
  source will not be made available in the public domain, as parts of
  JBIG are patented.
 
  JBIG (Joint Bi-level Image Experts Group) is an experts group of ISO,
  IEC and CCITT (JTC1/SC2/WG9 and SGVIII). Its job is to define a
  compression standard for lossless image coding ([1]). The main
  characteristics of the proposed algorithm are:
  - Compatible progressive/sequential coding. This means that a
    progressively coded image can be decoded sequentially, and the
    other way around.
  - JBIG will be a lossless image compression standard: all bits in
    your images before and after compression and decompression will be
    exactly the same.
 
  In the rest of this text I will first describe the JBIG algorithm in
  a short abstract of the draft. I will conclude by saying something
  about the value of JBIG.
 
 
  JBIG algorithm.
  --------------
 
  JBIG parameter P specifies the number of bits per pixel in the image.
  Its allowable range is 1 through 255, but starting at P=8 or so,
  compression will be more efficient using other algorithms. On the
  other hand, medical images such as chest X-rays are often stored with
   12 bits per pixel, while no distortion is allowed, so JBIG can
  certainly be of use in this area. To limit the number of bit changes
  between adjacent decimal values (e.g. 127 and 128), it is wise to use
  Gray coding before compressing multi-level images with JBIG. JBIG
  then compresses the image on a bitplane basis, so the rest of this
  text assumes bi-level pixels.
 
  Progressive coding is a way to send an image gradually to a receiver
  instead of all at once. During sending, more detail is sent, and the
  receiver can build the image from low to high detail. JBIG uses
  discrete steps of detail by successively doubling the resolution. The
  sender computes a number of resolution layers D, and transmits these
  starting at the lowest resolution Dl. Resolution reduction uses
  pixels in the high resolution layer and some already computed low
  resolution pixels as an index into a lookup table. The contents of
  this table can be specified by the user.
 
  Compatibility between progressive and sequential coding is achieved
  by dividing an image into stripes. Each stripe is a horizontal bar
  with a user definable height. Each stripe is separately coded and
  transmitted, and the user can define in which order stripes,
  resolutions and bitplanes (if P>1) are intermixed in the coded data.
  A progressive coded image can be decoded sequentially by decoding
   each stripe, beginning with the one at the top of the image, to its
  full resolution, and then proceeding to the next stripe. Progressive
  decoding can be done by decoding only a specific resolution layer
  from all stripes.
 
  After dividing an image into bitplanes, resolution layers and
  stripes, eventually a number of small bi-level bitmaps are left to
  compress. Compression is done using a Q-coder. Reference [2]
  contains a full description, I will only outline the basic principles
  here.
 
  The Q-coder codes bi-level pixels as symbols using the probability of
  occurrence of these symbols in a certain context. JBIG defines two
  kinds of context, one for the lowest resolution layer (the base
  layer), and one for all other layers (differential layers).
  Differential layer contexts contain pixels in the layer to be coded,
  and in the corresponding lower resolution layer.
 
  For each combination of pixel values in a context, the probability
  distribution of black and white pixels can be different. In an all
  white context, the probability of coding a white pixel will be much
  greater than that of coding a black pixel. The Q-coder assigns, just
  like a Huffman coder, more bits to less probable symbols, and so
   achieves compression. The Q-coder can, unlike a Huffman coder,
  assign one output codebit to more than one input symbol, and thus is
  able to compress bi-level pixels without explicit clustering, as
  would be necessary using a Huffman coder.
 
  Maximum compression will be achieved when all probabilities (one set
  for each combination of pixel values in the context) follow the
  probabilities of the pixels. The Q-coder therefore continuously
  adapts these probabilities to the symbols it sees.
 
 
  JBIG value.
  ----------
 
  In my opinion, JBIG can be regarded as two combined devices:
  - Providing the user the service of sending or storing multiple
    representations of images at different resolutions without any
    extra cost in storage. Differential layer contexts contain pixels
    in two resolution layers, and so enable the Q-coder to effectively
    code the difference in information between the two layers, instead
    of the information contained in every layer. This means that,
    within a margin of approximately 5%, the number of resolution
     layers doesn't affect the compression ratio.
  - Providing the user a very efficient compression algorithm, mainly
    for use with bi-level images. Compared to CCITT Group 4, JBIG is
    approximately 10% to 50% better on text and line art, and even
    better on halftones. JBIG is however, just like Group 4, somewhat
    sensitive to noise in images. This means that the compression ratio
    decreases when the amount of noise in your images increases.
 
  An example of an application would be browsing through an image
  database, e.g. an EDMS (engineering document management system).
  Large A0 size drawings at 300 dpi or so would be stored using five
  resolution layers. The lowest resolution layer would fit on a
  computer screen. Base layer compressed data would be stored at the
  beginning of the compressed file, thus making browsing through large
  numbers of compressed drawings possible by reading and decompressing
  just the first small part of all files. When the user stops browsing,
  the system could automatically start decompressing all remaining
  detail for printing at high resolution.
 
  [1] "Progressive Bi-level Image Compression, Revision 4.1", ISO/IEC
      JTC1/SC2/WG9, CD 11544, September 16, 1991
  [2] "An overview of the basic principles of the Q-coder adaptive
      binary arithmetic coder", W.B. Pennebaker, J.L. Mitchell, G.G.
      Langdon, R.B. Arps, IBM Journal of research and development,
       Vol.32, No.6, November 1988, pp. 717-726 (See also the other
      articles about the Q-coder in this issue)
 
------------------------------------------------------------------------------
 
Subject: [75] Introduction to JPEG
 
Here is a brief overview of the inner workings of JPEG, plus some
references for more detailed information, written by Tom Lane
<[email protected]>.  Please read item 19 in part 1 first.
 
JPEG works on either full-color or gray-scale images; it does not handle
bilevel (black and white) images, at least not efficiently.  It doesn't
handle colormapped images either; you have to pre-expand those into an
unmapped full-color representation.  JPEG works best on "continuous tone"
images; images with many sudden jumps in color values will not compress well.
 
There are a lot of parameters to the JPEG compression process.  By adjusting
the parameters, you can trade off compressed image size against reconstructed
image quality over a *very* wide range.  You can get image quality ranging
from op-art (at 100x smaller than the original 24-bit image) to quite
indistinguishable from the source (at about 3x smaller).  Usually the
threshold of visible difference from the source image is somewhere around 10x
to 20x smaller than the original, ie, 1 to 2 bits per pixel for color images.
Grayscale requires a little bit less space.
 
JPEG defines a "baseline" lossy algorithm, plus optional extensions for
progressive and hierarchical coding.  There is also a separate lossless
compression mode; this typically gives about 2:1 compression, ie about 12
bits per color pixel.  Most currently available JPEG hardware and software
handles only the baseline mode.
 
 
Here's the outline of the baseline compression algorithm:
 
1. Transform the image into a suitable color space.  This is a no-op for
grayscale, but for color images you generally want to transform RGB into a
luminance/chrominance color space (YCbCr, YUV, etc).  The luminance component
is grayscale and the other two axes are color information.  The reason for
doing this is that you can afford to lose a lot more information in the
chrominance components than you can in the luminance component; the human eye
is not as sensitive to high-frequency color info as it is to high-frequency
luminance.  (See any TV system for precedents.)  You don't have to change the
color space if you don't want to, as the remainder of the algorithm works on
each color component independently, and doesn't care just what the data is.
However, compression will be less since you will have to code all the
components at luminance quality.
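
One common choice (the CCIR-601 luminance/chrominance weights used by JFIF
files; JPEG itself does not mandate any particular color space) looks like
this:

    /* RGB -> YCbCr with the usual CCIR-601 weights; Cb and Cr are
       centered on 128 for 8-bit samples. */
    void rgb_to_ycbcr(double r, double g, double b,
                      double *y, double *cb, double *cr)
    {
        *y  = 0.299 * r + 0.587 * g + 0.114 * b;
        *cb = 0.564 * (b - *y) + 128.0;
        *cr = 0.713 * (r - *y) + 128.0;
    }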
 
2. (Optional) Downsample each component by averaging together groups of
pixels.  The luminance component is left at full resolution, while the color
components are usually reduced 2:1 horizontally and either 2:1 or 1:1 (no
change) vertically.  In JPEG-speak these alternatives are usually called
2h2v and 2h1v sampling, but you may also see the terms "411" and "422"
sampling.  This step immediately reduces the data volume by one-half or
one-third, while having almost no impact on perceived quality.  (Obviously
this would not be true if you tried it in RGB color space...)  Note that
downsampling is not applicable to gray-scale data.
 
3. Group the pixel values for each component into 8x8 blocks.  Transform each
8x8 block through a discrete cosine transform (DCT); this is a relative of the
Fourier transform and likewise gives a frequency map, with 8x8 components.
Thus you now have numbers representing the average value in each block and
successively higher-frequency changes within the block.  The motivation for
doing this is that you can now throw away high-frequency information without
affecting low-frequency information.  (The DCT transform itself is reversible
except for roundoff error.)  See question 25 for fast DCT algorithms.
 
4. In each block, divide each of the 64 frequency components by a separate
"quantization coefficient", and round the results to integers.  This is the
fundamental information-losing step.  A Q.C. of 1 loses no information;
larger Q.C.s lose successively more info.  The higher frequencies are normally
reduced much more than the lower.  (All 64 Q.C.s are parameters to the
compression process; tuning them for best results is a black art.  It seems
likely that the best values are yet to be discovered.  Most existing coders
use simple multiples of the example tables given in the JPEG standard.)
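
The mechanics of step 4 are trivial; all the cleverness is in choosing the
table.  A sketch (the table values themselves are not shown here):

    /* Divide each DCT coefficient by its quantization coefficient and
       round to the nearest integer.  Larger table entries discard more
       of the low-order information. */
    void quantize_block(const double dct[8][8], const int qtable[8][8],
                        int out[8][8])
    {
        int i, j;
        double v;

        for (i = 0; i < 8; i++)
            for (j = 0; j < 8; j++) {
                v = dct[i][j] / qtable[i][j];
                out[i][j] = (int)(v + (v >= 0.0 ? 0.5 : -0.5));
            }
    }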
 
5. Encode the reduced coefficients using either Huffman or arithmetic coding.
(Strictly speaking, baseline JPEG only allows Huffman coding; arithmetic
coding is an optional extension.)   Notice that this step is lossless, so it
doesn't affect image quality.  The arithmetic coding option uses Q-coding;
it is identical to the coder used in JBIG (see question 74).  Be aware that
Q-coding is patented.  Most existing implementations support only the Huffman
mode, so as to avoid license fees.  The arithmetic mode offers maybe 5 or 10%
better compression, which isn't enough to justify paying fees.
 
6. Tack on appropriate headers, etc, and output the result.  In an
"interchange" JPEG file, all of the compression parameters are included
in the headers so that the decompressor can reverse the process.  For
specialized applications, the spec permits the parameters to be omitted
from the file; this saves several hundred bytes of overhead, but it means
that the decompressor must know what parameters the compressor used.
 
 
The decompression algorithm reverses this process, and typically adds some
smoothing steps to reduce pixel-to-pixel discontinuities.
 
 
Extensions:
 
The progressive mode is intended to support real-time transmission of images.
It allows the DCT coefficients to be sent incrementally in multiple "scans"
of the image.  With each scan, the decoder can produce a higher-quality
rendition of the image.  Thus a low-quality preview can be sent very quickly,
then refined as time allows.  Notice that the decoder must do essentially a
full JPEG decode cycle for each scan, so this scheme is useful only with fast
decoders (meaning dedicated hardware, at least at present).  However, the
total number of bits sent can actually be somewhat less than is necessary in
the baseline mode, especially if arithmetic coding is used.  So progressive
coding might be useful even if the decoder will simply save up the bits and
make only one output pass.
 
The hierarchical mode represents an image at multiple resolutions.  For
example, one could provide 512x512, 1024x1024, and 2048x2048 versions of the
image.  The higher-resolution images are coded as differences from the next
smaller image, and thus require many fewer bits than they would if stored
independently.  (However, the total number of bits will be greater than that
needed to store just the highest-resolution frame.)  Note that the individual
frames in a hierarchical sequence may be coded progressively if desired.
 
 
Lossless JPEG:
 
The separate lossless mode does not use DCT, since roundoff errors prevent a
DCT calculation from being lossless.  For the same reason, one would not
normally use colorspace conversion or downsampling, although these are
permitted by the standard.  The lossless mode simply codes the difference
between each pixel and the "predicted" value for the pixel.  The predicted
value is a simple function of the already-transmitted pixels just above and
to the left of the current one (eg, their average; 8 different predictor
functions are permitted).  The sequence of differences is encoded using the
same back end (Huffman or arithmetic) used in the lossy mode.
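
A sketch of the prediction step (this shows just one of the eight permitted
predictors, the average of the left and upper neighbors, and glosses over
the special handling of the first row and column):

    /* Predict a pixel from already-transmitted neighbors; the encoder
       then codes image[y*width + x] - predict(image, width, x, y). */
    int predict(const unsigned char *image, int width, int x, int y)
    {
        int left  = (x > 0) ? image[y * width + (x - 1)] : 0;
        int above = (y > 0) ? image[(y - 1) * width + x] : 0;
        return (left + above) / 2;
    }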
 
The main reason for providing a lossless option is that it makes a good
adjunct to the hierarchical mode: the final scan in a hierarchical sequence
can be a lossless coding of the remaining differences, to achieve overall
losslessness.  This isn't quite as useful as it may at first appear, because
exact losslessness is not guaranteed unless the encoder and decoder have
identical IDCT implementations (ie identical roundoff errors).
 
 
References:
 
For a good technical introduction to JPEG, see:
	Wallace, Gregory K.  "The JPEG Still Picture Compression Standard",
	Communications of the ACM, April 1991 (vol. 34 no. 4), pp. 30-44.
(Adjacent articles in that issue discuss MPEG motion picture compression,
applications of JPEG, and related topics.)  If you don't have the CACM issue
handy, a PostScript file containing a revised version of this article is
available at ftp.uu.net, graphics/jpeg/wallace.ps.Z.  The file (actually a
preprint for an article to appear in IEEE Trans. Consum. Elect.) omits the
sample images that appeared in CACM, but it includes corrections and some
added material.  Note: the Wallace article is copyright ACM and IEEE, and
it may not be used for commercial purposes.
 
An alternative, more leisurely explanation of JPEG can be found in "The Data
Compression Book" by Mark Nelson ([Nel 1991], see question 7).  This book
provides excellent introductions to many data compression methods including
JPEG, plus sample source code in C.  The JPEG-related source code is far from
industrial-strength, but it's a pretty good learning tool.
 
An excellent textbook about JPEG is "JPEG Still Image Data Compression
Standard" by William B. Pennebaker and Joan L. Mitchell.  Published by Van
Nostrand Reinhold, 1993, ISBN 0-442-01272-1.  650 pages, price US$59.95.
(VNR will accept credit card orders at 800/842-3636, or get your local
bookstore to order it.)  This book includes the complete text of the ISO
JPEG standards, DIS 10918-1 and draft DIS 10918-2.  Review by Tom Lane:
"This is by far the most complete exposition of JPEG in existence.  It's
written by two people who know what they are talking about: both serve on the
ISO JPEG standards committee.  If you want to know how JPEG works or why it
works that way, this is the book to have."
 
There are a number of errors in the first printing of the Pennebaker
& Mitchell book.  An errata list is available at ftp.uu.net:
graphics/jpeg/pm.errata.  At last report, all were fixed in the
second printing.
 
The official specification of JPEG is not currently available on-line.
I hear that CCITT specs may be on-line sometime soon, which would change this.
At the moment, your best bet is to buy the Pennebaker and Mitchell textbook.
 
 
------------------------------------------------------------------------------
 
Subject: [76] What is Vector Quantization?
 
Some vector quantization software for data analysis is available
from cochlea.hut.fi (130.233.168.48) in the /pub directory.  One
package is lvq_pak and one is som_pak (som_pak generates Kohonen maps
of data using lvq to cluster it).
 
For a book on Vector Quantization, see the reference (Gersho and Gray)
given in item 7 of this FAQ.
 
A short introduction to Vector Quantization, written by Alex Zatsman
<[email protected]>:
 
In Scalar Quantization one represents the values by a fixed subset of
representative values. For example, if you have 16-bit values and send
only the 8 most significant bits, you get an approximation of the
original data at the expense of precision. In this case the fixed
subset is all the 16-bit numbers divisible by 256, i.e. 0, 256, 512,...

In Vector Quantization you represent not individual values but
(usually small) arrays of them. A typical example is a color map: a
color picture can be represented by a 2D array of triplets (RGB
values). In most pictures those triplets do not cover the whole RGB
space but tend to concentrate in certain areas. For example, the
picture of a forest will typically have a lot of green. One can select
a relatively small subset (typically 256 elements) of representative
colors, i.e. RGB triplets, and then approximate each triplet by the
representative of that small set. In the case of 256 representatives
one can use 1 byte instead of 3 for each pixel.
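
The quantization step for the color-map example might look like the C
sketch below; designing a good codebook in the first place is the hard
part and is not shown.

    /* Replace one RGB triplet by the index of the nearest of 256
       representative colors (squared Euclidean distance). */
    int closest_entry(const unsigned char codebook[256][3],
                      unsigned char r, unsigned char g, unsigned char b)
    {
        int i, best = 0;
        long best_dist = -1;

        for (i = 0; i < 256; i++) {
            long dr = (long)r - codebook[i][0];
            long dg = (long)g - codebook[i][1];
            long db = (long)b - codebook[i][2];
            long dist = dr * dr + dg * dg + db * db;
            if (best_dist < 0 || dist < best_dist) {
                best_dist = dist;
                best = i;
            }
        }
        return best;    /* one byte now stands for three */
    }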
 
One can do the same for any large data set, especially when
consecutive points are correlated in some way. CELP speech compression
algorithms call those subsets "codebooks" and use them to quantize
excitation vectors for linear prediction -- hence the name CELP, which
stands for Codebook Excited Linear Prediction. (See item 26 in part 1
of this FAQ for more information about CELP.)
 
Note that Vector Quantization, just like Scalar Quantization, is a lossy
compression.
 
------------------------------------------------------------------------------
 
Subject: [77] Introduction to Fractal compression (long)
 
 
Written by John Kominek <[email protected]>.
 
Seven things you should know about Fractal Image Compression (assuming that 
you want to know about it).
 
   1. It is a promising new technology, arguably superior to JPEG -- 
      but only with an argument.
   2. It is a lossy compression method.
   3. The fractals in Fractal Image Compression are Iterated Function
      Systems.
   4. It is a form of Vector Quantization, one that employs a virtual
      codebook.
   5. Resolution enhancement is a powerful feature but is not some
      magical way of achieving 1000:1 compression.
   6. Compression is slow, decompression is fast.
   7. The technology is patented.
 
That's the scoop in condensed form. Now to elaborate, beginning with a little 
background.
 
 
 A Brief History of Fractal Image Compression
 --------------------------------------------
 
The birth of fractal geometry (or rebirth, rather) is usually traced to IBM 
mathematician Benoit B. Mandelbrot and the 1977 publication of his seminal 
book The Fractal Geometry of Nature. The book put forth a powerful thesis: 
traditional geometry with its straight lines and smooth surfaces does not 
resemble the geometry of trees and clouds and mountains. Fractal geometry, 
with its convoluted coastlines and detail ad infinitum, does.
 
This insight opened vast possibilities. Computer scientists, for one, found a 
mathematics capable of generating artificial and yet realistic looking land-
scapes, and the trees that sprout from the soil. And mathematicians had at 
their disposal a new world of geometric entities.
 
It was not long before mathematicians asked if there was a unity among this 
diversity. There is, as John Hutchinson demonstrated in 1981, it is the branch 
of mathematics now known as Iterated Function Theory. Later in the decade 
Michael Barnsley, a leading researcher from Georgia Tech, wrote the popular 
book Fractals Everywhere. The book presents the mathematics of Iterated Func-
tion Systems (IFS), and proves a result known as the Collage Theorem. The 
Collage Theorem states what an Iterated Function System must be like in order 
to represent an image.
 
This presented an intriguing possibility. If, in the forward direction, frac-
tal mathematics is good for generating natural looking images, then, in the 
reverse direction, could it not serve to compress images? Going from a given 
image to an Iterated Function System that can generate the original (or at 
least closely resemble it), is known as the inverse problem. This problem 
remains unsolved.
 
Barnsley, however, armed with his Collage Theorem, thought he had it solved. 
He applied for and was granted a software patent and left academia to found 
Iterated Systems Incorporated (US patent 4,941,193. Alan Sloan is the co-
grantee of the patent and co-founder of Iterated Systems.)  Barnsley announced 
his success to the world in the January 1988 issue of BYTE magazine. This 
article did not address the inverse problem but it did exhibit several images 
purportedly compressed in excess of 10,000:1. Alas, it was not a breakthrough. 
The images were given suggestive names such as "Black Forest" and "Monterey 
Coast" and "Bolivian Girl" but they were all manually constructed. Barnsley's 
patent has come to be derisively referred to as the "graduate student algo-
rithm."
 
Graduate Student Algorithm
   o Acquire a graduate student.
   o Give the student a picture.
   o And a room with a graphics workstation.
   o Lock the door.
   o Wait until the student has reverse engineered the picture.
   o Open the door.
 
Attempts to automate this process have met little success. As Barnsley admit-
ted in 1988: "Complex color images require about 100 hours each to encode and 
30 minutes to decode on the Masscomp [dual processor workstation]." That's 100 
hours with a _person_ guiding the process.
 
Ironically, it was one of Barnsley's PhD students that made the graduate 
student algorithm obsolete. In March 1988, according to Barnsley, he arrived 
at a modified scheme for representing images called Partitioned Iterated 
Function Systems (PIFS). Barnsley applied for and was granted a second patent 
on an algorithm that can automatically convert an image into a Partitioned 
Iterated Function System, compressing the image in the process. (US patent 
5,065,447. Granted on Nov. 12 1991.) For his PhD thesis, Arnaud Jacquin imple-
mented the algorithm in software, a description of which appears in his land-
mark paper "Image Coding Based on a Fractal Theory of Iterated Contractive 
Image Transformations." The algorithm was not sophisticated, and not speedy, 
but it was fully automatic. This came at a price: gone was the promise of 
10,000:1 compression. A 24-bit color image could typically be compressed from 
8:1 to 50:1 while still looking "pretty good." Nonetheless, all contemporary 
fractal image compression programs are based upon Jacquin's paper.
 
That is not to say there are many fractal compression programs available. 
There are not. Iterated Systems sell the only commercial compressor/decompres-
sor, an MS-Windows program called "Images Incorporated." There are also an 
increasing number of academic programs being made freely available. Unfor-
tunately, these programs are -- how should I put it? -- of merely academic 
quality. 
 
This scarcity has much to do with Iterated Systems' tight lipped policy about 
their compression technology. They do, however, sell a Windows DLL for pro-
grammers. In conjunction with independent development by researchers else-
where, therefore, fractal compression will gradually become more pervasive. 
Whether it becomes all-pervasive remains to be seen.
 
Historical Highlights:
   1977 -- Benoit Mandelbrot finishes the first edition of The Fractal 
           Geometry of Nature.
   1981 -- John Hutchinson publishes "Fractals and Self-Similarity."
   1983 -- Revised edition of The Fractal Geometry of Nature is 
           published.
   1985 -- Michael Barnsley and Stephen Demko introduce Iterated
           Function Theory in "Iterated Function Systems and the Global
           Construction of Fractals."
   1987 -- Iterated Systems Incorporated is founded.
   1988 -- Barnsley publishes the book Fractals Everywhere.
   1990 -- Barnsley's first patent is granted.
   1991 -- Barnsley's second patent is granted.
   1992 -- Arnaud Jacquin publishes an article that describes the first
           practical fractal image compression method.
   1993 -- The book Fractal Image Compression by Michael Barnsley and Lyman 
           Hurd is published.
        -- The Iterated Systems' product line matures. 
   1994 -- Put your name here.
 
 
 On the Inside
 -------------
 
The fractals that lurk within fractal image compression are not those of the 
complex plane (Mandelbrot Set, Julia sets), but of Iterated Function Theory. 
When lecturing to lay audiences, the mathematician Heinz-Otto Peitgen intro-
duces the notion of Iterated Function Systems with the alluring metaphor of a 
Multiple Reduction Copying Machine. A MRCM is imagined to be a regular copying 
machine except that:
 
  1. There are multiple lens arrangements to create multiple overlapping
     copies of the original.
  2. Each lens arrangement reduces the size of the original.
  3. The copier operates in a feedback loop, with the output of one
     stage the input to the next. The initial input may be anything.
 
The first point is what makes an IFS a system. The third is what makes it 
iterative. As for the second, it is implicitly understood that the functions 
of an Iterated Function Systems are contractive. 
 
An IFS, then, is a set of contractive transformations that map from a defined 
rectangle of the real plane to smaller portions of that rectangle. Almost 
invariably, affine transformations are used. Affine transformations act to 
translate, scale, shear, and rotate points in the plane. Here is a simple 
example:
 
 
     |---------------|              |-----|
     |x              |              |1    |
     |               |              |     |
     |               |         |---------------|
     |               |         |2      |3      |
     |               |         |       |       |
     |---------------|         |---------------|  
 
         Before                      After
 
     Figure 1. IFS for generating Sierpinski's Triangle.
 
This IFS contains three component transformations (three separate lens ar-
rangements in the MRCM metaphor). Each one shrinks the original by a factor of 
2, and then translates the result to a new location. It may optionally scale 
and shift the luminance values of the rectangle, in a manner similar to the 
contrast and brightness knobs on a TV.
 
The amazing property of an IFS is that when the set is evaluated by iteration, 
(i.e. when the copy machine is run), a unique image emerges. This latent image 
is called the fixed point or attractor of the IFS. As guaranteed by a result 
known as the Contraction Theorem, it is completely independent of the initial 
image. Two famous examples are Sierpinski's Triangle and Barnsley's Fern. 
Because these IFSs are contractive, self-similar detail is created at every 
resolution down to the infinitesimal. That is why the images are fractal.
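
Here is a toy C rendering of the idea (the three copies are placed at the
top-left, bottom-left and bottom-right, which differs slightly from Figure 1
but also produces a Sierpinski-type attractor): start from an arbitrary
image, here one with every pixel set, and run the "copy machine" a few times.

    #include <stdio.h>
    #include <string.h>

    #define N 64

    static unsigned char img[N][N], next[N][N];

    int main(void)
    {
        int it, x, y;

        memset(img, 1, sizeof img);             /* any starting image will do */
        for (it = 0; it < 8; it++) {
            memset(next, 0, sizeof next);
            for (y = 0; y < N; y++)
                for (x = 0; x < N; x++)
                    if (img[y][x]) {            /* three half-size copies */
                        next[y/2][x/2] = 1;
                        next[y/2 + N/2][x/2] = 1;
                        next[y/2 + N/2][x/2 + N/2] = 1;
                    }
            memcpy(img, next, sizeof img);
        }
        for (y = 0; y < N; y++) {               /* crude ASCII display */
            for (x = 0; x < N; x++)
                putchar(img[y][x] ? '*' : ' ');
            putchar('\n');
        }
        return 0;
    }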
 
The promise of using fractals for image encoding rests on two suppositions: 1. 
many natural scenes possess this detail within detail structure (e.g. clouds), 
and 2. an IFS can be found that generates a close approximation of a scene 
using only a few transformations. Barnsley's fern, for example, needs but 
four. Because only a few numbers are required to describe each transformation, 
an image can be represented very compactly. Given an image to encode, finding 
the optimal IFS from all those possible is known as the inverse problem.
 
The inverse problem -- as mentioned above -- remains unsolved. Even if it 
were, it may be to no avail. Everyday scenes are very diverse in subject 
matter; on the whole, they do not obey fractal geometry. Real ferns do not branch 
down to infinity. They are distorted, discolored, perforated and torn. And the 
ground on which they grow looks very much different.
 
To capture the diversity of real images, then, Partitioned IFSs are employed. 
In a PIFS, the transformations do not map from the whole image to the parts, 
but from larger parts to smaller parts. An image may vary qualitatively from 
one area to the next (e.g. clouds then sky then clouds again). A PIFS relates 
those areas of the original image that are similar in appearance. Using Jac-
quin's notation, the big areas are called domain blocks and the small areas 
are called range blocks. It is necessary that every pixel of the original 
image belong to (at least) one range block. The pattern of range blocks is 
called the partitioning of an image.
 
Because this system of mappings is still contractive, when iterated it will 
quickly converge to its latent fixed point image. Constructing a PIFS amounts 
to pairing each range block to the domain block that it most closely resembles 
under some to-be-determined affine transformation. Done properly, the PIFS 
encoding of an image will be much smaller than the original, while still 
resembling it closely.
 
Therefore, a fractal compressed image is an encoding that describes:
   1. The grid partitioning (the range blocks).
   2. The affine transforms (one per range block).
 
The decompression process begins with a flat gray background. Then the set of 
transformations is repeatedly applied. After about four iterations the attrac-
tor stabilizes. The result will not (usually) be an exact replica of the 
original, but reasonably close.
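 
To make the decoding loop concrete, here is a sketch in C. The Transform 
record, image size, and block size below are illustrative assumptions (and the 
isometric flip/rotate step is omitted for brevity); they are not the storage 
format of any particular fractal codec.
 
    /* pifs_decode.c -- sketch of PIFS decompression: start from flat
     * gray and repeatedly apply the stored range<-domain mappings.
     */
    #include <string.h>

    #define SIZE 256              /* image is SIZE x SIZE, 8 bits/pixel */
    #define RB     8              /* range blocks are RB x RB           */

    typedef struct {
        int   rx, ry;             /* top-left corner of range block     */
        int   dx, dy;             /* top-left corner of 2*RB domain blk */
        float scale;              /* luminance scaling (contrast)       */
        float offset;             /* luminance offset  (brightness)     */
    } Transform;

    static unsigned char cur[SIZE][SIZE], nxt[SIZE][SIZE];

    static void apply(const Transform *t)
    {
        int i, j;
        for (i = 0; i < RB; i++)
            for (j = 0; j < RB; j++) {
                /* shrink the domain block 2:1 by averaging 2x2 pixels */
                int d = (cur[t->dy + 2*i    ][t->dx + 2*j    ] +
                         cur[t->dy + 2*i    ][t->dx + 2*j + 1] +
                         cur[t->dy + 2*i + 1][t->dx + 2*j    ] +
                         cur[t->dy + 2*i + 1][t->dx + 2*j + 1]) / 4;
                int v = (int)(t->scale * d + t->offset);
                if (v < 0)   v = 0;
                if (v > 255) v = 255;
                nxt[t->ry + i][t->rx + j] = (unsigned char)v;
            }
    }

    void pifs_decode(const Transform *code, int ntransforms, int niter)
    {
        int it, k;
        memset(cur, 128, sizeof cur);          /* start from flat gray  */
        for (it = 0; it < niter; it++) {       /* ~4 passes usually do  */
            for (k = 0; k < ntransforms; k++)
                apply(&code[k]);
            memcpy(cur, nxt, sizeof cur);      /* output becomes input  */
        }
    }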
 
 
 Scalelessness and Resolution Enhancement
 ----------------------------------------
 
When an image is captured by an acquisition device, such as a camera or scan-
ner, it acquires a scale determined by the sampling resolution of that device. 
If software is used to zoom in on the image, beyond a certain point you don't 
see additional detail, just bigger pixels. 
 
A fractal image is different. Because the affine transformations are spatially 
contractive, detail is created at finer and finer resolutions with each itera-
tion. In the limit, self-similar detail is created at all levels of resolu-
tion, down to the infinitesimal. Because there is no level that 'bottoms out', 
fractal images are considered to be scaleless.
 
What this means in practice is that as you zoom in on a fractal image, it will 
still look 'as it should' without the staircase effect of pixel replication. 
The significance of this is the cause of some misconception, so here is the right 
spot for a public service announcement.
 
                        /--- READER BEWARE ---\
 
Iterated Systems is fond of the following argument. Take a portrait that is, 
let us say, a grayscale image 250x250 pixels in size, 1 byte per pixel. You 
run it through their software and get a 2500 byte file (compression ratio = 
25:1). Now zoom in on the person's hair at 4x magnification. What do you see? 
A texture that still looks like hair. Well then, it's as if you had an image 
1000x1000 pixels in size. So your _effective_ compression ratio is 25x16=400.
 
But there is a catch. Detail has not been retained, but generated. With a 
little luck it will look as it should, but don't count on it. Zooming in on a 
person's face will not reveal the pores. 
 
Objectively, what fractal image compression offers is an advanced form of 
interpolation. This is a useful and attractive property. Useful to graphic 
artists, for example, or for printing on a high resolution device. But it does 
not bestow fantastically high compression ratios.
 
                        \--- READER BEWARE ---/
 
That said, what is resolution enhancement? It is the process of compressing an 
image, expanding it to a higher resolution, saving it, then discarding the 
iterated function system. In other words, the compressed fractal image is the 
means to an end, not the end itself.
 
 
 The Speed Problem
 -----------------
 
The essence of the compression process is the pairing of each range block to a 
domain block such that the difference between the two, under an affine trans-
formation, is minimal. This involves a lot of searching. 
 
In fact, there is nothing that says the blocks have to be squares or even 
rectangles. That is just an imposition made to keep the problem tractable. 
 
More generally, the method of finding a good PIFS for any given image involves 
five main issues: 
   1. Partitioning the image into range blocks. 
   2. Forming the set of domain blocks. 
   3. Choosing the type of transformations that will be considered. 
   4. Selecting a distance metric between blocks. 
   5. Specifying a method for pairing range blocks to domain blocks. 
 
Many possibilities exist for each of these. The choices that Jacquin offered 
in his paper are: 
   1. A two-level regular square grid with 8x8 pixels for the large
      range blocks and 4x4 for the small ones. 
   2. Domain blocks are 16x16 and 8x8 pixels in size with a subsampling 
      step size of four. The 8 isometric symmetries (four rotations, 
      four mirror flips) expand the domain pool to a virtual domain 
      pool eight times larger. 
   3. The choices in the last point imply a shrinkage by two in each 
      direction, with a possible rotation or flip, and then a trans-
      lation in the image plane. 
   4. Mean squared error is used. 
   5. The blocks are categorized as smooth, midrange, simple edge, or
      complex edge. For a given range block, only the matching category
      is searched for the best match (see the sketch after this list).
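 
To make points 4 and 5 concrete, here is a sketch of scoring one (range, 
domain) pair under the mean-squared-error metric. The best luminance scale and 
offset for the pair drop out of a least-squares fit; the flat-array interface 
and function name are illustrative assumptions (the domain block is assumed to 
be already shrunk to range-block size). The encoder calls something like this 
for every candidate domain block in the range block's category and keeps the 
cheapest pairing.
 
    /* match.c -- least-squares fit of  range ~ scale * domain + offset,
     * returning the resulting mean squared error for this pairing.
     */
    double match_cost(const unsigned char *range,
                      const unsigned char *domain, int n,
                      double *scale, double *offset)
    {
        double sr = 0, sd = 0, srd = 0, sdd = 0, s, o, e, err = 0;
        int i;

        for (i = 0; i < n; i++) {
            sr  += range[i];
            sd  += domain[i];
            srd += (double)range[i]  * domain[i];
            sdd += (double)domain[i] * domain[i];
        }
        if (n * sdd - sd * sd != 0.0)
            s = (n * srd - sd * sr) / (n * sdd - sd * sd);
        else
            s = 0.0;                       /* flat domain block          */
        o = (sr - s * sd) / n;

        for (i = 0; i < n; i++) {
            e = s * domain[i] + o - range[i];
            err += e * e;
        }
        *scale  = s;
        *offset = o;
        return err / n;                    /* mean squared error         */
    }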
 
The importance of categorization can be seen by calculating the size of the 
total domain pool. Suppose the image is partitioned into 4x4 range blocks. A 
256x256 image contains a total of (256-8+1)^2  = 62,001 different 8x8 domain 
blocks. Including the 8 isometric symmetries increases this total to 496,008. 
There are (256-4+1)^2 = 64,009 4x4 range blocks, which makes for a maximum of 
31,748,976,072 possible pairings to test. Even on a fast workstation an ex-
haustive search is prohibitively slow. You can start the program before de-
parting work Friday afternoon; Monday morning, it will still be churning away. 
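 
The arithmetic is easy to re-derive; this throwaway fragment just confirms the 
counts quoted above (image and block sizes as in the text).
 
    /* poolsize.c -- re-derive the domain-pool figures quoted above. */
    #include <stdio.h>

    int main(void)
    {
        long domains = (long)(256 - 8 + 1) * (256 - 8 + 1);   /*  62,001 */
        long vpool   = domains * 8;                           /* 496,008 */
        long ranges  = (long)(256 - 4 + 1) * (256 - 4 + 1);   /*  64,009 */
        printf("pairings to test: %.0f\n",
               (double)vpool * (double)ranges);        /* 31,748,976,072 */
        return 0;
    }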
 
Increasing the search speed is the main challenge facing fractal image com-
pression.
 
 
 Similarity to Vector Quantization
 ---------------------------------
 
To the VQ community, a "vector" is a small rectangular block of pixels. The 
premise of vector quantization is that some patterns occur much more frequent-
ly than others. So the clever idea is to store only a few of these common 
patterns in a separate file called the codebook. Some codebook vectors are 
flat, some are sloping, some contain tight texture, some sharp edges, and so 
on -- there is a whole corpus on how to construct a codebook. Each codebook 
entry (each domain block) is assigned an index number. A given image, then, is 
partitioned into a regular grid array. Each grid element (each range block) is 
represented by an index into the codebook. Decompressing a VQ file involves 
assembling an image out of the codebook entries. Brick by brick, so to speak.
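 
A minimal sketch of plain VQ, with the codebook size and flat 4x4 block layout 
chosen only for illustration: the encoder replaces each block by the index of 
the nearest codebook vector, and the decoder just copies that vector back.
 
    /* vq.c -- plain ("brick by brick") vector quantization. */
    #define CODEBOOK_SIZE 256
    #define BLOCK_PIXELS   16                      /* e.g. 4x4 blocks   */

    /* encoder: index of the codebook vector closest to `block` (MSE)   */
    int vq_encode(const unsigned char *block,
                  const unsigned char codebook[][BLOCK_PIXELS])
    {
        int  best = 0, c, i;
        long bestErr = -1;

        for (c = 0; c < CODEBOOK_SIZE; c++) {
            long err = 0;
            for (i = 0; i < BLOCK_PIXELS; i++) {
                long d = (long)block[i] - codebook[c][i];
                err += d * d;
            }
            if (bestErr < 0 || err < bestErr) {
                bestErr = err;
                best    = c;
            }
        }
        return best;
    }

    /* decoder: copy the indexed vector back into the output block      */
    void vq_decode(int index, const unsigned char codebook[][BLOCK_PIXELS],
                   unsigned char *block)
    {
        int i;
        for (i = 0; i < BLOCK_PIXELS; i++)
            block[i] = codebook[index][i];
    }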
 
The similarity to fractal image compression is apparent, with some notable 
differences. 
   1. In VQ the range blocks and domain blocks are the same size; in an
      IFS the domain blocks are always larger. 
   2. In VQ the domain blocks are copied straight; in an IFS each domain 
      block undergoes a luminance scaling and offset.
   3. In VQ the codebook is stored apart from the image being coded; in 
      an IFS the codebook is not explicitly stored. It is comprised of
      portions of the attractor as it emerges during iteration. For that
      reason it is called a "virtual codebook." It has no existence
      independent of the affine transformations that define an IFS.
   4. In VQ the codebook is shared among many images; in an IFS the 
      virtual codebook is specific to each image.
 
There is a more refined version of VQ called gain-shape vector quantization in 
which a luminance scaling and offset is also allowed. This makes the similari-
ty to fractal image compression as close as can be.
 
 
 Compression Ratios
 ------------------
 
Exaggerated claims notwithstanding, compression ratios typically range from 
4:1 to 100:1. All other things equal, color images can be compressed to a 
greater extent than grayscale images.
 
The size of a fractal image file is largely determined by the number of trans-
formations of the PIFS. For the sake of simplicity, and for the sake of com-
parison to JPEG, assume that a 256x256x8 image is partitioned into a regular 
partitioning of 8x8 blocks. There are 1024 range blocks and thus 1024 trans-
formations to store. How many bits are required for each?
 
In most implementations the domain blocks are twice the size of the range 
blocks. So the spatial contraction is constant and can be hard coded into the 
decompression program. What needs to be stored are:
 
                                  scheme 1  scheme 2
   x position of domain block            8         6
   y position of domain block            8         6
   luminance scaling                     8         5
   luminance offset                      8         6
   symmetry indicator                    3         3
                                        --        --
                                        35        26 bits
 
In the first scheme, a byte is allocated to each number except for the symme-
try indicator. The upper bound on the compression ratio is thus (8x8x8)/35 = 
14.63. In the second scheme, domain blocks are restricted to coordinates 
modulo 4. Plus, experiments have revealed that 5 bits per scale factor and 6 
bits per offset still give good visual results. So the compression ratio limit 
is now 19.69. Respectable but not outstanding.
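 
The same arithmetic, as a fragment one can rerun for other block sizes or bit 
allocations (the two calls reproduce the schemes above):
 
    /* ratio.c -- upper bound on the compression ratio for a uniform
     * partition: (pixels per range block * 8 bits) / (bits per transform).
     */
    #include <stdio.h>

    static double ratio(int blockSide, int bitsPerTransform)
    {
        return (double)(blockSide * blockSide * 8) / bitsPerTransform;
    }

    int main(void)
    {
        printf("scheme 1: %5.2f : 1\n", ratio(8, 35));       /* 14.63 */
        printf("scheme 2: %5.2f : 1\n", ratio(8, 26));       /* 19.69 */
        return 0;
    }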
 
There are other, more complicated, schemes to reduce the bit rate further. The 
most common is to use a three or four level quadtree structure for the range 
partitioning. That way, smooth areas can be represented with large range 
blocks (high compression), while smaller blocks are used as necessary to 
capture the details. In addition, entropy coding can be applied as a back-end 
step to gain an extra 20% or so.
 
 
 Quality: Fractal vs. JPEG
 -------------------------
 
The greatest irony of the coding community is that great pains are taken to 
precisely measure and quantify the error present in a compressed image, and 
great effort is expended toward minimizing an error measure that most often is 
-- let us be gentle -- of dubious value. These measures include signal-to-noise 
ratio, root mean square error, and mean absolute error. A simple example is 
systematic shift: add a value of 10 to every pixel. Standard error measures 
indicate a large distortion, but the image has merely been brightened.
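 
The systematic-shift example is easy to check. Adding 10 to every pixel gives 
an RMS error of exactly 10 gray levels (about 28 dB PSNR, a figure usually 
read as mediocre quality), yet nothing but the brightness has changed. The 
ramp image in this throwaway fragment is arbitrary; the answer does not 
depend on it.
 
    /* shift_error.c -- "error" introduced by brightening an image by 10. */
    #include <math.h>
    #include <stdio.h>

    int main(void)
    {
        double sum = 0.0, rmse;
        int    i, n = 256 * 256;

        for (i = 0; i < n; i++) {
            int original   = i % 246;          /* any values 0..245     */
            int brightened = original + 10;    /* stays within 0..255   */
            sum += (double)(brightened - original) * (brightened - original);
        }
        rmse = sqrt(sum / n);
        printf("RMSE = %.1f, PSNR = %.1f dB\n",
               rmse, 20.0 * log10(255.0 / rmse));   /* 10.0, 28.1 dB    */
        return 0;
    }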
 
With respect to those dubious error measures, and at the risk of over-sim-
plification, the results of tests reveal the following: for low compression 
ratios JPEG is better, for high compression ratios fractal encoding is better. 
The crossover point varies but is often around 40:1. This figure bodes well 
for JPEG since beyond the crossover point images are so severely distorted 
that they are seldom worth using. 
 
Proponents of fractal compression counter that signal-to-noise is not a good 
error measure and that the distortions present are much more 'natural looking' 
than the blockiness of JPEG, at both low and high bit rates. This is a valid 
point but is by no means universally accepted. 
 
What the coding community desperately needs is an easy to compute error meas-
ure that accurately captures subjective impression of human viewers. Until 
then, your eyes are the best judge.
 
 
 Finding Out More
 ----------------
 
 Please refer to item 17 in part 1 of this FAQ for a list of references,
available software, and ftp sites concerning fractal compression.
 
End of item 77.
 
          End of part 2 of the comp.compression faq.

Article: 1433
Path: nntpd.lkg.dec.com!crl.dec.com!crl.dec.com!decwrl!hookup!swrinde!cs.utexas.edu!howland.reston.ans.net!EU.net!julienas!chorus!chorus.fr
From: [email protected] (Jean-loup Gailly)
Newsgroups: comp.compression,comp.compression.research,news.answers,comp.answers
Subject: comp.compression Frequently Asked Questions (part 3/3)
Summary: *** READ THIS BEFORE POSTING ***
Keywords: data compression, FAQ
Message-ID: <[email protected]>
Date: 12 Oct 94 16:50:18 GMT
Expires: 30 Nov 94 16:17:20 GMT
References: <[email protected]> <[email protected]>
Sender: [email protected]
Reply-To: [email protected]
Followup-To: comp.compression
Lines: 603
Approved: [email protected]
Supersedes: <[email protected]>
Xref: nntpd.lkg.dec.com comp.compression:15049 comp.compression.research:1433 news.answers:30150 comp.answers:7744
 
Archive-name: compression-faq/part3
Last-modified: Aug 24th, 1993
 
This file is part 3 of a set of Frequently Asked Questions for the
groups comp.compression and comp.compression.research.
If you did not get part 1 or 2, you can get them by ftp
on rtfm.mit.edu in directory
   /pub/usenet/news.answers/compression-faq
 
If you don't want to see this FAQ regularly, please add the subject
line to your kill file. If you have corrections or suggestions for
this FAQ, send them to Jean-loup Gailly <[email protected]>.  Thank you.
 
Contents
========
 
Part 3: (Long) list of image compression hardware
 
[85] Image compression hardware
[99] Acknowledgments
 
 
Search for "Subject: [#]" to get to question number # quickly. Some news
readers can also take advantage of the message digest format used here.
 
------------------------------------------------------------------------------
 
Subject: [85] Image compression hardware
 
Here is a list of sources of image compression hardware (JPEG, MPEG,
H.261 and others), reposted with the author's permission.  The list is
certainly a little dated already, but it is a good starting point for
seeking compression chips. (Please send corrections/additions to
[email protected]). References are taken from:
 
   VIDEO COMPRESSION OPTIONS, IEEE CICC 6-May-92
   John J. Bloomer, [email protected], Fathy F. Yassa, Aiman A. Abdel-Malek
   General Electric Corporate R&D, KWC317 Signals and Systems Laboratory
   PO Box 8, Schenectady NY, 12301
 
(Too many people have sent comments, corrections or additions so I am
just making a common acknowledgment here.)
 
 
 Pipelined Processors, Building Blocks (Chip Sets)
 -------------------------------------------------
 
STI3200, IMSA121, STI3208 - SGS-Thompson DCT processors.  602-867-6279
	- 3200 has multiple block size options, DC to 13.5 MHz
	- A121 8x8 fixed blocks, DC to 20MHz, add/sub loop, CCITT compatible
	- 3208 8x8 fixed blocks, DC to 40MHz, CCITT compatible at 20MHz
 
STI3220 - SGS-Thompson motion estimator (H.261, MPEG).  602-867-6279
	- 8-bit input pixels, 4-bit H and V vectors out
	- adjustable block size matcher (8x8, 8x16, 16x16)
	- +7/-8 search window
	- 5V, 2W at 18MHz (max), 68 pin PLCC
 
L64765 , L64735 , L64745 - 3-chip LSI Logic JPEG set.  408-433-4383
	- L64765 raster-to-block and color-space converter, jointly developed
          with Rapid Tech.
	- L64735 block DCT processor
	- L64745 JPEG coder support, stand-alone lossless DPCM codec, dynamic
          Huffman
	- 27 MB/s on CCIR601 frames
	- minimal support logic, color and gray scale
	- 68-pin PGA or PLCC, 27 and 20 MHz versions
 
L647*0 and L6471* families - LSI Logic H.261/MPEG pieces.  408-433-8000
	- L64720 motion estimator, 30/40MHz, 8x8, 16x16 blocks, 32x32 or 16x16 
          search window, 68-pin CPGA or PPGA
	- L64730 & 735 8x8 DCT processors (12 & 8-9 bits)
	- L64740 8x8 block quantization
	- L64760 intra/inter-frame coding decision
	- L64715 BCH error correction
	- L64750/L64751 variable length encode/decode (H.261-specific)
 
ZR36020 and ZR36031 - Zoran DCT processor & quantization/encoding. 408-986-1314
	- JPEG-like scheme using 16-bit, two's complement fixed point
          arithmetic
	- includes bit-rate controls for constant # of pictures per card
	- 7.4 MHz, < 1W, 20mW in standby mode, 7.5 frames/s (f/s)
	- 36020 - 44-pin plastic quad flatpack (PQFP) or 48-pin ceramic DIP
	- 36031 - 100-pin PQFP or 85-pin PGA.
	- co-developments with Fuji Photo Film Co. Ltd. digital IC-card
          camera market
 
	Does 2-passes of image: generate histogram for optimum Huffman
	tables and quantization compute step size (ala H.261 and
	MPEG-I) for each macroblock or minimum coded unit (MCU).
 
	JPEG-compatible codec expected soon.
 
LDM3104  - Olympus DCT coefficient encoder
	- constant rate, digital IC-card camera market
	- 750 mW, 25 mW standby, 100-pin QFP
 
TMC2312 - TRW quantizer/Huffman encoder, TMC2313 Huffman decoder/dequantizer
 
TMC2311 - TRW CMOS Fast Cosine Transform Processor.  
	- 12 Bits, 15 M pixels/s
	- complies with the CCITT SGXV ( e.g. JPEG, H.261 and MPEG )
	- includes an adder-subtractor for linear predictive coding
 
MN195901 - Matsushita Electric Industrial Co.   See ISSCC 1992
	- 16-bit, 60 MIP video signal processor
	- 25 uS instruction processing
	- on-board DCT and absolute differencing
	- Philips Signetics US fab.
 
HGCT - Ricoh CRC, Generalized Chen Transform demonstration chip.  408-281-1436
	- 2D JPEG/MPEG/H.261 compatible DCT
	- includes quantization
	- 30MHz, 15K gates
	- licensing possible
 
GCTX64000 - Graphic Communication Technology Corp. chipset
	- provides CCITT H.261
	- VLSI Technology and Hitachi supply H.261 codec core. 1 micron CMOS.
 
BT - British Telecommunications  plc., Martlesham labs designed
	- H.261 codec chipset, Motorola fab.
	- 13 chips total for codec.
 
 
 Pipelined Processors, Monolithic, Programmable
 ----------------------------------------------
 
Vision Processor - Integrated Information Technology Inc. 408-727-1885
	- generic DCT, motion compensated & entropy coding codec
	- microcode for still- and motion-video compression (JPEG, H.261 and
          MPEG1)
	- 1 micron CMOS, 20 MHz and 33 MHz, PGA and 84-pin QFP  
	- JPEG only and JPEG/H/261/MPEG versions available, H.261 at 30 f/s.  
	- used by Compression Labs, Inc. CDV teleconferencing system
	- rumored to be the heart of the AT&T picture phone
 
MN195901 - Matsushita Electric Industrial Corp
	- 40 MHz DSP, built-in DCT
	- 16-bit fixed-point
 
AVP1000 -  AT&T  JPEG, MPEG and H.261 codec chipset.  800-372-2447
	- 1400D decoder, 1400C system controller
	- 1300E H.261 (CIF, QCIF, CIF240) at 30 f/s, I-frame only MPEG.
	- 1400E is superset of 1300E, motion with 1/2 pixel resolution over +/-
          32 pixels
	- YCbCr video or digital input, on-board rate FIFOs, external RAM
          required
	- 0.75 micron, 50 MHz CMOS
 
	AVP1000 is from AT&T Microelectronics.  The AT&T chip set
	handles MPEG-1, H.261, and JPEG.  1400D has on board color
	space convertor.  Limited to 4Mb/s coded rate.  The DSP does
	the MUSICAM decoding (up to layer II ?)
 
 
82750PB, 82750DB - Intel DVI  pixel and display YUV color space processors.
	- proprietary machine code employed for compression
	- usable for other algorithms (e.g., JPEG, H.261 or MPEG1 at reduced
          data rates)
 
 
 Pipelined Processors, Monolithic, Fixed Lossless - Entropy Coders, DPCM, VQ
 ---------------------------------------------------------------------------
 
DCP - Integrated Information Tech. Inc. Data Compressor Processor  408-727-1885
	- LZ codec with on-chip dictionary store
	- on-chip buffers supporting block moves
	- targeting disk drives and network controller markets
	- 3.3V, 84-pin PQFP
 
Mystic - HP's DC-LZ codec.  408-749-9500
 
AHA3210 - Advanced Hardware Architectures DC-LZ codec. 208-883-8000
	- two independent DMA ports for 10 MB/s compress, decompress &
          pass-thru
	- addressing allows up to 16 MB record compression
	- 20 MHz internal clock, 200 mW, 100-pin PQFP
	- interface to AHA5101/5121 QIC tape controller/formatter
	- HP licensee
 
AHA3xxx/xxy - Rice (UNC) algorithm, 20M samples/sec, 4 to 14 bits. 208-883-8000
 
CRM1000 - CERAM Inc. entropy codec, proprietary algorithm.  719-540-8500
 
Rice - UNC algorithm prototype, 180 Mb/s.  See IEEE CICC 1992
	- other CICC 1992 papers:
	   +JS.E. Kerneny et.al. differential read, pyramidal output CCD
  	   + A. Aggoun et.al. DPCM processing
 
DCD - Philips Data Compressor Decompressor IC. 914-945-6000
	- See CICC 1990 proceedings, H. Blume, et.al.
	- LZ codec, 20 MHz clock
	- Internal FIFOs, separate input/output buses, max 10 Mword/s data in
	- 5 V CMOS, 175-pin PGA
 
9705 - Second generation Stac Electronics accelerator chip.  619-431-7474
	- Stacker LZA compression scheme(LZ-based)
	- compress at approx. 2.5 MB/s, decompress at 6 MB/s (39+ faster than
          9704)
	- standby mode 300uA
	- embedded in tapes and disks (e.g., QIC-122 Ten X Technology
          512-346-8360)
	- file compression board & software:
	    + for the PC/AT - from Stac
	    + for the Macintosh - from Sigma Design 415-770-0100 (40 MHz 9703)
	- InfoChip Systems Inc. - proprietary string-matching technology
          408-727-0514
 
VCEP or OTI95C71/Am95C71 - Oak Technology Inc. 408-737-0888
	- AMD CCITT B&W fax image compression
 
 
 Pipelined Processors, Monolithic, Fixed Lossy
 ---------------------------------------------
 
MB86356B - Fujitsu LTD.
	- JPEG DIS 10918-1 baseline  codec
	- on-chip quantizer tables
	- 2.5M pixel/sec input, up to 10MB/sec output
	- supports progressive and DPCM lossless modes 
	- 135 pin PGA.
 
CL550-30 - C-Cube Microsystems 408-944-8103, [email protected]
	- JPEG-8-R2 compliant baseline codec
	- 350-level pipeline, on-chip Huffman and quantizer table
	- 44.1 MB/sec (15 MB/sec for -10)
	- RGB, YUV, CMYK supported, CCIR 601 in real-time
	- 16/32-bit host interface 
	- 144 pin PGA or QFP, 2.5W at 29.41 MHz
 
	Limited to 2MB/sec (15Mb/s) coded rate.  35MHz PGA version
	available.  2:1 horizontal filter, on board programmable color
	space convertor.  Allows one pair of quantization tables to be
	loaded while the other pair is used to code or decode the data stream.
	Needs maintenance by the host.
 
STI140 - SGS-Thompson JPEG baseline codec.  617-259-0300 [** Now cancelled **]
	- see CICC 1991 proceedings, M. Bolton.
	- 20 Mpixel/sec input, up to 20 MB/sec output
	- supports 24-bit color, 8-bit grey and 12-bit extended pixels
	- on chip Huffman and quantizer tables
	- 144 pin PQFP, 5V, < 2W., 10mW power-down mode
	- 1.2 micron, 3-layer metal CMOS, 20 MHz.
 
UVC7710 - UVC Corp. Integrated Multimedia Processor. Was 714-261-5336, out
          of business now.
	- proprietary, patented intra-frame compression, on-chip code tables
	- 20-35:1, 12.5 Mpixels/sec., compressed audio
	- includes much of the PC-AT (16-bit ISA) bus interface logic
	- 128 pin PJQFP plastic
 
CL950 - C-Cube/JVC implementation of the MPEG-JVC or extended mode MPEG2 
        announced.  6-9 Mb/sec.
 
	JVC mode is not MPEG-II compliant (there isn't an MPEG2 standard yet)
	but is an extension of MPEG1 at a higher rate plus interlace video
	handling.
 
CL450 - Announced June 1992.  Scaled down version of CL950, with 3Mb/sec
        limit.  Does not code or decode JPEG, only MPEG-I decoding.
 
CD-I - ASICs planned for CD-ROM, Compact Disk-Interactive de facto standards
	- CD-ROM XA - Sony-Philips-Motorola-Microsoft
	- CDTV -  Commodore.  YUV processing.
	- audio ADPCM encode/decode PC/AT boards available from Sony
          408-432-0190
 
Motorola MCD250 Full Motion Video Decoder.   512-928-5053.
    This is a CD-I MPEG Video decoder which requires only a single
    4Mbit DRAM for FMV decoding. Decodes System and Video Layers
    at up to 5Mbits/sec, converts from 24/25/30 fps IPB streams to
    25/30 fps output video in 24bit RGB/YUV format. Supports extra
    CD-I functions such as windowing and still picture mode.
    Targeted at low cost consumer applications such as CD-I,
    CD-Karaoke, Video-CD and cable TV.
 
Motorola MCD260 MPEG Audio Host Interface and DSP56001-33.  512-928-5053.
    The MCD260 is a low cost interface IC which goes between a 68K
    bus and a DSP56K and strips out the MPEG System Layer whilst also
    buffering and synchronising. A 33MHz 56001 with 8Kwords of DSPRAM
    decodes the MPEG Audio (Layer 1/II @ 44.1KHz, all modes and bit rates)
 
Pixel Magic PM-2. Contact: Don Shulsinger, Pixel Magic, 508-688-4410.
- JBIG compression/decompression at 30 Mb/s (typical and worst-case)
- G3/G4 compression/decompression at 90 Mb/s (typical), 30 Mb/s (worst-case)
- scaling on decode from 1/256 to 256X
- rotation by 90 degrees
- clipping to a window and padding out to another
- lookup table filtering on 3x3 neighborhoods
- on-chip FIFOs on input and output
- bookmarking for suspend/resume
 
 
 Codecs Chips Under Development
 ------------------------------
 
MPEG1 codec chips due from - TI, Brooktree, Cypress Semiconductor, Motorola 
(successor to the DSP96002 Multimedia Engine), Xing Technology/Analog Devices, 
Sony and C-Cube
 
 
Windbond Electronics Corp. is developing a DSP chip for CD-I, MPEG and JPEG
 
 
 Using these Chips: Board Level Compression Hardware
 ------------------------------------------------------
 
+ JPEG Using CL550
 
+ JPEG Using Other Chip Sets
 
+ DSP Chip Based JPEG/MPEG Solutions
 
+ Integrated Compressed Digital Video Boards 
 
 
 JPEG Using CL550
 ----------------
C-Cube - 408-944-6300 ISA and NuBus boards 
	- for development and limited time-constraint applications
	- 1-2.5 MB/sec host bus constraints
	- Image Compression Interface (ISI) software for 3rd party CL550
          integration
 
VideoSpigot/SuperSqueeze - SuperMac Technology  408 541-6100
	- a CL550A on a NuBus board
	- 24 frame/s with CD-quality audio
	- reads from Winchester and magneto-optic drives
 
Fluency VSA-1000 - Fluent Machines, Inc. AT board set.  508 626-2144
	- compress/decompress real-time synced audio & video to a i386 PC
          Winchester
	- NTSC or PAL input, 320x240 pixels saved
	- uses i960 chip, no additional boards needed
	- M/S Windows support, 3rd party S/W (e.g., AimTech 603-883-0220)
 
Super Motion Compression - New Media Graphics  PC/AT board.  800 288-2207
	- 8Khz, 8-bit compressed audio
	- 30 f/s JPEG to & from disk
	- earlier reports: still-frame compression in several seconds per MB
 
 
Leadview  - Lead Tech Inc. AT board uses the CL550 to compress/decompress
            JFIF or JTIF format files
 
Monalisa - Opta Inc. AT board uses the CL550
 
Squeeze - Rapid Technology AT board
	- Integrated by a number of vendors into 3rd party multimedia,
          video-editing PC stations
 
Parallax Graphics - SBus, VME and PC-AT boards.  408-727-2220 or
          [email protected]
 
Chips and Technologies - JPEG development kit due.
 
Image Manipulation Systems, Inc - SBus compression/framebuffer/video I/O boards
	800-745-5602 or imsinfo%[email protected]
 
 
 JPEG Using Other Chipsets
 -------------------------
 
Visionary - Rapid Technology JPEG AT board.  716-833-8534
	-  LSI Logic JPEG chips L647-35, -45 & -65
	- 30 f/s motion JPEG
	- 256x240 pixel compression and display from CCIR-601 input
	- private codec-frame buffer bus
	- also integrated with TrueVision multimedia hardware
 
Media 100  - Data Translation nonlinear video production system for the
          Macintosh  (QuickTime).  22 MB/s (PAL) and 18MB/s (NTSC) throughput.
 
Alice - Telephoto Communications Inc. 619-452-0903
	- Alice-H350 (PC/AT) and -H365 (PS/2) codec boards
	- use a 40 MHz TMS320C51 DSP and a IMSA121 DCT processor chip
	- JPEG (lossy and lossless), CCITT G3/G4, color and grey-scale images
 
Xing Technology - Hardware accelerator.  [email protected] or 805-473-0145
	- compatible with their VT-Express JPEG Turbo Accelerator Software
 
Video/1 - PsiTech Inc. 714-968-7818
	- includes a 6U VME/VSB JPEG Processing Card 
	- compresses RS-170, NTSC, PAL or Secam video into 8 MB of on-board RAM
 
 
 DSP Chip Based JPEG/MPEG Solutions
 ----------------------------------
 
Optipac - Optivision Inc. PC/AT, ISA & VME codecs.  800-562-8934
	- JPEG (lossless and lossy), CCITT III/IV
	- 1 to 5 TMS320C25s
	- 512x400x16-bit images in < 1 sec.
 
XCeed ICDP-II - Micron Technology Inc. NuBus card
	- uses two AT&T Microelectronics DSP-16 DSP chips
	- driven by Storm Technologies PicturePress software
	- executes an enhanced JPEG algorithm at near-realtime.
 
PicturePress Accelerator - Storm Technology 415-691-1111 (see above)
	- also has a line of VME compression boards
	- Micro Dynamics Ltd. imaging systems use Storm accelerator
          301-589-6300
 
Picture Packer Accelerator - Video & Image Compression Corp.
	- AT and NuBus boards use the JPEG Open Standard and a TMS320C25
 
SunVideo (S-bus) - Sun Microsystems
        - PAL/NTSC framegrabber with an integrated C-Cube CL4000
	  VideoRISC processor and downloadable video compression for
	  MPEG, MJPEG or CellB. It achieves about 10 Frames/sec on
	  MPEG Level 1 picture data compression. The board costs about $1K.
 
Phoenix System - T/one Inc. uses an Optivision Optipac 3250 to talk to a Storm 
                 Technologies NuBus PicturePress Accelerator to talk JPEG over 
                 analog phone lines.
 
Nextdimension - NeXt Computer Inc. 415-780-3912
	- 24+8-bit alpha, 640x480, 30 f/s decompression
	- CL550 version not shipping as announced. 
 
Spirit-40  - Sonitech International Inc. ISA card. 617-235-6824
	- two TMS320C40 DSPs for 80 MFLOPS
	- connect 16 boards in a hypercube for up to 1280 MFLOPS
	- JPEG, MPEG-1 audio and other voice coding applications included 
 
HardPak - CERAM Inc., ISA and EISA file compression board.  719-540-8500
	- 3.4 x 1.8 inch footprint (notebook, laptops)
	- 32KB on-board write-thru file compression cache
	- CERAM also has an SBus compressive swap-space accelerator for Suns
 
macDSP - Spectral Innovations, AT&T DSP32C-based accelerator.  408-727-1314
	- JPEG functions available
	- 30 MFLOPS on the NuBus
 
 
VCA-1  - Video compression accelerator for Sun workstations.
           Tel: (310)829-7733, FAX: (310)829-1694, Internet: [email protected]
         Special-Purpose Hardware for Motion Estimation and DCTs
	 Performs 8x8 DCTs in 21 microsec after first DCT at 52 microsec.*
	 Performs 32x32 cross search for 16x16 block in 239 microsec.*
         (*Stated times are for a 25-MHz SBus.)
	 Mounts in a single SBus slot.
	 Included software allows user-transparent access.
	 Price: $2,900 (subject to change without notice).
 
 Integrated Digital Video Boards - Miscellaneous Multimedia, Video Conferencing
 ------------------------------------------------------------------------------
 
VCI/oem - Vista Communication Instruments, Inc. +358 0 460 099
	- two AT-board H.261 video codec, PAL or NTSC cameras and monitors
	- 56 kbps (64 kbps) to 2 Mbps, 64 kbps increments
	- H.221 framing and synchronizing
	- H.241 network signalling 
	- H.200/AV.254 forthcoming standard for compressed audio
	- network interface boards available
 
MediaStation- VideoLogic Inc., JPEG compression board for ISA bus. 617-494-0530
	- works with VideoLogic DVA-4000/ISA motion video board, custom bus
	- CL-550 plus ADPCM and PCM audio support
	- Inmos Transputer for I/O scheduling
	- Microsoft Windows Multimedia Extensions and proprietary interfaces
 
DECspin - Digital Equipment Corp. Sound/Picture Information Network 508-493-5111
	- full motion, true-color (24-bit) and greyscale (8-bit black & white)
	- variable frame size and rate up to 640 x 480 x 30 NTSC true-color 
	- Internet or DECnet transmission and disk I/O of live synchronized
          video/audio
	- video teleconferencing using standard network protocols 
	- create and edit of audio and video sequences
	- voice grade live audio sequences
	- DECmedia DECvideo and DECaudio hardware and software required
 
ActionMedia II - Intel/IBM DVI PS/2 and PC/AT boards.  914-642-5472
	- i750 processor boards for capture and delivery systems
	- Microsoft programming support libraries
	- proprietary RTV and PLV compression algorithms resident, time and
          time/space VQ
	- Real Time  Video (RTV) algorithm 1.5 , effective 128x120 pixel
          sequence at 30 f/s.
	- RTV 1.0 is 128x240 at 10 f/s. 
	- Presentation Level Video (PLV) - extensive off-line processing,
          exploits inter-frame coherence.  
	- i750 processor capable of playing-back PLV-compressed 256x240
          sequences at 30 f/s. 
 
DVI Board - Fast Electronic U.S. Inc. laptop board.  508-655-3278
	- uses Intel i750 chipset
	- compress or decompress video at up to 30 f/s
 
EyeQ - New Video Corp. DVI boards for the Macintosh.  213-396-0282
	- uses Intel i750 chipset
	- 150 KB/s full-motion compressed video
	- T1 and Winchester integration paths
 
Copernicus 1000 & 2000 - DesignTech, 408-453-9510
	- DVI-based presentation and authoring systems 
 
Spectrum Signal Processing - DSP96002-based PC-AT board
	- up to four boards in cascade
	- other TI, Analog Devices and AT&T-based DSP offerings
 
Ariel Corp. - Dual DSP96002 PC-AT board with compression support.  201-429-2900
 
Capture I - UVC Corp., 16-bit ISA bus board.  was 714-261-5336, out
          of business now.
	- 30 f/s of 640/480 interlace capture and record (uses UVC7710)
	- NTSC or PAL input
	- VPC200/201 development board set - proprietary NTSC video codec
         (audio card required).  
 
Leadview  - Lead Technologies, Inc. accelerates an enhanced JPEG algorithm
   on ISA
 
IBM - near-term availability:
	(1) IBM United Kingdom and British Telecommunications plc. 
	- PC or PS/2 add-on boards by end of 1993
	- interface to ISDN 2 service (one or two 64kb/s channels)
	- BT also planning residential videophone product with GEC Marconi Ltd.
 
(2) IBM Japan PS/2 board 
	- uses GCTX64000 for H.261 
	- ISDN (narrowband 64kb/s ) and IEEE 802.5 LAN interfaces
 
 
Optibase 100 - Optibase, Inc. DSP-based compression/expansion boards.
   818-719-6566
	- supports JPEG
	- supports CCITT G.721 and ANSI T1.301 & T1.303  drafts (voice and
          music) 
	- and proprietary compression (AADCT, lossless)
 
Optibase MPEG Lab Pro and MPEG Lab +. Phone: 214-386-2040 or 800-451-510
 
Motorola - DSP56002 (fixed-point 40MHz version of the 56001)
 
AT&T JPEG coder (George Warner <[email protected]>)
        - runs on a DSP3210 under the VCOS operating system.
	  The coder can be used to simultaneously compress/decompress
	  multiple images and/or be used in conjunction with other DSP
	  modules to preprocess or postprocess the image data.
 
	  Other modules available for the DSP3210 include audio coders
	  (such as MPEG, SBC, CDXA, and G.722), modem/fax data pumps
	  (V.32bis, V.22bis, and V.29), DTMF, call progress detection,
	  sample rate conversion, and more.
 
 
MWave - TI, IBM, Intermetrics multimedia system, due from IBM in 1993. 
 
Misc. NuBus boards - RasterOps, Radius, Mass Microsystems, Orange Micro,
IBM M-Motion.
 
P.OEM - Iterated Systems Inc. fractal compression boards for the PC.
404-840-0310
 
Communique! - desktop video conferencing products for Sparcs with the Parallax
              XVIDEO board:
        InSoft, Inc., 4718 Old Gettsburg Road, Executive Park West I, Suite 307
        Mechanicsburg, PA 17055, USA. email: [email protected]
        phone: 717-730-9501, fax: 717-730-9504
 
PSVC - desktop video conferencing products for Sparcs with the Parallax
              XVIDEO board:
	Paradise Software, Inc., 55 Princeton Heightstown Rd, Suite 109
	Princeton, NJ 08550, USA. email: [email protected]
	phone: 609-275-4475, fax: 609-275-4702
 
North Valley Research - video and other time-based media in a UNIX environment
        North Valley Research; 15262 NW Greenbriar Pkwy; Beaverton, OR  97006
        Phone (503) 531-5707, Fax (503) 690-2320. Todd Brunhoff <[email protected]>
 
Visionetics MPEG Master(TM) for 386 PC. phone (310) 316-7940,
         fax (310) 316-7457, e-mail [email protected].
    MPEG Video compliant with ISO-11172 
    Up to 16 million colors regardless of VGA card or VGA mode 
    704 x 480 NTSC, 704x576 PAL resolution 
    Full motion video NTSC (30 fps), PAL (25 fps) 
    VIGABUS(TM) high speed digital video bus 
 
    MPEG audio  Layer I and II 
    16 bit stereo digital audio playback 
    20 Hz - 20 KHz frequency response 
    Stereo headphone/preamp output 
 
 
 Boards Under Development
 ------------------------
 
Matrox - Matrox Studio line of PC boards will include a 64-bit MOVIE bus and
         JPEG compression.
 
 
------------------------------------------------------------------------------
 
Subject: [99] Acknowledgments
 
 
There are too many people to cite. Thanks to all people who directly
or indirectly contributed to this FAQ.
3441.47And Bumblebees cannot flyHGOVC::JOELBERMANSun Oct 16 1994 06:3310
    re: .45
    
    Shannon's math is right for now, but his idea of noise levels and
    signal quality were a bit off.  I think all of the multi-bit MODEM
    methods beat his limits.  Didn't he postulate a max telephone baud rate
    that has been long exceeded?  My books are in storage, but I seem to
    remember his maximum was kind of slow.
    
    Limits have a funny way of getting broken, with attendant new formulae
    and 'isn't it obvious NOW' explanations coming soon after.
3441.48I have never seen a mathematical definition of a bumblebeePASTIS::MONAHANhumanity is a trojan horseMon Oct 17 1994 04:1834
    	As far as I know the mathematics of his baud rate limit are still
    correct, though it was more than 20 years ago that I was actually
    designing modems. There is a lot of confusion between baud and bits per
    second.
    
    	A telephone circuit, particularly if it goes through any
    multiplexing equipment, will not be able to transmit more than about
    2400Bd.  However, if it is completely noise free, it can transmit an
    infinite number of bits per second. Simple data compression that
    assumes that most of the data is ASCII text can improve this by a
    factor of 2 or more when the assumption is correct.
    
    	The baud rate is determined only by the upper usable limit of the
    frequency spectrum. The maximum bits per second is obtained by 
    integrating, across the usable band, the log of one plus the
    signal/noise ratio (Shannon's capacity formula), so with no noise
    you have an infinite number
    of bits per second on a line of any baud rate. Equally, on a noisy
    connection (such as the one with the Mariner spacecraft at long
    distances) the theoretical (and actual) bits per second may be less
    than the baud rate.
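    
    	For a feel of the numbers, here is Shannon's capacity formula,
    C = B * log2(1 + S/N), worked out for a nominal 3 kHz telephone
    channel. The SNR figures are purely illustrative, not measurements
    of any real line.
    
        /* capacity.c -- Shannon-Hartley capacity of a 3 kHz channel. */
        #include <math.h>
        #include <stdio.h>

        int main(void)
        {
            double bandwidth = 3000.0;                 /* Hz */
            double snr_db[] = { 20.0, 30.0, 40.0 };
            int i;

            for (i = 0; i < 3; i++) {
                double snr = pow(10.0, snr_db[i] / 10.0);  /* dB -> ratio */
                double bps = bandwidth * log(1.0 + snr) / log(2.0);
                printf("SNR %2.0f dB -> about %5.0f bits per second\n",
                       snr_db[i], bps);
            }
            return 0;
        }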
    
    	His mathematics are just as valid as they were in the 1940s, but
    the quality of telephone lines and the techniques for using them have
    improved a little over the last 60 years. He may well have stated that
    over a typical telephone line you would not get more than 3kBd. and not
    more than 10k bits per second if he was talking about actual noisy
    telephone lines as they existed then, but as a mathematician rather
    than a physicist his theorems have nothing to do with the transmission
    medium. A phycisist will tell you that you can never get infinite
    signal to noise ratio because of quantum effects, and might then go on
    to give you a limit for the number of bits per second that some channel
    might transmit, but Shannon wasn't a phycisist. If he quoted any actual
    numbers I am sure they would have been merely numerical examples to
    show the applicability of his theory.
3441.49Depends on the entropyDIODE::CROWELLJon CrowellTue Oct 18 1994 09:598
    
    If there's little or no information in a file, you can compress
    it by orders of magnitude.  I fear they must have got the 1000:1
    ratios on a Digital marketing plan.  This isn't a fair test.  Rumor
    has it that HP's marketing plans sometimes grow when compressed. 8*).
    
    Jon  8*)
    
3441.50Negativity is self fulfilling. REGENT::LEVINETHIS week is NEXT week&#039;s LAST week.Tue Oct 18 1994 12:1622
    The negativity here is really ticking me off. Let's say for just one
    moment that this "boy wonder" has an idea that works. Perhaps lossy,
    but maybe unique and maybe a marketable intellectual property. He is
    interested in sounding it out. You spent as much effort proving his
    idea was bogus, without ever seeing it, as I'll spend reviewing it
    when I talk to him.
    
    Yes, I've looked at the fractal technology and it is interesting. Not
    very useful yet, but for some applications it might be very cool
    when it gets faster and cheaper.
    
    Not one of you offered any contact path for him. You dismissed the idea
    and at the same time you flexed your own muscles. Now we all know just
    how much you know, and how closed you are to new things.
    
    Your friend may contact me. I'm an engineering project leader in the
    network printing space, and we're doing NT work right now, and compression 
    is of some interest to me, but more than that, I know people elsewhere in 
    Digital who might be interested. 
    
    
    
3441.51WLDBIL::KILGOREHelp! Stuck inside looking glass!Tue Oct 18 1994 12:226
    
    To be fair, some notes in this string have asked for samples of the
    data being compressed to the claimed ratio; in effect, they said,
    "Prove it!" This is a time-honored approach to the advancements of new
    theories.
    
3441.52function NOT implementation PLEASEREGENT::LEVINETHIS week is NEXT week&#039;s LAST week.Tue Oct 18 1994 12:2822
    Oh and clearly any such conversation will be quite general. I will
    not allow him to tell me implementation details. All I want to know
    is 
    [1]how fast it ran 
    [2]and on what platform and op sys version
    [3]and how big the compression exe was 
    [4]and what data it ran on 
    [5]and was he able to decompress 
    [6]and what resemblance did the output have to the input.
    
    In fact, why don't you gather the answers to these questions and even
    gather sample input and output data and post them. You've already had
    the contact with him on a personal basis and there are certain legal
    concerns here. The questions above will give me enough of an idea
    as to the viability of the concept for now.
    
    I don't want to know ANYTHING about how he implemented it, and if you're
    wise you won't want to know either.
    
    Thanks
    
    Rick L
3441.53ANGST::BECKPaul BeckTue Oct 18 1994 12:568
    .52 has some excellent points about intellectual property control.
    Just remember the Microsoft-Stac dispute: it would be a bad idea to
    reveal technical details of the algorithm, and a worse one, if down the
    road DEC patents an unrelated compression algorithm, to be looking down
    both barrels of an infringement lawsuit.
    
    He may not have anything generally useful, but caution is still
    advisable.
3441.54even more...ULYSSE::MAHIEUXTue Oct 18 1994 13:563
    Better...
    Just finished compressing 11Mbytes in 8 bytes  ;-)
    
3441.55DPDMAI::EYSTERFluoride&amp;Prozac/NoCavities/No prob!Tue Oct 18 1994 14:083
>    The negativity here is really ticking me off. 
    
    Whoops!  Time to increase the dosage!  :^]
3441.56none meant, none takenKLUSTR::GARDNERThe secret word is Mudshark.Tue Oct 18 1994 16:4215
	re: .50

	if you meant to chastise me, then you are reading something into
	my replies that I did not write! I meant no insult by requoting
	the phrase "boy-wonder"; I was merely using the same words as the
	author of .0...and I am in no position to "help" this person;
	I am doing full-time delivery on a medical imaging project now and
	possibly for the next five years...indeed, the only reason I
	responded at all is that my current work requires some basic knowledge
	of the topic...in light of the facts (or FAQs as the case may be ;-)
	I posted earlier, it is easy to be somewhat pessimistic...but if this
	guy succeeds I'll give him a round of applause...and if Digital
	somehow gets a piece of the action, I'll use both hands ;-)

	_k
3441.58Convinced nowKELVIN::SCHMIDTCynical OptimistWed Oct 19 1994 09:406
    
    
        OK, I'm convinced.  1 million to 1 compression would work 
        on  .57
    
    
3441.59BHAJEE::JAERVINENOra, the Old Rural AmateurWed Oct 19 1994 09:461
    not quite... it's got too many upper-case letters.
3441.60Poor QuantumFROCKY::TRINHWed Oct 19 1994 12:165
    re .54
    
>   Just finished compressing 11Mbytes in 8 bytes  ;-)
    
    The Quantum stock price went down to $.02 each after your announcement.
3441.61Note 14 in Storage Management ConferenceAYRPLN::CORKUMDCS OMS Business &amp; Technical ServicesWed Oct 19 1994 15:0714

For those interested in pragmatic discussion on data compression
there is a topic in the Storage Management conference
that is reserved for that purpose.  Refer to note 14 DATA COMPRESSION
in the file ARCANA::STORAGE_MGMT_IN_DIGITAL.  Two existing software 
products are noted, one for (VMS) file compression and another on
(VMS) disk compression.  Other related conferences are listed.

Data compression represents an untapped opportunity within Digital.

Regards,
Bill Corkum

3441.62WELSWS::HILLNIt&#039;s OK, it&#039;ll be dark by nightfallThu Oct 20 1994 04:2616
    "Everything that can be invented has been",
    said the Head of the US Patent Office in 1896,
    yet just look how much has been invented since
    then.
    
    "I can't see you", said the ostrich with its
    head in the sand, "and so you can't see me".
    
    So I don't think we should be so dismissive of
    a new anything, whether or not we could have 
    thought of it.  Not that is until after we've 
    seen the results of using it.
    
    My only caveat on this data compression 
    algorithm would be that there needs to be an 
    associated decompression technique.
3441.63Port VMS to Pentium, add compression...PASTIS::MONAHANhumanity is a trojan horseThu Oct 20 1994 04:5113
    	The interesting thing here is the impact of encryption. More and
    more, on MS-DOS and similar systems, data is being encrypted because
    the system doesn't offer adequate security. Encrypting a file makes it
    virus resistant and also secures it from other users who might have
    access to the system.
    
    	Now encrypted files *CAN'T* be compressed - mathematical
    impossibility, so files on PCs will increasingly not be able to take
    advantage of this 1000:1 compression, while secure systems like VMS and
    MVS will since they don't need the encryption.
    
    	Imagine your VMS system running with a couple of floppy disks for
    data storage while a similar PC system requires gigabytes!
3441.64RUSURE::EDPAlways mount a scratch monkey.Thu Oct 20 1994 10:5262
    Re .63:
    
    > The interesting thing here is the impact of encryption. More and
    > more, on MS-DOS and similar systems, data is being encrypted because
    > the system doesn't offer adequate security.
    
    Using compression in lieu of encryption is silly.
    
    > Now encrypted files *CAN'T* be compressed - mathematical
    > impossibility, . . .
    
    That depends on the encryption algorithm -- it is correct that good
    encryption algorithms generally produce ciphertext that has a higher
    complexity than even already compressed files, but it is not a
    mathematical impossibility.
    
    > . . . so files on PCs will increasingly not be able to take advantage
    > of this 1000:1 compression, . . .
    
    One simply compresses and then encrypts.  I have an encrypted (not
    compressed) partition on my PC; I could achieve compression and
    encryption simply by placing the compression container file in the
    encrypted partition.
    
    
    Re .62:
    
    > So I don't think we should be so dismissive of a new anything,
    > whether or not we could have  thought of it.
    
    Then why are you so dismissive of information theory, which tells us
    how absurd 1000000:1 compression, and even 1000:1 compression, is in
    the general case?  You don't even need information theory; just count: 
    There are 256 different files that are one byte long, but there are
    65536 files that are two bytes long.  So only 256 of the two-byte files
    can be compressed into one byte -- the other 65,280 files can't be
    compressed because there are no other choices of one-byte files to
    compress them into.  And even then, the 256 one-byte files have to
    expand into larger files under the algorithm to make room to compress
    the two-byte files.  This is true in general; there are far fewer
    shorter files than longer files, so only a very small fraction of the
    larger files can be compressed into shorter files.  The reason
    compression works is that we use very few of the larger files:  The
    files we create contain redundant information.  It is the amount of
    redundancy that limits compression, not the algorithm.  If the
    information in a 1000-byte Word Perfect file or GIF or a spreadsheet is
    such that 2^4000 different combinations of that type of file might be
    produced, then those files can only be compressed generally to 4000
    bits -- 500 bytes.  This is just based on counting; to count to 2^4000
    different combinations, you need 500 bytes worth of counters.
    
    The ratio to compress two bytes to one is 256.  The ratio to compress
    20 bytes to 10 is 1208925819614629174706176.  The ratio to compress
    1,000,000 bytes to 1,000 can't be printed here; it has 2,405,832
    digits.
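    
    Here is the same pigeonhole arithmetic done by machine; it is only
    standard math, nothing specific to any file type, and the digit count
    is computed from a base-10 logarithm rather than by printing the
    number itself.
    
        /* counting.c -- the counting argument above, by machine. */
        #include <math.h>
        #include <stdio.h>

        int main(void)
        {
            double digits;

            /* 256^n files of n bytes, so at most 256^m of the 256^n
               n-byte files can map to distinct m-byte files, m < n   */
            printf("2-byte files: %.0f   1-byte files: %.0f\n",
                   pow(256.0, 2.0), pow(256.0, 1.0));

            /* 256^(1,000,000 - 1,000): how many decimal digits?     */
            digits = floor(999000.0 * log10(256.0)) + 1.0;
            printf("digits in the 1,000,000 -> 1,000 ratio: %.0f\n",
                   digits);
            return 0;
        }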
    
    
    				-- edp
    
    
Public key fingerprint:  8e ad 63 61 ba 0c 26 86  32 0a 7d 28 db e7 6f 75
To find PGP, read note 2688.4 in Humane::IBMPC_Shareware.
3441.65DUGROS::ROSSLiving large, earning smallThu Oct 20 1994 16:3211
This week's PC Week (11/8) has a tutorial on, ta da, compression using 
Iterated Function System fractals. Here's a quote:

"Like JPEG, IFS image compression is a lossy technique meaning that the
uncompressed image may not match the original pixel for pixel.  But 
whereas 50-to-1 compression ratio is good for JPEG and similar methods, 
IFS image compressors frequently achieve ratios upwards of 10,000 to 1."

I can't do the article justice without typing the whole thing in... 


3441.66TLE::REAGANAll of this chaos makes perfect senseThu Oct 20 1994 17:096
    Yes, and IFS deals with very specific sets of input data and has
    algorithms to exploit them.  Nobody has argued that certain data
    streams (like images) can be compressed in large numbers especially
    where bit-for-bit uncompression isn't required.
    
    				-John
3441.67junk food for thoughtDPDMAI::EYSTERFluoride&amp;Prozac/NoCavities/No prob!Thu Oct 20 1994 17:329
    I'm thinking of commercial applications here.  What if the USS
    Enterprise used the lossy compression algorithm to save a couple bucks? 
    What happens after you've been compressed, transmitted, uncompressed
    several times?  Do you start looking like a cartoon character?  Would
    it cause people to blink when they looked at you, cause you look kinda
    out of focus?
    
    I think we need to research some of THESE ramifications before we start
    compressing old .GIFs of Cindy Crawford, don't you?
3441.68AXEL::FOLEYRebel without a ClueThu Oct 20 1994 18:036
RE: .67

	Actually, there was an episode of STNG that touched upon
	going thru the transporter too much..

							mike
3441.69It's been doneDECC::REINIGThis too shall changeFri Oct 21 1994 10:345
    I much preferred MAD magazine treatment of the transporter.  A real
    lossy algorithm.  When you came out ears, arms, and legs were pasted
    onto the body wherever.  Spock, the pointy elbowed Vulcan.
    
                                August
3441.70RUSURE::EDPAlways mount a scratch monkey.Fri Oct 21 1994 14:1433
    What happens when you try to get too much compression?  Hallmark cards
    found out:
    
                      World Literature -- Condensed
    
                                MOBY DICK
    
         Ahab chases whale.  Whale chases Ahab.  Whale prevails.
    
                              ROMEO & JULIET
    
              Two teenagers fall in love and then they die.
    
                            GONE WITH THE WIND
    
          Scarlett's a yuppie.  The South falls.  Rhett splits.
    
                           A TALE OF TWO CITIES
    
         Good times.  Bad times.  The peasants win.  Marie loses.
    
                              WAR AND PEACE
    
       A lot of Russians with long names doing complicated things.
                             The Czars lose.
    
    
    				-- edp
    
    
Public key fingerprint:  8e ad 63 61 ba 0c 26 86  32 0a 7d 28 db e7 6f 75
To find PGP, read note 2688.4 in Humane::IBMPC_Shareware.
                                           
3441.71Very funny!TNPUBS::JONGSteveFri Oct 21 1994 15:363
    Along the same lines, Woody Allen once joked that he'd taken a
    speed-reading course and then read _War and Peace_ in an hour.  He said
    it was about Russia 8^)