Title:                   Welcome to the CD Notes Conference
Notice:                  Welcome to COOKIE
Moderator:               COOKIE::ROLLOW

Created:                 Mon Feb 17 1986
Last Modified:           Fri Mar 03 1989
Last Successful Update:  Fri Jun 06 1997
Number of topics:        1517
Total number of notes:   13349
31.0. "Sampling and Digital Filtering" by KRYPTN::T_KOPEC () Thu Sep 13 1984 10:25
This topic keeps coming up, so I figured I'd lay out what I was taught in
Engineering school to see if it adds to the confusion. If anybody thinks I'm
wrong on a point or three, let's get a discussion going...
The whole process starts with an analog signal with, let's say, infinite
bandwidth. We, being mere mortals, are only interested in some of that
bandwidth; maybe the range from 20Hz to 20,000Hz. Claude Shannon told us
a while ago that if we could build 'ideal' filters, we could record that
signal with only 40,000 samples per second and later be able to reconstruct
it perfectly (given our bandwidth restriction). What we would have to do is
filter the input signal with an ideal low-pass filter that cuts off at 20kHz,
feed it into our sampler running at 40k samples per second, run it thru our
'channel' (in this case the CD and its read/write/ECC stuff), run our samples
thru another ideal low-pass 20kHz filter and voila! -- we have a perfect copy
of our original 20-20kHz signal (actually, it's 0-20kHz, but that's only a
practical matter).
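As a quick sanity check on that chain, here's a sketch in modern Python (obviously not part of the original note; the 40k rate, the test tones, and the brute-force sinc-sum reconstruction are my own illustrative choices):

    import numpy as np

    fs = 40_000.0                        # 2x the 20 kHz band edge
    dur = 0.002                          # a 2 ms record
    t_s = np.arange(int(fs * dur)) / fs  # sample instants

    # Band-limited test signal: two tones well inside 0-20 kHz
    sig = lambda t: np.sin(2*np.pi*1_000*t) + 0.5*np.sin(2*np.pi*7_300*t)
    x = sig(t_s)                         # the 40k-samples/s record

    # Shannon reconstruction: x(t) = sum_n x[n] * sinc(fs*t - n)
    t = np.linspace(0, dur, 4_000, endpoint=False)
    x_hat = np.array([np.sum(x * np.sinc(fs*tt - np.arange(len(x))))
                      for tt in t])

    # Error shrinks toward the record's middle; with an infinite record
    # (the ideal filter) it would vanish entirely.
    mid = (t > 0.2*dur) & (t < 0.8*dur)
    print(f"max interior error: {np.max(np.abs(x_hat - sig(t))[mid]):.2e}")

The sinc sum is playing the role of the ideal reconstruction filter here; the only reason the error isn't exactly zero is the finite record.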
The only magic here is those 'ideal' filters. They have constant gain in their
passband and zero gain outside; they also have zero phase shift within the
passband (actually, a phase factor of k*f won't hurt us here... that's just
a linear time delay). We can't build them (with finite resources, anyway).
Why is it important that they be 'ideal'? It has to do with this thing called
'aliasing'. The name comes from what happens on a frequency-domain plot of the
system performance if you don't do the filtering right. If your input filtering
does not have the ideal cut-off, any information above half the sampling rate
wraps around to the low-frequency end of the spectrum... and is there forever
(it is indistinguishable from a 'real' signal and, thus, no amount of filtering
after sampling, digital or otherwise, will get rid of it). On the reconstruction
end of the process, the samples can be viewed as either impulses at the sampling
rate or as pedestals which change at the sampling rate. Either way, you end up
with the spectrum of the signal duplicating itself every n*fs, where fs is the
sampling rate. This results in ultrasonic junk in the output signal which can
cause all sorts of nasties; but most importantly, it wasn't in the input signal
(there goes perfect reproduction).
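To see that wrap-around concretely (again my own sketch; the 28 kHz tone and 40 kHz rate are arbitrary choices):

    import numpy as np

    fs = 40_000.0                 # Nyquist frequency is 20 kHz
    t = np.arange(64) / fs

    # A 28 kHz tone is 8 kHz above Nyquist, so it folds to 40 - 28 = 12 kHz.
    x_high  = np.sin(2*np.pi*28_000*t)
    x_alias = -np.sin(2*np.pi*12_000*t)   # phase-flipped 12 kHz tone

    # The two sample streams are identical; once sampled, nothing downstream
    # can tell them apart -- which is why the alias is there forever.
    print(np.allclose(x_high, x_alias))   # True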
How can one get around the ideal filter problem? The only ways at the input are
to raise the sampling rate so that your aliases don't start to form until
outside of the range where your practical filter cuts off, and try your damndest
to build a good filter. Once your sampling rate is set, the amount of
information you have is fixed. (A CD HAS 44.1K PIECES OF INFO PER SECOND ON IT;
READ IT ALL YOU WANT, THAT'S ALL THERE IS!). When you try to reconstruct it, you can
either build another sharp-cutoff filter (the 'analog' filter) to do the
reconstruction, or you can take your samples and build new ones out of them
(the 'digital' filter, for the purposes of this discussion).
What good does it do us to make new samples out of our old ones? After all,
the amount of information we have is fixed, isn't it?
There is a trick. Let's say we have a sample in our hands, and we remember what
the last sample was. If we build a new sample which is the average of the old
and the new and put that into our data stream (which now has twice the data
rate, but actually has no more 'information'), we can then reconstruct the
signal assuming an effective sampling rate of twice what we had before. Lo
and behold, the spectrum from half the original sampling rate up to the original
sampling rate is EMPTY! That's right... there ain't nothing there (in theory,
at least). Good old Mr. Shannon came thru for us on that one...
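A sketch of that trick, again in Python with arbitrary numbers of my own choosing (the midpoint-averaging interpolator, then an FFT to see what landed in the supposedly empty band):

    import numpy as np

    fs, f0, N = 40_000, 2_000, 400     # original rate, test tone, block size
    x = np.sin(2*np.pi*f0*np.arange(N)/fs)

    # Double the rate: keep each sample, insert the average of its neighbors.
    y = np.empty(2*N - 1)
    y[0::2] = x
    y[1::2] = 0.5*(x[:-1] + x[1:])

    # Spectrum of the 2x stream (new rate 80k, new Nyquist 40k). The image of
    # the tone sits at fs - f0 = 38 kHz, inside the band that should be empty.
    Y = np.abs(np.fft.rfft(y * np.hanning(len(y))))
    f = np.fft.rfftfreq(len(y), d=1/(2*fs))
    tone  = Y[np.argmin(np.abs(f - f0))]
    image = Y[np.argmin(np.abs(f - (fs - f0)))]
    print(f"image sits {20*np.log10(image/tone):.0f} dB below the tone")

Running this shows the image is knocked well down but not gone: simple averaging only attenuates the copies above the old Nyquist, which squares with the caveat near the end of this note about players building four samples per real one.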
Now that we have a nice chunk of empty spectrum right above the band of
interest, we don't have to worry so much about what our filter does in that
band; this allows us to build a nice simple filter 'cuz we don't have to worry
so much about the cutoff slope and all the phase gremlins that show up there.
That, in a nutshell, is why you would want to oversample. I'm not exactly
sure about the ease of making the spectrum empty above the passband, and I
think that's why some players will build four samples out of each real sample
(I did this note to keep from getting bored during a sysgen... you can't expect
me to think about Z-transforms at a time like this!).
Sorry this is so long... but in grad school you can take two semesters of this;
I'd hate to compress it into 4 sentences.
...tek (ex-signal-processing major)
T.R  | Title | User           | Personal Name | Date                  | Lines
-----+-------+----------------+---------------+-----------------------+------
31.1 |       | MEMORY::SLATER |               | Mon Nov 23 1987 09:30 | 6
Seems like 2x oversampling could use a 17-bit DAC and 4x oversampling
could use an 18-bit DAC to get the most out of the original signal. Also,
slope prediction or curve fitting might be better than averaging to get
intermediate samples.

Les
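As a rough check on that last suggestion (an illustrative sketch, not part of the archive; the test tone and the 4-point cubic weights are my own choices), comparing the midpoint from plain averaging against one from a Catmull-Rom cubic through four surrounding samples:

    import numpy as np

    fs, f0 = 44_100, 10_000            # CD rate, a fairly high test tone
    n = np.arange(64)
    x = np.sin(2*np.pi*f0*n/fs)
    true_mid = np.sin(2*np.pi*f0*(n[:-3] + 1.5)/fs)  # between x[k+1], x[k+2]

    # Midpoint by averaging the two neighbors
    avg_mid = 0.5*(x[1:-2] + x[2:-1])

    # Midpoint from a cubic (Catmull-Rom) fit through four samples
    cub_mid = (-x[:-3] + 9*x[1:-2] + 9*x[2:-1] - x[3:]) / 16

    print(f"averaging max error: {np.max(np.abs(avg_mid - true_mid)):.3f}")
    print(f"cubic     max error: {np.max(np.abs(cub_mid - true_mid)):.3f}")

For a tone this close to Nyquist the cubic fit does noticeably better than averaging, which is in line with the hunch above.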