T.R | Title | User | Personal Name | Date | Lines |
---|
1488.1 | does this have to do with radar by chance? | ALLVAX::JROTH | I know he moves along the piers | Tue Aug 27 1991 16:21 | 8 |
| I have some information at home that I believe is on this topic, since
I recall this coming up in connection with radar ambiguity functions.
I may be wrong, but if that's the context I may have it...
How did this arise?
- Jim
|
1488.2 | Use in Kanji Recognition | FASDER::MTURNER | Mark Turner * DTN 425-3702 * MEL4 | Tue Aug 27 1991 18:27 | 18 |
| re: .1:
Gabor filters came up in connection with a neural net application in
Kanji character recognition. The technique was described as a useful method
(along with Fourier transform, principal components analysis and others) of
preprocessing the Kanji characters. Here's the bit on Gabor filters:
"Gabor filters bear some similarity to Fourier transform, but
are localized in space rather than global to the image. Each Gabor
filter is the product of a sine wave and an exponential. Thus they
are designed to detect stroke length and direction in subfields of
the larger image. The spatial frequency of the sine wave imparts
sensitivity to stroke width."
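A minimal NumPy sketch of such a filter, in case it helps make the quote concrete (the function name and parameter choices are my own illustration, not from the paper being quoted): the kernel is a cosine wave of a given wavelength and orientation, windowed by a Gaussian envelope.

```python
import numpy as np

def gabor_kernel(size, wavelength, theta, sigma):
    """Real 2-D Gabor filter: a cosine carrier of the given wavelength,
    propagating along direction theta, times a Gaussian envelope."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    # Rotate coordinates so the wave runs along direction theta.
    x_t = x * np.cos(theta) + y * np.sin(theta)
    envelope = np.exp(-(x**2 + y**2) / (2.0 * sigma**2))
    carrier = np.cos(2.0 * np.pi * x_t / wavelength)
    return envelope * carrier

# A filter tuned to near-horizontal strokes roughly 8 pixels in period.
k = gabor_kernel(size=15, wavelength=8.0, theta=0.0, sigma=4.0)
```

Convolving an image with a bank of these kernels at several orientations and wavelengths gives the localized stroke-direction and stroke-width responses the quote describes.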
Does that sound compatible with what you've seen?
Mark
|
1488.3 | a clue | CORREO::BELDIN_R | Pull us together, not apart | Wed Aug 28 1991 12:09 | 6 |
| I think that a work on sets of orthogonal functions may have what
you're looking for, or at least clues. Fourier transforms are just one
example. There are sets built from the product of an exponential and a
harmonic function; the Tchebycheff polynomials are another example.
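To make the orthogonality concrete, here is a small NumPy check (my own illustration, not from any of the notes) that the Tchebycheff polynomials T_m are orthogonal on [-1, 1] under the weight 1/sqrt(1 - x^2), using Gauss-Chebyshev quadrature, which is exact for the products involved:

```python
import numpy as np
from numpy.polynomial import chebyshev as C

# Gauss-Chebyshev quadrature: nodes cos((2k-1)pi/(2N)) with equal
# weights pi/N integrate exactly against the weight 1/sqrt(1 - x^2).
N = 64
k = np.arange(1, N + 1)
nodes = np.cos((2 * k - 1) * np.pi / (2 * N))
weight = np.pi / N

def inner(m, n):
    """Weighted inner product <T_m, T_n> of Tchebycheff polynomials."""
    Tm = C.chebval(nodes, [0] * m + [1])  # coefficient vector selects T_m
    Tn = C.chebval(nodes, [0] * n + [1])
    return weight * np.sum(Tm * Tn)

# Distinct orders integrate to ~0; equal orders (n >= 1) give pi/2.
off_diag = inner(2, 3)
diag = inner(3, 3)
```

Any function family with this property lets you expand a signal as a sum of basis functions with independent coefficients, which is what both the Fourier and the Gabor decompositions exploit.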
Dick
|
1488.4 | | ALLVAX::JROTH | I know he moves along the piers | Thu Aug 29 1991 09:22 | 39 |
| I forgot to look up the material when I was at home :-(
However, this sounds like what I had in mind. There is some similarity
to wavelets here as well as connections with other orthogonal function
sets.
I *worked* on a Japanese handwriting recognition project a few years ago!
This ran on the PRO/350 computer and was stroke based. That is, characters
were input by a stylus tablet and the "raw" data was a polyline
approximation of the strokes.
The system preprocessed these strokes into features with a (rather
kludgy) rule based front end. These features were then encoded as
bit strings of various lengths, and the system was "trained" on zillions
of examples of the characters that had been collected in Japan. We had
many magtapes of stroked characters.
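The kind of stroke feature such a front end might compute can be sketched like this (entirely hypothetical on my part; the actual rules in the system aren't described here). Each polyline stroke is reduced to a histogram of quantized segment directions plus a total length:

```python
import math

def stroke_features(polyline, bins=8):
    """Reduce a polyline stroke (a list of (x, y) points) to a crude
    direction histogram and total length; a stand-in for a rule-based
    stroke front end."""
    hist = [0] * bins
    length = 0.0
    for (x0, y0), (x1, y1) in zip(polyline, polyline[1:]):
        dx, dy = x1 - x0, y1 - y0
        seg = math.hypot(dx, dy)
        if seg == 0.0:
            continue  # skip repeated points
        angle = math.atan2(dy, dx) % (2 * math.pi)
        hist[int(angle / (2 * math.pi / bins)) % bins] += 1
        length += seg
    return hist, length

# A single horizontal stroke: both segments fall in direction bin 0.
feats, total = stroke_features([(0, 0), (5, 0), (10, 0)])
```

Features like these could then be packed into the sort of bit strings the trainer consumed.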
It's too bad we didn't have a better computer, since a good workstation
was really necessary for this kind of research. You can imagine the
hassles of task-building an overlaid program and making it fit on the
PRO. My involvement was not so much with the fundamental algorithms per se
as with making the systems aspects work and doing performance tuning, where
we made great strides. I did do some research into the matter
in the hopes of finding better algorithms and have a file of many of the
important papers on character recognition, but due to lack of time
didn't get to try any of them out, though I did do some informal
experiments on the data we had. I did look into neural nets which
were just beginning to be written about, as well as multidimensional
searching.
My current feeling is that a rule based approach is the best way to
attack the character recognition problem.
I'll try and remember to get the info tonight. If you have access
to a library, Gabor was an electrical engineer and his papers often
appeared in the IEEE transactions.
- Jim
|