T.R | Title | User | Personal Name | Date | Lines |
-----|-------|------|---------------|------|-------|
1406.1 | | ALLVAX::JROTH | I know he moves along the piers | Thu Mar 28 1991 17:48 | 34 |
| Check out "Number Theory in Digital Signal Processing" edited by Rader
and someone else; this book contains many key papers on FFT
algorithms, including number theoretic ones using Fermat primes,
for digital signal processing and fast arithmetic. I think Pollard's
paper is the one you really want to read.
Essentially, high precision multiplication is convolution which
FFT algorithms convert to pointwise multiplication. So one transforms,
multiplies componentwise, and transforms back to get the product
in N LOG(N) time instead of N^2 time. Fourier transforms can be
done over finite fields as well as the complex field, hence the
interest in adapting the FFT algorithm for integer multiplies.
Roots, reciprocals, and so on can be calculated by quadratically
convergent Newton algorithms involving multiplies and adds which can
now be done fast, so you have fast super high precision arithmetic.
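As a minimal sketch of that Newton idea (my illustration, with a hand-picked
starting guess, not anything from the book):

```python
# Illustration (not from the book): Newton's iteration for a reciprocal.
# x_{k+1} = x_k * (2 - a*x_k) uses only multiplies and adds, and the error
# roughly squares at each step (quadratic convergence), provided the
# starting guess satisfies 0 < x < 2/a.
def reciprocal(a, iterations=6):
    x = 0.1                      # rough starting guess, fine for a = 7
    for _ in range(iterations):
        x = x * (2 - a * x)
    return x

print(reciprocal(7.0))           # converges to 1/7 = 0.142857...
```

With exact big-number multiplies in place of floats, the same recurrence
gives high-precision reciprocals, and hence division, in softly linear time.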
The crossover point where it pays is in the thousands of digits
range, but it's still interesting, and special hardware would certainly
speed things up. However, such hardware is not new - it's been tried
before, almost as soon as the theory was proposed.
Knuth has a discussion in Vol 2.
If I remember I'll enter the citations you would be interested in
but the book isn't handy right now. To do it on the cheap,
DSP chips could be adapted to the task. Not as fast as true
number theoretic transform hardware, but useful anyway. Actually
it's just a page or so of C code to make a basic algorithm to do it.
Hope this helps - it's been quite a while since I played with this
stuff.
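To make the transform / pointwise-multiply / transform-back scheme concrete,
here is a rough sketch of my own, in Python rather than C, over the complex
field. A serious implementation (per Pollard or Schoenhage-Strassen) would
transform over a finite field instead, avoiding floating-point round-off.

```python
# Sketch: multiply integers by FFT convolution of their base-10 digits.
import cmath

def fft(a, invert=False):
    """Recursive Cooley-Tukey FFT; len(a) must be a power of two."""
    n = len(a)
    if n == 1:
        return a[:]
    even = fft(a[0::2], invert)
    odd = fft(a[1::2], invert)
    sign = 1 if invert else -1
    out = [0j] * n
    for k in range(n // 2):
        w = cmath.exp(sign * 2j * cmath.pi * k / n)
        out[k] = even[k] + w * odd[k]
        out[k + n // 2] = even[k] - w * odd[k]
    return out

def fft_multiply(x, y):
    """Multiply nonnegative integers by convolving their digit strings."""
    a = [complex(int(d)) for d in str(x)][::-1]   # least significant first
    b = [complex(int(d)) for d in str(y)][::-1]
    n = 1
    while n < len(a) + len(b):
        n *= 2
    fa = fft(a + [0j] * (n - len(a)))             # transform
    fb = fft(b + [0j] * (n - len(b)))
    conv = fft([u * v for u, v in zip(fa, fb)], invert=True)  # back-transform
    # Round away floating-point noise; Python's big integers absorb carries.
    return sum(round(c.real / n) * 10 ** i for i, c in enumerate(conv))

print(fft_multiply(12345, 6789))   # 83810205
```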
- Jim
|
1406.2 | Fermat number trivia | GUESS::DERAMO | Dan D'Eramo | Thu Mar 28 1991 18:17 | 27 |
| If a number of the form 2^k + 1 is prime (k = 1,2,3,...),
then it can be shown fairly easily that k must be of the
form k=2^n. When you consider the converse, you get to
the Fermat numbers. These are the numbers of the form
            F(n) = 2^(2^n) + 1,    n = 0, 1, 2, 3, 4, ...
Fermat conjectured that they are all prime. The first
five -- F(0), F(1), F(2), F(3), and F(4) -- are prime.
But all other F(n) for which primality vs compositeness
has been determined have turned out to be composite. I
believe it is not yet known if the number of prime F(n)
is finite or infinite, or even if it is greater than
five.
I think if n does not equal m then F(n) and F(m) are
relatively prime, which gives another proof that there
are infinitely many primes. Just let G(n) be the smallest
prime factor of F(n) [F(n) itself if it is prime]; now
consider G(0), G(1), G(2), ....
I think it has also been shown that the prime factors of
F(n) are all of the form (2^(n+2))m + 1.
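These claims are easy to spot-check with a few lines of Python (my sketch;
F(5) = 641 * 6700417, with the factor 641 found by Euler, is the classic
counterexample to Fermat's conjecture):

```python
# Spot-check: pairwise coprimality of Fermat numbers, and the form of
# F(5)'s known factor 641.
from math import gcd

def fermat(n):
    return 2 ** (2 ** n) + 1              # F(n) = 2^(2^n) + 1

# Distinct Fermat numbers are pairwise relatively prime:
assert all(gcd(fermat(i), fermat(j)) == 1
           for i in range(7) for j in range(i + 1, 7))

# F(5) is composite, and its factor 641 has the form (2^(n+2))*m + 1:
assert fermat(5) % 641 == 0
assert 641 == 2 ** (5 + 2) * 5 + 1        # n = 5, m = 5
print("all checks pass")
```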
Dan
|
1406.3 | Trivium du jour | VMSDEV::HALLYB | The Smart Money was on Goliath | Fri Mar 29 1991 09:20 | 7 |
| > I think it has also been shown that the prime factors of
> F(n) are all of the form (2^(n+2))m + 1.
Yes. Note that m need not be odd, and m > 1 always. See, for example,
Math. Comp. Vol. 29 # 129 (Jan 1975), pp. 109-112.
John
|
1406.4 | | HANNAH::OSMAN | see HANNAH::IGLOO$:[OSMAN]ERIC.VT240 | Fri Mar 29 1991 12:17 | 18 |
| > These are the numbers of the form
>
>             F(n) = 2^(2^n) + 1,    n = 0, 1, 2, 3, 4, ...
>
> Fermat conjectured that they are all prime. The first
> five -- F(0), F(1), F(2), F(3), and F(4) -- are prime.
> But all other F(n) for which primality vs compositeness
> has been determined have turned out to be composite.
So Fermat was wrong?
|
1406.5 | | GUESS::DERAMO | Dan D'Eramo | Fri Mar 29 1991 12:35 | 8 |
| re .4 "So Fermat was wrong ?"
It does turn out to be the case that the intricate nature
of the primality or lack thereof of the members of the
given sequence is indeed more complex than anticipated by
the person for whom it is named.
Dan
|
1406.6 | installment 1 | EAGLE1::BEST | R D Best, sys arch, I/O | Tue Apr 09 1991 19:49 | 35 |
| Extracted without permission from 'Science News', 6-oct-1990
------------------------------------------------------------
'Little Fermat: A scheme for speeding up multiplication leads to
a unique computer' by Ivars Peterson
Multiplying million-digit numbers takes time -- lots of time.
Even today's supercomputers are poorly equipped for efficient, error-free
number crunching on such a scale. Nonetheless, many mathematical and
scientific applications, from identifying prime numbers to modeling weather
patterns, require large-number computations.
Is there a faster way of multiplying gigantic numbers? Nearly four years
ago, M.M. (Monty) Denneau of the IBM Thomas J. Watson Research Center in
Yorktown Heights, N.Y., and mathematicians David V. and Gregory V. Chudnovsky of
Columbia University in New York City decided there really is, and they designed
a new machine to prove their point.
The resulting computer, painstakingly assembled from commercially available
parts by MIT graduate student Saed G. Younis, now stands nearly 6 feet tall in a
laboratory, ready to take on the world. Dubbed "Little Fermat" after the
17th-century French mathematician Pierre de Fermat, it works with instructions
and data expressed in 257-bit "words" and uses a special kind of arithmetic
based on so-called Fermat numbers. These characteristics clearly differentiate
the new machine from conventional computers.
"Little Fermat is a high-performance general-purpose scientific computer,"
David Chudnovsky says. Its novel features make it particularly efficient for
solving a variety of numerical problems ordinarily plagued with errors
because of the way conventional computers express and round off numbers.
"There's no machine like it in the world," Gregory Chudnovsky asserts. Indeed,
he adds, Little Fermat vividly demonstrates the kind of capabilities that could
enhance the performance of future supercomputers.
[ to be continued .. ]
|
1406.7 | installment 2 | EAGLE1::BEST | R D Best, sys arch, I/O | Tue Apr 16 1991 17:08 | 74 |
| Extracted without permission from 'Science News', 6-oct-1990
------------------------------------------------------------
'Little Fermat: A scheme for speeding up multiplication leads to
a unique computer' by Ivars Peterson
[ continues ... ]
Using pencil and paper, human beings can add, subtract, multiply, or divide
numbers of any length, albeit slowly. Computers, on the other hand, are
designed to manipulate numbers of a fixed length. For instance, simple personal
computers typically work with digit strings, or words, that consist of eight
digits, or bits, each bit being a one or a zero. Today's most advanced
supercomputers handle 64-bit words.
By using longer words, a computer can calculate with greater precision and
make finer distinctions when converting, say, an audio signal into strings of
digits. For example, an 8-bit signal processor divides an audio signal into at
most 256 intensity levels, providing a relatively crude approximation of the
original waveform. In contrast, a 16-bit signal processor -- the sort used to
record music on compact disks -- samples many times more levels, producing
digital audio signals of significantly higher quality and much less distortion.
In scientific computations, the loss of precision caused by using shorter
words can have serious consequences. Many physical processes, such as the flow
of water past a ship's hull, are full of inherent instabilities. When a
computer simulates such processes, it must perform trillions of arithmetic
operations. Even a slight inaccuracy in the description of how a physical
system changes over time, or in rounding off numbers during a computation, can
lead to the wrong answer.
But the penalty for increased word length is a corresponding increase in the
amount of circuitry and wires needed to build the computer and in the time the
computer takes to execute an instruction. Schemes that allow small-word
computers to handle longer words circumvent the problem, but such hybrid
operations generally prove astonishingly slow and cumbersome.
To fit more numbers into a given word length, computer scientists over the years
have developed special formats for representing decimal or real numbers in a
computer, along with specific rules for rounding off or truncating such numbers
to make sure they stay within the assigned word length. Most computers now use
such "floating-point arithmetic" schemes for representing and manipulating
numbers. But small errors inherent in the way real numbers are represented in
a computer can accumulate, sometimes causing major precision problems in
numerical calculations.
Number theory offers a way to rid calculations of these intrinsic errors by
combining a special procedure called modular arithmetic with a set of numbers
known as Fermat numbers.
In modular arithmetic, only remainders left over after division of one whole
number by another count. For example, suppose the divisor, or modulus, happens
to be 5. Dividing 5 into a given whole number produces a certain remainder,
which constitutes the answer. Thus, dividing 5 into 7 or 12 produces the same
answer - the remainder 2.
Fermat numbers have the form 2**x + 1 where x = 2**n. When n=0, the first
Fermat number, F0, is 3; when n=1, the second Fermat number, F1, is 5;
similarly, F2=17; and so on (SN: 6/23/90, p. 389). Using a Fermat number as the
divisor in modular arithmetic provides a handy way of speeding up certain types
of calculations and circumvents the need to deal with real numbers.
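As an illustration of how that works (mine, not the article's): a tiny
number-theoretic transform modulo the Fermat prime F(2) = 17 convolves
integer sequences exactly, with no floating point anywhere.

```python
# Length-4 number-theoretic transform modulo the Fermat prime F(2) = 17.
# Everything is integer arithmetic mod 17, so the convolution is exact --
# no round-off -- provided every result coefficient is below the modulus.
M = 17                        # F(2) = 2^(2^2) + 1, a Fermat prime
W = 13                        # 13 = 3^4 is a primitive 4th root of unity mod 17

def ntt(a, w):
    """Naive O(n^2) transform: A[k] = sum_j a[j] * w^(j*k) mod M."""
    n = len(a)
    return [sum(a[j] * pow(w, j * k, M) for j in range(n)) % M
            for k in range(n)]

def cyclic_convolution(a, b):
    fa, fb = ntt(a, W), ntt(b, W)
    pointwise = [(x * y) % M for x, y in zip(fa, fb)]
    w_inv = pow(W, M - 2, M)             # inverses via Fermat's little theorem
    n_inv = pow(len(a), M - 2, M)
    return [(c * n_inv) % M for c in ntt(pointwise, w_inv)]

# (1 + 2x) * (3 + 4x) = 3 + 10x + 8x^2; all coefficients < 17, so exact:
print(cyclic_convolution([1, 2, 0, 0], [3, 4, 0, 0]))   # [3, 10, 8, 0]
```

Little Fermat's 257-bit words play the role of the modulus F(8) = 2^256 + 1
here, giving room for far larger exact coefficients.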
In 1975, James H. McClellan of MIT's Lincoln Laboratory in Lexington, Mass.,
built a digital signal processing device based on Fermat arithmetic,
demonstrating that the electronic circuitry needed to do modular arithmetic
based on Fermat numbers can operate faster than the circuitry used for
performing real-number operations. Thus [? /rdb], the answer is always exact
and correct, provided that it's less than the Fermat number used in the
operations.
Little Fermat's answer to achieving faster multiplication while avoiding the
errors associated with floating-point arithmetic is to combine increased word
length with numerical recipes, based on modular arithmetic and Fermat numbers.
[ to be continued .. ]
|
1406.8 | installment 3 | EAGLE1::BEST | R D Best, sys arch, I/O | Wed Apr 17 1991 12:51 | 45 |
| Extracted without permission from 'Science News', 6-oct-1990
------------------------------------------------------------
'Little Fermat: A scheme for speeding up multiplication leads to
a unique computer' by Ivars Peterson
[ continues ... ]
Armed with these key ideas, Denneau and the Chudnovsky brothers prepared a
flow chart, then a detailed design for a machine capable of rapid, error-free
multiplication of large numbers. Then the real headaches began.
Commercially available integrated-circuit components limited the word size
to 257 bits. Wiring constraints restricted the size of the boards on which the
electronic parts could be mounted. Instead of laying out the computer on a
single circuit board, the designers had to break up the circuitry to fit onto
six boards -- each a square 25.6 inches wide, densely packed with chips and
covered with a rat's nest of connecting wires.
Before Younis could set the first chip in place, the researchers had to check
their design for flaws. The trouble was that they had designed Little Fermat to
have capabilities exceeding those of any conventional computer that could be
used to simulate the way its logic worked. In the end, they had to settle for
testing their design in pieces, never as a complete unit.
"Even then, it was a staggering task," Gregory Chudnovsky says.
Younis spent more than a year building the computer, then roughly another year
testing the completed machine to correct all the assembly and design defects
that he found. The biggest assembly problems involved the 82,500 individual
wires (totaling about 5 miles) connecting 6,700 integrated-circuit chips and
other components.
Those problems ranged from chips that sporadically continued working even when
no electrical power reached them to wires that shrank and disconnected when they
cooled after the machine was turned off. And because the computer was designed
for rapid calculation, and electronic signals travel at finite speed, even wire
length became an important consideration. The most nightmarish defects --
especially those that made their presence felt intermittently -- took weeks to
track down, but Younis persisted.
"Now it's running," David Chudnovsky says. "Rarely has a hardware project of
such magnitude been carried through to its completion by a single man. It was
an unbelievable achievement."
[ to be continued .. ]
|
1406.9 | last installment | EAGLE1::BEST | R D Best, sys arch, I/O | Wed Apr 17 1991 19:37 | 41 |
| Extracted without permission from 'Science News', 6-oct-1990
------------------------------------------------------------
'Little Fermat: A scheme for speeding up multiplication leads to
a unique computer' by Ivars Peterson
[ continues ... ]
To compute with Little Fermat, a user writes a program in a language now
called Younis. That language provides a set of instructions expressed in
240-bit chunks, which can be combined in various ways to perform a number of
functions. A personal computer attached to Little Fermat loads the program into
the machine, monitors the computation, and unloads and displays the results when
the computation is finished.
"We are now checking [Little Fermat's] performance," Gregory Chudnovsky says.
"We have to be sure it does what we want it to do. And we would be happy to
find someone interested in programming the machine for a particular
application."
So far, the Chudnovskys have used Little Fermat primarily for computations in
number theory that involve gargantuan numbers -- searching for prime Fermat
numbers, factoring large numbers and testing whether certain huge numbers are
primes.
But the machine's special characteristics make it ideal for digital signal and
image processing, as well as for solving the differential equations used by
researchers modeling the behavior of physical systems. Such computational
problems regularly surface in aerodynamics, hydrodynamics, chemistry,
geophysics and many other disciplines.
Only one Little Fermat exists today, but that's more than can be said for the
many other new computer designs that never made it to the hardware stage,
instead remaining "paperware" -- described in a paper but never built. "This
machine is alive and well and working," David Chudnovsky says. "It's real."
"We showed it can be done," Gregory Chudnovsky says. "Even if it remains a
one-of-a-kind machine, Little Fermat stands as a demonstration of what should be
added to a supercomputer to improve its performance. It would be very cheap to
put additional Fermat circuitry into future supercomputers."
[ end. ]
|