| No, division is not done using floating point by default. (In fact,
at this particular moment, integer division is *never* done using floating
point, though that could well change someday, particularly for 32b
operands, in which case this option would inhibit that transform. Note that
the largest FP mantissa is 53 bits, which isn't sufficient for arbitrary
64b integer division...)
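To make the 53-bit point concrete, here is a small C sketch
(illustrative only, not from the original mail). The first integer a
double cannot represent exactly is 2^53 + 1, so a 64b division routed
through FP can quietly compute with the wrong operand:

    #include <stdio.h>
    #include <stdint.h>

    int main(void)
    {
        uint64_t a = (1ULL << 53) + 1;  /* first integer a double can't hold */
        double   d = (double)a;         /* rounds to exactly 2^53 */

        printf("a        = %llu\n", (unsigned long long)a);
        printf("(u64)d   = %llu\n", (unsigned long long)d);
        printf("a / 3    = %llu\n", (unsigned long long)(a / 3));
        printf("d / 3.0  = %llu\n", (unsigned long long)(d / 3.0));
        return 0;
    }

On an IEEE machine the last two lines print different quotients, which
is exactly why FP can't stand in for arbitrary 64b integer division.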
The most important things this does are:
1) Turns on warnings whenever any FP instructions are generated.
(This can be useful for detecting accidental introduction
of FP code by the user; see the sketch below.)
2) Prevents the compiler from silently inserting floating point
ops. (For example, we could theoretically do memory copies
using FP regs, but we don't today.) One thing that does happen
today is that it shuts off the use of floating point nops for
aligning code in the instruction stream. Another that can show
up is the use of certain types of floating point loads as
prefetches.
The most typical use for this today is compiling code for use in OS
kernels, etc., where the code may not be able to rely on the state
of the FPU.
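As a concrete illustration (not from the original mail, and the option
under discussion isn't named here): GCC's -mgeneral-regs-only plays a
similar role on some targets, turning accidental FP like the 'double'
below into a compile-time diagnostic instead of silently generated FP
instructions:

    /* Kernel-style code that accidentally introduces FP.  Built in a
     * no-FP mode (e.g. GCC's -mgeneral-regs-only, used here only as an
     * analogue of the option discussed above), the 'double' arithmetic
     * is diagnosed at compile time rather than faulting at run time.
     */
    unsigned long scale_ticks(unsigned long ticks)
    {
        double factor = 1.5;                    /* accidental FP */
        return (unsigned long)(ticks * factor);
    }

    /* An all-integer rewrite that a no-FP build accepts: */
    unsigned long scale_ticks_int(unsigned long ticks)
    {
        return ticks + ticks / 2;               /* ticks * 1.5, no FPU */
    }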
|
| Just to add to what Kent said, code in the OS is built this way for
a couple of reasons. First, we avoid floating point faults and the
need to handle them. Second, a process running on the system that
has not used floating point does not need its floating point
registers saved and restored during context switches, so context
switching is faster. It would be a lousy thing to do to a process
if the OS itself used floating point instructions in that process's
context and thereby slowed down its context switching. Now, if the
application running in the process is doing floating point work,
that's a different matter. VMS tries to save and restore the
floating point registers during a context switch only when it has to.
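Here is a minimal C sketch of that lazy save/restore, using
hypothetical names throughout (fpu_state_t, fpu_save, fpu_disable,
and friends are placeholders, not actual VMS routines):

    /* Lazy FP save/restore: FP registers are saved and restored only
     * for threads that have actually touched the FPU.  All names here
     * are hypothetical placeholders.
     */
    typedef struct { unsigned long regs[32]; } fpu_state_t;

    extern void fpu_save(fpu_state_t *s);
    extern void fpu_restore(const fpu_state_t *s);
    extern void fpu_disable(void);  /* make the next FP instruction trap */
    extern void fpu_enable(void);

    struct thread {
        int         used_fpu;   /* has this thread executed FP code? */
        fpu_state_t fpu;        /* saved FP registers, if used_fpu   */
    };

    void context_switch(struct thread *prev, struct thread *next)
    {
        if (prev->used_fpu)
            fpu_save(&prev->fpu);      /* pay the cost only when needed */

        if (next->used_fpu)
            fpu_restore(&next->fpu);
        else
            fpu_disable();             /* trap on first FP use by next  */

        /* ... switch integer registers, stack, address space ... */
    }

    /* The first FP instruction after fpu_disable() lands here. */
    void fpu_first_use_trap(struct thread *current)
    {
        current->used_fpu = 1;         /* save/restore its FP from now on */
        fpu_enable();
    }

Disabling the FPU for FP-free threads is what makes their first FP
instruction trap, so the kernel never pays the save/restore cost for
processes that never touch floating point.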
|