
Conference turris::fortran

Title: Digital Fortran
Notice: Read notes 1.* for important information
Moderator: QUARK::LIONEL
Created: Thu Jun 01 1995
Last Modified: Fri Jun 06 1997
Last Successful Update: Fri Jun 06 1997
Number of topics: 1333
Total number of notes: 6734

1150.0. "Performance query." by COMICS::EDWARDSN (Dulce et decorum est pro PDP program) Mon Jan 27 1997 09:19

My gut feeling is that this is a Unix tuning issue and has nothing to do with 
Fortran at all, but the customer is adamant that it is a Fortran optimizer
problem. What's worse is the presence of the NAG libraries, which makes a 
reproducer somewhat difficult.

Does anyone have any ideas as to what could be the cause?
I've mentioned that he may want to use a version of the RTL higher than 371.

Customer has the following problem:

Case 1:
        f77 v3.8 Unix 3.2C 300MHz 8400 AXP
        All modules compiled f77 -fast -tune ev5
        They are then linked against -ldxml -lnag

        When arrays are 128 long:

        Real 9.1
        User 8.1
        Sys  0.5

        When arrays are 250 long:

        Real 35.0
        User 8.9
        Sys  24.1

        The test input files are the same and the result of the
        calculation is the same. The timings are reliable.

Case 2:
        f77 X4.0-43 Unix 3.2C DEC 7000.

        128 long arrays:
        Real 23.6
        User 20.3
        Sys   3.1

        250 long arrays:
        Real 19.8
        User 18.9
        Sys   0.7

Case 3:
        Same executable as above, run on the 8400 AXP by a user:

        Real 36.3
        User 8.9
        Sys  25.0

        Run from the root account:

        Real 20.1
        User 19.2
        Sys  0.7

Case 4:
        f77 X4.0-43 Unix 3.2C 8400 AXP

        128 long:
        Real 9.2
        User 8.1
        Sys  0.5

        250 long:

        Real 83.0
        User 8.9
        Sys  66.7

Further discussion.
If all modules are compiled -O0, then no large sys time is produced
on the 8400; the user time increases as would be expected.
If optimization is turned on one source file at a time, then a point
is reached where the large system times occur. However, it can be
ANY module that triggers the new behaviour.
If the sources are concatenated and compiled using the -O4 flag, then
everything seems to be OK, i.e. there is no excessive system time
being used.
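That per-module bisection can be sketched as a dry run in sh (the f77 flags
are the customer's; the module names are invented for illustration):

```shell
# Dry run: print the compile commands for each trial, optimizing one
# module at a time while the rest stay at -O0. Replace the echoes with
# the real compile, link, and /bin/time steps to run it for real.
for trig in *.f; do
    echo "=== optimizing only $trig ==="
    for src in *.f; do
        if [ "$src" = "$trig" ]; then
            echo f77 -c -fast -tune ev5 "$src"
        else
            echo f77 -c -O0 "$src"
        fi
    done
done
```

Since any single module can trip the behaviour, logging each trial's sys time
this way at least pins down whether it really is module-independent.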

Please help with any pointers, even to somewhere else that I could look.
At the end of my tether,

Neil.
1150.1. "Limits?" by HPCGRP::MANLEY Mon Jan 27 1997 10:40
Are your "limits" the same on both systems?

>        f77 X4.0-43 Unix 3.2C DEC 7000.
>        f77 X4.0-43 Unix 3.2C 8400 AXP

1150.2. "A few hints" by TLE::EKLUND (Always smiling on the inside!) Mon Jan 27 1997 11:07
    	Try -check underflow on all the sources (or at
    least on those you have).  When you increase the
    size of the arrays, just about anything might happen.
    I presume that this increases the size of the "problem"
    being solved, and NOT just increases the size of the
    arrays.  The big question is where all the system
    time is coming from.  Could be underflow handling.
    Changing the size of the arrays should NOT change the
    generated code (although I'd be interested if changing
    the 250 to 256 has an interesting effect).  It should
    merely change either the SIZE of the problem, or the
    memory layout (cache problems introduced?).
    
    	When you say that the arrays are 128 and 250, are they
    128 x 128 and 250 x 250 (or are they really vectors of
    length 128 and 250)?  There's a big difference!
    
    Cheers!
    Dave Eklund
    
1150.3. "More info, ahem..." by COMICS::EDWARDSN (Dulce et decorum est pro PDP program) Tue Jan 28 1997 09:43
Well. This is the information which came out of a bit of prodding and
to my mind has something of the answer contained in it, but then I'm 
not much more than an Ada programmer and know nothing of the finer 
points of Unix performance etc.

Basically, the two machines are different: one has 5x as much memory as
the other. This sounds promising in terms of where the time is being
spent.

If these figures mean more to you than they do to me, then you may 
be able to glean more information out of them than I did, but those 
are my thoughts so far.

He has also discovered that if the executable is built on the slower 
machine -non_shared, then the sys time is back to 0.7 again with 
the 250 long array.

He also mentions that there are some arrays in there which are declared
as "a real mix".

e.g. REAL*8 (250), REAL*8(250,3) or even REAL*8 (3*250,250,3)
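A back-of-envelope footprint for just that last shape, REAL*8 (3*N,N,3), at
8 bytes per element (my arithmetic, not the customer's figures):

```shell
# Bytes occupied by a single REAL*8(3*N,N,3) array at the two build sizes.
for N in 128 250; do
    echo "N=$N: $(( 3*N * N * 3 * 8 )) bytes"
done
```

That is 1179648 bytes at N=128 but 4500000 bytes at N=250, per array, so a
build with many such arrays grows by a large factor between the two sizes.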

From this we can assume that the memory requirements of the running 
program are going to be VASTLY different between one machine and the other,
which is why my finger has pointed at:


DEC 7000   i.e. the fast machine

ulimit -a gives:

	time(seconds)        unlimited
	file(blocks)         unlimited
	data(kbytes)         131072	
	stack(kbytes)        2048
>>> 	memory(kbytes)       508448
	coredump(blocks)     unlimited
	nofiles(descriptors) 4096
	vmemory(kbytes)      1048576

 PROGRAM SHELL 4.1
real   21.3
user   18.8
sys    0.8


Alphaserver 8400


ulimit -a  gives:

	time(seconds)        1800
	file(blocks)         unlimited
	data(kbytes)         131072
	stack(kbytes)        2048
>>> 	memory(kbytes)       102400
	coredump(blocks)     unlimited
	nofiles(descriptors) 4096
	vmemory(kbytes)      2097152


with the finger of doom (>>>) pointing directly at the difference
worth noting, since increasing the datasize to be the same as the
7000's makes absolutely no difference to the performance of the
machine which exhibits this slow behaviour.

Now all I need is a means of convincing him that the memory is what is
causing the problem.
I have told him to decrease the memoryuse limit on the 7000 to be the
same as the 8400's; this should cause the 7000 to start to swap rather
than gobble up as much memory as it needs.
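For the record, that experiment looks something like this (sh syntax; the
csh equivalent is "limit memoryuse 102400"). The subshell keeps the lowered
limit from sticking to the login session:

```shell
# Lower the memoryuse soft limit to the 8400's 102400 KB for one run.
(
    ulimit -S -m 102400    # KB; soft limit only, scoped to this subshell
    ulimit -S -m           # confirm the new limit took effect
    # ./shell.250.shared < test.dat    # reproducer would be run here
)
```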

Does any of this make sense?


Neil.

1150.4. "Yet more info - please!" by PERFOM::HENNING Tue Jan 28 1997 11:22
    Greetings
    
    A curious set of circumstances, indeed.  It would be of interest to
    find out more.  It *appears* that the system time goes through the roof 
    but it is not clear to me whether this is because of a lack of physical
    memory or a lack of virtual memory.
    
    If the customer is willing, please ask him/her to do the following:
    
    1)  ON BOTH MACHINES:
        Become superuser
        more /var/adm/messages
        go down to bottom (just press G) then scroll backwards looking
            for most recent boot record.
        What does the boot section say about exact unix version, exact
            hardware model number, physical memory, and firmware?  E.g. -

Jun  6 17:07:44 dstant vmunix: DEC OSF/1 V3.2 (Rev. 214); Tue Jun  6 17:06:33 
								EDT 1995 
Jun  6 17:07:44 dstant vmunix: physical memory = 128.00 megabytes.
Jun  6 17:07:44 dstant vmunix: available memory = 115.97 megabytes.
Jun  6 17:07:44 dstant vmunix: using 483 buffers containing 3.77 megabytes of memory
Jun  6 17:07:44 dstant vmunix: AlphaStation 200 4/233 system
Jun  6 17:07:44 dstant vmunix: Apecs pass II Sio rev II
Jun  6 17:07:44 dstant vmunix: Firmware revision: 4.2
Jun  6 17:07:44 dstant vmunix: PALcode: OSF version 1.35

    2) Give the images some sort of mnemonic names, like "myexe.f38" and
       "myexe.f40" and maybe "myexe.f40_Optimize0".  Tell us what the names
       mean.  Add a little C shell wrapper that includes the results of 
       the vmstat, w, and limit commands and which uses the time command, 
       looking something like this:
    
          cat > runem.csh
          set verbose
          vmstat 2 3
          limit
          uptime
          time ./myexe.f38
          time ./myexe.f40
          time ./myexe.f38
          time ./myexe.f40
          unlimit
          limit
          uptime
          time ./myexe.f38
          time ./myexe.f40
          time ./myexe.f38
          time ./myexe.f40
          uptime
    
       The C-shell is picked here because of its informative variant of
       the time command.  The programs are run twice just to get a
       primitive reading on run-to-run variation.  They are run both before
       and after an "unlimit" command to get some sensitivity to resource
       limits.  The uptime command summarizes CPU activity on the system.
    
       Run this little script *BOTH* as an ordinary user and as superuser,
       on *BOTH* machines, with "script" watching, and post the results.
       I.e. do something like this:
    
            ftp> put both exes to both machines
            ftp> put the cshell script to both machines
            script
            csh runem.csh
            su
            csh runem.csh
            rlogin other_sys
            csh runem.csh
            su
            csh runem.csh
            exit
            repeat "exit" until it says "Script done, file is typescript"
            Email typescript to digital
    
    Feel free to forward this note to the customer, including my email
    address, if you wish.
    
    /John Henning
     CSD Performance Group
     Digital Equipment Corporation
     [email protected]
     Speaking for myself, not Digital
    
1150.5. "It's gone awfully quiet." by COMICS::EDWARDSN (Dulce et decorum est pro PDP program) Thu Jan 30 1997 08:12
The customer has gone awfully quiet after I pointed out 
that they have 5x more memory in one machine than in the other.
Maybe they are trying a little harder to diagnose the problem
themselves.
I'll post what I get back. I don't like passing e-mail addresses
to customers; they can tend to be over-familiar in their use at
a later date, and then complain that the service they get isn't
quite what it's supposed to be when they don't bother giving all
the information to the Support Centre because we've been
short-circuited.
Hey ho, sigh,

more soon, or maybe not at all,

Neil.
1150.6. "There's more, unfortunately." by COMICS::EDWARDSN (Dulce et decorum est pro PDP program) Mon Mar 10 1997 05:58
OK, the customer has diagnosed that the slowness has come about since the
change from DXML 3.1 to DXML 3.3... which makes me wonder if I should be
raising this in the DXML conference rather than here. I can do that soon.
The facts of the matter are that now that he has upgraded the DXML on the
machine which ran faster, they both exhibit the same performance hit.
So it looks as if there has been a significant reduction in performance
between DXML 3.1 and 3.3 - maybe, maybe not. I've read things about DXML
which point fingers at its use of large amounts of memory, and I'm
wondering if this is a symptom.

I have a tar set of something which demonstrates the problem as seen by the
user, with some associated data files and shell script outputs.

If someone could pick this up I would be grateful.
I don't have the kind of machinery here.

Neil.

anonymous ftp from alexei.uvo.dec.com /pub/cclrc/mbt.tar

1150.7. "Increase datasize limit ..." by HPCGRP::MANLEY Mon Mar 10 1997 14:01
I pulled the tar file and built the application. I set my limits as follows:

	time(seconds)        unlimited
	file(blocks)         unlimited
	data(kbytes)         131072
	stack(kbytes)        2048
	memory(kbytes)       508448
	coredump(blocks)     0
	nofiles(descriptors) 4096
	vmemory(kbytes)      2097152

entered this command:

	/bin/time shell < test.dat

and got this output:

Scream.Eng.PKO.DEC.Com> /bin/time shell < test.dat
4285:./shell: /sbin/loader: Fatal Error: set_program_attributes failed to set heap start

real   0.0
user   0.0
sys    0.0

Then, trying to aggravate paging, I set limits to:

	time(seconds)        unlimited
	file(blocks)         unlimited
	data(kbytes)         262144
	stack(kbytes)        2048
	memory(kbytes)       0
	coredump(blocks)     0
	nofiles(descriptors) 4096
	vmemory(kbytes)      2097152

re-entered the command:

 	/bin/time shell < test.dat

and got this output:

 PROGRAM SHELL 4.1

real   9.2
user   8.7
sys    0.4

rebuilt the application using the -non_shared option and re-ran, getting:

 PROGRAM SHELL 4.1

real   7.9
user   7.4
sys    0.4

This program was built and run on a 8400 5/300 system using DXML 3.3 running
Digital UNIX V4.0A.

So, I can't reproduce the problem. Now what?

I strongly encourage the user to increase the datasize limit. Using the 128K
datasize limit with coredumpsize set unlimited, the non_shared version of the
program core dumps - producing very bad "real" and "sys" run time numbers.
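The suggestion, sketched in sh terms (csh: "limit datasize 262144"; the value
is the 262144 KB used in the second run above, and raising the soft limit
assumes the hard limit permits it):

```shell
# Raise the datasize soft limit to 262144 KB before running the program.
(
    ulimit -S -d 262144    # KB; scoped to this subshell
    ulimit -S -d           # confirm
    # /bin/time shell < test.dat    # reproducer would be run here
)
```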

1150.8. "Oh dear, my brain has just exploded." by COMICS::EDWARDSN (Dulce et decorum est pro PDP program) Wed Mar 12 1997 06:11
I'm lost. I suppose I should be used to that by now, but no.

Here's an output of what you requested in .4
I hope that it makes some sense to you. I'm in over my head.

Neil.


>    1)  ON BOTH MACHINES:
>        Become superuser
>        more /var/adm/messages
>        go down to bottom (just press G) then scroll backwards looking
>            for most recent boot record.
>        What does the boot section say about exact unix version, exact
>            hardware model number, physical memory, and firmware?  E.g. -

On the Alphaserver 8400:
Feb 12 09:57:32 columbus vmunix: Digital UNIX V3.2C  (Rev. 148); Wed Nov 13 12:06:06 GMT 1996
Feb 12 09:57:32 columbus vmunix: physical memory = 2048.00 megabytes.
Feb 12 09:57:32 columbus vmunix: available memory = 2007.95 megabytes.
Feb 12 09:57:32 columbus vmunix: using 7855 buffers containing 61.36 megabytes of memory
Feb 12 09:57:32 columbus vmunix: Firmware revision: 4.1
Feb 12 09:57:32 columbus vmunix: PALcode: OSF version 1.21
Feb 12 09:57:32 columbus vmunix: AlphaServer 8400 Model 5/300
Feb 12 09:57:32 columbus vmunix: Master cpu at slot 0.

On the DEC 7000:
Jan 29 15:11:23 magellan vmunix: Digital UNIX V3.2C  (Rev. 148); Thu Oct 24 16:14:14 BST 1996
Jan 29 15:11:23 magellan vmunix: physical memory = 512.00 megabytes.
Jan 29 15:11:23 magellan vmunix: available memory = 496.53 megabytes.
Jan 29 15:11:23 magellan vmunix: using 1958 buffers containing 15.29 megabytes of memory
Jan 29 15:11:23 magellan vmunix: Firmware revision: 4.3
Jan 29 15:11:23 magellan vmunix: PALcode: OSF version 1.35
Jan 29 15:11:23 magellan vmunix: DEC 7000 system
Jan 29 15:11:23 magellan vmunix: cpu at node 0


>    2) Give the images some sort of mnemonic names, like "myexe.f38" and
>       "myexe.f40" and maybe "myexe.f40_Optimize0".  Tell us what the names
>       mean.  Add a little C shell wrapper that includes the results of
>       the vmstat, w, and limit commands and which uses the time command,
>       looking something like this:

Images are:
shell.128.shared    - size parameter set to 128, optimize is -O, shared libraries
shell.128.static    - size parameter set to 128, optimize is -O, static libraries (-non_shared)
shell.250.shared    - size parameter set to 250, optimize is -O, shared libraries
shell.250.static    - size parameter set to 250, optimize is -O, static libraries (-non_shared)
shell.250.shared_o1 - size parameter set to 250, optimize is -O1, shared libraries


On the 8400:
Script started on Wed Mar 05 10:18:45 1997
% csh runem.csh
vmstat 2 3
Virtual Memory Statistics: (pagesize = 8192)
  procs    memory         pages                          intr        cpu
  r  w  u  act  free wire fault cow zero react pin pout  in  sy  cs  us  sy  id
 11150 23   63K 160K  32K 5664M  21M 251M 190M  26M 240K 160  2K 778  81   5  15

 13155 22   71K 151K  33K 4525  465 3338   16  514    0  45  1K 589  77   8  16
 14154 22   74K 149K  33K 8413    2 8409   21    4    0   8 664 353  82   3  15
limit
cputime  unlimited
filesize  unlimited
datasize  131072 kbytes
stacksize  2048 kbytes
coredumpsize  unlimited
memoryuse  2056144 kbytes
descriptors  4096 files
addressspace  2097152 kbytes
uptime
10:18  up 21 days, 26 mins,  22 users,  load average: 5.20, 5.23, 5.16
time ./shell.128.shared < test.dat
 PROGRAM SHELL 4.1
8.02u 0.46s 0:08 94% 0+15k 0+0io 0pf+0w
time ./shell.128.static < test.dat
 PROGRAM SHELL 4.1
7.94u 0.41s 0:09 90% 0+14k 0+0io 0pf+0w
time ./shell.250.shared < test.dat
 PROGRAM SHELL 4.1
8.48u 71.50s 1:26 92% 0+24k 0+0io 0pf+0w
time ./shell.250.static < test.dat
Insufficent memory to open Fortran RTL message catalog, message #41.
0.00u 0.05s 0:00 12% 0+0k 0+0io 0pf+0w
time ./shell.250.shared_o1 < test.dat
 PROGRAM SHELL 4.1
12.70u 0.69s 0:15 88% 0+26k 0+0io 0pf+0w
unlimit
limit
cputime  unlimited
filesize  unlimited
datasize  2097152 kbytes
stacksize  32768 kbytes
coredumpsize  unlimited
memoryuse  2056144 kbytes
descriptors  4096 files
addressspace  2097152 kbytes
uptime
10:20  up 21 days, 28 mins,  22 users,  load average: 6.17, 5.99, 5.99
time ./shell.128.shared < test.dat
 PROGRAM SHELL 4.1
8.15u 0.50s 0:08 97% 0+15k 0+0io 0pf+0w
time ./shell.128.static < test.dat
 PROGRAM SHELL 4.1
7.96u 0.41s 0:12 68% 0+13k 0+0io 0pf+0w
time ./shell.250.shared < test.dat
 PROGRAM SHELL 4.1
8.74u 73.43s 1:25 95% 0+24k 0+0io 0pf+0w
time ./shell.250.static < test.dat
 PROGRAM SHELL 4.1
8.09u 0.50s 0:12 70% 0+24k 0+0io 0pf+0w
time ./shell.250.shared_o1 < test.dat
 PROGRAM SHELL 4.1
12.50u 0.50s 0:13 96% 0+26k 0+0io 0pf+0w
uptime
10:23  up 21 days, 31 mins,  23 users,  load average: 5.58, 6.12, 6.14
% su
Password:
# csh runem.csh
vmstat 2 3
Virtual Memory Statistics: (pagesize = 8192)
  procs    memory         pages                          intr        cpu
  r  w  u  act  free wire fault cow zero react pin pout  in  sy  cs  us  sy  id
 14150 22   73K 147K  35K 5665M  21M 252M 190M  26M 240K 160  2K 778  81   5  15

 11152 23   73K 147K  35K  484   35  387   23   26    0  29  1K 400  83   2  15
 13151 22   73K 147K  35K  148    0  146   16    0    0  10 610 352  83   1  16
limit
cputime  unlimited
filesize  unlimited
datasize  131072 kbytes
stacksize  2048 kbytes
coredumpsize  unlimited
memoryuse  2056144 kbytes
descriptors  4096 files
addressspace  2097152 kbytes
uptime
10:30  up 21 days, 38 mins,  22 users,  load average: 5.11, 5.17, 5.09
time ./shell.128.shared < test.dat
 PROGRAM SHELL 4.1
8.22u 0.45s 0:09 94% 0+15k 0+0io 0pf+0w
time ./shell.128.static < test.dat
 PROGRAM SHELL 4.1
7.79u 0.42s 0:09 89% 0+13k 0+0io 0pf+0w
time ./shell.250.shared < test.dat
 PROGRAM SHELL 4.1
8.58u 70.72s 1:25 93% 0+24k 0+0io 0pf+0w
time ./shell.250.static < test.dat
Insufficent memory to open Fortran RTL message catalog, message #41.
0.00u 0.03s 0:00 8% 0+0k 0+0io 0pf+0w
time ./shell.250.shared_o1 < test.dat
 PROGRAM SHELL 4.1
12.55u 0.52s 0:15 86% 0+26k 0+0io 0pf+0w
unlimit
limit
cputime  unlimited
filesize  unlimited
datasize  2097152 kbytes
stacksize  32768 kbytes
coredumpsize  unlimited
memoryuse  2056144 kbytes
descriptors  4096 files
addressspace  2097152 kbytes
uptime
10:32  up 21 days, 40 mins,  22 users,  load average: 4.78, 5.62, 5.91
time ./shell.128.shared < test.dat
 PROGRAM SHELL 4.1
8.17u 0.43s 0:08 98% 0+15k 0+0io 0pf+0w
time ./shell.128.static < test.dat
 PROGRAM SHELL 4.1
7.79u 0.39s 0:08 98% 0+13k 0+0io 0pf+0w
time ./shell.250.shared < test.dat
 PROGRAM SHELL 4.1
8.77u 70.95s 1:25 93% 0+24k 0+0io 0pf+0w
time ./shell.250.static < test.dat
 PROGRAM SHELL 4.1
7.88u 0.49s 0:11 70% 0+24k 0+0io 0pf+0w
time ./shell.250.shared_o1 < test.dat
 PROGRAM SHELL 4.1
12.61u 0.51s 0:13 99% 0+26k 0+0io 0pf+0w
uptime
10:35  up 21 days, 42 mins,  23 users,  load average: 5.58, 5.76, 5.81
# exit
% exit
%
script done on Wed Mar 05 10:35:11 1997


On the 7000:
Script started on Wed Mar 05 11:07:19 1997
% csh runem.csh
vmstat 2 3
Virtual Memory Statistics: (pagesize = 8192)
  procs    memory         pages                          intr        cpu
  r  w  u  act  free wire fault cow zero react pin pout  in  sy  cs  us  sy  id
  2 85 17   10K  46K 6367  53M 181K  52M   1M 207K  211   7 161 237   0   1  98
  2 85 17   11K  46K 6367  245   31  170    0   26    0   8  44 251   0   2  97
  2 85 17   11K  46K 6367  150    0  150    0    0    0   7 304 248   0   4  96
limit
cputime  unlimited
filesize  unlimited
datasize  131072 kbytes
stacksize  2048 kbytes
coredumpsize  unlimited
memoryuse  508448 kbytes
descriptors  4096 files
addressspace  1048576 kbytes
uptime
11:07  up 34 days, 19:57,  4 users,  load average: 0.19, 0.23, 0.24
time ./shell.128.shared < test.dat
 PROGRAM SHELL 4.1
19.78u 0.76s 0:20 97% 0+15k 0+0io 0pf+0w
time ./shell.128.static < test.dat
 PROGRAM SHELL 4.1
18.89u 0.78s 0:20 96% 0+14k 0+0io 0pf+0w
time ./shell.250.shared < test.dat
 PROGRAM SHELL 4.1
18.04u 57.81s 1:22 91% 0+24k 0+0io 0pf+0w
time ./shell.250.static < test.dat
Insufficent memory to open Fortran RTL message catalog, message #41.
0.00u 0.02s 0:00 66% 0+0k 0+0io 0pf+0w
time ./shell.250.shared_o1 < test.dat
 PROGRAM SHELL 4.1
27.58u 0.75s 0:29 96% 0+26k 0+0io 0pf+0w
unlimit
limit
cputime  unlimited
filesize  unlimited
datasize  1048576 kbytes
stacksize  32768 kbytes
coredumpsize  unlimited
memoryuse  508448 kbytes
descriptors  4096 files
addressspace  1048576 kbytes
uptime
11:10  up 34 days, 19:59,  4 users,  load average: 1.00, 1.31, 1.43
time ./shell.128.shared < test.dat
 PROGRAM SHELL 4.1
19.81u 0.72s 0:20 98% 0+15k 0+0io 0pf+0w
time ./shell.128.static < test.dat
 PROGRAM SHELL 4.1
18.85u 0.74s 0:20 97% 0+14k 0+0io 0pf+0w
time ./shell.250.shared < test.dat
 PROGRAM SHELL 4.1
18.09u 57.69s 1:23 91% 0+24k 0+0io 0pf+0w
time ./shell.250.static < test.dat
 PROGRAM SHELL 4.1
21.55u 0.79s 0:22 98% 0+24k 0+0io 0pf+0w
time ./shell.250.shared_o1 < test.dat
 PROGRAM SHELL 4.1
27.62u 0.72s 0:28 98% 0+26k 0+0io 0pf+0w
uptime
11:13  up 34 days, 20:02,  4 users,  load average: 1.10, 1.24, 1.38
% exit            su
Password:
# csh runem.csh
vmstat 2 3
Virtual Memory Statistics: (pagesize = 8192)
  procs    memory         pages                          intr        cpu
  r  w  u  act  free wire fault cow zero react pin pout  in  sy  cs  us  sy  id
  2 86 17   11K  46K 5141  53M 182K  52M   1M 207K  211   7 161 237   0   1  98
  2 86 17   11K  46K 5141  322   35  243    0   26    0   9  60 250   0   2  97
  2 86 17   11K  46K 5141   76    0   76    0    0    0   4 297 241   0   4  96
limit
cputime  unlimited
filesize  unlimited
datasize  131072 kbytes
stacksize  2048 kbytes
coredumpsize  unlimited
memoryuse  508448 kbytes
descriptors  4096 files
addressspace  1048576 kbytes
uptime
11:34  up 34 days, 20:24,  4 users,  load average: 0.15, 0.07, 0.03
time ./shell.128.shared < test.dat
 PROGRAM SHELL 4.1
19.96u 0.81s 0:21 97% 0+15k 0+0io 0pf+0w
time ./shell.128.static < test.dat
 PROGRAM SHELL 4.1
18.33u 0.78s 0:19 98% 0+14k 0+0io 0pf+0w
time ./shell.250.shared < test.dat
 PROGRAM SHELL 4.1
18.36u 57.76s 1:23 91% 0+24k 0+0io 0pf+0w
time ./shell.250.static < test.dat
Insufficent memory to open Fortran RTL message catalog, message #41.
0.00u 0.03s 0:00 75% 0+0k 0+0io 0pf+0w
time ./shell.250.shared_o1 < test.dat
 PROGRAM SHELL 4.1
28.05u 0.76s 0:29 97% 0+26k 0+0io 0pf+0w
unlimit
limit
cputime  unlimited
filesize  unlimited
datasize  1048576 kbytes
stacksize  32768 kbytes
coredumpsize  unlimited
memoryuse  508448 kbytes
descriptors  4096 files
addressspace  1048576 kbytes
uptime
11:37  up 34 days, 20:27,  4 users,  load average: 1.27, 1.39, 1.47
time ./shell.128.shared < test.dat
 PROGRAM SHELL 4.1
20.01u 0.73s 0:21 97% 0+15k 0+0io 0pf+0w
time ./shell.128.static < test.dat
 PROGRAM SHELL 4.1
18.37u 0.76s 0:19 96% 0+14k 0+0io 0pf+0w
time ./shell.250.shared < test.dat
 PROGRAM SHELL 4.1
18.33u 57.76s 1:23 91% 0+24k 0+0io 0pf+0w
time ./shell.250.static < test.dat
 PROGRAM SHELL 4.1
18.78u 0.76s 0:19 97% 0+24k 0+0io 0pf+0w
time ./shell.250.shared_o1 < test.dat
 PROGRAM SHELL 4.1
28.01u 0.79s 0:29 97% 0+26k 0+0io 0pf+0w
uptime
11:40  up 34 days, 20:30,  4 users,  load average: 1.01, 1.17, 1.37
# exit
% exit
%
script done on Wed Mar 05 13:26:28 1997

1150.9. "???" by HPCGRP::MANLEY Wed Mar 12 1997 16:10
The very first few lines of test.dat:

TITLE MgOBa - 128 atom cell
LATTICE CF 16.99357698
BASIS 128

seem to indicate that the data is intended for use with a 128-size model.
We're using it for all sizes. Is that correct? The fact that the program
sometimes fails makes me suspicious.

Also, what is the "shell.250" file included in the tar file?

1150.10. "Sorry for the delay..." by COMICS::EDWARDSN (Dulce et decorum est pro PDP program) Wed Apr 02 1997 11:14
Yes. The user says that the test.dat file should be OK for either the 128 or 250 version.

As to the program failing, there is no other version. The user has the program running and
working on IBM, SGI, Linux etc. and claims not to see any problem.

The shell.250 file is a binary built on the 8400 with the MAXNSL parameter set to 250.
It was a mistake to include it in the tar file.

Neil.
1150.11. by HPCGRP::MANLEY Wed Apr 02 1997 11:50
Neil,

As you know, I am NOT able to reproduce your problem. Please use the archive
version of DXML with the shared versions of everything else (i.e. do not use
-non_shared). If your problem persists, you can remove DXML from the loop.

	- Dwight -