
Conference turris::digital_unix

Title:DIGITAL UNIX(FORMERLY KNOWN AS DEC OSF/1)
Notice:Welcome to the Digital UNIX Conference
Moderator:SMURF::DENHAM
Created:Thu Mar 16 1995
Last Modified:Fri Jun 06 1997
Last Successful Update:Fri Jun 06 1997
Number of topics:10068
Total number of notes:35879

10049.0. "Memory required operating system 4.0B" by SPANIX::ANA () Thu Jun 05 1997 08:15

    Hello.
    
    The customer has an AlphaStation 500 with 256 MB and all the memory
    banks full, so we cannot install more memory.
    The customer is trying to run some applications that consume a lot of
    memory. They are running UNIX 4.0B, and a few days ago they could not
    run an application that requires 414 MB of RAM.
    When they start the application it runs, but the system is completely
    unresponsive: nobody else can work, nobody can log in, and even the
    windows on the console are frozen. Until the application finishes (some
    days later) nothing else can be done on the system. The application
    runs fine and the results are good, but the system does only that one
    thing.
    I have changed some vfs, advfs and vm parameters so that the operating
    system consumes less memory, and we have a very small vmunix kernel
    built by unloading some drivers; now we can run the application that
    requires 414 MB.
    But we get the same problem with an application that requires 440 MB.
    
    My questions are:
    - How much memory can we expect to be usable on this system with
    256 MB and UNIX 4.0B? No other software is installed.
    - Is there any way to improve the system's behaviour? Why does it
    collapse completely?
    
    Ana.
    
T.R  Title  User  Personal Name  Date  Lines
10049.1  "Get the right hardware..."  SSDEVO::ROLLOW  "Dr. File System's Home for Wayward Inodes."  Thu Jun 05 1997 12:57  55
	The system is most likely paging and swapping.  The out-of-the-box
	tuning of recent versions seems to prefer to swap out
	large processes so the smaller ones can get their share of time.
	But when the large one runs again it swaps back in and
	everything else pages or swaps out.  Since the big process
	is too large to fit in memory, it probably pages a lot
	and may even page against itself, making the performance
	even worse than usual.

	My first recommendation is to get a system that will support
	the load required by the program.  Virtual memory is wonderful
	stuff, but it is no substitute for large amounts of physical
	memory.  This appears to be an application in need of more
	physical memory than this system can offer.

	If this isn't feasible, look into upgrading the I/O subsystem
	to use faster paging disks.  Spread the disks over more busses
	(okay, on the 200 maybe another bus).

	Look into redesigning the application.  If its memory use is
	one big array, or a few arrays that are all needed at once,
	there isn't much that can be done.  If the data being used
	is a structure, can smaller objects be used for some of
	the members?  At the same time, recognize Alpha's preference
	for 32-bit or larger integers.  I don't know how much of
	an effect it would have, but consider memory mapping the
	arrays.  That might allow more detailed control of access
	patterns, and using volume and file system management might
	provide better I/O than the page/swap path does.
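	The memory-mapping idea above can be sketched as follows.  This is
	a modern Python illustration (the original application would
	presumably call mmap(2) from C or Fortran), and the file, element
	count, and values are made up for the demo.  The point is that a
	mapped file is paged in on demand, so only the pages actually
	touched ever occupy physical memory, and clean pages can be
	dropped under memory pressure without being written to swap:

```python
import mmap
import os
import struct
import tempfile

# Create a file-backed "array" on disk (sparse; small here for the demo,
# but the same code works for an array far larger than physical memory).
tmp = tempfile.NamedTemporaryFile(delete=False)
path = tmp.name
N = 100_000                  # number of 8-byte doubles
tmp.truncate(N * 8)          # reserve space without writing data
tmp.close()

with open(path, "r+b") as f:
    # Map the file instead of read()ing it all into memory.
    mm = mmap.mmap(f.fileno(), N * 8)
    # Touch one element near the end; only that page is faulted in.
    struct.pack_into("<d", mm, (N - 1) * 8, 3.14)
    value = struct.unpack_from("<d", mm, (N - 1) * 8)[0]
    mm.close()

os.remove(path)
print(value)   # 3.14
```

	Whether this beats letting the pager handle an anonymous array
	depends on the access pattern; the win is mostly in being able to
	place the backing file on a fast or striped volume.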

	I/O and allocations to the page/swap space are round-robin
	from what I've read.  When the paging load is spread over
	many processes this will probably balance the load out in
	a way that is nearly as good as striping.  But when one big
	process is paging, it may not offer the high bandwidth
	needed.  Using a striped file under AdvFS or volume striping
	with LSM might.

	If the big process needs all of its memory at the same time,
	then each time it runs everything else that can page or swap
	out will have to.  That's a fair amount of memory that has
	to be written to and read from disk.  The process itself is
	certainly bigger than physical memory and probably ends up
	paging against itself.  See if there is a way to tune the
	system to allow the big process to run less often but run
	longer when it has the memory.  I have doubts that this will
	help much since it probably won't run efficiently anyway, but
	it may be worth looking at.

	And as a last resort, do brute-force process scheduling by
	hand.  When the system is needed for interactive use, send the
	process a SIGSTOP.  The system will quickly figure out that
	the process is idle and will bleed away its memory.  When the
	system isn't needed for interactive use, send it a SIGCONT.
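	The brute-force scheduling above is easy to script.  A minimal
	sketch, using Python's os.kill as a stand-in for the kill(1)
	command, with `sleep 60` standing in for the big memory-hungry
	job (the job name and timings are made up for the demo):

```python
import os
import signal
import subprocess
import time

# Start a stand-in for the big job.
job = subprocess.Popen(["sleep", "60"])

# Daytime: suspend it so interactive users get the machine back.
# A stopped process gets no CPU time, and its pages become
# reclamation candidates as other work needs memory.
os.kill(job.pid, signal.SIGSTOP)
time.sleep(0.1)

# Night: let it pick up exactly where it left off.
os.kill(job.pid, signal.SIGCONT)

# Clean up the demo process.
job.terminate()
job.wait()
print(job.returncode)   # negative value: killed by SIGTERM
```

	In practice a cron job sending `kill -STOP` in the morning and
	`kill -CONT` in the evening would do the same thing.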
10049.2  "You should be able to get more memory"  WIBBIN::NOYCE  "Pulling weeds, pickin' stones"  Thu Jun 05 1997 14:35  4
According to http://www.workstation.digital.com/products/a500/io.html
you can put up to 512 MB on AS 500/266 or /333, and up to 1 Gbyte on
AS 500/400 and /500.  Granted, you will have to remove some of the
memory you have in there now...
10049.3  NABETH::alan  "Dr. File System's Home for Wayward Inodes."  Thu Jun 05 1997 17:13  14
	As I mentioned (well, as my evil twin mentioned) in .1, out of
	the box, Digital UNIX seems to prefer to swap large processes.
	Now, this particular process isn't very friendly no matter what
	you do, but it could help the rest of the system if it didn't
	have to be swapped out to let the other things run.  Assuming a
	2 MB/sec disk, it will take around 3 minutes to swap that
	process out and that much again to swap it back in.
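	The arithmetic behind that estimate is just the process size
	divided by the disk rate (both figures assumed, per the reply
	above):

```python
# Back-of-the-envelope swap-time estimate: a 414 MB process
# over a disk that sustains 2 MB/s of sequential I/O.
process_mb = 414
disk_mb_per_s = 2

swap_out_s = process_mb / disk_mb_per_s   # 207 seconds one way
print(swap_out_s / 60)                    # ~3.5 minutes each way
```

	Since at most ~256 MB can be resident on this machine, the
	actual swap-out is somewhat shorter, which is why "around 3
	minutes" is a reasonable round number; the round trip still
	doubles it.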

	If you re-tune the system to not swap out the large process,
	then when the others want to run they will cause pages to be
	trimmed off the big one so they can run.  The paging activity
	will probably still be excessive, but it may be enough of a
	change to make the system usable for interactive work; you
	don't wait three minutes for each swap-out and swap-in.