T.R | Title | User | Personal Name | Date | Lines |
---|
9680.1 | | NNTPD::"[email protected]" | Shashi Mangalat | Wed Apr 30 1997 14:49 | 10 |
| >On the 3.x series of UNIX it has no problems.
Is that 3.2x or 3.0? Total wired pages are calculated differently in 3.0.
What seems to happen is that the system has reached the wired page limit.
This is a percentage (defaults to 80%) of the pages that VM manages. How
much memory is on the system? What does vmstat show?
--shashi
[Posted by WWW Notes gateway]
|
9680.2 | Here are the stats: | NNTPD::"[email protected]" | Sri | Thu May 01 1997 16:39 | 72 |
| The program works fine with the 3.2x series OS, but not with 4.0b, if
mlockall is issued after a shmat() call of more than 128 MB.
Our lab system (8400, 4Gig RAM) reproduced it as follows:
vm:
vm-syswiredpercent = 80
# vmstat 4
Virtual Memory Statistics: (pagesize = 8192)
  procs        memory             pages                         intr        cpu
  r   w  u    act  free  wire  fault cow zero react pin pout   in  sy  cs  us sy id
  4 107 21    19K  476K   18K    203  13  148     0   9    0   17 604 192  50  1 49
The program itself is:
#include <stdio.h>
#include <sys/types.h>
#include <sys/ipc.h>
#include <sys/shm.h>
#include <stdlib.h>
#include <sys/mman.h>
int main(void)
{
    int   mem_key;
    int   mem_id;
    char *mem_address;

    /*----------------------------------------------*/
    /* GENERATE A KEY                               */
    /*----------------------------------------------*/
    mem_key = ftok("/tmp", 'v');
    if (mem_key < 0)
        { perror(" ftok trouble! \n"); exit(EXIT_FAILURE); }

    /*----------------------------------------------*/
    /* CREATE SHARED MEMORY                         */
    /*----------------------------------------------*/
    mem_id = shmget(mem_key, 500004096, (IPC_CREAT | 0666));
    if (mem_id < 0)
        { perror(" shmget trouble! \n"); exit(EXIT_FAILURE); }

    /*----------------------------------------------*/
    /* MLOCKALL ATTEMPT BEFORE shmat                */
    /*----------------------------------------------*/
    if (mlockall(MCL_CURRENT | MCL_FUTURE) != 0)
        { perror("mlockall error before shmat:"); }

    /*----------------------------------------------*/
    /* GET ADDRESS OF SHARED MEMORY                 */
    /*----------------------------------------------*/
    mem_address = shmat(mem_id, 0, 0);
    if (mem_address == 0)
        { perror(" shmat trouble! \n"); exit(EXIT_FAILURE); }

    /*----------------------------------------------*/
    /* MLOCKALL ATTEMPT AFTER shmat                 */
    /*----------------------------------------------*/
    if (mlockall(MCL_CURRENT | MCL_FUTURE) != 0)
        { perror("mlockall error after shmat:"); }

    return EXIT_SUCCESS;
}
-Sri
[Posted by WWW Notes gateway]
|
9680.3 | | NNTPD::"[email protected]" | Shashi Mangalat | Fri May 02 1997 00:51 | 10 |
| Are you using SSM or gh-chunks? Does it fail without either of them? While
testing your sample program, I noticed a peculiar behaviour in SSM: an
unaligned size fails to attach. You might consider filing a QAR. I'll send
you mail when I have more info.
The test for shmat() failure is incorrect in the test program. You should be
checking for (char *) -1 instead of 0.
--shashi
[Posted by WWW Notes gateway]
|
9680.4 | | NNTPD::"[email protected]" | Shashi Mangalat | Tue May 06 1997 18:39 | 16 |
| Sri,
The problem I ran into with attach was that I didn't have vm-vpagemax
set correctly. Maybe you are running into that?
The unaligned size causes a kernel-internal vpage array to be allocated
on locking. There is a per-process limit (defaulting to 16k) on the
array size. The size is calculated from the segment size rounded up
to an 8 MB boundary; the number of vpage structures needed is
roundup_to_8M(size)/page_size.
Did you fix the test and try it again? Also try an 8 MB-aligned size.
--shashi
[Posted by WWW Notes gateway]
|
9680.5 | vm-vpagemax did it | NNTPD::"[email protected]" | Sri | Wed May 07 1997 17:08 | 7 |
| Hi Shashi,
Thanks for your suggestion on vm-vpagemax. That did it.
Regards
-Sri
[Posted by WWW Notes gateway]
|