Title: | DIGITAL UNIX (FORMERLY KNOWN AS DEC OSF/1) |
Notice: | Welcome to the Digital UNIX Conference |
Moderator: | SMURF::DENHAM |
|
Created: | Thu Mar 16 1995 |
Last Modified: | Fri Jun 06 1997 |
Last Successful Update: | Fri Jun 06 1997 |
Number of topics: | 10068 |
Total number of notes: | 35879 |
9837.0. "? on trace who uses wired memory, increases steady on DigitalUNIX v4.0A" by RULLE::BRINGH () Thu May 15 1997 13:11
Hi
I have a system running DigitalUNIX v4.0A and DCE/DFS v2.0A.
Wired memory is increasing, slowly at times and faster at others, up to a
point where the system is unusable and has to be rebooted. Any suggestions
on how I can see who is eating up wired memory? If I suspect that a
process is to blame, should wired memory decrease eventually, immediately,
or not at all, once that process is killed?
I include output from vmstat, vmstat -s, vmstat -P and vmstat -M from
three different times and one ps laxw.
vmstat -P shows that wired memory increased by 7707-5164 = 2543 pages, and
most of that is in malloc pages, 4156-1748 = 2408.
vmstat -M shows a large increase in several places; to mention a few
(last snapshot vs. first, in bytes):
KALLOC = 20442320-7064784
VNODE = 3144224-1173792
SOCKET = 489920-203840
SONAME = 302080-86016
FILE = 113536-51904
DEVBUF = 859072-338368
MIPC = 1369056-400544
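The per-type deltas above can be computed mechanically from two saved
snapshots of vmstat -M output. A minimal sketch (the function name is made
up; it assumes the "NAME = bytes" layout shown above, with several pairs
per line and names that may contain a space, such as KERNEL TBL):

```shell
# vmstat_m_diff BEFORE AFTER -- compare two saved `vmstat -M` snapshots
# and print every "NAME = bytes" entry whose byte count changed.
# A sketch: assumes the layout shown above; names may contain one space.
vmstat_m_diff() {
    awk '
        FNR == 1 { file++ }        # file 1 = before, file 2 = after
        /=/ {
            name = ""
            for (i = 1; i <= NF; i++) {
                if ($i == "=") {
                    bytes[file, name] = $(i + 1) + 0
                    seen[name] = 1
                    i++            # skip the value we just consumed
                    name = ""
                } else {
                    name = (name == "" ? $i : name " " $i)
                }
            }
        }
        END {
            for (n in seen) {
                d = bytes[2, n] - bytes[1, n]
                if (d != 0)
                    printf "%-12s %10d -> %10d  (%+d)\n",
                           n, bytes[1, n], bytes[2, n], d
            }
        }
    ' "$1" "$2"
}
```

Used as, e.g., `vmstat -M > snap1`, wait a while, `vmstat -M > snap2`,
then `vmstat_m_diff snap1 snap2` to see which types grew.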
Regards,
Mikael Bringh
Unix-group, TSC, Sweden
~>uptime
16:00 up 1 day, 5:58, 19 users, load average: 2.01, 1.43, 1.24
~>vmstat 5
Virtual Memory Statistics: (pagesize = 8192)
procs memory pages intr cpu
r w u act free wire fault cow zero react pin pout in sy cs us sy id
4357 45 9693 199 5164 2M 534K 1M 175K 761K 15K 367 2K 33K 27 40 33
3345 45 9341 639 5076 784 70 587 0 39 0 368 5K 1K 64 36 0
~>vmstat -s
Virtual Memory Statistics: (pagesize = 8192)
1303 active pages
3256 inactive pages
492 free pages
5087 wired pages
2674032 virtual memory page faults
534628 copy-on-write page faults
1329124 zero fill page faults
175506 reattaches from reclaim list
761298 pages paged in
15161 pages paged out
3650449394 task and thread context switches
39711092 device interrupts
282089872 system calls
~>vmstat -P
Total Physical Memory = 128.00 M
= 16384 pages
Physical Memory Clusters:
start_pfn end_pfn type size_pages / size_bytes
0 256 pal 256 / 2.00M
256 16384 os 16128 / 126.00M
Physical Memory Use:
start_pfn end_pfn type size_pages / size_bytes
279 280 scavenge 1 / 8.00k
*
283 821 text 538 / 4.20M
821 941 data 120 / 960.00k
941 1009 bss 68 / 544.00k
1010 1014 cfgmgmt 4 / 32.00k
1015 1019 logs 4 / 32.00k
1019 1077 unixtable 58 / 464.00k
*
1099 1332 vmtables 233 / 1.82M
1332 16384 managed 15052 / 117.59M
============================
Total Physical Memory Use: 16078 / 125.61M
Managed Pages Break Down:
active pages = 1352
inactive pages = 3256
wired pages = 5103
ubc pages = 4923
==================
Total = 15056
WIRED Pages Break Down:
vm wired pages = 1109
ubc wired pages = 1025
meta data pages = 483
malloc pages = 1748
contig pages = 10
user ptepages = 711
kernel ptepages = 11
free ptepages = 6
==================
Total = 5103
~>vmstat -M
Memory usage by bucket:
bucket# element_size elements_in_use elements_free bytes_in_use
0 16 21982 4130 351712
1 32 8854 362 283328
2 64 13738 86 879232
3 96 9196 324 882816
4 128 5677 3347 726656
5 160 7513 188 1202080
6 192 1775 157 340800
7 256 3843 125 983808
8 320 7342 33 2349440
9 384 1926 6 739584
10 448 20 16 8960
11 512 2489 599 1274368
12 576 6 22 3456
13 640 77 19 49280
14 704 404 14 284416
15 768 68 72 52224
16 896 7 11 6272
17 1024 22 10 22528
18 1152 223 155 256896
19 1344 7 5 9408
20 1600 5 5 8000
21 2048 149 15 305152
22 2688 128 10 344064
23 4096 25 5 102400
24 8192 62 8 507904
25 12288 26 12 319488
26 16384 5 4 81920
27 32768 6 0 196608
28 65536 31 0 2031616
29 131072 5 0 655360
30 262144 1 0 262144
31 524288 0 0 0
Total memory being used from buckets = 15521920 bytes
Total free memory in buckets = 1621600 bytes
Memory usage by type: Type and Number of bytes being used
MBUF = 35328 MCLUSTER = 10240 SOCKET = 203840
PCB = 139264 ROUTETBL = 6656 IFADDR = 1216
SONAME = 86016 MBLK = 7680 MBLKDATA = 13248
STRHEAD = 9216 STRQUEUE = 27648 STRMODSW = 2400
STRSYNCQ = 7104 STREAMS = 2496 FILE = 51904
DEVBUF = 338368 PATHNAME = 4320 KERNEL TBL = 133184
ADVFS = 2534384 IPM ADDR = 128 IFM ADDR = 256
VNODE = 1173792 KALLOC = 7064784 TEMP = 52416
PMAP = 72384 CRED = 80640 TASK = 311808
THREAD = 282304 SELQ = 8096 SVIPC = 144
FILEBUF = 12288 MOUNT = 24800 NAMEI = 2048
MIPC = 400544 VMOBJ = 548896 VMANON = 294480
VMSEG = 16832 VMLOCK = 3456 VMMAP = 23424
VMENTRY = 381408 FLOCK = 192 FIFO = 76544
M_ANON = 604784 M_VMVPAGE = 54464 NCALLOUT = 11872
M_WS = 16384 SIGACT = 28384 HEADER = 1536
DLI = 6016 VMSWAP = 16768
----------------------------------------
~>uptime
09:08 up 3 days, 22:24, 14 users, load average: 3.38, 3.30, 3.17
~>vmstat 5
Virtual Memory Statistics: (pagesize = 8192)
procs memory pages intr cpu
r w u act free wire fault cow zero react pin pout in sy cs us sy id
7309 45 8825 88 6143 5M 1M 2M 471K 1M 28K 358 2K 5K 28 15 57
5310 46 8882 37 6137 256 17 182 122 9 0 1K 1K 2K 60 40 0
~>vmstat -s
Virtual Memory Statistics: (pagesize = 8192)
1194 active pages
2669 inactive pages
70 free pages
6153 wired pages
5994279 virtual memory page faults
1164606 copy-on-write page faults
2970449 zero fill page faults
471847 reattaches from reclaim list
1736479 pages paged in
28780 pages paged out
1826017883 task and thread context switches
121984101 device interrupts
794853451 system calls
~>vmstat -P
Total Physical Memory = 128.00 M
= 16384 pages
Physical Memory Clusters:
start_pfn end_pfn type size_pages / size_bytes
0 256 pal 256 / 2.00M
256 16384 os 16128 / 126.00M
Physical Memory Use:
start_pfn end_pfn type size_pages / size_bytes
279 280 scavenge 1 / 8.00k
*
283 821 text 538 / 4.20M
821 941 data 120 / 960.00k
941 1009 bss 68 / 544.00k
1010 1014 cfgmgmt 4 / 32.00k
1015 1019 logs 4 / 32.00k
1019 1077 unixtable 58 / 464.00k
*
1099 1332 vmtables 233 / 1.82M
1332 16384 managed 15052 / 117.59M
============================
Total Physical Memory Use: 16078 / 125.61M
Managed Pages Break Down:
free pages = 65
active pages = 1198
inactive pages = 2653
wired pages = 6155
ubc pages = 4985
==================
Total = 15056
WIRED Pages Break Down:
vm wired pages = 1037
ubc wired pages = 1152
meta data pages = 483
malloc pages = 2834
contig pages = 10
user ptepages = 620
kernel ptepages = 11
free ptepages = 8
==================
Total = 6155
~>vmstat -M
Memory usage by bucket:
bucket# element_size elements_in_use elements_free bytes_in_use
0 16 26607 4625 425712
1 32 17398 266 556736
2 64 28899 157 1849536
3 96 9187 248 881952
4 128 9690 486 1240320
5 160 11232 192 1797120
6 192 817 611 156864
7 256 8743 89 2238208
8 320 19065 35 6100800
9 384 6829 17 2622336
10 448 21 15 9408
11 512 2499 509 1279488
12 576 7 21 4032
13 640 69 39 44160
14 704 384 12 270336
15 768 55 45 42240
16 896 9 9 8064
17 1024 16 8 16384
18 1152 223 120 256896
19 1344 9 3 12096
20 1600 6 9 9600
21 2048 162 30 331776
22 2688 124 5 333312
23 4096 25 5 102400
24 8192 68 4 557056
25 12288 27 9 331776
26 16384 6 4 98304
27 32768 7 0 229376
28 65536 30 0 1966080
29 131072 4 0 524288
30 262144 1 0 262144
31 524288 0 0 0
Total memory being used from buckets = 24558800 bytes
Total free memory in buckets = 1191696 bytes
Memory usage by type: Type and Number of bytes being used
MBUF = 29696 MCLUSTER = 47104 SOCKET = 378240
PCB = 199040 ROUTETBL = 6656 IFADDR = 1216
SONAME = 230912 MBLK = 9472 MBLKDATA = 20800
STRHEAD = 6656 STRQUEUE = 19968 STRMODSW = 2400
STRSYNCQ = 5184 STREAMS = 2176 FILE = 91200
DEVBUF = 335488 PATHNAME = 4448 KERNEL TBL = 133184
ADVFS = 2508496 IPM ADDR = 128 IFM ADDR = 256
VNODE = 1192544 KALLOC = 15323440 TEMP = 52192
PMAP = 60512 CRED = 85504 TASK = 317184
THREAD = 266112 SELQ = 7744 SVIPC = 176
FILEBUF = 12288 MOUNT = 24800 NAMEI = 2048
MIPC = 854208 VMOBJ = 475328 VMANON = 284080
VMSEG = 15040 VMLOCK = 3008 VMMAP = 23808
VMENTRY = 336000 FLOCK = 192 FIFO = 187904
M_ANON = 555968 M_VMVPAGE = 57184 NCALLOUT = 11808
M_WS = 16384 SIGACT = 27904 HEADER = 1792
DLI = 6016 VMSWAP = 16768
--------------------------------------
# uptime
16:48 up 5 days, 6:04, 24 users, load average: 1.63, 1.51, 1.50
# vmstat 5
Virtual Memory Statistics: (pagesize = 8192)
procs memory pages intr cpu
r w u act free wire fault cow zero react pin pout in sy cs us sy id
8384 47 7324 23 7709 9M 1M 4M 1M 2M 119K 370 2K 4K 36 15 49
# vmstat -s
Virtual Memory Statistics: (pagesize = 8192)
1762 active pages
3644 inactive pages
88 free pages
7707 wired pages
9335886 virtual memory page faults
1729403 copy-on-write page faults
4494020 zero fill page faults
1275744 reattaches from reclaim list
2866462 pages paged in
119758 pages paged out
1999435211 task and thread context switches
168198870 device interrupts
1045534684 system calls
# vmstat -P
Total Physical Memory = 128.00 M
= 16384 pages
Physical Memory Clusters:
start_pfn end_pfn type size_pages / size_bytes
0 256 pal 256 / 2.00M
256 16384 os 16128 / 126.00M
Physical Memory Use:
start_pfn end_pfn type size_pages / size_bytes
279 280 scavenge 1 / 8.00k
*
283 821 text 538 / 4.20M
821 941 data 120 / 960.00k
941 1009 bss 68 / 544.00k
1010 1014 cfgmgmt 4 / 32.00k
1015 1019 logs 4 / 32.00k
1019 1077 unixtable 58 / 464.00k
*
1099 1332 vmtables 233 / 1.82M
1332 16384 managed 15052 / 117.59M
============================
Total Physical Memory Use: 16078 / 125.61M
Managed Pages Break Down:
free pages = 87
active pages = 1763
inactive pages = 3644
wired pages = 7707
ubc pages = 1855
==================
Total = 15056
WIRED Pages Break Down:
vm wired pages = 1177
ubc wired pages = 1061
meta data pages = 483
malloc pages = 4156
contig pages = 10
user ptepages = 803
kernel ptepages = 11
free ptepages = 6
==================
Total = 7707
# vmstat -M
Memory usage by bucket:
bucket# element_size elements_in_use elements_free bytes_in_use
0 16 39086 4946 625376
1 32 23769 551 760608
2 64 38441 87 2460224
3 96 12241 764 1175136
4 128 15005 675 1920640
5 160 18534 1917 2965440
6 192 1258 632 241536
7 256 11455 193 2932480
8 320 26129 46 8361280
9 384 8097 9 3109248
10 448 24 12 10752
11 512 6349 1235 3250688
12 576 6 22 3456
13 640 76 32 48640
14 704 472 34 332288
15 768 82 68 62976
16 896 10 8 8960
17 1024 16 16 16384
18 1152 680 580 783360
19 1344 9 3 12096
20 1600 5 10 8000
21 2048 165 31 337920
22 2688 161 13 432768
23 4096 25 5 102400
24 8192 65 8 532480
25 12288 36 10 442368
26 16384 7 4 114688
27 32768 7 0 229376
28 65536 32 0 2097152
29 131072 6 0 786432
30 262144 1 0 262144
31 524288 0 0 0
Total memory being used from buckets = 34427296 bytes
Total free memory in buckets = 2589344 bytes
Memory usage by type: Type and Number of bytes being used
MBUF = 25344 MCLUSTER = 12288 SOCKET = 489920
PCB = 264000 ROUTETBL = 6656 IFADDR = 1216
SONAME = 302080 MBLK = 4864 MBLKDATA = 1792
STRHEAD = 11264 STRQUEUE = 33792 STRMODSW = 2400
STROSR = 192 STRSYNCQ = 8640 STREAMS = 2752
FILE = 113536 DEVBUF = 859072 PATHNAME = 4448
KERNEL TBL = 133184 ADVFS = 2619296 IPM ADDR = 128
IFM ADDR = 256 VNODE = 3144224 KALLOC = 20442320
TEMP = 52736 PMAP = 80864 CRED = 110336
TASK = 413952 THREAD = 328064 SELQ = 9504
SVIPC = 176 FILEBUF = 12288 MOUNT = 24800
NAMEI = 2048 MIPC = 1369056 VMOBJ = 1165376
VMANON = 438336 VMSEG = 18432 VMLOCK = 4096
VMMAP = 30720 VMENTRY = 468384 FLOCK = 192
FIFO = 92928 M_ANON = 782064 M_VMVPAGE = 80704
NCALLOUT = 12128 M_WS = 16384 SIGACT = 37120
HEADER = 1536 DLI = 6016 VMSWAP = 16768
# ps laxw
UID PID PPID CP PRI NI VSZ RSS WCHAN S TTY TIME
COMMAND
0 0 0 14 32 -12 221M 13M * R < ?? 38:09.78
[kernel idle]
0 1 0 0 44 0 440K 0K pause IW ?? 0:06.72
/sbin/init -a
0 3 1 0 44 0 1.02M 0K sv_msg_ IW ?? 0:00.28
/sbin/kloadsrv
0 82 1 0 42 0 1.55M 48K pause I ?? 5:36.83
/sbin/update
0 149 1 0 42 0 1.61M 72K event S ?? 0:02.67
/usr/sbin/syslogd
0 151 1 0 42 0 1.59M 0K event IW ?? 0:00.03
/usr/sbin/binlogd
0 211 1 0 44 0 1.61M 96K event S ?? 0:14.33
/usr/sbin/routed -q
0 317 1 0 44 0 1.64M 0K event IW ?? 0:02.66
/usr/sbin/portmap
0 319 1 0 44 0 1.55M 0K 75c050 I ?? 0:00.16
/usr/sbin/nfsiod 7
0 322 1 0 44 -10 1.65M 0K event IW<+ ?? 0:00.02
/usr/sbin/rpc.statd
0 324 1 0 44 0 1.78M 0K event IW ?? 0:00.04
/usr/sbin/rpc.lockd
0 384 1 0 34 -10 1.91M 0K socket IW< ?? 0:03.73
-accepting connections (sendmail)
0 426 1 0 44 0 1.68M 48K event S ?? 0:34.22
/usr/sbin/snmpd
0 430 1 0 44 0 2.84M 80K event S ?? 0:01.93
/usr/sbin/os_mibs
0 435 1 0 43 -1 5.26M 288K * S < ?? 17:06.63
/usr/sbin/advfsd
0 446 1 0 44 0 1.62M 0K event IW ?? 0:10.81
/usr/sbin/inetd
0 479 1 0 44 0 1.59M 0K 7264494 IW ?? 0:33.33
/usr/sbin/cron
0 493 1 0 44 0 1.70M 0K event IW ?? 0:00.86
/usr/lbin/lpd
0 752 446 0 44 0 1.66M 0K event IW ?? 0:00.26
telnetd
0 854 1 0 44 0 27.4M 13M * S ?? 0:59.95
/opt/dcelocal/bin/dfsbind
0 861 1 0 42 0 9.84M 8K * U ?? 0:00.04
/opt/dcelocal/bin/fxd -mainprocs 7 -admingroup subsys/dce/dfs-admin
0 862 1 0 42 0 9.84M 8K * U ?? 0:00.04
/opt/dcelocal/bin/fxd -mainprocs 7 -admingroup subsys/dce/dfs-admin
0 869 1 0 42 0 8.93M 8K * U ?? 0:00.01
/opt/dcelocal/bin/dfsd
0 870 1 0 42 0 8.93M 8K * U ?? 1:24.71
/opt/dcelocal/bin/dfsd
0 871 1 0 42 0 8.93M 8K * U ?? 1:23.76
/opt/dcelocal/bin/dfsd
0 892 1 0 42 0 6.20M 8K * IW ?? 0:00.27
/usr/bin/mmeserver -config /var/mme/system.ini
1 917 1 0 44 0 1.73M 0K event IW ?? 0:07.60
lpsad -M van6 -f -u
0 919 1 0 34 -10 1.74M 40K event S < ?? 0:00.24
lpsbootd -F /etc/lpsodb -l 0 -x 1
0 932 1 0 34 -10 8.57M 592K * S < ?? 1:41.08
/usr/physto/bin/qmaster
0 1001 446 0 42 0 5.37M 0K event IW ?? 0:00.15
rpc.ttdbserverd
0 1392 1 0 44 0 13.4M 2.3M * S ?? 0:09.65
/opt/dcelocal/bin/dced -c
0 1413 1 0 42 0 9.91M 488K * S ?? 0:33.88
/opt/dcelocal/bin/cdsadv
0 1417 1413 0 44 0 11.7M 1.2M * S ?? 0:12.04
/opt/dcelocal/bin/cdsclerk -w
FATAL:STDERR:-;FILE.32.256:/opt/dcelocal/var/svc/fatal.log -w
ERROR:STDERR:-;FILE.32.256:/opt/dcelocal/var/svc/error.log -w
WARNING:STDERR:-;FILE.32.256:/opt/dcelocal/var/svc/warning.log -w
NOTICE:DISCARD: -w NOTICE_VERBOSE:DISCARD:
0 1428 1 0 42 0 16.7M 1.7M * S ?? 0:02.52
/opt/dcelocal/bin/dtsd -s
0 1871 1 0 42 0 9.34M 8K * I ?? 0:03.01
/usr/dt/bin/dtlogin -daemon
0 1879 1871 31 42 -2 19.1M 2.8M event S < ?? 15:18:58
/usr/bin/X11/X :0 -auth /var/dt/authdir/authfiles/A:0-aabApa
0 1882 1871 0 44 0 10.2M 200K * S ?? 0:17.13
dtlogin <:0> -daemon
239 1903 1882 0 44 0 15.7M 216K * S ?? 2:16.74
/usr/dt/bin/dtsession
239 1960 1 0 44 0 6.23M 0K event IW ?? 0:23.87
/usr/dt/bin/ttsession -s
239 1968 1903 0 44 0 17.5M 1.4M * I ?? 2:28.86
dtwm
239 1969 1903 0 42 0 2.04M 0K wait IW ?? 0:00.02
sh -c /usr/bin/X11/dxconsole
239 1970 1903 2 44 0 14.4M 432K * S ?? 4:23.05
/usr/dt/bin/dtterm -session dtaaaDAa -ls
239 1971 1903 0 42 0 14.5M 8K * I ?? 0:39.75
/usr/dt/bin/dtterm -session dtaaaEua -ls
0 1973 1969 0 42 0 11.9M 16K * I ?? 0:39.51
/usr/bin/X11/dxconsole
239 2002 1968 0 44 0 4.72M 0K event IW ?? 0:00.15
/usr/dt/bin/dtexec -open 0 -ttprocid 2.pSl3x 01 1960 1342177279 1 0 239
130.237.208.9 3_101_1 /usr/dt/bin/dtterm -ls
239 2003 2002 0 44 0 14.4M 8K * I ?? 2:02.69
/usr/dt/bin/dtterm -ls
239 2088 1968 0 44 0 4.72M 0K event IW ?? 0:00.16
/usr/dt/bin/dtexec -open 0 -ttprocid 2.pSl3x 01 1960 1342177279 1 0 239
130.237.208.9 3_102_1 xrsh -l tardell -auth xauth msia02.msi.se exmh
239 2089 2088 0 44 0 7.15M 8K * IW ?? 0:00.71
rsh msia02.msi.se -l tardell exec /bin/csh -cf "setenv DISPLAY
130.237.208.9:0.0; exec exmh < /dev/null >>& /dev/null " < /dev/null >
/dev/null
239 2100 2089 0 0 -44 0K 0K - < ?? 0:00.00
<defunct>
239 2256 1968 0 44 0 4.72M 0K event IW ?? 0:00.17
/usr/dt/bin/dtexec -open 0 -ttprocid 2.pSl3x 01 1960 1342177279 1 0 239
130.237.208.9 3_103_1 /usr/dt/bin/dtterm -ls
239 2257 2256 0 44 0 14.4M 16K * I ?? 0:12.94
/usr/dt/bin/dtterm -ls
239 2424 2421 0 42 0 1.57M 0K event IW ?? 0:00.02
/usr/lib/emacs/bin/emacsserver
239 4084 1968 0 44 0 4.72M 0K event IW ?? 0:00.14
/usr/dt/bin/dtexec -open 0 -ttprocid 2.pSl3x 01 1960 1342177279 1 0 239
130.237.208.9 3_104_1 /usr/dt/bin/dtterm -ls
239 4085 4084 0 44 0 14.4M 576K * I ?? 1:21.95
/usr/dt/bin/dtterm -ls
213 7366 8132 0 42 0 4.73M 0K event IW ?? 0:00.14
/usr/dt/bin/dtexec -open 0 -ttprocid 2.pUL-Y 01 8216 1342177280 1 0 213
130.237.208.9 3_101_1 /usr/dt/bin/dtterm -ls
213 8115 8149 0 44 0 14.3M 24K * I ?? 0:04.13
/usr/dt/bin/dtterm -session dtaaqBla -ls
213 8131 8149 0 42 0 2.04M 0K wait IW ?? 0:00.02
sh -c /usr/bin/X11/dxconsole
213 8132 8149 0 44 0 15.9M 480K * S ?? 0:12.60
dtwm
213 8149 8162 0 44 0 9.73M 0K event IW ?? 0:01.52
/usr/dt/bin/dtsession
0 8162 1871 0 44 0 10.2M 224K * S ?? 0:02.90
dtlogin <vanep6:0> -daemon
213 8174 8149 0 42 0 16.4M 24K * I ?? 0:03.55
dtfile -session dtaaqBxa
213 8178 8149 0 44 0 9.91M 0K event IW ?? 0:00.78
/usr/dt/bin/dtstyle -session dtaaqBra
213 8204 8149 0 44 0 9.79M 0K event IW ?? 0:00.90
dtcalc -session dtaaqBBa
213 8205 8149 0 44 0 10.8M 0K event IW ?? 0:01.30
/usr/dt/bin/dthelpview -helpVolume Intromgr
213 8215 8149 0 42 0 15.2M 24K * I ?? 0:01.75
/usr/dt/bin/dtcm -session dtaaqBqa
213 8216 1 0 44 0 5.66M 0K event IW ?? 0:00.32
/usr/dt/bin/ttsession -s
0 8221 8131 0 44 0 6.13M 0K event IW ?? 0:00.70
/usr/bin/X11/dxconsole
213 8223 8149 0 44 0 9.40M 0K event IW ?? 0:00.79
dthelpview -helpVolume browser
213 8227 8149 0 44 0 9.40M 0K event IW ?? 0:00.78
dthelpview -helpVolume browser
213 8229 8149 0 44 0 9.79M 0K event IW ?? 0:00.97
dtcalc -session dtaaqBAa
213 8231 8149 0 44 0 9.40M 0K event IW ?? 0:00.76
dthelpview -helpVolume browser
213 8235 8149 0 42 0 14.4M 24K * I ?? 0:03.50
/usr/dt/bin/dtterm -session dtaaqvsa -ls
213 8238 8149 0 44 0 14.3M 8K * IW ?? 0:01.22
/usr/dt/bin/dtterm -session dtaaqBsa -ls
213 8245 8149 0 44 0 14.3M 24K * I ?? 0:01.26
/usr/dt/bin/dtterm -session dtaasDCa -ls
213 8344 7366 0 44 0 14.4M 24K * I ?? 0:46.18
/usr/dt/bin/dtterm -ls
0 8424 446 0 44 0 7.25M 104K * S ?? 0:02.31
telnetd
0 8428 446 0 44 0 7.25M 8K * I ?? 0:05.49
telnetd
0 9878 446 0 44 0 7.25M 8K * I ?? 0:00.37
telnetd
0 11504 446 1 44 0 7.12M 424K * S ?? 0:00.31
rlogind
0 11512 446 0 44 0 7.25M 112K * S ?? 0:00.27
telnetd
0 13816 13818 0 44 0 9.80M 416K * S ?? 1:06.64
/opt/dcelocal/bin/upclient -server /.:/hosts/sysman -path
/opt/dcelocal/var/dfs/admin.bos /opt/dcelocal/var/dfs/admin.ft
0 13817 13818 0 44 0 14.1M 288K * S ?? 0:18.24
/opt/dcelocal/bin/ftserver
0 13818 1 0 44 0 12.1M 440K * S ?? 0:17.81
/opt/dcelocal/bin/bosserver -adminlist admin.bos
239 15674 23698 0 42 0 15.7M 16K * I ?? 0:06.84
/usr/dt/bin/dtcm
1 15701 446 0 42 0 14.3M 16K * I ?? 0:02.64
rpc.cmsd
239 15870 1968 0 42 0 4.72M 0K event IW ?? 0:00.14
/usr/dt/bin/dtexec -open 0 -ttprocid 2.pSl3x 01 1960 1342177279 1 0 239
130.237.208.9 3_105_1 /usr/dt/bin/dtterm -ls
239 15882 15870 0 44 0 14.4M 8K * I ?? 0:24.07
/usr/dt/bin/dtterm -ls
239 16283 1968 0 44 0 4.72M 0K event IW ?? 0:00.14
/usr/dt/bin/dtexec -open 0 -ttprocid 2.pSl3x 01 1960 1342177279 1 0 239
130.237.208.9 3_106_1 /usr/dt/bin/dtterm -ls
239 16290 16283 0 44 0 14.5M 8K * I ?? 2:16.53
/usr/dt/bin/dtterm -ls
239 16327 1968 0 44 0 4.72M 0K event IW ?? 0:00.21
/usr/dt/bin/dtexec -open 0 -ttprocid 2.pSl3x 01 1960 1342177279 1 0 239
130.237.208.9 3_107_1 /usr/dt/bin/dtterm -ls
239 16328 16327 0 44 0 14.4M 8K * I ?? 0:25.15
/usr/dt/bin/dtterm -ls
239 16531 16532 0 44 0 14.3M 16K * I ?? 0:02.71
/usr/dt/bin/dtterm -ls
239 16532 1968 0 44 0 4.72M 0K event IW ?? 0:00.16
/usr/dt/bin/dtexec -open 0 -ttprocid 2.pSl3x 01 1960 1342177279 1 0 239
130.237.208.9 3_108_1 /usr/dt/bin/dtterm -ls
0 17463 1413 0 42 0 10.6M 968K * S ?? 0:02.57
/opt/dcelocal/bin/cdsclerk -w
FATAL:STDERR:-;FILE.32.256:/opt/dcelocal/var/svc/fatal.log -w
ERROR:STDERR:-;FILE.32.256:/opt/dcelocal/var/svc/error.log -w
WARNING:STDERR:-;FILE.32.256:/opt/dcelocal/var/svc/warning.log -w
NOTICE:DISCARD: -w NOTICE_VERBOSE:DISCARD:
239 23698 1968 0 44 0 4.72M 0K event IW ?? 0:00.19
/usr/dt/bin/dtexec -open 0 -ttprocid 2.pSl3x 01 1960 1342177279 1 0 239
130.237.208.9 3_109_1 /usr/dt/bin/dtcm
0 1881 1 0 45 0 432K 0K ttyin IW + console 0:00.44
/usr/sbin/getty console console vt100
239 15880 15882 0 44 0 7.92M 8K * I + ttyp0 0:02.89
-tcsh (tcsh)
239 16507 16512 0 44 0 18.3M 0K - TW ttyp0 0:04.26
/.../physto.se/fs/group/bub/cern/97a/bin/pawX11
239 16512 15880 0 42 0 2.04M 0K - TW ttyp0 0:00.04
sh /.../physto.se/fs/group/bub/cern/97a/bin/paw
0 753 752 0 42 0 2.12M 0K tty IW + ttyp1 0:00.57
-tcsh (tcsh)
239 1974 1971 0 44 0 2.16M 0K pause IW ttyp3 0:00.78
-tcsh (tcsh)
239 12549 1974 0 42 0 1.76M 0K event IW + ttyp3 0:03.68
telnet atlas01
239 1975 1970 0 44 0 7.93M 16K * I ttyp4 0:04.09
-tcsh (tcsh)
239 2421 1975 0 42 0 14.2M 24K * I ttyp4 4:42.46
/bin/emacs
239 6180 1975 0 44 0 7.15M 8K * IW ttyp4 0:00.48
rsh msia02.msi.se -l tardell exec /bin/csh -cf "setenv DISPLAY
130.237.208.9:0.0; exec dxterm -ls -title 'msia02' < /dev/null >>&
/dev/null " < /dev/null > /dev/null
239 8176 6180 0 0 -44 0K 0K - < ttyp4 0:00.00
<defunct>
0 10861 1975 32 44 0 7.79M 736K * S + ttyp4 2:27.27
top
239 14998 1975 0 44 0 1.65M 0K - TW ttyp4 0:00.10
nslookup
239 2004 2003 0 44 0 2.16M 0K pause IW ttyp5 0:00.61
-tcsh (tcsh)
239 2057 2004 0 44 0 1.76M 0K event IW + ttyp5 0:08.27
telnet vanh
239 2131 16284 0 42 0 2.04M 0K - TW ttyp6 0:00.04
sh /.../physto.se/fs/group/bub/cern/97a/bin/paw
239 2179 2131 0 42 0 18.1M 0K - TW ttyp6 0:01.56
/.../physto.se/fs/group/bub/cern/97a/bin/pawX11
0 2544 16284 0 42 0 7.79M 8K * TW ttyp6 0:03.82
top
0 2713 16284 0 44 0 7.79M 8K * TW ttyp6 0:11.89
top
239 4598 27020 0 0 -44 0K 0K - < ttyp6 0:00.00
<defunct>
239 4710 27020 0 0 -44 0K 0K - < ttyp6 0:00.00
<defunct>
239 4745 27020 0 51 0 8.97M 0K 7091094 IW ttyp6 0:08.31
gs -sDEVICE=x11 -dNOPAUSE -dSAFER -q -
239 9441 16284 0 44 0 1.65M 0K - TW ttyp6 0:00.05
nslookup
239 11471 16284 0 42 0 1.65M 0K tty IW + ttyp6 0:00.03
nslookup
239 16284 16290 0 44 0 7.95M 16K * I ttyp6 0:17.81
-tcsh (tcsh)
239 27020 16284 0 44 0 6.47M 0K event IW ttyp6 0:11.81
xdvi.bin -name xdvi lic
239 27089 27020 0 0 -44 0K 0K - < ttyp6 0:00.00
<defunct>
239 28211 27020 0 0 -44 0K 0K - < ttyp6 0:00.00
<defunct>
239 28533 27020 0 0 -44 0K 0K - < ttyp6 0:00.00
<defunct>
239 30308 27020 0 0 -44 0K 0K - < ttyp6 0:00.00
<defunct>
239 32066 27020 0 0 -44 0K 0K - < ttyp6 0:00.00
<defunct>
239 4086 4085 0 44 0 2.16M 0K pause IW ttyp7 0:00.53
-tcsh (tcsh)
0 4104 4086 0 44 0 7.77M 8K * I ttyp7 0:01.32
-csh (tcsh)
0 22497 4104 0 42 0 7.80M 8K * T ttyp7 16:13.43
top
0 27053 4104 0 44 0 2.15M 0K tty IW + ttyp7 0:00.48
-bin/tcsh (tcsh)
239 16334 16328 0 44 0 2.16M 0K pause IW ttyp8 0:00.57
-tcsh (tcsh)
239 16351 16334 0 44 0 1.76M 0K event IW + ttyp8 0:02.05
telnet alv.nada.kth.se
239 16526 16531 0 42 0 2.16M 0K tty IW + ttyp9 0:00.49
-tcsh (tcsh)
239 2258 2257 0 44 0 7.80M 16K * I + ttypa 0:02.93
-tcsh (tcsh)
239 11510 2258 0 44 0 21.0M 2.2M * S ttypa 0:07.15
/usr/physto/bin/netscape-3.01 -name netscape
213 8244 8238 0 42 0 2.16M 0K tty IW + ttypb 0:00.56
-tcsh (tcsh)
213 8255 8245 0 46 0 2.16M 0K tty IW + ttypc 0:00.53
-tcsh (tcsh)
213 8254 8235 0 44 0 2.16M 0K pause IW ttypd 0:00.62
-tcsh (tcsh)
213 9873 8254 0 42 0 1.73M 0K event IW + ttypd 0:00.14
telnet vanz
213 8257 8115 0 44 0 2.15M 0K tty IW + ttype 0:00.56
-tcsh (tcsh)
213 2120 8344 0 44 0 2.16M 0K tty IW + ttypf 0:00.55
-tcsh (tcsh)
254 8404 8424 0 44 0 2.15M 0K pause IW ttyq0 0:00.68
-tcsh (tcsh)
0 11502 8404 18 44 0 7.80M 736K * S + ttyq0 0:23.48
top
254 8422 8428 0 44 0 7.91M 16K * I ttyq1 0:03.91
-tcsh (tcsh)
254 11453 8422 0 42 0 1.88M 0K wait IW + ttyq1 0:00.10
ksh diceprod.job www
254 11492 11453 239 62 10 64.3M 10M - R N+ ttyq1 8:27.68
../bin/dicejets
229 9863 9878 0 42 0 2.17M 0K tty IW + ttyq2 0:01.02
-tcsh (tcsh)
229 9900 9863 2 44 0 21.0M 520K * S ttyq2 0:49.22
./netscape
229 9910 9863 0 44 0 12.3M 8K * IW ttyq2 0:01.65
/bin/emacs -i lp1340
901 11517 11512 0 44 0 2.13M 280K pause S ttyq3 0:00.40
-tcsh (tcsh)
0 11541 11542 67 44 0 2.13M 632K - R + ttyq3 0:00.24
ps laxw
901 11542 11517 4 42 0 2.04M 168K wait S + ttyq3 0:00.06
sh ./wired_mem_chk.sh
238 11511 11504 2 44 0 2.16M 416K pause S ttyq4 0:00.73
-tcsh (tcsh)
T.R | Title | User | Personal Name | Date | Lines |
9837.1 | | SMURF::DENHAM | Digital UNIX Kernel | Thu May 15 1997 18:57 | 10 |
| Are you running the POLYCENTER performance monitor, or something
similar? These long-running performance collectors use
Mach IPC (a la the ps command) to gather process data. There's
a common bug where they don't disable Mach IPC port-deletion
notifications, so tons of these messages pile up in the
collector's address space. That's why your MIPC (Mach IPC)
bucket keeps growing.
There's a patch for the polycenter thing. I forget the specifics,
but they're discussed elsewhere in this conference.
|
9837.2 | No Performance Monitor | RULLE::BRINGH | | Tue May 20 1997 12:09 | 7 |
| Hi
No, they are not running the Performance Monitor, but they do run top
and monitor. They exit them too, though, and most of the time neither is running.
/Mikael
|
9837.3 | | SMURF::DENHAM | Digital UNIX Kernel | Tue May 20 1997 14:27 | 4 |
| Well, those other programs are good candidates for the problem
as well. If you nuke them all, does the MIPC bucket shrink and does
wired memory come back? You've certainly got the symptom -- I've
never seen anything make MIPC grow like that.
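One way to watch for that is to sample the MIPC figure over time and log
it, before and after killing the collectors. A sketch (it assumes the
"MIPC = NNN" pair appears in vmstat -M's by-type section, as in the
listings above; the log path and interval are arbitrary):

```shell
# mipc_bytes -- pull the MIPC byte count out of `vmstat -M` output,
# read from standard input (so it works on saved snapshots too).
mipc_bytes() {
    awk '{ for (i = 1; i <= NF - 2; i++)
               if ($i == "MIPC" && $(i + 1) == "=")
                   print $(i + 2) }'
}

# Sample every 5 minutes, for example:
#   while :; do
#       echo "`date` `vmstat -M | mipc_bytes`" >> /var/tmp/mipc.log
#       sleep 300
#   done
```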
|