
Conference turris::digital_unix

Title:DIGITAL UNIX (FORMERLY KNOWN AS DEC OSF/1)
Notice:Welcome to the Digital UNIX Conference
Moderator:SMURF::DENHAM
Created:Thu Mar 16 1995
Last Modified:Fri Jun 06 1997
Last Successful Update:Fri Jun 06 1997
Number of topics:10068
Total number of notes:35879

8963.0. "vmstat's "id" field, CPU consumption and PID 0..." by AMCUCS::SWIERKOWSKI (Quot homines tot sententiae) Tue Feb 25 1997 20:07

Greetings!

  Digital UNIX V3.2C
  AlphaStation 200 4/166 (256MB)

  I've got an ISV who questions the true function of PID 0 (i.e. the [kernel
idle] process).  The "id" field reported by 'vmstat', when contrasted with the
TIME consumed as reported by 'ps aux' (or the table() interface used by this
ISV's application), seems contradictory to him.  Here's a piece of the email
he sent me:

<--- begin part of mail from customer --->
...
The main question is whether PID 0 is indeed the idle process. If yes, its CPU
utilization should match the global idle time. There are 2 ways in which the
lack of such a match can be demonstrated.

1. Low global idle time (say from vmstat) but high consumption by PID 0. This
is the situation your message addresses.

2. High global idle time but low CPU consumption by PID 0. This you can observe
on any DEC system as we did with Jeff during our phone conversation.

The conclusion from the above is that PID 0 cannot be a true idle process. In
fact, our own product adds up CPU consumed by all processes, PID 0 included,
and compares it to the global number. The 2 numbers always match pretty well.

Therefore, the remaining question comes down to one of the 2 choices:

1) what is the true nature of PID 0 and why it was misleadingly labeled
2) it is the idle process after all but the kernel instrumentation is wrong and
it gets allocated CPU from other activities.

...
<--- end part of mail from customer --->

  I've researched this extensively in COMETS and other assorted conferences,
and talked with some folks locally.  If I understand what vmstat is reporting,
then "high global idle time" (as indicated by a high value for the "id"
percentage from vmstat) means the system is either "really idle", waiting for
processes to schedule, or possibly I/O bound and "idle waiting for I/O".  This
seems to be confirmed by note 1451.2 in this conference, which states that
vmstat "combines" both types of "idle" time when reporting the "id" percentage.

  If this is true, then one would expect a high value reported for the "id" 
percentage by vmstat on either a system that was "really idle" or a system that
was "idle waiting for I/O".  Conversely, a low value in this field suggests a 
"busy" system.

  In a similar vein, a high value reported for CPU consumption by PID 0
suggests an "idle" system.  Conversely, a low value reported for CPU
consumption by PID 0 suggests a "busy" system, since presumably the [kernel
idle] process only gets scheduled when there are no other higher priority,
runnable processes to schedule.
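
  If PID 0 really were nothing more than that, its main loop would look
something like this sketch (again my own invention, *not* the real
idle_thread() in sched_prim.c):

    /* Hypothetical sketch of a pure idle loop -- invented names.  */
    extern int  runq_empty(void);   /* is any thread ready to run? */
    extern void cpu_wait(void);     /* stall until an interrupt    */
    extern void swtch(void);        /* switch to a ready thread    */

    void pure_idle_loop(void)
    {
        for (;;) {
            while (runq_empty())
                cpu_wait();    /* consumes time only when nothing  */
                               /* else in the system is runnable   */
            swtch();
        }
    }

  A process built only from loops like this would accumulate TIME at exactly
the rate the system is idle, which is the premise the scenarios below test.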

  The combination of "global idle time" increasing in conjunction with the CPU 
consumption by PID 0 being correspondingly large makes sense if the [kernel
idle] process is truly a low-priority "thumb twiddler".  Likewise, as CPU 
consumption by PID 0 decreases, a corresponding decrease in the "global idle 
time" makes sense.

  If the [kernel idle] process actually performs other "housekeeping"
functions (possibly initiating I/O?), or if other system overhead activity is
charged to the [kernel idle] process, then an apparent discrepancy between
vmstat's "id" percentage and ps's "TIME" value makes sense.  Note 1451.1
suggests that the "sy" percentage from vmstat is a composite of "system
overhead" functions, and therefore I assume a system tangled up excessively
with assorted "system overhead" activity would show a low value for vmstat's
"id" percentage.  That same note doesn't address what impact (if any) a high
"sy" percentage would have on PID 0.  Does PID 0 "get charged" for this
"system overhead" activity?

  On the assumption that the [kernel idle] process is truly a "NULL" process
that gets scheduled when there are no other higher priority runnable processes
to run, the following scenarios are possible:

1) If the system were I/O bound, it would make sense to see a high value 
   reported for the "id" field since "The *stat programs combine..." both 
   types of idle time.  The first scenario described by the customer doesn't 
   jibe with an "I/O bound system" (he sees a "Low global idle time").

2) If the system were CPU bound, then a "Low global idle time" would make 
   sense, but *NOT* in conjunction with "high consumption by PID 0".  By
   definition, a CPU bound system always has runnable processes, so PID 0
   should rarely consume CPU time.  Again, this doesn't jibe with the first 
   scenario described by the customer.

3) If the system were truly "idle", you'd expect a high value reported by 
   vmstat for the "id" percentage *AND* a "high consumption by PID 0".  Once 
   again, this doesn't jibe with the first scenario described by the customer.

  No matter how I look at this first scenario ("low global idle time" *AND*
"high consumption by PID 0"), I can't reconcile two metrics from two different
commands reporting what appears to be an impossible combination.  What am I
missing here?

---

  The second scenario ("high global idle time" *AND* "low consumption by
PID 0") jibes with an I/O bound system as far as seeing a "High global idle
time" goes, but then the "low CPU consumption by PID 0" doesn't make sense.
By definition, an I/O bound system doesn't have a lot of runnable processes,
since they are presumably stalled waiting for I/O activity to complete.  I'd
expect such a system to schedule the [kernel idle] process more often than not
while "waiting for I/O".

  The only configuration I can think of that would produce the metrics
described in the second scenario is an I/O bound system (which keeps the "id"
percentage from vmstat high) *AND* another low-priority, compute-intensive
process that precludes PID 0 from being scheduled (which keeps CPU consumption
for PID 0 low).  If this were the case, however, you'd expect to see a high
value for CPU TIME consumed by this other compute-intensive process (instead
of being consumed by the [kernel idle] process), but this isn't what the
customer is seeing either.

  Once again, no matter how I look at this second scenario ("high global idle 
time" *AND* "low consumption by PID 0"), I can't reconcile the two metrics 
reporting what appears to be an impossible combination.  What am I missing here?


  I did ask the customer to send me the results from the dbx session detailed
in note 4479.1 (i.e. looking at thread activity).  I'll post the results he 
sent me in the next reply.  I tried this on my own V4.0A system and got pretty
much the same sort of messages and display from dbx, but neither he nor I have
a clue what this is telling him.

  I also asked the customer about NFS activity and any pertinent patches (see
note 2786.2), but the customer only observed a "busy" system (per vmstat's "id"
percentage) with another Digital UNIX client.  Their Sun and IBM clients don't
impact the NFS server much one way or the other.  FWIW, they have *not* applied
the patch referenced in note 2786.2.

  Any pointers or RTFMs would be welcome (the "Performance & Tuning" manual in
the doc set didn't help).  I know basic internals type stuff in and out on that
"other OS" ;), but my internals/architecture education on Digital UNIX is being
stretched real thin here.  Unfortunately I can't just go plowing through source
listings for myself or grab that wunnerful "Internals & Data Structures" manual
like I would have for similar questions on VMS, er..., VAX/VMS, er..., OpenVMS,
(that "other OS of many names"), cheers...


						Tony Swierkowski
						Digital Equipment Corporation
						Software Partner Engineering
						Palo Alto, California
						(415) 617-3601
						"[email protected]"
8963.1. "Output from dbx re: PID 0..." by AMCUCS::SWIERKOWSKI (Quot homines tot sententiae) Tue Feb 25 1997 20:11
Greetings!

  As promised, here is a recent message from the customer referenced in the
base note, in response to my solicitation for the output from dbx per note
4479.1.  Again, neither he nor I understand what this is telling us, and a
quick read through the dbx man page and the Kernel Debugging manual didn't
help me much, cheers...

						Tony Swierkowski
						Digital Equipment Corporation
						Software Partner Engineering
						Palo Alto, California
						(415) 617-3601
						"[email protected]"


P.S.	The entire message with the dbx results the customer sent is below for
	your reference...

From:	DECPA::"[email protected]" "Yefim Somin" 25-FEB-1997 16:40:30.17
To:	amcucs::swierkowski (Tony Swierkowski)
CC:	
Subj:	Re: What did dbx yield about PID 0?...

> 
> Greetings!
> 
>   I'm preparing to enter a note describing the metrics you are observing that
> would seem to be contradictory (especially if the [kernel idle] process is
> supposed to be a low priority "thumb twiddler" and wondered if you tried the
> sequence of commands in my last message re: using dbx to find out more about 
> PID 0.  Per my previous message:
> 
> ...
> 
> Note headers/trailers zapped by me to protect the guilty!
> <--- begin body of note --->
> 
> As I understand it, the "kernel idle" process is really
> a collection of threads.  To find out what is happening,
>         # dbx -k /vmunix /dev/mem
>         (dbx) set $pid=0
>         (dbx) tlist
>         (dbx) tstack
> 
> <--- end body of note --->
> 


Here goes:

[to my ignorant eye, it looks like all kinds of stuff is going on in that 
process]

ys

------------------------------------------------------------------


Script started on Tue Feb 25 19:24:42 1997
$PWD % dbx -k /vmunix /dev/mem
dbx version 3.11.8
Type 'help' for help.

stopped at  [thread_block:1906 ,0xfffffc000043e2c8]	 Source not available

warning: Files compiled -g3: parameter values probably wrong
(dbx) set $pid=0
(dbx) tlist
thread 0xfffffc000ff08000 stopped at   [thread_run:2259 +0x2c,0xfffffc000043ea48]	 Source not available
thread 0xfffffc000ff08400 stopped at   [thread_block:1891 +0x28,0xfffffc000043e258]	 Source not available
thread 0xfffffc000ff3a800 stopped at   [thread_block:1906 ,0xfffffc000043e2c8]	 Source not available
thread 0xfffffc000ff3ac00 stopped at   [thread_block:1906 ,0xfffffc000043e2c8]	 Source not available
thread 0xfffffc000ff3b000 stopped at   [thread_block:1891 +0x28,0xfffffc000043e258]	 Source not available
thread 0xfffffc000ff3b400 stopped at   [thread_block:1906 ,0xfffffc000043e2c8]	 Source not available
thread 0xfffffc000ff3b800 stopped at   [thread_block:1891 +0x28,0xfffffc000043e258]	 Source not available
thread 0xfffffc000ff3bc00 stopped at   [thread_block:1891 +0x28,0xfffffc000043e258]	 Source not available
thread 0xfffffc000fd6e000 stopped at   [thread_block:1891 +0x28,0xfffffc000043e258]	 Source not available
thread 0xfffffc000fd6e800 stopped at   [thread_block:1906 ,0xfffffc000043e2c8]	 Source not available
thread 0xfffffc000fd6ec00 stopped at   [thread_block:1906 ,0xfffffc000043e2c8]	 Source not available
More (n if no)?
thread 0xfffffc000fd6f000 stopped at   [thread_block:1906 ,0xfffffc000043e2c8]	 Source not available
thread 0xfffffc000fd6f400 stopped at   [thread_block:1891 +0x28,0xfffffc000043e258]	 Source not available
thread 0xfffffc000fd6f800 stopped at   [thread_block:1891 +0x28,0xfffffc000043e258]	 Source not available
thread 0xfffffc000fd6fc00 stopped at   [thread_block:1891 +0x28,0xfffffc000043e258]	 Source not available
thread 0xfffffc000e582000 stopped at   [thread_block:1891 +0x28,0xfffffc000043e258]	 Source not available
thread 0xfffffc000e582400 stopped at   [thread_block:1906 ,0xfffffc000043e2c8]	 Source not available
thread 0xfffffc000e583c00 stopped at   [thread_block:1891 +0x28,0xfffffc000043e258]	 Source not available
thread 0xfffffc000e762800 stopped at   [thread_block:1891 +0x28,0xfffffc000043e258]	 Source not available
thread 0xfffffc000e762c00 stopped at   [thread_block:1906 ,0xfffffc000043e2c8]	 Source not available
thread 0xfffffc000e763000 stopped at   [thread_block:1891 +0x28,0xfffffc000043e258]	 Source not available
thread 0xfffffc000e763400 stopped at   [thread_block:1891 +0x28,0xfffffc000043e258]	 Source not available
More (n if no)?
thread 0xfffffc000e763800 stopped at   [thread_block:1906 ,0xfffffc000043e2c8]	 Source not available
thread 0xfffffc000e763c00 stopped at   [thread_block:1891 +0x28,0xfffffc000043e258]	 Source not available
thread 0xfffffc000f42c000 stopped at   [thread_block:1891 +0x28,0xfffffc000043e258]	 Source not available
thread 0xfffffc0007f23000 stopped at   [thread_block:1906 ,0xfffffc000043e2c8]	 Source not available
(dbx) tstack

Thread 0xfffffc000ff08000:
>  0 thread_run(new_thread = 0xfffffc000043fa4c) ["../../../../src/kernel/kern/sched_prim.c":2259, 0xfffffc000043ea44]
   1 idle_thread() ["../../../../src/kernel/kern/sched_prim.c":3053, 0xfffffc000043fb78]

Thread 0xfffffc000ff08400:
>  0 thread_block() ["../../../../src/kernel/kern/sched_prim.c":1891, 0xfffffc000043e254]
   1 malloc_thread() ["../../../../src/kernel/bsd/kern_malloc.c":1095, 0xfffffc00003f97dc]

Thread 0xfffffc000ff3a800:
>  0 thread_block() ["../../../../src/kernel/kern/sched_prim.c":1903, 0xfffffc000043e2c4]
   1 xpt_callback_thread() ["../../../../src/kernel/io/cam/xpt.c":2262, 0xfffffc00004ad86c]

Thread 0xfffffc000ff3ac00:
>  0 thread_block() ["../../../../src/kernel/kern/sched_prim.c":1903, 0xfffffc000043e2c4]
More (n if no)?
   1 xpt_pool_alloc_thread() ["../../../../src/kernel/io/cam/xpt.c":2315, 0xfffffc00004ad988]

Thread 0xfffffc000ff3b000:
>  0 thread_block() ["../../../../src/kernel/kern/sched_prim.c":1891, 0xfffffc000043e254]
   1 psiop_thread() ["../../../../src/kernel/io/cam/siop/psiop.c":4748, 0xfffffc00004d11ac]

Thread 0xfffffc000ff3b400:
>  0 thread_block() ["../../../../src/kernel/kern/sched_prim.c":1903, 0xfffffc000043e2c4]
   1 netisr_thread() ["../../../../src/kernel/net/netisr.c":829, 0xfffffc0000444c28]

Thread 0xfffffc000ff3b800:
>  0 thread_block() ["../../../../src/kernel/kern/sched_prim.c":1891, 0xfffffc000043e254]
   1 ubc_dirty_thread_loop(0xfffffc00005d0c30, 0xfffffc0000526b28, 0xfffffc000057e3c0, 0xfffffc000029b674, 0x0) ["../../../../src/kernel/vfs/vfs_ubc.c":1257, 0xfffffc000029b670]

Thread 0xfffffc000ff3bc00:
More (n if no)?
>  0 thread_block() ["../../../../src/kernel/kern/sched_prim.c":1903, 0xfffffc000043e2c4]
   1 thread_sleep(event = 18446739675908947976, lock = 0xfffffc00005e8290, interruptible = 3738696) ["../../../../src/kernel/kern/sched_prim.c":1581, 0xfffffc000043dbe4]
   2 _cond_wait(0xfffffc000057c058, 0xfffffc00005e8290, 0x21b, 0x0, 0xfffffc0000390ca4) ["../../../../src/kernel/msfs/bs/ms_generic_locks.c":592, 0xfffffc0000376a8c]
   3 ulmq_recv_msg(0xfffffc00005e8290, 0xfffffc000057f950, 0xfffffc0000534ac8, 0xfffffc000053dca8, 0xfffffc000057c058) ["../../../../src/kernel/msfs/bs/bs_msg_queue.c":539, 0xfffffc00003827b0]
   4 bs_io_thread() ["../../../../src/kernel/msfs/bs/bs_qio.c":3144, 0xfffffc0000390c18]

Thread 0xfffffc000fd6e000:
>  0 thread_block() ["../../../../src/kernel/kern/sched_prim.c":1891, 0xfffffc000043e254]
   1 thread_sleep(event = 17289301308300324847, lock = 0xfffffc000fef9e08, interruptible = -269488145) ["../../../../src/kernel/kern/sched_prim.c":1581, 0xfffffc000043dbe4]
   2 _cond_wait(0xefefefefefefefef, 0xfffffc000fef9e08, 0x21b, 0x0, 0xefefefefefefefef) ["../../../../src/kernel/msfs/bs/ms_generic_locks.c":592, 0xfffffc0000376a8c]
More (n if no)?
   3 ulmq_recv_msg(0xefefefefefefefef, 0xefefefefefefefef, 0xefefefefefefefef, 0xfffffc000057c110, 0xfffffc000057c018) ["../../../../src/kernel/msfs/bs/bs_msg_queue.c":539, 0xfffffc00003827b0]
   4 msgq_recv_msg(0xfffffc000fef9e08, 0xefefefefefefefef, 0xefefefefefefefef, 0xfffffc0000449f60, 0x0) ["../../../../src/kernel/msfs/bs/bs_msg_queue.c":305, 0xfffffc0000382368]
   5 bs_fragbf_thread() ["../../../../src/kernel/msfs/bs/bs_bitfile_sets.c":416, 0xfffffc00003a9f98]

Thread 0xfffffc000fd6e800:
>  0 thread_block() ["../../../../src/kernel/kern/sched_prim.c":1903, 0xfffffc000043e2c4]
   1 task_swapper_in_thread_loop(0xfffffc0000363440, 0xfffffc000f0d47c4, 0xfffffc0000582530, 0x0, 0xfffffc000fd6e800) ["../../../../src/kernel/kern/task_swap.c":696, 0xfffffc00002dc664]
   2 vm_pageout() ["../../../../src/kernel/vm/vm_pagelru.c":1182, 0xfffffc000036341c]

Thread 0xfffffc000fd6ec00:
>  0 thread_block() ["../../../../src/kernel/kern/sched_prim.c":1903, 0xfffffc000043e2c4]
   1 task_swapper_out_thread_loop(0x0, 0xfffffc00002dc320, 0xfffffc0000582530, 0x0, 0xfffffc00005d6f08) ["../../../../src/kernel/kern/task_swap.c":607, 0xfffffc00002dc398
More (n if no)?
More (n if no)?
]

Thread 0xfffffc000fd6f000:
>  0 thread_block() ["../../../../src/kernel/kern/sched_prim.c":1903, 0xfffffc000043e2c4]
   1 vm_pageout_loop() ["../../../../src/kernel/vm/vm_pagelru.c":589, 0xfffffc0000362428]
   2 vm_pageout() ["../../../../src/kernel/vm/vm_pagelru.c":1189, 0xfffffc000036343c]

Thread 0xfffffc000fd6f400:
>  0 thread_block() ["../../../../src/kernel/kern/sched_prim.c":1891, 0xfffffc000043e254]
   1 reaper_thread(0xfffffc000057b768, 0xfffffc0000000000, 0xffffffff90c6fa58, 0xfffffc000043dfe0, 0xfffffc000057b768) ["../../../../src/kernel/kern/thread.c":2743, 0xfffffc00002da2b0]

Thread 0xfffffc000fd6f800:
>  0 thread_block() ["../../../../src/kernel/kern/sched_prim.c":1891, 0xfffffc000043e254]
   1 swapin_thread(0x0, 0xfffffc00005d6e18, 0xfffffc00005d6e10, 0xfffffc00002d3188, 0x0) ["../../../../src/kernel/kern/thread_swap.c":485, 0xfffffc00002db754]
More (n if no)?

Thread 0xfffffc000fd6fc00:
>  0 thread_block() ["../../../../src/kernel/kern/sched_prim.c":1891, 0xfffffc000043e254]
   1 swapout_thread(0x0, 0xfffffc000fd6fc00, 0xfffffc000057b768, 0xfffffc00005201a0, 0xfffffc00005201d8) ["../../../../src/kernel/kern/thread_swap.c":619, 0xfffffc00002dbaf8]

Thread 0xfffffc000e582000:
>  0 thread_block() ["../../../../src/kernel/kern/sched_prim.c":1891, 0xfffffc000043e254]
   1 action_thread(0x0, 0x0, 0xffffffff803458c0, 0xffffffff80345a80, 0x0) ["../../../../src/kernel/kern/machine.c":587, 0xfffffc00002d3184]

Thread 0xfffffc000e582400:
>  0 thread_block() ["../../../../src/kernel/kern/sched_prim.c":1903, 0xfffffc000043e2c4]
   1 acctwatch_thread(0x7973706f43430065, 0xfffffc000023ada0, 0xfffffc0000582530, 0x0, 0x1ae1a3313830c) ["../../../../src/kernel/bsd/kern_acct.c":327, 0xfffffc000023ae2c]

Thread 0xfffffc000e583c00:
More (n if no)?
>  0 thread_block() ["../../../../src/kernel/kern/sched_prim.c":1891, 0xfffffc000043e254]
   1 nfs_thread() ["../../../../src/kernel/nfs/nfs_server.c":5290, 0xfffffc000042ccc4]

Thread 0xfffffc000e762800:
>  0 thread_block() ["../../../../src/kernel/kern/sched_prim.c":1891, 0xfffffc000043e254]
   1 nfs_thread() ["../../../../src/kernel/nfs/nfs_server.c":5290, 0xfffffc000042ccc4]

Thread 0xfffffc000e762c00:
>  0 thread_block() ["../../../../src/kernel/kern/sched_prim.c":1891, 0xfffffc000043e254]
   1 nfs_thread() ["../../../../src/kernel/nfs/nfs_server.c":5290, 0xfffffc000042ccc4]

Thread 0xfffffc000e763000:
>  0 thread_block() ["../../../../src/kernel/kern/sched_prim.c":1891, 0xfffffc000043e254]
   1 nfs_thread() ["../../../../src/kernel/nfs/nfs_server.c":5290, 0xfffffc000042ccc4]
More (n if no)?

Thread 0xfffffc000e763400:
>  0 thread_block() ["../../../../src/kernel/kern/sched_prim.c":1891, 0xfffffc000043e254]
   1 nfs_thread() ["../../../../src/kernel/nfs/nfs_server.c":5290, 0xfffffc000042ccc4]

Thread 0xfffffc000e763800:
>  0 thread_block() ["../../../../src/kernel/kern/sched_prim.c":1903, 0xfffffc000043e2c4]
   1 nfs_thread() ["../../../../src/kernel/nfs/nfs_server.c":5290, 0xfffffc000042ccc4]

Thread 0xfffffc000e763c00:
>  0 thread_block() ["../../../../src/kernel/kern/sched_prim.c":1891, 0xfffffc000043e254]
   1 nfs_thread() ["../../../../src/kernel/nfs/nfs_server.c":5290, 0xfffffc000042ccc4]

Thread 0xfffffc000f42c000:
>  0 thread_block() ["../../../../src/kernel/kern/sched_prim.c":1891, 0xfffffc000043e254]
More (n if no)?
   1 nfs_thread() ["../../../../src/kernel/nfs/nfs_server.c":5290, 0xfffffc000042ccc4]

Thread 0xfffffc0007f23000:
>  0 thread_block() ["../../../../src/kernel/kern/sched_prim.c":1903, 0xfffffc000043e2c4]
   1 mpsleep(0xffffffff00000000, 0x11a, 0xffffffff90f909d0, 0x0, 0xffffffff90f9d6b8) ["../../../../src/kernel/bsd/kern_synch.c":441, 0xfffffc00004093c4]
   2 event_wait(0x0, 0x0, 0xffffffff90fa2008, 0x0, 0x0) ["../../../../src/kernel/kern/event.c":139, 0xfffffc0000439060]
   3 mvfs_osf1axp_vfreeq() ["../mfs_mdep_osf1axp.c":3234, 0xffffffff90f90be0]
(dbx) quit
$PWD % more typ^[^[     pesrcip    cti  ri             exit

script done on Tue Feb 25 19:26:00 1997

% ====== Internet headers and postmarks (see DECWRL::GATEWAY.DOC) ======
% Received: from mail1.digital.com by mts-gw.pa.dec.com (5.65/09May94) id AA17984; Tue, 25 Feb 97 16:38:31 -0800
% Received: from gatekeeper1.bgs.com by mail1.digital.com (5.65 EXP 4/12/95 for V3.2/1.0/WV) id AA02104; Tue, 25 Feb 1997 16:32:58 -0800
% Received: from aix5.bgs.com  by aix6.bgs.com (8.7.Beta.10) with ESMTP id TAA31354; Tue, 25 Feb 1997 19:30:27 -0500
% Received:  by aix5.bgs.com (8.7.Beta.10) id TAA55560; Tue, 25 Feb 1997 19:30:26 -0500
% From: Yefim Somin <[email protected]>
% Message-Id: <[email protected]>
% Subject: Re: What did dbx yield about PID 0?...
% To: amcucs::swierkowski (Tony Swierkowski)
% Date: Tue, 25 Feb 1997 19:30:25 -0500 (EST)
% In-Reply-To: <[email protected]> from "Tony Swierkowski" at Feb 25, 97 11:10:57 am
% X-Mailer: ELM [version 2.4 PL13]
% Content-Type: text

8963.2. by NETRIX::"[email protected]" (Shashi Mangalat) Wed Feb 26 1997 18:34
The name [kernel idle] is a misnomer.  Along with the idle threads it also has
various other threads that do real work.  Use 'ps mlp0' to see the per-thread
times.

From looking at the sources, the idle threads (one per cpu) do not get charged
for their time, but the cpus do.  The time shown in ps for process 0 is the cpu
time used up by the non-idle threads.  vmstat gets the idle time from the cpu
and not from the idle thread.
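
A minimal sketch of that accounting (invented names, just an illustration of
the split described above, not the actual clock code):

    /* Illustration only -- invented names, not the real sources.  */
    struct cpu {
        long idle_ticks;            /* vmstat builds its idle time  */
    };                              /* from counters like this one  */

    struct thread {
        int         is_idle;        /* one idle thread per cpu      */
        long        cpu_ticks;      /* per-thread time; ps TIME for */
                                    /* pid 0 sums only these        */
        struct cpu *cpu;
    };

    void clock_tick(struct thread *cur)
    {
        if (cur->is_idle)
            cur->cpu->idle_ticks++; /* charged to the cpu, never    */
                                    /* to the idle thread itself    */
        else
            cur->cpu_ticks++;       /* charged to the thread        */
    }

So the two numbers live in disjoint places: ps on process 0 never sees the
idle ticks, and vmstat never looks at process 0's threads, which is why they
need not track each other.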

Hope this helps.

--shashi
[Posted by WWW Notes gateway]
8963.3. "PID 0 not "just a thumb twiddler"..." by AMCUCS::SWIERKOWSKI (Quot homines tot sententiae) Thu Feb 27 1997 16:22
Greetings!

  And thanks for the feedback/explanation in .2.  The customer has been 
stating that the name "[kernel idle]" wasn't really indicative of what 
PID 0 is doing, and suspected that it was in fact "busy" doing something 
else, or that CPU consumption was being charged to it by some other process, 
and thus that a high value for the TIME consumed by PID 0 (per commands like 
ps aux) did not necessarily indicate a truly "idle" system.  The value 
being returned from vmstat seemed to back this supposition up, but then he 
wanted to know more about the true nature of PID 0.  I've passed along the 
gist of your reply to the customer and await a response.  Thanks again for 
your help, cheers...

						Tony Swierkowski