T.R | Title | User | Personal Name | Date | Lines |
---|---|---|---|---|---|
163.1 | | VAXUUM::DYER | | Wed Oct 02 1985 15:40 | 3 |
| I believe Paul (ELUDOM::) Winalski has done something like
that, but it might depend on SHELL RTL routines . . .
<_Jym_>
|
163.2 | | EDSVAX::CRESSEY | | Thu Oct 03 1985 10:45 | 11 |
| While we are on the subject, can anybody explain 'pipes' to me?
I've seen them mentioned all over the place, but I have never
learned what they are or what they do. As you may imagine, I'm not
a Un*x user. If you want to explain it by comparison to other
operating system features, I'll probably understand you if you
use VMS as a basis for comparison. In a pinch, you can use TOPS-%%.
Thanks,
Dave
|
163.3 | | JON::MORONEY | | Thu Oct 03 1985 13:40 | 9 |
| Basically, a pipe is where there are 2 programs running concurrently and the
output of one is used as the input of another. It is similar to creating
a temporary file as the output of one program and using it as the input to
another, except no file is produced, and a write request by the first program
tells the second program there is data to be read and vice versa. Much of
Unix is designed around pipes and many utilities were designed so they can
be used as intermediates in pipes.
-Mike
|
163.4 | | REX::MINOW | | Fri Oct 04 1985 10:45 | 31 |
| To amplify on .3 somewhat. A pipe is generally implemented as
a 4K byte circular buffer -- some systems spool the buffers onto
disk. The normal way to use a pipe is to create it (a system
call), then spawn two subprocesses, attaching one subprocess's
"sys$output" to the pipe and the other's "sys$input" to it. The command language
does this for you if you write, for example,
grep "foo" source.file | sort
This uses the grep utility (similar to VMS SEARCH) to extract
all lines containing the text "foo" from source.file.
Normally, these would be written to your terminal. The | tells
the command interpreter to pipe them to the sort program.
(If you don't give sort any arguments, it sorts the full
input file, writing to its sys$output, which may also be a pipe.)
This is equivalent to the following VMS code:
$ search/output=temp.tmp source.file "foo"
$ sort/key=(pos=1,size=80) temp.tmp sys$output:
$ delete temp.tmp
However, note that Unix runs both programs concurrently as
separate processes and doesn't have to manage a temp file.
Pipes, fast process creation, and I/O redirection are
very nice features of Unix. Note also that since
files may always be treated as streams of bytes,
binary data may be "piped" without added complications.
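A minimal sketch, from memory and with all error handling omitted, of roughly
what the command interpreter does to set up the grep | sort example above.
pipe(), fork(), dup2(), and execlp() are the standard Unix calls; everything
else here is illustration, not the actual shell source:

    #include <sys/wait.h>
    #include <unistd.h>

    int main()
    {
        int fd[2];                      /* fd[0] = read end, fd[1] = write end */

        pipe(fd);                       /* create the pipe */
        if (fork() == 0) {              /* first child runs grep */
            dup2(fd[1], 1);             /* its stdout is the pipe's write end */
            close(fd[0]); close(fd[1]);
            execlp("grep", "grep", "foo", "source.file", (char *) 0);
        }
        if (fork() == 0) {              /* second child runs sort */
            dup2(fd[0], 0);             /* its stdin is the pipe's read end */
            close(fd[0]); close(fd[1]);
            execlp("sort", "sort", (char *) 0);
        }
        close(fd[0]); close(fd[1]);     /* parent keeps neither end open */
        wait(0); wait(0);               /* wait for both children */
        return 0;
    }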
Martin.
|
163.5 | | DONJON::GOLDSTEIN | | Fri Oct 04 1985 11:38 | 12 |
| I'm not that good at VMS either, but I'm trying to defend my honor
against the scores of Un*x hackers (there are no users, only hackers)
out there...
So can someone tell me if pipes are related, functionally, to VMS
mailboxes? I would think that if one process created a mailbox for
output which the other took for input, the effect would be similar.
I don't know how to code mailboxes, so a little illustration would
be nice.
C'mon, let's defend VMS! How can Un*x have a feature that nice, for which
VMS (absent Shell) lacks even a workaround?
|
163.6 | | BEING::POSTPISCHIL | | Fri Oct 04 1985 11:50 | 9 |
| Re .5:
I am not sure how pipes may be related to mailboxes, but there is at least
one significant difference: An important aspect of pipes is that they can
be transparent to the processes using them; the processes just see ordinary
files and no special coding is needed.
-- edp
|
163.7 | | VAXUUM::DYER | | Fri Oct 04 1985 23:44 | 5 |
| One important difference is that Un*x allows you to run
more than one image in the same process. When you pipe from one
image to the other, the second image is taking the output from
the first image as soon as the first image produces it.
<_Jym_>
|
163.8 | | RANI::LEICHTERJ | | Sat Oct 05 1985 12:03 | 48 |
| re: .7
No. Unix is just much freer about creating new processes for you, since Unix
process creation is a lot cheaper. When you use a pipe, you generally connect two
real, live processes - started/stopped by LDPCTX/SVPCTX.
re: Pipes on VMS
Pipes and mailboxes are closely related, but NOT identical.
1. A pipe provides a pair of one-way channels between its two ends.
When P and Q share a pipe V, you can imagine V to have two
halves, Va and Vb. P has write access to Va, read access to Vb;
Q has write access to Vb, read access to Va. Va is used to send
from P to Q; independently of this, Vb is used to send from Q to
P.
A mailbox is a single shared queue. If P and Q share mailbox
M, and P writes a message into M and then tries to read Q's
response, then if it happens to read M before Q does, it will
get its own message back.
Simulating a pipe on VMS requires two mailboxes, each used as
a one-way communication channel. Unix pipes are actually
organized the same way - they appear as a pair of files -
but the call that creates a pipe creates both together, and
they are torn down together by the system.
2. Mailboxes have names, and are opened up via their names; pipes
(at least in standard Unix) are anonymous and accessible only
by their creators or their creator's direct descendants. A
Unix process creation passes open files to the child process.
The way you use a pipe is to create and open it, and fork
(spawn). Now both parent and child have the same pipe open.
(They must agree on which "half" of the pipe each will write to,
and which half each will read from.) This makes simple
connection of programs to each other almost automatic on Unix;
on VMS, you have to set up the mailboxes in the master process,
with appropriate logical names, then find and open them in the
child process. Of course, the fact that mailboxes have names
means they can be shared more generally than Unix pipes.
3. The Unix Shell provides a direct, simple interface to pipes that is
completely invisible to the programs connected to them. DCL
provides no simple analogous services; in fact, you can't even
create a mailbox in DCL (except by a real hack involving SPAWN,
but I don't feel like going into that now).
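To make point 1 concrete, here is a rough sketch of such a pipe-from-two-
mailboxes setup in C. The system services ($CREMBX, $QIOW) and the
$DESCRIPTOR macro are real VMS; the logical names, buffer sizes, and the
missing error handling are purely illustrative, so treat it as a sketch
rather than a working package:

    #include <descrip.h>
    #include <iodef.h>
    #include <starlet.h>

    static unsigned short to_peer, from_peer;   /* channels to the two mailboxes */

    /* Create both halves of the "pipe" as temporary mailboxes with logical
     * names (made up here); the cooperating process assigns channels to the
     * same names, reading one mailbox and writing the other. */
    int open_pipe_pair(void)
    {
        $DESCRIPTOR(half_a, "PIPE_HALF_A");
        $DESCRIPTOR(half_b, "PIPE_HALF_B");

        sys$crembx(0, &to_peer,   512, 2048, 0, 0, &half_a);
        sys$crembx(0, &from_peer, 512, 2048, 0, 0, &half_b);
        return 1;
    }

    int pipe_write(char *buf, int len)          /* send toward the peer */
    {
        return sys$qiow(0, to_peer, IO$_WRITEVBLK, 0, 0, 0,
                        buf, len, 0, 0, 0, 0);
    }

    int pipe_read(char *buf, int len)           /* receive from the peer */
    {
        return sys$qiow(0, from_peer, IO$_READVBLK, 0, 0, 0,
                        buf, len, 0, 0, 0, 0);
    }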
-- Jerry
|
163.9 | | VAXUUM::DYER | | Sun Oct 06 1985 00:18 | 13 |
.8> Unix is just much freer about creating new processes for you,
.8> since Unix process creation is a lot cheaper.
(Oops! How do I wriggle out of this one?)
Uh . . . uh . . . I was talking about Un*x, not Unix?
.8> . . . you can't even create a mailbox in DCL (except by a real
.8> hack involving SPAWN, but I don't feel like going into that
.8> now).
Well, whenever you do feel like going into it, this is the
note file to do it in!
<_Jym_>
|
163.10 | | EDSVAX::CRESSEY | | Mon Oct 07 1985 12:25 | 21 |
| Sounds like pipes are a little closer to TOPS-10 'pseudoteletypes'
than they are to VAX/VMS mailboxes...
1) Pseudo teletypes are bidirectional
2) Pseudo teletypes appear like I/O devices to the processes
Of course there are some constraints on PTYs that appear not to
be laid on PIPES:
1) At one end, the PTY looks like a job-controlling console.
2) If the program wants to be event driven by activity at the other
end of the pipe, it has to program the response to the interrupt.
All in all, these coprocessing hacks seem to solve the same problem that
'coroutines' were theoretically supposed to hack, but I've never
seen a good high level language implementation of coroutines.
Thanks for the informative responses, you noters!
Dave
|
163.11 | | SPRITE::OSMAN | | Mon Oct 07 1985 15:37 | 15 |
| > completely invisible to the programs connected to them. DCL
> provides no simple analogous services; in fact, you can't even
> create a mailbox in DCL (except by a real hack involving SPAWN,
> but I don't feel like going into that now).
CAUTION:
Another notes file explained this hack. I attempted it, and the
DCL mailboxes cause your process to hang itself if you attempt
to write messages longer than 16 bytes into the mailbox!
(yes, I reported that in the other notes file and never got an
acknowledgment or workaround)
/Eric
|
163.12 | | VIKING::CAMPBELL | | Mon Oct 14 1985 19:04 | 21 |
| Many people explain the use of pipes by saying "It's just like putting
prog1's output into a temporary file, and then making prog2 take its
input from the temporary file". This explanation is useful pedagogically
but functionally incorrect, for two reasons:
1) If the data to be passed between programs is bigger than the
free space remaining on your disk, the temp file hack fails
2) If prog1 never terminates, the temp file hack fails. (This
is not a vacuous objection. Imagine prog1, which computes and
lists the prime numbers, forever. prog2 reads the first 100
numbers you give it, prints their sum, and exits. With real
pipes, you can say
prog1 | prog2
and get the sum of the first 100 primes. On VMS, you fill up
your disk and crash.)
Actually I think DCL could hack a simulated pipe facility using DECnet...
all we need to do is come up with a syntax ('|' anyone?).
|
163.13 | | JON::MORONEY | | Mon Oct 14 1985 20:44 | 7 |
| Re .12: Example 2 will fail under Unix as well, but for a different (but
similar) reason. If Program 2 exits and closes the pipe, the next time
Program 1 attempts a write it will abort with the error "broken pipe."
Example 1 is correct. Theoretically it is possible to create a closed loop
of pipes, if desired.
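To see the "broken pipe" case in miniature, here is a hedged sketch using
standard Unix calls (the prime generator is faked with a constant, and only
the error we care about is checked):

    #include <errno.h>
    #include <signal.h>
    #include <stdio.h>
    #include <unistd.h>

    int main()
    {
        int fd[2];

        pipe(fd);
        if (fork() == 0) {                      /* the "prog2" side */
            char c;
            int lines = 0;
            close(fd[1]);
            while (read(fd[0], &c, 1) == 1)     /* pretend to read 100 numbers */
                if (c == '\n' && ++lines == 100)
                    break;
            close(fd[0]);                       /* the reader goes away */
            _exit(0);
        }

        close(fd[0]);                           /* the "prog1" side writes forever */
        signal(SIGPIPE, SIG_IGN);               /* get -1/EPIPE instead of being killed */
        for (;;)
            if (write(fd[1], "17\n", 3) < 0 && errno == EPIPE) {
                fprintf(stderr, "broken pipe\n");
                break;
            }
        return 0;
    }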
-Mike
|
163.14 | | VAXUUM::DYER | | Mon Oct 14 1985 23:05 | 5 |
| .12> Actually I think DCL could hack a simulated pipe facility
.12> using DECnet.
See Note #99!!!
<_Jym_>
|
163.15 | | SPRITE::OSMAN | | Tue Oct 15 1985 12:14 | 44 |
| NO NO ! Please don't let those vms folks make the same mistake they
made with command recall.
IT WAS A MISTAKE to put command recall in DCL. It should have been put
deeper down, like in the terminal driver. That way, CTRL/B would succeed
in bringing back more than the last line, whether you are in DCL, DEBUG,
MAIL, etc.
With the current MISTAKE, CTRL/B only brings back the most recent line in most
contexts.
So, if we implement some sort of "|" syntax for pipes, do it in RMS,
and *not* in DCL.
Then, I'll be able to specify
FOO.EXE|ZOT.EXE|filespec
to almost any context that wants me to type a filespec. Such an input
would cause the context to receive whatever FOO.EXE outputs, where FOO.EXE
takes its input from ZOT.EXE which takes its input from the "filespec".
The original context would think it's reading a file, even though
it's really reading output of a "live" FOO.EXE.
For instance, suppose I'm in VNOTES, in the middle of writing a note,
so I'm actually in TPU. I desire to include into my note all the lines
of some file containing the word "cheese". So, first I start my inclusion
like this:
<Do key> INCLUDE
Then, instead of typing a filespec, I type something like
|SYS$SYSTEM:SEARCH.EXE cheese
Now, TPU will think it's reading from a file, but it's actually reading
the output of the SEARCH program.
If you only teach DCL about "|", such examples wouldn't work.
/Eric
|
163.16 | | EDSVAX::CRESSEY | | Wed Oct 16 1985 12:00 | 49 |
| Thanks for the excellent overview of pipes you noters have put in
here for me and others.
I like the concept. It gives one a lot of power for relatively
little intricacy. It is also useful in a wide variety of contexts.
That's what makes a system feature elegant.
Now, I have a couple of questions:
First, is there any way to plug more than one pipe into the input
side of a process? Let's say I have an old-fashioned "merge"
program that merges two similar record streams that have each been
sorted. I'd like to plug a pipe into each of them.
Maybe a more up to date example might help: what if I want to
provide a sequential image of a database and a transaction journal
to some process that produces an up to date database?
The syntax I saw a few notes back:
FOO | BAR | BLECH
doesn't seem to allow this.
But mathematical function notation does allow it:
If foo(inp) represents the output of foo when given inp, then
blech(bar(foo(inp))) represents the piping of foo into bar into
blech.
Now if I say
merge (master, update)
I have provided more than one input to the merge function.
Second, is it possible to put a "Y valve" on the output
side of a pipe so as to feed the outputs, in parallel to more than
one process?
I can't think of a use for this, but with array processors around the
bend, who knows?
Dave
|
163.17 | | VAXUUM::DYER | | Wed Oct 16 1985 12:22 | 3 |
| [RE .16]: Unix has a program called "tee" that works as
the Y-valve (actually a T-valve - hence, the name) you describe.
<_Jym_>
|
163.18 | | REX::MINOW | | Wed Oct 16 1985 16:05 | 58 |
| As for the other side of the problem specified in .16, if you are
trying to do something like:
+-------+ +-------+
| proc |======>| proc |
| 1 |======>| 2 |
+-------+ +-------+
you are begging for a deadly-embrace lockout. However, it is quite
simple to do something like the following:
int pipe_a[2], pipe_b[2];        /* [0] is the read end, [1] the write end */
int pid1, pid2;

pipe(pipe_a);                    /* create a pipe */
pipe(pipe_b);                    /* and another one */
if ((pid1 = fork()) == 0) {      /* make a subprocess */
        /* this is subprocess 1: take input from a file
         * or some external device, write it to pipe_a[1].
         * Also take SMALL amounts of control input from
         * pipe_b[0].
         */
}
else if ((pid2 = fork()) == 0) { /* another subprocess */
        /* this is subprocess 2: take input from pipe_a[0] and
         * from pipe_b[0], write output to stdout (or whatever).
         */
}
else {
        /* this is the master process: take input from the
         * user or whatever, writing it to pipe_a[1] and pipe_b[1]
         * as needed.
         */
}
(The fork() system call creates a new process duplicating the
current process environment -- including open devices/files.
It returns 0 to the child, and the child's process id to the master.)
For example, the master process might inquire for a customer number
and feed it to subprocess 1, which looks for the information in
a database, feeding it to subprocess 2.
Note, however, that classical Unix has fully-synchronous I/O, so
you can still get into a deadly embrace. Modern implementations, such
as Ultrix, use a more elaborate operating system mechanism that
does permit asynchronous I/O to pipes. There is a crude "master to
slave" signalling mechanism (that has problems of its own).
Also, note that the above (1) doesn't include error handling and (2)
is from memory, so no complaints that there are errors. Real-world
implementations aren't much more complicated, however.
Moral: for a limited range of applications, Unix offers a much simpler
programmer interface to the operating system. However, it is difficult
to scale up to larger application environments. VMS is damned hard
to get started with (go ahead and write the above in Vax C, if you
don't believe me), but once you have mastered the initial hurdle, it has
the capabilities a customer will need for large-scale applications.
Martin.
|
163.19 | | VIKING::WASSER | | Wed Oct 16 1985 16:45 | 32 |
| Re. -.2 (multi pipes into a process)
Yes and No.
Yes, the pipe feature of the operating system acts much
like a file and (as far as I know) you can have several pipes
open at one time.
No, I don't think you can use the standard shell syntax to
pipe the output of more than one process into a single input.
Each program is considered to have a single standard input stream
and a single standard output stream, and it is these streams that
are redirected or piped. The common way of getting more than
one file of data into a program is to have the program take
one or more file names as arguments:
MERGE <datafile transactionfile | TEE outputfile | PRINT
MERGE takes data from standard input and merges it with data from
a named transaction file and sends the results to TEE
which writes the data to an output file and passes the same data
along to PRINT which spools it to the printer.
Since the pipe feature is general enough to support almost any
combination of processes feeding each other, it is more a
limitation of the shell command language than anything else.
(This may already have been done... It's been a while since I
got to use a UNIX system)
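For what it's worth, a MERGE along those lines is easy to sketch in C:
standard input is one sorted stream (so it can be fed by a pipe), the file
named on the command line is the other, and the merged result goes to
standard output. The program and its behavior are assumed for illustration,
not taken from any real utility:

    #include <stdio.h>
    #include <string.h>

    int main(int argc, char **argv)
    {
        char a[512], b[512];
        FILE *f;
        int have_a, have_b;

        if (argc != 2 || (f = fopen(argv[1], "r")) == NULL) {
            fprintf(stderr, "usage: merge transactionfile < datafile\n");
            return 1;
        }
        have_a = (fgets(a, sizeof a, stdin) != NULL);   /* first line of each stream */
        have_b = (fgets(b, sizeof b, f) != NULL);
        while (have_a || have_b) {
            if (have_a && (!have_b || strcmp(a, b) <= 0)) {
                fputs(a, stdout);                       /* emit the smaller line */
                have_a = (fgets(a, sizeof a, stdin) != NULL);
            } else {
                fputs(b, stdout);
                have_b = (fgets(b, sizeof b, f) != NULL);
            }
        }
        fclose(f);
        return 0;
    }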
-John A. Wasser
|
163.20 | | RANI::LEICHTERJ | | Sun Oct 20 1985 14:55 | 58 |
| First off, it's important to distinguish between pipes and CLI support for
them.
VMS, in mailboxes, supports everything supported by Unix pipes, and more. It's
a little harder to use VMS mailboxes, but the difficulty is pretty much due to
the relative difficulty of using ANY VMS system service, as compared to the
analogous Unix system service. Unix system services solve the easy case. I/O
in Unix is much easier than in VMS, but much of the VMS difficulty comes from
support for asynchronous I/O, multiple streams, keyed access, etc. - which
are just not available in Unix. It would be very simple to write a small
routine that opened a pair of mailboxes and set them up for use as a pipe.
Unix pipes are NOT analogous to PTY's. Originally, Unix's world view was that
terminals were "glass Teletypes". Such simple devices require little special
support, so programs worked as well through pipes as they did talking to a
terminal directly. Over the years, Unix has developed its own new-style
programs - screen editors, and so on. They don't work through pipes any more
than similar programs on VMS work through mailboxes. PTY's have been written
over the years for various versions of Unix, but none has really caught on as
a standard. Similarly, PTY's have been written for VMS, but none has really
caught on either. (None of the widely-available ones work too well, for one
thing!) In any case, the complexities and resulting inefficiencies of a PTY
interface imply that you probably don't want to use it for linking programs
together unless you have no choice - a pipe- or mailbox-like interface makes
more sense when programs are DESIGNED to communicate with other programs. So
you want both.
As for CLI interfaces: The Unix interface provides a simple, clean way to
specify pipeline parallelism. It quickly becomes messy if you want to go much
beyond that; also, the semantics of arbitrary interconnections becomes
problematical, which shows up as deadlocks.
Some work was done at the University of Arizona on a Unix CLI called something
like rx. It viewed commands as co-routines and provided a general way to
connect them. Each command could potentially have many inputs and many outputs
that were accessible and controllable from the CLI level. Where Unix has only
a single primitive that manipulates streams of data as such (tee), rx had
several, including things like "merge n input streams" and "split one stream into
n output streams". Neat and powerful, but it was a research toy that no one
seems to be using for anything.
Unix these days has a lower-level analogue of pipe connects using streams and
filters. These let you "plug together" stuff with the I/O system so that, for
example, you could build a "device" that when read gave you the merged input of
two other devices. This stuff was written up in the second Bell Labs Tech.
Journal - or whatever they are called now - on Unix about a year ago, and is
supposed to be part of the next release of Unix from AT&T. I proposed something
of this general sort for VMS in the VMSPHASE0 notesfile, but I doubt anything
will come of it.
The most general treatment of interconnection issues shows up in various
functional and dataflow languages. Some of these have dealt with the problems
of specifying complex interconnections by using a graphical interface - you
position boxes for various modules on the screen and draw pipes between them.
Making this work requires restrictions on the semantics of the pieces - "single
assignment" or "side-effect-free" modules only - that would not be acceptable
in a traditional environment.
-- Jerry
|
163.21 | | TOOLS::COWAN | | Thu Oct 31 1985 00:05 | 25 |
| I think pipes are more like mailboxes than people think. The pipes I
used (4.1cBSD and 4.2BSD -- from Berkeley) gave you two "channels" to the
pipe. One was the input and the other the output. Just like VMS mailboxes,
if I wrote to the "write" channel and read from the "read" channel, I
read my own data. Typically, the parent creates two pipes, forks a child
and then closes one half of each pipe. The child closes the halves left
open by the parent. This is necessary to make end of file processing
work. Without it, if the child dies, and the parent still has the
write channel to the pipe open, the parent will never see EOF.
Unix people are used to using pipes all over the place because traditional
Unix utilities worked with the glass teletype concept (as some explained
previously).
I worked on a research project similar to "cx" from the Univ of Arizona.
The model we were using hopelessly fell apart with screen mode applications.
It also fell apart with the concept of records. Getting the next piece
of data is simple with streams of bytes, but there is too much information
needed to get the next piece of data from an ISAM file, for instance.
I like the idea of pipes, but they would be less useful than VAXStation
type windows on my cluster.
KC
|
163.22 | | BEING::POSTPISCHIL | | Thu Oct 31 1985 08:07 | 14 |
| Re .21:
> Typically, the parent creates two pipes, forks a child and then closes one
> half of each pipe. The child closes the halves left open by the parent.
> This is necessary to make end of file processing work. Without it, if the
> child dies, and the parent still has the write channel to the pipe open, the
> parent will never see EOF.
Huh? If the parent closes half and the child closes half, the pipe is gone.
Also, the typical usage of pipes with the shell involves the shell creating
one pipe for two children.
-- edp
|
163.23 | | HARE::COWAN | | Fri Nov 22 1985 16:49 | 11 |
| The pipes I used really worked the way I described in .21. If the parent
didn't close one half of the pipe, the parent could still write and then
read the pipe. This implementation returned EOF on a read only if there
were no active writers.
The term pipe implies a directional flow between processes. In this
implementation, writing to a pipe was like pouring water into a basin.
Reading a pipe was like opening the drain. Anyone could do the
reading and writing. Sounds a lot like VMS mailboxes.
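In code, the rule being debated in .21-.23 looks roughly like this (a sketch
with standard Unix calls and no error handling; each side closes the half it
does not use, which is what lets the reader's read() return end-of-file):

    #include <unistd.h>

    int main()
    {
        int fd[2];

        pipe(fd);
        if (fork() == 0) {                  /* child: the reader */
            char buf[64];
            int n;
            close(fd[1]);                   /* close the half the parent writes */
            while ((n = read(fd[0], buf, sizeof buf)) > 0)
                write(1, buf, n);           /* copy pipe data to stdout */
            /* read() returns 0 (EOF) only once every write end is closed */
            _exit(0);
        }
        close(fd[0]);                       /* parent: close the half it won't use */
        write(fd[1], "some data\n", 10);
        close(fd[1]);                       /* now the child can see EOF */
        return 0;
    }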
KC
|
163.24 | | PSW::WINALSKI | Paul S. Winalski | Sat Aug 15 1987 22:55 | 112 |
| Unix-style pipes do indeed exist under VMS. The VMS pipe driver ships with
the DEC/SHELL layered product. Anybody within DEC interested in putting the
pipe driver on their system can copy it from:
TLE::SYS$SYSTEM:PIPEDRIVR.EXE
To install it, you must issue these commands from a privileged account (usually
it's put in the SYSTARTUP.COM file):
$ RUN SYS$SYSTEM:SYSGEN
SYSGEN> CONNECT PIPE/NOADA/DRIVER=PIPEDRIVR
Pipes are very similar to mailboxes. The substantive differences are:
1) A channel assigned to a pipe is either a read-only channel or a write-only
channel. Pipes are a unidirectional communications medium, with a distinct
input end and a distinct output end. It is possible to have multiple readers
and multiple writers, but each individual I/O channel may not be used for
both reading and writing.
2) As a consequence of 1), it is possible to tell with a pipe when there are
no more readers or no more writers. This condition is called a "broken
pipe" on Unix. You cannot tell this with mailboxes. Broken pipe detection
is a key feature for the usability of pipes for connecting filter programs
a la the "|" notation in Unix shells. Otherwise, if one end of a piped
communication exits or dies, the other end has no way of knowing and will
hang.
The VMS pipe driver has a few features to make it easier to use outside the
special environment of DEC/SHELL:
1) Unlike mailboxes, pipes can be created from DCL. PIPE0: is the template
device. Any channel assignment to it results in the creation of a new
pipe device. Thus the command:
$ OPEN/WRITE FOO PIPE:
creates a new pipe, assigns a write-only channel to it, and opens a process-
permanent file on it accessible by logical name FOO.
2) As far as RMS is concerned, VMS pipes appear to be mailboxes. Thus, pipes
can be opened as RMS files with no special programming involved.
3) A write to a mailbox does not complete until somebody reads the data, unless
you put the IO$M_NOW option on your QIO. Unfortunately, RMS does not use
the NOW option. Writes to a pipe always complete immediately if there is
enough quota left in the pipe (there is 4K available) to buffer the message.
Otherwise, the write doesn't complete until there *is* buffer space available
or until the written message is read, whichever comes first. This is like
the NOW option on mailboxes, except with pipes you always get that option.
This allows the asynchronous write behavior necessary to make Unix-style
pipelines work properly, without any special programming, even when using
RMS.
4) When a mailbox fills up, the process that filled it is put in RWMBX state.
This is very bad, as it means that the process cannot be scheduled for any
reason (including process deletion) until the mailbox is emptied enough for
the write to complete. A write QIO to a full pipe simply doesn't complete.
You don't go into any funny resource wait state.
Those interested in hacking with pipes can find a specification for the driver,
including its QIO-level interface, in
TLE::ULNK$:[WINALSKI.PIPEDRIVR]PIPESPEC.MEM. File
TLE::ULNK$:[WINALSKI.COM]PIPELINE.COM is a little command procedure that lets
one construct Unix shell-style pipelined command lines on VMS. It accepts
commands of the form command|command|command..., creates the necessary pipes,
and spawns the necessary processes to do the commands. Within each
pipelined subprocess, logical name IN$ is the standard input and OUT$ is the
standard output (I tried to use SYS$INPUT and SYS$OUTPUT, but DCL does
something funny that eats the last record off the pipelined output at each
stage).
I also have some small filter files (all in TLE::ULNK$:[WINALSKI.COM]):
FNAME.COM given a filespec, extract filename portion
FNFT.COM given a filespec, extract filename.type
PIPEDOSET.COM takes a command string as its parameter. For each item read
from IN$, substitutes the item for the string $SET$ wherever
it appears in the command string, then executes the command
string. This is very useful for executing a "template"
command over and over again, varying only the parameter(s).
I usually define symbols for these command procedures, as well as symbols for
piped forms of DIRECTORY and SEARCH:
FNAME :== @FNAME.COM
FNFT :== @FNFT.COM
DO :== @PIPEDOSET.COM
PDIR :== DIRECTORY/NOHEAD/NOTRAIL/OUT=OUT$
sends the filespecs from a directory command down the pipe
PFIND :== SEARCH/HEAD/WINDOW=0/OUT=OUT$
"poor man's GREP." Sends the filespecs of all files that contained
the search string down the pipe.
Here is a small example of using PIPELINE.COM and some of these filters. In
this example, we search all .B32 files in the current directory for the string
"%INCLUDE FOO". We obtain the filename/types for the files, then use DO to
RESERVE them all from the CMS library:
$ @PIPELINE
> PFIND *.B32 "%INCLUDE FOO" | FNFT | DO CMS RESERVE $SET$ """change FOO"""
--PSW (author of the VMS pipe driver)
|
163.25 | SMP Dataway ? | CUJO::MEIER | Systems Engineering Resident... | Sat Aug 22 1987 22:27 | 7 |
| Paul,
Would you recommend, that the PIPE-Driver be a good /high speed/
supported method for sending data between SMP (VMS 5.0)
processors ?
Al (Realtime resident)
|
163.26 | | UFP::MURPHY | Rick Murphy | Sun Aug 23 1987 10:52 | 12 |
| Re: .25:
I don't understand...
If you are talking about sending data between PROCESSES (not
processors), then use the normal mechanisms - global sections come
to mind as you mention "high speed".
An SMP multiprocessor is *not* a cluster - the normal mechanisms
work, assuming you interlock them properly.
If you really mean that data is to be transferred between *processors*
(like between two 8800's) then you will need to use DECnet.
-Rick
|
163.27 | More Information -- | | CUJO::MEIER | Systems Engineering Resident... | Mon Aug 24 1987 23:58 | 16 |
|
Actually, I'm thinking of an existing application using MAILBOXes,
and switching to PIPE's when SMP becomes REAL - Released.
Paul, have you done any performance studies on the DRIVER?
What I have in mind is using an 8800, tasking the first processor,
to take data from a DRB-32, compressing the data / filtering it.
Feed the data through a PIPE (Maybe), to the second processor,
for reporting and displaying.
I was thinking that this would simplify the data interface; QIO's would
be easier than GLOBAL sections, since the data flows only
in one direction, and the CPU will have SHELL installed.
Also, there are all the points that Paul mentioned in favor of pipes over MAILBOXes.
|
163.28 | | PSW::WINALSKI | Paul S. Winalski | Fri Sep 04 1987 16:41 | 17 |
| The pipe driver and mailbox driver have comparable performance. The mailbox
driver is slightly faster, but when you factor in the defensive programming
necessary to handle broken connection conditions, the pipe driver definitely
wins all the time.
The only officially supported interface to the pipe driver is through DEC/SHELL.
If you do your own I/O to a pipe separate from having one handed to you by
DEC/SHELL, there is no official support.
If you are really concerned with maximum bandwidth between the processes, you
should use global sections. Global sections are far faster than either pipes
or mailboxes and offer the maximum possible interprocess communication
bandwidth. You can use the lock manager to detect broken connection conditions.
If you use a queue of buffers in global memory, you can achieve the same effect
as a pipe or mailbox but without all of the overhead of $QIO.
--PSW
|
163.29 | RMS *can* use IO$M_NOW | MDVAX3::COAR | A wretched hive of bugs and flamers. | Wed Nov 11 1987 11:19 | 13 |
| .24:
>3) A write to a mailbox does not complete until somebody reads the data, unless
> you put the IO$M_NOW option on your QIO. Unfortunately, RMS does not use
> the NOW option.
Actually, I think you can now persuade RMS to use IO$M_NOW; it has
something to do with RAB$V_TMO and RAB$B_TMO, as I recall. Check
the various VMS release notes since 4.0. Of course, this means
that your application has to check to see if the `file' is really
a mailbox, and mung the RAB if so, but the functionality IS there
- if hard to use.
#ken :-)}
|
163.30 | | PSW::WINALSKI | Paul S. Winalski | Sun Nov 15 1987 22:39 | 10 |
| TLE is now running field test version 5 of VMS, so the pipe driver that .24
tells you to copy will not work on V4 systems. If you are running VMS V4,
you can copy the pipe driver from:
PSW::SYS$SYSTEM:PIPEDRIVR.EXE
At least, this will work until I upgrade PSW to V5 (not likely for at least
a month or two).
--PSW
|
163.31 | template device protection | CADSYS::HEBERT | | Fri Dec 18 1987 09:29 | 9 |
|
I was wondering if there was a way to change the protection on the
template device pipe0:. I issue a set prot/dev and $status is success
but the protection is unchanged. Is there something I can do at
startup with sysgen to change it or is it coded into the driver
with no prospect for change?
Thanx
Chris
|
163.32 | | PASTIS::MONAHAN | I am not a free number, I am a telephone box | Fri Dec 18 1987 16:34 | 10 |
| I QARed this problem some time ago. The problem with setting
protection on a template device is that the code assigns a channel to
the device specified, sets protection on what it gets, and then
deassigns the channel.
With a template device, this creates a UCB, sets its protection,
and then destroys it.
I actually QARed it for NET0, but the QAR answer indicated that
other template devices were affected.
|
163.33 | Can't set prot on templates | PSW::WINALSKI | Paul S. Winalski | Sun Dec 20 1987 15:46 | 8 |
| Yup. It's not possible with the VMS I/O system coded as it is now to set the
protection on a template device.
RE: .31 - what is it that you were trying to accomplish by setting device
protection on PIPE0:? Perhaps there's another way to do what you are trying
to do.
--PSW
|
163.34 | pipes within cluster | CADSYS::HEBERT | | Mon Dec 21 1987 07:56 | 13 |
| RE: .33
I wanted to experiment with DCL-level interprocess communication
across nodes in a cluster. Unfortunately pipes have no access for
anyone other than the owner and it requires OPER priv to change
the protection on devices. I have a batch job which submits several
other jobs to do their work in parallel; using pipes means they
all have to be submitted on the same machine. I wanted to see if
the pipes across the cluster worked more efficiently than communicating
through files. I haven't tried SYNCHRONIZE because it didn't seem
to lend itself to waiting for more than one job.
/cah
|
163.35 | Why won't SYNCHRONIZE work? | STAR::BFISHER | Bill Fisher | Mon Dec 21 1987 13:54 | 15 |
| RE: .34
> I haven't tried SYNCHRONIZE because it didn't seem
> to lend itself to waiting for more than one job.
You can wait for more than one job using SYNCHRONIZE by waiting for each in
turn:
$SYNCHRONIZE <Job 1>
$SYNCHRONIZE <Job 2>
$SYNCHRONIZE <Job 3>
This would wait for all three jobs to complete.
/Bill Fisher
|
163.36 | need job completion status | CADSYS::HEBERT | | Mon Dec 21 1987 14:27 | 10 |
|
As I said I haven't tried it but according to the documentation, if the
specified job doesn't exist then an error message is issued. The jobs
can finish in any order, so if <job 2> finishes while I'm waiting for
<job 1> then I will get an error from SYNCH. Yes, I'll know that <job 2>
is through, but I won't know its exit status, which is the return value
from SYNCH and which I need to know. I guess I didn't mention that...
Thanx anyway,
/cah
|
163.37 | | PSW::WINALSKI | Paul S. Winalski | Tue Jan 05 1988 16:36 | 10 |
| RE: .34
You can change the protection to whatever you wish by issuing the appropriate
IO$_SETMODE QIO call. To do this from DCL, write a small program and RUN it.
Unfortunately, you're not going to be able to do what you want with pipes.
They do not operate across a cluster. Nor, for that matter, do any other
devices with the exception of MSCP-served disks.
--PSW
|