I N T E R O F F I C E M E M O R A N D U M
Date: 11-Jul-1988 03:42 CET
From: AI_INFO@AIVTX@MRGATE
To: ROACH@A1NSTC
Subject: PROLOG Digest V6.39
PROLOG Digest Friday, 8 July 1988 Volume 6 : Issue 39
Today's Topics:
Query - Parallel Systems & Data Structures,
Implementation - Disjunction & Screen Control
--------------------------------------------------------------------------------------------------------------------------
Date: 26 May 88 15:07:20 GMT
From: [email protected] (Barry S. Fagin)
Subject: Usable parallel logic programming systems
I'm trying to gather information on existing or planned implementations
of parallel logic programming systems, preferably on existing
multiprocessors. If you have any information on such a system, could
you please send me mail about it? Any pointers to documents would be
greatly appreciated. Thanks.
--Barry
------------------------------
Date: 25 May 88 08:43:43 GMT
From: [email protected] (Micha Meier)
Subject: Clause fusion (Disjunctions)
In article <[email protected]> [email protected] (Richard A. O'Keefe) writes:
>In article <[email protected]>, [email protected] (Micha Meier) writes:
>> Richard proposes that nested if-then-else's are treated at the same level,
>> which leads to confusions since then the indentation is context dependent
>> (an if-then-else inside another one cannot be indented independently).
>
>I DO NOT! I use exactly the same rule for indenting if->then;elses in
>Prolog that I use in Fortran 77, Pop, ADA, Algol 68, et cetera. Namely
>
> <IF> <condition> <THEN>
> [1 indent] <body>
> <ELIF> <condition> <THEN>
> [1 indent] <body>
> ...
> <ELSE>
> [1 indent] <body>
> <ENDIF>
>
The problem with Prolog is that any of the terms can be
a conjunction, a disjunction or an if-then-else. What about
( ( C1 ->
B1
;
B2
) ->
( C2 ->
B3
;
B4
),
B5
; B6,
B7
)
I find it hard to read when the condition is difficult
to distinguish from the rest of the code.
> ( test1 ->
> body1
> ; test2 ->
> body2
> ; /*otherwise*/
> body3
> )
Here it is different - how exactly do you indent your procedures?
This problem might seem a minor one, but shouldn't there
be at least a recommendation from the standard or from somebody
else? Prolog does not have many syntactic structures, and therefore
it is extremely important to keep to some programming style, e.g.
to use names_like_that for procedures and LikeThat for variables,
to put each goal on a separate line, etc. I've been trying to
port various external programs to Sepia, and sometimes it is rather
difficult to work out what the author really meant.
--Micha
------------------------------
Date: 27 May 88 00:28:28 GMT
From: [email protected] (Richard A. O'Keefe)
Subject: Clause fusion
In article <[email protected]>, [email protected] (Micha Meier) writes:
> The problem with Prolog is that any of the terms (in an if->then;else)
> can be a conjunction, a disjunction or an if-then-else. What about
> ( ( C1 ->
> B1
> ;
> B2
> ) ->
> ( C2 ->
> B3
> ;
> B4
> ),
> B5
> ; B6,
> B7
> )
Algol 60, Algol 68, Lisp, Pop, Bourne shell, C shell, ML, ... have
exactly the same problem. There's nothing special about Prolog in this
respect. The answer is that it isn't a problem to have another if in a
then-part or else-part, and that programmers who care about readability
don't put ifs in if-parts. The big lesson for Prolog programmers is
"don't be scared of introducing new predicates". Programmers who do not
care about readability will find obfuscatory ways despite standards.
(The famous "Indian Hill style sheet" for C has led to some of the most
unreadable C code it has ever been my misfortune to try to read.)
I basically agree with Meier's concern for readability. But I think the
layout of the Prolog code as such is not the most important aspect. It
is easy to write a reformatter (the editor I'm using to write this has one).
You can fix what is there, the trouble is what _isn't_ there. I have
recently had occasion to look at two people's programs. One of them
I repeatedly misunderstood because it was doing some very tricky things
in its control flow and had essentially no comments. The other I still
do not understand because it is doing non-obvious things with its data
structures and has essentially no comments.
Rules of thumb for comments:
(1) Describe all major data structures.
(2) Comment every control trick.
------------------------------
Subject: Data Structures
Date: Fri, 27 May 88 14:10:54 -0700
From: Russ Abbott <[email protected]>
Can anyone refer me to a Prolog (or other) system that does data structure
transformations? The general problem is to define interfaces between
pre-existing tools in an environment.
A particular example is to use P-Nut, a petri net analyzer, on a system
produced by Teamwork, a data flow diagrammer. Besides allowing one to draw
data flow diagrams, Teamwork also allows one to draw process activation tables.
For example, the following are two rows in a process activation table.
1. The first row says that if condition C1 holds and condition C4 does
not hold, processes P2 and P4 should be started. Once they
finish, process P5 should be started.
2. The second row says that if conditions C1 and C2 hold and condition
C3 does not hold, processes P1 and P3 should be activated. When P1
and P3 complete, process P2 should be activated. When it completes,
processes P1 and P3 should be reactivated. When they complete a
second time, processes P4 and P5 should be activated.
A complete process activation table would consist of any number of such rows,
each row having a unique distribution of +'s and -'s among the conditions.
C1 | C2 | C3 | C4 || P1 | P2 | P3 | P4 | P5
----+----+----+----++----+----+----+----+----
+ | | | - || | 1 | | 1 | 2
+ | + | - | || 1,3| 2 | 1,3| 4 | 4
....
The Petri net input that says the same thing looks like the following.
C1, not C4 -> row({C1, not C4},1), P2s, P4s
row({C1, not C4},1), P2f, P4f -> row({C1, not C4},2), P5s
row({C1, not C4},2), P5f -> <empty>
C1, C2, not C3 -> row({C1, C2, not C3},1), P1s, P3s
row({C1, C2, not C3},1), P1f, P3f -> row({C1, C2, not C3},2), P2s
row({C1, C2, not C3},2), P2f -> row({C1, C2, not C3},3), P1s, P3s
row({C1, C2, not C3},3), P1f, P3f -> row({C1, C2, not C3},4), P4s, P5s
row({C1, C2, not C3},4), P4f, P5f -> <empty>
P1s -> P1f
P2s -> P2f
P3s -> P3f
P4s -> P4f
P5s -> P5f
where (a) P1s, P2s, P3s, P4s, and P5s stand for the start of P1, P2, P3, P4,
and P5; (b) P1f, P2f, P3f, P4f, and P5f stand for the finish of the same
processes; and (c) row(<conditions>, <level>) stands for the row identified by
the indicated set of conditions, executing at the indicated level. Of course,
the actual row conditions are not needed to identify the row. All that is
needed is some unique identifier for each row.
So the problem is to transform the information from table form into petri net
form.
The question, though, is not how to write a program for this particular
transformation, but to develop a more general system in which transformations
of this and similar kinds can be described and implemented.
I have already found the following work.
1. IDL, an Interface Description Language by David Alex Lamb of Queen's
University. IDL was developed as part of CMU's Production Quality
Compiler Compiler project. It allows one to describe abstractly the
data structures that two system components agree on. (Since the
components agree on a data structure, this problem is not the same
as the one I'm asking about.) IDL then generates reader and writer
code for that abstract data structure in whatever concrete terms the
two components require. IDL is described in "IDL: Sharing
Intermediate Representations," TOPLAS, July 1987.
2. FORMAL, a system for transforming hierarchical databases. In FORMAL,
one provides a description of an input database along with a sketch
of the desired output structure expressed in terms of the input
structure. Basically, this is what I'm asking for, but FORMAL is
limited to hierarchical databases. This work reminded me of Query
By Example except that the output may be a (hierarchical) database
structure and not just a list of tuples satisfying the sketched
conditions. FORMAL is described in "Automatic Data Transformation
and Restructuring," by Nan C. Shu, Proc. 3rd Intl. Conf. on Data
Engr., 1987.
3. Stage, a system for generating application generators by J. Craig
Cleaveland. The application generators Stage generates are, in
effect, data transformation systems. The input to Stage is (a) a
grammar describing the input to the desired application and (b) a
sketch-like description of the desired output of the application
expressed in terms of the parse tree of the input. Stage then
generates a program to perform such transformations. Stage is
described in a forthcoming article "Building Application
Generators," in IEEE Software, July, 1988.
It is clear that the problem as posed is not well formed. That is, any
program could be described as a data structure transformer, so asking for a
system in which to describe more or less arbitrary transformations is asking
for too much.
One approach would be to limit the problem to a system for describing
transformations that operate on the structure of the data and that are not
dependent on the content. That is not satisfactory for a couple of reasons.
- For one thing, the Teamwork/P-Nut example given above does not fit
this description since the content of the data structure must be
examined at least to the extent of determining concurrency and
process sequencing.
- Even worse, since a two counter machine is Turing equivalent, a
system that deals generally with a pair of lists whose lengths (but
not content) matter is equivalent to a general purpose programming
language.
All that notwithstanding, there does seem to be some value in developing a
notation in which data structures and transformations on them can be
described--and then automatically implemented. In addition, it seems that
Prolog would be a suitable language in which to develop such a system.
As an experiment, a program has been written in C-Prolog that transforms data
transformation descriptions into corresponding data transformation programs.
The input to be transformed is assumed to be given as a set. For this example,
each set element is a pair, where the first component is a set of positive and
negative conditions and the second is a set of processes and their execution
levels. The input is stored on the file teamwork_to_pnut.
[c1/(+), c4/(-)], [p2/1, p4/1, p5/2].
[c1/(+), c2/(+), c3/(-)], [p1/1, p1/3, p2/2, p3/1, p3/3, p4/4, p5/4].
These are intended to correspond to the two rows of the process activation
table.
In specifying the output, the transformation description (see below) includes
new symbols: start(Process) and finish(Process), corresponding to Ps and Pf,
for each Process P; and row(<Conditions>, <Level>) for each row and level. The
actual (although manually formatted) output produced by the generated program
is as follows.
Output =
/* The processes */
start(p1) -> finish(p1)
start(p2) -> finish(p2)
start(p3) -> finish(p3)
start(p4) -> finish(p4)
start(p5) -> finish(p5)
/* The first row */
[c1, c2, not c3] -> [start(p1), start(p3), row([c1, c2, not c3], 1)]
[finish(p1), finish(p3), row([c1, c2, not c3], 1)] ->
[start(p2), row([c1, c2, not c3], 2)]
[finish(p2), row([c1, c2, not c3], 2)] ->
[start(p1), start(p3), row([c1, c2, not c3], 3)]
[finish(p1), finish(p3), row([c1, c2, not c3], 3)] ->
[start(p4), start(p5), row([c1, c2, not c3], 4)]
[finish(p4), finish(p5), row([c1, c2, not c3], 4)] -> []
/* The second row */
[c1, not c4] -> [start(p2), start(p4), row([c1, not c4], 1)]
[finish(p2), finish(p4), row([c1, not c4], 1)] ->
[start(p5), row([c1, not c4], 2)]
[finish(p5), row([c1, not c4], 2)] -> []
Here is the actual data transformation specification. The input file and its
structure are specified as follows.
input = file teamwork_to_pnut.
structure row = (conditions, processes).
The output is also described as a set--the set of Petri net transformations.
Four different kinds of transformations are generated. Letting + stand for set
union:
output(Input) = output1(Input) + output2(Input) +
output3(Input) + output4(Input).
The individual transformations may be described as follows.
1. Each row has a unique starting Petri Net transition. Its LHS is the
set of conditions that characterize the row. Its RHS is the set of
processes at level 1.
output1(Input) =
{conditions(Row) -> rhs(Row, 1) | Row in Input}.
where
conditions(Row) = pos_conds(Row.conditions) +
neg_conds(Row.conditions).
pos_conds(Conditions) =
{Condition | Condition/(+) in Conditions}.
neg_conds(Conditions) =
{not Condition | Condition/(-) in Conditions}.
rhs(Row, Level) =
{start(Process) | Process/Level in Row.processes} +
{row(conditions(Row), Level)}.
2. The second kind of transformation appears once for each Process.
Each Process has an associated transformation from its start to its
finish.
output2(Input) =
{start(Process) -> finish(Process) |
Row in Input,
Process/_ in Row.processes}.
where
lhs(Row, Level) =
{finish(Process) | Process/Level in Row.processes} +
{row(conditions(Row), Level)}.
3. The third kind of transformation links the termination of Processes
at one level to the start of Processes at the next level.
output3(Input) =
{lhs(Row, Level) -> rhs(Row, Level+1) |
Row in Input,
_/Level in Row.processes,
_/(Level+1) in Row.processes}.
4. The fourth kind of transformation terminates the processing of the
rows. There is one for each row.
output4(Input) =
{lhs(Row, Level) -> [] | Row in Input,
_/Level in Row.processes,
_/(Level+1) not in Row.processes}.
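To give a flavour of what the generated program amounts to, here is a small
hand-written sketch of output1 in ordinary Edinburgh-style Prolog. It is not
the output of the C-Prolog generator; the predicate names output1/2,
conditions/2 and level_rhs/4, and the use of findall/3, member/2 and
append/3, are just assumptions for the illustration.
    % output1(+Rows, -Transitions)
    % Rows is a list of row(CondPairs, ProcPairs) terms, where CondPairs is a
    % list of Condition/Sign pairs and ProcPairs a list of Process/Level pairs.
    output1(Rows, Transitions) :-
        findall(Conds -> Rhs,
                ( member(row(CondPairs, ProcPairs), Rows),
                  conditions(CondPairs, Conds),
                  level_rhs(Conds, ProcPairs, 1, Rhs) ),
                Transitions).
    % conditions(+CondPairs, -MarkedConditions)
    conditions([], []).
    conditions([C/(+)|Rest], [C|More])      :- conditions(Rest, More).
    conditions([C/(-)|Rest], [not(C)|More]) :- conditions(Rest, More).
    % level_rhs(+Conds, +ProcPairs, +Level, -Rhs): start every process at
    % Level and add the row marker for that level.
    level_rhs(Conds, ProcPairs, Level, Rhs) :-
        findall(start(P), member(P/Level, ProcPairs), Starts),
        append(Starts, [row(Conds, Level)], Rhs).
For the first row of the example this gives
    ?- output1([row([c1/(+), c4/(-)], [p2/1, p4/1, p5/2])], Ts).
    Ts = [[c1, not(c4)] -> [start(p2), start(p4), row([c1, not(c4)], 1)]]
and the other three kinds of transformation can be written in the same style.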
The preceding does the required transformations, but it has limitations.
- The transformations are done abstractly. In practice, the concrete
representations of the input and output must be dealt with also.
- The generated program is not optimized.
- In this example, sets suffice. For some other problem some data
structure other than sets may be required. But since any data
structure can be mapped onto relations, perhaps this is all we need.
All in all, it doesn't seem like a bad start. Does anyone know of existing
work along these or similar lines?
Thanks,
-- Russ Abbott
------------------------------
Date: 26 May 88 19:18:06 GMT
From: [email protected] (D.BRODERICK)
Subject: screen control
If your version of Prolog does not have screen control predicates,
but you are familiar with the Unix(tm) tput(1) command
(or willing to become so), there is a quick and dirty way to
generate some character based screen control predicates.
tput(1) makes use of the terminfo database to generate terminal
specific escape sequences to control the screen. For example,
the following, invoked from the shell, will print out "hello"
in standout mode:
tput smso; echo hello; tput rmso
As you can see, the argument names are intuitively obvious(?).
To use this information from Prolog, write a shell script
called "get_tput" as follows:
--------------------------------------
echo "tput('$2',$1,'`tput -T$1 $2`')."
--------------------------------------
This is invoked, for example, as:
get_tput vt100 clear >> tput.pl
It is easy enough to use a for-loop to generate a prolog
terminfo database for all the tput arguments/terminals you want.
For an interesting sight, cat the file. ("tput sgr0" sets to normal)
To use on a PC running an ansi.sys driver, generate a database
for ansi and download.
To use: consult('tput.pl'), then call, e.g., tput(clear), where tput/1 is
defined as:
tput(Cmd) :- tput(Cmd,_,EscSeq), atomic(EscSeq), write(EscSeq).
tput(Cmd) :- tput(Cmd,_,[Esc|Seq]), print_list([Esc|Seq]).
tput(Cmd) :- not tput(Cmd,_,_).
This assumes you have loaded info only for your current terminal.
The reason for the middle clause is that shell tput commands that
take more than one parameter return a sequence that needs further
parsing. I have not written a general parser, but have done some
of these by hand. In what follows, what looks like ^ followed by [
is actually a real Escape char (in case you retype this).
% cup - set cursor position. this works for vt100 and ansi
tput(cup(Row,Col),att5425,['[',R1,';',C1,'H']) :- R1 is Row+1, C1 is Col+1.
% csr - change scroll region
tput(csr(Top,Bottom),att5425,['[',T1,';',B1,r]) :-
T1 is Top+1, B1 is Bottom+1.
% pln - set system function key Not on many terminals
tput(pln(Key,Label,Command),att5425,
['[',Key,';',Len,';0;0q',Label16,Command]) :-
name(Command,CAscii),
length(CAscii,Len),
pad_atom(16,Label,Label16).
tput(user,att5425,'}'). % switch to user function keys
tput(system,att5425,'~'). % switch to system function keys
% tsl - write to status line
tput(tsl(Col),att5425,['7[25;',Col1,'H']) :- Col1 is Col + 8.
% auxiliary predicates
print_list([]).
print_list([X|Xs]) :- write(X), print_list(Xs).
pad_atom(Num,Atom,Padded) :-
name(Atom,AAscii),
length(AAscii,Len),
Pads is Num - Len,
pad_blanks(Pads,BString),
append(AAscii,BString,PAscii),
name(Padded,PAscii).
pad_blanks(0,[]).
pad_blanks(Num,[32|Blanks]) :-
Num > 0,
Num1 is Num-1,
pad_blanks(Num1,Blanks).
% append/3 as usual
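% For completeness, in case your Prolog does not already provide it:
append([], Ys, Ys).
append([X|Xs], Ys, [X|Zs]) :- append(Xs, Ys, Zs).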
------------------------------
Date: 29 May 88 04:25:44 GMT
From: [email protected] (Richard A. O'Keefe)
Subject: Screen Control
In article <[email protected]>, [email protected] (D.BRODERICK) writes:
> If your version of Prolog does not have screen control predicates,
> but you are familiar with the Unix(tm) tput(1) command
It's worth pointing out that
(a) tput is a System V feature, not present in most BSDs (but SunOS has it).
(b) There are some non-trivial differences between V.2 tput and V.3 tput
(as I found the hard way when some scripts I wrote on a V.3 system
didn't work on a V.2 system.)
(c) Some of the things tput returns are numbers, for example
tput lines
prints the number of lines on the screen. Suppose that is 24.
The "get_tput" script provided by D.BRODERICK will write this as '24',
which is an atom. You may want to have another script which writes
things unquoted so that you can access such terminal properties.
(d) Some of the things tput reports are "boolean", for example
tput hc # is it a hard-copy terminal?
always prints nothing. Instead the answer is to be found in the exit
code (0 means yes, non-zero means no). You may want a third script
if tput -T$1 $2 ; then
echo "tput($2, '$1', true)."
else
echo "tput($2, '$1', false)."
fi
for use with boolean capabilities.
(e) tput writes things out verbatim, without escape characters. Some
Prolog systems may discard CRs or do other odd things with strange
characters. (Quintus Prolog is safe, but watch out for terminal
capabilities containing apostrophes.)
If the Prolog dialect you are using has an interface to C (as many have)
you would be better off writing an interface to 'curses'; since 'curses'
is available for IBM PCs and VAX/VMS as well as for UNIX this may be more
portable than using tput. In particular, if you use curses, you don't
have to figure out how to parse the 'cup' capability (rather hairy).
Then too, I for one would rather hide the rather bizarre capability
names (quickly now: what does eslok do?) from my Prolog code.
------------------------------
End of PROLOG Digest

I N T E R O F F I C E M E M O R A N D U M
Date: 11-Jul-1988 06:24 CET
From: AI_INFO@AIVTX@MRGATE
To: ROACH@A1NSTC
Subject: PROLOG Digest V6.40
PROLOG Digest Saturday, 9 July 1988 Volume 6 : Issue 40
Today's Topics:
Announcement - Logix & NU,
Implementation - Objects
-------------------------------------------------------------------------------------------------------------------------------
From: <udi%[email protected]>
Date: Tue, 31 May 88 12:47:25 +0200
Subject: The Logix system version 2.0 release 7
Version 2.0 of the Logix system, release 7, is available. Licensed
users can obtain the new release by sending a 1/2" tape or a 1/4" cartridge to:
Mr Yossef Dabby
Dept. of Applied Math and Computer Science
The Weizmann Institute of Science
Rehovot 76100
Israel
Please include a letter explaining that you are a licensed Logix user and
wish to get an updated release.
Unlicensed users can send to that address a request for a copy of
the license agreement, or send e-mail to me for an electronic copy.
There is a $250 handling fee for new licensees. Logix is available
for Sun2, Sun3, and for VAX and CCI computers running BSD/4.2 and up.
Logix is a single-user multi-tasking concurrent logic programming
environment, developed as a group effort at the Weizmann Institute of
Science. It runs as a single Unix process. Its low-end (abstract
machine, about 7000 lines of source code) is written in C; its
high-end (everything else, more than 10,000 lines of source code) is
written in Flat Concurrent Prolog (FCP). The model of computation of
FCP is based on dynamic light-weight (some say feather-weight)
nondeterministic processes, communicating by binding logical
variables, and synchronizing by waiting for such bindings. Normal
computations generate thousands of concurrent processes; Logix, when idle,
consists of about 1000 suspended processes.
Logix has been heavily used at the Weizmann Institute and other places
for the past two years for various applications, including its own
development, compiler development, parallel and distributed algorithm
development, embedded-language implementation, hardware simulation,
etc. An account of its applications can be found in ``Concurrent
Prolog: Collected Papers'', E. Shapiro (ed.), MIT Press, 1987.
Since Logix's first release, the following improvements and enhancements
were made:
1. Embedded languages
The system supports various embedded languages, including:
- FCP(|,:,?), a superset of Oc and of Flat GHC (and hence supports
also Flat GHC).
- Typed FCP, a typed variant of FCP, with an accompanying type checker.
- The implicit variables (also called framed logic programs)
notation.
2. System support
- Hierarchical module system, integrated with the Unix hierarchical
file system.
- Concurrent interactive debugger (not an algorithmic one yet,
though one is under development).
- Support for syntax extension and user-defined embedded languages
- Support for meta-programming
- Multiple execution modes: trust, failsafe, interrupt,
interpret.
- Support for multiple virtual machines (to be of real use only
in future multiprocessor versions of Logix).
- Nested, interactive computation control.
- `lint' like analysis of common errors.
3. Emulator support
- Foreign kernels/ C interface/ Unix interface/ Native code interface
- Double-precision floating point arithmetic.
- Freeze/melt primitives.
- Speed has almost doubled since first release. On a Sun 3/50 it
runs the standard concurrent naive reverse benchmark at 5K process
reductions per second. It creates processes at a peak rate of
1500 a second, and can maintain up to 50,000 (fifty
thousand) concurrent processes in a 4MB heap.
There is also an unsupported implementation of FCP for the iPSC/I Hypercube,
which does not include the Logix development environment.
If you want more info on FCP for the iPSC/I let me know.
-- Ehud Shapiro
-----------------------------
Date: 6 Jun 88 05:08:06 GMT
From: [email protected] (Jeff Schultz)
Subject: New Release of NU-Prolog System
Version 1.3 of the NU-Prolog system is now available for release to
academic institutions (schools, colleges, universities). Commercial
licences are also available for some machines.
This release includes a much improved interpreter and debugger, both
of which now understand when declarations; floating point arithmetic;
an interface to foreign functions on many machines; and the usual
large collection of bug fixes. Performance has been increased by 10
to 20 per cent for many applications.
NU-Prolog is a second generation Prolog system which incorporates a
number of important advances in Logic Programming implementation.
NU-Prolog was implemented as part of the Machine Intelligence Project+
in the Department of Computer Science at the University of Melbourne.
It is the successor to Lee Naish's successful MU-Prolog system and
attempts to move Prolog closer to the ideals of Logic Programming by
allowing the user to program in a style closer to first order logic.
In addition, it provides substantial performance gains over interpreted
systems such as MU-Prolog.
NU-Prolog has the following features:
* compiles Prolog programs into machine code for an enhanced version
of the Warren abstract machine (implementing the delay/coroutine
style of programming of MU-Prolog)
* incorporates a database system based on superimposed codeword
indexing which can store general Prolog terms in external databases
for fast retrieval by NU-Prolog programs; the database system
makes use of the superjoin algorithm to perform efficient join
operations
* uses "when" declarations (the successor to MU-Prolog's "wait") to
control the execution of NU-Prolog programs according to the
availability of data
* implements a large set of built-in predicates, including many Quintus
Prolog predicates; most DEC-10/Edinburgh/MU-Prolog library predicates
are available through compatibility libraries
The NU-Prolog system contains the following major components:
* "nc", the NU-Prolog compiler
* "np", a simple interpreter-style interface which implements the
standard Edinburgh Prolog style debugging facilities and has a
sophisticated query language for accessing external database
predicates
* "nac", a program for adding control information to NU-Prolog programs
written in a purely logical style
* "nit", a program for reporting common errors in NU-Prolog programs
(cf. Unix/C's "lint")
NU-Prolog runs under Unix System V and Berkeley BSD Unix 4.?. It has
been implemented on a large number of machines including the following:
Sun 3 and 4, Vax, Encore, Elxsi, Perkin Elmer, Pyramid, Integrated
Solutions Workstations. For academic licences, the system comes
complete with a manual and all source code. The preferred distribution
medium is 1/2" tape, Unix tar-format at 1600bpi. Other distribution
media may be available on request. There is a A$400.00 fee to cover
distribution costs.
In order to obtain a copy of the system, you must first complete a
licence agreement with the University of Melbourne. Licences can be
obtained by contacting:
NU-Prolog Distribution
Department of Computer Science
University of Melbourne
Parkville, Victoria, 3052
AUSTRALIA
or
[email protected]
------------------------------
Date: 2 Jun 88 19:47:05 GMT
From: [email protected] (Mark Johnson)
Organization: Center for the Study of Language and Information, Stanford U.
Subject: Re: oops
/*
I saw a message on the bboard a while ago about object-oriented
programming in Prolog. After a few minutes thought, it struck me
that the main question about OOPS is inheritance; specifically,
who inherits what from whom.
Anyway, below is a fragment in which the object hierarchy
is represented by an is_a graph. Inheritance is defined
with respect to specific properties; it is assumed that
a daughter inherits a property from its mother unless explicitly
marked as not doing so. Tweety inherits all the properties
of birds except that of flying, for example.
As it's built, using the system involves first finding an
ancestor from whom a given node can inherit the relevant
property, and then calling the property on the ancestor. However,
it would be easy to combine the two steps using meta-level
operations or (better) input preprocessing, I think.
This system differs from standard OOPS in two ways. First, *all*
accessible superordinate nodes are considered, rather than the
"first" one reached via some particular search strategy. Second,
inheritance is not "blocked" just because a predicate is defined
for a particular object.
Even though it would be simple to build a system with the
"standard OOPS" behaviour, it's not clear to me that one
should. I suspect that the major reason that "standard OOPS"
have the behaviour that they do is because they are based on
functional rather than relational programming systems in which
there is no sensible way to compose "properties" inherited
from different ancestors.
Anyway, this is only a first pass on the subject. I'd like
to hear any reactions you might have,
Mark Johnson
[email protected]
*/
/* A toy object-oriented system */
:- ensure_loaded(library(not)).
:- no_style_check(discontiguous).
:- op( 100, xfx, is_a ).
:- op( 100, xfx, doesnt_inherit ).
/* object's definitions */
tweety is_a bird. % tweety's defs
tweety doesnt_inherit walking. % tweety's gotta walk
tweety doesnt_inherit flying. % 'cause he can't fly!
walks(tweety).
polly is_a bird. % polly's defs
bird is_a animal. % bird's defs
bird doesnt_inherit flying.
bird doesnt_inherit walking. % why walk when
flies(bird). % you can fly?
eats( bird, seeds ).
lion is_a animal. % lion's defs
lion doesnt_inherit food. % lion food is special!
eats( lion, meat ).
walks(animal). % animal's defs
eats( animal, vegetables). % Most animals eat vegetables
/* inherits(Obj,Prop,Super) iff Obj inherits Prop from Super. */
inherits(Obj, _, Obj). % All objects inherit from themselves.
inherits(Obj, Prop, Super ) :-
not Obj doesnt_inherit Prop,
is_a( Obj, Parent),
inherits( Parent, Prop, Super ).
can_fly(X) :-
inherits( X, flying, Super),
flies( Super ).
can_walk(X) :-
inherits( X, walking, Super),
walks( Super ).
can_eat(X,Food) :-
inherits( X, food, Super),
eats( Super, Food).
/* Sample run.
| ?- can_walk(polly).
no
| ?- can_fly(tweety).
no
| ?- can_eat(polly,Food).
Food = seeds ;
Food = vegetables ;
no
| ?- can_walk(lion).
yes
| ?- can_eat(lion,Food).
Food = meat ;
no
*/
------------------------------
Date: 7 Jun 88 20:46:13 GMT
From: [email protected] (Juergen Wagner)
Subject: Re: Object-oriented system in Prolog
Some weeks ago, Mark Johnson ([email protected]) posted a short
Prolog program which was supposed to handle inheritance nets. In my
opinion, the program oversimplifies inheritance by just
o specifying properties of graph nodes,
o disallowing the inheritance of certain properties, and
o walking through in depth-first.
A *REAL* object-oriented system should at least be able to handle
examples like the following correctly (using the notation of Mark's
program):
truck is_a object.
toy is_a object.
toytruck is_a toy.
toytruck is_a truck.
tt is_a toytruck.
purpose(object, nil).
purpose(truck, transport).
find_purpose(X,V) :-
inherits(X, purpose, Super),
purpose(Super, V).
For the query
find_purpose(tt, X)
this will generate the solutions
nil
transport
nil
which reveals the deficiency of such an algorithm. It is desirable to
get the solutions
transport
nil
only, because the "transport" property is defined in a more
specialized super-class of "toytruck" than "nil" is. Therefore, we need
some strategy that traverses a node only when all its relevant daughters
have already been searched.
The following implements such a strategy (C-Prolog 1.5).
o_recorded(Key, Form, Ref)
test whether something has been recorded in the database.
This can be replaced by a call to `recorded'.
o_record(Key, Form, Ref)
record something in the database. This can be replaced by
a call to `recordz'.
set_slot(Object, Slot, Value)
set a slot value of an object
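One minimal way to supply these, assuming C-Prolog's recorded/3, recordz/3
and erase/1; the clause for set_slot is only a guessed placeholder, since its
actual representation is not shown here:
o_recorded(Key, Form, Ref) :- recorded(Key, Form, Ref).
o_record(Key, Form, Ref)   :- recordz(Key, Form, Ref).
% Hypothetical slot store: at most one slot(Slot, Value) term per object key.
set_slot(Object, Slot, Value) :-
    ( recorded(Object, slot(Slot, _), Old), erase(Old) ; true ),
    recordz(Object, slot(Slot, Value), _), !.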
%
% Compute Search Path (Breadth-First)
%
o_compute_search_path(Class,Path) :-
% Get list of superclasses
o_recorded(Class,superclasses(Super),_),
% Mark all superclasses as active
o_enter_all_super(Super),
% Compute the actual search path
o_compute_path([Class],Path),
% Enter the complete search path into the object
( o__class(Class);
asserta(o__class(Class)) ),
set_slot(Class,all_superclasses,[Path]), !.
% Recursively enter all superclasses into the table.
o_enter_all_super([]).
o_enter_all_super([C|Rest]) :-
( o_recorded(C,active,_), !;
o_record(C,active,_) ),
o_recorded(C,superclasses(Super),_),
o_enter_all_super(Super),
o_enter_all_super(Rest).
% Basically, do a breadth-first, but superclasses are removed from
% the agenda only if their subclasses have been searched completely.
o_compute_path(Old,Result) :-
o_expand_path(Old,Temp),
o_remove_lock(Temp),
( Temp = [], Result = Old;
append(Old,New,Result),
o_compute_path(Temp,New) ),
!.
% Expand the path by considering the next candidates
o_expand_path([],[]).
o_expand_path([C|Rest],Path) :-
o_recorded(C,superclasses(Super),_),
o_filter_super(Super,Path1),
append(Path1,Path2,Path), !,
o_expand_path(Rest,Path2).
o_filter_super([],[]).
o_filter_super([C|Rest],[C|Tail]) :-
o_recorded(C,active,Ref),
erase(Ref),
o_record(C,pending,_),
o_recorded(C,subclasses(Sub),_), !,
o_known_subclasses(Sub),
o_filter_super(Rest,Tail).
o_filter_super([C|Rest],Tail) :-
o_filter_super(Rest,Tail).
o_known_subclasses([]).
o_known_subclasses([C|Rest]) :-
not(o_recorded(C,active,_)),
not(o_recorded(C,pending,_)).
o_remove_lock([]).
o_remove_lock([X|_]) :-
o_recorded(X,pending,Ref),
erase(Ref),
fail.
o_remove_lock([_|Rest]) :-
o_remove_lock(Rest).
-- Juergen "Gandalf" Wagner
------------------------------
Date: 13 Jun 88 03:41:53 GMT
From: [email protected] (Mark Johnson)
Subject: Re: Object-oriented system in Prolog
/*
Finally a comment on my toy OOPS!!!
> From: [email protected] (Juergen Wagner)
> ... the program oversimplifies inheritance by just
> o specifying properties of graph nodes,
> o disallowing the inheritance of certain properties, and
> o walking through in depth-first.
> A *REAL* object-oriented system should at least be able to handle
> examples like the following correctly...
>
> truck is_a object.
> toy is_a object.
>
> toytruck is_a toy.
> toytruck is_a truck.
>
> tt is_a toytruck.
>
> purpose(object, nil).
> purpose(truck, transport).
>
> find_purpose(X,V) :-
> inherits(X, purpose, Super),
> purpose(Super, V).
>
> For the query
> find_purpose(tt, X)
> this will generate the solutions
> nil
> transport
> nil
> which reveals the deficiency of such an algorithm. It is desirable to
> get the solutions
> transport
> nil
> only, because the "transport" property is defined in a more
> specialized super-class of "toytruck" than "nil" is. Therefore, we need
> some strategy that traverses a node only when all its relevant daughters
> have already been searched.
... a bunch of code deleted...
Hmm. Declaratively speaking, the solutions { nil, transport, nil }
and { transport, nil } are the same set of solutions.
But I think the criticism is valid - particularly since the redundancies
are fairly easily avoided by defining find_purpose as follows:
find_purpose(X,V) :-
setof( S, inherits(X, purpose, S), Supers),
member( Super, Supers ),
purpose( Super, V ).
If one were dealing with a cyclic inheritance "hierarchy", then
the 'inherits' predicate would have to be augmented so that it never
processes the same node twice, e.g. by maintaining a list of the
nodes already visited (see the sketch below). Also, the "setof" solution
may be inefficient on large hierarchies, but that's beside the point...
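A minimal cycle-safe sketch, in the same style as the code below and assuming
member/2 from library(basics):
inherits(Obj, Prop, Super) :-               % cycle-safe entry point
    inherits(Obj, Prop, Super, [Obj]).
inherits(Obj, _, Obj, _).                   % objects inherit from themselves
inherits(Obj, Prop, Super, Visited) :-
    not Obj doesnt_inherit Prop,
    is_a(Obj, Parent),
    \+ member(Parent, Visited),             % never process a node twice
    inherits(Parent, Prop, Super, [Parent|Visited]).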
The notion of "more specialized superclass" doesn't strike me as
being well-defined declaratively, unless one wants to say that
a node has an ordered list of parents or something. Again, my
question is not so much whether one can code such a thing up in
Prolog (which one certainly can do without using recordz), but
how one might build an OOPS facility in Prolog that is reasonably
consistent with the declarative style of good Prolog programming.
I don't think that slot-filler style fits this requirement, for
example.
Any comments?
-- Mark Johnson
/* A toy object-oriented system, modified as per Gandalf's suggestion */
:- ensure_loaded(library(basics)).
:- ensure_loaded(library(not)).
:- no_style_check(discontiguous).
:- op( 100, xfx, is_a ).
:- op( 100, xfx, doesnt_inherit ).
/* object's definitions */
truck is_a object.
toy is_a object.
toytruck is_a toy.
toytruck is_a truck.
tt is_a toytruck.
purpose(object, nil).
purpose(truck, transport).
/* inherits(Obj,Prop,Super) iff Obj inherits Prop from Super. */
/* as in previous version */
inherits(Obj, _, Obj). % All objects inherit from themselves.
inherits(Obj, Prop, Super ) :-
not Obj doesnt_inherit Prop,
is_a( Obj, Parent),
inherits( Parent, Prop, Super ).
find_purpose(X,V) :- % modified in response to Gandalf's suggestion
setof( S, inherits(X, purpose, S), Supers),
member( Super, Supers ),
purpose( Super, V ).
------------------------------
End of PROLOG Digest

I N T E R O F F I C E M E M O R A N D U M
Date: 11-Jul-1988 08:30 CET
From: AI_INFO@AIVTX@MRGATE
To: ROACH@A1NSTC
Subject: PROLOG Digest V6.42
PROLOG Digest Sunday, 10 July 1988 Volume 6 : Issue 42
Today's Topics:
Query - Call/1,
Implementation - KR,
LP Library - Book Review
-----------------------------------------------------------------------------------------------------------------------------
Date: 7 Jun 88 02:31:00 GMT
From: [email protected]
Subject: Call/1 and goal processing
I am in the process of writing an Or-parallel interpreter for Prolog,
and have run across a difficulty. The question I wonder about is how
other interpreters that use a structure-shared representation go about
implementing call/1 in an efficient manner without unduly penalizing
ordinary goal processing. It seems that if one allows the interpretation
of dynamically constructed goals (i.e. those constructed via functor/3,
arg/3, and univ (=..)), then one must cope with the possibility of
variable dereferencing during the selection of a goal to process. Does
anyone out there know how it is done in other interpreters (C-Prolog,
SB-Prolog, etc.)?
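For concreteness, the kind of dynamically constructed goal I have in mind is
simply something like
    ?- Goal =.. [append, [1,2], [3], Xs], call(Goal).
    Xs = [1,2,3]
where the term handed to call/1 may itself still contain unbound variables
that must be dereferenced before a clause can even be selected.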
-- David
------------------------------
Date: 15 Jun 88 19:41:11 GMT
From: [email protected]
Subject: Knowledge representation and Prolog
This is a forwarded message.
In the future, I suggest that instead of sending things to me for
forwarding, people should send mail to the Prolog Digest at
[email protected]
FORWARDED MESSAGE:
Does anyone know of any KRL written in Prolog besides APES? Has anyone any
comments on attempts to mimic KEE or ART type systems in Prolog?
Jerry Harper: Merrion Gates Software, Rosemount Court, Booterstown Avenue,
: Co Dublin, IRELAND
[email protected]
------------------------------
Date: 16 Jun 88 18:13:38 GMT
From: [email protected] (Tim Finin)
Subject: Knowledge representation and Prolog
We here at the Unisys Paoli Research Center have been using KNET, a
Knowledge Representation system implemented in Prolog, on a number of AI
projects since about 1982. Attached is a brief description of KNET.
More details can be found in an article which will appear in a
forthcoming issue of the Journal of Logic Programming. Tim
KNET is a frame-based knowledge representation system developed at the
Unisys Paoli Research Center to support the acquisition and
maintenance of large, real-world knowledge bases. The KNET system is
currently being used in a rule-based diagnosis and maintenance system
(KSTAMP), in a model-driven configuration expert system (BEACON) and
in our natural language processing system (PUNDIT).
The KNET representation language is similar to KL-ONE in that it is
based on a formal semantic model which defines the meaning of objects
in its knowledge bases. Objects in the knowledge base are called
concepts, each of which has a unique name. A KNET knowledge structure
consists of a number of concepts organized into two distinct data
structures, called hierarchies. Each hierarchy has its own structure,
but the two are related in a complex manner.
The first hierarchy is the specialization hierarchy. Concept B is
said to specialize concept A if B is a kind of A. There is a single
designated root concept, and all concepts participate in the
specialization hierarchy. It is essentially the IS-A hierarchy used
as a basis of conventional semantic networks. It is used so that
components (see below) may be described at a single concept and
inherited by all the children of that concept. The specialization
hierarchy is more general than the conventional IS-A network in that
it is possible for a concept to specialize more than one other
concept, thus inheriting properties from all its parents. This
feature is termed multiple inheritance.
The second hierarchy is the aggregation hierarchy. In this hierarchy,
the children of a concept are its components, and taken as an
aggregate they completely define that concept. In some cases these
children are not components in a literal sense, but are properties, as
for example the wall voltage required by a device may be a property of
that device. A link in this hierarchy consists of an object called a
role which names the component, together with a connection to the
owner of the role and another connection to the type of the role
(which is another concept).
The final constituent of a KNET structure is the constraint. A
constraint is a piece of executable code attached to the network. The
code is available for use by a program using KNET (an application);
whether, when, and how it is used is up to the application. A
constraint is housed at some particular concept, and refers to one or
more roles. Exactly one of the roles referred to by a constraint is
designated as the trigger of the constraint; the remaining roles are
the targets of the constraint. The purpose of the trigger is to
provide a means for indicating within the KNET structure when an
application program should use a constraint. The meaning of a
constraint is always defined relative to the context in which the
constraint occurs. This means that references to roles made from
within an inherited constraint always refer to the local context
rather than to the context in which the constraint was originally
defined.
Finally, it is important to maintain consistency in knowledge
networks. The definition of consistency varies for differing kinds of
knowledge representation, and depends on the semantic model
implemented by the knowledge representation. For KNET, a fundamental
consistency requirement is the subsumption requirement, defined as
follows: If concept A has a role R with type B, and if concept A2
specializes A, then the type of the inherited role R2 must be B or a
specialization of (a specialization of ...) B. If this requirement is
not met, a subsumption violation is said to occur. The program which
builds KNET structures, the Browser/Editor, automatically checks for
and disallows subsumption violations and several other types of
inconsistencies.
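As a purely illustrative sketch (this is not KNET code or its actual
representation; the facts specializes(Sub, Super) and role(Concept, Role,
Type) are assumed only for the example), the requirement can be stated in
Prolog roughly as:
    % subsumes_concept(A, B): B is A or a specialization of
    % (a specialization of ...) A.
    subsumes_concept(A, A).
    subsumes_concept(A, B) :-
        specializes(B, C),
        subsumes_concept(A, C).
    % A subsumption violation: A2 specializes A, A gives role R the type B,
    % but the type A2 gives to the inherited role R is not subsumed by B.
    subsumption_violation(A2, R) :-
        specializes(A2, A),
        role(A, R, B),
        role(A2, R, B2),
        \+ subsumes_concept(B, B2).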
KNET has been implemented in a standard subset of Prolog which has
allowed it to be ported to several different Prolog systems on a
variety of machines. Versions of KNET run on VAXes, Suns, Lisp
machines and PCs. An interactive browser/editor system for KNET
knowledge bases can use a simple ASCII terminal interface, enabling
its use on each of these machines. A more sophisticated graphical
browser/editor is available for use on Sun workstations.
-- Tim Finin
------------------------------
Date: 17 Jun 88 02:44:14 GMT
From: [email protected] (Steve Hardy)
Subject: Knowledge representation and Prolog
Jerry Harper asks about knowledge representation languages
written in Prolog (besides APES.)
Teknowledge developed a Prolog-based expert system shell called
M.1. It was released in June 1984 and has sold close to 4,000
copies.
M.1 is unusual in that it is a complete logic programming
language as well as being an easy-to-use expert system shell.
For example:
/* --- simple EMYCIN-like heuristic rule --- */
if main-component-of-meal = beef
then best-color-of-wine = red cf 75.
/* --- list processing --- */
infix <>. /* for "append" */
[] <> LIST = LIST.
if LIST1 <> LIST2 = LIST3
then [ITEM|LIST1] <> LIST2 = [ITEM|LIST3].
/* --- objects and inheritance --- */
issquare(obj-22).
size(obj-22) = 5.
if isrectangle(X) and
height(X) = H and width(X) = W and H * W = R
then area(X) = R.
if issquare(X) then isrectangle(X).
if issquare(X) and size(X) = S then height(X) = S.
if issquare(X) and size(X) = S then width(X) = S.
After releasing four versions of M.1 in Prolog, Teknowledge
recoded the system in C. This led to the system being
approximately five times faster and able to handle knowledge
systems five times larger (up to 2000 rules on a 640K IBM-PC.)
It was easier to work out the design of M.1 with Prolog than it
would have been with C.
-- Steve Hardy
------------------------------
Date: 1 Jul 88 01:15:52 GMT
From: [email protected] (Richard A. O'Keefe)
Subject: An example from "Knowledge Systems & Prolog"
Yesterday I bought a copy of
[A Logical Approach to Expert Systems and Natural Language Processing]
Knowledge Systems and Prolog
Adrian Walker, Michael McCord, John Sowa, & Walter Wilson
Addison-Wesley 1987
ISBN 0-201-09044-9
I haven't had time to read much of it yet.
There's some rather good advice in it, and it is easily the most
powerful argument I have ever seen for Edinburgh syntax.
For now I'd just like to comment on one particular example, found
on page 413. I'll transliterate it into Edinburgh syntax.
% most_general_terms(Terms, GeneralTerms)
% is true when GeneralTerms is the set of terms in Terms which are
% subsumed by no other term in Terms.
most_general_terms(Terms, GeneralTerms) :-
qsort(Terms, TermsInOrder),
most_general_terms_in_order(TermsInOrder, GeneralTerms).
% most_general_terms_in_order(TermsInOrder, GeneralTerms)
% is just like most_general_terms/2, except that less general terms
% precede more general terms in TermsInOrder (that is, if X and Y
% are both in TermsInOrder and X subsumes Y, X must follow Y). The
% order of terms in the result is the same as the order in the input.
most_general_terms_in_order([], []).
most_general_terms_in_order([Term|Terms], GeneralTerms) :-
member(MoreGeneral, Terms),
subsumes(MoreGeneral, Term),
!,
most_general_terms_in_order(Terms, GeneralTerms).
most_general_terms_in_order([Term|Terms], [Term|GeneralTerms]) :-
most_general_terms_in_order(Terms, GeneralTerms).
% This version of quicksort is supposed to order its input so that
% if X and Y are both given and X subsumes Y, X will follow Y in
% the output.
qsort(Terms, TermsInOrder) :-
qsort(Terms, TermsInOrder, []).
qsort([], L, L).
qsort([Term|Terms], L0, L) :-
partition(Terms, Term, Before, After),
qsort(Before, L0, [Term|L1]),
qsort(After, L1, L).
partition([], _, [], []).
partition([Term|Terms], Pivot, Before, [Term|After]) :-
subsumes(Term, Pivot),
!,
partition(Terms, Pivot, Before, After).
partition([Term|Terms], Pivot, [Term|Before], After) :-
partition(Terms, Pivot, Before, After).
%---- end of first version
Now, my antipathy to (falsely so-called) "quicksort" is well known by now.
But in this case its quadratic worst case hardly matters, because if there
are N terms initially, most_general_terms_in_order/2 will do O(N**2) calls
to subsumes/2 anyway. So we might as well do
most_general_terms(Terms, GeneralTerms) :-
most_general_terms(Terms, [], GeneralTerms).
% At each call of most_general_terms(T, G, X)
% there is a list L such that append(L, T, Terms) and
% most_general_terms(L, G).
most_general_terms([], GeneralTerms, GeneralTerms).
most_general_terms([Term|Terms], GeneralTerms0, GeneralTerms) :-
knock_out(GeneralTerms0, Term, GeneralTerms1),
most_general_terms(Terms, GeneralTerms1, GeneralTerms).
knock_out([], Pivot, [Pivot]).
knock_out([Term|Terms], Pivot, GeneralTerms) :-
subsumes(Pivot, Term),
!,
knock_out(Terms, Pivot, GeneralTerms).
knock_out([Term|Terms], Pivot, [Term|Terms]) :-
subsumes(Term, Pivot),
!.
knock_out([Term|Terms], Pivot, [Term|GeneralTerms]) :-
knock_out(Terms, Pivot, GeneralTerms).
%---- end of second version
This has the nice property that the order of terms in GeneralTerms is
exactly the same as the order of terms in the original Terms list,
apart from the deletion of subsumed terms. It does at most N(N-1)
calls to subsumes/2, and while the version of Walker et alia does
at most (1/2)N(N-1) such calls in most_general_terms_in_order/2,
it may do a similar number in partition/4. Indeed, because subsumes/2
is a partial order (not a total order), it is likely to fail rather more
often than it succeeds, so partition/4 is likely to put most of its
first argument into the Before list, and quadratic behaviour is very
likely. (In fact, whenever subsumes(Term, Pivot) succeeds, that tells
us that Pivot will not be in the final result, so we might as well drop
it.)
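(If you want to run either version: subsumes/2 is not defined above. One
common definition, assuming the two terms share no variables and that
numbervars/3 is available, is
    subsumes(General, Specific) :-
        \+ (  numbervars(Specific, 0, _),
              \+ General = Specific
           ).
which grounds Specific, checks that General unifies with the result, and
lets the outer negation undo all bindings.)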
Actual measurements show that the two versions do about the same number
of calls to subsumes/2: both tend to be close to N(N-1) calls. Sometimes
the "unsorted" method does much better than the "sorted" one, sometimes it
does a little worse.
This is actually a very interesting problem. It crops up in all sorts of
guises. I currently have an algorithm which does at most N(N-1) calls to
subsumes/2 and reduces to at most 2(N-1) when subsumes/2 happens to be
total. If anyone knows of a better algorithm I would be _very_ interested
to hear of it.
------------------------------
Date: Fri, 1 Jul 88 17:30:43 PDT
From: [email protected] (Richard A. O'Keefe)
Subject: "Knowledge Systems in Prolog", more examples
Well, I've read more of the Walker, McCord, Sowa, & Wilson book,
and while I still say that there is some good advice in it, there are
one or two things it would pay you *NOT* to imitate.
(1) Wrappers.
On pp 34-35, we are told that the Pascal declaration
type person =
record
name : string;
address : string;
date_of_birth : array [1..3] of integer;
sex : boolean;
end {record};
-- which, by the way, is not a brilliant piece of Pascal --
introduces a type instances of which have as Prolog equivalents
terms of the form
person(name(N), address(A), date_of_birth(D.M.Y), sex(S))
Again, on page 51 we are shown
/* Prolog structure */ { Pascal record }
type loc =
loc( record
farmer(F), farmer : string;
wolf(W), wolf : string;
goat(G), goat : string;
cabbage(C) cabbage : string;
) end {record};
-- which, in context, is an amazingly bad piece of Pascal --
DON'T *DO* THIS! Supposing F, W, G, and C to be constants,
the representation they recommend would cost 13 words in Quintus
Prolog (18 words in another Prolog I know of), whereas the
sum-of-products approach yields the *real* equivalent of the
Pascal record as
loc(Farmer, Wolf, Goat, Cabbage)
at a cost of 5 words in Quintus Prolog (6 in another Prolog).
That's a substantial waste of space and time, and worse still,
it confuses the reader, because when you see patterns like
loc(farmer(F),_,_,_)
and loc(_,wolf(W),_,_)
floating around, your natural assumption is that patterns like
loc(wolf(W),_,_,_) and loc(_,farmer(_),_,_) may be possible, for
why else would anyone have unary wrappers if not to distinguish
such cases?
(2) Over-use of "."
At least in chapter 2, the book has an excessive fondness for ".".
Consider the birth_date(Day.Month.Year) example above. This would
be better as date(Year,Month,Day) --when, amongst other advantages,
it will sort properly-- at a space cost of 4 cells, which is
admittedly the same as the cost of Day.Month.Year. The big
advantage here is that all things made out of dots look alike, so it
is very hard for a human reader to tell D.M.Y from any other X.Y.X,
but date(Y,M,D) proclaims its nature to the world. On page 55, this
book actually _recommends_ X.Y, X.Y.Z and the like, so this was not
an accidental slip but a matter of policy.
The authors have finally cured me of my lingering fondness for infix dot.
I am now thoroughly convinced that bracket notation is to be preferred.
Perhaps if I had been reading Lee Naish's code I might have been swayed
the other way, but I fear that he may not be typical.
(3) Factorial
On page 36 they present the following code:
factorial(0,1).
factorial(N,X) <- lt(N,0) &
write('Negative input to factorial').
factorial(N,X) <- (M := N - 1) & factorial(M,Y) &
(X := N * Y).
{Note: * in VM/Prolog is usually an anonymous variable, but not here.}
What's wrong with this? Well, according to this, factorial(-1, 472)
is true. {For the benefit of those fortunate enough not to have
VM/Prolog, try
factorial(0, 1).
factorial(N, X) :-
N < 0,
write('Negative input to factorial.'), nl.
factorial(N, X) :-
M is N-1,
factorial(M, Y),
X is N*Y.
} For real fun, try factorial(1, 2). If you are going to print an
error message like this, you should at least have the decency to fail.
So the second clause would have been better as
factorial(N, *) <- lt(N,0) &
write('Negative input to factorial') & / & fail.
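Or, transliterated into Edinburgh syntax, something like
    factorial(0, 1) :- !.
    factorial(N, _) :-
        N < 0, !,
        write('Negative input to factorial.'), nl,
        fail.
    factorial(N, X) :-
        N > 0,
        M is N - 1,
        factorial(M, Y),
        X is N * Y.
so that factorial(-1, 472) and factorial(1, 2) both fail instead of
succeeding.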
(4) (falsely so-called) "quick" sort
Section 2.4.1 (p89) starts out admirably, saying that "The choice of
algorithm is the most important factor in determining performance".
But the example they consider is sorting, and they say "Three
different algorithms might be considered:
- a permutation sort that tries all possible permutations of a
list in order to find one that is sorted
- a bubble sort that interchanges pairs of elements until the
result is sorted
- a version of Quicksort ..."
I am really getting thoroughly sick of "quick" sort. Remember, if you
check the fine print in Knuth, Sedgewick, Mehlhorn, or any good book
on the subject, you will find that the average case of "quick" sort
does about 1.4 times as many comparisons as the WORST case of merge sort.
If people are going to presume to give advice about sorting, they might
at least check the standard literature. (Not that sorting is a major
topic in Walker et al, but it wasn't a major topic in Clocksin&Mellish
either, and that didn't stop people mistaking the difference-list
example for advice about how to sort.)
(5) Breadth-first search.
On pp 82-83 we find
breadth_first(Goal, Histories, Answer) <-
member(Goal.Past, Histories) &
reverse(Goal.Past, Answer).
breadth_first(Goal, Histories, Answer) <-
Histories=(*,*) &
set_of(Next.Current.Past,
member(Current.Past, Histories) &
move(Current,Next) & \member(Next,Past),
New_history_list) &
remove_duplicate_head(New_history_list, Clean_list) &
breadth_first(Goal, Clean_list, Answer).
remove_duplicate_head(F.S.Tail, Clean) <-
((F=(Head.*) & S=(Head.*)) ->
remove_duplicate_head(F.Tail,Clean);
(remove_duplicate_head(S.Tail,L) &
Clean=(F.L))).
remove_duplicate_head(F.nil, F.nil).
remove_duplicate_head(nil, nil).
Let's start with remove_duplicate_head. The input is a list of lists,
which is sorted, and subsequences ...,[A|T1],...,[A|Tn],... with the
same head (A) are to be replaced by the first member of the subsequence
(here [A|T1]). Can we do this more cleanly?
remove_duplicate_heads([], []).
remove_duplicate_heads([[Tag|Info]|Given], [[Tag|Info]|Clean]) :-
skip_duplicate_heads(Given, Tag, Clean).
skip_duplicate_heads([[Tag|_]|Given], Tag, Clean) :- !,
skip_duplicate_heads(Given, Tag, Clean).
skip_duplicate_heads(Given, _, Clean) :-
remove_duplicate_heads(Given, Clean).
In Quintus Prolog, the cleaner version is about three times faster.
I think the test \member(Next, Past) might perhaps be better as
\member(Next, Current.Past), in case the problem graph has self-loops.
If you are given a generation function which yields all of the children
of a node at once instead of a move/2 which enumerates them, you can
write breadth first search without appealing to set_of/3 at all.
{The predicate called set_of/3 in this book corresponds to the
predicate called set_of_all/3 in the Quintus library, not to the
predicate called set_of/3, except that set_of_all/3 checks that it
is being called soundly, and set_of/3 in this book doesn't.}
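To make the point about a generation function concrete, here is a rough sketch (mine, not the book's) of breadth-first search over a queue of histories, assuming a hypothetical children/2 which yields the list of all successors of a node at once; member/2, append/3 and reverse/2 are the usual list predicates:
    breadth_first_queue(Goal, [[Goal|Past]|_], Answer) :-
        reverse([Goal|Past], Answer).
    breadth_first_queue(Goal, [[Current|Past]|Rest], Answer) :-
        children(Current, Children),            % all successors at once
        new_histories(Children, [Current|Past], NewHistories),
        append(Rest, NewHistories, Queue),      % add at the END: breadth first
        breadth_first_queue(Goal, Queue, Answer).

    new_histories([], _, []).
    new_histories([Next|Nexts], History, NewHistories) :-
        (   member(Next, History) ->            % already visited on this path
            NewHistories = NewHistories1
        ;   NewHistories = [[Next|History]|NewHistories1]
        ),
        new_histories(Nexts, History, NewHistories1).
It is called as breadth_first_queue(Goal, [[Start]], Answer). This omits the per-level duplicate filtering that remove_duplicate_head/2 does, but it shows that no all-solutions predicate is needed.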
Reminder:
In this note I've concentrated on some of the infelicities in the book.
I repeat that there is a lot of good stuff in it, and on the whole I do
not regret its purchase.
------------------------------
Date: Sat, 2 Jul 88 22:03:31 PDT
From: [email protected] (Richard A. O'Keefe)
Subject: "Knowledge Systems and Prolog", chapter 3
Yes, it's me again, with yet a third set of comments on the
"Knowledge Systems and Prolog" book by Walker, McCord, Sowa, & Wilson.
This particular set of comments pertains to chapter 3. I hasten to
say at the outset that there is a lot of good stuff in this chapter
which I wish I had written myself, but of course I have nothing to
say about _that_.
1. Logic Programming Development Process (pp 110-111)
Don't take steps 1..9 too seriously; that's not how I do it, and
one of the most important steps has been omitted, namely "check to
see if someone else has already solved the problem". But the rest
of the advice on p111 is good.
2. Reading (p 113)
The book presents two fragments (recast here as Prolog):
...
read(Stream1, X),
read(Stream2, T),
process(X, T)
...
and
...
customer(Name),
catalogue_item(Item),
interested_in(Name, Item),
send_letter(Name, Item)
saying "For example, catalogue_item/1 may simply read the next term".
Now the second fragment looks as though it has a declarative reading.
But its behaviour is radically different from the behaviour which
would result if customer/1 and catalogue_item/1 were tables. It is
_NOT_ good style to give commands with side effects names which suggest
that they are pure relations. In fact it is very bad style. Suppose
we had the customer and catalogue_item tables in memory as pure
relations, and wrote this failure-driven loop:
send_letters :-
customer(Customer),
catalogue_item(Item),
interested_in(Customer, Item),
send_letter(Customer, Item),
fail
; true.
This would combine *every* Customer with *every* catalogue Item.
But the original fragment with the two read commands doesn't do that;
it reads an X and a T and combines _that_ X with _that_ T and no other.
By the way, you _can_ keep (a small number of) tables in files and
access them with read/1. Here's an example (using Quintus Prolog):
:- dynamic
table_stream/2.
initial_position('$stream_position'(0,1,0)).
read_from_external_relation_1(Table, Entry) :-
( table_stream(Table, Stream) -> true
; external_relation(Table, FileName),
open(FileName, read, Stream),
assert(table_stream(Table, Stream))
),
initial_position(Zero),
do_external_access(Zero, Stream, Entry).
do_external_access(Before, Stream, Entry) :-
stream_position(Stream, _, Before),
read(Stream, Term),
stream_position(Stream, After),
( Term == end_of_file -> fail
; Entry = Term
; do_external_access(After, Stream, Entry)
).
external_relation(customer, 'custom.dat').
external_relation(catalogue_item, 'catalo.dat').
customer(Customer) :-
read_from_external_relation_1(customer, Customer).
catalogue_item(Item) :-
read_from_external_relation_1(catalogue_item, Item).
Now, this _is_ a version of catalogue_item/1 which "simply read[s]
the next term", but it is pretty remote from the first fragment.
I don't know whether Frank McCabe invented this technique, but he's
the one I got it from (testing the code above was the first time I
have ever _used_ the technique, mind...).
If you can possibly fit the information into memory, _do_ it.
Don't keep reading it from a file over and over again. Another
possibility, instead of using read_from_external_relation_1/2,
would be to read a file once and build an index in memory, so
that fewer reads would be done.
For example, suppose we have a file 'passwd.pl' containing items like
passwd(teksup,123,45,'Tech Support', '/peano/teksup','/bin/csh').
passwd(halt, 6, 7,'Stop Machines', '/', '/etc/halt').
passwd(ok, 89,10,'Richard A. O''Keefe','/dedekind/ok', '/bin/csh').
{Crackers beware: these are _not_ real Quintus /etc/passwd entries!}
We might do this:
:- dynamic
passwd_stream/1,
passwd_index/2.
initialise_passwd(Stream) :-
open('passwd.pl', read, Stream),
assert(passwd_stream(Stream)),
repeat,
stream_position(Stream, Position),
read(Stream, Term),
( Term = passwd(Key,_,_,_,_,_) ->
assert(passwd_index(Key, Position)),
fail
; true
),
!. % The Stream is left open!
passwd1(Key, Uid, Gid, Name, Home, Shell) :-
( passwd_stream(Stream) -> true
; initialise_passwd(Stream)
),
passwd_index(Key, Position),
stream_position(Stream, _, Position),
read(Stream, passwd(Key,Uid,Gid,Name,Home,Shell)).
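With the sample file above, a session then goes roughly like this (the first call opens the file and builds the index; later calls do one seek and one read each; exact top-level formatting will of course vary):
    | ?- passwd1(ok, Uid, Gid, Name, Home, Shell).

    Uid = 89,
    Gid = 10,
    Name = 'Richard A. O''Keefe',
    Home = '/dedekind/ok',
    Shell = '/bin/csh'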
It is fair to give a predicate so defined a name which suggests
that it is a pure table, because (apart from tying up a stream and
being rather slow) that's how it _acts_.
ONLY use this technique if you can build a good index; it can be
hundreds, even thousands, of times slower than accessing tables in
main memory.
3. Wrappers (pp 114-115)
There is an example on pp 114-115 which reads thus:
name(Name, TelephoneNumber, File) :-
get_next_record(File, Record),
parse_record(Record, name(Name,TelephoneNumber)).
^^^^^ ^
parse_record(Record, name(Name,TelephoneNumber)) :-
^^^^^ ^
parse_name(Name, Record, Rest_of_record),
parse_blanks(_, Rest_of_record, Last_of_record),
parse_telephone(TelephoneNumber, Last_of_record, _).
{If you look at the code in the book, you'll see that the first
argument of parse_blanks/3 is an anonymous variable in all calls
and definitions, so it might as well not be there.}
There is much to quarrel with in the choice of names here:
the parse_*** predicates are genuinely relational, so should have
noun-phrase or adjective-phrase names, not names that look like
commands. Worst of all, the operation which _is_ a command (that
is, which has a side effect) is given the name name/3 which looks
like a pure relation. It would be better named e.g.
read_name_and_phone_number(File, Name, PhoneNumber)
My main point here is that name(_,_) is a useless wrapper. If the
name and phone number had to be returned to the caller so packaged,
there'd be some sense in it, but not here. It would be better as
read_name_and_phone_number(Stream, Name, PhoneNumber) :-
get_line(Stream, Chars),
name_and_phone_number(Name, PhoneNumber, Chars, _).
name_and_phone_number(Name, Exchange-Extension) -->
token(Name),
blanks,
number(Exchange), "-", number(Extension).
where the compound term -(_,_) here _is_ justified because that's
what the caller wants.
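The nonterminals token//1, blanks//0 and number//1 are not defined in the book or above; one plausible reading of them (mine), over lists of character codes in the usual DCG style, is:
    blanks --> [C], { (C =:= 32 ; C =:= 9) }, !, blanks.   % skip spaces and tabs
    blanks --> [].

    token([C|Cs]) --> nonblank(C), token_rest(Cs).          % maximal non-blank run
    token_rest([C|Cs]) --> nonblank(C), !, token_rest(Cs).
    token_rest([]) --> [].
    nonblank(C) --> [C], { C =\= 32, C =\= 9 }.             % not space, not tab

    number(N) --> digit(D), digits(Ds),
        { number_chars(N, [D|Ds]) }.            % codes version, as atom_chars/2 above
    digits([D|Ds]) --> digit(D), !, digits(Ds).
    digits([]) --> [].
    digit(D) --> [D], { D >= 0'0, D =< 0'9 }.
With these, Name comes back as a list of character codes; atom_chars/2 will turn it into an atom if that is what the caller wants.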
4. Query-the-user (pp 116-117, p120).
When you hit page 116, look ahead to page 120. I'll not comment on
the rather major problems of the code on page 116, because the code
on page 120 remedies some of them.
The idea is that you want a relation user('User''s first name')
-- by the way, I find it offensive to have computers address me by
my first name, so whenever a program asks me my first name I lie;
my computer account name will do fine for any genuine identification--
but you don't want to ask unless you turn out to need the information.
The code on page 120 is (converted to Quintus Prolog):
:- dynamic
known/1.
user(Name) :-
known(user(Name)),
!.
user(Name) :-
prompted_line('What is your first name? ', Chars),
( verify(Chars) -> % you have to supply verify/1
atom_chars(Name, Chars),
assert(known(user(Name)))
; format('~s: not verified.~n', [Chars]),
user(Name)
).
{This is not identical to the code in the book, but it has the same
kinds of bugs, which is what I'm concerned with.}
I loaded this into Quintus Prolog, together with library(prompt)
and this definition of verify/1:
verify([_,_]).
Now see what happens:
| ?- user(fred).
What is your first name? jim
jim: not verified. % right
What is your first name? ok
no % right, BUT
| ?- listing(known/1). % prints _nothing_
| ?- user(Who). % should pick up 'ok', BUT
What is your first name? ok % it asks AGAIN!
Who = ok % but that's not all...
| ?- user(dz).
What is your first name? ok % it asks yet AGAIN!
no
The code breaks down in (at least) two ways:
A) If we call it with Name bound when known(user(X)) is true--i.e.
the user has been asked for his name--but X\==Name, the user
will be asked for his name again. Fortunately, the _second_
breakdown then occurs, so at least it isn't _stored_ again.
B) If we call it with Name bound when the user name is not known
(or when it is known to be different from Name), we'll ask for
the name, verify it, and then fail before storing the name.
How should it be written?
:- dynamic
property_known/2.
property_type(user, atom).
...
property_prompt(user, 'What is your first name? ').
...
property_verify(user, Chars) :-
verify_user(Chars).
...
simple_query(Property, Value) :-
simple_query_1(Property, X),
Value = X.
simple_query_1(Property, Value) :-
property_known(Property, Value),
!.
simple_query_1(Property, Value) :-
property_type(Property, Type),
simple_query_1(Type, Property, Value),
assert(property_known(Property, Value)).
...
simple_query_1(atom, Property, Value) :-
property_prompt(Property, Prompt),
repeat,
prompted_line(Prompt, Chars),
( property_verify(Property, Chars)
; format('~s: not verified~n', [Chars]), fail
),
!,
atom_chars(Value, Chars).
...
user(UserName) :-
simple_query(user, UserName).
Now, with this definition, we get
| ?- user(fred).
What is your first name? jim
jim: not verified
What is your first name? ok
no
| ?- listing(property_known/2).
property_known(user,ok).
yes
| ?- user(Who).
Who = ok
| ?- user(dz).
no
which is what you would expect something like this to do.
5. "Global variables" (p122)
I have to quote their VM/Prolog code here, because delax/1 doesn't
behave quite like retract/1, more like retract_first/1.
A ::= B <-
try(delax(global_value(A, *))) &
addax(global_value(A, B)).
try(X) <- X.
try(*).
What's wrong with this? Well, after converting to Quintus Prolog,
I got the following result:
| ?- x ::= 1, listing(global_value/2).
global_value(x,1).
| ?- x ::= 2, global_value(x, 1).
no
| ?- listing(global_value/2).
global_value(x, 2).
global_value(x, 2).
I'll leave you to figure _that_ one out for yourselves, but the
moral is that any time the last clause of a predicate is a
catch-all case like the last clause of try/1, the chances are
someone is trying to be clever and failing.
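The usual way to write this sort of update without a catch-all clause is to make the "delete it if it is there" step a local if-then-else, so that backtracking cannot wander back into it. A minimal sketch (mine, not the book's):
    :- dynamic
            global_value/2.

    set_global(Key, Value) :-
        (   retract(global_value(Key, _)) -> true
        ;   true                                % nothing to delete: that's fine
        ),
        assert(global_value(Key, Value)).
Backtracking over a call of set_global/2 now leaves exactly one global_value/2 clause for Key, because the if-then-else leaves no choice point behind.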
6. Another steadfastness problem (p122)
We are told on page 122 that "the global_value/2 technique
described above can be used to simulate a stack, as follows:
push(Stack, Item) <-
addax(global_value(Stack, Item), 0). % "asserta"
pop(Stack, Item) <-
delax(global_value(Stack, Item)).
The predicates push/2 and pop/2 have their obvious meaning."
Actually, they haven't. Or at any rate pop/2 hasn't. If pop/2
succeeds, it did delete an item from the stack, but the item it
deleted may not have been at the top of the stack... Working
Prolog code is
push(Stack, Item) :-
asserta(global_value(Stack,Item)).
pop(Stack, Item) :-
retract(global_value(Stack,Top)),
!,
Item = Top.
top(Stack, Item) :-
global_value(Stack, Top),
!,
Item = Top.
TO BE CONTINUED
I have about as much more to add, but it's time to go home.
DISCLAIMER
Remember, I'm criticising slips in a by-and-large-reasonable book;
the good stuff can speak for itself.
You'll recall that I had very little to criticise in the _content_
of the Warren & Maier book, but raged about the misleading preface
and blurb. I recommended it to the Prolog class I taught last week,
and if I had bought the Walker et al book by then I'd probably have
suggested it to them as worth reading too. Yes, I've made up my
mind, I _shall_ recommend it to the next Quintus class, if only to
show them why they should buy QP370 rather than VM/Prolog (:-).
---------------------------------
End of PROLOG Digest
========================================================================
|
| I N T E R O F F I C E M E M O R A N D U M
Date: 30-Sep-1988 09:18 CET
From:
AI_INFO@AIVTX@MRGATE
Dept:
Tel No:
TO: ROACH@A1NSTC
Subject: PROLOG Digest V6 #49
PROLOG Digest Monday, 18 July 1988 Volume 6 : Issue 49
Today's Topics:
Query - TI-Resolution,
Implementation - Speed & Style,
LP Library - Vision
-----------------------------------------------------------------------------------------------------------------------------
Date: Wed, 13 Jul 88 15:10:21 MDT
From: [email protected]
Subject: TI-Resolution
Does anyone know of any references on the TI-resolution algorithm
of McSkimin and Minker from the early seventies? Please send me a message
if you know of any such reference. It will certainly be appreciated!
-- Shaun-inn Wu
------------------------------
Date: 14 Jul 88 23:38:03 GMT
From: [email protected] (Saumya Debray)
Subject: "A Note on the Speed of Prolog"
The current (August 88) issue of ACM SIGPLAN Notices has an article
"A Note on the Speed of Prolog", by J. Paakki, that's
interesting. The author reports on an experiment comparing the
speeds of compilers, written in Pascal and Prolog, for the
language Edison. What's interesting is that even though the
Prolog implementation used is C-Prolog, the Prolog version of the
compiler is typically only about 5 times slower than the Pascal
version. Now there are faster Prolog systems readily available
that are anywhere from 10 to 50 times faster than C-Prolog.
Assuming that the comparison is a fair one (i.e. no one is
writing execrable Pascal code or using the slowest Pascal system
available), this seems to suggest that using a "state-of-the-art"
Prolog system, one could actually have a Prolog version of the
compiler that was faster than the Pascal version.
-- Saumya Debray
-----------------------------
Date: 15 Jul 88 03:16:19 GMT
From: [email protected]
Subject: Need style advice
I'd like some advice about how to clean up a relation.
foo_value/2 is intended to be a proper, single-valued function.
Several clauses can be satisfied at the same time, so I'm depending on
the lexical ordering of clauses and the use of cut to give the proper
precedence among multiple satisfiable clauses and enforce a single
value.
foo_value(X, value_1) :- forced_to_be_1(X), !.
foo_value(X, value_2) :- relation_one(X, Y), not(forced_to_be_1(Y)), !.
foo_value(X, value_2) :- relation_two(X, Y), foo_value(Y, value_2), !.
foo_value(X, value_1) :- relation_one(X, Y), forced_to_be_1(Y), !.
foo_value(X, value_1) :- relation_two(X, Y), foo_value(Y, value_1), !.
foo_value(X, value_3).
Is there a better way to get the same effect? "Better" can mean
more efficient, more aesthetic, closer to purely logical, fewer uses
of cut and not, or whatever metric experienced Prolog users use. :-)
Some other notes:
relation_two/2 contains no cycles. (It is neither reflexive
nor symmetric, nor are there any longer sequences XrY YrZ ... WrX.)
forced_to_be_1/1 is a collection of ground literals.
relation_one/2 and relation_two/2 can satisfy multiple second
arguments for a fully instantiated first argument.
It would be nice, but not essential, if both arguments could be
given as unbound variables. Normally, I try to satisfy
foo_value(literal, V).
-- Stu Friedberg
------------------------------
Date: 15 Jul 88 22:53:51 GMT
From: [email protected] (Richard A. O'Keefe)
Subject: Need style advice
In article <[email protected]> stuart writes:
> foo_value(X, value_1) :- forced_to_be_1(X), !.
>
> foo_value(X, value_2) :- relation_one(X, Y), not(forced_to_be_1(Y)), !.
> foo_value(X, value_2) :- relation_two(X, Y), foo_value(Y, value_2), !.
>
> foo_value(X, value_1) :- relation_one(X, Y), forced_to_be_1(Y), !.
> foo_value(X, value_1) :- relation_two(X, Y), foo_value(Y, value_1), !.
>
> foo_value(X, value_3).
> forced_to_be_1/1 is a collection of ground literals.
> relation_two/2 contains no cycles.
> relation_one/2 and relation_two/2 can satisfy multiple second
> arguments for a fully instantiated first argument.
Note that the code as written is not steadfast:
?- foo_value(X, value_3).
will succeed for ANY value of X, even when forced_to_be_1(X).
Remember: a cut at the end of a clause is almost always in the wrong place.
Would it be possible for you to tell us what the relation is supposed to
_mean_, rather than showing us the current version of the code? For
example, the following values for forced_to_be_1/1 and relation_one/2
forced_to_be_1(jim). % "male"
forced_to_be_1(tom).
relation_one(tom, jim). % "sibling"
relation_one(jim, tom).
relation_one(tom, mary).
...
relation_two(_, _) :- fail. % "parent"
appears to satisfy the description given, yet both
relation_one(tom, mary), \+forced_to_be_1(mary)
and relation_one(tom, jim), forced_to_be_1(jim)
are true, so should foo_value(tom, value_2) or foo_value(tom, value_1)?
Presumably there is some finite number of Xs which you intend to assign
foo_values to. If that number is not too big, why not just write a
program to generate the table and write the results to a file, then
compile that file?
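Such a table-generating program can be tiny. A sketch (mine, not part of the digest; node/1 is a hypothetical enumerator of the Xs of interest, and the generated facts get a new name so they do not collide with the program that computes them):
    write_foo_table(FileName) :-
        open(FileName, write, Stream),
        (   node(X),
            foo_value(X, Value),
            format(Stream, '~q.~n', [foo_table(X, Value)]),
            fail                                % failure-driven loop over all nodes
        ;   true
        ),
        close(Stream).
The resulting file of foo_table/2 facts can then be compiled like any other source file.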
------------------------------
Date: 14 Jul 88 06:53:49 GMT
From: [email protected]
Subject: summary: Computer Vision and Logic Programming
Well,
Some of you may remember my earlier posting regarding the intersection of
computer vision and logic programming. For those of you who replied, many
thanks.
What was found:
Alan J. Vayda ([email protected]) is using Prolog for high-level
object recognition on range data. Ref: Kak, Vayda, Cromwell, Kim and Chen,
"Knowledge-Based Robotics", Proc. of the 1987 Conf. on Robotics and
Automation.
Ray Reiter and Alan Mackworth at Univ. of British Columbia have a paper
"The Logic of Depiction" (UBC TR 87-24) which proposes a theory of vision
based in first order logic. Net address: mack%[email protected].
(note: Dr. Mackworth, I could never get mail back to you. My address is at
the end of this message.)
In vol. 1 of "Concurrent Prolog: Collected Papers", edited by E. Shapiro, MIT
Press, 1987, S. Edelman and E. Shapiro have "Image Processing in Concurrent
Prolog", which deals with algorithms for low-level vision. (edelman or udi,
@wisdom.BITNET)
Denis Gagne's thesis, from U. of Alberta, in Edmonton, home of the Oilers
and the West Ed. Mall, describes a reasoning approach to scene analysis.
The person who referred this to me didn't know the title, but did give me
a lead on how to find out. Randy Goebel ([email protected]) and David
Poole ([email protected]) were Denis's advisors.
Mulgaonkar, Shapiro and Haralick describe a rule-based approach for
determining shape-from-perspective in "Shape from Perspective: a Rule Based
Approach", CVG&IP 36, pp. 289-320, 1986.
That's about all that I've heard. I'm still interested in hearing more,
though.
-- Vince Kraemer
|
| I N T E R O F F I C E M E M O R A N D U M
Date: 30-Sep-1988 09:19 CET
From:
AI_INFO@AIVTX@MRGATE
Dept:
Tel No:
TO: ROACH@A1NSTC
Subject: PROLOG Digest V6 #51
PROLOG Digest Tuesday, 19 July 1988 Volume 6 : Issue 51
Today's Topics:
Implementation - Style & Unification
--------------------------------------------------------------------------------------------------------------------------
Date: 17 Jul 88 04:32:41 GMT
From: [email protected]
Subject: Need style advice
Summary: More information about intended behavior
References: <[email protected]>, <[email protected]>
> Richard O'Keefe responds to my query with:
> Note that the code as written is not steadfast:
> ?- foo_value(X, value_3).
> will succeed for ANY value of X, even when forced_to_be_1(X).
> Remember: a cut at the end of a clause is almost always in the wrong place.
Ok, that's the first problem. My intent is to use the
value in the head of the first satisfied clause, using value_3 only
when no previous clause can be satisfied.
Your reminder is appreciated, but it would be more useful if I were
also made acquainted with how things *should* be done.
> Would it be possible for you to tell us what the relation is supposed to
> _mean_, rather than showing us the current version of the code?
Yes, my apologies if this description includes irrelevant detail. I
have a graph with two colors of arcs, and two kinds of node coloring.
foo_value/2 is to give values to nodes based on graph
connectivity and coloring.
The first kind of node color distinguishes between "forced_to_be_1" and
otherwise. If a node is forced_to_be_1, it receives value_1, otherwise
we consider its neighbors. We look, first, for neighbors from whom X
can inherit value_2. If no such neighbors can be found, we look for
neighbors from whom to inherit value_1. If we find no neighbors from
whom value_2 or value_1 can be inherited, we give X value_3.
This is what I intended to express through the ordering of clauses.
(1) Is the decision forced, (2) Can we find a value_2, (3) Can
we find a value_1, (4) In all other cases, value_3.
There are two ways for X to inherit a value from a neighbor, and that
is what the pairs of clauses (2&3 and 4&5) were intended to express.
relation_one(X, Y) is satisfied when node X is connected to node Y by
an arc of the first color, and node Y has a particular value of the
second kind of node color. X receives a value based directly on
the first kind of node coloring of Y.
relation_two(X, Y) is satisfied when node X is connected to a node _
by an arc of the first color, and node _ is connected to node Y by
an arc of the second color. (The second kind of node coloring is
irrelevant in this case.) X receives the same value that Y receives.
> For example, the following values for forced_to_be_1/1 and relation_one/2
> [ some clauses deleted ]
> appears to satisfy the description given, yet both
> relation_one(tom, mary), \+forced_to_be_1(mary)
> and relation_one(tom, jim), forced_to_be_1(jim)
> are true, so should foo_value(tom, value_2) or foo_value(tom, value_1)?
foo_value(tom, value_2). My intent is to try the four steps I gave
above, stopping the first time a step succeeds. I suppose I should stress
that I want foo_value to succeed exactly *once* in all cases.
> Presumably there is some finite number of Xs which you intend to assign
> foo_values to. If that number is not too big, why not just write a
> program to generate the table and write the results to a file, then
> compile that file?
Well, if there were only one graph of interest, I'd do that, but this
*is* an (evidently incorrect) program to generate the table! (Actually,
it's my attempt to express more-or-less declaratively what a C program
is busy doing on a dynamically changing graph structure, but you get
the idea.)
****************
Clearly, I have not gone about things the right way. My suspicions
that things could be improved motivated my query in the first place.
I frankly hadn't noticed the "steadfast"ness bug. So, what is the
recommended form for simple decision trees in Prolog? How do I
obtain the intended behavior?
Unless I receive strong urging to the contrary, I would like to *avoid*
rewriting the clause bodies to something like the following:
foo_value(X, a) :- pred_a(X).
foo_value(X, b) :- \+pred_a(X), pred_b(X).
foo_value(X, c) :- \+pred_a(X), \+pred_b(X), pred_c(X).
foo_value(X, d) :- \+pred_a(X), \+pred_b(X), \+pred_c(X).
This is "pure", but I find it harder to read and understand. I suspect
that it's also considerably less efficient. If not so, please urge otherwise.
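A sketch of one common way to write such a decision tree (a single clause which commits with if-then-else and does the output unification last, so that the predicate is steadfast and succeeds exactly once; this is only a sketch, not something taken from the digest):
    foo_value(X, Value) :-
        (   forced_to_be_1(X) ->
            V = value_1
        ;   relation_one(X, Y), \+ forced_to_be_1(Y) ->
            V = value_2
        ;   relation_two(X, Y), foo_value(Y, value_2) ->
            V = value_2
        ;   relation_one(X, Y), forced_to_be_1(Y) ->
            V = value_1
        ;   relation_two(X, Y), foo_value(Y, value_1) ->
            V = value_1
        ;   V = value_3
        ),
        Value = V.
This keeps the textual order of the four steps, drops the repeated negations, and makes foo_value(X, value_3) fail when X is in fact forced_to_be_1.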
-------------------------------
Date: Sun, 17 Jul 88 14:33:46 BST
From: Fernando Pereira <pereira%[email protected]>
Subject: Unification Algorithms
> John Lloyd, in "Foundations of Logic Programming" 2nd extended edition, gives
> 3 references for linear time/space unification algorithms with the
> occur check.
> I haven't studied any of them, and Lloyd says that none of them are used
> in PROLOG systems (surely there must be one somewhere that uses one of these
> algorithms). He doesn't say why they aren't used, perhaps the basic
> multiplier for their expense is too high, even though they are linear
> in complexity.
The reason why most (if not all) Prolog systems do not use any of
these algorithms but rather the Robinson algorithm with the occurs
check removed (RWOC) is not ignorance on the part of the implementers,
but the fact that the algorithms are linear in the _sum_ of the sizes
of the two terms being unified. If this seems modest, consider the
execution of
append([], L, L).
append([X|L1], L2, [X|L3]) :- append(L1, L2, L3).
with the first argument instantiated to a list of length N. If one of
the algorithms in question is being used for all unifications, then
the execution time will be O(N^2), while it is just O(N) in a normal
Prolog system. In addition to this problem, the cost per basic
unification step in those algorithms is higher than that in the RWOC
algorithm, both because of the more complex data structures used and
the larger amount of information that has to be saved ("trailed") for
backtracking. In the original design of Prolog as a _programming_
_language_ based on Horn-clause logic, it was (correctly) believed
that the potential unsoundness and non-termination caused by the
choice of "unification" algorithm was a lesser evil than using a sound
and always terminating algorithm that made the language impractical
compared with, say, Lisp -- the obvious counterpart of append/3 in
Lisp is clearly O(N).
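To see what "Robinson with the occurs check" amounts to, here is a naive sketch written in Prolog itself (mine; nothing like how a real system would implement it, and with exactly the cost problem described, since occurs/2 walks the whole term for every binding):
    unify(X, Y) :- var(X), var(Y), !, X = Y.
    unify(X, Y) :- var(X), !, \+ occurs(X, Y), X = Y.
    unify(X, Y) :- var(Y), !, \+ occurs(Y, X), Y = X.
    unify(X, Y) :-                              % both non-variables
        functor(X, F, N),
        functor(Y, F, N),
        unify_args(N, X, Y).

    unify_args(0, _, _) :- !.
    unify_args(N, X, Y) :-
        arg(N, X, AX),
        arg(N, Y, AY),
        unify(AX, AY),
        M is N - 1,
        unify_args(M, X, Y).

    occurs(V, T) :-                             % does variable V occur in T?
        V == T, !.
    occurs(V, T) :-
        nonvar(T),
        functor(T, _, N),
        N > 0,
        occurs_arg(N, V, T).

    occurs_arg(N, V, T) :-
        N > 0,
        arg(N, T, A),
        (   occurs(V, A) -> true
        ;   M is N - 1,
            occurs_arg(M, V, T)
        ).
Like the RWOC algorithm, this knows nothing about shared subterms, so it takes O(2^N) time on the reentrant_term example below, and it will loop if handed a circular term.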
Now, it would be nice to have a sound and terminating algorithm and
still maintain the practical efficiency of unification. It would also
be nice to avoid the exponential worst-case performance of the RWOC
algorithm in problems like
exp(N) :- reentrant_term(N,T1), reentrant_term(N,T2), T1 = T2.
reentrant_term(0, 0).
reentrant_term(s(N), f(T,T)) :-
reentrant_term(N, T).
The problem here is that, textually, the term corresponds to the
full binary tree of height N, with size O(2^N), but it only contains
O(N) distinct subterms. Unfortunately, the RWOC algorithm knows
nothing about shared subterms and spends O(2^N) time unifying T1 with
T2, when O(N) would be sufficient.
The above problems have been attacked in two distinct ways. The first,
more widely available (Prolog-II, SICStus Prolog and probably others),
is based on changing the universe of terms so that the circular
structures created by an algorithm without occurs check are allowed.
This may appear a dirty trick, but in fact work by Colmerauer, van
Emden, Jaffar and others shows that it can be given a reasonable
logical interpretation, in which programs are interpreted not as usual
over the Herbrand universe but rather over a domain of _rational
graphs_, defined by a particular equational theory. However, although
the RWOC algorithm can build such creatures, it may loop if given
them, eg. try
?- X = f(X,X), Y = f(Y, Y), X = Y.
in C-Prolog, Quintus Prolog, and probably most other current Prolog
systems. Instead, what is needed is an algorithm that recognizes when
it is trying to unify a pair of subterms it tried to unify before and
just skips that pair. Various techniques have been proposed for this,
mostly refinements, variations or reinventions of the worst-case
O(N^2) algorithm presented by Fages in his dissertation. The best
algorithm I know for this purpose is the obvious adaptation of the
UNION-FIND algorithm for FSA equivalence (consult standard textbooks
for this), but it has the problem that a lot of information may have
to be saved for restoring terms on backtracking.
The other alternative is to stick to the usual theory of terms, bring
in the occurs check, but recognize situations in which the full
algorithm is not needed, eg. when unifying the first occurrence of a
variable in the head against the corresponding argument. Ideas of this
nature have been pursued by various people, eg. David Plaisted, but I
don't know of the latest results on its practical application. It
clearly seems a good idea for clausal theorem provers based on Prolog
technology (eg. Stickel's PTTP and Plaisted's prover distributed on
this digest a couple of years ago), but it is unclear to me whether it
is really practical for Prolog execution, eg. whether it will reduce
the execution of append/3 above to O(N).
I apologize to those people who have worked on this problem but were
not mentioned above (I'm composing this without access to a library),
and I would appreciate any corrections, updates and new references on
the topic.
-- Fernando Pereira
--------------------------------
End Of PROLOG Digest
|