T.R | Title | User | Personal Name | Date | Lines |
---|---|---|---|---|---|
33.1 | nosense | MLNOIS::TOGNAZZI | | Mon Oct 31 1988 18:31 | 3 |
| I think that comparing NEXPERT with OPS5 on speed is nonsense.
Dario.
|
33.2 | But customers want it | MUNICH::DIRK | | Wed Nov 02 1988 10:09 | 10 |
|
Customers, however, don't think so. They want to know the
differences between the two, NEXPERT and OPS5. That's why we've
tackled the same problem with both tools. NEXPERT's
runtime speed can well be compared to OPS5's, and speed turns
out to be a significant advantage of OPS5 over NEXPERT.
Dirk
|
33.3 | no sense at all | ITGATE::TOGNAZZI | | Wed Nov 02 1988 11:09 | 13 |
| I think that NEXPERT can be fast or slow. If you try to use
NEXPERT in OPS5 style, you surely have no chance.
If you use inheritance or some other feature OPS5 lacks, OPS5
is still faster, but you have to do a lot of work by hand.
And what about backward chaining?
In the end I think that, depending on your problem, the tool can be
fast or slow; after all, assembler is faster than OPS5 in some benchmarks.
Bye, Dario.
|
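Dario's point about backward chaining can be made concrete with a toy goal-driven resolver (a Python sketch, not NEXPERT's actual engine; the fact and rule names are invented for illustration): instead of matching all rules against all facts, the engine starts from a goal and only evaluates the rules that can establish it.

```python
# Toy backward chainer: prove a goal from known facts and if-then rules.
# Illustrative sketch only; real engines add variables, unification,
# and loop detection.

facts = {"engine_hot", "oil_low"}
rules = {
    # goal: list of alternative premise sets that establish it
    "overheating_risk": [{"engine_hot", "oil_low"}],
    "must_stop":        [{"overheating_risk"}],
}

def prove(goal):
    if goal in facts:
        return True
    # Only rules concluding `goal` are considered (goal-driven search).
    for premises in rules.get(goal, []):
        if all(prove(p) for p in premises):
            return True
    return False

print(prove("must_stop"))  # True: every premise chains back to a known fact
```

The point is structural: the cost of a backward-chaining query depends on the depth of the goal tree, not on the total number of facts, which is one reason speed comparisons depend so much on the shape of the problem.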
33.4 | Think problem oriented | MUNICH::DIRK | | Thu Nov 03 1988 09:41 | 25 |
| Sure, you can only compare the fraction of the functionality common to
both tools. And a customer's view is problem-oriented: he doesn't
care about functionality he doesn't need. If his problem
turns out to be solvable with the fraction of the functionality
common to NEXPERT and OPS5, then speed may be the critical argument
(for OPS5), or development comfort may be another (for NEXPERT).
Developers should be careful not to oversell functionality.
The motivation for our comparison was
1) to check Neuron Data's statement "..NEXPERT is fast"
2) to see how speed varies with the complexity of the rules
3) to see how OPS5 speed varies with the size of the conflict set
and the result was that in all cases OPS5 was much faster at runtime.
This is what counts for the end user.
Dirk
|
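The conflict set mentioned in point 3) above can be illustrated with a toy forward-chaining loop (a Python sketch, not OPS5's Rete algorithm; the rule and fact names are made up): on each cycle every rule is matched against working memory, all matching instantiations form the conflict set, and one is selected to fire, so naive match cost grows with the product of rules and facts.

```python
# Toy forward-chaining engine: naive match, conflict set, fire-one cycle.
# Illustrative sketch, not OPS5 (no Rete network, no recency or
# specificity ordering in conflict resolution).

rules = [
    # (name, condition on a fact, action producing a new fact)
    ("classify", lambda f: f[0] == "temp" and f[1] > 100,
     lambda f: ("alarm", f[1])),
    ("log",      lambda f: f[0] == "alarm",
     lambda f: ("logged", f[1])),
]

def run(working_memory):
    fired = []
    while True:
        # Match phase: every rule against every fact -> conflict set.
        conflict_set = [(name, fact, act)
                        for name, cond, act in rules
                        for fact in working_memory
                        if cond(fact) and (name, fact) not in fired]
        if not conflict_set:
            break
        # Conflict resolution: here simply pick the first instantiation.
        name, fact, act = conflict_set[0]
        fired.append((name, fact))
        working_memory.append(act(fact))  # Act phase
    return working_memory

wm = run([("temp", 120)])
# wm now also contains ("alarm", 120) and ("logged", 120)
```

A production-quality engine avoids re-matching everything each cycle, which is exactly what makes the size and churn of the conflict set a meaningful benchmark dimension.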
33.5 | Faster NEXPERT development? | HERON::ROACH | TANSTAAFL ! | Tue Nov 08 1988 13:05 | 32 |
| Does anyone have any idea how NEXPERT and OPS5 compare in the area
"speed of development"? I know that most people automatically think
that NEXPERT far outdistances OPS in this area, but I'm not convinced
and would like to be convinced by some facts. I'm not talking about
the first week of the project, but about the full lifecycle up to the
point where the system is ready to be put into the users' hands.
Also, I'm not interested in the systems that are small toys developed
by inexperienced programmers, but in major systems that are done
by experienced software engineers and knowledge engineers.
The issues that come to mind that make me question the assumed
superiority of NEXPERT over OPS in speed of development are
o how many restarts does one have to make with NEXPERT because of
unforeseen limitations in the tool
o how much time does one have to spend writing "workaround" routines
because of the limitations implied above (reference the recent
internal AI training course in Munich where the students had to
start writing workaround code in C on their second day of trying
to build a prototype)
o how many projects started in NEXPERT had to be scrapped because
workarounds either couldn't be made or were not worth the effort
o how much time is wasted on a NEXPERT project waiting for support
from Neuron Data
These are a few of the questions I would like answered about the
overall productivity gains of NEXPERT over OPS before I start
considering betting my reputation and DIGITAL's on these
claims.
|
33.6 | Speeds up the start phase | MUNICH::DIRK | | Tue Nov 08 1988 14:04 | 16 |
| The discussion so far was not about development but runtime speed.
For development, the comfort of NEXPERT may help the
beginner to understand AI concepts like rules, objects, etc.
In addition, it may help the experienced expert-systems
developer during the knowledge acquisition phase: it simply
allows him to discuss the rule networks with the expert, who should
understand them better than OPS rules. I doubt that
this comfort is needed in more advanced phases.
Dirk
|
33.7 | Goods and badsss | HERON::SERAIN | | Tue Dec 13 1988 17:00 | 31 |
|
I can see that some people are getting religion about tools! Any
tool has its pluses and minuses, and best addresses a certain set of
problems that we need to understand. This takes some time, especially
when the tool is new.
In the process of understanding what a tool is good for, we
need to be "impartial" and base our conclusions on real facts and
not on reported comments. If some of the bad points about NEXPERT
stated here are correct, some others need to be revisited:
1) Unforeseen limitations of the tool. To be able to appreciate
the capabilities of a tool takes a lot of experience with it. Reading
the user manual helps a little. Talking to other people is better,
but you need to address the issues very precisely. You need to know
the characteristics of your problem in order to see whether
a tool is appropriate or not. This understanding is the whole purpose
of prototyping. Changing tools after the prototyping phase is
not a drama.
2) To say that NEXPERT needs to call out very often is not really
fair. This again depends on the problem. What we can say is that
any problem needs to collect data from outside, which implies a call-out
(or call-in, depending on your implementation). This is something
very easy to do in NEXPERT (the callable interface is well done).
This is what was done on the second day of the AI training, and
it does not show in any way the limitations of NEXPERT. However,
when it comes to object manipulation, with relations between objects,
the story is different. But that is clearly another issue.
Daniel
|
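The call-out pattern Daniel describes, a rule fetching data from outside the knowledge base, can be sketched generically (in Python; the handler and condition names are invented, and this is not NEXPERT's actual callable interface, which is a C-level API): the engine evaluates a condition whose value is supplied by a registered external routine rather than by working memory.

```python
# Generic call-out sketch: a rule condition delegates data collection
# to a registered external routine, as a sensor read or a database
# query would in a real system. Names here are purely illustrative.

external_handlers = {}

def register(name):
    """Decorator registering an external routine under a handler name."""
    def wrap(fn):
        external_handlers[name] = fn
        return fn
    return wrap

@register("read_pressure")
def read_pressure():
    # Stand-in for a call into C code, a driver, or a database.
    return 4.2

def eval_condition(cond):
    kind, name, threshold = cond
    if kind == "external":  # call-out: fetch the value at match time
        return external_handlers[name]() > threshold
    raise ValueError(kind)

fired = eval_condition(("external", "read_pressure", 3.0))  # True
```

The design choice Daniel hints at is where the boundary sits: simple value call-outs like this are easy, while pushing structured objects and their relations back and forth across the interface is a different matter.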