Title: DEC Rdb against the World
Moderator: HERON::GODFRIND
Created: Fri Jun 12 1987
Last Modified: Thu Feb 23 1995
Last Successful Update: Fri Jun 06 1997
Number of topics: 1348
Total number of notes: 5438
269.0. "Nat'l Database Symposium Trip Report, Dec. '88" by BROKE::FARRELL () Mon Dec 12 1988 23:07
*************
d i g i t a l
*************
TO: DISTRIBUTION DATE: 8-DEC-1988
FROM: Vickie Farrell
DEPT: Database BPM
LOC: ZK02-1/P05
EXT: 381-2512
ENET: QUILL::FARRELL
SUBJ: National Database and 4th/5th GL Symposium - Boston, Dec. 1988
I did not attend the general sessions given by George Schussel and other
consultants, but did attend several of the vendor presentations. In
addition, I presented Rdb at the Digital session and represented Digital
in the RTI hospitality suite where we featured "Ingres Tools for Rdb."
At his request, I had a meeting with Dr. George Schussel, president of
Digital Consulting, Inc., which sponsors the symposium. Two years ago, I
couldn't get him to spend 5 minutes with me. This week he asked me to
brief the senior consultants on his staff about our strategy and products.
The sessions I attended and discuss here are:
Ingres
Oracle
SQL/DS (given by IBM, not a consultant)
Teradata DBC 1012
Informix
Rdb
The most heavily attended sessions were Rdb, Ingres, Teradata, and Oracle.
Of about 200 attendees, each of these sessions drew well over 60 people.
Interestingly, approximately 80-90% of the symposium attendees use IBM
equipment, and none use DEC.
Ingres
------
Presenter was John Calendrello. I believe he is a corp. person from
Alameda. The demo was done by a local specialist.
He touted their multi-server architecture which provides:
- scalable high performance
- tailored configurations to host CPU
- VAXcluster automated recovery
There are no longer 2 processes per user. Each user gets a dedicated
front-end process and they share a few back-end processes. He claimed that
other vendors offer a single server, but the multi-server architecture
differentiates Ingres. He also claimed that it allows them to take
advantage of the multiple CPUs in an SMP machine and to set different
priorities for different servers.
He also touted their "AI-based" query optimizer and claimed that it is
unique. It predicts the cost of all access paths and automatically selects
the fastest. DBA can set the maximum cost w/ V6. It does not sound any
more AI-based than it ever did, but I spoke to a Digital sales support
person who was attending from upstate NY and he is familiar w/ benchmarks
that show that Ingres query performance is more predictable and consistent
across various queries than Rdb or Oracle. The EXECUTE DB command causes
the db to collect statistics to help the optimizer. It is not turned on as
the default.
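The cost-based selection he described is generic to relational optimizers and can be sketched abstractly. The following toy is an illustration of the idea only, not Ingres's actual optimizer; every name and cost formula in it is invented:

```python
# Generic sketch of cost-based access-path selection: estimate a cost
# for each candidate path from collected statistics, discard any path
# over the DBA's cost ceiling, and pick the cheapest survivor.
def choose_path(paths, stats, max_cost=None):
    """paths: {name: cost_fn}; stats: table statistics dict."""
    costed = {name: fn(stats) for name, fn in paths.items()}
    if max_cost is not None:
        costed = {n: c for n, c in costed.items() if c <= max_cost}
    return min(costed, key=costed.get) if costed else None

stats = {"rows": 100_000, "index_depth": 3}
paths = {
    "table_scan": lambda s: s["rows"],           # read every row
    "index_lookup": lambda s: s["index_depth"],  # descend the index
}
best = choose_path(paths, stats)
```

Note that the quality of the choice depends entirely on the statistics, which is why collecting them (as EXECUTE DB does) matters.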
They have implemented fast commit and group commit to minimize I/O.
In discussing their multi-file architecture, he did state that Rdb V1 was
single file and required the DBA to pre-allocate the disk space. I
corrected him off-line later and he said that he had not meant to say that.
V6 is due out Dec. 19 and will allow splitting a single table across
multiple disk drives.
RTI recently completed a close agreement with Cadre Technologies to sell
their Teamwork products as CASE tools for Ingres.
SQL/DS
------
The audience was not large. The presentation was very dry and several
people left before it was over.
In the past, sessions about IBM products were given by consultants and IBM
did not participate. In this case, an SQL/DS marketing person from the
development group in Toronto gave the presentation. There was no session on
DB2. The presenter was clearly an SQL/DS bigot and came close to
positioning it as a competitor to DB2; so much so that he apparently was not
stating the corporate party line and TWO other IBM people frantically waved
their hands to interject and added their statements to the positioning
question. In formal sessions such as these it is very unusual,
inappropriate and embarrassing for another employee to add comments.
The presenter claimed portability between OS/2, SQL/DS and DB2 that
customers say does not exist. He referred to SAA compliance several times.
According to George Schussel, it will take years, if not decades, for
IBM to fully implement SAA, and he will have long since retired to
a sailboat in the Caribbean by then.
Release 2 came out in June, with enhancements in Oct. These included
support for referential integrity, some distributed capability (didn't
specify) and support for VSE users as guests under VM. Improved performance
comes from better use of index, increased deadlock prevention and faster
sort. They added an extract tool that loads from flat files. SQL/DS
exploits VM V5 by allowing concurrent access to multiple databases from a
single application.
He stated that IBM has an internal requirement that a new version can't be
released without a performance increase over the previous version. He cited
performance on a database workload which included BOM, data entry and bank
teller transactions. Test was run on a 3090-M600 with .18 sec response
time. The system performed 248 TPS, or 41 TPS per processor.
He stated that SQL/DS is THE strategic RDB for VM and VSE and that it is an
SAA product.
Q: How does SQL/DS fit in the mainframe/mini/micro configuration (implying
the need for heterogeneous communication)?
A: Wherever you have VM, you should run SQL/DS.
Q: SQL/DS and DB2 seem to be competing products. How are they positioned?
The speaker contrasted them, but the 2nd IBM employee stated that the goal
is that both run on all operating systems w/ DB2 optimized for MVS and
SQL/DS optimized for VM and VSE. They will use similar tools and
interoperate.
Future directions:
- Multi-site update
- Horizontal/vertical partitioning
- Global optimization (to join tables across the network)
- Better monitoring showing what users are doing
- Support for larger fields
- Faster data load
- Basis for mail, allowing users to store, recall, sort their mail msgs
- Improved text search
- Repository
- Support for more users/database (currently 5000)
- Outer join
- International sorting sequence
- Extract to remote site
- Adherence to Stds (SAA, ANSI SQL, FIPS)
A statement that was made late in the presentation is that SQL/DS is very
successful with over 5000 licenses sold worldwide.
It was often the case that what other vendors hype or discuss for the
future, WE ALREADY HAVE. You will see even more of this later in the
report.
Teradata DBC/1012
-----------------
Speaker was David Clements, director of marketing.
The company went public in Aug. 1987. They have 14 offices in Europe and
are opening offices in Asia and the South Pacific now. They shipped over
500 new and add-on processors in quarters 3 and 4 of FY88. In case you
are interested, 10 to the 12th power is one tera. Thus, the name of the
product DBC/1012.
Their largest installation is at Kmart, an IBM stronghold:
270 MIPS (currently 270 processors)
200 GB database
inventory control application
The average system is 20 processors.
The parallel processor system supports from 3 to 1024 processors and can be
shared by multiple users and multiple hosts. It is connected to the
mainframe host via a block mux channel and looks like a peripheral to the
channel.
A second type of communications processor allows it to connect to LANs,
minis and workstations.
Data is automatically distributed evenly across the disks, controlled by
firmware and using a hashing algorithm.
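The even distribution described above can be sketched as hash partitioning. This toy stands in for the DBC/1012 firmware algorithm, which is not public; the hash function and bucket rule here are invented for illustration:

```python
# Toy sketch of hash-distributed storage: a record's key is hashed, and
# the hash selects the disk/processor that owns it, spreading rows
# evenly with no central directory.
import hashlib

def owner(key, n_disks):
    """Deterministically map a record key to one of n_disks buckets."""
    digest = hashlib.md5(str(key).encode()).hexdigest()
    return int(digest, 16) % n_disks

# With a reasonable hash, 10,000 rows spread roughly evenly over 8 disks.
counts = [0] * 8
for row_key in range(10_000):
    counts[owner(row_key, 8)] += 1
```

Even distribution is what lets capacity scale by adding processors: the same mapping with a larger n_disks re-spreads the rows.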
Each processor is made up of the following components:
CPU: Intel 1 MIP 80286, to be replaced soon by a 3 MIP 80386
I/O processor
processor for numeric processing
2 firmware machines connecting to the 2 redundant host interfaces
32K high speed cache
4 MB memory
They claim they have just "passed C2 level security" and are working on B1.
However, since the National Computer Security Center (NCSC) standard for
databases is still in PRELIMINARY DRAFT form, anyone can make that claim.
It really can't be disputed.
They provide fault tolerant disk storage by allowing you to turn on the
"fallback" option. Each record is stored twice always on different disks.
He claimed that a failed disk can be brought back online while the database
is being used. When additional processors are added, the system
automatically reconfigures, taking selected elements from the existing disks
and populating the new ones.
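The fallback scheme can be illustrated with a minimal placement rule. This is a sketch of the idea only (two copies, always on distinct disks); the modulo rule is invented and is not Teradata's actual placement logic:

```python
# Toy "fallback" placement: each record gets a primary disk and a
# different fallback disk, so any single disk failure leaves a live
# copy of every record.
def place(key, n_disks):
    primary = key % n_disks
    fallback = (primary + 1) % n_disks  # a different disk when n_disks >= 2
    return primary, fallback

def surviving_copies(key, n_disks, failed_disk):
    """Disks still holding the record after one disk fails."""
    return [d for d in place(key, n_disks) if d != failed_disk]
```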
A fully active data dictionary and directory tracks usage of data elements
and user security. Cobol and PL/1 preprocessors run on MVS or VM.
For 4GL, they provide their own ITEQ interactive query as well as an
interface from Focus, NOMAD2, Intellect and several others.
They do provide value-based security.
They connect to a VAX/VMS system via Ethernet, TCP/IP or OSI. They also
connect to UNIX V workstations and PC/DOS.
Teradata used the Debit/Credit benchmark to test performance:
# of CPUs TPS
--------- ---
40 41
80 82
128 130
Note that each processor is currently 1 MIP. 1 TPS per MIP would indicate a
pretty bad price/performance ratio compared to ANYONE including IBM. The
above numbers are not very impressive in absolute terms either considering
performance is their primary message.
Informix
--------
The speaker was Ginny Helmar, district pre-sales manager. As part of their
session, they sponsored a drawing for a free PC Informix kit.
Informix recently merged with Innovative Software which markets office
automation software. The 2 product capabilities are being integrated.
Informix runs on 150 different platforms, the largest number of any
RDBMS; the list includes AIX and Cray. They offer precompilers for C,
Ada, COBOL. In addition to the RDBMS and tools, they sell C-ISAM, which
they say is the accepted standard for ISAM files on UNIX.
Apparently, they use a FE/BE server architecture. The front-end components
are: 4GL, I-SQL, ESQL, Datasheet. These front-end tools work w/ Informix
and two AND ONLY TWO major databases: DB2 and Rdb. She stated very strongly
that they selected Rdb because it is the best database on VMS. It offers
the best performance, superior technology, and fully supports the VAXcluster.
She also stated that it will be packaged with VMS!! The 4GL, I-SQL and ESQL
front ends are scheduled to be available for Rdb in early Q2 (April '89).
Informix features that she hyped:
- better integration between the tools than the competition
- "List of Values" capability and pop-up menus (I didn't hear anything
discussed that is not in Rally)
- Automatic locking (do you believe it!!??)
- Data definition commands (not just data manipulation)
Futures:
- ANSI2 SQL compliance in '89
- 2 Phase Commit
- Distributed integration of non-Informix SQL databases
- Support for replicated data
- deferred update
- snapshots
- text search capability
They market Informix TURBO for OLTP (UNIX platform only). She claimed that
the TURBO high performance server makes them the fastest RDBMS on UNIX.
Some of the features that it uses:
- shared memory
- supports unlimited table size
- a table can span disks
- checkpoint/recovery
Futures are online archiving and disk mirroring, support for blob data type,
which she said was a precursor to object-oriented database, and I thought I
heard variable length character support.
The TURBO performance graph was a classic:
|
| ----
| | |
TPS | | |
| | | ____ ----
| | | | | | |
| | | | | | |
| | | | | | |
---------------------------------------
TURBO B-tree hash
She did NOT indicate:
- the benchmark used
- the TPS rate attained
- response time, database size, number of users, etc.
- the B-tree or hash access databases used
- hardware platform
When the above information was requested, she didn't know any of the
answers.
Oracle
------
Speaker was John Nugent from the local office. He gave the phone # for the
local sales office, announced the December $199 PC Oracle special and gave
the Ca. # to call to order it.
The opening slide read:
ORACLE: THE FASTEST GROWING COMPANY IN THE FASTEST GROWING INDUSTRY
IN THE WORLD
Oracle has doubled in size each year for the past 6 or 8 years. Currently at $282M
in revenue, Larry Ellison's goal is to reach $506M in '89 and $1B in 1990.
Nugent says they are right on target. He said they have 4000 installations
and 2000 customers. (25,000 PC customers)
They have nice new slides (only slides that were as nice as ours), but
they are still doing the "successful key decisions chronology" pitch that
they've done for years.
Their marketing message is: Compatibility, Portability, Connectability.
The presentation went on to elaborate.
Compatibility: They are more compatible w/ DB2 than other SQLs. The same
query will return the same answer for DB2 and Oracle, but a different answer
for other SQLs due to their lack of null values, for example. (I don't
think many people in the audience bought this).
Portability: Complete implementation on all platforms. New ports are Wang
VS, DOS-VSE.
Connectability: Oracle is network, location and machine independent. They
don't have 2PC yet and can't give a timeframe for when they will. They say no
one does. I guess they hope that the audience is not aware of Tandem,
Interbase, etc. He did highlight their multi-site update (from multiple
transactions).
They achieve location transparency through a dictionary on each node.
The Oracle tools talk to IMS (read/write), DB2 and SQL/DS. They are working on RMS and
VSAM. Even proprietary Rdb has had both of those for years.
The tool set now includes:
SQL*Graph, SQL*Calc, SQL*Plus, SQL*Forms, SQL*Menu, SQL*QMX, SQL*Loader,
SQL*Connect, SQL*Net, SQL*ReportWriter, Easy*SQL, precompilers,
Export/Import, Lotus 1-2-3 Add-in and the new CASE tools.
Here are the features they highlighted:
- Fully relational
- Integrated TP
- Automatic recovery
- Read consistency
- Support for null values
- NOW Record-level locking
- Open architecture (co-exists w/ 3GL programs)
See anything in there that Rdb hasn't had since V1 and can't do better than
Oracle??
I really liked the way he defined their TP support. "Oracle has complete
transaction atomicity."
V6 - World Record Performance. This was a 3-year development project which
involved a complete re-write of Oracle. It should be available in 3 months.
(Remember that it was announced on July 18, the day before we announced
Rdb V3.0). He had a slide that showed the TP1 figures on various
processors. I wasn't fast enough to get any except VMS. (49 and 43 TPS on
a VAX 6240, depending on the database size). He claimed that the figures
were audited by Codd & Date Consulting Group. In fact, Tom Sawyer did
validate them, but referred to the benchmark as a "banking benchmark"
because it did not even meet the TP1 requirements.
He also discussed their new financial packages: general ledger, A/P,
assets and purchasing w/ personnel and A/R coming soon. He claimed that
they are better than the competition which is based on proprietary flat
files. This is one of the ways Oracle plans to sustain their growth.
I talked to a representative of McCormack and Dodge at the Executive Seminar
last week and the Oracle VARs are really angry with Oracle for turning on
them and competing w/ them. This is not the kind of company you want to do
business with.
Oracle*Mail. The plan is impressive and ambitious. It remains to be seen
if they can pull this off. Portable across PC, VAX, mainframes and UNIX.
Distributed, first mail built on RDBMS. Passes graphics, forms and text.
The last topic discussed was support. Nugent announced that Oracle does $25
million/year in consulting in New England! It is very interesting that even
their phone support people are highly incentivized. They receive bonuses
based on customer satisfaction surveys!! These people leave NOTHING to
chance.
Rdb
---
I gave the Rdb presentation, so I can't provide an objective critique.
The presentation is new. The 35mm slides are beautiful and they match
the format used for the OLTP slide shows. The show is scripted and
available from the Corp. slide library. Order # 863.
The presentation highlights our strategy, V3 enhancements and OLTP
capabilities. In contrast to Oracle's discussion of OLTP, I compared
us to other RDBMS vendors who claim to have OLTP. Our capabilities include
queuing support for transactions with different priorities, multi-threaded
capability to support large numbers of users, audit trails, an additional
level of security for controlling access to applications as well as data,
etc.
One of the slides is a quote from Colin White, a noted database consultant,
Editor of InfoDB Newsletter, and database columnist in "Computer Decisions":
"As hardware architectures continue to develop... it will become even more
difficult, if not impossible, to build a single DBMS that will work well on
all available hardware architectures and operating systems. Simple ports of
DBMSs aren't likely to provide the answer."
Interestingly, the ONLY questions I received were regarding SQL services for
UNIX.
T.R | Title | User | Personal Name | Date | Lines |
---|
269.1 | Did you have the right RTI ? | BANZAI::HIGGS | Festooned with DMLs | Tue Dec 13 1988 04:28 | 17 |
| < Note 269.0 by BROKE::FARRELL >
-< Nat'l Database Symposium Trip Report, Dec. '88 >-
RTI recently completed a close agreement with Cadre Technologies to sell
their Teamwork products as CASE tools for Ingres.
Vickie --
According to both Digital News and Digital Review (both of which
have been known to be wrong...) the RTI that Cadre just announced
a joint program with is Research Triangle Institute (RTI), a non-
profit research institute, P.O. Box 12194, Research Triangle Park,
N.C. 27709.
As far as I know, Alameda is not associated with the folks in N.C.
Or maybe you can shed light on an additional agreement between Cadre
and the RTI of Ingres fame ?
|
269.2 | Many companies can have the initials RTI. | DEBIT::DREYFUS | | Tue Dec 13 1988 19:44 | 7 |
| There are at least two companies with the initials RTI. Both of them
have relationships with CADRE.
The RTI (Ingres) relationship focusses on integrating CASE tools with the
4GL tools.
--david
|
269.3 | | BANZAI::HIGGS | Festooned with DMLs | Tue Dec 13 1988 22:51 | 17 |
| < Note 269.2 by DEBIT::DREYFUS >
-< Many companies can have the initials RTI. >-
There are at least two companies with the initials RTI. Both of them
have relationships with CADRE.
The RTI (Ingres) relationship focusses on integrating CASE tools with the
4GL tools.
The reports that I read in Digital Review and Digital News also
talk about integrating CASE with 4GL. I didn't see anything
mentioned about RTI (INGRES), although it could have been a while
ago. It would be interesting to understand what each agreement
entails, and whether they relate to each other (as seems likely).
Must be just as confusing for CADRE as for us, if they are dealing
with 2 RTIs at the same time, especially when they worry about who
they pay what 8^).
|
269.4 | Nat'l DB Symposium: Another report | BROKE::DREYFUS | | Tue Dec 13 1988 23:33 | 245 |
| Date: 12/13/88
Digital Consulting's National Database and 4/5 GL Symposium,
Boston, MA, December 5-9, 1988
The symposium consisted of general sessions and vendor
presentations. I will discuss:
- General theme of the general sessions (paraphrased)
- Vendor information (abbreviated form)
Note: The best presentation was done by Jeffrey Tash, President of
Database Decisions. It was both informative and humorous.
GENERAL THEMES AND INFORMATION:
MIS/DP :
MIS managers have traditionally been involved in automating
mundane tasks and have focused on reporting where the company has
been. They need to become more involved in the strategic planning
and operation of the corporation and then help deploy technology
to contribute to the business' goals.
MIS/DP need to move from being a splinter group or back-room
organization to being intimately involved. This requires that
MIS/DP learn more about how the business operates.
INDUSTRY GROWTH :
The computer industry is very much like the car industry of the
1930s: very powerful cars with access to very limited roads.
Advances in fiber optic technology will allow for the development
of electronic super-highways. This network infrastructure will
allow for an explosion in the use of computers just as the
interstate highway system caused an explosion in the use of the
automobile.
While the cost of people continues to rise, the costs of
communications, processors, bulk storage, and main memory continue to
drop. The net effect is that we need to start optimizing for
people and not machines. As processing power, memory, and
bandwidth become free, we can more effectively deploy technology
to improve human productivity. We need to start satisfying user
needs and not MIS/DP needs.
THE PC AND COOPERATIVE (DISTRIBUTED) PROCESSING :
Users want control over their applications, a WYSIWYG (What You
See Is What You Get) interface, shared data, and transparent
communications. Users want a WIMP interface (Windows, Icons,
Menus, Pull-downs). PCs and networking advances make this
possible.
The PC is more than an expensive terminal that turns every user
into a system manager. What DEC doesn't understand as it wrestles
with VTs and PCs is that the PC provides a qualitative difference
to the way the user sees the system. THE USER INTERFACE IS THE
SYSTEM. This qualitative difference is provided by memory
mapping.
Memory mapping is the ability to map memory to points on the
screen. Instead of writing escape sequences to ASCII terminals,
applications write to memory pages which get mapped to the
terminal. In some cases one byte of memory is mapped to a single
pixel (point) on the screen.
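The contrast can be sketched with a toy model: treat the screen as a flat byte array and compare a direct store against generating an escape sequence. This is an illustration of the idea only; real display hardware and terminals differ.

```python
# Toy memory-mapped display: the "screen" is a byte array, and drawing
# a cell is a single indexed store. The ASCII-terminal alternative must
# build (and the terminal must parse) an escape sequence per update.
WIDTH, HEIGHT = 80, 25
screen = bytearray(WIDTH * HEIGHT)   # one byte per screen cell

def put_cell(x, y, value):
    screen[y * WIDTH + x] = value    # direct store into mapped memory

def escape_sequence(x, y, ch):
    # ANSI cursor-position sequence: ESC [ row ; col H, then the char
    return f"\x1b[{y + 1};{x + 1}H{ch}"

put_cell(10, 5, ord('*'))
```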
Thus, through the PC, we can provide the WIMP interface at a very
reasonable cost. Using a mainframe to do the same by writing
escape sequences is death [ed note: as anyone using RAMP-C to
benchmark a VAX can attest to].
The use of the PC as the user interface leads to server-based
cooperative processing. The PC uses 'task-to-task' communication
over the network to allow the PC to talk to an application server
which may then talk to a database server. In some cases, the
three different levels are not required.
Before diving into how an application can take advantage of
cooperative processing, we must first re-architect our
applications. Break the application into four virtual processes.
One process handles the user interface (WIMP interface) and does
the display and collection of data. The next process provides
process logic. It determines the next operation the user may
require or may actually perform the task. This process may be
written in COBOL [ed note: this example is an OLTP application].
The third module is transaction logic: handle the start and end of
transactions, handle the SQL statements and conditional logic in
between. The fourth process is the database engine (with an SQL
interface).
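The four-process decomposition above can be sketched in a single toy program, with each layer behind its own interface so that any one of them could be moved to a different machine. All class and function names here are invented for the illustration:

```python
# Sketch of the four virtual processes: presentation, process logic,
# transaction logic, and the database engine, each behind a separate
# interface so layers can be replaced or relocated independently.

class DatabaseEngine:                 # process 4: data storage, "SQL" interface
    def __init__(self):
        self.rows = {}
    def execute(self, op, key, value=None):
        if op == "INSERT":
            self.rows[key] = value
        elif op == "SELECT":
            return self.rows.get(key)

class TransactionLogic:               # process 3: start/end of transactions
    def __init__(self, engine):
        self.engine = engine
    def record_order(self, key, value):
        # begin transaction ... SQL ... commit (elided in this toy)
        self.engine.execute("INSERT", key, value)
        return self.engine.execute("SELECT", key)

def process_logic(txn, form_data):    # process 2: decides what task to perform
    return txn.record_order(form_data["id"], form_data["item"])

def presentation(form_data):          # process 1: display/collect user data
    txn = TransactionLogic(DatabaseEngine())
    return process_logic(txn, form_data)
```

In the cooperative topology, presentation and process_logic would live on the PC while TransactionLogic and DatabaseEngine live on the mini or mainframe.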
Traditionally, these four parts of an application are intermixed
and reside upon a mainframe. Cooperative processing says that you
can place each process on the machine that can serve that process
at the lowest cost. Thus, the presentation and processing logic
can be placed upon a PC and the transaction and database logic can
be placed on a mini or mainframe. [ed note: sounds similar to
distributed ACMS with the exception that PCs talk to the host
instead of VTs through a microVAX].
The first generation of cooperative processing might even put the
transaction logic on the PC and allow the PC and the host to
communicate with SQL [ed note: SQL Services?]. With stored
procedures (such as Oracle and Sybase), the transaction logic can
be moved into the database management system.
This cooperative approach leads to a change in the paradigm of who
controls the application. Traditionally, the program controlled
the interface. Now, the interface controls the program.
One of the reasons that all the procedural design methodologies
have failed to improve programmer productivity is that they have not
reduced maintenance costs, which are the dominant cost. The
problem is that making a change in most applications is
complicated by the inability to track the usage of data and
variables. It is often easier to rewrite code than reuse it.
The cooperative focus to data processing leads one to a more data
driven design - centered around relational databases. A change to
any one of the four virtual processes doesn't need to impact any
of the other processes. The user interface can be completely
changed without impacting the transaction or database logic.
The added advantage to this approach is that the data model
changes a lot less frequently than the procedures of the
organization. Applications designed around the data, therefore,
will need less change or will be more easily changed than those
designed around the organization's operating methodologies.
The distributed (cooperative) processing approach leads to a three
tiered, server-based computing topology. Microcomputers
(workstations) are given to knowledge workers (clerks no longer
exist). These micros use memory mapping and dedicated processing
power to allow knowledge to be inexpensively placed in the hands
of those that need it. We no longer need clerks doing heads-down
data entry. These knowledge workers with PCs are connected to
mini computers that provide services to the PCs. These services
include local area networking, computing, mail, file, print, and
database (for reports, not OLTP). Mainframe data can be moved to
the servers for local access by local users [ed note: VIDA goes
here]. The mainframe system is used for OLTP, batch reporting,
and clerks on terminals (this is changing).
SAA, OS/2, UNIX, DEC :
The minicomputer area is the area of greatest change. It is the
area that will see the greatest competition from the UNIX vendors
- watch out DEC!
IBM has recognized distributed computing and is positioning
Presentation Manager to provide the common WIMP interface. DEC is
competing with DECwindows.
The key concepts of SAA are a common user interface (presentation
manager), a common programming interface [ed note: VAX calling
standard], and common applications (source code compatibility).
The strategic components are the presentation interface, the
languages, and the database interface (OS/2 EE connects to DB2).
SAA will level the playing field between PC, mini, and mainframe
software vendors. This will allow IBM to draw upon the talent and
innovation of the software companies (IBM can't do software).
IBM will try to OWN the network. This is the battle ground.
OS/2 will be successful because of IBM, but it is a real operating
system. It takes many MB of memory, has five feet of
documentation, and is complex. These factors may slow its
success.
VENDORS :
Two of the more interesting presentations/products:
SYNERGIST by GATEWAY systems 517/349-7740 :
Synergist provides an Rdb-compatible, cooperative processing system
for PC application developers. Rdb is used as a database server
for applications that execute on the PC. The focus of the product
is data entry.
SYNERGIST provides an application development environment for the
professional developer. There is a great deal of functionality
here. The goal is to have the PC process data, validate it, etc,
and then transfer it up to the host.
The architecture is one of the Synergist environment running on
the PC and a Synergist server running on the host. In either
case, the application can call out to 3GL code to perform
necessary tasks.
The marketing strategy of the company is to capitalize on
cooperative processing.
They support the following networks: LAN, X.25, TCP/IP, DECnet,
NOVELL.
One of the biggest features is the ability to synch up PC
applications and databases with those on the host.
- database
Synergist can use both a PC as well as a host database. This
allows the use of laptop computers disconnected from the host. At
the end of a day, or whenever, the PC user can integrate the data
on the host and PC. The PC application can use either database
transparently.
- application
When a PC connects to the host to perform a task, the Synergist on
the PC checks with the host to make sure that the PC is running
the latest version of the application. If not, a new version of
the application is downloaded. This simplifies the problem of
application distribution.
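The version check described above can be sketched as a single host lookup plus a conditional download. This is a toy illustration of the idea, not Synergist's protocol; the application name, version numbers, and image bytes are all invented:

```python
# Toy application sync: before running a task, the PC asks the host for
# the current application version and downloads a fresh copy if its
# local one is stale; otherwise it keeps running the local copy.
HOST_APPS = {"order_entry": (3, b"<version-3 program image>")}

def sync_application(name, local_version, local_image):
    """Return the (version, image) the PC should run after the check."""
    host_version, host_image = HOST_APPS[name]
    if local_version < host_version:
        return host_version, host_image   # stale: download the new copy
    return local_version, local_image     # current: keep the local copy

version, image = sync_application("order_entry", 2, b"<old image>")
```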
NOVELL :
Novell was pushing their four data management products
- NetWare SQL
- XQL
- Btrieve
- Xtrieve
NetWare SQL is an SQL database that runs as a "value added
process" under a NetWare server. It lays claim to distributed
database processing but is functionally limited. It uses
table-level locking to simplify concurrency control and recovery
issues. However, it is inexpensive and compact.
XQL is the SQL API for application programmers to access NetWare
SQL. [ed note: Similar to SQL Services but with a more extensive
call interface]
Btrieve is an interface into the low-level record management
system. It provides transaction management, record locking,
indexes, flexible record formats. NetWare SQL is layered upon
Btrieve.
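The trade-off of table-level locking can be made concrete with a toy SQL-ish layer over a record manager: one lock per table keeps concurrency control and recovery simple but serializes all access to the table. This sketch is illustrative only and is not NetWare SQL or Btrieve internals:

```python
# Toy SQL layer over a Btrieve-like record manager, using a single
# table-level lock: every statement locks the whole table, which is
# simple and compact but limits concurrency.
import threading

class RecordManager:                  # low-level record store
    def __init__(self):
        self.records = {}

class Table:
    def __init__(self):
        self.store = RecordManager()
        self.lock = threading.Lock()  # ONE lock for the entire table

    def insert(self, key, row):
        with self.lock:               # table-level, not record-level
            self.store.records[key] = row

    def select(self, key):
        with self.lock:
            return self.store.records.get(key)

t = Table()
t.insert(1, {"name": "ACME"})
```

By contrast, Btrieve itself offers record locking; the table-level simplification is the SQL layer's choice.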
Xtrieve is an end-user query facility.
|