Title:                  DEC TCP/IP Services for OpenVMS
Notice:                 Note 2-SSB Kits, 3-FT Kits, 4-Patch Info, 7-QAR System
Moderator:              ucxaxp.ucx.lkg.dec.com::TIBBERT
Created:                Thu Nov 17 1994
Last Modified:          Fri Jun 06 1997
Last Successful Update: Fri Jun 06 1997
Number of topics:       5568
Total number of notes:  21492
5517.0. "tcp_nodelay?" by CSC32::J_HENSON (Don't get even, get ahead!) Thu May 15 1997 18:03
ucx v4.1, ovms v6.2
Rather than attempt to paraphrase a customer's request, I'm posting
it as is. In a nutshell, though, he wants to know if the tcp_nodelay
option to setsockopt works, and what the implications of
set protocol tcp/nodelay are.
Jerry
===================================================
/*
I have a program which wants to receive data on a TCP/IP socket
at a 90ms rate. It 'connect's to a server, and then calls 'recv'.
The server cycles every 90ms, sending data to the client. The client
only receives data every 200ms, however. There is no overrun because
the sender combines the messages every 2nd-3rd send, just as
expected with TCP/IP.
I used an Ethernet sniffer to monitor the packets going between
the programs, and I saw that the receiving node waits 200ms after
receiving a packet before sending the TCP/IP ack. Because of
the Nagle algorithm, the sender does not send another partial
packet as long as there is an outstanding ack. That's why the
receiver only receives messages every 200ms.
I can fix this problem by using the following command on the
receiving (client) node:
$ UCX SET PROTOCOL TCP/NODELAY
However, this affects ALL users of TCP on the receiving node, and
I don't know what the ramifications of this are to overall
performance.
There is a 'setsockopt' call that allows the Nagle algorithm to be
disabled on individual sockets. This would allow the sending node
to send the messages without waiting for the ack. I tried to do
this, but it had no effect. I called 'getsockopt' before and after
'setsockopt', and the 'TCP_NODELAY' option did change value.
The setting just seems to be ignored.
Below is a program that demonstrates the problem. It works as both
the client and the server. To start it as a server, just run it.
To start it as a client, run it with a single parameter which is
the integer value of the server's IP address (base 10, host order!).
I get output like the following: (t = time in ms, d = data)
        Server              Client
        t      d            t      d
        125    (1)          56     (1)
        54     (2)          147    (2)
        89     (3)          200    (3)
        90     (4)          0      (4)
        89     (5)          199    (5)
        89     (6)          0      (6)
        89     (7)          0      (7)
        90     (8)          199    (8)
        89     (9)          0      (9)
        89     (10)         199    (10)
        89     (11)         0      (11)
        89     (12)         199    (12)
        89     (13)         0      (13)
        90     (14)         200    (14)
        89     (15)         0      (15)
        89     (16)         199    (16)
        89     (17)         0      (17)
        89     (18)         0      (18)
        89     (19)         199    (19)
        90     (20)         0      (20)
Does setsockopt(TCP_NODELAY) work?
Am I doing something wrong?
What are the drawbacks of $ UCX SET PROTOCOL TCP/NODELAY ?
*/
/****************************************************************************/
#include <stdlib.h>
#include <stdio.h>
#include <unixio.h>
#include <string.h>
#include <errno.h>
#include <socket.h>
#include <in.h>
#include <tcp.h>
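/* OpenVMS system-service prototypes, declared by hand here; with DEC C
   they would normally come from <starlet.h>. */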
void sys$gettim(__int64 *);
void sys$schdwk(int, int, __int64 *, __int64 *);
void sys$hiber(void);
static int Setup(int sock, char *arg) {
int one = 1;
struct linger ls = {0, 0};
struct sockaddr addr = {0};
struct sockaddr_in *in_params;
unsigned int len;
int rsock;
if (setsockopt(sock, SOL_SOCKET, SO_REUSEADDR, (char *)&one, sizeof(one))) {
printf ("Error setting reuseaddr sockopt: %s\n", strerror(errno));
} /* end if */
if (setsockopt(sock, SOL_SOCKET, SO_LINGER, (char *)&ls, sizeof(ls))) {
printf ("Error setting linger sockopt: %s\n", strerror(errno));
} /* end if */
/*--------------------------------------------------------*/
/* Read TCP_NODELAY before and after setting it, to verify that the
   option value really changes on this socket. */
one = -1;
len = sizeof(one);
if (getsockopt(sock, IPPROTO_TCP, TCP_NODELAY, (char *) &one, &len) != 0) {
printf ("Error checking NODELAY sockopt before: %s\n", strerror(errno));
} else {
printf ("Before setting NODELAY, value = %d\n", one);
}
one = 1;
#if 1
/* TURN OFF NAGLE */
if (setsockopt(sock, IPPROTO_TCP, TCP_NODELAY, (char *) &one, sizeof(one)) != 0) {
printf ("Error setting NODELAY sockopt: %s\n", strerror(errno));
exit(1);
}
#endif /* 1 */
one = -1;
len = sizeof(one);
if (getsockopt(sock, IPPROTO_TCP, TCP_NODELAY, (char *) &one, &len) != 0) {
printf ("Error checking NODELAY sockopt after: %s\n", strerror(errno));
} else {
printf ("After setting NODELAY, value = %d\n", one);
}
/*--------------------------------------------------------*/
len = sizeof(addr);
in_params = (struct sockaddr_in *) &addr;
in_params->sin_family = AF_INET;
in_params->sin_port = htons(5431);
if (arg) {
in_params->sin_addr.S_un.S_addr = htonl(atoi(arg));
if (connect(sock, &addr, len) == -1) {
printf ("Error from connect: %s\n", strerror(errno));
exit(1);
}
rsock = sock;
} else {
if (bind(sock, &addr, sizeof(addr)))
printf("Error from 'bind': %s\n", strerror(errno));
if (listen(sock, 1))
printf("Error from 'listen': %s\n", strerror(errno));
rsock = accept(sock, &addr, &len);
if (rsock == -1) {
printf ("Error from accept: %s\n", strerror(errno));
exit(1);
}
}
return rsock;
} /* end Setup */
/****************************************************************************/
int main(int argc, char *argv[]) {
__int64 time, last_time;
int sock, sock1;
int buff[1] = {0};
int bytes, cnt = 0;
char *arg = argv[1];
sock1 = socket(AF_INET, SOCK_STREAM, 0);
sock = Setup(sock1, arg);
if (!arg) {
/* Server only: schedule a repeating 90 ms wakeup (negative VMS
   delta time, in 100 ns units). */
time = (__int64) 90 * (__int64) -10000;
sys$schdwk(0, 0, &time, &time);
}
sys$gettim(&last_time);
for (;;) {
if (arg) {
bytes = recv(sock, &buff, sizeof(buff), 0);
} else {
sys$hiber();
bytes = send(sock, &buff, sizeof(buff), 0);
++buff[0];
}
sys$gettim(&time);
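/* last_time briefly holds the elapsed interval in ms for printing;
   it is reset to the current time at the bottom of the loop. */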
last_time = ((time - last_time) / ((__int64) 10000));
if (bytes == 4) {
printf("%d (%d)\n", (int) last_time, buff[0]);
} else {
printf("%d (xxx)\n", (int) last_time);
}
if ((last_time < 50) || (bytes < 1)) {
if (++cnt > 20) return 1;
} else {
cnt = 0;
}
last_time = time;
}
}
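One thing that stands out in the listing (an observation, not an official
answer): Setup() applies TCP_NODELAY to the socket passed in, which on the
server side is the *listening* socket, and the send() loop then uses the
socket returned by accept(). Whether an accepted socket inherits
IPPROTO_TCP-level options is stack-dependent. Below is a minimal
server-side sketch that sets the option on the accepted socket itself;
port 5431 is reused from the program above, and everything else is
illustrative only.
#include <stdlib.h>
#include <stdio.h>
#include <unixio.h>
#include <string.h>
#include <errno.h>
#include <socket.h>
#include <in.h>
#include <tcp.h>

int main(void) {
    int one = 1;
    unsigned int len;
    struct sockaddr_in addr;
    int lsock, csock;

    lsock = socket(AF_INET, SOCK_STREAM, 0);
    if (lsock == -1) {
        printf("socket: %s\n", strerror(errno));
        return 1;
    }

    memset(&addr, 0, sizeof(addr));
    addr.sin_family = AF_INET;
    addr.sin_port = htons(5431);          /* same port as the demo above */
    if (bind(lsock, (struct sockaddr *) &addr, sizeof(addr)) ||
        listen(lsock, 1)) {
        printf("bind/listen: %s\n", strerror(errno));
        return 1;
    }

    len = sizeof(addr);
    csock = accept(lsock, (struct sockaddr *) &addr, &len);
    if (csock == -1) {
        printf("accept: %s\n", strerror(errno));
        return 1;
    }

    /* Disable Nagle on the connected socket itself, after accept(),
       instead of relying on inheritance from the listener. */
    if (setsockopt(csock, IPPROTO_TCP, TCP_NODELAY,
                   (char *) &one, sizeof(one)) != 0) {
        printf("setsockopt(TCP_NODELAY): %s\n", strerror(errno));
        return 1;
    }

    /* ... send() the small periodic messages on csock here; each one
       should now go on the wire without waiting for the prior ACK ... */

    close(csock);
    close(lsock);
    return 0;
}
If inheritance is indeed the issue, moving the TCP_NODELAY call to after
accept() in the program above should change the timing on the wire even
without $ UCX SET PROTOCOL TCP/NODELAY.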