TCP/IP Networking

Basics

Frontier's TCP/IP layer implements two kinds of streams. An active stream is created by calling tcp.openStream to connect to a peer. It can be written to and read from with a variety of builtins.tcp verbs. It's closed by calling tcp.closeStream (a soft close) or tcp.abortStream (a hard close). A passive stream (or listener) is created by calling tcp.listenStream to listen on a specified port and address for incoming connections. If a peer connects to this port, a new active stream is created and passed on to a daemon script (usually inetd.supervisor, now also kernelized in langhtml.c) to handle the connection. A passive stream is closed by calling tcp.closeListen. The only other kernel verb that can be applied to passive streams is tcp.statusStream.

The builtins.tcp verbs are implemented as thin wrappers in the switch() statement in langfunctionvalue in langverbs.c. These wrappers do little more than fetch the parameters from the UserTalk code tree and then call the actual platform-dependent implementations.
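For orientation, here's a minimal sketch of what one of these wrappers might look like. This is not the actual code from langverbs.c: the case label, the parameter-fetching helper, and the platform routine name are all assumptions made for illustration.

  /* hypothetical fragment of the switch () statement in langfunctionvalue */

  case closestreamfunc: { /* assumed token for tcp.closeStream */
      long stream;

      /* fetch the stream number from the UserTalk code tree */
      if (!getlongparam (hparam1, 1, &stream))
          return (false);

      /* delegate the actual work to the platform-dependent layer */
      return (setbooleanvalue (fwsNetEventCloseStream (stream), v));
      }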

The Mac PPC version of the TCP/IP networking code is implemented in OpenTransportNetEvents.c. The Windows version is implemented in WinSockNetEvents.c.

The Mac 68k version shares most of its code with the Windows version by using a modified version of GUSI 1.x to emulate a BSD socket API. I haven't been able to figure out exactly which version of GUSI 1.x we are using. The modifications seem to be concerned with making DNS lookups thread-safe and with improving the closing and aborting of connections.

In both files, there's some debugging code that you can activate to log information dynamically to a text file. Look for a section near the top of each file where TCPTRACKER is defined. The output is essentially a running history of the information you get in the expanded About Frontier window after calling mainWindow.showServerStats(true).

WinSockNetEvents.c

The Windows version of the TCP/IP code uses the Winsock API; more specifically, it almost exclusively uses the subset of the Winsock API that is compatible with the BSD sockets API. All the socket calls are documented in the MSDN library.

The sockets always operate in synchronous blocking mode, for both active and passive streams, i.e. if a socket operation can't be completed immediately, the function call won't return until it has completed. Therefore, it's important to wrap all WinSock API calls that may not return immediately in a pair of releasethreadglobals / grabthreadglobals calls so that other threads won't be blocked from execution.
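As a minimal sketch of that pattern, here is a hypothetical helper around recv. The releasethreadglobals / grabthreadglobals calls are the real kernel routines named above, but their prototypes and everything else in this snippet are assumptions.

  #include <winsock2.h>

  extern void releasethreadglobals (void); /* real kernel routines; */
  extern void grabthreadglobals (void);    /* prototypes assumed here */

  static long blockingreceive (SOCKET s, char *buf, int len) {
      long ctread;

      releasethreadglobals (); /* let other Frontier threads run... */

      ctread = recv (s, buf, len, 0); /* ...while this call blocks */

      grabthreadglobals (); /* reclaim the globals before touching kernel state */

      return (ctread);
      }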

The kernel representation of both active and passive streams is a struct of type tysockRecord. There's a limit of 255 simultaneous connections since only that many tysockRecord structs are allocated as a static array. What tcp.openStream and tcp.listenStream return is the index of the stream's tysockRecord struct in that array.
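Here's a sketch of that indexing scheme. The real tysockRecord carries much more state; only the sockID field is taken from this page, and the table would have to be initialized with INVALID_SOCKET in every slot at startup.

  #include <winsock2.h>

  #define maxsockrecords 255

  typedef struct tysockRecord { /* heavily simplified; the real struct has more fields */
      SOCKET sockID; /* INVALID_SOCKET marks a free slot */
      } tysockRecord;

  static tysockRecord sockrecords [maxsockrecords];

  /* find a free slot; its index is what tcp.openStream and
     tcp.listenStream hand back to UserTalk */
  static long findfreesockrecord (void) {
      long i;

      for (i = 0; i < maxsockrecords; i++)
          if (sockrecords [i].sockID == INVALID_SOCKET)
              return (i);

      return (-1); /* all 255 slots are in use */
      }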

The implementation of active streams is fairly straightforward and is best studied by using the kernel implementations of the various builtins.tcp verbs in langverbs.c as starting points.

When you create a passive connection by calling tcp.listenStream, fwsNetEventListenStream creates a new socket, binds it to the specified port and address, and switches it to listening mode. It also creates a new kernel thread that is responsible for handling incoming connections. The thread's main function is fwsacceptingthreadmain. It sits in a loop and continually checks for incoming connections. For every incoming connection, it calls fwsacceptsocket, which accepts the connection, thereby creating a new active stream, and spawns a new UserTalk thread for executing the callback script that was specified in the tcp.listenStream call.
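The setup sequence amounts to the classic socket / bind / listen dance. A stripped-down sketch, with all error handling and Frontier-specific bookkeeping omitted; the function name and signature are inventions, not the real fwsNetEventListenStream:

  #include <winsock2.h>
  #include <string.h>

  static SOCKET startlistening (unsigned long addr, unsigned short port, int depth) {
      SOCKET listensock = socket (AF_INET, SOCK_STREAM, IPPROTO_TCP);
      struct sockaddr_in sin;

      memset (&sin, 0, sizeof (sin));
      sin.sin_family = AF_INET;
      sin.sin_addr.s_addr = htonl (addr);
      sin.sin_port = htons (port);

      bind (listensock, (struct sockaddr *) &sin, sizeof (sin));

      listen (listensock, depth); /* switch the socket to listening mode */

      /* from here, fwsacceptingthreadmain loops, accepting connections
         and spawning a UserTalk thread for each one */
      return (listensock);
      }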

Be extremely careful not to set the sockID field of a tysockRecord struct to INVALID_SOCKET before you are completely done with it. Once the sockID field is set to INVALID_SOCKET, the socket record can immediately be re-used for another connection by another thread.
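In other words, the teardown order matters: finish every operation on the record first, and flag the slot as free last. A sketch, reusing the simplified tysockRecord from the snippet above:

  static void releasesockrecord (tysockRecord *rec) {
      closesocket (rec->sockID); /* still safe: the slot can't be grabbed yet */

      /* ...any remaining cleanup that touches the record goes here... */

      rec->sockID = INVALID_SOCKET; /* last step: only now may another thread reuse the slot */
      }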

OpenTransportNetEvents.c

The Mac PPC version of the TCP/IP code uses asynchronous blocking endpoints. There are two notifier functions, one named Notifier for active streams, and another one named ListenNotifier for passive streams. In asynchronous blocking mode, all Open Transport calls return immediately so that the calling thread isn't blocked. However, the endpoint operation may not complete until later; when it completes, the notifier function gets called. To avoid tricky synchronisation issues, we use the tilisten module, which serializes simultaneous incoming connections. This module is only available in Open Transport 1.1.1 and later. Anyone who is serious about running a webserver should be running a much more recent version of Open Transport (or so says Chuck Shotton).
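For readers new to Open Transport, here's a hypothetical skeleton of a notifier for active streams. The signature and the event codes are standard Open Transport; the cases shown are a small subset, and nothing here is copied from the actual Notifier function. Since notifiers can be called at deferred-task time, they should only record what happened and let the calling thread act on it.

  #include <OpenTransport.h>

  static pascal void Notifier (void *contextPtr, OTEventCode code,
          OTResult result, void *cookie) {
      /* contextPtr points at the per-connection record for this endpoint */

      switch (code) {

          case T_CONNECT: /* an asynchronous OTConnect call has completed */
              break;

          case T_DATA: /* incoming data is ready to be read with OTRcv */
              break;

          case T_DISCONNECT: /* the peer disconnected or aborted */
              break;
          }
      }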

All these concepts are explained quite well in Inside Macintosh: Networking with Open Transport. OpenTransportNetEvents.c also borrows heavily from the OT Virtual Server sample code, which is very well commented. I recommend studying it for a couple of hours before working on OpenTransportNetEvents.c. The main difference between the two is that our code doesn't handle reading from and writing to the stream directly in the Notifier function, because our requirements for reading and writing aren't quite as simple as in the sample code.

The kernel representation of an active stream is a struct of type tyendpointrecord. There's no hard limit on the number of active streams in Frontier. The tyendpointrecord structs are allocated dynamically. After the connection has been closed, the struct is linked into the sIdleEPs list for later re-use. What's returned by tcp.openStream is a pointer to the tyendpointrecord struct associated with the new connection.

The kernel representation of a passive stream is a struct of type tylistenrecord. The tyendpointrecord structs for the active streams to be spawned by the passive stream are allocated right in tcp.listenStream. The number of allocated tyendpointrecord structs is fixed and given by the depth parameter of the tcp.listenStream call, effectively limiting the number of simultaneous incoming connections for the listener. After an active connection spawned by the listener has been closed, the tyendpointrecord struct is linked into the idleEPs list of the tylistenrecord for later re-use. What's returned by tcp.listenStream is a pointer to the tylistenrecord struct associated with the new listener.
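A sketch of the two record types and the recycling pattern. Only the names tyendpointrecord, tylistenrecord, idleEPs, and readyEPs come from this page; the field layout is an assumption, and the real structs carry much more state.

  #include <OpenTransport.h>

  typedef struct tyendpointrecord {
      EndpointRef ep; /* the Open Transport endpoint */
      struct tyendpointrecord *next; /* link for the idle and ready lists */
      } tyendpointrecord;

  typedef struct tylistenrecord {
      EndpointRef listenEP; /* the listening endpoint */
      tyendpointrecord *idleEPs; /* closed connections, awaiting reuse */
      tyendpointrecord *readyEPs; /* accepted by ListenNotifier, awaiting a thread */
      } tylistenrecord;

  /* when a spawned connection closes, its endpoint goes back on the idle list */
  static void recycleendpoint (tylistenrecord *listener, tyendpointrecord *ep) {
      ep->next = listener->idleEPs;
      listener->idleEPs = ep;
      }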

When you create a passive connection by calling tcp.listenStream, fwsNetEventListenStream also creates a new kernel thread that is responsible for handling incoming connections. The thread's main function is fwsacceptingthreadmain. It sits in a loop and continually calls fwsprocesspendingconnections. This function checks the readyEPs linked list of the tylistenrecord for connections that have been accepted by the ListenNotifier and kicks off a new UserTalk thread for every connection to execute the callback script that was specified in the tcp.listenStream call.
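Schematically, the loop looks something like this. fwsprocesspendingconnections is the real function named above; the loop body and exit condition are simplifications, and the helper reuses the hypothetical tylistenrecord from the earlier sketch.

  static void acceptingloop (tylistenrecord *listener) {
      for (;;) { /* in the real thread, closing the listener ends the loop */

          /* spin off a UserTalk thread, running the callback script, for
             every endpoint the ListenNotifier has queued on readyEPs */
          fwsprocesspendingconnections (listener);

          /* ...yield here so other kernel threads get time... */
          }
      }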

langhtml.c

The kernel implementations of inetd.supervisor, webserver.server, webserver.dispatch, and several utility functions are located in langhtml.c. These functions follow the old script implementations very closely, so besides the comments in the kernel code, the old script code probably serves as the best documentation.

