The NeL Net library comprises code for inter-server communication and server-client communication. It also provides implementations of the service executables required by the higher level layers of the code libraries, as part of the NeL NS component.
The first objective of NeL Net is to provide an OS-independent data transfer API that abstracts system-specific code and gives application code full control over bandwidth usage. A further objective is to provide a complete toolkit, comprising higher layers of library code and core service implementations, for the development of performance-critical distributed systems for massively multi-user universe servers.
The current feature requirement list for NeL Net corresponds to the application architecture for Nevrax's first product, Ryzom. This notably includes the requirement for a centralised login validation system located at a separate geographical site from the universe servers.
NeL Net provides a single solution that caters for all of the server-to-client, client-to-server and inter-process communication requirements. This solution is structured as a number of layers stacked on top of each other, and the API gives application programmers direct access to all of them.
A complete TCP/IP implementation of the low level network layers is provided. A UDP implementation may be developed at a later date.
The NeL networking library is designed as a six-layer architecture: the layers are numbered from zero, and each builds upon the previous one to give the developer increasing functionality. We'll cover all six layers through the course of this section, including the two deprecated layers (layer two and layer four), plus the additional module layer. The goal of the NeL networking library is to provide a simple, elegant interface for the three primary forms of network communication: client to server, server to client and server to server. While most developers using NeL will only use layers three through six, the NLNET library gives developers direct access to all layers.
- Layer 0 (Bottom Layer): Data transfer layer. Abstraction of the network API and links (the remote end may be across a network, or local messaging may be used).
- Layer 1: Data block management layer. Buffering and structuring of data with a generic serialization system. Also provides a multi-threaded listening system for services.
- Layer 2: Deprecated. Serialised data management layer. Supports the standard serial() mechanism provided by NeL for handling data streams.
- Layer 3: Message management layer. Handling of asynchronous message passing, and callbacks.
- Layer 4: Deprecated. Inter-Service message addressing layer. Handles routing of messages to services, encapsulating connection to naming service and handling of lost connections.
- Layer 5 (Top Layer): Unified network services. Discovery automation through a naming service makes services addressable by functional name.
- Layer 6 (Modules): Not recommended. Network modules. Generated C++ interfaces for message passing, with multiple module instances possible within a single service process.
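To make the layer descriptions concrete, here is a minimal sketch of a layer 3 server, assuming the layer 3 API named above (NLNET::CCallbackServer, NLNET::CMessage and callback arrays); the port number and the "PING"/"PONG" message names are purely illustrative.

```cpp
#include "nel/misc/common.h"          // NLMISC::nlSleep
#include "nel/net/callback_server.h"
#include "nel/net/message.h"

using namespace NLNET;

// Called whenever a message named "PING" arrives from a connected client.
void cbPing(CMessage &msgin, TSockId from, CCallbackNetBase &netbase)
{
    uint32 counter;
    msgin.serial(counter);            // read the payload
    CMessage msgout("PONG");          // build the reply carrying the same counter
    msgout.serial(counter);
    netbase.send(msgout, from);       // reply to the sender only
}

// Associates message names with callbacks.
TCallbackItem CallbackArray[] =
{
    { "PING", cbPing }
};

int main()
{
    CCallbackServer server;
    server.init(37000);               // illustrative listen port
    server.addCallbackArray(CallbackArray, sizeof(CallbackArray) / sizeof(CallbackArray[0]));

    for (;;)
    {
        server.update();              // receive pending data and dispatch callbacks
        NLMISC::nlSleep(10);          // keep this toy loop from burning the CPU
    }
    return 0;
}
```

A client built on NLNET::CCallbackClient follows the same pattern, replacing init() with connect() (see the net_layer3 sample description later in this section).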
There is a program skeleton for the programs within a shard that are capable of communicating with each other via layer 5 messages. Programs of this form are referred to as Services.
The network library presents a generic service skeleton, which includes the base functions of a distributed service. At initialisation time it performs the following:
- Reads and interprets the configuration file and command-line parameters
- Redirects the system signals to NeL handler routines
- Creates and registers callbacks for network layer 3
- Sets up the service's 'listen' socket
- Registers itself with the Naming Service
The skeleton also handles exceptions and housekeeping when the program exits (whether cleanly or not).
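As an illustration of what the skeleton provides, the sketch below assumes the NLNET::IService base class and the NLNET_SERVICE_MAIN macro; the class name, the "MS"/"my_service" identifiers and the empty callback array are placeholders, and the exact macro parameters may vary between NeL versions.

```cpp
#include "nel/misc/debug.h"
#include "nel/net/service.h"

using namespace NLNET;

// A do-nothing service built on the generic service skeleton.
class CMyService : public IService
{
public:
    void init()    { nlinfo("my_service is starting"); }  // runs after config, signals, sockets and Naming Service registration are set up
    bool update()  { return true; }                        // called every cycle; return false to shut the service down
    void release() { nlinfo("my_service is exiting"); }    // called on exit, whether clean or not
};

// No application-level messages are handled yet.
TUnifiedCallbackItem CallbackArray[] = { { "", NULL } };

// The macro generates main(): configuration file and command-line parsing,
// signal redirection, callback registration, listen socket setup and
// Naming Service registration all happen inside the skeleton.
NLNET_SERVICE_MAIN(CMyService, "MS", "my_service", 0, CallbackArray, "", "")
```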
The following system services are provided as part of NeL. For each of these services there exists an API class that may be instantiated in any app-specific service in order to encapsulate the system service's functionality.
The Naming Service is a standalone program used by all services to locate each other.
- All services connect to the naming service when they are initialised. They inform the naming service of their name and whereabouts.
- The naming service is capable of informing any service of the whereabouts of any other service.
API class: CNamingClient
- Generates dynamic port numbers
- Registers the application service's name with the naming service.
- Retrieves the IP address and port number for a named service.
- See technical documentation for details
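As a small, hedged sketch of the API class, the fragment below assumes a CNamingClient::lookup(name, address) call and uses the service name "PS" from the samples later in this section; inside a layer 5 program you rarely call this yourself, because the unified network resolves service names automatically.

```cpp
#include "nel/misc/debug.h"
#include "nel/net/naming_client.h"
#include "nel/net/inet_address.h"

using namespace NLNET;

// Ask the Naming Service where the ping service ("PS") is running.
// In a service built on the generic skeleton, the connection to the
// Naming Service is already established at initialisation time.
void locatePingService()
{
    CInetAddress addr;
    if (CNamingClient::lookup("PS", addr))
        nlinfo("PS is reachable at %s", addr.asString().c_str());
    else
        nlwarning("PS is not registered with the Naming Service");
}
```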
The Login Service handles the database of users permitted to connect to shards. NeL provides a skeleton program that includes the communication protocols for the Login Service.
NeL provides the base mechanisms for administering a NeL shard. Two basic services are provided:
- The Admin Service provides an entry point for cluster administration, with access to logging information and mechanisms for starting or restarting services.
- The Admin Executor Service is the relay for the Admin Service: it fetches statistics on the local machine, relays them to the Admin Service, and launches and controls the services running on the local machine.
- chat: A basic client/server chat system that uses NeL.
- class_transport: This project demonstrates the usage of the CTransportClass class, which allows a service to easily send a class instance to another service. It manages differing class versions (for example, the sender class can have different variables than the receiver class).
- login_system: This example shows how to use the login system provided by NeL to connect, check and identify clients.
- multi_shards
- net_layer3: This project demonstrates the usage of layer 3 (NLNET::CCallbackClient, NLNET::CCallbackServer) and the service framework (NLNET::IService). It contains three programs: a client, a front-end service and a ping service.
- The client connects to a front-end server at localhost:37000. It sends pings and expects pongs (ping replies).
- The front-end server expects pings, and forwards them to the real ping server (known as "PS" in the naming service). When the ping server sends a pong back, the front-end server forwards it to the client.
- The ping service (PS) expects pings and sends pongs back.
To run the front-end service and the ping service, ensure their config files, frontend_service.cfg and ping_service.cfg, are located in the directory where they are run. These files state the address of the naming service.
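A client equivalent to the one described above might look like the following sketch, which assumes the layer 3 client API (NLNET::CCallbackClient) and the localhost:37000 front-end address used by the sample; the message names match the sample's PING/PONG protocol.

```cpp
#include "nel/misc/common.h"          // NLMISC::nlSleep
#include "nel/misc/debug.h"
#include "nel/net/callback_client.h"
#include "nel/net/message.h"

using namespace NLNET;

// Called when the front-end forwards a "PONG" reply back to us.
void cbPong(CMessage &msgin, TSockId from, CCallbackNetBase &netbase)
{
    uint32 counter;
    msgin.serial(counter);
    nlinfo("Received PONG %u", counter);
}

TCallbackItem CallbackArray[] =
{
    { "PONG", cbPong }
};

int main()
{
    CCallbackClient client;
    client.addCallbackArray(CallbackArray, sizeof(CallbackArray) / sizeof(CallbackArray[0]));
    client.connect(CInetAddress("localhost", 37000));   // the sample's front-end address

    uint32 counter = 0;
    CMessage msgout("PING");
    msgout.serial(counter);
    client.send(msgout);                                 // send the first ping

    while (client.connected())
    {
        client.update();                                 // dispatch incoming PONG callbacks
        NLMISC::nlSleep(10);
    }
    return 0;
}
```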
- net_layer4: This project demonstrates the usage of layer 4 (NLNET::CNetManager), the service framework (NLNET::IService), and the connection and disconnection callbacks. It contains three programs: a client, a front-end service and a ping service.
The functionality is close to that of the previous sample.
- The client connects to a front-end server at localhost:37000. It sends pings and expects pongs (ping replies).
- This front-end server expects pings and forwards them to the real ping server. When the ping server sends a pong back, the front-end server forwards it to the client. Even if the connection to the ping server is broken, the front-end server keeps storing ping messages and forwards them when the connection is restored, thanks to layer 4.
- The ping service (PS) expects pings and sends pongs back.
To run the front-end service and the ping service, ensure their config files, frontend_service.cfg and ping_service.cfg, are located in the directory where they are run. These files state the address of the naming service.
- net_layer5: This project demonstrates the usage of layer 5 (NLNET::CUnifiedNetwork), the service framework (NLNET::IService), and the connection and disconnection callbacks. It contains a set of services that communicate with each other. The functionality is close to that of the previous sample, with additional features such as a unified callback array; a minimal layer-5 sketch follows this list.
- service: This is a very simple service example that shows the basic architecture for creating services.
- udp: This project demonstrates a client/server architecture for benchmarking a UDP connection. The server listens on a TCP port and a UDP port for new incoming clients. When a client connects, it communicates over the TCP port to set up the benchmark, then uses the UDP port to bench the connection. The server logs information to a text file and sends some information to the client over the TCP connection.
- udp_ping
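To illustrate the layer 5 addressing demonstrated by net_layer5, here is a minimal sketch assuming NLNET::CUnifiedNetwork and the unified callback signature; the service identifiers ("FS", "PS"), the once-per-cycle ping and the macro parameters are illustrative only, and the exact TServiceId type varies between NeL versions.

```cpp
#include "nel/misc/debug.h"
#include "nel/net/service.h"
#include "nel/net/unified_network.h"

using namespace NLNET;

// Layer 5 callback: the sender is identified by its functional service name.
void cbPong(CMessage &msgin, const std::string &serviceName, TServiceId sid)
{
    uint32 counter;
    msgin.serial(counter);
    nlinfo("PONG %u received from %s", counter, serviceName.c_str());
}

TUnifiedCallbackItem CallbackArray[] =
{
    { "PONG", cbPong }
};

class CFrontEndService : public IService
{
public:
    bool update()
    {
        // Address the ping service by its functional name; the unified network
        // resolves "PS" through the Naming Service and handles reconnections.
        // (Sending on every update cycle is for illustration only.)
        uint32 counter = 0;
        CMessage msgout("PING");
        msgout.serial(counter);
        CUnifiedNetwork::getInstance()->send("PS", msgout);
        return true;
    }
};

NLNET_SERVICE_MAIN(CFrontEndService, "FS", "frontend_service", 0, CallbackArray, "", "")
```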