
More and more new distributed applications are designed to run on the GRID [Grid] network infrastructure to accomplish their tasks on a worldwide scale. Most of these applications rely heavily on the performance of the TCP protocol.

The ratio of link capacity to its price has been growing rapidly in the past few years, which makes it more cost-effective to upgrade the capacity of the network than to engineer a lower-speed one. This growth is much faster than the observed bandwidth usage of traditional Best Effort (BE) traffic. Thus, in the short to medium term, there is excess capacity available, especially in the core.

The GRID network infrastructure is primarily being developed in academic networks and, as with the Internet, the initial users of the above-mentioned spare capacity are applications developed by scientists working in areas such as particle physics, radio astronomy and biology. Based on emerging applications in these areas, such as BaBar [Babar], the main requirement is reliable and effective bulk data transfer at multi-gigabit-per-second speeds over long distances. This invariably involves and depends on the performance of TCP, as it is the protocol delegated to accomplish the end-to-end transport.

TCP’s congestion control algorithm, additive increase multiplicative decrease (AIMD), performs poorly over paths with a high bandwidth-round-trip-time product [Stevens][Floyd], and the stability of an aggregate is even more precarious if that aggregate consists of only a few flows [Low].
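To make the scaling problem concrete, the following sketch estimates how long standard AIMD takes to return to full rate after a single loss on a high bandwidth-delay-product path. The figures used (1 Gbit/s, 120 ms RTT, 1500-byte segments) are illustrative assumptions, not measurements from our testbed:

```python
def aimd_recovery_time(rate_bps, rtt_s, mss_bytes=1500):
    """Time for standard TCP (AIMD) to regain full rate after one loss.

    On a loss, cwnd is halved; it then grows by one MSS per RTT, so
    recovering W/2 segments takes W/2 RTTs, where W is the window
    (in segments) needed to fill the pipe.
    """
    window_segments = rate_bps * rtt_s / (8 * mss_bytes)  # pipe size in MSS
    rtts_to_recover = window_segments / 2                 # one MSS per RTT
    return rtts_to_recover * rtt_s                        # seconds

# Illustrative: a 1 Gbit/s path with a 120 ms transatlantic RTT
print(aimd_recovery_time(1e9, 0.120))  # ~600 s, i.e. ten minutes per loss
```

A single loss thus costs on the order of minutes of sub-capacity transmission, and the penalty grows linearly with the bandwidth-delay product.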

These well-known facts raise a serious question about how to deploy GRID effectively, given that multi-gigabit-per-second capacity can be provisioned on trans-continental links for only a few users/flows at any one time. So, although there is in principle spare capacity, TCP as it stands will negatively impact the performance of these new applications and will compromise the whole concept of a high-performance computational GRID.

Two recent developments can contribute significantly to tackling this problem: proposals for high-throughput TCP stacks, and the availability of Differentiated Services [Diffserv] enabled networks at multi-gigabit speeds.

In this paper we investigate the relation between the IP-QoS configuration of the routers and the dynamics of standard TCP, as well as those of new proposals for high-throughput TCP, including HighSpeed TCP [Hstcp] and Scalable TCP [Scalable]. We conduct extensive experimental tests in a high-bandwidth, high-propagation-delay research network.
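The essential difference between these proposals lies in the per-RTT window-update rules. A simplified sketch follows, comparing standard AIMD with Scalable TCP (whose constants, a 1% multiplicative increase per RTT and a 1/8 decrease on loss, come from the proposal); HighSpeed TCP is not shown because its increase/decrease parameters a(w), b(w) are window-dependent and defined by a lookup table in the specification:

```python
def standard_tcp_update(cwnd, loss):
    """Standard AIMD, per RTT: +1 segment on success, halve on loss."""
    return cwnd / 2 if loss else cwnd + 1

def scalable_tcp_update(cwnd, loss):
    """Scalable TCP, per RTT (approximated): multiply by 1.01 on
    success (1% increase), reduce by 1/8 on loss."""
    return cwnd * (1 - 0.125) if loss else cwnd * 1.01

def rtts_to_recover(update, cwnd0):
    """RTTs needed to regain the original window after a single loss."""
    cwnd, rtts = update(cwnd0, True), 0
    while cwnd < cwnd0:
        cwnd = update(cwnd, False)
        rtts += 1
    return rtts

# For a 10000-segment pipe (e.g. ~1 Gbit/s at 120 ms RTT, 1500-byte MSS):
print(rtts_to_recover(standard_tcp_update, 10000))  # 5000 RTTs
print(rtts_to_recover(scalable_tcp_update, 10000))  # 14 RTTs
```

Because Scalable TCP's rules are multiplicative in both directions, its loss-recovery time is independent of the window size, whereas standard TCP's grows linearly with it.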
To perform our tests we used the DataTAG testbed [Datatag], which consists of a transatlantic link connecting Geneva to Chicago. We used Juniper [Juniper] M10 routers with Diffserv-enabled Gigabit Ethernet cards (a choice made after benchmarking several router manufacturers). This testbed is unique in providing a Differentiated Services network with high propagation delay and bandwidth capacity on the order of gigabits per second. To generate traffic we used high-end multiprocessor PCs running Linux 2.4.20 kernels.
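Filling such a path requires the sender to keep a full bandwidth-delay product of data in flight, which in turn dictates the socket buffer sizes on the end hosts. A quick sketch, assuming an illustrative transatlantic RTT of about 120 ms (the actual Geneva-Chicago figure may differ):

```python
def bdp_bytes(rate_bps, rtt_s):
    """Bandwidth-delay product: bytes in flight needed to fill the pipe."""
    return rate_bps * rtt_s / 8

# Illustrative: 1 Gbit/s between Geneva and Chicago, ~120 ms RTT (assumed)
buf = bdp_bytes(1e9, 0.120)
print(f"{buf / 2**20:.1f} MiB")  # ~14.3 MiB of send/receive buffer
```

Default kernel socket buffers are far smaller than this, so window scaling and enlarged buffers are prerequisites for any of the TCP stacks under test to reach link rate.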



This presentation is part of the session "QoS Technologies", which starts on Wednesday, June 9 @ 14:00


Last modified on the 15th of June 2004 - 12:35