Feature Story | Mathematics and Computer Science

Argonne researchers demonstrate extraordinary throughput at SC14

Surpassing 85 gigabits per second for international data transfers from Louisiana, US, to Ottawa, Canada

Typically, two days are needed to move 60 terabytes of data between sites connected at 10 Gbps. At the recent annual Supercomputing Conference (SC), a team of researchers from the Department of Energy’s Argonne National Laboratory and DataDirect Networks (DDN) moved that much data in just under 100 minutes.
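
For a rough sense of scale, the back-of-the-envelope arithmetic below (an illustrative sketch, not part of the original demonstration) compares the ideal line-rate transfer time for 60 terabytes at 10 Gbps and at 85 Gbps; real transfers add protocol, disk and tuning overheads on top of these figures.

```python
# Back-of-the-envelope transfer-time estimate (illustrative only).
# Assumes 1 TB = 10**12 bytes and ignores protocol, disk and tuning
# overheads, which is why real 10 Gbps transfers often take far longer.

def transfer_time_seconds(terabytes: float, gbps: float) -> float:
    """Ideal time to move `terabytes` of data over a `gbps` link."""
    bits = terabytes * 1e12 * 8        # total bits to move
    return bits / (gbps * 1e9)         # seconds at the raw line rate

if __name__ == "__main__":
    # 60 TB at 10 Gbps: about 13.3 hours even at the raw line rate,
    # and typically much longer once real-world overheads are included.
    print(f"60 TB at 10 Gbps: {transfer_time_seconds(60, 10) / 3600:.1f} h (ideal)")
    # The same 60 TB at a sustained 85 Gbps fits in roughly 94 minutes.
    print(f"60 TB at 85 Gbps: {transfer_time_seconds(60, 85) / 60:.0f} min (ideal)")
```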

With help from Ciena, Brocade, ICAIR and DOE’s Energy Sciences Network (ESnet), the team achieved a sustained data transfer rate of 85+ Gbps (with peaks at over 90 Gbps) between a storage system in Ottawa, Canada, and a storage system in New Orleans, La., over a 100 Gbps wide-area network (WAN) connection. The demonstration took place Wednesday, Nov. 19, at SC14, the leading international conference for high-performance computing, networking, storage and analysis.

Achieving this feat required combining the embedded file system and virtual machine capabilities of a DDN storage controller, the high-speed wide-area data transfer capabilities of the Globus GridFTP server, and an advanced 100G wide-area network.

“Embedding the GridFTP servers on the virtual machines in the DDN’s storage controller eliminates the need for external data transfer nodes and network adapters,” explained Raj Kettimuthu, a principal software development specialist at Argonne. “We were able to achieve a sustained data transfer rate of 85 Gbps for a duration of more than 60 minutes – and sometimes as long as 90 minutes – several times during the SC14 conference.”

Achieving 90+ Gbps for memory-to-memory transfers using a benchmarking tool like iperf is straightforward and has been demonstrated several times in the past.
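
As an illustration only (the article does not include the team’s benchmark commands), a memory-to-memory baseline of this kind is commonly measured with iperf3 and parallel TCP streams. The sketch below assumes the standard iperf3 client flags -c, -P and -t; the hostname is a placeholder.

```python
# Memory-to-memory throughput check with iperf3 (illustrative sketch).
# Run "iperf3 -s" on the remote host first; the hostname is a placeholder.
import subprocess

result = subprocess.run(
    [
        "iperf3",
        "-c", "remote-dtn.example.org",  # iperf3 server to test against (placeholder)
        "-P", "8",                       # 8 parallel TCP streams
        "-t", "60",                      # measure for 60 seconds
    ],
    capture_output=True,
    text=True,
    check=True,
)
print(result.stdout)  # the summary lines report the aggregate bandwidth
```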

Achieving similar rates for disk-to-disk transfers, however, presents a number of challenges, including choosing a block size that works well for both the disk I/O and the network I/O, and selecting the right combination of parallel storage I/O threads and parallel TCP streams for optimal end-to-end performance.
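
The kind of tuning described above can be explored with a simple parameter sweep. The sketch below is illustrative only (the article does not give the team’s actual transfer commands or values); it assumes the standard globus-url-copy options -p (parallel TCP streams), -bs (block size) and -tcp-bs (TCP buffer size), and the endpoints, paths and parameter values are placeholders.

```python
# Illustrative sweep over block size and parallel-stream count for a
# GridFTP disk-to-disk transfer; endpoints, paths and values are placeholders.
import itertools
import subprocess
import time

SRC = "gsiftp://source.example.org:2811/data/dataset/"   # hypothetical source endpoint
DST = "gsiftp://dest.example.org:2811/scratch/ingest/"   # hypothetical destination endpoint

block_sizes = [4 * 2**20, 16 * 2**20, 64 * 2**20]   # bytes, passed to -bs
stream_counts = [4, 8, 16]                           # parallel TCP streams, passed to -p

for bs, streams in itertools.product(block_sizes, stream_counts):
    cmd = [
        "globus-url-copy",
        "-vb",                          # report transfer performance
        "-fast",                        # reuse data channels between files
        "-p", str(streams),             # parallel TCP streams
        "-bs", str(bs),                 # block size for disk and network I/O
        "-tcp-bs", str(32 * 2**20),     # TCP buffer size for the wide-area path
        SRC, DST,
    ]
    start = time.time()
    subprocess.run(cmd, check=True)
    print(f"block size {bs} B, {streams} streams: {time.time() - start:.1f} s")
```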

Network experts often claim that storage is the bottleneck in end-to-end transfers over high-speed networks, while storage experts claim that the network is often the bottleneck in transfers between sites with high-performance parallel file systems.

“This demonstration was aimed at bringing together the experts and latest developments in all aspects concerning disk-to-disk WAN data movement, including network, storage and data movement tools,” said Kettimuthu.

The team expects that the approach can be used to achieve 100+ Gbps wide-area transfer rates between storage systems with multiple WAN paths and more storage resources in the end systems.

Team members were Kevin Harms, Eun-Sung Jung, Raj Kettimuthu, and Linda Winkler from Argonne and the University of Chicago, and Mark Adams from DataDirect Networks, with help from Jim Chen and Joe Mambretti from ICAIR, Doug Hogg and Marc Lyonnais from Ciena, Wilbur Smith from Brocade, Jon Dugan and Brian Tierney from ESnet, Ian Foster and Mike Link from Argonne National Laboratory and the University of Chicago, and Clayton Walker, Laura Shepard, Susan Presley, and Bob Vassar from DDN.