The first full release of Stork Data Scheduler is now available!

The Louisiana State University Stork team announced today that the first full release of the Stork Data Scheduler (Stork 1.0) is now available on the Stork Project web page. Stork is a batch scheduler specialized in data placement and data movement, based on the concept of making data placement a first-class entity in a distributed computing environment. Stork understands the semantics and characteristics of data placement tasks and implements techniques specific to the queuing, scheduling, and optimization of these types of tasks.

One key benefit of distributed resources is that they allow institutions and organizations to gain access to resources needed for large-scale applications that they would not otherwise have. But in order to facilitate the sharing of compute, storage, and network resources between collaborating parties, middleware is needed for the planning, scheduling, and management of the tasks as well as the resources. The majority of existing research has focused on the management of compute tasks and resources, as they are widely considered to be the most expensive. As scientific applications become more data intensive, however, the management of storage resources and of data movement between the storage and compute resources is becoming the main bottleneck.

Many jobs executing in distributed environments fail or are inhibited by overloaded storage servers, and these failures prevent scientists from making progress in their research. Accessing and transferring widely distributed data can be extremely inefficient and can introduce unreliability. For instance, an application may suffer from insufficient storage space when staging in the input data, generating the output, and staging out the generated data to remote storage. This can lead to thrashing of the storage server and subsequent timeouts due to too many concurrent read transfers, ultimately causing server crashes due to an overload of write transfers. Other third-party data transfers may stall indefinitely due to lost acknowledgments. And even if a transfer is performed efficiently, faulty hardware involved in staging and hosting can cause data corruption. Furthermore, remote access suffers from unforeseeable contingencies such as performance degradation due to unplanned data transfers and intermittent network outages.

Traditional distributed computing systems closely couple data handling and computation. They treat data resources as second-class entities, and access to data as a side effect of computation. Data placement (i.e., the access, retrieval, and/or movement of data) is either embedded in the computation, where it delays the computation, or performed by simple scripts which do not have the privileges of a job. The insufficiency of these traditional systems and of existing CPU-oriented schedulers in dealing with the complex data handling problem has yielded a new scheduler specializing in data placement: the Stork Data Scheduler.

Using Stork, users can transfer very large data sets with a single command. Checkpointing, error recovery, and retry mechanisms ensure the completion of tasks even in the case of unexpected failures. Multi-protocol support makes Stork one of the most powerful data transfer tools available: it not only allows Stork to access and manage different data storage systems, but also serves as a fall-back mechanism when one of the protocols fails to transfer the desired data.
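As a minimal sketch of what such a request looks like: a Stork data placement job is written in ClassAd syntax and handed to the Stork server with the stork_submit command. The host names and paths below are hypothetical, and the alt_protocols attribute illustrating protocol fall-back is shown with an assumed value format; consult the Stork 1.0 documentation for the exact attribute set.

    [
        dap_type      = "transfer";
        src_url       = "gsiftp://remote.example.edu/scratch/user/results.dat";
        dest_url      = "file:///home/user/results.dat";
        alt_protocols = "ftp-file, http-file";
    ]

Here dap_type identifies the kind of data placement task, src_url and dest_url name the two endpoints (the URL scheme selects the transfer protocol), and alt_protocols lists alternative protocol pairs to try if the primary transfer fails.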
Optimizations such as request ordering, task aggregation, and connection caching give Stork enhanced performance compared to other data transfer tools. The Stork Data Scheduler can also interact with higher-level planners and workflow managers to coordinate compute and data tasks. This allows users to schedule CPU resources and storage resources asynchronously, as two parallel universes, overlapping computation and I/O. Widely used workflow tools such as Condor DAGMan and Pegasus already come with Stork support; a sketch of such a workflow appears at the end of this announcement.

The Stork Data Scheduler is currently being used in several NSF-, DOE-, and ONR-funded projects such as PetaShare, UCoMS, and SCOOP.

For more information on the Stork Project, please contact the project lead, Dr. Tevfik Kosar, at kosar@cct.lsu.edu.

* Stork Project web page: www.storkproject.org
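Below is the promised sketch of a Condor DAGMan workflow that mixes Stork data placement nodes with a Condor compute node. Stork jobs are declared with DAGMan's DATA keyword; the node names and submit file names here are hypothetical.

    DATA   stage_in   stage_in.stork
    JOB    analyze    analyze.condor
    DATA   stage_out  stage_out.stork
    PARENT stage_in CHILD analyze
    PARENT analyze  CHILD stage_out

DAGMan hands the stage_in and stage_out nodes to Stork and the analyze node to Condor, so a failed transfer is retried by the data scheduler rather than occupying a compute slot; in larger DAGs, independent transfer and compute nodes run concurrently, overlapping I/O with computation.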