DARPA Awards GA Tech Team $2.7 Million for Big Data
- Written by: Tyler O'Neal, Staff Editor
- Category: ENGINEERING
A research team at the Georgia Institute of Technology has received a $2.7 million award from the Defense Advanced Research Projects Agency (DARPA) to develop technology intended to help address the challenges of "big data" – data sets that are both massive and complex.
The contract is part of DARPA’s XDATA program, a four-year research effort to develop new computational techniques and open-source software tools for processing and analyzing data, motivated by defense needs. Georgia Tech has been selected by DARPA to perform research in the area of scalable analytics and data-processing technology.
The Georgia Tech team will focus on producing novel machine-learning approaches capable of analyzing very large-scale data. In addition, team members will develop distributed computing methods that can execute data-analytics algorithms very rapidly by simultaneously utilizing a variety of systems, including supercomputers, parallel-processing environments, and networked distributed computing systems.
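The article does not describe the team's actual algorithms or software, but the data-parallel pattern it outlines is easy to illustrate. The following Python sketch is purely hypothetical: it splits a large data set across worker processes, computes partial sufficient statistics for a toy least-squares fit on each chunk, and merges them. The task and every name in it are assumptions, not the team's methods.

```python
# Hypothetical sketch of the data-parallel pattern described above.
# The toy task (a least-squares fit) and all names are illustrative;
# the article does not specify the team's actual algorithms or tools.
from multiprocessing import Pool

import numpy as np


def partial_stats(chunk):
    """Compute one chunk's sufficient statistics (X^T X and X^T y)."""
    X, y = chunk
    return X.T @ X, X.T @ y


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.normal(size=(100_000, 8))
    y = X @ rng.normal(size=8) + rng.normal(size=100_000)

    # Split the data set into chunks that workers process independently.
    chunks = list(zip(np.array_split(X, 4), np.array_split(y, 4)))

    with Pool(processes=4) as pool:
        results = pool.map(partial_stats, chunks)

    # Merge the partial statistics; the fit matches the serial result.
    XtX = sum(r[0] for r in results)
    Xty = sum(r[1] for r in results)
    print(np.linalg.solve(XtX, Xty))
```

Because each chunk's statistics are computed independently, the same pattern can scale from one multicore machine to a networked cluster.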
"This award allows us to build on the foundations we've already established in large-scale data analytics and visualization," said Richard Fujimoto, Regents' Professor and chair of Georgia Tech’s School of Computational Science and Engineering (CSE), and leader of the Georgia Tech team. "The algorithms, tools and other technologies that we develop will all be open source, to allow them to be customized to address new problems arising in defense and other applications."
Under the open-source paradigm, collaborating developers create and maintain software and associated tools. Program source code is made widely available and can be improved by a community of developers and modified to address changing needs.
The XDATA award is part of a $200 million multi-agency federal initiative for big-data research and development announced in March. The initiative is aimed at improving the ability to extract knowledge and insights from the nation's fast-growing volumes of digital data. Numerous big-data-related research endeavors are underway at Georgia Tech, and the institute recently established the Center for High-Performance Computing and the Center for Data Analytics and Machine Learning.
The Georgia Tech XDATA effort will build upon foundational methods and software developed under the Foundations of Data and Visual Analytics (FODAVA) research initiative, a 17-university program led by Georgia Tech and funded by the National Science Foundation and the Department of Homeland Security. The FODAVA effort has produced the Visual Information Retrieval and Recommendation System (VIZIRR) and a research test bed.
"The FODAVA document retrieval and recommendation system uses automated algorithms to give users a range of subject-search choices and information visualization capabilities in an integrated way, so that users can interact with the data throughout the problem-solving process to produce more meaningful solutions," said Haesun Park, a School of Computational Science and Engineering professor and FODAVA director. "For XDATA, we will enhance these visualization and interaction capabilities and develop distributed algorithms that allow users to solve problems faster and on a larger scale than ever before."
Also participating from the School of Computational Science and Engineering is Alex Gray, an associate professor who has developed open-source software tools to make machine-learning algorithms scalable to large datasets. Other faculty members involved in the XDATA work include professor Hongyuan Zha and associate professor Guy Lebanon.
Investigators from the Georgia Tech Research Institute (GTRI) will also contribute to the XDATA initiative. Senior research scientists Barry Drake and Richard Boyd will tackle the computational demands of processing the machine-learning algorithms developed by the School of Computational Science and Engineering team.
GTRI's task involves enabling these algorithms to run on a networked distributed computing system. By configuring the software to operate on many processors simultaneously, the researchers expect the algorithms to solve problems very rapidly – a requirement of the DARPA award.
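As a rough, assumed illustration of that requirement, the sketch below computes a global statistic across networked nodes using MPI (here via the mpi4py package): each node works on its own data shard in parallel, and a single reduction combines the partial results over the network. This is a generic pattern, not GTRI's actual configuration.

```python
# Assumed, generic sketch: one process per node, each holding a data shard.
# Illustrates networked distributed execution, not GTRI's actual setup.
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

# Each node loads or generates its own shard of the data set.
rng = np.random.default_rng(rank)
local_data = rng.normal(size=1_000_000)

# Every node computes its partial result simultaneously...
local_sum = local_data.sum()
local_count = local_data.size

# ...and reductions combine the partial results across the network.
total_sum = comm.allreduce(local_sum, op=MPI.SUM)
total_count = comm.allreduce(local_count, op=MPI.SUM)

if rank == 0:
    print("global mean:", total_sum / total_count)
```

Launched with, for example, `mpirun -n 4 python sketch.py`, the same script runs unchanged on four cores of a single machine or four machines on a network.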
"Scaling up machine-learning algorithms to big-data requirements is a relatively new area of research, and there will be both hardware and software issues to address here," said Drake, a specialist in parallel algorithms in numerous application domains. "In enabling these complex codes to analyze large data sets rapidly, we expect to be breaking new ground."
Boyd will support XDATA's hardware requirements with his expertise in low-cost graphics processing units (GPUs), which offer performance levels that, until recently, only supercomputers could reach. Clusters of linked GPUs could help provide the processing power needed to satisfy XDATA requirements.
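For illustration only, the pattern Boyd describes – offloading dense numerical work to a GPU – looks like this in Python with the CuPy library. CuPy is an assumed stand-in chosen for brevity; the article does not name the GPU software the team used.

```python
# Illustrative GPU offload of a dense linear-algebra kernel.
# CuPy is an assumed stand-in; the article does not name the team's
# GPU software. Requires a CUDA-capable GPU with CuPy installed.
import cupy as cp

# Allocate the operands directly in GPU memory...
a = cp.random.standard_normal((4096, 4096), dtype=cp.float32)
b = cp.random.standard_normal((4096, 4096), dtype=cp.float32)

# ...run the multiplication on the device...
c = a @ b

# ...and copy the result back to the host only when needed.
result = cp.asnumpy(c)
print(result.shape)
```

A cluster of such GPUs, as Boyd suggests, would pair this per-device throughput with distributed patterns like the one sketched earlier.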
"The XDATA vision involves providing an entirely new set of open-source data-processing tools for both military and other requirements," Boyd said. "We have to be prepared to deal with not only widely distributed computing, but also with heterogeneous data that could be structured or unstructured. Diverse hardware approaches including GPUs are likely to be part of the system."