2004 Supercomputing Conference convenes in Pittsburgh

This week, professors, software developers, researchers, vendors, and students assembled at the David L. Lawrence Convention Center in Pittsburgh for the 2004 Supercomputing Conference. Issues and ideas in high-performance computing, networking, and storage, along with recent achievements, were discussed among the trailblazers who aim to define the next generation of computing. Exhibits and demonstrations filled the convention center. "We probably have billions of dollars of computer equipment on the floor," vendor liaison and industry volunteer Doug Luce told the Tribune-Review. Among the groups in attendance were more than half a dozen industry heavyweights, and over 100 universities, laboratories, and other research groups were also present, offering displays at the show.
The theme of the conference was "Bridging Communities," in both a technical and an architectural sense. Two state-of-the-art pieces of technology demonstrating this theme were featured at the conference. The first was StorCloud, a storage system presented to attendees through the StorCloud Challenge. Participants used test applications to find the most creative and effective ways to exploit the technology's hallmark capabilities: random-access storage, storage bandwidth, and I/O operations. "We're trying to establish storage as one of the legs of high-performance computing," Ken Washington told the Post-Gazette. Washington is the director of distributed information systems at Sandia National Laboratories in Livermore, Calif.
InfoStar was also showcased, demonstrating a collaboration of wireless devices, data sources, and software that provided real-time conference information and interactive maps to participants. StorCloud and InfoStar set the standard for a conference whose backbone has not always been its exhibits: historically, exhibits have showcased new technology that would not reach the market for a few years, encouraging partnerships among businesses ahead of the products' release.
The latest breakthrough in the world of supercomputing comes from IBM. With government funding, IBM is building a complex supercomputer that, while only partially complete, already doubles the speed of Japan's Earth Simulator, the previous champion and winner of a Gordon Bell Award at last year's conference. The IBM BlueGene/L recently achieved a record-breaking performance of 70.72 teraflops (trillion floating-point operations per second). By way of comparison, a machine sustaining one teraflop is about 100 times faster than the most powerful desktop computer. When complete, BlueGene/L is expected to reach a staggering 360 teraflops. The ultimate task of this monster machine will be to perform the complex calculations needed to simulate the condition of aging nuclear weapons.
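Taking the article's own comparison at face value gives a rough sense of scale (a back-of-the-envelope reading, not a reported figure): if one teraflop equals roughly 100 top-end desktops, then

$$
70.72\ \text{teraflops} \times \frac{100\ \text{desktops}}{1\ \text{teraflop}} \approx 7{,}000\ \text{desktops}
$$

so the half-built machine already stood in for roughly 7,000 of the most powerful desktops of the day, and the finished 360-teraflop system would stand in for about 36,000.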
Businesses have begun to embrace supercomputers and are incorporating them into everyday use. Supercomputers can help by computing delivery routes, analyzing supply levels, or planning capacity and timing. Credit-card companies use them for complex fraud-detection analysis. Retailers are using the capability for data mining, which applies often-complex algorithms and search methods to comb through old databases for new patterns. In the past, supercomputers have been used for scientific research in areas such as weather, astronomy, and biotechnology, as well as in exploration for oil and gas.
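To make the data-mining idea concrete, here is a minimal illustrative sketch in Python (with invented toy data; nothing here describes any retailer's actual system) of one classic pattern search, counting which items are frequently bought together:

```python
from collections import Counter
from itertools import combinations

# Illustrative sketch only: a toy version of the kind of pattern search
# data mining performs, counting which pairs of items co-occur most often
# in historical transactions (market-basket analysis). The transactions
# below are invented for the example.
transactions = [
    {"milk", "bread", "eggs"},
    {"milk", "bread"},
    {"bread", "eggs"},
    {"milk", "eggs"},
    {"milk", "bread", "butter"},
]

pair_counts = Counter()
for basket in transactions:
    # Count every unordered pair of items appearing in the same basket.
    for pair in combinations(sorted(basket), 2):
        pair_counts[pair] += 1

# Report pairs that co-occur in at least 40% of transactions.
threshold = 0.4 * len(transactions)
for pair, count in pair_counts.most_common():
    if count >= threshold:
        print(pair, count)
```

Real deployments run this kind of counting over millions of transactions rather than five, which is where supercomputing-class storage bandwidth and I/O become the limiting factors.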