Critical to the understanding of earthquakes and other natural phenomena, fractures reduce the strength and change the behavior of materials, whether a human arm, the trunk of a tree, or a sheet of granite. Dell technology will help scientists at the university better understand the nature of fractures in rock, ultimately helping engineers prevent future weaknesses in critical structures and landscapes.
Uniquely pairing digital research with traditional physical experiments, the University of Toronto's new state-of-the-art facility will house both the numerical and experimental equipment needed for world-class research. Using Dell technology for real-time and offline processing, as well as numerical modeling, the new laboratory will let researchers focus on understanding rock mechanics and how fractures grow in materials like rock and concrete.
Scheduled for completion at the end of the summer, the cluster uses 64 Dell PowerEdge 1950 servers. The U of T's new laboratory enables researchers to recreate synthetic rock formations by creating millions of digital spheres that replicate rock particles. These spheres can be assigned physical properties in the computer, enabling researchers to track in real time how the rock particles react to various applied stresses and variables.
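The article does not name the laboratory's modeling software, but the approach it describes, spheres with assigned physical properties reacting to applied stress, resembles a discrete-element simulation. The sketch below is a deliberately simplified, illustrative version of that idea: a one-dimensional row of spheres with linear repulsive contacts, compressed by an end load. All names and parameters are assumptions for illustration only.

```python
# Toy discrete-element sketch: spheres with spring-like contacts under load.
# Illustrative only; the laboratory's actual software and physics are far
# richer (3D, friction, bonding, fracture criteria, millions of particles).

def contact_force(overlap, stiffness):
    """Linear repulsive contact: force proportional to sphere overlap."""
    return stiffness * overlap if overlap > 0 else 0.0

def step(positions, radii, stiffness, load, dt, mass=1.0):
    """Advance a 1D row of spheres one explicit time step under an end load."""
    n = len(positions)
    forces = [0.0] * n
    for i in range(n - 1):
        # Overlap between neighboring spheres i and i+1
        gap = positions[i + 1] - positions[i]
        overlap = (radii[i] + radii[i + 1]) - gap
        f = contact_force(overlap, stiffness)
        forces[i] -= f       # sphere i pushed left
        forces[i + 1] += f   # sphere i+1 pushed right
    forces[-1] -= load       # compressive load on the last sphere
    # Simplified explicit position update (velocities omitted for brevity)
    return [x + (f / mass) * dt * dt for x, f in zip(positions, forces)]

# Three unit spheres just touching; apply a load and watch the row compress.
pos = [0.0, 2.0, 4.0]
radii = [1.0, 1.0, 1.0]
for _ in range(100):
    pos = step(pos, radii, stiffness=1e3, load=5.0, dt=1e-3)
```

At this tiny scale the computation is trivial; the point of the cluster is that the same contact-force bookkeeping must be repeated for tens of millions of spheres, each with many neighbors, at every time step.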
A supercomputing cluster, or high-performance computing cluster (HPCC), is a group of networked servers connected to act as a single, high-powered computer. An HPCC like the one installed at the University of Toronto can perform trillions of complex calculations per second, accomplishing work that previously required expensive mainframe computers built on proprietary technology.
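The "many servers acting as one computer" idea boils down to splitting a job into pieces, computing the pieces in parallel, and combining the results. The article does not say which middleware the cluster uses (real clusters typically use MPI across the network); as a minimal single-machine stand-in for the same divide-and-combine pattern, this sketch uses Python's multiprocessing pool, with each worker process playing the role of a node.

```python
# Divide-and-combine sketch: split a sum across 4 "nodes" (worker processes).
# A stand-in for cluster computing, not the middleware the article describes.
from multiprocessing import Pool

def partial_sum(bounds):
    """Each 'node' sums its own slice of the problem."""
    lo, hi = bounds
    return sum(range(lo, hi))

if __name__ == "__main__":
    n = 1_000_000
    # Split the range [0, n) into 4 equal chunks, one per worker
    chunks = [(i, i + n // 4) for i in range(0, n, n // 4)]
    with Pool(4) as pool:
        total = sum(pool.map(partial_sum, chunks))  # combine partial results
    print(total == sum(range(n)))  # True: parallel result matches serial
```

The same pattern, partition the domain, compute locally, exchange and combine, is what lets 64 networked servers behave as one machine on a particle simulation.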
The university expects the new HPCC to let the research team create rock models of up to 40 million digital spheres, or particles, a huge increase over comparable modeling systems, which can replicate only about 2.2 million.
The University of Toronto's HPCC is configured with 64 Dell PowerEdge 1950 two-socket servers equipped with 64-bit Dual-Core Intel(R) Xeon(R) processors, for a total of 256 processing cores. Running both Red Hat Linux and Microsoft operating systems, the cluster will provide the university with 18.9 terabytes of disk storage and 320 gigabytes of total memory. The cluster has a theoretical peak throughput of 2.7 teraflops, achieved with significantly fewer servers than other clusters in this range.
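The quoted 2.7-teraflop figure can be sanity-checked from the server counts given in the article. The clock speed and FLOPs-per-cycle below are assumptions, typical values for dual-core Xeon 5100-series parts of that era, not figures from the article.

```python
# Back-of-envelope check of the quoted ~2.7 TFLOPS theoretical peak.
servers = 64
sockets_per_server = 2
cores_per_socket = 2
clock_ghz = 2.66       # assumed clock speed (not stated in the article)
flops_per_cycle = 4    # assumed: 2-wide SSE floating-point add + multiply

cores = servers * sockets_per_server * cores_per_socket
peak_tflops = cores * clock_ghz * flops_per_cycle / 1000

print(cores, round(peak_tflops, 2))  # 256 2.72
```

Under those assumptions, 256 cores x 2.66 GHz x 4 FLOPs/cycle comes to roughly 2.72 TFLOPS, consistent with the 2.7-teraflop figure quoted for the cluster.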
Many of the scientists working on the project were already proficient with Red Hat Linux, while others were more comfortable working in a Microsoft environment. For this reason, the team decided to run both operating systems on separate compute nodes. Because the university deployed a beta version of Microsoft's cluster operating system, Microsoft committed a group of computer engineers to work onsite at the university, helping the researchers set up the laboratory and ensuring seamless integration and high performance from the outset. [June 22, 2007]