P&G Embraces High-Performance Computing

2/23/2009
In recent years, The Procter & Gamble Company (P&G, www.pg.com) has emphasized shifting real-world work to virtualized work, combined with better electronic collaboration, to increase throughput and lower costs.

P&G has been involved with high-performance computing (HPC) since the early 1980s, but in early 2000 the company embraced HPC as a critical link in its work to test chemical interactions, perform molecular modeling and run package design simulations with more than 500 clustered servers.

To expand HPC use from approximately 50 specialty researchers to thousands of product designers and test engineers, P&G's IT organization needed to make the computer clusters easier to use.

"We've grown HPC dramatically -- from a single cluster with 128 cores to multiple clusters with 3,000 cores," says Kevin Wilson, HPC architect for P&G. "However, our clusters still aren't approachable. Users have to spend eight hours or more learning how to submit HPC jobs, which is a barrier to broader cluster use."

Also, deploying and managing existing clusters required a great deal of time for the IT staff. "There are so many moving parts in traditional clusters, including the operating system, drivers, job scheduler and authentication mechanism," Wilson says. "We've been spending considerable amounts of time integrating software from different vendors."


Window of Opportunity

In mid-2007, Microsoft Corporation (www.microsoft.com) introduced P&G's IT staff to Windows Compute Cluster Server 2003, which is built on the Windows Server platform and is designed to support high-performance technical and scientific applications that take advantage of parallel processing for improved performance.
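The applications in question gain their speed from message-passing parallelism, with a single simulation split across many cluster cores at once. As a rough illustration of that pattern only (not P&G's actual software), the short Python sketch below uses mpi4py to partition a toy workload across however many processes the cluster's mpiexec launches:

    # Illustrative only: a toy MPI program showing the message-passing
    # parallelism that technical applications like these exploit. The codes
    # P&G ran are native MPI solvers; this stand-in just shows the pattern.
    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    rank = comm.Get_rank()   # this process's index within the MPI job
    size = comm.Get_size()   # total number of processes mpiexec started

    # Each rank computes its own slice of a made-up workload...
    partial = sum(x * x for x in range(rank, 1_000_000, size))

    # ...and the slices are combined on rank 0.
    total = comm.reduce(partial, op=MPI.SUM, root=0)
    if rank == 0:
        print(f"Sum of squares below one million: {total}")

On a Windows Compute Cluster Server system, a program like this would typically be started across nodes with mpiexec through the cluster's job scheduler.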

P&G deployed an eight-node cluster running Windows Compute Cluster Server 2003 and evaluated its ability to run three key applications: the Abaqus and LS-DYNA finite element analysis tools from Dassault Systemes (www.3ds.com) and Livermore Software (www.lstc.com), respectively, and the Fluent flow modeling tool from ANSYS (www.ansys.com).

"We really liked the deployment and management features of Windows Compute Cluster Server 2003," Wilson says. "It amazed me that you could push a few buttons and have the whole cluster built in a few hours."

An HPC team from Dassault Systemes worked closely with Microsoft and P&G to configure Abaqus, which is designed to run on Windows Compute Cluster Server, on the cluster. In addition, Dassault Systemes engineers ran simulation models to evaluate the performance, reliability and ease of use of Abaqus in P&G's environment.

Based on that evaluation, P&G's IT staff plans to move the operating system into departmental pilot programs sometime this year.


Accessible Benefits

By moving to clusters running the new solution, P&G can broaden HPC use, increase user productivity, and speed cluster deployment and management.

Wilson says, "We believe that Windows Compute Cluster Server will make HPC accessible to more people, including engineers, scientists, financial analysts and others, which will help us design and test products faster and reduce costs."

Wilson sees end-user productivity as the biggest benefit of Windows-based HPC.

"We hope to link clustering to the Microsoft tools that users use every day: Office Communicator 2007, Office SharePoint Server 2007 and Office Excel 2007 spreadsheet software."

For example, researchers can drag a job from Excel into the cluster, process it, get an instant message that the job is done and have the data end up on a SharePoint site. Wilson anticipates productivity savings because of the shorter learning curve -- and bigger time savings when clustering is integrated with desktop productivity tools.
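A minimal sketch of how that round trip could be wired together, assuming the Compute Cluster Server job submit and job view command-line scheduler and purely hypothetical stand-ins for the instant-message and SharePoint steps (the helper names, flags and paths below are illustrative, not P&G's actual integration):

    # Sketch only: submit a model run to the cluster scheduler, wait for it
    # to finish, then hand the results off to collaboration tools. The CLI
    # flags are assumed from the Compute Cluster Server documentation;
    # notify_researcher and publish_to_sharepoint are hypothetical stubs.
    import subprocess
    import time

    def submit_job(command, processors=8):
        # 'job submit' queues the command and echoes the new job's ID;
        # the ID parsing here is deliberately simplified.
        out = subprocess.run(
            ["job", "submit", f"/numprocessors:{processors}", command],
            capture_output=True, text=True, check=True).stdout
        return out.strip().split()[-1]

    def wait_for_job(job_id, poll_seconds=60):
        # Poll 'job view' until the job reaches a terminal state (simplified).
        while True:
            status = subprocess.run(["job", "view", job_id],
                                    capture_output=True, text=True).stdout
            if "Finished" in status or "Failed" in status:
                return status
            time.sleep(poll_seconds)

    def notify_researcher(message):
        print(f"[instant message] {message}")        # placeholder step

    def publish_to_sharepoint(result_path):
        print(f"[SharePoint upload] {result_path}")  # placeholder step

    if __name__ == "__main__":
        job_id = submit_job(r"mpiexec \\headnode\apps\solver.exe model.inp")
        wait_for_job(job_id)
        notify_researcher(f"Cluster job {job_id} has finished.")
        publish_to_sharepoint(r"\\headnode\results\model_results.xlsx")

In practice, the Excel and SharePoint hooks would sit behind the desktop tools themselves; the point of the sketch is simply that the job scheduler is scriptable enough to support that kind of integration.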

The time savings extend to the IT staff as well: P&G deployed a fully functional, Windows-based cluster in just a few hours, versus the two weeks or more that earlier clusters required.
 
"We expect to reduce the time spent on cluster integration and management by up to 20 percent," concludes Wilson.

