XSEDE HPC Workshop on MPI at Harvey Mudd College

Harvey Mudd College will be participating as a remote site in the Pittsburgh Supercomputing Center’s XSEDE HPC Workshop on MPI (Message Passing Interface). MPI is a message-passing library standard that can be used to parallelize your serial C/Fortran programs and algorithms to exploit multi-node, multi-core clusters (or supercomputers) for better performance and/or accuracy. If you are interested in learning MPI, please register for the workshop through XSEDE and join us in the Learning Studio Classroom on Wednesday, December 4th and Thursday, December 5th.

This is a two-day intensive workshop covering everything from the basics of MPI programming to more advanced skills.

The tentative agenda given below is subject to change.

Wednesday, December 4
All times given are PST

  • 08:00 Welcome
  • 08:15 Computing Environment
  • 09:00 Intro to Parallel Computing
  • 10:00 Lunch break
  • 11:00 Introduction to MPI
  • 12:30 Introductory Exercises
  • 13:30 Scalable Programming: Laplace code
  • 14:00 Adjourn/Laplace Exercises

Thursday, December 5
All times given are PST

  • 08:00 Laplace Exercises
  • 09:00 Laplace Solution
  • 09:30 Lunch break
  • 10:30 Advanced MPI
  • 11:30 Outro to Parallel Computing
  • 12:30 MPI Debugging and Profiling
  • 13:30 Adjourn

Please visit the workshop page for more information: https://www.psc.edu/index.php/training/xsede-hpc-workshop-december-2013

For more information about other XSEDE HPC trainings, please visit the course calendar page at https://portal.xsede.org/course-calendar

For any questions, please contact Jeho Park at CIS (x79023 or email jepark@hmc.edu).

CIS awarded two high-end GPUs from NVIDIA

(Disclaimer: Sorry gamers. These GPUs are not for 3D gaming, but for number crunching scientific calculations!)

CIS submitted a proposal to NVIDIA’s Academic Partnership Program on the last day of May. NVIDIA approved the proposal quickly, on June 11, and decided to donate two Tesla C2075 GPUs (MSRP $2,500 each) to us. Although it took a month to process the shipment at NVIDIA’s distribution center due to high demand, it was worth the wait!

Tesla C2075 from NVIDIA

As of this writing, the Tesla C2075 is one of the top GPUs on the market for GPU computing. It has 448 CUDA cores and 6GB of GDDR5 memory, which allow it to deliver 515 Gflops in double-precision calculations and 1030 Gflops in single precision. Yup, it’s a little monster and we’ve got two of them!
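For the curious, those peak numbers follow from the Fermi architecture’s arithmetic: each CUDA core can issue one fused multiply-add (two floating-point operations) per cycle in single precision, and Tesla-class Fermi parts run double precision at half the single-precision rate. The 1.15 GHz shader clock below comes from NVIDIA’s published board specs, not from this article, so treat it as an assumption in this quick sanity check:

```python
# Back-of-the-envelope peak-throughput check for the Tesla C2075.
# Assumptions (from NVIDIA's published specs, not this article):
#   448 CUDA cores, 1.15 GHz shader clock,
#   1 FMA (= 2 flops) per core per cycle in single precision,
#   double precision at half the single-precision rate (Fermi Tesla).
cores = 448
clock_ghz = 1.15
flops_per_core_per_cycle = 2  # one fused multiply-add counts as 2 flops

single_gflops = cores * clock_ghz * flops_per_core_per_cycle
double_gflops = single_gflops / 2

print(f"single precision: {single_gflops:.1f} Gflops")  # ~1030 Gflops
print(f"double precision: {double_gflops:.1f} Gflops")  # ~515 Gflops
```

Both results line up with the marketing figures quoted above.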

CIS (Andy Davenport and Jeho Park) has been working collaboratively with Prof. Vatche Sahakian in Physics on a pilot project to prepare a new high-performance GPU system on our campus. Prof. Sahakian generously offered to purchase a high-end GPU host computer, and CIS is providing the NVIDIA GPUs and technical support to build the new system. Once the setup is complete, the new system will be tested and later shared with faculty and student researchers at Harvey Mudd for high-performance GPU computing experiences and tests. If you are interested in GPU computing or wish to be involved in the GPU computing pilot, please contact Jeho Park at CIS for more information (or leave a comment on this post).


HPC @ HMC: Survey Results

In November 2011, CIS conducted a short survey on the use of High Performance Computing (HPC) resources at Harvey Mudd College. We received 16 responses: 14 from HPC users and 2 from non-HPC users, and we truly appreciate the time they took to fill out the survey. In this article, we share some of the more interesting results based on the answers from those 14 HPC users.

1. Departments using HPC in research and/or teaching

First, we wanted to know which departments were using HPC systems for their academic activities. Although we had only 14 responses, the result was clear: all seven departments have been involved in using HPC resources! Go Mudders! 🙂

2. The nature of the HPC use

The majority of respondents answered that they have used HPC facilities for simulation, data acquisition and analysis, and modeling. The “Other” selections include computer animation, mathematical computation, teaching HPC, and HPC research.

3. The location of the HPC facility

This result may be a bit misleading because multiple selections were allowed. One respondent has used an off-campus HPC facility; the rest have HPC facilities in department labs or computer rooms. Four of them also have additional high-end workstations under or on their desks. The “Other” selection indicated the use of a CIS server.

4. What have you used HPC for?

All 14 respondents answered that they have used HPC resources for research, and four of them have also used the resources for teaching (one of the four teaches about HPC in class). Note that multiple selections were also allowed for this question.

5. How well do the HPC facilities you use meet your current and future needs?

CIS is always interested in knowing whether computing resources meet your needs, now and in the future. About half of the respondents thought that their HPC systems are acceptable for current and future needs. Most of the respondents who chose “acceptable” for both current and future have relatively new HPC systems (less than 1 year old). On the other hand, those whose HPC systems are older than 3 years said that the resources might not meet their needs in the future. CIS may be able to help find the required HPC resources for those who answered this question with “unacceptable” (or “neutral”). For example, FutureGrid resources, as introduced in this news article, may fit your needs. Please contact our Scientific Computing Specialist (or leave a comment) for assistance in finding the right HPC resources for you.

Through this survey, we believe we’ve gathered very useful information for our HMC community. The HPC survey is still open at http://www.formstack.com/forms/hmc-hpc_survey_f2011. When you have time, please fill out this short survey so that we can learn more about your needs in High Performance Computing and help you accordingly.

SuperComputing (SC11) Conference for College Educators

The SuperComputing (SC) conference is the leading international conference on High Performance Computing (HPC), networking, storage, and analysis. This year, the 24th annual SC conference (SC11) was held in Seattle, WA, in November 2011. More than 5,000 participants gathered in one place to learn, discuss, and show off cutting-edge technologies in HPC and related areas.

Although the conference is huge in all respects, the beauty of the SC conference lies in its specialized sub-communities. One of these, the Education Program, is well organized to suit college educators who teach HPC and Scientific Computing. The main focus of the Education Program is learning and sharing better ways of teaching HPC and Scientific Computing (or Computational Science) to undergraduate faculty and students.

Jeho Park (Scientific Computing Specialist) at CIS attended the SC11 conference, where he learned about many good practices in HPC education and made relevant connections on behalf of our HMC community. A few of the takeaways worth mentioning are the Bootable Cluster CD (BCCD), the LittleFe project, and the FutureGrid project.

BCCD is a turnkey solution for building a Beowulf-style cluster on the fly. The BCCD boot image comes with a complete parallel computing environment, including the network setup, libraries, compilers, benchmarks, and applications needed to teach HPC to undergraduate faculty and students. So to teach distributed and parallel computing, you just need BCCD and a couple of networked workstations, or a computer with a multicore processor. BCCD even runs in virtual machine (VM) environments, which means you can boot multiple BCCD VMs on different cores and emulate a cluster environment right in front of your audience. CIS will be testing BCCD on our High Performance Workstations during the winter break. For more information, please visit http://bccd.net/.

LittleFe is an interesting project, funded in part by Intel (until this year), to build a portable (< 50 lb) six-node cluster for a relatively small amount of money (< $3,000). The LittleFe portable cluster is a simple and easy way to build a hardware and software resource for teaching parallel processing speedup, efficiency, and load balancing. CIS will keep an eye on the call for applications for 2012 LittleFe grants. If you are interested in being involved in this project at HMC, please contact Jeho at CIS.

If you are looking for a more serious type of HPC resource, take a good look at the FutureGrid project. FutureGrid focuses on offering new, dedicated test-bed environments for research challenges in grid- and cloud-enabled computational schemes in science and engineering. FutureGrid also actively supports education and broader outreach activities:

“…. The project will advance education and training in distributed computing at academic institutions with less diverse computational resources. It will do this through the development of instructional resources that include preconfigured environments that provide students with sandboxed virtual clusters….”

So it sounds like FutureGrid is waiting for your innovative ideas to exploit its new experimental test bed for your research and teaching on HPC, scientific computing, parallel computing, distributed computing, and cloud computing. Harvey Mudd College is an especially good fit for FutureGrid in terms of its scope, so we encourage faculty members to look at the FutureGrid website and to contact CIS for any assistance in applying for FutureGrid instances.

The next SC conference, SC12, will be held in Salt Lake City, Utah, beginning November 10, 2012.