
Very bad parallel scaling of vasp over Gbit ethernet

Posted: Mon Aug 17, 2009 9:48 pm
by cpp6f
Maybe this should have gone in my previous post, but it is somewhat of a different topic. I have compiled VASP 4.6.34 with the Intel Fortran compiler 11.1 and OpenMPI 1.3.3 on a cluster of 104 nodes running Rocks 5.2; each node has two quad-core Opterons, and the nodes are connected by Gbit Ethernet.

Running in parallel on one node (8 cores) works very well, faster than on any other cluster I have used. However, running on 2 nodes improves performance by only 10% over the one-node case, and running on 4 or 8 nodes gives no improvement over the two-node case. Furthermore, when running multiple (3-4) jobs simultaneously, performance drops by around 50% compared to running a single job on the entire cluster.

The nodes are connected through a Dell PowerConnect 6248 managed switch. I get the same performance with MPICH2, so I don't think the problem is specific to OpenMPI. Other VASP users have reported very good scaling up to 4 nodes on a similar cluster, so I don't think the problem is VASP either. Could something be wrong with the way MPI is configured to work with the switch? Or is the operating system not configured to work with the switch properly? Or does the switch itself need to be configured? Thanks!
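For reference, the kind of configuration in question looks roughly like the following. This is only a sketch: the interface name eth0, the hostfile, the process count, and the executable name vasp are placeholders for this cluster, and the INCAR values are the ones commonly suggested for slow interconnects such as Gbit Ethernet, not anything verified here.

    # OpenMPI launch restricted to the TCP transport on one named interface
    mpirun -np 16 --hostfile hosts \
           --mca btl tcp,sm,self \
           --mca btl_tcp_if_include eth0 \
           ./vasp

    # INCAR fragment often recommended when node-to-node bandwidth is low
    LPLANE = .TRUE.    ! plane-wise distribution of FFT data, fewer messages
    NPAR   = 2         ! roughly one band group per node
    NSIM   = 4         ! blocked RMM-DIIS, larger matrix-matrix operations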

Re: Very bad parallel scaling of vasp over Gbit ethernet

Posted: Wed Sep 04, 2024 2:09 pm
by support_vasp

Hi,

We're sorry that we didn’t answer your question. This does not live up to the quality of support that we aim to provide. The team has since expanded. If we can still help with your problem, please ask again in a new post, linking to this one, and we will answer as quickly as possible.

Best wishes,

VASP