Revision as of 22:00, 2 June 2009

ENZO Performance Study Summary

This page shows the performance results for ENZO from the svn repository. We did this in part to see the effect of load balancing (not enabled in version 1.5) on scaling performance. The previous performance results for ENZO version 1 are available here.


Enzo Version 1.5

Following the release of Enzo 1.5 in November 2008 we have done some follow-up performance studies. Our initial findings are similar to what we found for version 1. For example, see this chart showing the scaling behavior of Enzo 1.5 on Kraken:

EnzoScalingKraken.png

Scaling behavior was very similar on Ranger.

This poor scaling behavior could be anticipated by looking at the runtime breakdown (mean over 64 processors on Ranger):

EnzoMeanBreakdown.png

With this much time spent in MPI communication, increasing the number of processors is unlikely to result in much faster simulations. Looking more closely at MPI_Recv and MPI_Barrier, we see that on average 5.2 ms is spent per call in MPI_Recv and 40.4 ms per call in MPI_Barrier. This is much longer than can be explained by communication latency on Ranger's InfiniBand interconnect.
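The argument above can be sketched with an Amdahl-style bound: if a fraction of the runtime goes to MPI communication that does not shrink as processors are added, the achievable speedup is capped regardless of processor count. The communication fraction used below is a hypothetical stand-in, not a value measured from the chart.

```python
# Back-of-the-envelope estimate: only the compute portion of the runtime
# scales with processor count; the communication fraction does not.
# comm_fraction = 0.5 is a hypothetical illustration, not measured data.

def bounded_speedup(p, comm_fraction):
    """Speedup on p processors when only (1 - comm_fraction) of the work scales."""
    return 1.0 / (comm_fraction + (1.0 - comm_fraction) / p)

f = 0.5  # hypothetical: half the runtime in non-scaling MPI communication
for p in (64, 128, 256):
    print(p, round(bounded_speedup(p, f), 2))
# With f = 0.5 the speedup can never reach 2x, no matter how many
# processors are used -- consistent with the flat scaling curves above.
```

This is why the runtime breakdown predicts the scaling charts: once communication dominates, doubling the processor count buys almost nothing.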

Next we looked at how enabling load balancing affects performance. This is a runtime comparison between the non-load-balanced and load-balanced simulations.

EnzoMeanComp.png

Time spent in MPI_Barrier decreases but is mostly offset by an increase in time spent in MPI_Recv.