Scientific Applications

== [http://cobweb.ecn.purdue.edu/~gekco/nemo3D/ NEMO3D] ==
 
NEMO3D calculates eigenstates in (almost) arbitrarily shaped semiconductor structures made of the typical group IV and III-V materials. An educational version has been running on [https://www.nanohub.org/ nanoHUB] for a year, with executions that take only a few seconds; it has been used by over 600 people. A production version is expected soon on large systems, with runs that will require hours of CPU time and serve hundreds of users. The code is currently being ported to the dual-core Cray XT3 at [http://www.psc.edu PSC].
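At its core, an eigenstate calculation of this kind reduces to diagonalizing a large atomistic Hamiltonian. As a hedged sketch of the idea (this is not NEMO3D's code; the site energy <code>eps</code> and hopping <code>t</code> are illustrative parameters, and real NEMO3D uses multi-band 3D tight-binding Hamiltonians), a 1D nearest-neighbor chain makes the structure visible:

```python
# Minimal sketch (not NEMO3D itself): eigenstates of a 1D nearest-neighbor
# tight-binding chain with open boundaries. The Hamiltonian is tridiagonal:
# site energy eps on the diagonal, hopping t on the off-diagonals.
import numpy as np

def tight_binding_chain(n_sites, eps=0.0, t=-1.0):
    """Build the tridiagonal Hamiltonian for an open 1D chain."""
    H = np.diag(np.full(n_sites, eps))
    H += np.diag(np.full(n_sites - 1, t), k=1)
    H += np.diag(np.full(n_sites - 1, t), k=-1)
    return H

n = 50
H = tight_binding_chain(n)
# Eigenvalues come back ascending; columns of `states` are the eigenstates.
energies, states = np.linalg.eigh(H)

# Sanity check against the known analytic spectrum of this toy model:
# E_k = eps + 2*t*cos(k*pi/(n+1)), k = 1..n.
analytic = np.sort(2 * -1.0 * np.cos(np.arange(1, n + 1) * np.pi / (n + 1)))
assert np.allclose(energies, analytic)
```

A dense diagonalization like this scales as O(n³), which is why the production code targets large parallel machines and iterative eigensolvers rather than anything resembling this toy.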
 
* [[NEMO3DDiscussion | Discussion/Notes]]
* [[NEMO3DPerformance | Performance Results]]

Revision as of 19:33, 18 January 2008


== ENZO ==

ENZO is an adaptive mesh refinement (AMR), grid-based hybrid code (hydro + N-body) designed to simulate cosmological structure formation. Understanding the performance of AMR applications on distributed-memory architectures is challenging because of the dynamic multilevel data structures and the variety of communication patterns involved.
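The "dynamic multilevel data structures" can be made concrete with a toy refinement pass. This is illustrative only, not ENZO's actual scheme (the gradient flag and factor-of-two refinement are assumptions for the sketch): cells where the solution jumps steeply get flagged, and each flagged run is covered by a child patch at twice the resolution.

```python
# Toy 1D block-structured AMR refinement pass (illustrative, not ENZO's code):
# flag coarse cells with steep neighbor-to-neighbor jumps, then cover each
# flagged run with a child patch at twice the resolution.
import numpy as np

def flag_cells(u, threshold):
    """Flag cells adjacent to a jump whose magnitude exceeds threshold."""
    jumps = np.abs(np.diff(u))
    flags = np.zeros(len(u), dtype=bool)
    flags[:-1] |= jumps > threshold   # left cell of each steep jump
    flags[1:] |= jumps > threshold    # right cell of each steep jump
    return flags

def refine(u, flags):
    """Return (start, end, fine_data) child patches covering flagged runs."""
    patches = []
    i = 0
    while i < len(u):
        if flags[i]:
            j = i
            while j < len(u) and flags[j]:
                j += 1
            # Fill the child patch by linear interpolation at 2x resolution.
            coarse_x = np.arange(i, j)
            fine_x = np.arange(i, j - 0.5, 0.5)
            patches.append((i, j, np.interp(fine_x, coarse_x, u[i:j])))
            i = j
        else:
            i += 1
    return patches

# A step-like profile: only the cells around the jump should be refined.
u = np.array([0.0, 0.0, 0.0, 1.0, 1.0, 1.0])
flags = flag_cells(u, threshold=0.5)
patches = refine(u, flags)
```

Because the flagged regions move with the solution, the patch hierarchy is rebuilt every few steps; on distributed memory that means data structures and communication partners change at run time, which is exactly what makes AMR performance hard to analyze.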

== NAMD ==

Development of NAMD is a collaborative effort between the Theoretical and Computational Biophysics Group (TCBG) and the Parallel Programming Laboratory (PPL) at UIUC. NAMD is built on PPL's Charm++ parallel programming system, which has extensive support for latency tolerance and dynamic load balancing; efficient, lightweight communication is critical for Charm++ and the applications built within this framework.
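Charm++ achieves dynamic load balancing by overdecomposing work into many migratable objects and periodically remapping them from measured loads. A hedged sketch of the greedy strategy, in the spirit of Charm++'s GreedyLB but not its actual code (the load numbers here are made up for illustration): assign the heaviest objects first, each to the currently least-loaded processor.

```python
# Sketch of measurement-based greedy load balancing in the spirit of
# Charm++'s GreedyLB (illustrative only): heaviest migratable objects are
# assigned first, each to the currently least-loaded processor.
import heapq

def greedy_balance(object_loads, num_procs):
    """Map object index -> processor id, heaviest-first onto lightest proc."""
    # Min-heap of (accumulated_load, processor_id) so the lightest
    # processor is always at the top.
    procs = [(0.0, p) for p in range(num_procs)]
    heapq.heapify(procs)
    assignment = {}
    for obj, load in sorted(enumerate(object_loads),
                            key=lambda kv: kv[1], reverse=True):
        proc_load, p = heapq.heappop(procs)
        assignment[obj] = p
        heapq.heappush(procs, (proc_load + load, p))
    return assignment

# Hypothetical measured per-object CPU times from the previous interval.
loads = [5.0, 3.0, 3.0, 2.0, 2.0, 1.0]
mapping = greedy_balance(loads, num_procs=2)
```

The point of overdecomposition is that there are many more objects than processors, so a remapping like this can even out load without splitting any single object; while an object's messages are in flight, the runtime schedules other objects on the same processor, which is where the latency tolerance comes from.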