LAMMPS

Short description

LAMMPS is a classical molecular dynamics code, and an acronym for Large-scale Atomic/Molecular Massively Parallel Simulator.

LAMMPS has potentials for soft materials (biomolecules, polymers) and solid-state materials (metals, semiconductors) and coarse-grained or mesoscopic systems. It can be used to model atoms or, more generically, as a parallel particle simulator at the atomic, meso, or continuum scale.

LAMMPS template job for Grid submission

To execute a LAMMPS template job on the Grid you will need a JDL file describing the job itself (how many CPUs to use, what output files are expected, etc.), a wrapper shell script and a LAMMPS input file.

In the following we will go through the specific template files one by one.

Note: if you would prefer to skip the job details and proceed directly with the submission of the LAMMPS job, feel free to do so.

The JDL file

The template JDL file for a LAMMPS job is the following:

Executable = ""; 
Arguments = "in.msst";
CpuNumber = 2; 
StdOutput = "std.out"; 
StdError = "std.err"; 
InputSandbox = {"","in.msst"}; 
OutputSandbox = {"std.err","std.out","log.lammps"}; 
Requirements = Member("MPI-START", other.GlueHostApplicationSoftwareRunTimeEnvironment)
        && Member("MPICH2", other.GlueHostApplicationSoftwareRunTimeEnvironment)
        && (other.GlueHostArchitecturePlatformType == "x86_64")
        && (  (other.GlueCEInfoHostName == "")
           || (other.GlueCEInfoHostName == "")
           || (other.GlueCEInfoHostName == "")
           || (other.GlueCEInfoHostName == "")
           || (other.GlueCEInfoHostName == "") );

On the first line we define the wrapper script that is to be executed on the Worker Nodes (the script provided in the attachments at the bottom of this page). On the second line we define an argument to be read by the wrapper script; this should be the name of the LAMMPS input file (in.msst in this case). The CpuNumber line sets the number of CPUs the job will use (2 here). The next two lines name the files to which the standard output and error of the batch job will be redirected. The InputSandbox line lists the input files of the job: the wrapper script and the LAMMPS input file (in.msst in this case). If LAMMPS requires further input files (e.g. a data.msst file) these should be added to the InputSandbox line. The OutputSandbox line lists the files we want to retrieve once the job has finished (in this case the standard output and error and the log.lammps file). Finally, the Requirements expression demands a Grid cluster that supports the MPI-START mechanism, has the MPICH2 flavour installed and has an x86_64 hardware architecture; the GlueCEInfoHostName clauses additionally restrict the job to an explicit list of Computing Elements.
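
Before submitting anything, you can ask the WMS which Computing Elements actually satisfy the Requirements expression. This is a quick sanity check from the User Interface (it assumes the JDL is saved as lammps.jdl, as in the attachments below, and that you hold a valid proxy for the see VO):

voms-proxy-init -voms see
glite-wms-job-list-match -a lammps.jdl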

The wrapper script

The contents of the wrapper script we will be using are:


#!/bin/bash

# Download the precompiled LAMMPS executable from the catalogue
export EXE=lmp_linux
lcg-cp lfn:/grid/see/gridauth-users/lammps-1Feb14/bin/$EXE file:$EXE
chmod +x $EXE

# Execute LAMMPS through MPI-START
export MPI_FLAVOR=MPICH2   # must match the flavour requested in the JDL
export MPI_FLAVOR_LOWER=`echo $MPI_FLAVOR | tr '[:upper:]' '[:lower:]'`
eval MPI_PATH=`printenv MPI_${MPI_FLAVOR}_PATH`
export I2G_MPI_TYPE=$MPI_FLAVOR_LOWER
export I2G_MPI_APPLICATION=./$EXE
export I2G_MPI_APPLICATION_ARGS="-in $1"   # input file name passed as first argument
$I2G_MPI_START   # hand control to the mpi-start launcher

This wrapper script is structured into two distinct sections:

  1. Download of executable file
  2. Execution of LAMMPS

Download of executable file

In this section the precompiled LAMMPS binary is downloaded from the Grid file catalogue (LFC) onto the Worker Node. This binary has been precompiled with the Intel compilers (version 11.1) and the MPICH2 (version 1.2.1p1) library. For more information please do not hesitate to contact the helpdesk.
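
If you want to check that the precompiled binary is actually present in the catalogue, you can list its directory from the User Interface first. This is a sketch that assumes the same see/auth module used in the examples section below:

module load see/auth
lfc-ls -l /grid/see/gridauth-users/lammps-1Feb14/bin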

Execution of LAMMPS

In this section the MPI-START environment is set up and the LAMMPS model is executed. Notice that the input file is passed to the executable via the Arguments line of the JDL file. In general no changes to this part of the wrapper script will be required.
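
For orientation only: on the allocated Worker Nodes the net effect of the MPI-START invocation is roughly equivalent to the direct MPICH2 launch below. MPI-START additionally takes care of host files, file distribution and scheduler integration, so do not replace it with this:

mpiexec -np 2 ./lmp_linux -in in.msst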

Job submission and collection of results

The LAMMPS template job presented here should work as is. We encourage you to test it before making the modifications needed to adapt the template files to your own use case.

As a first step, log into the User Interface, then create and enter a folder which will contain all job input and output files:

mkdir lammps_job
cd lammps_job

To start off, download the JDL file (lammps.jdl), the wrapper script and the in.msst input file; all three are available as attachments at the bottom of this page.
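
If you prefer the command line, TWiki serves page attachments from its pub area, so commands along the following lines should work; the host, web path and wrapper script name are placeholders that you must replace with the actual values:

wget https://<wiki-host>/pub/<web>/LAMMPS/lammps.jdl
wget https://<wiki-host>/pub/<web>/LAMMPS/<wrapper-script>
wget https://<wiki-host>/pub/<web>/LAMMPS/in.msst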


Then submit this template job. On the User Interface the following two commands should be used.

voms-proxy-init -voms see
glite-wms-job-submit -a -o id lammps.jdl

For more information on job management please visit this guide.

Check the status of your job with the following command:

glite-wms-job-status -i id
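
If you prefer not to re-run this command by hand, a small loop on the User Interface re-checks the status every five minutes (a convenience sketch; the id file is the one written by the -o option at submission, and --noint suppresses interactive prompts):

while true; do
    glite-wms-job-status --noint -i id
    sleep 300
done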

Output files are retrievable via the OutputSandbox. To retrieve the contents of the OutputSandbox (std.out, std.err and log.lammps in our case) you need to execute the following command once the job finishes execution.

glite-wms-job-output --dir ./result -i id

Look for the result folder. Within it there should be a subfolder containing the three files. Of most interest is the log.lammps file; use the following command to view its contents:

cat log.lammps
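
A completed LAMMPS run ends its log with a wall-clock timing summary, so a quick way to confirm the run finished properly is to look at the tail of the file:

tail -n 20 log.lammps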

Adapt this LAMMPS template job to your needs

The first change you will probably need to make to adapt this template job is to use another input file for LAMMPS. Several examples are available for download from Grid storage elements. To view the available examples, type the following commands on the command line:

module load see/auth
lfc-ls -l /grid/see/gridauth-users/lammps/cases

To download (and possibly use) one of these example files (e.g. melt), use the following command:

lcg-cp lfn:/grid/see/gridauth-users/lammps/cases/melt/in.melt file:in.melt

To adapt this LAMMPS template job to your own use case you will need your own input file, produced either by modifying one of the examples or by writing a new one from scratch. Once your input file is ready, the changes you need to make to the JDL and the wrapper script are presented below.

Changes in the JDL file

A few changes may be needed here. First of all, you may want to change the number of CPU cores in line 3:

CpuNumber = 12;

depending on your system size.

The graph attached to this page (lammps.png) shows the execution time of the LAMMPS model when varying the number of CPU cores on HellasGrid compute resources, for the template job described above.
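
If you want to reproduce such a scaling test, you can generate and submit one JDL per core count from the template. A minimal sketch, assuming the template is saved as lammps.jdl and a proxy is already in place:

for n in 2 4 8 16; do
    sed "s/^CpuNumber = .*/CpuNumber = $n;/" lammps.jdl > lammps_${n}cores.jdl
    glite-wms-job-submit -a -o id_${n}cores lammps_${n}cores.jdl
done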


Then you need to change the name of the input file in lines 2 and 6. Moreover, if more input files are needed, add them in line 6. The example below shows the changes one would make to submit the chain example case:

Arguments = "in.chain";

InputSandbox = {"","in.chain","data.chain"};

Finally, if output files other than log.lammps are expected, include them in line 7:

OutputSandbox = {"std.err","std.out","log.lammps","out.chain"};
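
Putting these changes together, the sketch below fetches the chain inputs and writes the adapted JDL in one go. The catalogue path for the chain case is assumed by analogy with the melt example above, <wrapper-script> stands for the actual wrapper script file name from the attachments, and out.chain is only expected if your input script actually produces such a file:

# Chain inputs; path assumed analogous to the melt example
lcg-cp lfn:/grid/see/gridauth-users/lammps/cases/chain/in.chain file:in.chain
lcg-cp lfn:/grid/see/gridauth-users/lammps/cases/chain/data.chain file:data.chain

# Write the adapted JDL (<wrapper-script> is a placeholder)
cat > lammps_chain.jdl <<'EOF'
Executable = "<wrapper-script>";
Arguments = "in.chain";
CpuNumber = 2;
StdOutput = "std.out";
StdError = "std.err";
InputSandbox = {"<wrapper-script>","in.chain","data.chain"};
OutputSandbox = {"std.err","std.out","log.lammps","out.chain"};
Requirements = Member("MPI-START", other.GlueHostApplicationSoftwareRunTimeEnvironment)
        && Member("MPICH2", other.GlueHostApplicationSoftwareRunTimeEnvironment)
        && (other.GlueHostArchitecturePlatformType == "x86_64");
EOF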

Changes in the wrapper script

No changes need to be made in the wrapper script.

Submit your job as described in the section Job submission and collection of results above.

If you receive any errors, please use this guide as a guideline for reporting your problem.

If at some point you have any questions or suggestions please feel free to drop us an email.

Topic attachments
Attachment   Size     Date         Comment
in.msst      1.0 K    2011-06-17   Example input file for LAMMPS
lammps.jdl   0.4 K    2014-07-04   Template JDL file for LAMMPS job submission
lammps.png   140.4 K  2011-06-27   Benchmarking of LAMMPS on HellasGrid compute resources
(.sh)        0.5 K    2014-07-04   Template wrapper script file for LAMMPS job submission
