Compilation of VASP with MPICH

Questions regarding the compilation of VASP on various platforms: hardware, compilers and libraries, etc.


Moderators: Global Moderator, Moderator

ey

Compilation of VASP with MPICH

#1 Post by ey » Wed Mar 11, 2009 10:13 am

Dear Vasp users,
I have installed MPICH 1.2.7, which gave me the mpif90 command, and then modified the Makefile to use MPI. But I think the problem is with how MPICH was installed (configured). When I run "mpif90" it says:
No Fortran 90 compiler specified when mpif90 was created,
or configuration file does not specify a compiler


By the way, I am using the Intel Fortran compiler 10. I ran the following commands before installing MPICH:
export F90=ifort , export FLIB=-static-libcxa , export F90LIB=-static-libcxa .....

I think you understand my simple problem; I need your valuable experience.
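For later readers: the message "No Fortran 90 compiler specified when mpif90 was created" means MPICH's configure never saw a Fortran 90 compiler, so the compiler variables have to be set when MPICH itself is configured and built. A minimal sketch (the install prefix here is an example, not the poster's actual path):

```
# Tell MPICH's configure which compilers the wrappers should use
export F77=ifort F90=ifort CC=icc CXX=icpc
./configure --enable-f77 --enable-f90 --prefix=/opt/mpich-1.2.7
make && make install
# the resulting bin/mpif90 should now invoke ifort
```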

pafell
Newbie
Posts: 24
Joined: Wed Feb 18, 2009 11:40 pm
License Nr.: 196
Location: Poznań, Poland

Compilation of VASP with MPICH

#2 Post by pafell » Wed Mar 11, 2009 4:48 pm

What do you mean by "I have installed"? Did you compile MPICH from source? Which Fortran compiler did you use?

ey

Compilation of VASP with MPICH

#3 Post by ey » Fri Mar 13, 2009 8:48 am

Yes, I compiled it from source; I am using Intel Fortran compiler 10.1.22.
Could the problem be in how MPICH was compiled?
Dear pafell, I changed my Makefile as you told me and it worked for serial use. Here is your help:
" http://cms.mpi.univie.ac.at/vasp-forum/ ... php?2.5374 "

I created a new directory for parallel VASP. Should I make the same changes in the Makefile for the parallel build?
I think:
1. It may be a problem with how MPICH was compiled. Which MPI implementation should I prefer, and how can I check whether I compiled it correctly?
2. It may be about the Makefile. Which lines should be checked?
Thank you.

pafell
Newbie
Posts: 24
Joined: Wed Feb 18, 2009 11:40 pm
License Nr.: 196
Location: Poznań, Poland

Compilation of VASP with MPICH

#4 Post by pafell » Mon Mar 16, 2009 4:54 pm

I'm happy I could help with the previous problem.
Regarding MPICH: I'm not using it, but I doubt there is any difference from compiling with LAM/MPI or Open MPI (the two I am/was using).
Here's what I do in my Makefile:
- comment out the FC and FCL lines near the top of the file (just for clarity - those two would be overridden by the definitions later on)
- set the OFLAG, BLAS and LAPACK lines just like in the serial version
- you need FFTW compiled with Intel's compiler and its path in the FFT3D variable, or you can use the included FFT library (although I'd recommend FFTW3)
- FC = mpif90
- in the CPP line of the MPI section, make sure -DMPI is defined (it is by default)
- SCA =
The value above is empty - I don't use ScaLAPACK because of our slow network.
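Put together, the checklist above corresponds to a Makefile fragment roughly like this (a sketch only - the FFTW path is an assumption and must match your own installation):

```
# fortran compiler/linker: the MPI wrapper instead of plain ifort
FC  = mpif90
FCL = $(FC)

# CPP line of the MPI section: -DMPI must be defined (it is by default)
CPP = $(CPP_) -DMPI -DHOST=\"LinuxIFC\" -DIFC \
      -Dkind8 -DNGZhalf -DCACHE_SIZE=4000 -Davoidalloc

# empty: no ScaLAPACK (slow network)
SCA =

# fftw3 compiled with the Intel compiler (path is an example)
FFT3D = fftmpiw.o fftmpi_map.o fft3dlib.o /opt/fftw-3.0.1/lib/libfftw3.a
```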

Now to the problem itself:
In which directory did you install MPICH? Is that directory in your $PATH? If yes, what do you get when you run mpif90 without any arguments?

ey

Compilation of VASP with MPICH

#5 Post by ey » Tue Mar 17, 2009 11:37 am

Yes, I did as you said, and this is what I get when I run mpif90:

$ mpif90
/opt/intel/fce/10.1.022/lib/for_main.o: In function `main':
/export/users/nbtester/efi2linuxx86_nightly/branch-10_1/20090205_010000/libdev/frtl/src/libfor/for_main.c:(.text+0x26): undefined reference to `MAIN__'

By the way, would it be possible for you to share the Makefile?
Thank you

pafell
Newbie
Posts: 24
Joined: Wed Feb 18, 2009 11:40 pm
License Nr.: 196
Location: Poznań, Poland

Compilation of VASP with MPICH

#6 Post by pafell » Tue Mar 17, 2009 9:28 pm

The problem is definitely with the MPICH installation. I've never done it before, but, as I expected, it's just like LAM/MPI. What I did to compile MPICH is:
source /opt/intel/Compiler/11.0/074/bin/ifortvars.sh
source /opt/intel/Compiler/11.0/074/bin/iccvars.sh
cd /usr/src
tar xzf mpich2-1.0.8.tar.gz
cd mpich2-1.0.8
F77=ifort F90=ifort CC=icc CXX=icpc ./configure --enable-f77 --enable-f90 --enable-cxx --prefix=/opt/mpich2-1.0.8

So, as you can see, I set the F77, F90, CC and CXX variables to use Intel's compilers. I install software that is distributed via NFS to the /opt directory, which is why the --prefix looks like that.
After this, your mpif90 should work correctly (it's just a wrapper that links the correct MPI libraries, so it's not a must, but it simplifies compilation a lot), and you could try the modifications from my previous post.
I'm not sure if I'm allowed to post the Makefile here, but I'm sure my previous message contains all the important modifications - you should enable FFTW3 and (unless you want to specify all the MPI libraries yourself) set FC and FCL to mpif90. Of course, you have to uncomment the CPP line in the MPI part of the Makefile. That's it.

After a first look at MPICH, I find LAM/MPI a bit simpler to start with.
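Before attempting VASP, it may help to sanity-check the wrapper with a trivial MPI program (a sketch; the file name and process count are arbitrary, and it assumes mpif90/mpirun are in your $PATH):

```
# write a minimal MPI "hello" program (hypothetical file name)
cat > hello_mpi.f90 <<'EOF'
program hello
  implicit none
  include 'mpif.h'
  integer :: ierr, rank
  call MPI_INIT(ierr)
  call MPI_COMM_RANK(MPI_COMM_WORLD, rank, ierr)
  print *, 'hello from rank', rank
  call MPI_FINALIZE(ierr)
end program hello
EOF
mpif90 -o hello_mpi hello_mpi.f90   # must link without "undefined reference" errors
mpirun -np 2 ./hello_mpi            # one line of output per rank
```

If this links and runs, the wrapper and the MPI libraries are fine, and any remaining errors are in the VASP Makefile.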

ey

Compilation of VASP with MPICH

#7 Post by ey » Wed Mar 18, 2009 11:10 am

Thank you very much for the helpful advice. I think I have set up MPICH correctly now; I tested it with its bundled examples (calculating pi on 8 nodes...) and it worked.

But when I try to compile VASP, here is what it says:

main.o: In function `MAIN__':
main.f90:(.text+0x17643): undefined reference to `mpi_barrier_'
main.f90:(.text+0x44663): undefined reference to `mpi_barrier_'
mpi.o: In function `m_sumb_d_':
mpi.f90:(.text+0xbc): undefined reference to `mpi_allreduce_'
mpi.f90:(.text+0x219): undefined reference to `mpi_abort_'
mpi.o: In function `m_sum_d_':
mpi.f90:(.text+0x30d): undefined reference to `mpi_allreduce_'
mpi.f90:(.text+0x46a): undefined reference to `mpi_abort_'
mpi.o: In function `m_sumf_d_':
mpi.f90:(.text+0xade): undefined reference to `mpi_allreduce_'
mpi.f90:(.text+0xc06): undefined reference to `mpi_abort_'
mpi.o: In function `m_alltoall_d_':
mpi.f90:(.text+0xf0a): undefined reference to `mpi_irecv_'
mpi.f90:(.text+0xfb5): undefined reference to `mpi_isend_'
mpi.f90:(.text+0x1041): undefined reference to `mpi_waitall_'
mpi.f90:(.text+0x10ec): undefined reference to `mpi_abort_'
mpi.f90:(.text+0x1180): undefined reference to `mpi_abort_'
mpi.f90:(.text+0x121c): undefined reference to `mpi_abort_'
mpi.o: In function `m_sum_z_':
mpi.f90:(.text+0x1301): undefined reference to `mpi_allreduce_'
mpi.f90:(.text+0x13de): undefined reference to `mpi_abort_'
mpi.o: In function `m_sumf_z_':
mpi.f90:(.text+0x1ac3): undefined reference to `mpi_allreduce_'
mpi.f90:(.text+0x1beb): undefined reference to `mpi_abort_'
mpi.o: In function `m_alltoall_wait_':
mpi.f90:(.text+0x1db9): undefined reference to `mpi_waitall_'
mpi.f90:(.text+0x1de5): undefined reference to `mpi_waitall_'
mpi.f90:(.text+0x1e7a): undefined reference to `mpi_abort_'
mpi.f90:(.text+0x1f05): undefined reference to `mpi_abort_'
mpi.o: In function `m_alltoall_d_async_':
...
mpi.f90:(.text+0x5704): undefined reference to `mpi_abort_'
paw.o: In function `paw_mp_rd_rho_paw_':
paw.f90:(.text+0x18869): undefined reference to `mpi_barrier_'
electron.o: In function `elmin_':
electron.f90:(.text+0x535): undefined reference to `mpi_barrier_'
fftmpi_map.o: In function `mapset_':
fftmpi_map.f90:(.text+0x3165): undefined reference to `mpi_barrier_'
make: *** [vasp] Error 1



What is this problem about?
Thank you.

pafell
Newbie
Posts: 24
Joined: Wed Feb 18, 2009 11:40 pm
License Nr.: 196
Location: Poznań, Poland

Compilation of VASP with MPICH

#8 Post by pafell » Wed Mar 18, 2009 11:00 pm

Did you recompile MPICH with Intel's Fortran compiler? Fortran 90 linking is not enabled by default. What is the output of:
mpif90 -v
You should see something like:
mpif90 for 1.0.8
Version 10.1
or whatever versions of MPICH/ifort you've got.

You could try doing a make clean before recompiling VASP, if you haven't already.

Also, showing a bit more output could help - you've removed the lines showing the compiler and its parameters.

admin
Administrator
Posts: 2921
Joined: Tue Aug 03, 2004 8:18 am
License Nr.: 458

Compilation of VASP with MPICH

#9 Post by admin » Thu Mar 19, 2009 9:04 am

Have you set the $MPI shell variable and all paths correctly?

ey

Compilation of VASP with MPICH

#10 Post by ey » Thu Mar 19, 2009 12:49 pm

I have fixed the problem by recompiling MPICH 1.2.7 with the ch_shmem device.

Here is the configuration line :

$ F77=ifort F90=ifort CC=icc CXX=icpc ./configure --enable-f77 --enable-f90 --enable-cxx --with-device=ch_shmem --with-comm=shared --prefix=/opt/mpich/mpich-1.2.7 --with-common-prefix=/opt/mpich/mpich-1.2.7

Thank you again, dear pafell :)

And here is my vasp.4.6 makefile:


.SUFFIXES: .inc .f .f90 .F
#-----------------------------------------------------------------------
# Makefile for Intel Fortran compiler for P4 systems
#
# The makefile was tested only under Linux on Intel platforms
# (Suse 5.3- Suse 9.0)
# the following compiler versions have been tested
# 5.0, 6.0, 7.0 and 7.1 (some 8.0 versions seem to fail compiling the code)
# presently we recommend version 7.1 or 7.0, since these
# releases have been used to compile the present code versions
#
# it might be required to change some of the library paths, since
# LINUX installation vary a lot
# Hence check ***ALL**** options in this makefile very carefully
#-----------------------------------------------------------------------
#
# BLAS must be installed on the machine
# there are several options:
# 1) very slow but works:
# retrieve the lapackage from ftp.netlib.org
# and compile the blas routines (BLAS/SRC directory)
# please use g77 or f77 for the compilation. When I tried to
# use pgf77 or pgf90 for BLAS, VASP hang up when calling
# ZHEEV (however this was with lapack 1.1 now I use lapack 2.0)
# 2) most desirable: get an optimized BLAS
#
# the two most reliable packages around are presently:
# 3a) Intels own optimised BLAS (PIII, P4, Itanium)
# http://developer.intel.com/software/products/mkl/
# this is really excellent when you use Intel CPU's
#
# 3b) or obtain the atlas based BLAS routines
# http://math-atlas.sourceforge.net/
# you certainly need atlas on the Athlon, since the mkl
# routines are not optimal on the Athlon.
# If you want to use atlas based BLAS, check the lines around LIB=
#
# 3c) mindblowing fast SSE2 (4 GFlops on P4, 2.53 GHz)
# Kazushige Goto's BLAS
# http://www.cs.utexas.edu/users/kgoto/signup_first.html
#
#-----------------------------------------------------------------------

# all CPP processed fortran files have the extension .f90
SUFFIX=.f90

#-----------------------------------------------------------------------
# fortran compiler and linker
#-----------------------------------------------------------------------
#FC=ifort
# fortran linker
#FCL=$(FC)


#-----------------------------------------------------------------------
# whereis CPP ?? (I need CPP, can't use gcc with proper options)
# that's the location of gcc for SUSE 5.3
#
# CPP_ = /usr/lib/gcc-lib/i486-linux/2.7.2/cpp -P -C
#
# that's probably the right line for some Red Hat distribution:
#
# CPP_ = /usr/lib/gcc-lib/i386-redhat-linux/2.7.2.3/cpp -P -C
#
# SUSE X.X, maybe some Red Hat distributions:

CPP_ = ./preprocess <$*.F | /usr/bin/cpp -P -C -traditional >$*$(SUFFIX)

#-----------------------------------------------------------------------
# possible options for CPP:
# NGXhalf charge density reduced in X direction
# wNGXhalf gamma point only reduced in X direction
# avoidalloc avoid ALLOCATE if possible
# IFC work around some IFC bugs
# CACHE_SIZE 1000 for PII,PIII, 5000 for Athlon, 8000-12000 P4
# RPROMU_DGEMV use DGEMV instead of DGEMM in RPRO (depends on used BLAS)
# RACCMU_DGEMV use DGEMV instead of DGEMM in RACC (depends on used BLAS)
#-----------------------------------------------------------------------

#CPP = $(CPP_) -DHOST=\"LinuxIFC\" \
#      -Dkind8 -DNGXhalf -DCACHE_SIZE=12000 -DPGF90 -Davoidalloc \
#      -DRPROMU_DGEMV -DRACCMU_DGEMV

#-----------------------------------------------------------------------
# general fortran flags (there must a trailing blank on this line)
#-----------------------------------------------------------------------

FFLAGS = -FR -lowercase -assume byterecl
#FFLAGS= -I/opt/intel/mkl/10.1.1.019/include/fftw
#-----------------------------------------------------------------------
# optimization
# we have tested whether higher optimisation improves performance
# -axK SSE1 optimization, but also generate code executable on all mach.
# xK improves performance somewhat on XP, and a is required in order
# to run the code on older Athlons as well
# -xW SSE2 optimization
# -axW SSE2 optimization, but also generate code executable on all mach.
# -tpp6 P3 optimization
# -tpp7 P4 optimization
#-----------------------------------------------------------------------

OFLAG=-O3 -xW

OFLAG_HIGH = $(OFLAG)
OBJ_HIGH =

OBJ_NOOPT =
DEBUG = -FR -O0
INLINE = $(OFLAG)


#-----------------------------------------------------------------------
# the following lines specify the position of BLAS and LAPACK
# on P4, VASP works fastest with the libgoto library
# so that's what I recommend
#-----------------------------------------------------------------------

# Atlas based libraries
#ATLASHOME= $(HOME)/archives/BLAS_OPT/ATLAS/lib/Linux_P4SSE2/
#BLAS= -L$(ATLASHOME) -lf77blas -latlas

# use specific libraries (default library path might point to other libraries)
#BLAS= $(ATLASHOME)/libf77blas.a $(ATLASHOME)/libatlas.a

# use the mkl Intel libraries for p4 (www.intel.com)
# mkl.5.1
# set -DRPROMU_DGEMV -DRACCMU_DGEMV in the CPP lines
#BLAS=-L/opt/intel/mkl/lib/32 -lmkl_p4 -lpthread

# mkl.5.2 requires also to -lguide library
# set -DRPROMU_DGEMV -DRACCMU_DGEMV in the CPP lines
#BLAS=-L/opt/intel/mkl/lib/32 -lmkl_p4 -lguide -lpthread
#BLAS=-L/opt/intel/mkl/10.1.1.019/lib/em64t -lmkl_intel_lp64 -lmkl_blacs_lp64 -lmkl_intel_thread -lmkl_core -liomp5 -lpthread
# even faster Kazushige Goto's BLAS
# http://www.cs.utexas.edu/users/kgoto/signup_first.html
#BLAS= /opt/libs/libgoto/libgoto_p4_512-r0.6.so

# LAPACK, simplest use vasp.4.lib/lapack_double
#LAPACK= ../vasp.4.lib/lapack_double.o
#LAPACK=-L/opt/intel/mkl/10.1.1.019/lib/em64t -lmkl_intel_lp64 -lmkl_blacs_lp64 -lmkl_intel_thread -lmkl_core -liomp5 -lpthread
# use atlas optimized part of lapack
#LAPACK= ../vasp.4.lib/lapack_atlas.o -llapack -lcblas

# use the mkl Intel lapack
#LAPACK= -lmkl_lapack

LAPACK=

BLAS=-Wl,-rpath,/opt/intel/mkl/10.1.1.019/lib/em64t -Wl,--start-group /opt/intel/mkl/10.1.1.019/lib/em64t/libmkl_intel_lp64.a /opt/intel/mkl/10.1.1.019/lib/em64t/libmkl_sequential.a /opt/intel/mkl/10.1.1.019/lib/em64t/libmkl_core.a -Wl,--end-group -lpthread

#-----------------------------------------------------------------------

#LIB = -L../vasp.4.lib -ldmy \
# ../vasp.4.lib/linpack_double.o $(LAPACK) \
# $(BLAS)

# options for linking (for compiler version 6.X, 7.1) nothing is required
LINK =
# compiler version 7.0 generates some vector statments which are located
# in the svml library, add the LIBPATH and the library (just in case)
#LINK = -L/opt/intel/compiler70/ia32/lib/ -lsvml

#-----------------------------------------------------------------------
# fft libraries:
# VASP.4.6 can use fftw.3.0.X (http://www.fftw.org)
# since this version is faster on P4 machines, we recommend to use it
#-----------------------------------------------------------------------

#FFT3D = fft3dfurth.o fft3dlib.o
#FFT3D = fftw3d.o fft3dlib.o /opt/libs/fftw-3.0.1/lib/libfftw3.a
#FFT3D= fftmpiw.o fftmpi_map.o fft3dlib.o /opt/intel/mkl/10.1.1.019/lib/em64t/libfftw3xf_intel.a

#=======================================================================
# MPI section, uncomment the following lines
#
# one comment for users of mpich or lam:
# You must *not* compile mpi with g77/f77, because f77/g77
# appends *two* underscores to symbols that contain already an
# underscore (i.e. MPI_SEND becomes mpi_send__). The pgf90/ifc
# compilers however append only one underscore.
# Precompiled mpi version will also not work !!!
#
# We found that mpich.1.2.1 and lam-6.5.X to lam-7.0.4 are stable
# mpich.1.2.1 was configured with
# ./configure -prefix=/usr/local/mpich_nodvdbg -fc="pgf77 -Mx,119,0x200000" \
# -f90="pgf90 " \
# --without-romio --without-mpe -opt=-O \
#
# lam was configured with the line
# ./configure -prefix /opt/libs/lam-7.0.4 --with-cflags=-O -with-fc=ifc \
# --with-f77flags=-O --without-romio
#
# please note that you might be able to use a lam or mpich version
# compiled with f77/g77, but then you need to add the following
# options: -Msecond_underscore (compilation) and -g77libs (linking)
#
# !!! Please do not send me any queries on how to install MPI, I will
# certainly not answer them !!!!
#=======================================================================
#-----------------------------------------------------------------------
# fortran linker for mpi: if you use LAM and compiled it with the options
# suggested above, you can use the following line
#-----------------------------------------------------------------------

FC=/opt/mpich/mpich-1.2.7/bin/mpif90
FCL=$(FC)

#-----------------------------------------------------------------------
# additional options for CPP in parallel version (see also above):
# NGZhalf charge density reduced in Z direction
# wNGZhalf gamma point only reduced in Z direction
# scaLAPACK use scaLAPACK (usually slower on 100 Mbit Net)
#-----------------------------------------------------------------------

CPP = $(CPP_) -DMPI -DHOST=\"LinuxIFC\" -DIFC \
-Dkind8 -DNGZhalf -DCACHE_SIZE=4000 -DPGF90 -Davoidalloc \
-DMPI_BLOCK=500
# -DscaLAPACK
# -DRPROMU_DGEMV -DRACCMU_DGEMV

#-----------------------------------------------------------------------
# location of SCALAPACK
# if you do not use SCALAPACK simply uncomment the line SCA
#-----------------------------------------------------------------------

#BLACS=$(HOME)/archives/SCALAPACK/BLACS/
#SCA_=$(HOME)/archives/SCALAPACK/SCALAPACK

#SCA= $(SCA_)/libscalapack.a \
# $(BLACS)/LIB/blacsF77init_MPI-LINUX-0.a $(BLACS)/LIB/blacs_MPI-LINUX-0.a $(BLACS)/LIB/blacsF77init_MPI-LINUX-0.a

SCA=
#SCA=/opt/intel/mkl/10.1.1.019/lib/em64t/libmkl_scalapack_lp64.a /opt/intel/mkl/10.1.1.019/lib/em64t/libmkl_blacs_intelmpi_lp64.a
#-----------------------------------------------------------------------
# libraries for mpi
#-----------------------------------------------------------------------

LIB = -L../vasp.4.lib -ldmy \
../vasp.4.lib/linpack_double.o $(LAPACK) \
$(SCA) $(BLAS)

# FFT: fftmpi.o with fft3dlib of Juergen Furthmueller
FFT3D = fftmpi.o fftmpi_map.o fft3dlib.o

# fftw.3.0.1 is slightly faster and should be used if available
#FFT3D = fftmpiw.o fftmpi_map.o fft3dlib.o /opt/libs/fftw-3.0.1/lib/libfftw3.a

#-----------------------------------------------------------------------
# general rules and compile lines
#-----------------------------------------------------------------------
BASIC= symmetry.o symlib.o lattlib.o random.o

SOURCE= base.o mpi.o smart_allocate.o xml.o \
constant.o jacobi.o main_mpi.o scala.o \
asa.o lattice.o poscar.o ini.o setex.o radial.o \
pseudo.o mgrid.o mkpoints.o wave.o wave_mpi.o $(BASIC) \
nonl.o nonlr.o dfast.o choleski2.o \
mix.o charge.o xcgrad.o xcspin.o potex1.o potex2.o \
metagga.o constrmag.o pot.o cl_shift.o force.o dos.o elf.o \
tet.o hamil.o steep.o \
chain.o dyna.o relativistic.o LDApU.o sphpro.o paw.o us.o \
ebs.o wavpre.o wavpre_noio.o broyden.o \
dynbr.o rmm-diis.o reader.o writer.o tutor.o xml_writer.o \
brent.o stufak.o fileio.o opergrid.o stepver.o \
dipol.o xclib.o chgloc.o subrot.o optreal.o davidson.o \
edtest.o electron.o shm.o pardens.o paircorrection.o \
optics.o constr_cell_relax.o stm.o finite_diff.o \
elpol.o setlocalpp.o aedens.o

INC=

vasp: $(SOURCE) $(FFT3D) $(INC) main.o
rm -f vasp
$(FCL) -o vasp $(LINK) main.o $(SOURCE) $(FFT3D) $(LIB)
makeparam: $(SOURCE) $(FFT3D) makeparam.o main.F $(INC)
$(FCL) -o makeparam $(LINK) makeparam.o $(SOURCE) $(FFT3D) $(LIB)
zgemmtest: zgemmtest.o base.o random.o $(INC)
$(FCL) -o zgemmtest $(LINK) zgemmtest.o random.o base.o $(LIB)
dgemmtest: dgemmtest.o base.o random.o $(INC)
$(FCL) -o dgemmtest $(LINK) dgemmtest.o random.o base.o $(LIB)
ffttest: base.o smart_allocate.o mpi.o mgrid.o random.o ffttest.o $(FFT3D) $(INC)
$(FCL) -o ffttest $(LINK) ffttest.o mpi.o mgrid.o random.o smart_allocate.o base.o $(FFT3D) $(LIB)
kpoints: $(SOURCE) $(FFT3D) makekpoints.o main.F $(INC)
$(FCL) -o kpoints $(LINK) makekpoints.o $(SOURCE) $(FFT3D) $(LIB)

clean:
-rm -f *.g *.f *.o *.L *.mod ; touch *.F

main.o: main$(SUFFIX)
$(FC) $(FFLAGS)$(DEBUG) $(INCS) -c main$(SUFFIX)
xcgrad.o: xcgrad$(SUFFIX)
$(FC) $(FFLAGS) $(INLINE) $(INCS) -c xcgrad$(SUFFIX)
xcspin.o: xcspin$(SUFFIX)
$(FC) $(FFLAGS) $(INLINE) $(INCS) -c xcspin$(SUFFIX)

makeparam.o: makeparam$(SUFFIX)
$(FC) $(FFLAGS)$(DEBUG) $(INCS) -c makeparam$(SUFFIX)

makeparam$(SUFFIX): makeparam.F main.F
#
# MIND: I do not have a full dependency list for the include
# and MODULES: here are only the minimal basic dependencies
# if one structure is changed then touch_dep must be called
# with the corresponding name of the structure
#
base.o: base.inc base.F
mgrid.o: mgrid.inc mgrid.F
constant.o: constant.inc constant.F
lattice.o: lattice.inc lattice.F
setex.o: setexm.inc setex.F
pseudo.o: pseudo.inc pseudo.F
poscar.o: poscar.inc poscar.F
mkpoints.o: mkpoints.inc mkpoints.F
wave.o: wave.inc wave.F
nonl.o: nonl.inc nonl.F
nonlr.o: nonlr.inc nonlr.F

$(OBJ_HIGH):
$(CPP)
$(FC) $(FFLAGS) $(OFLAG_HIGH) $(INCS) -c $*$(SUFFIX)
$(OBJ_NOOPT):
$(CPP)
$(FC) $(FFLAGS) $(INCS) -c $*$(SUFFIX)

fft3dlib_f77.o: fft3dlib_f77.F
$(CPP)
$(F77) $(FFLAGS_F77) -c $*$(SUFFIX)

.F.o:
$(CPP)
$(FC) $(FFLAGS) $(OFLAG) $(INCS) -c $*$(SUFFIX)
.F$(SUFFIX):
$(CPP)
$(SUFFIX).o:
$(FC) $(FFLAGS) $(OFLAG) $(INCS) -c $*$(SUFFIX)

# special rules
#-----------------------------------------------------------------------
# these special rules are cumulative (that is, once a file failed
# in one compiler version, it stays in the list forever)
# -tpp5|6|7 P, PII-PIII, PIV
# -xW use SIMD (does not pay off on PII, since fft3d uses double prec)
# all other options do not affect the code performance since -O1 is used
#-----------------------------------------------------------------------

fft3dlib.o : fft3dlib.F
$(CPP)
$(FC) -FR -lowercase -O1 -xW -unroll0 -vec_report3 -c $*$(SUFFIX)
fft3dfurth.o : fft3dfurth.F
$(CPP)
$(FC) -FR -lowercase -O1 -c $*$(SUFFIX)

radial.o : radial.F
$(CPP)
$(FC) -FR -lowercase -O1 -c $*$(SUFFIX)

symlib.o : symlib.F
$(CPP)
$(FC) -FR -lowercase -O1 -c $*$(SUFFIX)

symmetry.o : symmetry.F
$(CPP)
$(FC) -FR -lowercase -O1 -c $*$(SUFFIX)

dynbr.o : dynbr.F
$(CPP)
$(FC) -FR -lowercase -O1 -c $*$(SUFFIX)

broyden.o : broyden.F
$(CPP)
$(FC) -FR -lowercase -O2 -c $*$(SUFFIX)

us.o : us.F
$(CPP)
$(FC) -FR -lowercase -O1 -c $*$(SUFFIX)

wave.o : wave.F
$(CPP)
$(FC) -FR -lowercase -O0 -c $*$(SUFFIX)

LDApU.o : LDApU.F
$(CPP)
$(FC) -FR -lowercase -O2 -c $*$(SUFFIX)

-------------


pafell
Newbie
Posts: 24
Joined: Wed Feb 18, 2009 11:40 pm
License Nr.: 196
Location: Poznań, Poland

Compilation of VASP with MPICH

#11 Post by pafell » Fri Mar 20, 2009 1:48 pm

I get similar errors when I try to link with ifort instead of mpif90. Something is broken with the mpif90 script.

What looks odd to me is the output of mpif90 -v. It should really be a two-liner.
What do you see with mpif90 -link-info and -compile-info?

I get something like this:
[root@pamela vasp.4.6]# mpif90 -link-info
ifort -I/opt/mpich2-1.0.8/include -I/opt/mpich2-1.0.8/include -L/opt/mpich2-1.0.8/lib -lmpichf90 -lmpichf90 -lmpich -lpthread -lrt
[root@pamela vasp.4.6]# mpif90 -compile-info
ifort -I/opt/mpich2-1.0.8/include -I/opt/mpich2-1.0.8/include -L/opt/mpich2-1.0.8/lib -lmpichf90 -lmpichf90 -lmpich -lpthread -lrt

You could try taking the output of -link-info and adding it to FCL, so in my example you'd have:
FCL=$(FC) -I/opt/mpich2-1.0.8/include -I/opt/mpich2-1.0.8/include -L/opt/mpich2-1.0.8/lib -lmpichf90 -lmpichf90 -lmpich -lpthread -lrt

Check whether the compilation then runs to completion.

And one more idea: maybe you have a shell alias that makes mpif90 do something nasty?

ey

Compilation of VASP with MPICH

#12 Post by ey » Mon Mar 23, 2009 9:49 am

Here it is:

$ mpif90 -link-info
ln -s /opt/mpich/mpich-1.2.7/include/mpif.h mpif.h
ifort -L/opt/mpich/mpich-1.2.7/lib -lmpichf90 -lmpich -lpthread -lrt
rm -f mpif.h

$ mpif90 -compile-info
ln -s /opt/mpich/mpich-1.2.7/include/mpif.h mpif.h
ifort -c -I/opt/mpich/mpich-1.2.7/include/f90choice
rm -f mpif.h

I think it works fine now, but when I run parallel VASP on 8 nodes, it is only about twice as fast as the serial run for a 60-atom structure. Where can I find larger example input files?
