VASP 6.2 compilation with OpenACC GPU port

Questions regarding the compilation of VASP on various platforms: hardware, compilers and libraries, etc.


Moderators: Global Moderator, Moderator

david_keller
Newbie
Posts: 21
Joined: Tue Jan 12, 2021 3:17 pm

Re: VASP 6.2

#1 Post by david_keller » Wed Jan 27, 2021 9:45 pm

I downloaded 6.2 and have a problem trying to link up the OpenACC GPU version. Any suggestions would be appreciated.

Thanks,
Dave

rm -f vasp ; make vasp ; cp vasp ../../bin/vasp_gpu
make[2]: Entering directory `/g1/ssd/kellerd/vasp_gpu_work/vasp.6.2.0/build/gpu'
mpif90 -acc -gpu=cc60,cc70,cc80,cuda11.0 -Mfree -Mbackslash -Mlarge_arrays -fast -c c2f_interface.f90
/cm/shared/software/nvidia-hpc-sdk/20.11/Linux_x86_64/20.11/comm_libs/openmpi/openmpi-3.1.5/bin/.bin/mpif90: error while loading shared libraries: libatomic.so.1: cannot open shared object file: No such file or directory
make[2]: *** [c2f_interface.o] Error 127

My makefile.include was based off arch/makefile.include.linux_nv_acc.

CPP_OPTIONS= -DHOST=\"LinuxIFC\" \
-DMPI -DMPI_BLOCK=8000 -DMPI_INPLACE -Duse_collective \
-DscaLAPACK \
-DCACHE_SIZE=4000 \
-Davoidalloc \
-Dvasp6 \
-Duse_bse_te \
-Dtbdyn \
-Dqd_emulate \
-Dfock_dblbuf \
-D_OPENACC \
-DUSENCCL -DUSENCCLP2P

CPP = nvfortran -Mpreprocess -Mfree -Mextend -E $(CPP_OPTIONS) $*$(FUFFIX) > $*$(SUFFIX)

FC = mpif90 -acc -gpu=cc60,cc70,cc80,cuda11.0
FCL = mpif90 -acc -gpu=cc60,cc70,cc80,cuda11.0 -c++libs

FREE = -Mfree

FFLAGS = -Mbackslash -Mlarge_arrays

OFLAG = -fast

DEBUG = -Mfree -O0 -traceback

# Specify your NV HPC-SDK installation, try to set NVROOT automatically
#NVROOT =$(shell which nvfortran | awk -F /compilers/bin/nvfortran '{ print $$1 }')
# ...or set NVROOT manually
#NVHPC ?= /opt/nvidia/hpc_sdk
#NVVERSION = 20.11/compilers
#NVROOT = $(NVHPC)/Linux_x86_64/$(NVVERSION)
NVROOT = /cm/shared/software/nvidia-hpc-sdk/20.11/Linux_x86_64/20.11/compilers
#/cm/shared/software/nvidia-hpc-sdk/20.11

# Use NV HPC-SDK provided BLAS and LAPACK libraries
BLAS = -lblas
LAPACK = -llapack

BLACS =
SCALAPACK = -Mscalapack

CUDA = -cudalib=cublas,cusolver,cufft,nccl -cuda
LLIBS = $(SCALAPACK) $(LAPACK) $(BLAS) $(CUDA)

# Software emulation of quadruple precision
#QD ?= $(NVROOT)/compilers/extras/qd
#LLIBS += -L$(QD)/lib -lqdmod -lqd
#INCS += -I$(QD)/include/qd

# Use the FFTs from fftw
#FFTW ?= /opt/gnu/fftw-3.3.6-pl2-GNU-5.4.0
#LLIBS += -L$(FFTW)/lib -lfftw3
#INCS += -I$(FFTW)/include
INCS=

OBJECTS = fftmpiw.o fftmpi_map.o fftw3d.o fft3dlib.o

# Redefine the standard list of O1 and O2 objects
SOURCE_O1 := pade_fit.o
SOURCE_O2 := pead.o

# For what used to be vasp.5.lib
CPP_LIB = $(CPP)
FC_LIB = nvfortran
CC_LIB = nvc
CFLAGS_LIB = -O
FFLAGS_LIB = -O1 -Mfixed
FREE_LIB = $(FREE)

OBJECTS_LIB= linpack_double.o getshmem.o

# For the parser library
CXX_PARS = nvc++ --no_warnings

# Normally no need to change this
SRCDIR = ../../src
BINDIR = ../../bin

MPI_INC = $(I_MPI_ROOT)/include64/
NVCC = $(NVHPC)/Linux_x86_64/20.11/compilers/bin/nvcc



Currently Loaded Modulefiles:
1) pbspro/19.1.3 2) lle 3) intel/2020.2 4) nvidia-hpc-sdk/20.11 5) cuda/11.1

+-----------------------------------------------------------------------------+
| NVIDIA-SMI 440.33.01    Driver Version: 455.32.00    CUDA Version: 11.1     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|===============================+======================+======================|
|   0  Tesla V100-PCIE...  Off  | 00000000:3B:00.0 Off |                  Off |
| N/A   28C    P0    34W / 250W |      0MiB / 16160MiB |      0%      Default |
+-------------------------------+----------------------+----------------------+




henrique_miranda
Global Moderator
Posts: 490
Joined: Mon Nov 04, 2019 12:41 pm
Contact:

Re: VASP 6.2

#2 Post by henrique_miranda » Thu Jan 28, 2021 6:28 am

This seems to be an issue related to your compiler or environment.
Are you able to compile any other program using this compiler?

I recommend looking for solutions online:
https://unix.stackexchange.com/question ... all-nodejs
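
For the record, the usual cause is that libatomic.so.1 is not on the dynamic loader's search path. A minimal diagnostic sketch (assuming a GNU toolchain is installed; package names and paths vary by distribution):

```shell
# Ask gcc where its runtime copy of libatomic lives; gcc prints the bare
# file name back unchanged if it cannot find the library.
libatomic=$(gcc -print-file-name=libatomic.so.1 2>/dev/null || echo libatomic.so.1)
libdir=$(dirname "$libatomic")
echo "libatomic directory: $libdir"
# Make that directory visible to the dynamic loader before re-running mpif90:
export LD_LIBRARY_PATH="$libdir:$LD_LIBRARY_PATH"
```

On many distributions simply installing the gcc runtime package (e.g. `libatomic1` on Ubuntu) is enough.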

david_keller
Newbie
Posts: 21
Joined: Tue Jan 12, 2021 3:17 pm

Re: VASP 6.2

#3 Post by david_keller » Thu Jan 28, 2021 2:09 pm

Thanks for the suggestion about libatomic.
But I am wondering if my problem isn't more basic.
1. I am trying to build std for use on my Intel-based server, but with the ACC enhancement enabled so it can use the GPU.
2. I have an include file that succeeds in building without ACC enabled.
3. Should all I have to do be to add the following to CPP_OPTIONS?
-Dqd_emulate \
-Dfock_dblbuf \
-D_OPENACC \
-DUSENCCL -DUSENCCLP2P
4. If I do a 'make std', will the std executable I build automatically make use of a GPU if present?
5. If I 'make gpu' am I making the no longer developed GPU version?
6. If I want the ACC version must I use nvfortran to compile or can I use Intel's mpifort?

Perhaps the title of this thread should be "VASP 6.2 ACC related questions"?

henrique_miranda
Global Moderator
Posts: 490
Joined: Mon Nov 04, 2019 12:41 pm
Contact:

Re: VASP 6.2

#4 Post by henrique_miranda » Thu Jan 28, 2021 4:04 pm

Thanks for the suggestion about libatomic.
Were you able to fix this issue?
Or do you still get an error? Which error?

As to the compilation with OpenACC:
1. You should perhaps try starting from one of the example makefiles provided in the arch/ folder:
https://www.vasp.at/wiki/index.php/Inst ... ort_to_GPU
2. 'make gpu' will make the no longer developed GPU version. This is different from compiling VASP with GPU support through OpenACC.
3. To compile with OpenACC you do the typical 'make std gam ncl' to get the 3 different VASP versions.
4. You must use a compiler supporting OpenACC directives; unfortunately, the Intel compiler is not one of them.
The list of tested and supported compilers is provided in the link above.

david_keller
Newbie
Posts: 21
Joined: Tue Jan 12, 2021 3:17 pm

Re: VASP 6.2

#5 Post by david_keller » Thu Jan 28, 2021 4:35 pm

OK. So I want to use gfortran?

I have nvidia-hpc-sdk/20.11 in my environment.
If I start with an exact copy of makefile.include.linux_nv_acc my 'make std' build gets to:

make[3]: Entering directory `/g1/ssd/kellerd/vasp_gpu_work/vasp.6.2.0/build/std/parser'
nvc++ --no_warnings -D YY_parse_DEBUG=1 -c sites.cpp -o sites.o
nvc++ --no_warnings -D YY_parse_DEBUG=1 -c functions.cpp -o functions.o
nvc++ --no_warnings -D YY_parse_DEBUG=1 -c radial.cpp -o radial.o
nvc++ --no_warnings -D YY_parse_DEBUG=1 -c basis.cpp -o basis.o
nvc++ --no_warnings -D YY_parse_DEBUG=1 -c lex.yy.c -o lex.yy.o
nvc++ --no_warnings -D YY_parse_DEBUG=1 -c locproj.tab.c -o locproj.tab.o
nvc++ --no_warnings -D YY_parse_DEBUG=1 -c yywrap.c -o yywrap.o
rm -f libparser.a
ar vq libparser.a sites.o functions.o radial.o basis.o lex.yy.o locproj.tab.o yywrap.o locproj.tab.h
ar: creating libparser.a
a - sites.o
a - functions.o
a - radial.o
a - basis.o
a - lex.yy.o
a - locproj.tab.o
a - yywrap.o
a - locproj.tab.h
make[3]: Leaving directory `/g1/ssd/kellerd/vasp_gpu_work/vasp.6.2.0/build/std/parser'
make[2]: Leaving directory `/g1/ssd/kellerd/vasp_gpu_work/vasp.6.2.0/build/std/parser'
rsync -u ../../src/*.F ../../src/*.inc .
rm -f vasp ; make vasp ; cp vasp ../../bin/vasp_std
make[2]: Entering directory `/g1/ssd/kellerd/vasp_gpu_work/vasp.6.2.0/build/std'
nvfortran -Mpreprocess -Mfree -Mextend -E -DHOST=\"Linux\" -DMPI -DMPI_BLOCK=8000 -DMPI_INPLACE -Duse_collective -DscaLAPACK -DCACHE_SIZE=4000 -Davoidalloc -Dvasp6 -Duse_bse_te -Dtbdyn -Dqd_emulate -Dfock_dblbuf -D_OPENACC -DUSENCCL -DUSENCCLP2P c2f_interface.F > c2f_interface.f90 -DNGZhalf
mpif90 -acc -gpu=cc70,cc80,cuda11.1 -Mfree -Mbackslash -Mlarge_arrays -fast -I/cm/shared/software/nvidia-hpc-sdk/20.11/Linux_x86_64/20.11/compilers/extras/qd/include/qd -I/cm/shared/software/fftw3/3.3.8/b7/include -c c2f_interface.f90
gfortran: error: unrecognized debug output level ‘pu=cc70,cc80,cuda11.1’
gfortran: error: unrecognized command line option ‘-acc’
gfortran: error: unrecognized command line option ‘-Mfree’; did you mean ‘-free’?
gfortran: error: unrecognized command line option ‘-Mbackslash’; did you mean ‘-fbackslash’?
gfortran: error: unrecognized command line option ‘-Mlarge_arrays’
gfortran: error: unrecognized command line option ‘-fast’; did you mean ‘-Ofast’?
make[2]: *** [c2f_interface.o] Error 1
make[2]: Leaving directory `/g1/ssd/kellerd/vasp_gpu_work/vasp.6.2.0/build/std'

david_keller
Newbie
Posts: 21
Joined: Tue Jan 12, 2021 3:17 pm

Re: VASP 6.2

#6 Post by david_keller » Thu Jan 28, 2021 5:42 pm

Follow up to my previous post.

Obviously I am having trouble understanding what environment I should be compiling for and getting the parameters correct.

Do you happen to have a sample makefile.include for NV with GNU rather than PGI?

henrique_miranda
Global Moderator
Posts: 490
Joined: Mon Nov 04, 2019 12:41 pm
Contact:

Re: VASP 6.2

#7 Post by henrique_miranda » Thu Jan 28, 2021 10:25 pm

Are you sure your mpif90 version is using the nvidia compiler from nvidia-hpc-sdk/20.11?
You can check with:

Code: Select all

mpif90 -v
Do you happen to have a sample makefile.include for NV with GNU rather than PGI?
No.
As mentioned in the wiki (https://www.vasp.at/wiki/index.php/Inst ... ort_to_GPU), only the NVIDIA HPC SDK or a recent version of the PGI compiler is tested for the OpenACC port.
It should be possible to use the GNU compiler, but this is not yet supported.
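
A slightly fuller check, sketched for a typical Linux shell (the exact output depends on your MPI installation):

```shell
# Show which mpif90 is first on PATH and which backend compiler it wraps;
# for the OpenACC port this should be nvfortran from the NVIDIA HPC SDK,
# not gfortran.
wrapper=$(command -v mpif90 || echo "mpif90: not found in PATH")
echo "$wrapper"
mpif90 --version 2>/dev/null | head -n 1
```

If the version line mentions GNU Fortran, the wrapper belongs to a different MPI installation than the one shipped with the HPC SDK.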

david_keller
Newbie
Posts: 21
Joined: Tue Jan 12, 2021 3:17 pm

Re: VASP 6.2

#8 Post by david_keller » Fri Jan 29, 2021 2:14 pm

It was the wrong compiler being invoked... thanks.

What -DHOST should be set for the NV build rather than LinuxPGI?

henrique_miranda
Global Moderator
Posts: 490
Joined: Mon Nov 04, 2019 12:41 pm
Contact:

Re: VASP 6.2

#9 Post by henrique_miranda » Fri Jan 29, 2021 2:20 pm

-DHOST is purely a comment (and optional). It only affects what VASP displays at the start of the OUTCAR.

For instance, for -DHOST=\"LinuxPGI\", you'll get:
vasp.6.2.0 18Jan21 (build Feb 10 2021 18:15:02) complex
executed on LinuxPGI date 2021.02.10 19:36:29
i.e., the "LinuxPGI" in the second line.

david_keller
Newbie
Posts: 21
Joined: Tue Jan 12, 2021 3:17 pm

Re: VASP 6.2

#10 Post by david_keller » Fri Jan 29, 2021 2:50 pm

OK Thanks again.

Now I have a good build except for the final link.
Any suggestion for this?

mpif90 -acc -gpu=cc70,cc80,cuda11.1 -c++libs -o vasp c2f_interface.o nccl2for.o simd.o base.o profiling.o string.o tutor.o version.o vhdf5_base.o incar_reader.o reader_base.o openmp.o openacc_struct.o mpi.o mpi_shmem.o mathtools.o hamil_struct.o radial_struct.o pseudo_struct.o mgrid_struct.o wave_struct.o nl_struct.o mkpoints_struct.o poscar_struct.o afqmc_struct.o phonon_struct.o fock_glb.o chi_glb.o smart_allocate.o xml.o extpot_glb.o constant.o vdwforcefield_glb.o jacobi.o main_mpi.o openacc.o scala.o asa.o lattice.o poscar.o ini.o mgrid.o setex_struct.o xclib.o vdw_nl.o xclib_grad.o setex.o radial.o pseudo.o gridq.o ebs.o symlib.o mkpoints.o wave.o wave_mpi.o wave_high.o bext.o spinsym.o symmetry.o lattlib.o random.o nonl.o nonlr.o nonl_high.o dfast.o choleski2.o mix.o hamil.o xcgrad.o xcspin.o potex1.o potex2.o constrmag.o cl_shift.o relativistic.o LDApU.o paw_base.o metagga.o egrad.o pawsym.o pawfock.o pawlhf.o diis.o rhfatm.o hyperfine.o fock_ace.o paw.o mkpoints_full.o charge.o Lebedev-Laikov.o stockholder.o dipol.o solvation.o scpc.o pot.o tet.o dos.o elf.o hamil_rot.o chain.o dyna.o fileio.o vhdf5.o sphpro.o us.o core_rel.o aedens.o wavpre.o wavpre_noio.o broyden.o dynbr.o reader.o writer.o xml_writer.o brent.o stufak.o opergrid.o stepver.o chgloc.o fast_aug.o fock_multipole.o fock.o fock_dbl.o fock_frc.o mkpoints_change.o subrot_cluster.o sym_grad.o mymath.o npt_dynamics.o subdftd3.o subdftd4.o internals.o dynconstr.o dimer_heyden.o dvvtrajectory.o vdwforcefield.o nmr.o pead.o k-proj.o subrot.o subrot_scf.o paircorrection.o rpa_force.o ml_interface.o force.o pwlhf.o gw_model.o optreal.o steep.o rmm-diis.o davidson.o david_inner.o root_find.o lcao_bare.o locproj.o electron_common.o electron.o rot.o electron_all.o shm.o pardens.o optics.o constr_cell_relax.o stm.o finite_diff.o elpol.o hamil_lr.o rmm-diis_lr.o subrot_lr.o lr_helper.o hamil_lrf.o elinear_response.o ilinear_response.o linear_optics.o setlocalpp.o wannier.o electron_OEP.o electron_lhf.o 
twoelectron4o.o gauss_quad.o m_unirnk.o minimax_ini.o minimax_dependence.o minimax_functions1D.o minimax_functions2D.o minimax_struct.o minimax_varpro.o minimax.o mlwf.o ratpol.o pade_fit.o screened_2e.o wave_cacher.o crpa.o chi_base.o wpot.o local_field.o ump2.o ump2kpar.o fcidump.o ump2no.o bse_te.o bse.o time_propagation.o acfdt.o afqmc.o rpax.o chi.o acfdt_GG.o dmft.o GG_base.o greens_orbital.o lt_mp2.o rnd_orb_mp2.o greens_real_space.o chi_GG.o chi_super.o sydmat.o rmm-diis_mlr.o linear_response_NMR.o wannier_interpol.o wave_interpolate.o linear_response.o auger.o dmatrix.o phonon.o wannier_mats.o elphon.o core_con_mat.o embed.o extpot.o fftmpiw.o fftmpi_map.o fftw3d.o fft3dlib.o main.o -Llib -ldmy -Lparser -lparser -Mscalapack -llapack -lblas -cudalib=cublas,cusolver,cufft,nccl -cuda -L/cm/shared/software/nvidia-hpc-sdk/20.11/Linux_x86_64/20.11/compilers/extras/qd/lib -lqdmod -lqd -L/cm/shared/software/fftw3-mkl/2019.3/b1/lib -lfftw3
wannier_interpol.o: In function `wannier_interpolation_fourier_interpol_ktor_':
/g1/ssd/kellerd/vasp_gpu_work/vasp.6.2.0/build/std/wannier_interpol.f90:764: undefined reference to `...'
/g1/ssd/kellerd/vasp_gpu_work/vasp.6.2.0/build/std/wannier_interpol.f90:780: undefined reference to `dfftw_execute_dft_'
/g1/ssd/kellerd/vasp_gpu_work/vasp.6.2.0/build/std/wannier_interpol.f90:793: undefined reference to `dfftw_destroy_plan_'
fftmpiw.o: In function `fftbas_plan_mpi_':
/g1/ssd/kellerd/vasp_gpu_work/vasp.6.2.0/build/std/fftmpiw.f90:232: undefined reference to `dfftw_plan_many_dft_'
/g1/ssd/kellerd/vasp_gpu_work/vasp.6.2.0/build/std/fftmpiw.f90:239: undefined reference to `dfftw_plan_many_dft_'
/g1/ssd/kellerd/vasp_gpu_work/vasp.6.2.0/build/std/fftmpiw.f90:248: undefined reference to `dfftw_plan_many_dft_c2r_'
/g1/ssd/kellerd/vasp_gpu_work/vasp.6.2.0/build/std/fftmpiw.f90:264: undefined reference to `dfftw_plan_many_dft_r2c_'
/g1/ssd/kellerd/vasp_gpu_work/vasp.6.2.0/build/std/fftmpiw.f90:270: undefined reference to `dfftw_plan_many_dft_'
/g1/ssd/kellerd/vasp_gpu_work/vasp.6.2.0/build/std/fftmpiw.f90:278: undefined reference to `dfftw_plan_many_dft_'
...

henrique_miranda
Global Moderator
Posts: 490
Joined: Mon Nov 04, 2019 12:41 pm
Contact:

Re: VASP 6.2

#11 Post by henrique_miranda » Fri Jan 29, 2021 3:32 pm

It looks like your fftw3 library does not have those symbols defined.
This is an issue related to your installation of the fftw3 library.
If it was not you who compiled that library I suggest that you ask whoever did.

You might also find this information useful:
https://www.vasp.at/wiki/index.php/Inst ... Transforms
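
One way to check this yourself, sketched below (the FFTW path is an example taken from the makefile template; point it at your actual installation):

```shell
# libfftw3 normally also ships the legacy Fortran bindings (dfftw_*);
# if nm finds none, the library was built without Fortran support or the
# wrong library is being linked.
FFTW=/opt/gnu/fftw-3.3.6-pl2-GNU-5.4.0   # example path, adjust to your system
if nm -D "$FFTW/lib/libfftw3.so" 2>/dev/null | grep -q 'dfftw_plan_many_dft'; then
    echo "Fortran FFTW symbols present"
else
    echo "Fortran FFTW symbols missing (or library not found)"
fi
```

If the symbols are missing, rebuild fftw3 without `--disable-fortran`, or link the MKL FFTW interface instead.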

henrique_miranda
Global Moderator
Posts: 490
Joined: Mon Nov 04, 2019 12:41 pm
Contact:

Re: VASP 6.2 compilation with OpenACC GPU port

#12 Post by henrique_miranda » Thu Feb 04, 2021 12:09 pm


ctetsass
Newbie
Posts: 3
Joined: Tue Jul 27, 2021 1:00 pm

Re: VASP 6.2 compilation with OpenACC GPU port

#13 Post by ctetsass » Thu Jul 29, 2021 5:02 pm

Hi, I am having trouble installing VASP 6.2.1 with hpc_sdk/Linux_x86_64/21.7 on Ubuntu 20.04.

I am using the ''makefile.include nv acc+omp+mkl'' (see below).

Here is the error message:

Code: Select all

mpif90 -acc -gpu=cc60,cc70,cc80,cc75, cuda11.4 -mp -Mfree -Mbackslash -Mlarge_arrays -fast -I/opt/nvidia/hpc_sdk/Linux_x86_64/21.7 /compilers/extras/qd/include/qd  -c base.f90
base.f90:
NVFORTRAN-F-0004-Unable to open MODULE file qdmodule.mod (base.F: 288)
NVFORTRAN/x86-64 Linux 21.7-0: compilation aborted
nvidia-smi

Code: Select all

+-----------------------------------------------------------------------------+
| NVIDIA-SMI 470.42.01    Driver Version: 470.42.01    CUDA Version: 11.4     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|                               |                      |               MIG M. |
|===============================+======================+======================|
|   0  NVIDIA GeForce ...  On   | 00000000:01:00.0 Off |                  N/A |
|  0%   57C    P5    48W / 260W |    893MiB / 11019MiB |      4%      Default |
|                               |                      |                  N/A |
+-------------------------------+----------------------+----------------------+
nvaccelinfo

Code: Select all

CUDA Driver Version:           11040
NVRM version:                  NVIDIA UNIX x86_64 Kernel Module  470.42.01  Tue Jun 15 21:26:37 UTC 2021

Device Number:                 0
Device Name:                   NVIDIA GeForce RTX 2080 Ti
Device Revision Number:        7.5
Global Memory Size:            11554717696
:
:
Default Target:                cc75
makefile.include

Code: Select all

# Precompiler options
CPP_OPTIONS= -DHOST=\"LinuxPGI\" \
             -DMPI -DMPI_BLOCK=8000 -DMPI_INPLACE -Duse_collective \
             -DscaLAPACK \
             -DCACHE_SIZE=4000 \
             -Davoidalloc \
             -Dvasp6 \
             -Duse_bse_te \
             -Dtbdyn \
             -Dqd_emulate \
             -Dfock_dblbuf \
             -D_OPENACC \
             -DUSENCCL -DUSENCCLP2P

CPP        = nvfortran -Mpreprocess -Mfree -Mextend -E $(CPP_OPTIONS) $*$(FUFFIX)  > $*$(SUFFIX)

FC         = mpif90 -acc -gpu=cc60,cc70,cc80,cc75, cuda11.4 -mp
FCL        = mpif90 -acc -gpu=cc60,cc70,cc80 cc75, cuda11.4 -c++libs -mp


FREE       = -Mfree

FFLAGS     = -Mbackslash -Mlarge_arrays

OFLAG      = -fast

DEBUG      = -Mfree -O0 -traceback

# Specify your NV HPC-SDK installation, try to set NVROOT automatically
#NVROOT     =$(shell which nvfortran | awk -F /compilers/bin/nvfortran '{ print $$1 }')
# ...or set NVROOT manually
NVHPC      = /opt/nvidia/hpc_sdk
NVVERSION  = 21.7
NVROOT     = $(NVHPC)/Linux_x86_64/$(NVVERSION) # /opt/nvidia/hpc_sdk/Linux_x86_64/21.7/

# Use NV HPC-SDK provided BLAS and LAPACK libraries
BLAS       = -lblas
LAPACK     = -llapack

BLACS      =
SCALAPACK  = -Mscalapack

CUDA       = -cudalib=cublas,cusolver,cufft,nccl -cuda

LLIBS      = $(SCALAPACK) $(LAPACK) $(BLAS) $(CUDA)

# Software emulation of quadruple precision
QD         ?= $(NVROOT)/compilers/extras/qd
LLIBS      += -L$(QD)/lib -lqdmod -lqd
INCS       += -I$(QD)/include/qd

# Use the FFTs from fftw
#FFTW       ?= /opt/gnu/fftw-3.3.6-pl2-GNU-5.4.0
#LLIBS      += -L$(FFTW)/lib -lfftw3
#INCS       += -I$(FFTW)/include

OBJECTS    = fftmpiw.o fftmpi_map.o fftw3d.o fft3dlib.o

# Redefine the standard list of O1 and O2 objects
SOURCE_O1  := pade_fit.o
SOURCE_O2  := pead.o

# For what used to be vasp.5.lib
CPP_LIB    = $(CPP)
FC_LIB     = nvfortran
CC_LIB     = nvc
CFLAGS_LIB = -O
FFLAGS_LIB = -O1 -Mfixed
FREE_LIB   = $(FREE)

OBJECTS_LIB= linpack_double.o getshmem.o

# For the parser library
CXX_PARS   = nvc++ --no_warnings


# Normally no need to change this
SRCDIR     = ../../src
BINDIR     = ../../bin

NVCC = $(NVHPC)/Linux_x86_64/$(NVVERSION)/compilers/bin/nvcc

henrique_miranda
Global Moderator
Posts: 490
Joined: Mon Nov 04, 2019 12:41 pm
Contact:

Re: VASP 6.2 compilation with OpenACC GPU port

#14 Post by henrique_miranda » Fri Jul 30, 2021 12:29 pm

Maybe this is because there is a space in this line:

Code: Select all

-I/opt/nvidia/hpc_sdk/Linux_x86_64/21.7 /compilers/extras/qd/include/qd
Maybe you can try setting NVROOT manually.
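
Likely root cause, for the record: GNU make does not strip the whitespace before an inline `#` comment, so a line like `NVROOT = $(NVHPC)/Linux_x86_64/$(NVVERSION) # ...` leaves a trailing space inside NVROOT, which later splits the `-I$(QD)/include/qd` path in two. A minimal sketch of the fix is to keep comments on their own line:

```make
NVHPC     = /opt/nvidia/hpc_sdk
NVVERSION = 21.7
# NVROOT must not carry trailing whitespace, so do not append
# comments after the value:
NVROOT    = $(NVHPC)/Linux_x86_64/$(NVVERSION)
```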

ctetsass
Newbie
Posts: 3
Joined: Tue Jul 27, 2021 1:00 pm

Re: VASP 6.2 compilation with OpenACC GPU port

#15 Post by ctetsass » Thu Aug 05, 2021 3:31 pm

Thank you!

It worked by setting NVROOT manually.

Now I have another error:

Code: Select all

nvfortran -Mpreprocess -Mfree -Mextend -E -DHOST=\"LinuxPGI\" -DMPI -DMPI_BLOCK=8000 -DMPI_INPLACE -Duse_collective -DscaLAPACK -DCACHE_SIZE=4000 -Davoidalloc -Dvasp6 -Duse_bse_te -Dtbdyn -Dqd_emulate -Dfock_dblbuf -D_OPENACC -DUSENCCL -DUSENCCLP2P fock_glb.F > fock_glb.f90 -DNGZhalf
mpif90 -acc -gpu=cc60,cc70,cc80,cuda11.4 -mp -Mfree -Mbackslash -Mlarge_arrays -fast -I/opt/nvidia/hpc_sdk/Linux_x86_64/21.7/compilers/extras/qd/include/qd  -c fock_glb.f90

NVFORTRAN-S-0087-Non-constant expression where constant expression required (fock_glb.F: 234)
  0 inform,   0 warnings,   1 severes, 0 fatal for fock_glb
make[2]: *** [makefile:181: fock_glb.o] Error 2

Locked