
1 Installation

1.1 Install Bader

tar -xvf bader.tar.gz && cd bader
#----------------
#to edit makefile.lnx_ifort:  
Line 7:  LINK = -static =>  LINK = -static-intel
#----------------  
make -f makefile.lnx_ifort   
cp bader ~/.soft/vasp/vaspshell/bader.x
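
A quick sanity check on any finished calculation (a minimal sketch; Bader writes its results to ACF.dat, BCF.dat and AVF.dat):
cd /path/to/a/finished/calculation
~/.soft/vasp/vaspshell/bader.x CHGCAR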

1.2 Install MKL, IMPI and FFTW

  1. Install Intel Parallel Studio first.
  2. Compile the FFTW3 wrapper shipped with MKL:
cd /opt/intel/mkl/interfaces/fftw3xf
make libintel64 compiler=intel 
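
The wrapper build should leave an FFTW3 interface library behind (typically libfftw3xf_intel.a); its location varies between MKL releases, so check both the interface directory and the MKL lib directory:
ls /opt/intel/mkl/interfaces/fftw3xf/*.a
ls /opt/intel/mkl/lib/intel64/libfftw3xf_intel.a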

1.2.1 FFTW 3

cd fftw/
./configure --prefix=/opt/local/fftw-3.3.8.g7 CC=gcc MPICC=mpicc FC=gfortran F77=gfortran --enable-mpi --enable-openmp --enable-threads --enable-avx
make 
make install
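
A quick check that the serial and OpenMP libraries needed later by makefile.include were installed (assuming the prefix above):
ls /opt/local/fftw-3.3.8.g7/lib       # expect libfftw3.a and libfftw3_omp.a (plus the MPI library)
ls /opt/local/fftw-3.3.8.g7/include   # expect fftw3.h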

1.3 Install Wannier90

1.3.1 Wannier90 3.1 (Intel)

# load the Intel environment
tar -xvf wannier90-3.1.0.tar.gz
cd wannier90-3.1.0
cp config/make.inc.ifort make.inc

# edit make.inc for the library build; comment out COMMS:
...
#COMMS=mpi
LIBDIR = /opt/intel2015/composer_xe_2015.0.090/mkl/lib/intel64
...  
make lib 
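
make lib leaves libwannier.a in the top-level wannier90 directory; copy it to the library directory that LIBDIR in makefile.include will point to in section 1.5.5.1 (a sketch, assuming that path):
cp libwannier.a /home/ywmu/backup/Src/comlib/oneapi/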

# edit make.inc again to build wannier90.x with MPI:
...
COMMS=mpi
LIBDIR = /opt/intel2015/composer_xe_2015.0.090/mkl/lib/intel64
...  
make clean
make            # (default) build wannier90.x and postw90.x
make w90chk2chk # build the w90chk2chk.x utility (see the 'Utility' section of the user guide)
make w90vdw     # build the van der Waals code
make w90pov     # build the ray-tracing code
make w90spn2spn

# lib (GCC + OpenBLAS): cp config/make.inc.gfort make.inc and set:
...
#COMMS=mpi
LIBS = /home/ywmu/backup/Src/comlib/gcc7/libopenblas.a
... 

1.3.2 Wannier90 3.1 (GCC + OpenBLAS)

# load the GCC and OpenBLAS environment
tar -xvf wannier90-3.1.0.tar.gz
cd wannier90-3.1.0
cp config/make.inc.gfort make.inc

# edit make.inc for the library build; comment out COMMS:
...
#COMMS=mpi
LIBS = /home/ywmu/backup/Src/comlib/gcc7/libopenblas.a
...  
make lib 
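
As in the Intel build, copy libwannier.a (left in the top-level directory by make lib) to the library directory used later by makefile.include in section 1.5.5.2 (a sketch, assuming that path):
cp libwannier.a /home/ywmu/backup/Src/comlib/gcc7/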

# edit make.inc again to build wannier90.x with MPI:
...
COMMS=mpi
LIBS = /home/ywmu/backup/Src/comlib/gcc7/libopenblas_omp.a
...  
make clean
make default w90chk2chk w90vdw w90pov w90spn2spn

1.4 Install SSAdNDP

tar -xvf projection_v.1.1.2.tgz && cd projection
make    # the projection binary built here is not used; see the note at the end of this section
tar -xvf  SSAdNDP_program_distr.zip && cd SSAdNDP_program_distr  
cd SSAdNDP/
#------------------
set the MKL path in Makefile.template, then
#------------------
perl mkmf -t Makefile.template -p ssadndp.exe  
make && cp ssadndp.exe ~/.soft/vasp/vaspshell/  
cd ../Visualization
ifort -o visual.exe  visual.f90 
cp visual.exe  ~/.soft/vasp/vaspshell/  
cd Projection
#---------------
set the correct MKLROOT in the Makefile, then
#---------------
make
cp projection ~/.soft/vasp/vaspshell/projection.exe
# use this projection binary rather than the one from projection_v.1.1.2.tgz

1.5 VASP 6.2.1

  • VASP 6.2 does not support composer_xe_2015

1.5.1 DFT-D4 (something wrong with the VASP interface; not used)

  • download cmake from https://cmake.org/download/ and unzip
  • load environment variables for gcc, cmake and python
  • you may need a proxy for git: git config --global http.proxy http://127.0.0.1:8123; disable it with git config --global --unset http.proxy.
export FC=gfortran CC=gcc
meson setup _build
meson test -C _build --print-errorlogs
meson configure _build --prefix=/home/muyw/backup/dft-d4/bin
meson install -C _build 

1.5.2 Install BEEF

unzip libbeef-master
cd libbeef-master
./configure FC=gfortran CC=gcc CXX=g++
# or, with Intel compilers: ./configure FC=ifort CC=icc CXX=icpc
 make 
 cp libbeef.a /home/ywmu/backup/Src/comlib/gcc7/

1.5.3 Solvation

  • https://github.com/henniggroup/VASPsol/issues/43
    As stated in the issue by shk11: The MPI communication error is caused by applying the pbz_patch_541 to the pot.F file of VASP 6.1.0 / 6.1.1 / 6.1.2. As a result, the call of the FERMISHIFT subroutine is inserted at the wrong position in the code. Please find attached another patch which solves both the FERMI-SHIFT bug as well as the previously mentioned issue:
  1. Update the solvation routines by copying the VASPsol/src/solvation.F file to the src/ folder in the original VASP folder
  2. Copy the vaspsol_fermi_vasp6.patch file (see attachment) into the src folder of VASP 6.1.0 / 6.1.1 / 6.1.2
  3. Patch the pot.F, solvation.F, and mpi.F file: patch -p0 < vaspsol_fermi_vasp6.patch
  4. Compile the code as described in the VASP wiki (don’t forget to set the CPP option -Dsol_compat in the makefile.include file)
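
Once compiled with -Dsol_compat, implicit solvation is switched on per calculation from the INCAR. A minimal sketch for water (EB_K defaults to 78.4 if omitted):
LSOL = .TRUE.    ! switch on VASPsol implicit solvation
EB_K = 78.4      ! relative permittivity of the solvent (water)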

1.5.4 FIX CELL

## vi  src/constr_cell_relax.F

      SUBROUTINE CONSTR_CELL_RELAX(FCELL)
      USE prec
      REAL(q) FCELL(3,3)
      INTEGER FIXCELL(3,3) ! added by Mu
      LOGICAL FCEXIST   !added by Mu
      INTEGER  I,J,IREADFIX  ! added by Mu
      SAVE FIXCELL,IREADFIX  ! added by Mu, before DATA
      DATA IREADFIX /0/  ! added by Mu, 0-read in ,1- not read in
      DATA FIXCELL /3*1,3*1,3*1/

!     just one simple example
!     relaxation in z directions only
!      SAVE=FCELL(3,3)
!      FCELL=0   ! F90 style: set the whole array to zero
!      FCELL(3,3)=SAVE

!  added by Mu
      IF(IREADFIX==0) THEN
        INQUIRE(file="FIXCELLCAR",exist=FCEXIST)
        IF(FCEXIST) THEN
          OPEN(unit=1983,file="FIXCELLCAR",status="old")
            DO I=1,3
                  READ(1983,*) (FIXCELL(J,I),J=1,3)
            END DO
          CLOSE(1983)
          WRITE(*,*) "The Matrix for Fixing Cell as Follows:"
          DO I=1,3
            WRITE(*,*) (FIXCELL(J,I),J=1,3)
          END DO
          WRITE(*,*) "End of Matrix for Fixing Cell."
          IREADFIX=2
        ELSE
          IREADFIX=1
        END IF
      END IF
      IF(IREADFIX==2) THEN
          DO I=1,3
            DO J=1,3
                IF(FIXCELL(J,I) == 0) FCELL(J,I)=0.0
            END DO
          END DO
      END IF
!!!!!!!!!!!  end , added by MU

      RETURN
      END SUBROUTINE
  • Prepare a FIXCELLCAR file to fix lattice components, for example for a 2D lattice:
1 1 0
1 1 0
0 0 0
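
In the patched routine, a 0 entry zeroes the corresponding component of the cell force FCELL and a 1 leaves it untouched. The constraint only takes effect when cell relaxation is active (ISIF = 3); the file itself can be created in place, e.g.:
printf '1 1 0\n1 1 0\n0 0 0\n' > FIXCELLCAR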

1.5.5 Standard VASP 6.4

1.5.5.1 Intel

P.S. Intel OneAPI 2021.04 fails for VASP 6.4; Intel OneAPI 2023.02 works.

P.S. The ld version shipped with CentOS 7 may be too old; install a newer binutils (e.g., binutils-2.30).

cp arch/makefile.include.intel_omp makefile.include
####### edit makefile.include (I use AVX2 SIMD for both Intel and AMD CPUs)
## replace -xHost with -march=core-avx2
# Precompiler options
CPP_OPTIONS= -DHOST=\"LinuxIFC\"\
...
             -Dfock_dblbuf \
             -D_OPENMP -Dsol_compat -Dlibbeef
...
VASP_TARGET_CPU ?= -march=core-avx2

...
MKLROOT    ?= /home/ywmu/backup/opt/mkl/latest
...
#SCALAPACK_ROOT ?= /path/to/your/scalapack/installation
#LLIBS      += -L${SCALAPACK_ROOT}/lib -lscalapack
...
LIBDIR     = /home/ywmu/backup/Src/comlib/oneapi
CPP_OPTIONS    += -DVASP2WANNIER90
LLIBS          +=  $(LIBDIR)/libwannier.a
LLIBS          += $(LIBDIR)/libbeef.a

...
########################
make all
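
After the build, check that the three executables exist; recent VASP 6 releases also ship a test suite that can be run with make test (it needs a working mpirun), which is worth doing at least once per toolchain:
ls bin/     # expect vasp_std, vasp_gam and vasp_ncl
make test   # optional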

1.5.5.2 GCC 7.5 + OpenMPI 3.1 + OpenBLAS 0.3.18

GCC 7.5 fails for VASP 6.4 (the problem appears to be related to OpenMP), so GCC should be upgraded before building VASP 6.4.
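
Before building, the upgraded GCC together with OpenMPI and OpenBLAS has to be on PATH and LD_LIBRARY_PATH. A minimal sketch, assuming local installation prefixes (adjust to your own paths):
export PATH=/opt/gcc-9.3/bin:/opt/openmpi-3.1/bin:$PATH
export LD_LIBRARY_PATH=/opt/gcc-9.3/lib64:/opt/openmpi-3.1/lib:$LD_LIBRARY_PATH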

#cp arch/makefile.include.linux_gnu_omp makefile.include
cp arch/makefile.include.linux_gnu  makefile.include

####### edit makefile.include
## replace -march=native with -march=core-avx2
# Precompiler options
CPP_OPTIONS= -DHOST=\"LinuxGNU\" \
...
             -Dfock_dblbuf \
             -D_OPENMP -Dsol_compat -Dlibbeef
...
VASP_TARGET_CPU ?= -march=core-avx2
...
##comment out "FFLAGS     += -fallow-argument-mismatch" for gcc<10

LIBDIR     = /home/ywmu/backup/Src/comlib/gcc7
#OPENBLAS_ROOT ?= /path/to/your/openblas/installation
BLASPACK       = $(LIBDIR)/libopenblas.a
...
#SCALAPACK_ROOT ?= /path/to/your/scalapack/installation
SCALAPACK   = $(LIBDIR)/libscalapack.a
...
FFTW       ?= /home/ywmu/backup/Src/comlib/fftw-3.3.8.g7
LLIBS      += -L$(FFTW)/lib -lfftw3 -lfftw3_omp
INCS       = -I$(FFTW)/include
...
CPP_OPTIONS    += -DVASP2WANNIER90
LLIBS          +=  $(LIBDIR)/libwannier.a
LLIBS          += $(LIBDIR)/libbeef.a
...


########################
make all

1.5.6 NEB Version

tar -xvf vtstcode-184.tgz
cd vtstcode-184
cp *.F ../vasp/src/
cd ../vasp/src/
# edit main.F so that the CHAIN_FORCE call reads:
CALL CHAIN_FORCE(T_INFO%NIONS,DYN%POSION,TOTEN,TIFOR, &
     TSIF,LATT_CUR%A,LATT_CUR%B,IO%IU6)
# edit .objects and add
    bfgs.o \
    dynmat.o \
    instanton.o  \
    lbfgs.o \
    sd.o   \
    cg.o \
    dimer.o \
    bbm.o \
    fire.o \
    lanczos.o \
    neb.o  \
    qm.o \
    opt.o \
before chain.o
#-------------------------  
make std  
mv bin/vasp_std vasp.neb 
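
A typical CI-NEB setup with this binary and the VTST scripts (a sketch; nebmake.pl and the INCAR tags below come from the VTST package):
nebmake.pl POSCAR_initial POSCAR_final 5   # creates image directories 00..06
# INCAR additions:
IMAGES = 5                          ! must match the nebmake.pl argument
SPRING = -5                         ! NEB spring constant (default)
LCLIMB = .TRUE.                     ! climbing-image NEB
IBRION = 3 ; POTIM = 0 ; IOPT = 1   ! hand the optimization to the VTST L-BFGS optimizer
# run with a core count that is a multiple of IMAGES:
mpirun -np 40 /path/to/vasp.neb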

1.5.7 NBO Version

diff -Naur ../src.old/main.F ./main.F
--- ../src.old/main.F   2021-03-16 08:45:37.383789137 +0800
+++ ./main.F    2021-03-16 09:22:12.000000000 +0800
@@ -176,6 +176,7 @@
 ! solvation__
       USE solvation
 ! solvation__
+      USE projection_output
       USE elphon
       USE phonon
       USE embed
@@ -405,6 +406,9 @@
 #ifdef MinimaxTest
       TYPE(imag_grid_handle) :: IMAG_GRIDS
 #endif
+!Added by BDD
+!-----Logical to be read in for performing output of pw's for projection
+      LOGICAL    LNBO
 !=======================================================================
 ! All COMMON blocks
 !=======================================================================
@@ -4567,6 +4571,17 @@
       ENDIF
    ENDIF wrtwave

+  !Added by BDD for customized output
+   wrtnbo: IF ( INFO%LSTOP ) THEN
+
+      !First, test if the flag has been given in the INCAR for output
+      LNBO = .FALSE.
+      CALL RDATAB(IO%LOPEN,'INCAR',IO%IU5,'LNBO','=','#',';','L', &
+     &            IDUM,RDUM,CDUM,LNBO,CHARAC,N,1,IERR)
+      !Then call the subroutine
+      IF ( LNBO )CALL output_NBO(WDES,W,LATT_INI,T_INFO,P,CQIJ)
+
+   ENDIF wrtnbo

 #ifdef VASP_HDF5
    IF ( (IO%LWAVEH5 .AND. ( INFO%LSTOP .OR. LTMP .OR. LHFCALC)) .OR. IO%LCHARGH5) THEN
diff -Naur ../src.old/.objects ./.objects
--- ../src.old/.objects 2021-03-16 08:45:37.367789136 +0800
+++ ./.objects  2021-03-16 09:20:39.022821801 +0800
@@ -237,6 +237,7 @@
        linear_response.o \
        auger.o \
        dmatrix.o \
+       projection_output.o \
        phonon.o \
        wannier_mats.o \
        elphon.o \
#################
cd  vasp/src 
patch <addnbo.patch
cd ..
make std 
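
With the patch applied, the projection output is requested from the INCAR and is only written on the final ionic step (the wrtnbo block tests INFO%LSTOP), so a static run is the natural place for it. A minimal sketch:
NSW  = 0         ! single point on the final geometry
LNBO = .TRUE.    ! write the plane-wave projection data for NBO/SSAdNDP at the end of the run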

2 Unit for CHGCAR

  • From VASP manual, CHGCAR: “This file contains the lattice vectors, atomic coordinates, the total charge density multiplied by the volume \rho(r)*V_{cell} on the fine FFT-grid (NG(X,Y,Z)F), and the PAW one-center occupancies.”

  • From VESTA manual, CHGCAR/CHG/PARCHG: “When these files are read in to visualize isosurfaces and sections, data values are divided by V in the unit of bohr^3. The unit of charge densities input by VESTA is, therefore, bohr^{−3}.”

  • Test: although ELFCAR and CHGCAR share the same format, VESTA divides the data of any file in CHGCAR format by V (in bohr^3) unless the file name unambiguously contains the string “ELFCAR”.
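
  • A handy consistency check follows from this convention: since CHGCAR stores \rho(\mathbf{r}_i) V_{cell} on an FFT grid of N_{grid} points,

      N_{elec} = \int \rho(\mathbf{r})\,d^3r \approx \frac{V_{cell}}{N_{grid}} \sum_i \rho(\mathbf{r}_i) = \frac{1}{N_{grid}} \sum_i \rho(\mathbf{r}_i) V_{cell},

    so the plain average of the grid values in CHGCAR should reproduce the number of valence electrons (NELECT).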

3 VASP Tips

I include here some accumulated VASP wisdom. Many of these tips are rules-of-thumb, so consider investigating them for your particular system of interest.

Geometry Optimizations

  • When performing geometry optimizations that involve changes in the cell shape and/or volume, always ensure that ENCUT > 1.3*ENMAX to prevent Pulay stresses.

  • The first step in a geometry optimization will generally have the highest number of SCF iterations. It is okay if that first step does not converge electronically within the limits of NELM. In fact, it is arguably better to have the first step reach NELM instead of running for many hundreds of SCF iterations.

  • The general rule-of-thumb is to set EDIFFG to 0.05 eV/Å or lower for optimizations. I personally recommend 0.03 eV/Å or lower. For large, flexible materials, any value greater than 0.03 eV/Å is likely too high.

  • When performing a full optimization of atomic positions and cell shape/volume, it is almost always best to do this in stages. It is often wise to start with a relaxation of the atomic positions (ISIF=2) followed by a full volume relaxation (ISIF=3). This will significantly reduce the chance of running into convergence issues.

  • If your material can only be modeled with vacuum space as part of the simulation unit cell (e.g. 2D vdW materials), never use ISIF=3 to optimize the cell volume, as it will simply reduce/eliminate the vacuum. You need to vary the lattice parameters manually and find the global energy minimum that way.

  • When performing optimizations, you are probably better off setting ISYM=0 unless you are certain you wish to constrain the symmetry. Even though this will increase the computational cost by not using available symmetry, it allows the system to reach local minima that would not otherwise be accessible. In addition, it prevents you from having to worry about errors related to SYMPREC. That being said, some symmetry issues sometimes still arise despite having ISYM=0. To resolve these, set SYMPREC = 1.0e-8.

  • If you wish to stop a job but want it to still output any WAVECAR or CHGCAR files, create a file in the working directory named STOPCAR containing a line that reads LSTOP=.TRUE. (this will cause the job to stop on the next ionic step, and all restart files will be written).
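
For example, from the job's working directory:
echo "LSTOP = .TRUE." > STOPCAR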

Choice of Optimization Algorithms

  • In choosing an optimization algorithm for finding local minima, I generally recommend starting with the conjugate gradient (CG) algorithm (IBRION=2) because it is very robust and you do not have to worry about tweaking POTIM.

  • In large, flexible materials with many degrees of freedom, the CG optimization algorithm (IBRION=2) oftentimes results in a bracketing error once it gets relatively close to the local minimum (search for ZBRENT: fatal error in bracketing in the standard output file). This occurs because the potential energy surface is very flat, and the CG algorithm implemented in VASP is based on the energy differences. One option to fix this is to copy the CONTCAR to the POSCAR and tighten EDIFF to 1e-6, but a more reliable option is to use a force-based optimizer (see the restart sketch after this list). Of these, I’d recommend FIRE as implemented with VTST (IBRION=3, IOPT=7). I have found that FIRE is generally more robust than the QN (IBRION=1) method.

  • If the starting structure has extremely high forces on some atoms, make sure you use an appropriate optimizer, at least for the first few steps. I strongly recommend a force-based optimization algorithm with a robust line-search algorithm. For cases where the forces are so high that the structure “explodes” within a few iterations, I recommend using ASE’s BFGSLineSearch algorithm until max|F| < 10 eV/Å or so. Then continue with your favorite optimizer.

  • Generally, for NEB and CI-NEB calculations,  the L-BFGS algorithm (IOPT=1) implemented in VTST is the fastest. For the dimer method, the force-based CG method in VTST (IOPT=2) is recommended. However, if you are having trouble in either case, I suggest switching to the FIRE algorithm (IOPT=7) with the default settings. It is a bit slower, but it is especially useful in troublesome cases of convergence.
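
A minimal restart sketch for the ZBRENT bracketing error mentioned above (tighten EDIFF, or hand the optimization to a force-based VTST optimizer, or both):
cp CONTCAR POSCAR
# INCAR additions:
EDIFF  = 1e-6
IBRION = 3 ; POTIM = 0 ; IOPT = 7   ! FIRE, as implemented in VTST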

Electronic Energy Convergence

  • For insulating materials, SCF convergence is greatly accelerated by using ALGO=All. This has the added benefit that you don’t have to worry about any of the mixing tags. This algorithm is also recommended when using the M06-L meta-GGA functional, regardless of system type.

  • I recommend setting LWAVE=.TRUE. for most jobs, unless you are using an ASE-based optimizer or running many short jobs in succession. There is a large file I/O cost, but the WAVECAR serves as an excellent starting guess for continuation jobs, if needed, and can be deleted when you’re done. I don’t necessarily recommend setting LCHARG=.TRUE. for optimizations. The initial charge density can be calculated from the previous WAVECAR. 

  • Be careful about obtaining final energies from calculations using real-space projectors (e.g. LREAL=Auto) even for large systems. This can impact energies somewhat (even energy differences) depending on what you wish to study. If VASP recommends in the OUTCAR file that your system is large and that you will save time by using real-space projectors, I strongly recommend running your calculations with LREAL=Auto and then doing a final optimization with LREAL=.False. The optimization with the reciprocal-space projectors will only require an additional 1-5 steps on average, so you will still get the speedup of using real-space projectors.

  • While the VASP manual suggests not setting the maximum number of SCF iterations (NELM) above 60, there are many materials (e.g. MOFs) where a higher value is needed for the first few steps. I generally set NELM=150 when studying very large materials.

  • For materials that have charge sloshing or difficult convergence issues with the SCF, I recommend setting NELMIN to 4-6 so that accurate energies and forces are obtained.

  • If you do not explicitly set ISTART or ICHARG, VASP will detect if a WAVECAR is present and use this for the initial wavefunction and charge density if it exists. This is arguably better than explicitly setting ISTART in your INCAR because it prevents VASP from crashing if a WAVECAR is not present (e.g. because the previous job crashed).

  • For insulating materials (or materials that have unknown band gaps), I recommend using Gaussian smearing (ISMEAR=0) since it is appropriate for both conductors and insulators, although there are better choices for conducting materials.

  • For insulating materials, I recommend setting SIGMA to 0.01 eV as a reasonable initial test. Always check to make sure that extrapolation back to T = 0 K from the finite-temperature approximation is close to the fictitious free energy, as listed in the OUTCAR file.

  • It is generally good practice to set PREC=Accurate. This provides reasonable estimates for the integration grid (NGX, NGY, NGZ). It is often stated that meta-GGA functionals require tight integration grids to achieve appreciable convergence. I have found that PREC=Accurate is sufficient even for these troublesome functionals. A representative INCAR combining the settings from this list is sketched below.
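
A representative set of the SCF-related tags discussed above for a large, insulating system (a sketch; these are starting points, not universal settings):
PREC   = Accurate
ALGO   = All       ! all-bands minimization; robust for insulators
ISMEAR = 0         ! Gaussian smearing
SIGMA  = 0.01
NELM   = 150
NELMIN = 4
LREAL  = Auto      ! redo the final energies with LREAL = .FALSE.
LWAVE  = .TRUE.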

Spin Polarization

  • When studying systems with magnetic moments, set LORBIT=11 so that you can view the converged magnetic moments for each atom.

  • If you are continuing a job from a previous WAVECAR, you do not need to set the MAGMOMS in the INCAR file so long as ISPIN=2 (you can if you’d like, but VASP will ignore them).

  • If you are unsure what to set as the magnetic moment for a metal atom, consider setting the MAGMOM value to the number of anticipated unpaired electrons. If this too is completely unknown, setting a value of 5 for d-block and 7 for f-block elements is typically okay as a first guess (see the sketch below). If this approach is taken, it would be wise to use the converged structure to also test other spin initializations. I would also test an initial MAGMOM of 0.1 to see if this converges to a system with no spin, back to the same magnetic moment as the high-spin initialization, or something else entirely.
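
For example, for a hypothetical cell containing 2 Fe atoms followed by 8 O atoms (POSCAR order), a high-spin first guess could be:
ISPIN  = 2
MAGMOM = 2*5.0 8*0.6   ! one value per atom, in POSCAR order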

Transition States

  • If upon restarting a dimer calculation you find that the torque and angles are higher than where they left off, make sure that you are using a VASP build with VTST 3.2 or newer. See here.

  • The usage instructions for the VTST dimmins.pl script are incorrect. It should be dimmins.pl POSCAR MODECAR displacement, where the POSCAR and MODECAR are the resulting files from doing vfin.pl

  • The usage instructions for the VTST neb2dim.pl script are unclear. It should state that you must first run vfin.pl, copy the exts.dat file to the parent directory (where the new POSCAR files are written), and run neb2dim.pl from that parent directory.

Spin/Charge Densities

  • Never generate AECCAR files (e.g. for a Bader analysis or DDEC charges) during a geometry optimization. VASP writes the AECCAR0 file for the input geometry but the AECCAR2 file for the converged geometry, so they are not compatible. Always do a separate single-point energy calculation (NSW=0) when setting LAECHG=.TRUE., as sketched below.
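
A minimal single-point setup consistent with this advice (chgsum.pl is a VTST script; bader.x is the binary built in section 1.1):
# INCAR for the single point:
NSW    = 0
LAECHG = .TRUE.    ! write AECCAR0 and AECCAR2
LCHARG = .TRUE.
# after the run:
perl chgsum.pl AECCAR0 AECCAR2   # writes CHGCAR_sum
bader.x CHGCAR -ref CHGCAR_sum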

Pseudopotentials

  • Always consider using the VASP-recommended PAW potentials, shown here. Note that Li_sv has an ENMAX of 499 eV in the 5.4 version of the PAW_PBE pseudopotentials, but all the rest should have ENMAX < 400 eV. There is now an option in ASE to automatically choose the VASP-recommended default pseudopotentials. Just set setups='recommended'.

  • For DFT+U, hybrid functionals, and meta-GGA calculations, it is recommended to set LASPH=.TRUE. If you use the LASPH=.TRUE. flag, make sure you include it for gas-phase species as well! It should be considered as part of your model chemistry.

  • A subtle point is that, because the non-spherical contributions to the gradient corrections inside the PAW spheres will change with ENCUT, it will be hard (if not impossible) to converge an absolute energy with respect to the plane-wave kinetic energy cutoff if LASPH=.TRUE. Instead, one should look at convergence of relative energies or do a convergence test with LASPH=.FALSE.

Parallel Performance

  • The rule-of-thumb is that the number of processors you should use scales reasonably well with the number of atoms in the system. This holds fairly well for metallic systems where there are many electron-rich elements, but can be an over-estimate for organic or organometallic systems where there are many C and H atoms.

  • I recommend using NCORE in place of NPAR since it automatically adjusts based on the number of nodes. The optimal value for NCORE strongly varies based on the computing environment. A good first guess on many compute clusters is to set NCORE to the number of processors in a given node. Regardless, always make sure that the number of processors per node is divisible by NCORE.

ASE

  • Whenever possible, refrain from using the ASE optimizers (or any external optimizer) with VASP calculations. If necessary, consider using the VaspInteractive calculator instead of the Vasp calculator, as this will reduce CPU time associated with starting and stopping VASP every ionic step. However, even if the VaspInteractive calculator is used in place of the Vasp calculator, VASP will generally require more electronic steps when run with an ASE optimizer than with an internal VASP or VTST optimizer. This is because VASP keeps an internal history of previous ionic steps in its mixer history and because VASP extrapolates the wavefunction and charge density from the previous step to the new positions. There is also the I/O overhead associated with reading/writing any restart files every ionic step.

  • The above holds true for vibrational frequency calculations as well. That being said, it sometimes may be worth it to use ASE’s vibrations module even if there is added CPU overhead. The ASE-generated vibrations can be easily visualized using the ASE GUI, whereas VASP-generated vibrations are more difficult to visualize. The ASE vibrations module also saves restart files (.pckl files) for every displacement, so if the job crashes or exceeds the specified walltime, it is easy to continue where you left off.