I'm running ML_FF (machine-learned force field) calculations on hybrid perovskites. Since the system contains hydrogen atoms, the number of local reference configurations (the ML basis sets) becomes very large. With the default ML_MB, the run quickly stopped with the hint that ML_MB was too small, so I gradually increased ML_MB from 2000 to 4000 to 7000, each time the code stopped and suggested increasing it. Now, with ML_MB = 9000, the code crashes before any SCF loop starts. The error is:
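For context, the relevant INCAR settings were of the following form (a sketch only; the exact ML tags available depend on the VASP version, and ML_ISTART here is an assumption about the training mode):

```
ML_LMLFF = .TRUE.   ! enable machine-learned force fields
ML_ISTART = 0       ! on-the-fly training from scratch (assumed)
ML_MB = 9000        ! max number of local reference configurations (basis sets)
```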
Code:
Program received signal SIGSEGV: Segmentation fault - invalid memory reference.
Backtrace for this error:
#0 0x7f0b43aded6f in ???
#1 0x7f0b49b4974d in buff2block
at /tmp/tmp.2R3jevzSvm/gnu8.1_x86_64_build/mp/scalapack/REDIST/SRC/pdgemr.c:679
#2 0x7f0b49b4974d in Cpdgemr2d
at /tmp/tmp.2R3jevzSvm/gnu8.1_x86_64_build/mp/scalapack/REDIST/SRC/pdgemr.c:547
#3 0x4a2108 in ???
#4 0x56df5b in ???
#5 0x578163 in ???
#6 0x5806aa in ???
#7 0xa2786f in ???
#8 0x10afe3a in ???
#9 0x10e2d33 in ???
#10 0x7f0b43ac92bc in ???
#11 0x40a719 in ???
at ../sysdeps/x86_64/start.S:120
#12 0xffffffffffffffff in ???
srun: error: nid005553: task 0: Segmentation fault
srun: launch/slurm: _step_signal: Terminating StepId=2753888.0
The estimated memory consumption printed at the start of the run is:

Code:
Estimated memory consumption for ML force field generation (MB):
Persistent allocations for force field : 38414.4
|
|-- CMAT for basis : 16453.8
|-- FMAT for basis : 1974.4
|-- DESC for basis : 1646.5
|-- DESC product matrix : 53.1
Persistent allocations for ab initio data : 9.4
|
|-- Ab initio data : 8.9
|-- Ab initio data (new) : 0.4
Temporary allocations for sparsification : 406.5
|
|-- SVD matrices : 405.5
Other temporary allocations : 609.7
|
|-- Descriptors : 42.3
|-- Regression : 519.7
|-- Prediction : 47.7
Total memory consumption : 39439.9
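As a sanity check on the report above, the four top-level allocation categories add up to the printed total (within rounding), so the estimate itself looks internally consistent; the question is whether ~39 GB actually fits in the memory available to the job:

```python
# Sum the top-level entries of VASP's "Estimated memory consumption" report (values in MB)
persistent_ff = 38414.4      # persistent allocations for the force field
persistent_abinitio = 9.4    # persistent allocations for ab initio data
temp_sparsification = 406.5  # temporary allocations for sparsification
temp_other = 609.7           # other temporary allocations

total_mb = persistent_ff + persistent_abinitio + temp_sparsification + temp_other
print(f"{total_mb:.1f} MB (~{total_mb / 1024:.1f} GB)")
```

This prints 39440.0 MB, matching the reported total of 39439.9 MB up to rounding of the individual entries.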
Do you have any suggestions on how to resolve this segmentation fault, or on reducing the memory needed for the ML basis sets?

Best,
Xiaoming