Dear all,
On the same computer, I found that MLFF training becomes slower when I copy ML_ABN from the first run to ML_AB and continue training.
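For reference, this is the continuation workflow I am using; a sketch only, and the INCAR tag names reflect my understanding of the VASP on-the-fly MLFF interface, which may differ between versions:

```
# After the first training run finishes, reuse the collected ab-initio data:
cp ML_ABN ML_AB

# INCAR for the continuation run (tag names assumed, version-dependent):
ML_LMLFF  = .TRUE.    # enable machine-learned force fields
ML_ISTART = 1         # continue on-the-fly training from an existing ML_AB
```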
Training becomes slower when copying ML_ABN to ML_AB to continue training
Re: Training becomes slower when copying ML_ABN to ML_AB to continue training
Hi,
It is great that you are doing some testing. Could you clarify what exactly you are observing?
For reference, the ab-initio calculation should remain at the same computational cost in every MD step unless you changed some settings. During training, more and more local reference configurations are collected, and it will indeed cost more computational effort to add, e.g., the 15th local reference configuration compared to the 4th and to apply the design matrix. However, entirely avoiding the addition of local reference configurations is not an option, since this is what improves the force field.

Comparing a restarted training calculation with a single, longer training calculation, the computational cost should not differ significantly (apart from the overhead of writing and reading files etc.).
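To illustrate why later additions cost more, here is a rough cost model (a sketch for intuition only, not VASP's actual implementation): the regression behind the force field solves a least-squares problem whose design matrix gains a column for each local reference configuration, so every update becomes more expensive as training proceeds.

```python
def fit_cost_flops(n_rows, n_refs):
    """Rough FLOP count for solving the normal equations
    (Phi^T Phi) w = Phi^T y with an (n_rows x n_refs) design matrix Phi.
    n_rows  ~ number of collected training equations (energies/forces),
    n_refs  ~ number of local reference configurations (columns)."""
    gram = n_rows * n_refs ** 2   # forming Phi^T Phi
    solve = n_refs ** 3           # Cholesky-type factorization and solve
    return gram + solve

# Cost at 4, 15, and 50 reference configurations (fixed row count):
costs = [fit_cost_flops(1000, n) for n in (4, 15, 50)]
print(costs)
# The cost grows faster than linearly in the number of reference
# configurations, so the 15th addition is noticeably pricier than the 4th.
```

This is why a training run naturally slows down over time, independent of whether it was restarted from ML_AB or run in one go.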
Do you have a question in connection with your observation?
Marie-Therese