DFT energy gradients error in QM/MM DFT dynamics

I have changed my .nw file and the "**********" output disappeared, but another error is produced.
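For context, the run is a QM/MM DFT dynamics calculation; a minimal sketch of the general shape of such an input deck is below. The job name, basis set, functional, and qmmm settings are only illustrative placeholders, not my actual values.

start qmmm_md_test                   # illustrative job name

md
  system qmmm_md_test_md             # prefix of the .top/.rst files from the prepare step
                                     # (time step, temperature, etc. omitted here)
end

basis
  * library 6-31G*                   # placeholder basis set
end

dft
  xc b3lyp                           # placeholder functional
  iterations 100
end

qmmm
  region qm                          # QM region treated at the DFT level
  density espfit                     # ESP-fitted QM charges seen by the MM part
end

task qmmm dft dynamics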
Here is the error.

failed qmmm_energy_gradient 0
qmmm_main                       failed qmmm_energy_gradient        0
...

This error has not yet been assigned to a category
qmmm_main                       failed qmmm_energy_gradient        0
...

For further details see manual section: No section for this category
12:12:qmmm_main failed qmmm_energy_gradient:: 0
(rank:12 hostname:node2 pid:16008):ARMCI DASSERT fail. armci.c:ARMCI_Error():260 cond:0
...

MPI_ABORT was invoked on rank 0 in communicator MPI_COMM_WORLD
with errorcode 0.

NOTE: invoking MPI_ABORT causes Open MPI to kill all MPI processes.
You may or may not see output from other processes, depending on
exactly when Open MPI kills them.


Last System Error Message from Task 0:: Inappropriate ioctl for device
Last System Error Message from Task 1:: Numerical result out of range
Last System Error Message from Task 2:: Numerical result out of range
Last System Error Message from Task 3:: Numerical result out of range
Last System Error Message from Task 4:: Numerical result out of range
Last System Error Message from Task 5:: Numerical result out of range
Last System Error Message from Task 6:: Numerical result out of range
Last System Error Message from Task 7:: Numerical result out of range
Last System Error Message from Task 8:: Numerical result out of range
Last System Error Message from Task 9:: Numerical result out of range
Last System Error Message from Task 10:: Numerical result out of range
Last System Error Message from Task 11:: Numerical result out of range
Last System Error Message from Task 12:: Numerical result out of range
Last System Error Message from Task 13:: Numerical result out of range
Last System Error Message from Task 14:: Numerical result out of range
Last System Error Message from Task 15:: Numerical result out of range
Last System Error Message from Task 16:: Numerical result out of range
Last System Error Message from Task 17:: Numerical result out of range
Last System Error Message from Task 18:: Numerical result out of range
Last System Error Message from Task 19:: Numerical result out of range
Last System Error Message from Task 20:: Numerical result out of range
Last System Error Message from Task 21:: Numerical result out of range
Last System Error Message from Task 22:: Numerical result out of range
Last System Error Message from Task 23:: Numerical result out of range
...

mpirun has exited due to process rank 21 with PID 16017 on
node node2 exiting without calling "finalize". This may
have caused other processes in the application to be terminated by signals sent by mpirun (as reported here).


[node2:15995] 19 more processes have sent help message help-mpi-api.txt / mpi-abort
[node2:15995] Set MCA parameter "orte_base_help_aggregate" to 0 to see all help / error messages

Could someone give me some suggestions? Any advice would be highly appreciated.

