NeuroCGMD
NeuroCGMD is a Python-native molecular simulation framework that links coarse-grained dynamics, selective refinement, residual learning, and CG-to-AA reconstruction within one documented workflow. It is intended for methodological development, exploratory studies, and structured downstream analysis.
- Lightweight installation and straightforward evaluation in standard research environments.
- Coarse-grained dynamics, selective refinement, residual learning, and back-mapping remain connected end to end.
- Trajectory products, diagnostics, and analysis figures are generated from the same run context.
- Organized to support technical evaluation, benchmark discussion, and external scientific review.
Platform Positioning
NeuroCGMD is positioned as a compact adaptive MD framework for exploratory biomolecular modeling, method development, and interpretable analysis. It is not presented as a replacement for established production engines. Its differentiator is the integration of simulation, selective refinement, learning, reconstruction, and analysis within one coherent operating model.
Why “Neuro”?
In NeuroCGMD, the Neuro designation refers to the adaptive learning and control architecture implemented around the MD kernel. It denotes a scheduling and inference framework for region prioritization, memory of informative states, graph adaptation, and dynamic allocation of higher-cost refinement effort during the run.
Adaptive Graph Layer
Dynamic connectivity can be updated in response to simulation behavior, allowing control priorities to evolve with the state of the system.
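One way to picture this is a connectivity matrix whose edge weights are reinforced between beads that are currently fluctuating strongly and decayed elsewhere. The sketch below is illustrative only; the function name `update_connectivity`, its parameters, and the thresholding rule are assumptions, not the NeuroCGMD API.

```python
import numpy as np

def update_connectivity(weights, fluctuations, rate=0.1, threshold=0.5):
    """Strengthen edges between beads whose recent fluctuation magnitude
    exceeds a threshold; decay all other edge weights toward zero.

    weights      -- (N, N) symmetric connectivity matrix
    fluctuations -- (N,) per-bead fluctuation magnitudes
    """
    active = fluctuations > threshold            # beads behaving "interestingly"
    boost = np.outer(active, active).astype(float)
    # Exponential moving average: old weights decay, active pairs are boosted.
    return (1 - rate) * weights + rate * boost
```

Control priorities derived from such a matrix naturally track where the system is currently dynamic, which is the behavior the adaptive graph layer describes.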
Plasticity Rules
Plasticity-inspired update rules are used to adjust interaction emphasis and to re-prioritize repeatedly informative regions or events.
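A minimal Hebbian-style version of such a rule: regions flagged as informative are reinforced toward full priority, while everything decays slowly in the background. All names and constants here are hypothetical illustrations, not part of the framework.

```python
def update_priorities(priorities, informative, lr=0.2, decay=0.05):
    """Plasticity-inspired re-prioritization.

    priorities  -- dict mapping region name -> priority in [0, 1]
    informative -- set of regions flagged as informative this cycle
    """
    out = {}
    for region, p in priorities.items():
        if region in informative:
            p = p + lr * (1.0 - p)          # reinforce toward 1.0
        out[region] = max(0.0, p - decay)   # uniform slow forgetting
    return out
```

Repeatedly informative regions climb toward priority 1.0 under this rule, while quiet regions fade, which matches the re-prioritization behavior described above.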
State Memory
Replay buffers preserve salient configurations for later reuse during learning and correction updates.
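A replay buffer of this kind can be sketched as a fixed-capacity store with random sampling for later training passes; the class below is a generic illustration, assuming nothing about how NeuroCGMD scores or serializes configurations.

```python
import random
from collections import deque

class ReplayBuffer:
    """Fixed-capacity store of salient configurations for later reuse."""

    def __init__(self, capacity=1000, seed=0):
        self.buf = deque(maxlen=capacity)   # oldest entries evicted first
        self.rng = random.Random(seed)

    def add(self, state, score):
        """Record a configuration together with its saliency score."""
        self.buf.append((state, score))

    def sample(self, k):
        """Draw up to k stored configurations uniformly at random."""
        k = min(k, len(self.buf))
        return self.rng.sample(list(self.buf), k)
```

Bounded capacity keeps memory cost constant over long runs, while random sampling decorrelates the reused states from the order in which they were observed.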
Executive Control
A supervisory layer monitors stability, allocates compute effort, and determines when higher-cost refinement should be intensified or relaxed.
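As a toy example of such a supervisory rule, a refinement budget could be doubled while an instability signal (e.g. energy drift) is large and halved while it stays small. The signal, thresholds, and budget semantics here are assumptions for illustration.

```python
def refinement_budget(drift, budget, max_budget=32, min_budget=1):
    """Adjust the per-cycle refinement budget from an instability signal.

    drift  -- monitored instability measure (illustrative: energy drift)
    budget -- current number of higher-cost refinement calls per cycle
    """
    if drift > 1e-2:                          # unstable: intensify refinement
        return min(max_budget, budget * 2)
    if drift < 1e-4:                          # quiet: relax refinement
        return max(min_budget, budget // 2)
    return budget                             # in-band: hold steady
```

The clamping bounds keep the controller from starving refinement entirely or letting its cost grow without limit.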
Online Neural Residuals
The residual model is trained online from higher-cost correction data and extends those corrections to configurations between direct evaluation points.
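In its simplest form, an online residual model of this kind is a regressor updated by stochastic gradient steps whenever a higher-cost correction is observed, and queried in between. The linear model below is a deliberately minimal sketch, not the architecture used by NeuroCGMD.

```python
import numpy as np

class ResidualModel:
    """Online linear residual model: learns corrections observed at
    higher-cost evaluation points and predicts them elsewhere."""

    def __init__(self, dim, lr=0.01):
        self.w = np.zeros(dim)
        self.lr = lr

    def predict(self, x):
        """Predicted correction for feature vector x."""
        return x @ self.w

    def update(self, x, target):
        """One SGD step on the squared error against an observed correction."""
        err = self.predict(x) - target
        self.w -= self.lr * err * x
        return err
```

Training and prediction interleave with the simulation itself, which is what lets learned corrections extend refinement between the direct evaluation points.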
Taken together, these modules form an adaptive control layer around the MD kernel: graph updates influence region prioritization, memory informs learning, and the controller modulates computational attention. The intent is not anthropomorphic branding, but a formal description of how correction effort is selected, scheduled, and reused during simulation.
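The overall operating model can be sketched as a single adaptive step: integrate the coarse dynamics everywhere, then spend higher-cost refinement only on high-priority regions. The function and callback names below are hypothetical placeholders for whatever integrator and refiner the framework actually wires in.

```python
def adaptive_step(state, priorities, integrate, refine, threshold=0.7):
    """One cycle of the adaptive control loop.

    state      -- opaque simulation state
    priorities -- dict mapping region name -> priority in [0, 1]
    integrate  -- callable advancing the coarse dynamics: state -> state
    refine     -- callable applying higher-cost refinement: (state, region) -> state
    """
    state = integrate(state)                  # cheap CG step everywhere
    for region, p in priorities.items():
        if p >= threshold:                    # controller gates the expensive work
            state = refine(state, region)
    return state
```

Under this scheme the graph and plasticity rules shape `priorities`, the replay buffer and residual model consume what refinement produces, and the executive layer tunes `threshold` and the refinement budget over time.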