NeuroCGMD
Adaptive cooperative molecular dynamics
NeuroCGMD is a Python-native molecular simulation framework that links coarse-grained dynamics, selective refinement, residual learning, and CG-to-AA reconstruction within one documented workflow. It is intended for methodological development, exploratory studies, and structured downstream analysis.

Python-native

Lightweight installation and straightforward evaluation in standard research environments.

Four-layer workflow

Coarse-grained dynamics, selective refinement, residual learning, and back-mapping remain connected end to end.

Structured outputs

Trajectory products, diagnostics, and analysis figures are generated from the same run context.

Review oriented

Organized to support technical evaluation, benchmark discussion, and external scientific review.

Installation
Environment setup and validation
Prepare a clean runtime environment and verify the command-line interface. Read more →
Quickstart
Starter parameter file and run sequence
Follow a compact project layout and execute a first run with a documented TOML file. Read more →
Tutorial
Worked barnase-barstar example
Review a complete demonstration workflow and the resulting analysis products. Read more →
Architecture
Force composition and control flow
Inspect how the engine combines classical dynamics, refinement, learning, and reconstruction. Explore →
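As context for the Quickstart, a run parameter file might look like the sketch below. All section and key names here are illustrative placeholders, not the documented NeuroCGMD schema; consult the Quickstart for the actual file format.

```toml
# Hypothetical run configuration; section and key names are illustrative only.
[system]
topology = "complex_cg.top"
coordinates = "complex_cg.gro"

[dynamics]
integrator = "baoab"        # Langevin BAOAB splitting
timestep_ps = 0.02
temperature_K = 300.0
steps = 100000

[refinement]
enabled = true
max_correction = 50.0       # bound on per-bead force correction

[residual]
alpha = 0.5                 # mixing weight for the ML residual term
```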

Platform Positioning

NeuroCGMD is positioned as a compact adaptive MD framework for exploratory biomolecular modeling, method development, and interpretable analysis. It is not presented as a replacement for established production engines. Its differentiator is the integration of simulation, selective refinement, learning, reconstruction, and analysis within one coherent operating model.

Why “Neuro”?

In NeuroCGMD, the Neuro designation refers to the adaptive learning and control architecture implemented around the MD kernel. It denotes a scheduling and inference framework for region prioritization, memory of informative states, graph adaptation, and dynamic allocation of higher-cost refinement effort during the run.

Adaptive Graph Layer

Dynamic connectivity can be updated in response to simulation behavior, allowing control priorities to evolve with the state of the system.

Plasticity Rules

Plasticity-inspired update rules adjust interaction emphasis and re-prioritize regions or events that prove repeatedly informative.

State Memory

Replay buffers preserve salient configurations for later reuse during learning and correction updates.
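A state memory of this kind can be sketched as a bounded, salience-weighted replay buffer. The class and field names below are illustrative, not the NeuroCGMD API:

```python
import random
from collections import deque

class ReplayBuffer:
    """Bounded memory of salient configurations for later learning updates."""

    def __init__(self, capacity=1024):
        self.buffer = deque(maxlen=capacity)  # oldest entries evicted first

    def add(self, state, correction, salience):
        # Store a configuration together with its refinement correction
        # and a salience score used for prioritized sampling.
        self.buffer.append((state, correction, salience))

    def sample(self, k):
        # Salience-weighted sampling: informative states are replayed more often.
        weights = [s for _, _, s in self.buffer]
        return random.choices(list(self.buffer), weights=weights, k=k)

buf = ReplayBuffer(capacity=4)
for i in range(6):          # adding 6 items to a capacity-4 buffer keeps the last 4
    buf.add(f"state{i}", f"corr{i}", salience=i + 1)
```

The `deque(maxlen=...)` gives first-in-first-out eviction for free; a production buffer would also need a policy for updating salience scores after each learning step.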

Executive Control

A supervisory layer monitors stability, allocates compute effort, and determines when higher-cost refinement should be intensified or relaxed.

Online Neural Residuals

The residual model is trained during the run from higher-cost correction data and extends refinement between direct evaluation points.

Taken together, these modules form an adaptive control layer around the MD kernel: graph updates influence region prioritization, memory informs learning, and the controller modulates computational attention. The intent is not anthropomorphic branding, but a formal description of how correction effort is selected, scheduled, and reused during simulation.
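The executive-control idea, deciding per step whether the higher-cost refinement runs, can be illustrated with a toy budget-limited controller. Everything here is a stand-in sketch under assumed semantics (a scalar "drift" signal and a fixed refinement budget), not the NeuroCGMD controller:

```python
class AdaptiveController:
    """Toy sketch of a supervisory layer: decide, per step, whether
    higher-cost refinement should run, given a drift estimate and a
    fixed fraction of steps allowed to refine."""

    def __init__(self, budget=0.25):
        self.budget = budget  # max fraction of steps that may refine
        self.spent = 0        # refinement steps used so far
        self.seen = 0         # total steps observed

    def should_refine(self, drift):
        self.seen += 1
        # Refine only when drift is high AND the compute budget is not exhausted.
        if drift > 1.0 and self.spent < self.budget * self.seen:
            self.spent += 1
            return True
        return False

ctrl = AdaptiveController(budget=0.25)
decisions = [ctrl.should_refine(d)
             for d in [0.2, 1.5, 1.8, 0.1, 2.0, 1.2, 0.3, 1.9]]
# High-drift steps trigger refinement only while the budget allows it.
```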

The Four Layers
Four layers integrated in one workflow.
CG Dynamics
Coarse-grained propagation
Langevin integration with bonded and nonbonded terms, BAOAB splitting, and cell-list based neighborhood evaluation.
QCloud Refinement
Selective higher-cost correction
Applies bounded force corrections on prioritized regions and feeds event information back into region selection.
ML Residual
Inter-step correction learning
Learns residual corrections from direct refinement data and supplies lower-cost estimates between refinement points.
Back-mapping
CG to atomistic reconstruction
Reconstructs atomistic coordinates from the coarse-grained trajectory to support structure-based downstream analysis.
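For readers unfamiliar with the BAOAB splitting named in the CG Dynamics layer, a single Langevin step for one degree of freedom can be sketched as follows. This is the standard textbook splitting, not the NeuroCGMD implementation; the force and parameters are illustrative:

```python
import math
import random

def baoab_step(x, v, force, dt, mass=1.0, gamma=1.0, kT=1.0,
               rng=random.Random(0)):
    """One BAOAB Langevin step: B (half kick), A (half drift),
    O (exact Ornstein-Uhlenbeck velocity update), A (half drift),
    B (half kick)."""
    v += 0.5 * dt * force(x) / mass            # B: half velocity kick
    x += 0.5 * dt * v                          # A: half position drift
    c = math.exp(-gamma * dt)                  # O: OU damping + thermal noise
    v = c * v + math.sqrt((1.0 - c * c) * kT / mass) * rng.gauss(0.0, 1.0)
    x += 0.5 * dt * v                          # A: half position drift
    v += 0.5 * dt * force(x) / mass            # B: half velocity kick
    return x, v

# Toy harmonic well F(x) = -k x; the trajectory stays bounded near the minimum.
x, v = 1.0, 0.0
for _ in range(1000):
    x, v = baoab_step(x, v, lambda q: -4.0 * q, dt=0.01)
```

In a real CG engine the force callable would evaluate bonded and nonbonded terms over cell-list neighborhoods, and the update would be vectorized over all beads.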
Force composition and output continuity
F_total = F_CG + ΔF_QCloud + α·ΔF_ML  →  integrator  →  CG positions  →  back-map  →  AA
One workflow for force evaluation, propagation, reconstruction, and analysis.
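Read per bead and component, the composition above is a weighted sum of three force arrays. A minimal sketch with illustrative names (plain lists stand in for per-bead force vectors):

```python
def total_force(f_cg, df_qcloud, df_ml, alpha=0.5):
    """F_total = F_CG + dF_QCloud + alpha * dF_ML, componentwise.

    f_cg:      base coarse-grained force
    df_qcloud: bounded correction from selective refinement
    df_ml:     learned residual estimate, scaled by alpha
    """
    return [fc + dq + alpha * dm
            for fc, dq, dm in zip(f_cg, df_qcloud, df_ml)]

f = total_force([1.0, 0.0, 0.0],   # base CG force on one bead
                [0.2, 0.0, 0.0],   # refinement correction
                [0.4, 0.0, 0.0],   # residual estimate
                alpha=0.5)         # ≈ [1.4, 0.0, 0.0]
```

The scalar α gates how much trust is placed in the learned residual between direct refinement points; α = 0 recovers pure CG-plus-refinement dynamics.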
Python-native deployment
TOML-driven execution
Integrated analysis outputs
Research evaluation workflow