NeuroCGMD

ML Residual Learning Layer

Online residual learning layer that approximates force corrections between higher-cost refinement steps while maintaining stability controls.

MODEL ARCHITECTURE — neural residual model
Multi-Layer Perceptron (MLP)
Maps particle features to force correction vectors
Input layer: particle positions (3D), velocities (3D), local environment descriptors derived from the neighbor list

Hidden layers: 2–3 fully-connected layers with ReLU activation. Width scales with system size.

Output layer: per-particle force correction vectors (ΔFx, ΔFy, ΔFz)

The model learns a residual relative to the base CG force calculation using correction data supplied by the refinement layer. In practice, it approximates the force-adjustment function observed during higher-cost refinement calls.
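The architecture above can be sketched as a small NumPy forward pass. This is a minimal illustration, not the package's implementation: the layer widths, the 8-component descriptor length, and the function names (`init_mlp`, `predict`) are all hypothetical choices.

```python
import numpy as np

def init_mlp(n_features, hidden=64, n_layers=2, seed=0):
    """Initialize an MLP mapping per-particle features to 3D force corrections."""
    rng = np.random.default_rng(seed)
    sizes = [n_features] + [hidden] * n_layers + [3]
    # He initialization suits the ReLU hidden layers; biases start at zero.
    return [(rng.normal(0.0, np.sqrt(2.0 / m), (m, n)), np.zeros(n))
            for m, n in zip(sizes[:-1], sizes[1:])]

def predict(params, x):
    """Forward pass. x: (n_particles, n_features) -> (n_particles, 3)
    per-particle corrections (ΔFx, ΔFy, ΔFz)."""
    for W, b in params[:-1]:
        x = np.maximum(x @ W + b, 0.0)   # fully-connected ReLU hidden layers
    W, b = params[-1]
    return x @ W + b                     # linear output layer

# Feature vector per particle: position (3) + velocity (3)
# + local environment descriptors (here an assumed 8 components).
params = init_mlp(n_features=14)
feats = np.zeros((5, 14))
dF = predict(params, feats)              # shape (5, 3)
```

Because the model predicts a residual, its output is added on top of the base CG force rather than replacing it.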
training protocol
ON-THE-FLY TRAINING — online training during the simulation
Training Loop
QCloud corrections as reference targets
At each scheduled refinement step, the QCloud layer produces force corrections. These become training targets:

Loss = Σᵢ ‖ΔF_predicted(i) − ΔF_QCloud(i)‖²

The model updates via stochastic gradient descent (SGD) with momentum. Learning rate is adaptive. The training set grows continuously throughout the simulation.
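One SGD-with-momentum update against the squared-error loss above might look like the following sketch. A linear model stands in for the MLP to keep the example short, and the learning rate, momentum coefficient, and batch shapes are illustrative, not documented values.

```python
import numpy as np

def sgd_momentum_step(W, velocity, feats, target, lr=1e-3, beta=0.9):
    """One update of a linear stand-in model dF_pred = feats @ W against
    refinement-layer targets, minimizing sum_i |dF_pred(i) - dF_target(i)|^2."""
    pred = feats @ W
    grad = 2.0 * feats.T @ (pred - target)   # gradient of the squared-error loss
    velocity = beta * velocity - lr * grad   # momentum accumulation
    return W + velocity, velocity

# Synthetic training set: targets generated by a known linear map.
rng = np.random.default_rng(1)
feats = rng.normal(size=(32, 14))
W_true = rng.normal(size=(14, 3))
target = feats @ W_true

W = np.zeros((14, 3))
v = np.zeros_like(W)
for _ in range(500):
    W, v = sgd_momentum_step(W, v, feats, target)
```

In the real workflow the training set is not fixed: each refinement call appends new (features, correction) pairs, so the loop above would run incrementally as data arrives.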
Mixing Strategy
Documented reference mixing rule
When direct refinement is active, the ML prediction is down-weighted to avoid double-counting the correction already supplied by the refinement layer:

F_total = F_CG + ΔF_QCloud + 0.35 · ΔF_ML

Between scheduled refinement steps, ML predictions run at full scale:

F_total = F_CG + 1.0 · ΔF_ML

The coefficient shown here reflects the documented reference workflow and may be treated as a tunable implementation choice rather than a universal constant.
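The two regimes can be captured in a single dispatch function. This is a sketch of the documented mixing rule; the function name and the way the refinement step is signalled (passing `dF_qcloud=None` between refinements) are assumptions.

```python
import numpy as np

def total_force(F_cg, dF_ml, dF_qcloud=None, ml_weight_at_refinement=0.35):
    """Combine force terms per the documented reference mixing rule.
    At a refinement step the QCloud correction is applied and the ML
    prediction is down-weighted; between refinements the ML term runs
    at full scale. The 0.35 weight is a tunable choice, not a constant."""
    if dF_qcloud is not None:   # refinement step active
        return F_cg + dF_qcloud + ml_weight_at_refinement * dF_ml
    return F_cg + dF_ml         # in-between step: full-scale ML correction

F_cg = np.array([[1.0, 0.0, 0.0]])
dF_ml = np.array([[0.2, 0.0, 0.0]])
dF_qc = np.array([[0.1, 0.0, 0.0]])
```

With these values, the refinement-step total is 1.0 + 0.1 + 0.35 · 0.2 = 1.17, while the in-between total is 1.0 + 0.2 = 1.2.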
safety mechanisms
DRIFT CONTROL & UNCERTAINTY — stability controls for learned corrections
Energy Drift Monitor
Rolling-window conservation check
Monitors total energy over a rolling window of states. If the ML corrections introduce excessive drift, the model predictions can be scaled down to protect simulation stability.
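A rolling-window drift check of this kind can be sketched as below. The window size, drift tolerance, halving schedule, and class name are all illustrative assumptions, not documented parameters.

```python
from collections import deque

class EnergyDriftMonitor:
    """Rolling-window conservation check that damps ML corrections when
    relative energy drift over the window exceeds a tolerance.
    Window size and tolerance here are illustrative, not documented values."""

    def __init__(self, window=100, tol=0.05):
        self.energies = deque(maxlen=window)  # rolling window of total energies
        self.tol = tol
        self.ml_scale = 1.0                   # multiplier applied to ΔF_ML

    def update(self, total_energy):
        """Record the latest total energy and return the current ML scale."""
        self.energies.append(total_energy)
        if len(self.energies) < 2:
            return self.ml_scale
        e0 = self.energies[0]
        drift = abs(self.energies[-1] - e0) / max(abs(e0), 1e-12)
        if drift > self.tol:
            # Excessive drift: halve the weight of ML predictions.
            self.ml_scale = max(0.0, self.ml_scale * 0.5)
        return self.ml_scale
```

The returned scale would multiply ΔF_ML in the mixing step, so a drifting model is progressively muted rather than switched off abruptly.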
Ensemble Uncertainty
Variance from multiple predictions
An ensemble of ML models can provide uncertainty estimates. When predictions disagree (high variance), this signals that the current configuration is out-of-distribution and QCloud should be called for a direct correction.
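A minimal sketch of the disagreement test, assuming predictions are stacked into one array; the threshold value and both function names are hypothetical.

```python
import numpy as np

def ensemble_disagreement(predictions):
    """predictions: (n_models, n_particles, 3) array of ΔF from an ensemble.
    Returns the per-particle standard deviation of the predicted correction
    magnitude across models; large values flag out-of-distribution inputs."""
    return np.std(np.linalg.norm(predictions, axis=-1), axis=0)

def needs_refinement(predictions, threshold=0.1):
    """Trigger a direct QCloud correction when any particle's ensemble
    disagreement exceeds a (hypothetical) threshold."""
    return bool(np.any(ensemble_disagreement(predictions) > threshold))
```

When all ensemble members agree the simulation keeps using the cheap ML correction; a single strongly disagreeing particle is enough to schedule a direct refinement call.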