
Continuity

A Constraint-Aware Bandit

In February, the control architecture for Continuity took a substantial step forward with the formal integration of the bandit-based decision layer into the QuectoFSM state machine framework. This was the first implementation of what I termed Constrained Local Learning (CLL): a structure where bandit-based exploration is actively gated by contextual rules, and actions are only permitted when predefined safety and state conditions are satisfied.

The multi-armed bandit (MAB) component, now in version 3, moved beyond basic reward selection and hard-coded action cycling. Instead of optimising only low-level parameters (e.g., step height or cycle duration), the new system could also select from a list of gait patterns, each encoded as an arm with an associated performance history. A minimal sketch of both ideas, the gating and the gait arms, appears below.
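As a rough illustration only, here is one way a constraint-gated bandit of this kind could be wired up in Python. The class names, gait labels, UCB1 scoring, and threshold values are all assumptions made for the sketch, not details taken from the Continuity codebase:

```python
import math

# Hypothetical sketch of Constrained Local Learning (CLL): a UCB1-style
# bandit whose arms are gait patterns, with arm selection gated by
# contextual safety/state rules. Names and thresholds are illustrative.

class GaitArm:
    def __init__(self, name):
        self.name = name
        self.pulls = 0
        self.mean_reward = 0.0

    def update(self, reward):
        # Incremental mean keeps a running performance history per arm.
        self.pulls += 1
        self.mean_reward += (reward - self.mean_reward) / self.pulls

class ConstrainedBandit:
    def __init__(self, arms, rules):
        self.arms = arms
        self.rules = rules          # list of (arm, context) -> bool predicates
        self.total_pulls = 0

    def permitted(self, arm, context):
        # An arm is eligible only when every safety/state rule passes.
        return all(rule(arm, context) for rule in self.rules)

    def select(self, context):
        candidates = [a for a in self.arms if self.permitted(a, context)]
        if not candidates:
            return None             # no safe action: fall back to the FSM default
        self.total_pulls += 1
        # Try each permitted arm at least once before exploiting.
        for arm in candidates:
            if arm.pulls == 0:
                return arm
        # UCB1 exploration bonus, computed over the permitted subset only.
        def ucb(arm):
            return arm.mean_reward + math.sqrt(
                2 * math.log(self.total_pulls) / arm.pulls)
        return max(candidates, key=ucb)

# Illustrative gating rules: only act in the WALK state, and block
# dynamic gaits when the measured attitude error is large.
rules = [
    lambda arm, ctx: ctx["fsm_state"] == "WALK",
    lambda arm, ctx: ctx["attitude_err_deg"] < 10.0
                     or arm.name == "recovery_shuffle",
]

bandit = ConstrainedBandit(
    [GaitArm(n) for n in ("tripod", "wave", "recovery_shuffle")], rules)
choice = bandit.select({"fsm_state": "WALK", "attitude_err_deg": 3.2})
if choice is not None:
    choice.update(reward=0.8)       # reward computed from post-step attitude
```

The important property is that exploration never even sees an arm the rules have vetoed, so the learner cannot trade safety for reward; when nothing is permitted, control simply falls back to the deterministic FSM behaviour.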

To regulate instability, the bandit’s reward function was overhauled. The original inverse formulation:

*(equation image not recovered)*
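The original image is gone, so the exact expression is unrecoverable; as a hedged reconstruction, an inverse attitude reward of the usual family (with $\theta_{\text{err}}$ for the attitude deviation and $\epsilon$ a small constant to avoid division by zero, both my notation) would look like:

$$R_{\text{inv}} = \frac{1}{\epsilon + \lvert\theta_{\text{err}}\rvert}$$

A reward of this shape saturates towards zero as the deviation grows, which is exactly the weakness the rewrite addresses.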

It was replaced by a log-scaled penalty system, better suited to sharply discouraging large attitude deviations while maintaining sensitivity around small corrections:

*(equation and figure images not recovered)*
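Those images are also unrecoverable, but a log-scaled penalty consistent with the description, steep near zero and unbounded for large errors, might take a form like the following, where $k$ is a tuning gain and all symbols are again my assumptions:

$$R_{\text{log}} = -\,k \log\!\left(1 + \lvert\theta_{\text{err}}\rvert\right)$$

Unlike the inverse form, which flattens out and offers the learner almost no gradient once the robot is badly tilted, this penalty keeps growing with the deviation, while the logarithm’s steep slope near zero preserves resolution for small corrective adjustments.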