

Current practice for asteroid close proximity maneuvers requires extremely accurate characterization of the environmental dynamics and precise spacecraft positioning prior to the maneuver. In contrast, the learning process described here leverages high-performance computing to train a closed-loop neural network controller. This controller may be employed onboard to autonomously generate low-thrust control profiles in real time without imposing a heavy workload on a flight computer. The proposed controller functions without direct knowledge of the dynamical model; direct interaction with the nonlinear equations of motion creates a flexible learning scheme that is not limited to a single force model, mission scenario, or spacecraft. Control feasibility is demonstrated through sample transfers between Lyapunov orbits in the Earth-Moon system. The sample low-thrust controller exhibits remarkable robustness to perturbations and generalizes effectively to nearby motion, and the results demonstrate the controller's ability to directly guide a spacecraft despite large initial deviations and to augment a traditional targeting guidance approach. Finally, the flexibility of the learning framework is demonstrated across a range of mission scenarios and low-thrust engine types.
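As a concrete illustration of this closed-loop architecture, the sketch below propagates the CRTBP equations of motion in the rotating frame with a low-thrust acceleration supplied by a policy network at fixed guidance intervals. It is a minimal sketch, not the authors' implementation: the network shape, the random weights, the initial state, and the guidance interval are all illustrative assumptions.

```python
import numpy as np
from scipy.integrate import solve_ivp

MU = 0.01215  # Earth-Moon mass ratio (approximate; assumption for illustration)

def crtbp_low_thrust(t, state, accel):
    """CRTBP equations of motion in the rotating frame (nondimensional units),
    with a constant low-thrust acceleration applied over the integration span."""
    x, y, z, vx, vy, vz = state
    r1 = np.sqrt((x + MU)**2 + y**2 + z**2)      # distance to Earth
    r2 = np.sqrt((x - 1 + MU)**2 + y**2 + z**2)  # distance to Moon
    ax = 2*vy + x - (1 - MU)*(x + MU)/r1**3 - MU*(x - 1 + MU)/r2**3 + accel[0]
    ay = -2*vx + y - (1 - MU)*y/r1**3 - MU*y/r2**3 + accel[1]
    az = -(1 - MU)*z/r1**3 - MU*z/r2**3 + accel[2]
    return [vx, vy, vz, ax, ay, az]

def policy(observation, weights):
    """Hypothetical stand-in for the trained actor network: a single hidden
    layer mapping the observed state to a bounded thrust acceleration."""
    h = np.tanh(weights["W1"] @ observation + weights["b1"])
    return 1e-2 * np.tanh(weights["W2"] @ h + weights["b2"])  # bounded output

# Closed-loop propagation: query the policy at fixed guidance intervals.
rng = np.random.default_rng(0)
weights = {"W1": rng.normal(size=(16, 6)), "b1": np.zeros(16),
           "W2": rng.normal(size=(3, 16)), "b2": np.zeros(3)}
state = np.array([0.82, 0.0, 0.0, 0.0, 0.15, 0.0])  # illustrative initial state
for step in range(50):
    u = policy(state, weights)
    sol = solve_ivp(crtbp_low_thrust, (0.0, 0.02), state, args=(u,), rtol=1e-10)
    state = sol.y[:, -1]
```

Note that the controller only queries the network between integration spans, which is what keeps the onboard workload light: the expensive interaction with the nonlinear equations of motion happens during training, not in flight.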

Onboard autonomy is an essential component in enabling increasingly complex missions into deep space, yet computationally efficient guidance strategies are challenging to construct in nonlinear dynamical environments: many traditional approaches rely on either simplifying assumptions in the dynamical model or on abundant computational resources. This research effort employs reinforcement learning, a subset of machine learning, to produce a ‘lightweight’ closed-loop controller that is potentially suitable for onboard low-thrust guidance in challenging dynamical regions of space. The algorithm is tested in the circular restricted three-body problem (CRTBP) framework for Near Rectilinear Orbits (NRO) in the Earth-Moon system. It shows promising results in terminal guidance error and satisfies path constraints in scenarios comprising spherical constraints and keep-out spheres with approach corridors. Furthermore, this approach indicates that reinforcement learning can be effectively used to solve constrained relative spacecraft guidance problems in complex environments, and can thus be effective for autonomous relative motion operations in the Earth-Moon dynamical environment.
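One plausible way such path constraints enter a reinforcement learning formulation is through the reward signal. The sketch below penalizes entering a keep-out sphere outside a conical approach corridor and charges the terminal guidance error at episode end; the radii, cone angle, and penalty weights are assumptions for illustration, not values from the work summarized here.

```python
import numpy as np

KEEP_OUT_RADIUS = 0.5                    # keep-out sphere radius (illustrative)
CORRIDOR_HALF_ANGLE = np.radians(15.0)   # approach-corridor half angle (assumed)

def violates_constraints(rel_pos, corridor_axis):
    """True if the chaser is inside the keep-out sphere but outside the
    approach corridor (a cone opening along the unit vector corridor_axis)."""
    dist = np.linalg.norm(rel_pos)
    if dist < 1e-12 or dist >= KEEP_OUT_RADIUS:
        return False  # at the target or outside the sphere: no restriction
    cos_angle = np.dot(rel_pos, corridor_axis) / dist
    return cos_angle < np.cos(CORRIDOR_HALF_ANGLE)  # outside the corridor cone

def reward(rel_pos, rel_vel, corridor_axis, done):
    """Shaped reward: penalize constraint violations along the path and the
    terminal miss distance / residual relative velocity at episode end."""
    r = -1e-3 * np.linalg.norm(rel_pos)        # mild shaping toward the target
    if violates_constraints(rel_pos, corridor_axis):
        r -= 1.0                               # path-constraint penalty
    if done:
        r -= 10.0 * np.linalg.norm(rel_pos)    # terminal guidance error
        r -= 10.0 * np.linalg.norm(rel_vel)    # residual relative velocity
    return r
```

Penalizing violations rather than forbidding them keeps the learning problem unconstrained at the optimizer level, so a standard actor-critic method can be applied directly.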

This paper presents a feedback guidance algorithm for proximity operations in the cislunar environment based on actor-critic reinforcement learning. The method relies on reinforcement learning to make the well-known Zero-Effort-Miss/Zero-Effort-Velocity (ZEM/ZEV) guidance state dependent and to allow path constraints to be directly embedded. The resulting algorithm is lightweight, closed-loop, and capable of taking path constraints into account, and effective guidance in sample scenarios suggests extendibility of the learning framework to higher-fidelity domains.
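For context, the classical energy-optimal ZEM/ZEV law commands the acceleration a = 6·ZEM/t_go² − 2·ZEV/t_go, where ZEM and ZEV are the predicted terminal position and velocity errors if no further control were applied. The sketch below implements that textbook, field-free form; the actor-critic scheme described above effectively replaces the fixed gains with learned, state-dependent ones, which this sketch does not attempt to reproduce.

```python
import numpy as np

def zem_zev_acceleration(r, v, r_target, v_target, t_go):
    """Classical energy-optimal ZEM/ZEV feedback law (field-free form):
    ZEM is the predicted terminal position miss with no further control,
    ZEV the predicted terminal velocity miss; the commanded acceleration
    is a = 6*ZEM/t_go**2 - 2*ZEV/t_go."""
    zem = r_target - (r + v * t_go)   # zero-effort miss
    zev = v_target - v                # zero-effort velocity error
    return 6.0 * zem / t_go**2 - 2.0 * zev / t_go

# Example: drive a chaser from rest to the origin in t_go = 100 s
# (illustrative states and step size, simple Euler propagation).
r = np.array([1000.0, -500.0, 200.0])
v = np.zeros(3)
r_t, v_t = np.zeros(3), np.zeros(3)
dt, t_go = 1.0, 100.0
while t_go > dt:   # stop before t_go -> 0, where the gains diverge
    a = zem_zev_acceleration(r, v, r_t, v_t, t_go)
    v += a * dt
    r += v * dt
    t_go -= dt
```

In the learned variant, the actor network outputs corrections to (or replacements for) the 6/t_go² and 2/t_go gains as a function of the current state, which is how path constraints can be embedded directly into an otherwise analytic guidance law.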

As human presence in cislunar space continues to expand, so too does the demand for ‘lightweight’ automated on-board processes.
