Active Flow Control for Bluff Body under High Reynolds Number Turbulent Flow Conditions Using Deep Reinforcement Learning

Keywords

Computational learning
Advanced Numerical Methods for Scientific Computing
Code:
107/2024
Title:
Active Flow Control for Bluff Body under High Reynolds Number Turbulent Flow Conditions Using Deep Reinforcement Learning
Date:
Thursday 19th December 2024
Author(s):
Chen, J.; Ballini, E.; Micheletti, S.
Download link:
Abstract:
This study employs Deep Reinforcement Learning (DRL) for active flow control in a high Reynolds number turbulent flow at Re = 274000. An agent is trained to obtain a control strategy that reduces the drag of a cylinder while also minimizing lift oscillations. Probes are placed only on the surface of the cylinder, and a Proximal Policy Optimization (PPO) agent controls nine zero-net mass flux jets on the downstream side of the cylinder. The trained PPO agent reduces drag by 29% and the amplitude of lift oscillations by 18%, with the control effect demonstrating good repeatability. Control tests of this agent over the Reynolds number range Re = 260000 to 288000 show that its control strategy possesses a certain degree of robustness, achieving very similar drag reduction at different Reynolds numbers. Analysis of the power spectral energy reveals that the agent learns specific flow frequencies and effectively suppresses low-frequency, large-scale structures. Graphically visualizing the policy, combined with pressure, vorticity, and turbulent kinetic energy contours, reveals the mechanism by which the jets achieve drag reduction by influencing the reattachment vortices. This study successfully implements robust active flow control in turbulent flows at practically relevant high Reynolds numbers, minimizing computational cost (through two-dimensional geometries and turbulence modeling) while maximally considering the feasibility of future experimental implementation.
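To make the control setup described in the abstract concrete, the sketch below shows how such a DRL flow-control loop is typically wired up: surface-pressure probes as observations, nine zero-net mass flux jets as actions, and a reward that trades drag reduction against lift oscillations, trained with PPO. This is a minimal illustration only; the CFD solver is replaced by a placeholder, and all names (FlowControlEnv, n_probes, the reward coefficients) are hypothetical assumptions, not the authors' implementation.

```python
# Hedged sketch of a PPO-based active flow control loop (not the authors' code).
# Assumes a gymnasium-style environment and stable-baselines3 PPO; the CFD
# coupling is replaced by a random placeholder so the script is self-contained.
import numpy as np
import gymnasium as gym
from gymnasium import spaces
from stable_baselines3 import PPO


class FlowControlEnv(gym.Env):
    """Toy stand-in for the cylinder flow environment (no real CFD)."""

    def __init__(self, n_probes=64, n_jets=9, episode_len=200):
        super().__init__()
        # Observations: pressure readings from probes on the cylinder surface.
        self.observation_space = spaces.Box(-np.inf, np.inf, shape=(n_probes,), dtype=np.float32)
        # Actions: normalized jet intensities; a zero-net-mass-flux constraint
        # would be enforced in the solver coupling in the real setup.
        self.action_space = spaces.Box(-1.0, 1.0, shape=(n_jets,), dtype=np.float32)
        self.episode_len = episode_len
        self._t = 0

    def _solver_step(self, action):
        # Placeholder for one CFD control window: returns surface pressures,
        # drag coefficient Cd, and lift coefficient Cl (random values here).
        obs = np.random.randn(self.observation_space.shape[0]).astype(np.float32)
        cd = 1.0 + 0.1 * np.random.randn()
        cl = 0.5 * np.random.randn()
        return obs, cd, cl

    def reset(self, seed=None, options=None):
        super().reset(seed=seed)
        self._t = 0
        obs, _, _ = self._solver_step(np.zeros(self.action_space.shape, dtype=np.float32))
        return obs, {}

    def step(self, action):
        obs, cd, cl = self._solver_step(action)
        # Common reward shape in DRL flow control: penalize drag above a
        # baseline and the magnitude of lift; the weights are guesses.
        reward = -(cd - 1.0) - 0.2 * abs(cl)
        self._t += 1
        truncated = self._t >= self.episode_len
        return obs, reward, False, truncated, {}


if __name__ == "__main__":
    env = FlowControlEnv()
    model = PPO("MlpPolicy", env, verbose=0)
    model.learn(total_timesteps=2_000)  # tiny budget, just to exercise the loop
```

In practice the placeholder _solver_step would be replaced by a coupling to the turbulence-resolving solver advancing the flow over one control interval, while the reward weights and probe layout would follow the study's actual configuration.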