Comparison study of high-performance rule-based HVAC control with deep reinforcement learning-based control in a multi-zone VAV system.

Number: 3347

Author(s): LU X., FU Y., XU S., ZHU Q., O’NEILL Z., YANG Z.

Summary

The design, commissioning, and retrofit of heating, ventilation, and air-conditioning (HVAC) control systems are crucial for energy efficiency but often neglected. Designers and control contractors generally adopt ad-hoc control sequences based on diffuse and fragmented information, so the majority of existing control sequences are diverse and sub-optimal. ASHRAE Guideline 36 (GDL36), High-Performance Sequences of Operation for HVAC Systems, was therefore developed to provide standardized, high-performance rule-based HVAC control sequences focused mainly on maximizing energy efficiency. However, these high-performance rule-based control sequences are still under development, and only a few studies have verified their overall effectiveness. In addition, the performance evaluations in most existing studies focus only on the energy-saving potential relative to conventional rule-based control strategies. In this study, the high-performance rule-based control sequences from GDL36 were compared with state-of-the-art deep reinforcement learning (DRL) control in terms of energy efficiency in a multi-zone VAV system. The system-level supervisory controls (i.e., the supply air temperature and supply differential pressure setpoints) in ASHRAE GDL36 were replaced by their DRL counterparts, whose action space is a two-dimensional continuous space. A five-zone medium office building model in Modelica was used as a virtual testbed. In particular, the plant-side power consumption was computed with a regression model to reflect the real operating conditions of the plant loop. Proximal policy optimization (PPO) was selected as the DRL algorithm because of its stable performance on continuous action spaces and the relative ease of its hyper-parameter tuning. The DRL algorithm was implemented with the Tianshou library in Python, and a containerized OpenAI Gym environment was leveraged to connect the Modelica building model to the DRL algorithm.
Typical load conditions in Chicago (ASHRAE climate zone 5A) were considered, using week-long simulations under high and mild load conditions. The simulation results show that the control sequences from GDL36 achieve energy efficiency and thermal comfort comparable to the DRL controls.
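The supervisory-control setup described above — an agent that, at each control interval, sets a supply air temperature setpoint and a supply differential pressure setpoint for a simulated building — can be sketched as a minimal Gym-style environment. This is an illustrative assumption, not the authors' code: the class name, action bounds, observation contents, episode length, and reward weighting are all hypothetical, and the Modelica co-simulation calls are replaced by placeholders.

```python
import numpy as np

class VAVSupervisoryEnv:
    """Hypothetical Gym-style sketch of the supervisory-control task:
    each step, the agent picks a 2-D continuous action -- a supply air
    temperature setpoint and a supply differential pressure setpoint.
    Bounds, observation layout, and reward weights are illustrative
    assumptions, not values from the paper."""

    # Action bounds: [SAT setpoint (degC), differential pressure setpoint (Pa)]
    ACTION_LOW = np.array([12.0, 100.0])
    ACTION_HIGH = np.array([18.0, 400.0])

    def reset(self):
        # A real implementation would (re)initialize the containerized
        # Modelica co-simulation and return its first observation.
        self.t = 0
        return np.zeros(4)  # e.g. outdoor temp, mean zone temp, power, hour

    def step(self, action):
        action = np.clip(action, self.ACTION_LOW, self.ACTION_HIGH)
        # A real implementation would write the two setpoints to the
        # Modelica model, advance one control interval, then read back
        # HVAC power and zone conditions to form the reward.
        hvac_power_kw = 0.0    # placeholder for simulated HVAC power
        comfort_penalty = 0.0  # placeholder for thermal-comfort violation
        reward = -(hvac_power_kw + 10.0 * comfort_penalty)
        self.t += 1
        obs = np.zeros(4)
        done = self.t >= 168   # one week of hourly steps, per the testbed
        return obs, reward, done, {}
```

An environment with this `reset`/`step` interface and a continuous box action space is what a PPO implementation such as Tianshou's expects to train against; the negative-power-plus-comfort-penalty reward reflects the energy-efficiency and thermal-comfort objectives compared in the study.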

Available documents

  • Format: PDF
  • Pages: 10 p.
  • Availability: free

Details

  • Original title: Comparison study of high-performance rule-based HVAC control with deep reinforcement learning-based control in a multi-zone VAV system.
  • Record ID : 30030228
  • Languages: English
  • Source: 2022 Purdue Conferences. 7th International High Performance Buildings Conference at Purdue.
  • Publication date: 2022
