Fig. 1: Diagram of a simple neural network (Source: Wikimedia Commons)
Managing an energy system, or simply the "grid," is essentially an optimal control problem in which energy production and allocation are determined by external factors. These models are extremely complex, however, because their parameters depend on time-varying signals. For these reasons, despite gradual advances, manual engineering of the system has proven suboptimal. Recent advances in artificial intelligence built on artificial neural networks, often dubbed "deep learning," suggest new answers to the age-old optimization problems that form the core of the energy system. Furthermore, the ability to adapt to a large-scale, changing environment allows us to connect essentially all electrical devices, from the supply grid to individual users, into one giant network called the "energy cloud." AI-based optimization of the energy cloud aims to significantly reduce the total energy cost as well as to build a secure, stable network.
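To make the optimization framing concrete, the sketch below solves a toy static economic dispatch problem with scipy.optimize.linprog: choose how much each generator produces so that total cost is minimized while demand is met. The generator costs, capacities, and demand figure are hypothetical illustration values; a real grid adds time coupling, network constraints, and the uncertainty discussed below.

```python
# Toy economic dispatch: minimize generation cost subject to meeting demand.
# All numbers are hypothetical and chosen only to illustrate the problem shape.
import numpy as np
from scipy.optimize import linprog

cost_per_mwh = np.array([20.0, 35.0, 90.0])     # e.g., hydro, gas, peaker ($/MWh)
capacity_mw  = np.array([300.0, 400.0, 200.0])  # maximum output of each unit (MW)
demand_mw    = 650.0                            # total load to be served (MW)

# Minimize cost . x  subject to  sum(x) == demand  and  0 <= x <= capacity.
result = linprog(
    c=cost_per_mwh,
    A_eq=np.ones((1, 3)), b_eq=[demand_mw],
    bounds=list(zip(np.zeros(3), capacity_mw)),
    method="highs",
)

print("dispatch (MW):   ", result.x)    # cheapest units are used first
print("total cost ($/h):", result.fun)
```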
Traditional prediction and optimization techniques formulate the problem using a mathematical model. For simple systems, these model-based approaches often give computationally efficient solutions that are sufficiently accurate. In reality, however, the usage patterns (energy production and consumption) of electrical devices are often complex and non-stationary, and are affected by many influencing factors such as climate and the performance of thermal systems. Hence, it is often impossible to either predict future consumption or optimize usage with traditional approaches. Deep learning approaches address these problems by treating the environment as a black box and adapting their parameters according to the past experience of the system. Fig. 1 shows a diagram of a simple neural network. The nonlinear nature of neural networks allows us to capture complex patterns that are not easily interpreted or modeled.

Mocanu et al. have proposed a deep reinforcement learning-based algorithm for energy consumption prediction. [1] Their RL model keeps a nonlinearly encoded continuous state space that is updated online as each new energy consumption reading arrives. Comparing the RL model with and without the deep state representation, the authors showed that the deep representation gives a prediction error roughly 20 times lower over the longer horizon. They also showed that a model trained on one building can be used to predict consumption in another building, demonstrating transfer learning.
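The sketch below illustrates only the general idea, not the algorithm of Mocanu et al.: a small neural network nonlinearly encodes a window of recent consumption readings and is updated online, one gradient step per new reading, to predict the next value. The data are synthetic, and the window size, layer widths, and learning rate are arbitrary choices; the method in [1] additionally uses a reinforcement learning formulation and cross-building transfer, which are omitted here.

```python
# Minimal online-prediction sketch (assumes PyTorch; not the authors' code).
import torch
import torch.nn as nn

torch.manual_seed(0)

WINDOW = 24  # hours of history used as the nonlinearly encoded "state"

model = nn.Sequential(            # nonlinear state encoder + linear read-out
    nn.Linear(WINDOW, 32), nn.ReLU(),
    nn.Linear(32, 32), nn.ReLU(),
    nn.Linear(32, 1),
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# Synthetic hourly consumption: daily cycle plus noise (stand-in for meter data).
t = torch.arange(24 * 60, dtype=torch.float32)
series = 1.0 + 0.5 * torch.sin(2 * torch.pi * t / 24) + 0.1 * torch.randn_like(t)

# Online loop: as each new reading arrives, predict it from the previous window,
# then take one gradient step on the observed error.
for i in range(WINDOW, len(series)):
    state = series[i - WINDOW:i].unsqueeze(0)   # shape (1, WINDOW)
    target = series[i].view(1, 1)
    prediction = model(state)
    loss = loss_fn(prediction, target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

print(f"final one-step-ahead squared error: {loss.item():.4f}")
```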
The McKinsey Global Institute reported that, as of 2017, 28% of energy firms had already adopted one or more AI technologies at scale. [2] In June 2016, DeepMind announced that it had significantly reduced energy consumption in Google's data centers via reinforcement learning; its stated future goal is to collaborate with the U.K. government to reduce energy consumption for the entire nation. IBM Research, collaborating with the Department of Energy, announced a machine learning-based forecast of solar energy production that could, in theory, reduce by 25% the amount of reserves that must be held to accommodate the uncertainty of solar power output. [3] An AI-powered energy cloud will optimize energy usage and use its predictions of expected consumption to minimize the peak load on the energy grid. The DOE expects that, while total U.S. energy consumption will continue to increase, growth in the peak load on the grid will remain limited to near current levels through 2050 thanks to the smart grid.
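As a toy illustration of forecast-driven peak shaving (not any of the systems cited above), the sketch below takes a hypothetical hourly demand forecast and greedily schedules a flexible load, such as overnight EV charging, into the lowest-demand hours so that the grid peak is not raised. All quantities are made up for illustration.

```python
# Greedy peak-shaving sketch: place flexible energy into forecast valleys.
import numpy as np

hours = np.arange(24)
# Hypothetical demand forecast with an evening peak around 19:00.
forecast_mw = 500 + 150 * np.exp(-((hours - 19) ** 2) / 8.0)

flexible_mwh = 120.0   # total flexible energy to place somewhere in the day
max_rate_mw = 30.0     # limit on how much flexible load any one hour can absorb

schedule_mw = np.zeros(24)
remaining = flexible_mwh
while remaining > 1e-9:
    h = int(np.argmin(forecast_mw + schedule_mw))       # currently lowest hour
    add = min(max_rate_mw - schedule_mw[h], remaining)
    if add <= 0:       # every hour is at its rate limit; stop (not reached here)
        break
    schedule_mw[h] += add
    remaining -= add

total_mw = forecast_mw + schedule_mw
print(f"forecast peak:                        {forecast_mw.max():.0f} MW")
print(f"peak after scheduling flexible load:  {total_mw.max():.0f} MW")
print(f"peak if 30 MW coincided with the peak hour: {forecast_mw.max() + max_rate_mw:.0f} MW")
```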
Using AI to optimize device-level energy consumption has enormous potential to reduce energy demand. Self-driving cars and IoT@Home devices are expected to significantly reduce the amount of energy consumed on the user side, and the inclusion of such device-level AI in the smart grid will significantly enhance the overall efficiency of the grid. For this to happen, however, the reliability and safety of the energy cloud must be guaranteed. Its robustness and security against external risks must be established, and here again AI can play a significant role.
© Jae Hyun Kim. The author warrants that the work is the author's own and that Stanford University provided no input other than typesetting and referencing guidelines. The author grants permission to copy, distribute and display this work in unaltered form, with attribution to the author, for noncommercial purposes only. All other rights, including commercial rights, are reserved to the author.
[1] E. Mocanu et al., "Unsupervised Energy Prediction in a Smart Grid Context Using Reinforcement Cross-Building Transfer Learning," Energy Build. 116, 646 (2016).
[2] J. Bughin et al., "Artificial Intelligence: The Next Digital Frontier?" McKinsey and Company, June 2017.
[3] J. Zhang et al., "Baseline and Target Values For Regional and Point PV Power Forecasts: Toward Improved Solar Forecasting," Sol. Energy 122, 904 (2015).