News Release

An MLP-Mixer and mixture-of-experts model for remaining useful life prediction of lithium-ion batteries

Peer-Reviewed Publication

Higher Education Press

Image: The MMMe model predicts the RUL process. (Credit: Lingling ZHAO, Shitao SONG, Pengyan WANG, Chunyu WANG, Junjie WANG, Maozu GUO)

Although deep learning-based methods have shown promising results in estimating the remaining useful life (RUL) of lithium-ion batteries, most of them assume that the features at every time step are equally important. Treating data whose contributions vary with equal weights can limit a model's feature-extraction capability. There is therefore value in exploring methods that can effectively handle the varying contributions of different time steps in RUL estimation.

To address this problem, a research team led by Lingling ZHAO published its research on 15 October 2024 in Frontiers of Computer Science, co-published by Higher Education Press and Springer Nature.

The team proposed the MLP-Mixer and Mixture-of-Experts (MMMe) model for RUL prediction. Experimental results on the public NASA and CALCE datasets show that the proposed model significantly outperforms existing methods, providing more reliable and accurate RUL predictions while closely tracking the capacity-decay process.

To better capture characteristics of the capacity time series such as long-range sequence dependence and abrupt changes, the team designed the MMMe model around three components. It leverages a Gated Recurrent Unit and a Multi-Head Attention mechanism to encode the battery-capacity sequence and capture its temporal features, and a ReZero MLP-Mixer to capture high-level features. In addition, the team devised an ensemble predictor based on a Mixture-of-Experts (MoE) architecture to generate reliable RUL predictions.
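The ensemble predictor can be pictured as a set of small expert networks whose outputs are blended by a learned gate. Below is a minimal, hypothetical PyTorch sketch of such a mixture-of-experts regression head; the class name MoEHead, the number of experts, and the layer sizes are illustrative assumptions, not details from the paper.

import torch
import torch.nn as nn

class MoEHead(nn.Module):
    """Mixture-of-experts regression head: a softmax gate blends expert outputs."""
    def __init__(self, d_model: int, n_experts: int = 4, d_hidden: int = 64):
        super().__init__()
        # Each expert maps the learned feature vector to a scalar RUL estimate.
        self.experts = nn.ModuleList([
            nn.Sequential(nn.Linear(d_model, d_hidden), nn.ReLU(), nn.Linear(d_hidden, 1))
            for _ in range(n_experts)
        ])
        # The gate assigns a weight to each expert for every input.
        self.gate = nn.Linear(d_model, n_experts)

    def forward(self, h: torch.Tensor) -> torch.Tensor:
        # h: (batch, d_model) pooled sequence features
        weights = torch.softmax(self.gate(h), dim=-1)             # (batch, n_experts)
        preds = torch.cat([e(h) for e in self.experts], dim=-1)   # (batch, n_experts)
        return (weights * preds).sum(dim=-1, keepdim=True)        # (batch, 1) RUL estimate

Blending several experts through a learned gate, rather than relying on a single regressor, is what the release refers to as the ensemble predictor.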

The proposed method first constructs a time-series matrix to preserve temporal information and projects the original input into a high-dimensional space using a Bi-directional Gated Recurrent Unit with Multi-Head Attention (BiGRU-MHA) encoder. A ReZero MLP-Mixer is then employed to learn abstract features of the capacity fading. Finally, a mixture-of-experts (MoE) mechanism predicts the RUL from the learned features. Extensive experiments on two publicly available lithium-ion battery (LIB) datasets demonstrate that MMMe outperforms all baseline methods for RUL prediction.
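Putting the pieces together, the pipeline described above can be sketched end to end as follows. This is a minimal, hypothetical PyTorch rendering under assumed hyperparameters (window length, hidden sizes, number of attention heads and experts); it reuses the MoEHead class sketched above and is not the authors' published code.

import torch
import torch.nn as nn

class ReZeroMixerBlock(nn.Module):
    """MLP-Mixer block whose residual branches are scaled by a gate initialised to zero (ReZero)."""
    def __init__(self, n_tokens: int, d_model: int):
        super().__init__()
        self.token_mlp = nn.Sequential(nn.Linear(n_tokens, n_tokens), nn.GELU(),
                                       nn.Linear(n_tokens, n_tokens))
        self.channel_mlp = nn.Sequential(nn.Linear(d_model, d_model), nn.GELU(),
                                         nn.Linear(d_model, d_model))
        self.alpha = nn.Parameter(torch.zeros(1))  # ReZero residual weight, learned from zero

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, n_tokens, d_model); mix across time steps, then across channels
        x = x + self.alpha * self.token_mlp(x.transpose(1, 2)).transpose(1, 2)
        x = x + self.alpha * self.channel_mlp(x)
        return x

class MMMeSketch(nn.Module):
    """BiGRU-MHA encoder -> ReZero MLP-Mixer -> MoE head (hypothetical sizes)."""
    def __init__(self, window: int = 16, d_model: int = 64, n_heads: int = 4):
        super().__init__()
        # Bi-directional GRU projects the capacity window into a higher-dimensional space.
        self.bigru = nn.GRU(input_size=1, hidden_size=d_model // 2,
                            batch_first=True, bidirectional=True)
        # Multi-head self-attention weights time steps by their contribution.
        self.mha = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.mixer = ReZeroMixerBlock(n_tokens=window, d_model=d_model)
        self.head = MoEHead(d_model)  # MoE predictor sketched earlier

    def forward(self, capacity_window: torch.Tensor) -> torch.Tensor:
        # capacity_window: (batch, window, 1) sliding window of capacity measurements
        h, _ = self.bigru(capacity_window)   # (batch, window, d_model)
        h, _ = self.mha(h, h, h)             # temporal self-attention over the window
        h = self.mixer(h)                    # high-level feature mixing
        return self.head(h.mean(dim=1))      # pool over time, then predict RUL

# Example with random data: an 8-sample batch of 16-step capacity windows.
model = MMMeSketch()
rul = model(torch.randn(8, 16, 1))  # -> tensor of shape (8, 1)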

DOI: 10.1007/s11704-023-3277-4


Disclaimer: AAAS and EurekAlert! are not responsible for the accuracy of news releases posted to EurekAlert! by contributing institutions or for the use of any information through the EurekAlert system.