KIDL: A knowledge-informed deep learning paradigm for generalizable and stability-optimized car-following models
Tsinghua University Press (Peer-Reviewed Publication)
In this study, we propose a novel Knowledge-Informed Deep Learning (KIDL) paradigm that, to the best of our knowledge, is the first to unify behavioral generalization and traffic flow stability by systematically integrating high-level knowledge distillation from large language models (LLMs) with physically grounded stability constraints in car-following modeling. Generalization is enhanced by distilling car-following knowledge from LLMs into a lightweight and efficient neural network, while local and string stability are achieved by embedding physically grounded constraints into the distillation process. Experimental results on real-world traffic datasets validate the effectiveness of the KIDL paradigm, showing that it can replicate and even surpass the LLM's generalization performance. It also outperforms traditional physics-based, data-driven, and hybrid car-following models (CFMs) by at least 10.18% in trajectory simulation root mean square error (RMSE). Furthermore, theoretical and numerical analyses prove that the resulting KIDL model ensures local and string stability at all equilibrium states, offering a strong foundation for advancing AV technologies.
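To make the two stability notions concrete, the sketch below shows how linearized local and string stability conditions for a generic car-following model a = f(s, v, Δv) can be turned into hinge penalties added to a distillation loss. This is a minimal illustration, not the paper's implementation: the optimal-velocity-style placeholder model, the numerical differentiation, the Wilson-type string stability criterion, and all function names and parameters are assumptions introduced here for exposition.

```python
import numpy as np

# Placeholder optimal-velocity-style model a = f(s, v, dv), standing in for
# the distilled KIDL network (s: spacing, v: speed, dv: speed difference).
def f(s, v, dv, alpha=0.6, beta=0.9, v_max=30.0, s0=2.0, T=1.5):
    v_star = v_max * np.tanh(max(s - s0, 0.0) / (v_max * T))  # desired speed
    return alpha * (v_star - v) + beta * dv

def partials(s, v, dv, h=1e-5):
    """Central-difference partial derivatives f_s, f_v, f_dv at a state."""
    f_s = (f(s + h, v, dv) - f(s - h, v, dv)) / (2 * h)
    f_v = (f(s, v + h, dv) - f(s, v - h, dv)) / (2 * h)
    f_dv = (f(s, v, dv + h) - f(s, v, dv - h)) / (2 * h)
    return f_s, f_v, f_dv

def stability_penalties(s, v):
    """Hinge penalties that vanish when the linearized conditions hold
    at the equilibrium (s, v, dv = 0):
      local:  f_v < 0
      string: f_v**2 / 2 - f_dv * f_v - f_s >= 0  (Wilson-type criterion)
    """
    f_s, f_v, f_dv = partials(s, v, 0.0)
    local_pen = max(0.0, f_v)                              # > 0 iff f_v >= 0
    string_pen = max(0.0, f_s + f_dv * f_v - f_v ** 2 / 2) # > 0 iff criterion fails
    return local_pen, string_pen

def kidl_loss(pred_acc, teacher_acc, eq_states, w=10.0):
    """Distillation loss sketch: match (hypothetical) LLM teacher
    accelerations, plus stability penalties over sampled equilibria."""
    mse = float(np.mean((np.asarray(pred_acc) - np.asarray(teacher_acc)) ** 2))
    pen = sum(sum(stability_penalties(s, v)) for s, v in eq_states)
    return mse + w * pen
```

As a sanity check on the criterion, this placeholder reduces to the classical optimal velocity model when beta = 0, where the string condition becomes the well-known requirement that the sensitivity alpha exceed twice the slope of the desired-speed function.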
Practically, KIDL offers a deployable solution for AV control, serving as a high-level motion reference that ensures realistic and stable car-following in mixed traffic environments. Moreover, the framework provides a promising pathway for integrating LLM-derived knowledge into traffic modeling by distilling it into a lightweight model with embedded physical constraints, balancing generalization with real-world feasibility.

Journal: Communications in Transportation Research