
XAI for Time Series

This is a hub for our team dedicated to explainable AI (XAI) for time series, providing valuable resources and insights to enhance understanding and application in this critical field.

We are hiring!

  • Research Interns: We are currently looking for passionate graduate students to join our team as interns! If you are interested in contributing to the field of explainable AI for time series and making a meaningful impact, check out our job openings.
  • Alibaba Innovative Research: We are actively seeking professors for exciting collaboration opportunities! This one-year project focuses on explainable AI (XAI) in time-series scenarios. Professors can send 1-2 interns to join our team, enhancing their academic experience while contributing to meaningful research. More details are available on the official website.

If you are interested in exploring these opportunities, please reach out to us at [guxinyue.gxy@alibaba-inc.com] or [linxiao.ylx@alibaba-inc.com]. Let's work together to push the boundaries of XAI!

Table of Contents

Our XAI Team: Fostering understanding between people and machines.

Team Members

Selected Research

  • Explain Temporal Black-Box Models via Functional Decomposition:

Explaining temporal models is a significant challenge due to the inherent characteristics of time series data, notably the strong temporal dependencies and interactions between observations.

The unique characteristics of time series data:

  1. The high degree of temporal dependency (including the commonly used autocorrelation) inherent in time series data complicates the task of identifying the influence of specific inputs on the model outputs.
  2. Events occurring at a given time point can have enduring effects on subsequent observations, encapsulating phase shifts that ripple through future data points.
  3. Explanations need to consider not only single time points (e.g., points corresponding to events) but also time series subsequences.
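
For orientation (the paper's temporal formulation may differ from this generic form), "functional decomposition" typically refers to a functional-ANOVA-style expansion of a black-box model into a constant term, main effects, and interaction terms:

```latex
% Generic functional-ANOVA decomposition of a model f over inputs x_1, ..., x_d:
% a constant term, per-input main effects, pairwise interactions, and so on.
f(x_1, \dots, x_d) = f_0 + \sum_{i} f_i(x_i) + \sum_{i<j} f_{ij}(x_i, x_j) + \dots + f_{1 \dots d}(x_1, \dots, x_d)
```

In the temporal setting, such a decomposition has to account for the dependencies and subsequence effects listed above.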

cite this:

@inproceedings{yang2024explain,
  title={Explain Temporal Black-Box Models via Functional Decomposition},
  author={Yang, Linxiao and Tong, Yunze and Gu, Xinyue and Sun, Liang},
  booktitle={International Conference on Machine Learning},
  year={2024},
  organization={PMLR}
}

  • Interactive Generalized Additive Model and Its Applications in Electric Load Forecasting:

Inaccurate load forecasting may lead to outages or wasted energy. How can we produce accurate electric load forecasts when there is

  1. limited data or even no data, as in load forecasting for holidays or under extreme weather conditions, and
  2. a need for model interpretability, since high-stakes decision-making usually follows load forecasting?

We propose an interactive GAM that is not only interpretable but can also incorporate domain knowledge from the electric power industry for improved performance.
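
As general background (this is a toy additive model, not the paper's interactive GAM), a GAM predicts with a sum of per-feature shape functions, so each feature's contribution to a forecast can be read off directly. A minimal sketch with hypothetical shape functions for temperature and hour of day:

```python
import numpy as np

# Toy additive load model: load ≈ intercept + f_temp(temperature) + f_hour(hour).
# Both shape functions below are made up for illustration only.

def f_temp(t):
    # Load rises when it is very cold (heating) or very hot (cooling).
    return 0.04 * (t - 18.0) ** 2

def f_hour(h):
    # A simple daily cycle peaking in the early evening (h = 18).
    return 5.0 * np.sin(2 * np.pi * (h - 12) / 24)

def predict_load(temperature, hour, intercept=50.0):
    contributions = {
        "intercept": intercept,
        "temperature": f_temp(temperature),
        "hour": f_hour(hour),
    }
    # The total is a sum of individually interpretable terms.
    return sum(contributions.values()), contributions

total, parts = predict_load(temperature=30.0, hour=18)
print(total, parts)
```

Because the prediction is an explicit sum, each term can be inspected, constrained, or adjusted with domain knowledge.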

cite this:

@inproceedings{yang2023interactive,
  title={Interactive Generalized Additive Model and Its Applications in Electric Load Forecasting},
  author={Yang, Linxiao and Ren, Rui and Gu, Xinyue and Sun, Liang},
  booktitle={Proceedings of the 29th ACM SIGKDD Conference on Knowledge Discovery and Data Mining},
  pages={5393--5403},
  year={2023}
}

  • Learning Interpretable Decision Rule Sets: A Submodular Optimization Approach:

Rule sets are highly interpretable logical models in which the decision predicates are expressed in disjunctive normal form (DNF, OR-of-ANDs). We consider a submodular optimization based approach to form an accurate and interpretable rule set. The proposed approach is simple, scalable, and can handle the exponential-sized ground set of rules.
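
To make the model class concrete (this sketch illustrates a DNF rule set, not the submodular learning algorithm itself), a rule set predicts positive when at least one AND-clause of predicates is satisfied. A minimal example with hypothetical predicates:

```python
# A rule set in disjunctive normal form (OR-of-ANDs): predict positive
# if at least one clause (an AND of predicates) fires.
# The predicates below are purely illustrative.
rule_set = [
    # clause 1: "young AND high income"
    [lambda x: x["age"] < 30, lambda x: x["income"] > 80_000],
    # clause 2: "owns a home AND low debt"
    [lambda x: x["owns_home"], lambda x: x["debt"] < 10_000],
]

def predict(x, rules=rule_set):
    return any(all(pred(x) for pred in clause) for clause in rules)

print(predict({"age": 25, "income": 90_000, "owns_home": False, "debt": 50_000}))  # True
print(predict({"age": 45, "income": 40_000, "owns_home": False, "debt": 20_000}))  # False
```

Learning such a set means selecting clauses from an exponentially large pool of candidate rules, which is where the submodular optimization formulation comes in.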

cite this:

@inproceedings{yang2021learning,
  title={Learning interpretable decision rule sets: A submodular optimization approach},
  author={Yang, Fan and He, Kai and Yang, Linxiao and Du, Hongxia and Yang, Jingbang and Yang, Bo and Sun, Liang},
  booktitle={Advances in Neural Information Processing Systems},
  volume={34},
  pages={27890--27902},
  year={2021}
}

Resources of XAI for Time Series
