
Commit da39c8a

update data

actions-user committed Aug 19, 2024
1 parent 5b87532
Showing 64 changed files with 3,414 additions and 3,438 deletions.
12 changes: 6 additions & 6 deletions docs/recommendations/06a0ba437d41a7c82c08a9636a4438c1b5031378.md
@@ -11,7 +11,7 @@ hide:

<body>
<p>
<i class="footer">This page was last updated on 2024-08-12 06:05:45 UTC</i>
<i class="footer">This page was last updated on 2024-08-19 06:05:28 UTC</i>
</p>

<div class="note info" onclick="startIntro()">
@@ -51,7 +51,7 @@ hide:
<td>2023-03-15</td>
<td>PLOS Computational Biology</td>
<td>3</td>
-<td>54</td>
+<td>55</td>
</tr>

<tr id="High-throughput data acquisition in synthetic biology leads to an abundance of data that need to be processed and aggregated into useful biological models. Building dynamical models based on this wealth of data is of paramount importance to understand and optimize designs of synthetic biology constructs. However, building models manually for each data set is inconvenient and might become infeasible for highly complex synthetic systems. In this paper, we present state-of-the-art system identification techniques and combine them with chemical reaction network theory (CRNT) to generate dynamic models automatically. On the system identification side, Sparse Bayesian Learning offers methods to learn from data the sparsest set of dictionary functions necessary to capture the dynamics of the system into ODE models; on the CRNT side, building on such sparse ODE models, all possible network structures within a given parameter uncertainty region can be computed. Additionally, the system identification process can be complemented with constraints on the parameters to, for example, enforce stability or non-negativity-thus offering relevant physical constraints over the possible network structures. In this way, the wealth of data can be translated into biologically relevant network structures, which then steers the data acquisition, thereby providing a vital step for closed-loop system identification.">
@@ -63,7 +63,7 @@ hide:
<td>2018-09-24</td>
<td>2018 IEEE Conference on Decision and Control (CDC)</td>
<td>3</td>
-<td>34</td>
+<td>35</td>
</tr>

<tr id="Reconstruction of biochemical reaction networks (BRN) and genetic regulatory networks (GRN) in particular is a central topic in systems biology which raises crucial theoretical challenges in system identification. Nonlinear Ordinary Differential Equations (ODEs) that involve polynomial and rational functions are typically used to model biochemical reaction networks. Such nonlinear models make the problem of determining the connectivity of biochemical networks from time-series experimental data quite difficult. In this paper, we present a network reconstruction algorithm that can deal with ODE model descriptions containing polynomial and rational functions. Rather than identifying the parameters of linear or nonlinear ODEs characterised by pre-defined equation structures, our methodology allows us to determine the nonlinear ODEs structure together with their associated parameters. To solve the network reconstruction problem, we cast it as a compressive sensing (CS) problem and use sparse Bayesian learning (SBL) algorithms as a computationally efficient and robust way to obtain its solution.">
@@ -75,7 +75,7 @@ hide:
<td>2012-05-08</td>
<td>2012 IEEE 51st IEEE Conference on Decision and Control (CDC)</td>
<td>33</td>
-<td>34</td>
+<td>35</td>
</tr>

<tr id="None">
@@ -110,8 +110,8 @@ hide:
</td>
<td>2022-06-01</td>
<td>Nonlinear Dynamics</td>
-<td>27</td>
-<td>90</td>
+<td>28</td>
+<td>91</td>
</tr>

<tr id="Systems biology is a new discipline built upon the premise that an understanding of how cells and organisms carry out their functions cannot be gained by looking at cellular components in isolation. Instead, consideration of the interplay between the parts of systems is indispensable for analyzing, modeling, and predicting systems' behavior. Studying biological processes under this premise, systems biology combines experimental techniques and computational methods in order to construct predictive models. Both in building and utilizing models of biological systems, inverse problems arise at several occasions, for example, (i) when experimental time series and steady state data are used to construct biochemical reaction networks, (ii) when model parameters are identified that capture underlying mechanisms or (iii) when desired qualitative behavior such as bistability or limit cycle oscillations is engineered by proper choices of parameter combinations. In this paper we review principles of the modeling process in systems biology and illustrate the ill-posedness and regularization of parameter identification problems in that context. Furthermore, we discuss the methodology of qualitative inverse problems and demonstrate how sparsity enforcing regularization allows the determination of key reaction mechanisms underlying the qualitative behavior.">
12 changes: 6 additions & 6 deletions docs/recommendations/0acd117521ef5aafb09fed02ab415523b330b058.md
@@ -11,7 +11,7 @@ hide:

<body>
<p>
<i class="footer">This page was last updated on 2024-08-12 06:05:57 UTC</i>
<i class="footer">This page was last updated on 2024-08-19 06:05:36 UTC</i>
</p>

<div class="note info" onclick="startIntro()">
@@ -50,7 +50,7 @@ hide:
</td>
<td>2023-11-01</td>
<td>Chaos</td>
-<td>2</td>
+<td>3</td>
<td>11</td>
</tr>

@@ -74,8 +74,8 @@ hide:
</td>
<td>2015-09-11</td>
<td>Proceedings of the National Academy of Sciences</td>
-<td>3181</td>
-<td>63</td>
+<td>3190</td>
+<td>65</td>
</tr>

<tr id="None">
@@ -98,7 +98,7 @@ hide:
</td>
<td>2020-05-05</td>
<td>Nature Communications</td>
-<td>237</td>
+<td>238</td>
<td>12</td>
</tr>

@@ -135,7 +135,7 @@ hide:
<td>2017-12-01</td>
<td>2017 IEEE 7th International Workshop on Computational Advances in Multi-Sensor Adaptive Processing (CAMSAP)</td>
<td>12</td>
-<td>63</td>
+<td>65</td>
</tr>

</tbody>
12 changes: 6 additions & 6 deletions docs/recommendations/0d01d21137a5af9f04e4b16a55a0f732cb8a540b.md
@@ -11,7 +11,7 @@ hide:

<body>
<p>
<i class="footer">This page was last updated on 2024-08-12 06:05:12 UTC</i>
<i class="footer">This page was last updated on 2024-08-19 06:04:58 UTC</i>
</p>

<div class="note info" onclick="startIntro()">
@@ -51,7 +51,7 @@ hide:
<td>2023-10-24</td>
<td>ArXiv</td>
<td>4</td>
-<td>49</td>
+<td>50</td>
</tr>

<tr id="Graph neural networks (GNNs) have been widely applied in multi-variate time-series forecasting (MTSF) tasks because of their capability in capturing the correlations among different time-series. These graph-based learning approaches improve the forecasting performance by discovering and understanding the underlying graph structures, which represent the data correlation. When the explicit prior graph structures are not available, most existing works cannot guarantee the sparsity of the generated graphs that make the overall model computational expensive and less interpretable. In this work, we propose a decoupled training method, which includes a graph generating module and a GNNs forecasting module. First, we use Graphical Lasso (or GraphLASSO) to directly exploit the sparsity pattern from data to build graph structures in both static and time-varying cases. Second, we fit these graph structures and the input data into a Graph Convolutional Recurrent Network (GCRN) to train a forecasting model. The experimental results on three real-world datasets show that our novel approach has competitive performance against existing state-of-the-art forecasting algorithms while providing sparse, meaningful and explainable graph structures and reducing training time by approximately 40%. Our PyTorch implementation is publicly available at https://github.com/HySonLab/GraphLASSO">
@@ -99,7 +99,7 @@ hide:
<td>2021-09-10</td>
<td>ArXiv</td>
<td>12</td>
-<td>45</td>
+<td>46</td>
</tr>

<tr id="Multivariate time-series forecasting is a critical task for many applications, and graph time-series network is widely studied due to its capability to capture the spatial-temporal correlation simultaneously. However, most existing works focus more on learning with the explicit prior graph structure, while ignoring potential information from the implicit graph structure, yielding incomplete structure modeling. Some recent works attempts to learn the intrinsic or implicit graph structure directly, while lacking a way to combine explicit prior structure with implicit structure together. In this paper, we propose Regularized Graph Structure Learning (RGSL) model to incorporate both explicit prior structure and implicit structure together, and learn the forecasting deep networks along with the graph structure. RGSL consists of two innovative modules. First, we derive an implicit dense similarity matrix through node embedding, and learn the sparse graph structure using the Regularized Graph Generation (RGG) based on the Gumbel Softmax trick. Second, we propose a Laplacian Matrix Mixed-up Module (LM3) to fuse the explicit graph and implicit graph together. We conduct experiments on three real-word datasets. Results show that the proposed RGSL model outperforms existing graph forecasting algorithms with a notable margin, while learning meaningful graph structure simultaneously. Our code and models are made publicly available at https://github.com/alipay/RGSL.git.">
@@ -111,7 +111,7 @@ hide:
<td>2022-07-01</td>
<td>ArXiv, DBLP</td>
<td>37</td>
-<td>33</td>
+<td>34</td>
</tr>

<tr id="Time series forecasting is an extensively studied subject in statistics, economics, and computer science. Exploration of the correlation and causation among the variables in a multivariate time series shows promise in enhancing the performance of a time series model. When using deep neural networks as forecasting models, we hypothesize that exploiting the pairwise information among multiple (multivariate) time series also improves their forecast. If an explicit graph structure is known, graph neural networks (GNNs) have been demonstrated as powerful tools to exploit the structure. In this work, we propose learning the structure simultaneously with the GNN if the graph is unknown. We cast the problem as learning a probabilistic graph model through optimizing the mean performance over the graph distribution. The distribution is parameterized by a neural network so that discrete graphs can be sampled differentiably through reparameterization. Empirical evaluations show that our method is simpler, more efficient, and better performing than a recently proposed bilevel learning approach for graph structure learning, as well as a broad array of forecasting models, either deep or non-deep learning based, and graph or non-graph based.">
@@ -122,8 +122,8 @@ hide:
</td>
<td>2021-01-18</td>
<td>ArXiv</td>
-<td>175</td>
-<td>36</td>
+<td>177</td>
+<td>37</td>
</tr>

<tr id="Multi-variate time series forecasting is an important problem with a wide range of applications. Recent works model the relations between time-series as graphs and have shown that propagating information over the relation graph can improve time series forecasting. However, in many cases, relational information is not available or is noisy and reliable. Moreover, most works ignore the underlying uncertainty of time-series both for structure learning and deriving the forecasts resulting in the structure not capturing the uncertainty resulting in forecast distributions with poor uncertainty estimates. We tackle this challenge and introduce STOIC, that leverages stochastic correlations between time-series to learn underlying structure between time-series and to provide well-calibrated and accurate forecasts. Over a wide-range of benchmark datasets STOIC provides around 16% more accurate and 14% better-calibrated forecasts. STOIC also shows better adaptation to noise in data during inference and captures important and useful relational information in various benchmarks.">
16 changes: 8 additions & 8 deletions docs/recommendations/123acfbccca0460171b6b06a4012dbb991cde55b.md
@@ -11,7 +11,7 @@ hide:

<body>
<p>
<i class="footer">This page was last updated on 2024-08-12 06:05:13 UTC</i>
<i class="footer">This page was last updated on 2024-08-19 06:04:59 UTC</i>
</p>

<div class="note info" onclick="startIntro()">
@@ -50,7 +50,7 @@ hide:
</td>
<td>2024-03-12</td>
<td>ArXiv</td>
-<td>24</td>
+<td>27</td>
<td>18</td>
</tr>

@@ -63,7 +63,7 @@ hide:
<td>2024-02-04</td>
<td>ArXiv</td>
<td>3</td>
-<td>65</td>
+<td>66</td>
</tr>

<tr id="Large language models (LLMs) are being applied to time series tasks, particularly time series forecasting. However, are language models actually useful for time series? After a series of ablation studies on three recent and popular LLM-based time series forecasting methods, we find that removing the LLM component or replacing it with a basic attention layer does not degrade the forecasting results -- in most cases the results even improved. We also find that despite their significant computational cost, pretrained LLMs do no better than models trained from scratch, do not represent the sequential dependencies in time series, and do not assist in few-shot settings. Additionally, we explore time series encoders and reveal that patching and attention structures perform similarly to state-of-the-art LLM-based forecasters.">
@@ -86,8 +86,8 @@ hide:
</td>
<td>2024-02-04</td>
<td>ArXiv</td>
-<td>6</td>
-<td>65</td>
+<td>5</td>
+<td>66</td>
</tr>

<tr id="In this paper, we introduce TimeGPT, the first foundation model for time series, capable of generating accurate predictions for diverse datasets not seen during training. We evaluate our pre-trained model against established statistical, machine learning, and deep learning methods, demonstrating that TimeGPT zero-shot inference excels in performance, efficiency, and simplicity. Our study provides compelling evidence that insights from other domains of artificial intelligence can be effectively applied to time series analysis. We conclude that large-scale time series models offer an exciting opportunity to democratize access to precise predictions and reduce uncertainty by leveraging the capabilities of contemporary advancements in deep learning.">
@@ -98,7 +98,7 @@ hide:
</td>
<td>2023-10-05</td>
<td>ArXiv</td>
-<td>39</td>
+<td>40</td>
<td>1</td>
</tr>

@@ -122,7 +122,7 @@ hide:
</td>
<td>2023-10-03</td>
<td>ArXiv</td>
-<td>113</td>
+<td>120</td>
<td>9</td>
</tr>

@@ -146,7 +146,7 @@ hide:
</td>
<td>2023-06-19</td>
<td>ArXiv</td>
-<td>34</td>
+<td>35</td>
<td>8</td>
</tr>

@@ -11,7 +11,7 @@ hide:

<body>
<p>
<i class="footer">This page was last updated on 2024-08-12 06:05:17 UTC</i>
<i class="footer">This page was last updated on 2024-08-19 06:05:02 UTC</i>
</p>

<div class="note info" onclick="startIntro()">
@@ -74,7 +74,7 @@ hide:
</td>
<td>2022-09-20</td>
<td>IEEE Transactions on Knowledge and Data Engineering</td>
-<td>56</td>
+<td>58</td>
<td>17</td>
</tr>

@@ -146,7 +146,7 @@ hide:
</td>
<td>2023-10-14</td>
<td>ArXiv</td>
-<td>45</td>
+<td>48</td>
<td>14</td>
</tr>

@@ -159,7 +159,7 @@ hide:
<td>2024-02-04</td>
<td>ArXiv</td>
<td>3</td>
-<td>65</td>
+<td>66</td>
</tr>

</tbody>
Expand Down
