Commit

update data

actions-user committed May 27, 2024
1 parent 362711b commit 5f97153
Showing 162 changed files with 11,162 additions and 11,507 deletions.
@@ -11,7 +11,7 @@ hide:

<body>
<p>
- <i class="footer">This page was last updated on 2024-05-21 05:05:17 UTC</i>
+ <i class="footer">This page was last updated on 2024-05-27 07:03:17 UTC</i>
</p>

<div class="note info" onclick="startIntro()">
@@ -51,7 +51,7 @@ hide:
<td>2023-03-15</td>
<td>PLOS Computational Biology</td>
<td>1</td>
- <td>54</td>
+ <td>53</td>
</tr>

<tr id="High-throughput data acquisition in synthetic biology leads to an abundance of data that need to be processed and aggregated into useful biological models. Building dynamical models based on this wealth of data is of paramount importance to understand and optimize designs of synthetic biology constructs. However, building models manually for each data set is inconvenient and might become infeasible for highly complex synthetic systems. In this paper, we present state-of-the-art system identification techniques and combine them with chemical reaction network theory (CRNT) to generate dynamic models automatically. On the system identification side, Sparse Bayesian Learning offers methods to learn from data the sparsest set of dictionary functions necessary to capture the dynamics of the system into ODE models; on the CRNT side, building on such sparse ODE models, all possible network structures within a given parameter uncertainty region can be computed. Additionally, the system identification process can be complemented with constraints on the parameters to, for example, enforce stability or non-negativity-thus offering relevant physical constraints over the possible network structures. In this way, the wealth of data can be translated into biologically relevant network structures, which then steers the data acquisition, thereby providing a vital step for closed-loop system identification.">
@@ -110,7 +110,7 @@ hide:
</td>
<td>2022-06-01</td>
<td>Nonlinear Dynamics</td>
- <td>23</td>
+ <td>24</td>
<td>90</td>
</tr>

10 changes: 5 additions & 5 deletions docs/recommendations/0acd117521ef5aafb09fed02ab415523b330b058.md
@@ -11,7 +11,7 @@ hide:

<body>
<p>
- <i class="footer">This page was last updated on 2024-05-21 05:05:20 UTC</i>
+ <i class="footer">This page was last updated on 2024-05-27 07:03:21 UTC</i>
</p>

<div class="note info" onclick="startIntro()">
@@ -74,8 +74,8 @@ hide:
</td>
<td>2015-09-11</td>
<td>Proceedings of the National Academy of Sciences</td>
- <td>2986</td>
- <td>61</td>
+ <td>2993</td>
+ <td>62</td>
</tr>

<tr id="None">
@@ -98,7 +98,7 @@ hide:
</td>
<td>2020-05-05</td>
<td>Nature Communications</td>
- <td>205</td>
+ <td>211</td>
<td>12</td>
</tr>

@@ -135,7 +135,7 @@ hide:
<td>2017-12-01</td>
<td>2017 IEEE 7th International Workshop on Computational Advances in Multi-Sensor Adaptive Processing (CAMSAP)</td>
<td>12</td>
- <td>61</td>
+ <td>62</td>
</tr>

</tbody>
@@ -11,7 +11,7 @@ hide:

<body>
<p>
- <i class="footer">This page was last updated on 2024-05-21 05:06:38 UTC</i>
+ <i class="footer">This page was last updated on 2024-05-27 07:04:34 UTC</i>
</p>

<div class="note info" onclick="startIntro()">
@@ -51,7 +51,7 @@ hide:
<td>2019-03-29</td>
<td>Proceedings of the National Academy of Sciences of the United States of America</td>
<td>562</td>
- <td>61</td>
+ <td>62</td>
</tr>

<tr id="Machine learning models have emerged as powerful tools in physics and engineering. In this work, we use an autoencoder with latent space penalization to discover approximate finite-dimensional manifolds of two canonical partial differential equations. We test this method on the Kuramoto-Sivashinsky (K-S), Korteweg-de Vries (KdV), and damped KdV equations. We show that the resulting optimal latent space of the K-S equation is consistent with the dimension of the inertial manifold. We then uncover a nonlinear basis representing the manifold of the latent space for the K-S equation. The results for the KdV equation show that it is more difficult to recover a reduced latent space, which is consistent with the truly infinite-dimensional dynamics of the KdV equation. In the case of the damped KdV equation, we find that the number of active dimensions decreases with increasing damping coefficient.">
@@ -63,7 +63,7 @@ hide:
<td>2020-11-14</td>
<td>Physical review. E</td>
<td>2</td>
- <td>32</td>
+ <td>33</td>
</tr>

<tr id="We develop data-driven methods incorporating geometric and topological information to learn parsimonious representations of nonlinear dynamics from observations. We develop approaches for learning nonlinear state space models of the dynamics for general manifold latent spaces using training strategies related to Variational Autoencoders (VAEs). Our methods are referred to as Geometric Dynamic (GD) Variational Autoencoders (GD-VAEs). We learn encoders and decoders for the system states and evolution based on deep neural network architectures that include general Multilayer Perceptrons (MLPs), Convolutional Neural Networks (CNNs), and Transpose CNNs (T-CNNs). Motivated by problems arising in parameterized PDEs and physics, we investigate the performance of our methods on tasks for learning low dimensional representations of the nonlinear Burgers equations, constrained mechanical systems, and spatial fields of reaction-diffusion systems. GD-VAEs provide methods for obtaining representations for use in diverse learning tasks involving dynamics.">
@@ -86,7 +86,7 @@ hide:
</td>
<td>2018-01-20</td>
<td>J. Mach. Learn. Res.</td>
- <td>642</td>
+ <td>645</td>
<td>24</td>
</tr>

@@ -11,7 +11,7 @@ hide:

<body>
<p>
- <i class="footer">This page was last updated on 2024-05-21 05:04:55 UTC</i>
+ <i class="footer">This page was last updated on 2024-05-27 07:02:54 UTC</i>
</p>

<div class="note info" onclick="startIntro()">
@@ -110,7 +110,7 @@ hide:
</td>
<td>2022-07-01</td>
<td>DBLP, ArXiv</td>
- <td>27</td>
+ <td>29</td>
<td>33</td>
</tr>

@@ -122,7 +122,7 @@ hide:
</td>
<td>2021-01-18</td>
<td>ArXiv</td>
- <td>150</td>
+ <td>152</td>
<td>36</td>
</tr>

@@ -11,7 +11,7 @@ hide:

<body>
<p>
- <i class="footer">This page was last updated on 2024-05-21 05:06:39 UTC</i>
+ <i class="footer">This page was last updated on 2024-05-27 07:04:35 UTC</i>
</p>

<div class="note info" onclick="startIntro()">
@@ -134,7 +134,7 @@ hide:
</td>
<td>2020-03-04</td>
<td>ArXiv</td>
- <td>98</td>
+ <td>100</td>
<td>63</td>
</tr>

18 changes: 9 additions & 9 deletions docs/recommendations/123acfbccca0460171b6b06a4012dbb991cde55b.md
@@ -11,7 +11,7 @@ hide:

<body>
<p>
- <i class="footer">This page was last updated on 2024-05-21 05:04:57 UTC</i>
+ <i class="footer">This page was last updated on 2024-05-27 07:02:56 UTC</i>
</p>

<div class="note info" onclick="startIntro()">
@@ -50,11 +50,11 @@ hide:
</td>
<td>2024-03-12</td>
<td>ArXiv</td>
- <td>7</td>
+ <td>10</td>
<td>18</td>
</tr>

<tr id="Foundation models of time series have not been fully developed due to the limited availability of large-scale time series and the underexploration of scalable pre-training. Based on the similar sequential structure of time series and natural language, increasing research demonstrates the feasibility of leveraging large language models (LLM) for time series. Nevertheless, prior methods may overlook the consistency in aligning time series and natural language, resulting in insufficient utilization of the LLM potentials. To fully exploit the general-purpose token transitions learned from language modeling, we propose AutoTimes to repurpose LLMs as Autoregressive Time series forecasters, which is consistent with the acquisition and utilization of LLMs without updating the parameters. The consequent forecasters can handle flexible series lengths and achieve competitive performance as prevalent models. Further, we present token-wise prompting that utilizes corresponding timestamps to make our method applicable to multimodal scenarios. Analysis demonstrates our forecasters inherit zero-shot and in-context learning capabilities of LLMs. Empirically, AutoTimes exhibits notable method generality and achieves enhanced performance by basing on larger LLMs, additional texts, or time series as instructions.">
<tr id="Foundation models of time series have not been fully developed due to the limited availability of time series corpora and the underexploration of scalable pre-training. Based on the similar sequential formulation of time series and natural language, increasing research demonstrates the feasibility of leveraging large language models (LLM) for time series. Nevertheless, the inherent autoregressive property and decoder-only architecture of LLMs have not been fully considered, resulting in insufficient utilization of LLM abilities. To further exploit the general-purpose token transition and multi-step generation ability of large language models, we propose AutoTimes to repurpose LLMs as autoregressive time series forecasters, which independently projects time series segments into the embedding space and autoregressively generates future predictions with arbitrary lengths. Compatible with any decoder-only LLMs, the consequent forecaster exhibits the flexibility of the lookback length and scalability of the LLM size. Further, we formulate time series as prompts, extending the context for prediction beyond the lookback window, termed in-context forecasting. By adopting textual timestamps as position embeddings, AutoTimes integrates multimodality for multivariate scenarios. Empirically, AutoTimes achieves state-of-the-art with 0.1% trainable parameters and over 5 times training/inference speedup compared to advanced LLM-based forecasters.">
<td id="tag"><i class="material-icons">visibility_off</i></td>
<td><a href="https://www.semanticscholar.org/paper/d488445bb2bf6719bc48a4d39bd906116274abda" target='_blank'>AutoTimes: Autoregressive Time Series Forecasters via Large Language Models</a></td>
<td>
@@ -68,13 +68,13 @@ hide:

<tr id="In this paper, we introduce TimeGPT, the first foundation model for time series, capable of generating accurate predictions for diverse datasets not seen during training. We evaluate our pre-trained model against established statistical, machine learning, and deep learning methods, demonstrating that TimeGPT zero-shot inference excels in performance, efficiency, and simplicity. Our study provides compelling evidence that insights from other domains of artificial intelligence can be effectively applied to time series analysis. We conclude that large-scale time series models offer an exciting opportunity to democratize access to precise predictions and reduce uncertainty by leveraging the capabilities of contemporary advancements in deep learning.">
<td id="tag"><i class="material-icons">visibility_off</i></td>
- <td><a href="https://www.semanticscholar.org/paper/975eeae498797ad47a2034231d2b87c348dfc1b5" target='_blank'>TimeGPT-1</a></td>
+ <td><a href="https://www.semanticscholar.org/paper/3282fd7ad0f45e2e3d7af7072023869e6f5399a3" target='_blank'>TimeGPT-1</a></td>
<td>
- Azul Garza, Max Mergenthaler-Canseco
+ Azul Garza, Cristian Challu, Max Mergenthaler-Canseco
</td>
<td>2023-10-05</td>
<td>ArXiv</td>
- <td>21</td>
+ <td>23</td>
<td>1</td>
</tr>

@@ -86,7 +86,7 @@ hide:
</td>
<td>2024-02-04</td>
<td>ArXiv</td>
- <td>1</td>
+ <td>2</td>
<td>65</td>
</tr>

@@ -98,7 +98,7 @@ hide:
</td>
<td>2023-10-03</td>
<td>ArXiv</td>
- <td>61</td>
+ <td>66</td>
<td>8</td>
</tr>

@@ -146,7 +146,7 @@ hide:
</td>
<td>2023-10-14</td>
<td>ArXiv</td>
- <td>17</td>
+ <td>21</td>
<td>13</td>
</tr>

10 changes: 5 additions & 5 deletions docs/recommendations/16f01c1b3ddd0b2abd5ddfe4fdb3f74767607277.md
@@ -11,7 +11,7 @@ hide:

<body>
<p>
- <i class="footer">This page was last updated on 2024-05-21 05:05:02 UTC</i>
+ <i class="footer">This page was last updated on 2024-05-27 07:03:00 UTC</i>
</p>

<div class="note info" onclick="startIntro()">
@@ -42,7 +42,7 @@ hide:
</thead>
<tbody>

<tr id="Foundation models of time series have not been fully developed due to the limited availability of large-scale time series and the underexploration of scalable pre-training. Based on the similar sequential structure of time series and natural language, increasing research demonstrates the feasibility of leveraging large language models (LLM) for time series. Nevertheless, prior methods may overlook the consistency in aligning time series and natural language, resulting in insufficient utilization of the LLM potentials. To fully exploit the general-purpose token transitions learned from language modeling, we propose AutoTimes to repurpose LLMs as Autoregressive Time series forecasters, which is consistent with the acquisition and utilization of LLMs without updating the parameters. The consequent forecasters can handle flexible series lengths and achieve competitive performance as prevalent models. Further, we present token-wise prompting that utilizes corresponding timestamps to make our method applicable to multimodal scenarios. Analysis demonstrates our forecasters inherit zero-shot and in-context learning capabilities of LLMs. Empirically, AutoTimes exhibits notable method generality and achieves enhanced performance by basing on larger LLMs, additional texts, or time series as instructions.">
<tr id="Foundation models of time series have not been fully developed due to the limited availability of time series corpora and the underexploration of scalable pre-training. Based on the similar sequential formulation of time series and natural language, increasing research demonstrates the feasibility of leveraging large language models (LLM) for time series. Nevertheless, the inherent autoregressive property and decoder-only architecture of LLMs have not been fully considered, resulting in insufficient utilization of LLM abilities. To further exploit the general-purpose token transition and multi-step generation ability of large language models, we propose AutoTimes to repurpose LLMs as autoregressive time series forecasters, which independently projects time series segments into the embedding space and autoregressively generates future predictions with arbitrary lengths. Compatible with any decoder-only LLMs, the consequent forecaster exhibits the flexibility of the lookback length and scalability of the LLM size. Further, we formulate time series as prompts, extending the context for prediction beyond the lookback window, termed in-context forecasting. By adopting textual timestamps as position embeddings, AutoTimes integrates multimodality for multivariate scenarios. Empirically, AutoTimes achieves state-of-the-art with 0.1% trainable parameters and over 5 times training/inference speedup compared to advanced LLM-based forecasters.">
<td id="tag"><i class="material-icons">visibility_off</i></td>
<td><a href="https://www.semanticscholar.org/paper/d488445bb2bf6719bc48a4d39bd906116274abda" target='_blank'>AutoTimes: Autoregressive Time Series Forecasters via Large Language Models</a></td>
<td>
@@ -74,7 +74,7 @@ hide:
</td>
<td>2022-09-20</td>
<td>IEEE Transactions on Knowledge and Data Engineering</td>
- <td>33</td>
+ <td>36</td>
<td>16</td>
</tr>

@@ -110,7 +110,7 @@ hide:
</td>
<td>2023-10-14</td>
<td>ArXiv</td>
- <td>17</td>
+ <td>21</td>
<td>13</td>
</tr>

@@ -146,7 +146,7 @@ hide:
</td>
<td>2024-02-05</td>
<td>ArXiv</td>
- <td>5</td>
+ <td>7</td>
<td>4</td>
</tr>

@@ -11,7 +11,7 @@ hide:

<body>
<p>
- <i class="footer">This page was last updated on 2024-05-21 05:04:57 UTC</i>
+ <i class="footer">This page was last updated on 2024-05-27 07:02:57 UTC</i>
</p>

<div class="note info" onclick="startIntro()">
@@ -74,7 +74,7 @@ hide:
</td>
<td>2022-10-08</td>
<td>ArXiv</td>
- <td>41</td>
+ <td>43</td>
<td>16</td>
</tr>

@@ -123,7 +123,7 @@ hide:
<td>2023-12-18</td>
<td>ArXiv</td>
<td>1</td>
- <td>19</td>
+ <td>20</td>
</tr>

</tbody>
@@ -11,7 +11,7 @@ hide:

<body>
<p>
- <i class="footer">This page was last updated on 2024-05-21 05:05:49 UTC</i>
+ <i class="footer">This page was last updated on 2024-05-27 07:03:51 UTC</i>
</p>

<div class="note info" onclick="startIntro()">
@@ -86,7 +86,7 @@ hide:
</td>
<td>2021-05-06</td>
<td>Physical review letters</td>
- <td>74</td>
+ <td>75</td>
<td>81</td>
</tr>

@@ -110,7 +110,7 @@ hide:
</td>
<td>2020-06-19</td>
<td>ArXiv</td>
- <td>365</td>
+ <td>367</td>
<td>105</td>
</tr>

@@ -123,7 +123,7 @@ hide:
<td>2021-11-24</td>
<td>ArXiv</td>
<td>12</td>
- <td>9</td>
+ <td>10</td>
</tr>

</tbody>
@@ -11,7 +11,7 @@ hide:

<body>
<p>
- <i class="footer">This page was last updated on 2024-05-21 05:06:17 UTC</i>
+ <i class="footer">This page was last updated on 2024-05-27 07:04:20 UTC</i>
</p>

<div class="note info" onclick="startIntro()">
@@ -122,8 +122,8 @@ hide:
</td>
<td>2023-07-05</td>
<td>AI Mag.</td>
- <td>0</td>
- <td>19</td>
+ <td>1</td>
+ <td>18</td>
</tr>

<tr id="None">
12 changes: 6 additions & 6 deletions docs/recommendations/25903eabbb1830aefa82048212e643eec660de0b.md
@@ -11,7 +11,7 @@ hide:

<body>
<p>
- <i class="footer">This page was last updated on 2024-05-21 05:06:21 UTC</i>
+ <i class="footer">This page was last updated on 2024-05-27 07:04:22 UTC</i>
</p>

<div class="note info" onclick="startIntro()">
@@ -50,7 +50,7 @@ hide:
</td>
<td>2019-02-01</td>
<td>J. Comput. Phys.</td>
- <td>6758</td>
+ <td>6818</td>
<td>125</td>
</tr>

@@ -62,7 +62,7 @@ hide:
</td>
<td>2017-11-28</td>
<td>ArXiv</td>
- <td>719</td>
+ <td>721</td>
<td>125</td>
</tr>

@@ -74,7 +74,7 @@ hide:
</td>
<td>2018-01-20</td>
<td>J. Mach. Learn. Res.</td>
- <td>642</td>
+ <td>645</td>
<td>24</td>
</tr>

@@ -122,7 +122,7 @@ hide:
</td>
<td>2020-05-05</td>
<td>Nature Communications</td>
- <td>205</td>
+ <td>211</td>
<td>12</td>
</tr>

@@ -134,7 +134,7 @@ hide:
</td>
<td>2018-08-15</td>
<td>Proceedings of the National Academy of Sciences of the United States of America</td>
- <td>405</td>
+ <td>408</td>
<td>64</td>
</tr>
