Commit

update data
actions-user committed Jul 15, 2024
1 parent 48a11a2 commit 3ecc105
Showing 162 changed files with 11,233 additions and 10,835 deletions.
18 changes: 3 additions & 15 deletions docs/recommendations/06a0ba437d41a7c82c08a9636a4438c1b5031378.md
@@ -11,7 +11,7 @@ hide:

<body>
<p>
-<i class="footer">This page was last updated on 2024-07-08 06:05:17 UTC</i>
+<i class="footer">This page was last updated on 2024-07-15 06:05:08 UTC</i>
</p>

<div class="note info" onclick="startIntro()">
@@ -110,7 +110,7 @@ hide:
</td>
<td>2022-06-01</td>
<td>Nonlinear Dynamics</td>
-<td>25</td>
+<td>26</td>
<td>90</td>
</tr>

@@ -122,7 +122,7 @@ hide:
</td>
<td>2009-12-01</td>
<td>Inverse Problems</td>
-<td>134</td>
+<td>135</td>
<td>48</td>
</tr>

@@ -138,18 +138,6 @@ hide:
<td>47</td>
</tr>

-<tr id="Nonlinear dynamic models are widely used for characterizing processes that govern complex biological pathway systems. Over the past decade, validation and further development of these models became possible due to data collected via high-throughput experiments using methods from molecular biology. While these data are very beneficial, they are typically incomplete and noisy, which renders the inference of parameter values for complex dynamic models challenging. Fortunately, many biological systems have embedded linear mathematical features, which may be exploited, thereby improving fits and leading to better convergence of optimization algorithms. In this paper, we explore options of inference for dynamic models using a novel method of separable nonlinear least-squares optimization and compare its performance to the traditional nonlinear least-squares method. The numerical results from extensive simulations suggest that the proposed approach is at least as accurate as the traditional nonlinear least-squares, but usually superior, while also enjoying a substantial reduction in computational time.">
-<td id="tag"><i class="material-icons">visibility_off</i></td>
-<td><a href="https://www.semanticscholar.org/paper/90fe465be98bcb661c86f4426aa37bc78fe75c9a" target='_blank'>Separable Nonlinear Least-Squares Parameter Estimation for Complex Dynamic Systems</a></td>
-<td>
-I. Dattner, Harold J. Ship, E. Voit
-</td>
-<td>2019-08-10</td>
-<td>Complexity</td>
-<td>8</td>
-<td>46</td>
-</tr>
-
</tbody>
<tfoot>
<tr>
@@ -11,7 +11,7 @@ hide:

<body>
<p>
-<i class="footer">This page was last updated on 2024-07-08 06:05:22 UTC</i>
+<i class="footer">This page was last updated on 2024-07-15 06:05:11 UTC</i>
</p>

<div class="note info" onclick="startIntro()">
@@ -74,7 +74,7 @@ hide:
</td>
<td>2015-09-11</td>
<td>Proceedings of the National Academy of Sciences</td>
-<td>3090</td>
+<td>3108</td>
<td>63</td>
</tr>

@@ -98,7 +98,7 @@ hide:
</td>
<td>2020-05-05</td>
<td>Nature Communications</td>
-<td>227</td>
+<td>229</td>
<td>12</td>
</tr>

10 changes: 5 additions & 5 deletions docs/recommendations/0ce6f9c3d9dccdc5f7567646be7a7d4c6415576b.md
@@ -11,7 +11,7 @@ hide:

<body>
<p>
-<i class="footer">This page was last updated on 2024-07-08 06:07:50 UTC</i>
+<i class="footer">This page was last updated on 2024-07-15 06:06:43 UTC</i>
</p>

<div class="note info" onclick="startIntro()">
@@ -50,7 +50,7 @@ hide:
</td>
<td>2019-03-29</td>
<td>Proceedings of the National Academy of Sciences of the United States of America</td>
-<td>585</td>
+<td>588</td>
<td>63</td>
</tr>

@@ -86,7 +86,7 @@ hide:
</td>
<td>2018-01-20</td>
<td>J. Mach. Learn. Res.</td>
-<td>653</td>
+<td>657</td>
<td>24</td>
</tr>

@@ -97,7 +97,7 @@ hide:
Y. Liu, Han-Juan Shao, Bing Bai
</td>
<td>2023-08-03</td>
-<td>DBLP, ArXiv</td>
+<td>ArXiv, DBLP</td>
<td>1</td>
<td>1</td>
</tr>
@@ -135,7 +135,7 @@ hide:
<td>2023-05-18</td>
<td>ArXiv</td>
<td>10</td>
-<td>126</td>
+<td>127</td>
</tr>

<tr id="Neural Ordinary Differential Equations (Neural ODEs) is a class of deep neural network models that interpret the hidden state dynamics of neural networks as an ordinary differential equation, thereby capable of capturing system dynamics in a continuous time framework. In this work, I integrate symmetry regularization into Neural ODEs. In particular, I use continuous Lie symmetry of ODEs and PDEs associated with the model to derive conservation laws and add them to the loss function, making it physics-informed. This incorporation of inherent structural properties into the loss function could significantly improve robustness and stability of the model during training. To illustrate this method, I employ a toy model that utilizes a cosine rate of change in the hidden state, showcasing the process of identifying Lie symmetries, deriving conservation laws, and constructing a new loss function.">
20 changes: 16 additions & 4 deletions docs/recommendations/0d01d21137a5af9f04e4b16a55a0f732cb8a540b.md
@@ -11,7 +11,7 @@ hide:

<body>
<p>
-<i class="footer">This page was last updated on 2024-07-08 06:04:51 UTC</i>
+<i class="footer">This page was last updated on 2024-07-15 06:04:49 UTC</i>
</p>

<div class="note info" onclick="startIntro()">
@@ -109,8 +109,8 @@ hide:
Hongyuan Yu, Ting Li, Weichen Yu, Jianguo Li, Yan Huang, Liang Wang, A. Liu
</td>
<td>2022-07-01</td>
-<td>DBLP, ArXiv</td>
-<td>33</td>
+<td>ArXiv, DBLP</td>
+<td>35</td>
<td>33</td>
</tr>

@@ -122,10 +122,22 @@ hide:
</td>
<td>2021-01-18</td>
<td>ArXiv</td>
-<td>167</td>
+<td>169</td>
<td>36</td>
</tr>

+<tr id="Multi-variate time series forecasting is an important problem with a wide range of applications. Recent works model the relations between time-series as graphs and have shown that propagating information over the relation graph can improve time series forecasting. However, in many cases, relational information is not available or is noisy and reliable. Moreover, most works ignore the underlying uncertainty of time-series both for structure learning and deriving the forecasts resulting in the structure not capturing the uncertainty resulting in forecast distributions with poor uncertainty estimates. We tackle this challenge and introduce STOIC, that leverages stochastic correlations between time-series to learn underlying structure between time-series and to provide well-calibrated and accurate forecasts. Over a wide-range of benchmark datasets STOIC provides around 16% more accurate and 14% better-calibrated forecasts. STOIC also shows better adaptation to noise in data during inference and captures important and useful relational information in various benchmarks.">
+<td id="tag"><i class="material-icons">visibility_off</i></td>
+<td><a href="https://www.semanticscholar.org/paper/7d4b412399f89c9cd66cffc4ab811b74172dcc63" target='_blank'>Learning Graph Structures and Uncertainty for Accurate and Calibrated Time-series Forecasting</a></td>
+<td>
+Harshavardhan Kamarthi, Lingkai Kong, Alexander Rodríguez, Chao Zhang, B. A. Prakash
+</td>
+<td>2024-07-02</td>
+<td>ArXiv</td>
+<td>0</td>
+<td>9</td>
+</tr>
+
</tbody>
<tfoot>
<tr>
@@ -11,7 +11,7 @@ hide:

<body>
<p>
-<i class="footer">This page was last updated on 2024-07-08 06:07:50 UTC</i>
+<i class="footer">This page was last updated on 2024-07-15 06:06:45 UTC</i>
</p>

<div class="note info" onclick="startIntro()">
12 changes: 6 additions & 6 deletions docs/recommendations/123acfbccca0460171b6b06a4012dbb991cde55b.md
@@ -11,7 +11,7 @@ hide:

<body>
<p>
-<i class="footer">This page was last updated on 2024-07-08 06:04:52 UTC</i>
+<i class="footer">This page was last updated on 2024-07-15 06:04:51 UTC</i>
</p>

<div class="note info" onclick="startIntro()">
@@ -50,7 +50,7 @@ hide:
</td>
<td>2024-03-12</td>
<td>ArXiv</td>
-<td>20</td>
+<td>21</td>
<td>18</td>
</tr>

@@ -98,7 +98,7 @@ hide:
</td>
<td>2023-10-05</td>
<td>ArXiv</td>
-<td>35</td>
+<td>36</td>
<td>1</td>
</tr>

@@ -122,7 +122,7 @@ hide:
</td>
<td>2023-10-03</td>
<td>ArXiv</td>
-<td>100</td>
+<td>103</td>
<td>9</td>
</tr>

@@ -146,7 +146,7 @@ hide:
</td>
<td>2023-06-19</td>
<td>ArXiv</td>
-<td>29</td>
+<td>30</td>
<td>8</td>
</tr>

@@ -158,7 +158,7 @@ hide:
</td>
<td>2023-10-12</td>
<td>ArXiv</td>
-<td>11</td>
+<td>12</td>
<td>40</td>
</tr>

10 changes: 5 additions & 5 deletions docs/recommendations/16f01c1b3ddd0b2abd5ddfe4fdb3f74767607277.md
@@ -11,7 +11,7 @@ hide:

<body>
<p>
-<i class="footer">This page was last updated on 2024-07-08 06:04:54 UTC</i>
+<i class="footer">This page was last updated on 2024-07-15 06:04:53 UTC</i>
</p>

<div class="note info" onclick="startIntro()">
@@ -74,7 +74,7 @@ hide:
</td>
<td>2022-09-20</td>
<td>IEEE Transactions on Knowledge and Data Engineering</td>
-<td>52</td>
+<td>53</td>
<td>17</td>
</tr>

@@ -87,7 +87,7 @@ hide:
<td>2024-06-20</td>
<td>ArXiv</td>
<td>0</td>
-<td>21</td>
+<td>22</td>
</tr>

<tr id="Time series data are ubiquitous across various domains, making time series analysis critically important. Traditional time series models are task-specific, featuring singular functionality and limited generalization capacity. Recently, large language foundation models have unveiled their remarkable capabilities for cross-task transferability, zero-shot/few-shot learning, and decision-making explainability. This success has sparked interest in the exploration of foundation models to solve multiple time series challenges simultaneously. There are two main research lines, namely pre-training foundation models from scratch for time series and adapting large language foundation models for time series. They both contribute to the development of a unified model that is highly generalizable, versatile, and comprehensible for time series analysis. This survey offers a 3E analytical framework for comprehensive examination of related research. Specifically, we examine existing works from three dimensions, namely Effectiveness, Efficiency and Explainability. In each dimension, we focus on discussing how related works devise tailored solution by considering unique challenges in the realm of time series. Furthermore, we provide a domain taxonomy to help followers keep up with the domain-specific advancements. In addition, we introduce extensive resources to facilitate the field's development, including datasets, open-source, time series libraries. A GitHub repository is also maintained for resource updates (https://github.com/start2020/Awesome-TimeSeries-LLM-FM).">
@@ -122,7 +122,7 @@ hide:
</td>
<td>2023-08-16</td>
<td>ArXiv</td>
-<td>14</td>
+<td>15</td>
<td>2</td>
</tr>

@@ -146,7 +146,7 @@ hide:
</td>
<td>2023-10-14</td>
<td>ArXiv</td>
-<td>38</td>
+<td>41</td>
<td>14</td>
</tr>

@@ -11,7 +11,7 @@ hide:

<body>
<p>
-<i class="footer">This page was last updated on 2024-07-08 06:04:52 UTC</i>
+<i class="footer">This page was last updated on 2024-07-15 06:04:51 UTC</i>
</p>

<div class="note info" onclick="startIntro()">
@@ -122,7 +122,7 @@ hide:
</td>
<td>2022-10-08</td>
<td>ArXiv</td>
-<td>50</td>
+<td>51</td>
<td>17</td>
</tr>

@@ -11,7 +11,7 @@ hide:

<body>
<p>
-<i class="footer">This page was last updated on 2024-07-08 06:06:44 UTC</i>
+<i class="footer">This page was last updated on 2024-07-15 06:05:49 UTC</i>
</p>

<div class="note info" onclick="startIntro()">
@@ -122,7 +122,7 @@ hide:
</td>
<td>2020-06-19</td>
<td>ArXiv</td>
-<td>386</td>
+<td>385</td>
<td>104</td>
</tr>

10 changes: 5 additions & 5 deletions docs/recommendations/23c7b93a379c26c3738921282771e1a545538703.md
@@ -11,7 +11,7 @@ hide:

<body>
<p>
-<i class="footer">This page was last updated on 2024-07-08 06:07:24 UTC</i>
+<i class="footer">This page was last updated on 2024-07-15 06:06:26 UTC</i>
</p>

<div class="note info" onclick="startIntro()">
@@ -51,7 +51,7 @@ hide:
<td>2024-02-16</td>
<td>ArXiv</td>
<td>1</td>
-<td>21</td>
+<td>22</td>
</tr>

<tr id="We present an open-source Python framework for NeuroEvolution Optimization with Reinforcement Learning (NEORL) developed at the Massachusetts Institute of Technology. NEORL offers a global optimization interface of state-of-the-art algorithms in the field of evolutionary computation, neural networks through reinforcement learning, and hybrid neuroevolution algorithms. NEORL features diverse set of algorithms, user-friendly interface, parallel computing support, automatic hyperparameter tuning, detailed documentation, and demonstration of applications in mathematical and real-world engineering optimization. NEORL encompasses various optimization problems from combinatorial, continuous, mixed discrete/continuous, to high-dimensional, expensive, and constrained engineering optimization. NEORL is tested in variety of engineering applications relevant to low carbon energy research in addressing solutions to climate change. The examples include nuclear reactor control and fuel cell power production. The results demonstrate NEORL competitiveness against other algorithms and optimization frameworks in the literature, and a potential tool to solve large-scale optimization problems. More examples and benchmarking of NEORL can be found here: https://neorl.readthedocs.io/en/latest/index.html">
@@ -63,7 +63,7 @@ hide:
<td>2021-12-01</td>
<td>ArXiv</td>
<td>3</td>
-<td>21</td>
+<td>22</td>
</tr>

<tr id="Significance The scientific and engineering field has long sought an optimization method that is both efficient and accurate. While combining evolutionary algorithms with deep-learning methods offers a viable solution for complex problems, the simple combination does not achieve accurate and efficient optimization due to the nature of evolutionary principles, even with well-trained deep-learning models. We introduce a physics-supervised deep-learning optimization (PSDLO) algorithm that significantly improves convergence speed while maintaining optimization accuracy. PSDLO’s mechanism and demonstration are clearly explained and verified, presenting an ideal optimization method for various science and engineering applications.">
@@ -110,7 +110,7 @@ hide:
</td>
<td>2023-11-27</td>
<td>The journal of physical chemistry. B</td>
-<td>2</td>
+<td>3</td>
<td>29</td>
</tr>

@@ -134,7 +134,7 @@ hide:
</td>
<td>2018-12-07</td>
<td>Astrodynamics</td>
-<td>176</td>
+<td>177</td>
<td>36</td>
</tr>

