
Commit

update data
actions-user committed Jul 22, 2024
1 parent 3ecc105 commit 53469d7
Showing 162 changed files with 10,281 additions and 10,509 deletions.
@@ -11,7 +11,7 @@ hide:

<body>
<p>
-<i class="footer">This page was last updated on 2024-07-15 06:05:08 UTC</i>
+<i class="footer">This page was last updated on 2024-07-22 06:05:31 UTC</i>
</p>

<div class="note info" onclick="startIntro()">
@@ -50,7 +50,7 @@ hide:
</td>
<td>2023-03-15</td>
<td>PLOS Computational Biology</td>
-<td>1</td>
+<td>2</td>
<td>54</td>
</tr>

@@ -11,7 +11,7 @@ hide:

<body>
<p>
-<i class="footer">This page was last updated on 2024-07-15 06:05:11 UTC</i>
+<i class="footer">This page was last updated on 2024-07-22 06:05:36 UTC</i>
</p>

<div class="note info" onclick="startIntro()">
@@ -50,7 +50,7 @@ hide:
</td>
<td>2023-11-01</td>
<td>Chaos</td>
-<td>1</td>
+<td>2</td>
<td>11</td>
</tr>

@@ -74,7 +74,7 @@ hide:
</td>
<td>2015-09-11</td>
<td>Proceedings of the National Academy of Sciences</td>
-<td>3108</td>
+<td>3119</td>
<td>63</td>
</tr>

@@ -98,7 +98,7 @@ hide:
</td>
<td>2020-05-05</td>
<td>Nature Communications</td>
-<td>229</td>
+<td>232</td>
<td>12</td>
</tr>

10 changes: 5 additions & 5 deletions docs/recommendations/0ce6f9c3d9dccdc5f7567646be7a7d4c6415576b.md
@@ -11,7 +11,7 @@ hide:

<body>
<p>
-<i class="footer">This page was last updated on 2024-07-15 06:06:43 UTC</i>
+<i class="footer">This page was last updated on 2024-07-22 06:07:14 UTC</i>
</p>

<div class="note info" onclick="startIntro()">
@@ -50,7 +50,7 @@ hide:
</td>
<td>2019-03-29</td>
<td>Proceedings of the National Academy of Sciences of the United States of America</td>
-<td>588</td>
+<td>589</td>
<td>63</td>
</tr>

@@ -86,7 +86,7 @@ hide:
</td>
<td>2018-01-20</td>
<td>J. Mach. Learn. Res.</td>
-<td>657</td>
+<td>658</td>
<td>24</td>
</tr>

@@ -138,9 +138,9 @@ hide:
<td>127</td>
</tr>

-<tr id="Neural Ordinary Differential Equations (Neural ODEs) is a class of deep neural network models that interpret the hidden state dynamics of neural networks as an ordinary differential equation, thereby capable of capturing system dynamics in a continuous time framework. In this work, I integrate symmetry regularization into Neural ODEs. In particular, I use continuous Lie symmetry of ODEs and PDEs associated with the model to derive conservation laws and add them to the loss function, making it physics-informed. This incorporation of inherent structural properties into the loss function could significantly improve robustness and stability of the model during training. To illustrate this method, I employ a toy model that utilizes a cosine rate of change in the hidden state, showcasing the process of identifying Lie symmetries, deriving conservation laws, and constructing a new loss function.">
+<tr id="Neural ordinary differential equations (Neural ODEs) is a class of machine learning models that approximate the time derivative of hidden states using a neural network. They are powerful tools for modeling continuous-time dynamical systems, enabling the analysis and prediction of complex temporal behaviors. However, how to improve the model's stability and physical interpretability remains a challenge. This paper introduces new conservation relations in Neural ODEs using Lie symmetries in both the hidden state dynamics and the back propagation dynamics. These conservation laws are then incorporated into the loss function as additional regularization terms, potentially enhancing the physical interpretability and generalizability of the model. To illustrate this method, the paper derives Lie symmetries and conservation laws in a simple Neural ODE designed to monitor charged particles in a sinusoidal electric field. New loss functions are constructed from these conservation relations, demonstrating the applicability symmetry-regularized Neural ODE in typical modeling tasks, such as data-driven discovery of dynamical systems.">
<td id="tag"><i class="material-icons">visibility_off</i></td>
-<td><a href="https://www.semanticscholar.org/paper/c3588c4ada39c8db4738289c4cbc36025ce952f8" target='_blank'>Symmetry-regularized neural ordinary differential equations</a></td>
+<td><a href="https://www.semanticscholar.org/paper/9827f402bc2fc9561f674b5bdfc992fb7dd32e2e" target='_blank'>Symmetry-regularized neural ordinary differential equations</a></td>
<td>
Wenbo Hao
</td>
@@ -11,7 +11,7 @@ hide:

<body>
<p>
-<i class="footer">This page was last updated on 2024-07-15 06:04:49 UTC</i>
+<i class="footer">This page was last updated on 2024-07-22 06:04:59 UTC</i>
</p>

<div class="note info" onclick="startIntro()">
@@ -110,7 +110,7 @@ hide:
</td>
<td>2022-07-01</td>
<td>ArXiv, DBLP</td>
-<td>35</td>
+<td>36</td>
<td>33</td>
</tr>

@@ -122,7 +122,7 @@ hide:
</td>
<td>2021-01-18</td>
<td>ArXiv</td>
-<td>169</td>
+<td>172</td>
<td>36</td>
</tr>

@@ -11,7 +11,7 @@ hide:

<body>
<p>
-<i class="footer">This page was last updated on 2024-07-15 06:06:45 UTC</i>
+<i class="footer">This page was last updated on 2024-07-22 06:07:15 UTC</i>
</p>

<div class="note info" onclick="startIntro()">
@@ -134,7 +134,7 @@ hide:
</td>
<td>2020-03-04</td>
<td>ArXiv</td>
-<td>108</td>
+<td>109</td>
<td>64</td>
</tr>

@@ -11,7 +11,7 @@ hide:

<body>
<p>
-<i class="footer">This page was last updated on 2024-07-15 06:04:51 UTC</i>
+<i class="footer">This page was last updated on 2024-07-22 06:05:01 UTC</i>
</p>

<div class="note info" onclick="startIntro()">
@@ -86,7 +86,7 @@ hide:
</td>
<td>2024-02-04</td>
<td>ArXiv</td>
-<td>7</td>
+<td>6</td>
<td>65</td>
</tr>

@@ -98,7 +98,7 @@ hide:
</td>
<td>2023-10-05</td>
<td>ArXiv</td>
-<td>36</td>
+<td>37</td>
<td>1</td>
</tr>

@@ -122,7 +122,7 @@ hide:
</td>
<td>2023-10-03</td>
<td>ArXiv</td>
-<td>103</td>
+<td>105</td>
<td>9</td>
</tr>

@@ -11,7 +11,7 @@ hide:

<body>
<p>
-<i class="footer">This page was last updated on 2024-07-15 06:04:53 UTC</i>
+<i class="footer">This page was last updated on 2024-07-22 06:05:06 UTC</i>
</p>

<div class="note info" onclick="startIntro()">
10 changes: 5 additions & 5 deletions docs/recommendations/1df04f33a8ef313cc2067147dbb79c3ca7c5c99f.md
@@ -11,7 +11,7 @@ hide:

<body>
<p>
-<i class="footer">This page was last updated on 2024-07-15 06:04:51 UTC</i>
+<i class="footer">This page was last updated on 2024-07-22 06:05:01 UTC</i>
</p>

<div class="note info" onclick="startIntro()">
@@ -62,7 +62,7 @@ hide:
</td>
<td>2024-02-13</td>
<td>ArXiv</td>
-<td>28</td>
+<td>30</td>
<td>8</td>
</tr>

@@ -106,7 +106,7 @@ hide:
<td id="tag"><i class="material-icons">visibility_off</i></td>
<td><a href="https://www.semanticscholar.org/paper/48052c9ebe066b9fcd653897dabf18582ec7e7fb" target='_blank'>A Scalable and Effective Alternative to Graph Transformers</a></td>
<td>
-Kaan Sancak, Zhigang Hua, Jin Fang, Yan Xie, Andrey Malevich, Bo Long, M. F. Balin, Umit V. cCatalyurek
+Kaan Sancak, Zhigang Hua, Jin Fang, Yan Xie, Andrey Malevich, Bo Long, M. F. Balin, Ümit V. Çatalyürek
</td>
<td>2024-06-17</td>
<td>ArXiv</td>
@@ -122,7 +122,7 @@ hide:
</td>
<td>2022-10-08</td>
<td>ArXiv</td>
-<td>51</td>
+<td>54</td>
<td>17</td>
</tr>

@@ -134,7 +134,7 @@ hide:
</td>
<td>2022-06-29</td>
<td>ArXiv</td>
-<td>5</td>
+<td>6</td>
<td>27</td>
</tr>

16 changes: 8 additions & 8 deletions docs/recommendations/2232751169e57a14723bfffb4ab26aa0e0e3839a.md
@@ -11,7 +11,7 @@ hide:

<body>
<p>
-<i class="footer">This page was last updated on 2024-07-15 06:05:49 UTC</i>
+<i class="footer">This page was last updated on 2024-07-22 06:06:26 UTC</i>
</p>

<div class="note info" onclick="startIntro()">
Expand Down Expand Up @@ -86,7 +86,7 @@ hide:
</td>
<td>2021-05-06</td>
<td>Physical review letters</td>
-<td>79</td>
+<td>80</td>
<td>81</td>
</tr>

@@ -114,16 +114,16 @@ hide:
<td>77</td>
</tr>

-<tr id="We develop a general approach to distill symbolic representations of a learned deep model by introducing strong inductive biases. We focus on Graph Neural Networks (GNNs). The technique works as follows: we first encourage sparse latent representations when we train a GNN in a supervised setting, then we apply symbolic regression to components of the learned model to extract explicit physical relations. We find the correct known equations, including force laws and Hamiltonians, can be extracted from the neural network. We then apply our method to a non-trivial cosmology example-a detailed dark matter simulation-and discover a new analytic formula which can predict the concentration of dark matter from the mass distribution of nearby cosmic structures. The symbolic expressions extracted from the GNN using our technique also generalized to out-of-distribution data better than the GNN itself. Our approach offers alternative directions for interpreting neural networks and discovering novel physical principles from the representations they learn.">
+<tr id="The Observation--Hypothesis--Prediction--Experimentation loop paradigm for scientific research has been practiced by researchers for years towards scientific discoveries. However, with data explosion in both mega-scale and milli-scale scientific research, it has been sometimes very difficult to manually analyze the data and propose new hypotheses to drive the cycle for scientific discovery. In this paper, we discuss the role of Explainable AI in scientific discovery process by demonstrating an Explainable AI-based paradigm for science discovery. The key is to use Explainable AI to help derive data or model interpretations, hypotheses, as well as scientific discoveries or insights. We show how computational and data-intensive methodology -- together with experimental and theoretical methodology -- can be seamlessly integrated for scientific research. To demonstrate the AI-based science discovery process, and to pay our respect to some of the greatest minds in human history, we show how Kepler's laws of planetary motion and Newton's law of universal gravitation can be rediscovered by (Explainable) AI based on Tycho Brahe's astronomical observation data, whose works were leading the scientific revolution in the 16-17th century. This work also highlights the important role of Explainable AI (as compared to Blackbox AI) in science discovery to help humans prevent or better prepare for the possible technological singularity that may happen in the future, since science is not only about the know how, but also the know why. Presentation of the work is available at https://slideslive.com/38986142/from-kepler-to-newton-explainable-ai-for-science-discovery.">
<td id="tag"><i class="material-icons">visibility_off</i></td>
-<td><a href="https://www.semanticscholar.org/paper/643ac3ef063c77eb02a3d52637c11fe028bfae28" target='_blank'>Discovering Symbolic Models from Deep Learning with Inductive Biases</a></td>
+<td><a href="https://www.semanticscholar.org/paper/2b1143fbed61617fcc27633dd9452a627edb5c99" target='_blank'>From Kepler to Newton: Explainable AI for Science Discovery</a></td>
<td>
-M. Cranmer, Alvaro Sanchez-Gonzalez, P. Battaglia, Rui Xu, Kyle Cranmer, D. Spergel, S. Ho
+Zelong Li, Jianchao Ji, Yongfeng Zhang
</td>
-<td>2020-06-19</td>
+<td>2021-11-24</td>
<td>ArXiv</td>
-<td>385</td>
+<td>104</td>
-<td>12</td>
+<td>10</td>
</tr>

</tbody>
10 changes: 5 additions & 5 deletions docs/recommendations/23c7b93a379c26c3738921282771e1a545538703.md
@@ -11,7 +11,7 @@ hide:

<body>
<p>
-<i class="footer">This page was last updated on 2024-07-15 06:06:26 UTC</i>
+<i class="footer">This page was last updated on 2024-07-22 06:06:58 UTC</i>
</p>

<div class="note info" onclick="startIntro()">
Expand Down Expand Up @@ -42,9 +42,9 @@ hide:
</thead>
<tbody>

-<tr id="The nuclear sector represents the primary source of carbon-free energy in the United States. Nevertheless, existing nuclear power plants face the threat of early shutdowns due to their inability to compete economically against alternatives such as gas power plants. Optimizing the fuel cycle cost through the optimization of core loading patterns is one approach to addressing this lack of competitiveness. However, this optimization task involves multiple objectives and constraints, resulting in a vast number of candidate solutions that cannot be explicitly solved. While stochastic optimization (SO) methodologies are utilized by various nuclear utilities and vendors for fuel cycle reload design, manual design remains the preferred approach. To advance the state-of-the-art in core reload patterns, we have developed methods based on Deep Reinforcement Learning. Previous research has laid the groundwork for this approach and demonstrated its ability to discover high-quality patterns within a reasonable timeframe. However, there is a need for comparison against legacy methods to demonstrate its utility in a single-objective setting. While RL methods have shown superiority in multi-objective settings, they have not yet been applied to address the competitiveness issue effectively. In this paper, we rigorously compare our RL-based approach against the most commonly used SO-based methods, namely Genetic Algorithm (GA), Simulated Annealing (SA), and Tabu Search (TS). Subsequently, we introduce a new hybrid paradigm to devise innovative designs, resulting in economic gains ranging from 2.8 to 3.3 million dollars per year per plant. This development leverages interpretable AI, enabling improved algorithmic efficiency by making black-box optimizations interpretable. Future work will focus on scaling this method to address a broader range of core designs.">
+<tr id="Optimizing the fuel cycle cost through the optimization of nuclear reactor core loading patterns involves multiple objectives and constraints, leading to a vast number of candidate solutions that cannot be explicitly solved. To advance the state-of-the-art in core reload patterns, we have developed methods based on Deep Reinforcement Learning (DRL) for both single- and multi-objective optimization. Our previous research has laid the groundwork for these approaches and demonstrated their ability to discover high-quality patterns within a reasonable time frame. On the other hand, stochastic optimization (SO) approaches are commonly used in the literature, but there is no rigorous explanation that shows which approach is better in which scenario. In this paper, we demonstrate the advantage of our RL-based approach, specifically using Proximal Policy Optimization (PPO), against the most commonly used SO-based methods: Genetic Algorithm (GA), Parallel Simulated Annealing (PSA) with mixing of states, and Tabu Search (TS), as well as an ensemble-based method, Prioritized Replay Evolutionary and Swarm Algorithm (PESA). We found that the LP scenarios derived in this paper are amenable to a global search to identify promising research directions rapidly, but then need to transition into a local search to exploit these directions efficiently and prevent getting stuck in local optima. PPO adapts its search capability via a policy with learnable weights, allowing it to function as both a global and local search method. Subsequently, we compared all algorithms against PPO in long runs, which exacerbated the differences seen in the shorter cases. Overall, the work demonstrates the statistical superiority of PPO compared to the other considered algorithms.">
<td id="tag"><i class="material-icons">visibility_off</i></td>
-<td><a href="https://www.semanticscholar.org/paper/2f81024fe705a70ad1dcacecb371e2c31c8e49b9" target='_blank'>Surpassing legacy approaches and human intelligence with hybrid single- and multi-objective Reinforcement Learning-based optimization and interpretable AI to enable the economic operation of the US nuclear fleet</a></td>
+<td><a href="https://www.semanticscholar.org/paper/0541f363cd64cc0a196357699559c05301d66b61" target='_blank'>Surpassing legacy approaches to PWR core reload optimization with single-objective Reinforcement learning</a></td>
<td>
Paul Seurin, K. Shirvan
</td>
@@ -98,7 +98,7 @@ hide:
</td>
<td>2023-06-01</td>
<td>IEEE Transactions on Aerospace and Electronic Systems</td>
-<td>1</td>
+<td>2</td>
<td>14</td>
</tr>

@@ -110,7 +110,7 @@ hide:
</td>
<td>2023-11-27</td>
<td>The journal of physical chemistry. B</td>
-<td>3</td>
+<td>4</td>
<td>29</td>
</tr>
