Hello! I am new to dataset distillation and my question may be naive. It seems to me that dataset distillation is generally aimed at classification tasks, synthesizing condensed data for each class for efficient training, just like the node classification task in your work. I'm wondering whether your work can also apply to link prediction tasks on graphs?
Common node classification methods follow the AXW scheme, which facilitates graph structure learning (fix W and update A/X). Using the same idea for the pairwise link prediction task (MLP(v1, v2)), you can fix the W in the MLP and update v1 and v2. IMO it's pretty trivial to handle the conflict on v1 when simultaneously updating (v1, v2) and (v1, v3), since the gradients from both pairs simply accumulate on v1.
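To make this concrete, here is a minimal PyTorch sketch of the fix-W / update-features idea, assuming a simple MLP link predictor. Names like `score_mlp` and `x_syn` are hypothetical, and the loop directly fits synthetic pairs for brevity rather than running a full gradient-matching distillation pipeline:

```python
import torch
import torch.nn as nn

d = 64        # embedding dimension (assumed)
n_syn = 100   # number of synthetic nodes (assumed)

# Link predictor: MLP([v1 || v2]) -> logit. Its weights W stay FIXED.
score_mlp = nn.Sequential(nn.Linear(2 * d, d), nn.ReLU(), nn.Linear(d, 1))
for p in score_mlp.parameters():
    p.requires_grad_(False)   # fix W, analogous to fixing W in the AXW scheme

# Synthetic node features are the only learnable quantity (the "A/X" part).
x_syn = nn.Parameter(torch.randn(n_syn, d))
opt = torch.optim.Adam([x_syn], lr=0.01)

# Toy supervision: synthetic positive/negative pairs with binary labels.
pos = torch.randint(0, n_syn, (256, 2))   # (v1, v2) positive pairs
neg = torch.randint(0, n_syn, (256, 2))   # negative pairs
pairs = torch.cat([pos, neg])
labels = torch.cat([torch.ones(256), torch.zeros(256)])

for step in range(200):
    opt.zero_grad()
    v1, v2 = x_syn[pairs[:, 0]], x_syn[pairs[:, 1]]
    logits = score_mlp(torch.cat([v1, v2], dim=-1)).squeeze(-1)
    # Gradients from all pairs sharing a node, e.g. (v1, v2) and (v1, v3),
    # simply accumulate on that node's row of x_syn -- autograd handles it.
    loss = nn.functional.binary_cross_entropy_with_logits(logits, labels)
    loss.backward()
    opt.step()
```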
For subgraph-based link prediction, you can refer to DosCond. For pairwise link prediction, you can simply select fewer positive samples via some strategy.
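One such selection strategy, as a hypothetical illustration (the choice of degree as an importance score is my assumption, not from the paper), is to keep only the k positive edges whose endpoints have the highest combined degree:

```python
import torch

def select_positive_edges(edge_index: torch.Tensor, num_nodes: int, k: int) -> torch.Tensor:
    """edge_index: (2, E) long tensor of positive edges; returns a (2, k) subset."""
    deg = torch.bincount(edge_index.flatten(), minlength=num_nodes)
    score = deg[edge_index[0]] + deg[edge_index[1]]   # degree-based importance
    keep = score.topk(min(k, edge_index.size(1))).indices
    return edge_index[:, keep]
```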