Reference Paper
@misc{veličković2018graphattentionnetworks,
  title={Graph Attention Networks},
  author={Petar Veličković and Guillem Cucurull and Arantxa Casanova and Adriana Romero and Pietro Liò and Yoshua Bengio},
  year={2018},
  eprint={1710.10903},
  archivePrefix={arXiv},
  primaryClass={stat.ML},
  url={https://arxiv.org/abs/1710.10903},
}
@misc{hou2022graphmaeselfsupervisedmaskedgraph,
  title={GraphMAE: Self-Supervised Masked Graph Autoencoders},
  author={Zhenyu Hou and Xiao Liu and Yukuo Cen and Yuxiao Dong and Hongxia Yang and Chunjie Wang and Jie Tang},
  year={2022},
  eprint={2205.10803},
  archivePrefix={arXiv},
  primaryClass={cs.LG},
  url={https://arxiv.org/abs/2205.10803},
}
@misc{hu2020heterogeneousgraphtransformer,
  title={Heterogeneous Graph Transformer},
  author={Ziniu Hu and Yuxiao Dong and Kuansan Wang and Yizhou Sun},
  year={2020},
  eprint={2003.01332},
  archivePrefix={arXiv},
  primaryClass={cs.LG},
  url={https://arxiv.org/abs/2003.01332},
}
@misc{xu2019powerfulgraphneuralnetworks,
  title={How Powerful are Graph Neural Networks?},
  author={Keyulu Xu and Weihua Hu and Jure Leskovec and Stefanie Jegelka},
  year={2019},
  eprint={1810.00826},
  archivePrefix={arXiv},
  primaryClass={cs.LG},
  url={https://arxiv.org/abs/1810.00826},
}
Graph representation learning / heterogeneous graphs (directly matching your chosen (device, register) structure)

You have defined nodes as (device, register) pairs, which is naturally a bipartite/heterogeneous graph (two entity classes: Device and Register). STOUTER uses a base station graph; here it is better to cite classic heterogeneous-graph modeling methods to explain why the bipartite structure serves as a "spatial prior".

Kipf & Welling. Semi-Supervised Classification with Graph Convolutional Networks (GCN). ICLR 2017.
Use: the foundational GCN; a standard baseline citation for your topology representation learning.

Veličković et al. Graph Attention Networks (GAT). ICLR 2018.
Use: attention-based aggregation, suited to settings where different edges deserve different weights (e.g., dependencies of varying strength between devices).

Xu et al. How Powerful are Graph Neural Networks? (GIN). ICLR 2019.
Use: emphasizes structural expressive power; cite it if you need strong structural discrimination.

Schlichtkrull et al. Modeling Relational Data with Graph Convolutional Networks (R-GCN). ESWC 2018.
Use: relation-typed graph convolution; a good fit for adding multiple edge types to the (device, register) graph, such as read, write, same-device, and process-link relations.

Hu et al. Heterogeneous Graph Transformer (HGT). WWW 2020.
Use: a Transformer for heterogeneous graphs; a strong reference if you later bring device type, register type, and function code into the heterogeneous model.

Hou et al. GraphMAE: Self-Supervised Masked Graph Autoencoders. KDD 2022.
Use: self-supervised graph pretraining; corresponds to STOUTER's graph-autoencoder pretraining idea, and supports the "first learn graph embeddings, then use them for generation" argument.
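The R-GCN entry above recommends typed edges (read, write, same-device) on the (device, register) graph. As a minimal pure-Python sketch of that idea — relation-typed neighbor aggregation, with made-up node ids, toy features, and an identity transform in place of R-GCN's learned per-relation weight matrices — one layer could look like:

```python
from collections import defaultdict

# Hypothetical (device, register) heterogeneous graph, for illustration only:
# node id -> feature vector
feat = {
    "dev0": [1.0, 0.0], "dev1": [0.0, 1.0],
    "reg0": [0.5, 0.5], "reg1": [1.0, 1.0],
}

# edges grouped by relation type: relation -> list of (src, dst) pairs
edges = {
    "read":  [("dev0", "reg0"), ("dev1", "reg1")],
    "write": [("dev0", "reg1")],
}

def rgcn_layer(feat, edges):
    """One R-GCN-style layer: per-relation mean aggregation over incoming
    neighbors, summed across relations, plus a self-loop term."""
    out = {n: list(v) for n, v in feat.items()}      # self-loop (copy of input)
    for rel, pairs in edges.items():
        bucket = defaultdict(list)                   # dst -> neighbor features under rel
        for src, dst in pairs:
            bucket[dst].append(feat[src])
        for dst, msgs in bucket.items():             # add per-relation mean message
            for d in range(len(out[dst])):
                out[dst][d] += sum(m[d] for m in msgs) / len(msgs)
    return out

h = rgcn_layer(feat, edges)
```

A real implementation (e.g., in PyTorch Geometric) would multiply each relation's message by a learned weight matrix before summing; the point here is only that messages are bucketed by relation type, which is what distinguishes R-GCN from plain GCN on this graph.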
@misc{schlichtkrull2017modelingrelationaldatagraph,
  title={Modeling Relational Data with Graph Convolutional Networks},
  author={Michael Schlichtkrull and Thomas N. Kipf and Peter Bloem and Rianne van den Berg and Ivan Titov and Max Welling},
  year={2017},
  eprint={1703.06103},
  archivePrefix={arXiv},
  primaryClass={stat.ML},
  url={https://arxiv.org/abs/1703.06103},
}
@misc{kipf2017semisupervisedclassificationgraphconvolutional,
  title={Semi-Supervised Classification with Graph Convolutional Networks},
  author={Thomas N. Kipf and Max Welling},
  year={2017},
  eprint={1609.02907},
  archivePrefix={arXiv},
  primaryClass={cs.LG},
  url={https://arxiv.org/abs/1609.02907},
}