internal-docs/papers/RefPaperByMarkyan04/Topic5 Graph or heterogeneous graph priors/Intro.txt
2026-01-22 16:08:52 +08:00

Graph representation learning / heterogeneous graphs (directly matches your chosen (Device, Register) structure)
You have fixed the nodes as (Device, Register) pairs, which naturally forms a bipartite / heterogeneous graph with two entity types, Device and Register. STOUTER uses a base station graph; here it makes more sense to cite classic heterogeneous-graph modeling work to justify using the bipartite structure as a "spatial prior".
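To make the bipartite structure concrete, here is a minimal sketch that builds the biadjacency matrix from (device, register) observations; the device names and register addresses are invented for illustration:

```python
import numpy as np

# Hypothetical (device, register) observations; names/addresses are illustrative only.
pairs = [("plc-1", 40001), ("plc-1", 40002), ("plc-2", 40001), ("hmi-1", 30005)]

devices = sorted({d for d, _ in pairs})
registers = sorted({r for _, r in pairs})
d_idx = {d: i for i, d in enumerate(devices)}
r_idx = {r: j for j, r in enumerate(registers)}

# Biadjacency matrix B: B[i, j] = 1 iff device i touches register j.
B = np.zeros((len(devices), len(registers)), dtype=int)
for d, r in pairs:
    B[d_idx[d], r_idx[r]] = 1

# The full bipartite adjacency places B off the diagonal:
#   A = [[0, B], [B^T, 0]]
# so edges only ever connect a Device node to a Register node.
A = np.block([
    [np.zeros((len(devices), len(devices)), dtype=int), B],
    [B.T, np.zeros((len(registers), len(registers)), dtype=int)],
])
```

Any of the GNNs cited below can then operate on `A` (or on `B` directly, for bipartite-aware variants).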
Kipf & Welling. Semi-Supervised Classification with Graph Convolutional Networks (GCN). ICLR 2017.
Use: the foundational GCN; the standard baseline citation for topology representation learning.
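For reference, the GCN propagation rule from the paper, H' = sigma(D^-1/2 (A+I) D^-1/2 H W), in a self-contained NumPy sketch (toy graph and weights are invented):

```python
import numpy as np

def gcn_layer(A, H, W):
    """One GCN propagation step: H' = ReLU(D^-1/2 (A+I) D^-1/2 H W)."""
    A_hat = A + np.eye(A.shape[0])          # add self-loops
    d = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))  # symmetric normalization D^-1/2
    return np.maximum(D_inv_sqrt @ A_hat @ D_inv_sqrt @ H @ W, 0.0)

# Toy 3-node path graph (0-1-2), 2-d features, identity weights.
A = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float)
H = np.eye(3, 2)
W = np.eye(2)
out = gcn_layer(A, H, W)
```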
Veličković et al. Graph Attention Networks (GAT). ICLR 2018.
Use: attention-based aggregation, suited to cases where different edges deserve different weights (e.g. dependencies between devices vary in strength).
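The per-edge weighting GAT computes can be sketched as follows: a learned vector `a` scores each (node, neighbor) pair via LeakyReLU(a^T [h_i || h_j]), and a softmax over the neighborhood yields the attention coefficients. The feature values and `a` below are made up:

```python
import numpy as np

def gat_attention(h_i, neighbor_feats, a, slope=0.2):
    """GAT attention: e_ij = LeakyReLU(a^T [h_i || h_j]), alpha = softmax_j(e_ij)."""
    logits = []
    for h_j in neighbor_feats:
        e = a @ np.concatenate([h_i, h_j])
        logits.append(e if e > 0 else slope * e)   # LeakyReLU
    logits = np.array(logits)
    exp = np.exp(logits - logits.max())            # numerically stable softmax
    return exp / exp.sum()

h_i = np.array([1.0, 0.0])
neighbors = [np.array([0.5, 0.5]), np.array([0.0, 1.0])]
a = np.array([0.3, -0.1, 0.8, 0.2])                # illustrative attention vector
alpha = gat_attention(h_i, neighbors, a)
```

The aggregated message is then the alpha-weighted sum of the (transformed) neighbor features, which is exactly the "different neighbors, different weights" behavior noted above.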
Xu et al. How Powerful are Graph Neural Networks? (GIN). ICLR 2019.
Use: emphasizes structural expressive power; cite it if you need strong structural discrimination.
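GIN's expressiveness comes from its injective-style update, h'_v = MLP((1 + eps) * h_v + sum of neighbor features); a minimal sketch with an invented one-layer "MLP":

```python
import numpy as np

def gin_update(h_self, h_neighbors, mlp, eps=0.0):
    """GIN node update: h'_v = MLP((1 + eps) * h_v + sum_u h_u)."""
    return mlp((1.0 + eps) * h_self + np.sum(h_neighbors, axis=0))

# Toy "MLP": a single linear layer followed by ReLU (illustrative weights).
W = np.array([[1.0, -1.0],
              [0.5,  0.5]])
mlp = lambda x: np.maximum(x @ W, 0.0)

out = gin_update(np.array([1.0, 2.0]),
                 [np.array([0.0, 1.0]), np.array([1.0, 0.0])],
                 mlp)
```

Note the sum (not mean or max) aggregation: that choice is what lets GIN distinguish multisets of neighbors and match the 1-WL test in expressive power.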
Schlichtkrull et al. Modeling Relational Data with Graph Convolutional Networks (R-GCN). ESWC 2018.
Use: relation-typed graph convolution; a good fit for adding multiple edge types (read, write, same-device, process-link, etc.) to the (Device, Register) graph.
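The R-GCN idea, one weight matrix per relation type plus a self-loop transform, maps directly onto the read/write/etc. edge types above. A minimal sketch (relation names and weights are illustrative, and neighbor messages are mean-normalized for simplicity):

```python
import numpy as np

def rgcn_update(h_self, neighbors_by_rel, W_rel, W_self):
    """R-GCN update: h'_v = ReLU(W_self h_v + sum_r mean_{u in N_r(v)} W_r h_u)."""
    out = W_self @ h_self
    for rel, feats in neighbors_by_rel.items():
        if feats:
            out = out + W_rel[rel] @ np.mean(feats, axis=0)
    return np.maximum(out, 0.0)

# Hypothetical relation types for the (Device, Register) graph.
W_rel = {"read": np.eye(2), "write": 2 * np.eye(2)}
W_self = np.eye(2)

h = rgcn_update(np.array([1.0, 1.0]),
                {"read":  [np.array([1.0, 0.0])],
                 "write": [np.array([0.0, 1.0])]},
                W_rel, W_self)
```

Because each relation gets its own `W_r`, a "write" edge can influence the embedding differently from a "read" edge even between the same pair of nodes.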
Hu et al. Heterogeneous Graph Transformer (HGT). WWW 2020.
Use: a Transformer for heterogeneous graphs; a strong reference if you later fold device type / register type / function code into the heterogeneous modeling.
Hou et al. GraphMAE: Self-Supervised Masked Graph Autoencoders. KDD 2022.
Use: self-supervised graph pre-training; corresponds to STOUTER's graph-autoencoder pre-training idea, and supports the "learn graph embeddings first, then use them for generation" argument.
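The core of the masked-autoencoder objective can be sketched as: mask a subset of node features, reconstruct them, and score the reconstruction only on the masked nodes. This sketch uses plain MSE for simplicity (GraphMAE itself uses a scaled cosine error, and a real encoder/decoder would be a GNN; the trivial identity "decoder" here is just for illustration):

```python
import numpy as np

def masked_recon_loss(X, X_hat, mask):
    """Mean-squared reconstruction error over masked nodes only --
    the masked-autoencoding objective (simplified from GraphMAE)."""
    diff = (X_hat - X)[mask]
    return float((diff ** 2).mean())

rng = np.random.default_rng(0)
X = rng.normal(size=(5, 3))                          # original node features
mask = np.array([True, False, True, False, False])   # nodes whose features are masked

X_in = X.copy()
X_in[mask] = 0.0        # replace masked features with a [MASK] token (zeros here)
X_hat = X_in            # trivial "decoder" output, for illustration only

loss = masked_recon_loss(X, X_hat, mask)
```

In pre-training, minimizing this loss forces the encoder to infer masked register/device features from graph context, which is exactly the embedding you would then reuse for the downstream generation stage.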