Paper link: [1611.07308] Variational Graph Auto-Encoders (arxiv.org)

The English here is typed by hand as a summary and paraphrase of the original paper. Unavoidable spelling and grammar mistakes may slip in; if you spot any, feel free to point them out in the comments. This post is more of a reading note, so take it with a grain of salt.

Table of Contents
1. TL;DR
1.1. Takeaways
1.2. Paper summary figure
2. Section-by-section reading
2.1. A latent variable model for graph-structured data
2.2. Experiments on link prediction
3. Reference

1. TL;DR

1.1. Takeaways

①Such a short paper, only two pages!

1.2. Paper summary figure

2. Section-by-section reading
2.1. A latent variable model for graph-structured data

①Task: unsupervised learning

②Latent space of the unsupervised VGAE trained on Cora, a citation network dataset (shown as Figure 1 in the paper)

③Definitions: for an undirected, unweighted graph $\mathcal{G}=(\mathcal{V},\mathcal{E})$ with $N=|\mathcal{V}|$ nodes, the adjacency matrix is $\mathbf{A}$ (with self-loops, i.e. the diagonal elements are all set to 1), the degree matrix is $\mathbf{D}$, the stochastic latent variables $\mathbf{z}_i$ are summarized in an $N\times F$ matrix $\mathbf{Z}$, and the node features are summarized in an $N\times D$ matrix $\mathbf{X}$ (the paper does not say here what these node features actually are; I guess you just define them yourself)
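To make the objects in ③ concrete, here is a tiny NumPy sketch (my own toy example, not from the paper) that builds the self-looped adjacency matrix $\mathbf{A}$, the degree matrix $\mathbf{D}$, and the symmetrically normalized adjacency $\tilde{\mathbf{A}}=\mathbf{D}^{-1/2}\mathbf{A}\mathbf{D}^{-1/2}$ that the GCN encoder in ⑤ below operates on. The edge list is made up.

```python
import numpy as np

# Toy undirected, unweighted graph given as an edge list (made-up example).
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
N = 4  # number of nodes

# Adjacency matrix A with self-loops: diagonal elements all set to 1.
A = np.zeros((N, N))
for i, j in edges:
    A[i, j] = A[j, i] = 1.0
A += np.eye(N)

# Degree matrix D of the self-looped A.
deg = A.sum(axis=1)
D = np.diag(deg)

# Symmetrically normalized adjacency used by the GCN encoder (item 5 below):
# A_tilde = D^{-1/2} A D^{-1/2}
D_inv_sqrt = np.diag(deg ** -0.5)
A_tilde = D_inv_sqrt @ A @ D_inv_sqrt
print(A_tilde.round(2))
```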
④Inference model:
$$q(\mathbf{Z}\mid\mathbf{X},\mathbf{A})=\prod_{i=1}^{N}q(\mathbf{z}_i\mid\mathbf{X},\mathbf{A}),\quad\text{with}\quad q(\mathbf{z}_i\mid\mathbf{X},\mathbf{A})=\mathcal{N}\!\left(\mathbf{z}_i\mid\boldsymbol{\mu}_i,\operatorname{diag}(\boldsymbol{\sigma}_i^{2})\right),$$
where $\boldsymbol{\mu}=\mathrm{GCN}_{\mu}(\mathbf{X},\mathbf{A})$ is the matrix of mean vectors $\boldsymbol{\mu}_i$; similarly $\log\boldsymbol{\sigma}=\mathrm{GCN}_{\sigma}(\mathbf{X},\mathbf{A})$.
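Since the posterior in ④ is a diagonal Gaussian per node, and the paper trains with the reparameterization trick, sampling the latent matrix $\mathbf{Z}$ looks roughly like this (my own sketch; `mu` and `logsigma` stand for the two GCN outputs defined in ⑤ below):

```python
import torch

def sample_latents(mu: torch.Tensor, logsigma: torch.Tensor) -> torch.Tensor:
    """Draw Z ~ N(mu, diag(sigma^2)) row-wise via the reparameterization trick.

    mu, logsigma: N x F tensors produced by GCN_mu and GCN_sigma (see item 5).
    """
    eps = torch.randn_like(mu)              # epsilon ~ N(0, I)
    return mu + torch.exp(logsigma) * eps   # z_i = mu_i + sigma_i * eps_i
```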
(Why is there a log on the left-hand side? Presumably the GCN outputs $\log\boldsymbol{\sigma}$ so that $\boldsymbol{\sigma}$ stays positive without any extra constraint.)

⑤A 2-layer GCN:
$$\mathrm{GCN}(\mathbf{X},\mathbf{A})=\tilde{\mathbf{A}}\,\mathrm{ReLU}\!\left(\tilde{\mathbf{A}}\mathbf{X}\mathbf{W}_0\right)\mathbf{W}_1,$$
where $\mathbf{W}_i$ denotes a weight matrix and $\tilde{\mathbf{A}}=\mathbf{D}^{-1/2}\mathbf{A}\mathbf{D}^{-1/2}$ is the symmetrically normalized adjacency matrix.

⑥$\mathrm{GCN}_{\mu}(\mathbf{X},\mathbf{A})$ and $\mathrm{GCN}_{\sigma}(\mathbf{X},\mathbf{A})$ share the first-layer parameters $\mathbf{W}_0$. (What is this? Why are there two? Is it referring back to some earlier Gaussian? As far as I can tell, the two GCNs simply output the mean and the log standard deviation of the Gaussian posterior in ④; see the encoder/decoder sketch right after ⑦.)

⑦Generative model:
$$p(\mathbf{A}\mid\mathbf{Z})=\prod_{i=1}^{N}\prod_{j=1}^{N}p(A_{ij}\mid\mathbf{z}_i,\mathbf{z}_j),\quad\text{with}\quad p(A_{ij}=1\mid\mathbf{z}_i,\mathbf{z}_j)=\sigma(\mathbf{z}_i^{\top}\mathbf{z}_j),$$
where $\sigma(\cdot)$ is the logistic sigmoid function.
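A minimal PyTorch sketch of how I read ⑤ to ⑦: a two-layer GCN encoder with $\mathbf{W}_0$ shared between the $\boldsymbol{\mu}$ and $\log\boldsymbol{\sigma}$ heads, plus the inner-product decoder. This is my own illustration rather than the authors' code; `A_tilde` is the normalized adjacency from the earlier sketch, and the layer sizes (32 hidden, 16 latent) follow the hyperparameters listed in section 2.2.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class VGAESketch(nn.Module):
    """Rough VGAE sketch: GCN(X, A) = A_tilde * ReLU(A_tilde X W0) * W1,
    with W0 shared between the mu head and the log-sigma head."""

    def __init__(self, in_dim: int, hidden_dim: int = 32, latent_dim: int = 16):
        super().__init__()
        self.W0 = nn.Linear(in_dim, hidden_dim, bias=False)            # shared first layer
        self.W1_mu = nn.Linear(hidden_dim, latent_dim, bias=False)     # second layer of GCN_mu
        self.W1_sigma = nn.Linear(hidden_dim, latent_dim, bias=False)  # second layer of GCN_sigma

    def encode(self, X: torch.Tensor, A_tilde: torch.Tensor):
        H = F.relu(A_tilde @ self.W0(X))        # shared hidden layer
        mu = A_tilde @ self.W1_mu(H)            # GCN_mu(X, A)
        logsigma = A_tilde @ self.W1_sigma(H)   # GCN_sigma(X, A)
        return mu, logsigma

    @staticmethod
    def decode(Z: torch.Tensor) -> torch.Tensor:
        # p(A_ij = 1 | z_i, z_j) = sigmoid(z_i^T z_j), computed for all pairs at once
        return torch.sigmoid(Z @ Z.T)

    def forward(self, X, A_tilde):
        mu, logsigma = self.encode(X, A_tilde)
        Z = mu + torch.exp(logsigma) * torch.randn_like(mu)  # reparameterization
        return self.decode(Z), mu, logsigma
```

Usage would be something like `A_hat, mu, logsigma = VGAESketch(in_dim=X.shape[1])(X, A_tilde)`, where `X` and `A_tilde` are float tensors.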
⑧Loss function: the variational lower bound
$$\mathcal{L}=\mathbb{E}_{q(\mathbf{Z}\mid\mathbf{X},\mathbf{A})}\!\left[\log p(\mathbf{A}\mid\mathbf{Z})\right]-\mathrm{KL}\!\left[q(\mathbf{Z}\mid\mathbf{X},\mathbf{A})\,\|\,p(\mathbf{Z})\right],$$
where $\mathrm{KL}[q(\cdot)\|p(\cdot)]$ is the Kullback-Leibler divergence and the Gaussian prior is $p(\mathbf{Z})=\prod_i p(\mathbf{z}_i)=\prod_i\mathcal{N}(\mathbf{z}_i\mid\mathbf{0},\mathbf{I})$.

⑨The authors note that for a very sparse adjacency matrix it may be beneficial to (a) re-weight the terms with $A_{ij}=1$ in the loss, or (b) sub-sample the terms with $A_{ij}=0$. They chose option (a). (A sketch of this re-weighted loss is given after section 2.2.)

⑩If there are no node features, $\mathbf{X}$ is replaced by the identity matrix.

⑪The non-probabilistic graph auto-encoder (GAE) model reconstructs the adjacency matrix as $\hat{\mathbf{A}}=\sigma(\mathbf{Z}\mathbf{Z}^{\top})$, with embeddings $\mathbf{Z}=\mathrm{GCN}(\mathbf{X},\mathbf{A})$.

2.2. Experiments on link prediction

①Prediction task: randomly remove some edges while keeping all node features

②Validation/test sets: the removed edges plus an equal number of randomly sampled unconnected node pairs (see the splitting sketch below)

③Removed edges: 5% of the citation links for the validation set and 10% for the test set

④Epochs: 200

⑤Optimizer: Adam

⑥Learning rate: 0.01

⑦Hidden dim: 32

⑧Latent variable dim: 16

⑨Embedding dim: 128 (for the SC and DeepWalk baselines)

⑩Performance comparison table (in the paper): mean results and standard error over 10 runs, where * means without node features
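As noted in ⑨ of section 2.1, the reconstruction terms with $A_{ij}=1$ are re-weighted for sparse graphs. The paper does not spell out the weighting, so the `pos_weight` ratio (negatives over positives) below is my assumption, borrowed from common public VGAE implementations; the rest is simply the negative of the lower bound $\mathcal{L}$ from ⑧.

```python
import torch
import torch.nn.functional as F

def vgae_loss(A_hat, A_label, mu, logsigma):
    """Negative ELBO: re-weighted reconstruction term plus KL to N(0, I).

    A_hat:   N x N predicted edge probabilities, sigmoid(Z Z^T)
    A_label: N x N float adjacency matrix used as the reconstruction target
    mu, logsigma: N x F encoder outputs
    """
    N = A_label.shape[0]
    n_pos = A_label.sum()
    n_neg = N * N - n_pos

    # Option (a) from the paper: up-weight the rare A_ij = 1 terms.
    # The negatives-to-positives ratio is an assumed choice, not given in the paper.
    pos_weight = n_neg / n_pos
    weight = torch.ones_like(A_label) + (pos_weight - 1.0) * A_label
    recon = F.binary_cross_entropy(A_hat, A_label, weight=weight, reduction="sum")

    # KL[ N(mu, diag(sigma^2)) || N(0, I) ] summed over all nodes and dimensions.
    kl = -0.5 * torch.sum(1 + 2 * logsigma - mu.pow(2) - torch.exp(2 * logsigma))
    return recon + kl
```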
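For the link-prediction protocol in section 2.2, here is a rough sketch of the edge split and the AUC / average-precision scoring on $\sigma(\mathbf{z}_i^{\top}\mathbf{z}_j)$. The function names and the rejection sampling of negative pairs are my own choices; the paper only states that the validation and test sets contain the removed edges plus an equal number of unconnected node pairs.

```python
import numpy as np
from sklearn.metrics import average_precision_score, roc_auc_score

def split_edges(edges, num_nodes, val_frac=0.05, test_frac=0.10, seed=0):
    """Hide 5% / 10% of the edges for validation / test and sample an equal
    number of unconnected node pairs as negatives."""
    rng = np.random.default_rng(seed)
    edges = np.array(edges)
    rng.shuffle(edges)

    n_val = int(len(edges) * val_frac)
    n_test = int(len(edges) * test_frac)
    val_pos = edges[:n_val]
    test_pos = edges[n_val:n_val + n_test]
    train_pos = edges[n_val + n_test:]

    # Rejection-sample unconnected node pairs as negatives.
    existing = {tuple(sorted(e)) for e in edges}
    neg = []
    while len(neg) < n_val + n_test:
        i, j = rng.integers(0, num_nodes, size=2)
        if i != j and tuple(sorted((i, j))) not in existing:
            neg.append((i, j))
    neg = np.array(neg)
    return train_pos, (val_pos, neg[:n_val]), (test_pos, neg[n_val:])

def link_prediction_scores(Z, pos_pairs, neg_pairs):
    """AUC and average precision from decoder probabilities sigmoid(z_i . z_j)."""
    def prob(pairs):
        logits = np.einsum("ij,ij->i", Z[pairs[:, 0]], Z[pairs[:, 1]])
        return 1.0 / (1.0 + np.exp(-logits))

    scores = np.concatenate([prob(pos_pairs), prob(neg_pairs)])
    labels = np.concatenate([np.ones(len(pos_pairs)), np.zeros(len(neg_pairs))])
    return roc_auc_score(labels, scores), average_precision_score(labels, scores)
```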
3. Reference

Kipf, T. N., & Welling, M. (2016). Variational Graph Auto-Encoders. NIPS. doi: https://doi.org/10.48550/arXiv.1611.07308