APTM: Structurally Informative Network Representation Learning
DOI: https://doi.org/10.54691/fse.v3i11.5701

Keywords: Network Representation Learning; Graph Data Mining; Autoencoder

Abstract
Network representation learning algorithms map complex network data into low-dimensional real-valued vectors while aiming to capture and preserve the network's structural information. In recent years, these algorithms have been widely applied to graph data mining tasks such as link prediction and node classification. In this work, we propose a novel algorithm based on an adaptive transfer probability matrix. A deep neural network built around an autoencoder encodes and reduces the dimensionality of the generated matrix, thereby compressing the network's intricate structural information into low-dimensional real-valued vectors. We evaluate the algorithm through node classification, where it compares favorably with mainstream network representation learning algorithms, outperforming the baseline models in micro-F1 score on three datasets: PPI, Citeseer, and Wiki.
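The abstract's pipeline, building a transfer probability matrix from the graph and then compressing it into low-dimensional node vectors, can be sketched as follows. The paper's exact adaptive construction is not given here, so this sketch makes two labeled assumptions: the matrix is a weighted mixture of k-step transition probabilities (a common way to encode multi-hop structure), and a truncated SVD stands in for the deep autoencoder, since a linear autoencoder's optimum coincides with the PCA/SVD projection.

```python
import numpy as np

def transfer_probability_matrix(adj, steps=3, weights=None):
    """Mix k-step transition probabilities into one structural matrix.

    Assumption: APTM's adaptive matrix is approximated here by a weighted
    sum of P, P^2, ..., P^steps with uniform weights by default; the actual
    adaptive weighting scheme is defined in the paper, not reproduced here.
    """
    deg = adj.sum(axis=1, keepdims=True)
    P = adj / np.maximum(deg, 1)          # one-step transition probabilities
    if weights is None:
        weights = np.ones(steps) / steps  # uniform mixing as a placeholder
    M = np.zeros_like(P)
    Pk = np.eye(len(adj))
    for k in range(steps):
        Pk = Pk @ P                       # k+1-step transition probabilities
        M += weights[k] * Pk
    return M

# toy graph: a 4-node path 0-1-2-3
adj = np.array([[0, 1, 0, 0],
                [1, 0, 1, 0],
                [0, 1, 0, 1],
                [0, 0, 1, 0]], dtype=float)
M = transfer_probability_matrix(adj)
print(M.sum(axis=1))                      # each row still sums to 1

# Stand-in for the deep autoencoder: project M onto its top-d singular
# directions to obtain d-dimensional node embeddings.
U, S, Vt = np.linalg.svd(M)
d = 2
embeddings = U[:, :d] * S[:d]             # one d-dimensional vector per node
print(embeddings.shape)
```

In the paper itself the encoder is a deep nonlinear autoencoder trained to reconstruct the matrix; the SVD here is only a lightweight linear surrogate for illustrating the input/output shapes.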
License

This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.