My profile

Biography

Dr. Xinqi Fan is a Lecturer (Assistant Professor) in Artificial Intelligence at Manchester Metropolitan University. He received his Bachelor's degree from Southwest University, his Master's degree from the University of Western Australia, and his Ph.D. degree from the City University of Hong Kong. He was a research assistant at King Abdullah University of Science and Technology, and an autonomous driving engineer at NIO.

Dr. Fan’s research interests include artificial intelligence, deep learning, computer vision, and their applications in facial, human, medical and energy analysis. He has published several papers in top artificial intelligence conferences and journals, including the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), the International Conference on Medical Image Computing and Computer Assisted Intervention (MICCAI), and IEEE Transactions on Image Processing (TIP). He was a recipient of the National Scholarship and of the Yuhui Scholarship for academic excellence in Computer and Information Science. He and his collaborators won a Silver Medal at the International Exhibition of Inventions Geneva, a Silver Medal at the Internet+ Innovation and Entrepreneurship Competition, and second place in the Facial Micro-Expression Challenge at ACM Multimedia. He is a certified Facial Action Coding System coder.

Dr. Fan was a Chair of the IEEE student branch at the City University of Hong Kong. He has been a reviewer for several prestigious journals, including IEEE Transactions on Affective Computing, IEEE Transactions on Multimedia, IEEE Transactions on Circuits and Systems for Video Technology, ACM Transactions on Multimedia Computing, Communications, and Applications, Pattern Recognition, Pattern Recognition Letters, and Engineering Applications of Artificial Intelligence, among others.

Please feel free to email Xinqi (x.fan@mmu.ac.uk) for potential collaborations, visiting, and Ph.D. opportunities in AI+X.

Teaching

Deep Learning

Supervision

I am always looking for highly motivated students to join my team to do research. Students will receive full support and have academic exchange opportunities at top universities worldwide. Preferred academic backgrounds include, but are not limited to, artificial intelligence, data science, computer science, electrical and electronic engineering, and automation.

The successful candidates will join an energetic and friendly research group, Human-Centred Computing Group, led by Prof. Moi Hoon Yap (Professor of Image and Vision Computing).

If you are interested in joining my team, please do not hesitate to contact me at x.fan@mmu.ac.uk with your CV and a short self-introduction. I always welcome self-funded Ph.D. students with a strong background, and funded Ph.D. opportunities may be available depending on funding.

Research outputs

Selected Publications

  • X. Fan, X. Chen, M. Jiang, et al., “SelfME: Self-supervised motion learning for micro-expression recognition,” IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2023.
  • X. Chen, X. Fan, B. Chiu, “Interpretable deep biomarker for serial monitoring of carotid atherosclerosis based on three-dimensional ultrasound imaging,” International Conference on Medical Image Computing and Computer Assisted Intervention (MICCAI), 2023.
  • A. Zafar, D. Aftab, R. Qureshi, X. Fan, et al., “Single stage adaptive multi-attention network for image restoration,” IEEE Transactions on Image Processing (TIP), 2024.
  • A. Shahid, M. Nawaz, X. Fan, et al., “View-adaptive graph neural network for action recognition,” IEEE Transactions on Cognitive and Developmental Systems (TCDS), 2022.
  • X. Fan, A. Shahid, H. Yan, “Facial micro-expression generation based on deep motion retargeting and transfer learning,” ACM International Conference on Multimedia (MM), 2021.
  • Chapters in books

    Chen, X., Fan, X., Chiu, B. (2023) 'Interpretable Deep Biomarker for Serial Monitoring of Carotid Atherosclerosis Based on Three-Dimensional Ultrasound Imaging.' Lecture Notes in Computer Science. Springer Nature Switzerland, pp. 295-305.

  • Journal articles

    Zafar, A., Aftab, D., Qureshi, R., Fan, X., Chen, P., Wu, J., Ali, H., Nawaz, S., Khan, S., Shah, M. (2024) 'Single Stage Adaptive Multi-Attention Network for Image Restoration.' IEEE Transactions on Image Processing, 33, pp. 2924-2935.

    Shahid, A.R., Nawaz, M., Fan, X., Yan, H. (2023) 'View-Adaptive Graph Neural Network for Action Recognition.' IEEE Transactions on Cognitive and Developmental Systems, 15(2), pp. 969-978.

    Qureshi, R., Basit, S.A., Shamsi, J.A., Fan, X., Nawaz, M., Yan, H., Alam, T. (2022) 'Machine learning based personalized drug response prediction for lung cancer patients.' Scientific Reports, 12(1).

    Fan, X., Shahid, A.R., Yan, H. (2022) 'Edge-aware motion based facial micro-expression generation with attention mechanism.' Pattern Recognition Letters, 162, pp. 97-104.

    Fan, X., Jiang, M., Shahid, A.R., Yan, H. (2022) 'Hierarchical scale convolutional neural network for facial expression recognition.' Cognitive Neurodynamics, 16(4), pp. 847-858.

    Fan, X., Jiang, M., Yan, H. (2021) 'A Deep Learning Based Light-Weight Face Mask Detector with Residual Context Attention and Gaussian Heatmap to Fight against COVID-19.' IEEE Access, 9, pp. 96964-96974.

  • Conference papers

    Fan, X., Shahid, A.R., Yan, H. (2022) 'Adaptive Dual Motion Model for Facial Micro-Expression Generation.' pp. 7125-7129.

    Fan, X., Shahid, A.R., Yan, H. (2021) 'Facial Micro-Expression Generation based on Deep Motion Retargeting and Transfer Learning.' ACM International Conference on Multimedia (MM), pp. 4735-4739.

    Fan, X., Jiang, M., Zhang, H., Li, Y., Yan, H. (2021) 'Quantized Separable Residual Network for Facial Expression Recognition on FPGA.' Communications in Computer and Information Science (CCIS), vol. 1397, pp. 3-14.

    Fan, X., Jiang, M. (2021) 'RetinaFaceMask: A Single Stage Face Mask Detector for Assisting Control of the COVID-19 Pandemic.' pp. 832-837.

    Fan, X., Qureshi, R., Shahid, A.R., Cao, J., Yang, L., Yan, H. (2020) 'Hybrid separable convolutional inception residual network for human facial expression recognition.' December 2020, pp. 21-26.