Research
Current Research
Multi-modal Missing-Modality Completion for OCT and Fundus Medical Images
March 2024 - August 2024
University of Exeter
Supervised by Professor Yanda Meng
This research develops advanced techniques for handling incomplete multi-modal medical imaging data in ophthalmology. Our work has been accepted at AAAI 2025.
Key Contributions:
- Developed an Incomplete Modality Disentangled Representation strategy
- Created novel feature disentanglement methods for modal-common and modal-specific components
- Implemented mutual information-based learning and joint proxy learning
- Achieved significant improvements over state-of-the-art methods across four ophthalmology datasets
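The disentanglement idea above can be sketched in a few lines: each modality is projected into a modal-common and a modal-specific code, and an alignment term pulls the common codes of the two modalities together. The linear projections, dimensions, and loss below are illustrative placeholders, not the method from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def encode(x, w_common, w_specific):
    """Split a modality's features into modal-common and modal-specific parts
    via two hypothetical linear projections (stand-ins for learned encoders)."""
    return x @ w_common, x @ w_specific

d_in, d_lat = 8, 4
w_c_oct, w_s_oct = rng.normal(size=(d_in, d_lat)), rng.normal(size=(d_in, d_lat))
w_c_fun, w_s_fun = rng.normal(size=(d_in, d_lat)), rng.normal(size=(d_in, d_lat))

oct_feat = rng.normal(size=(2, d_in))     # toy OCT features
fundus_feat = rng.normal(size=(2, d_in))  # toy fundus features

c_oct, s_oct = encode(oct_feat, w_c_oct, w_s_oct)
c_fun, s_fun = encode(fundus_feat, w_c_fun, w_s_fun)

# Alignment loss pulls the modal-common codes of both modalities together;
# during training this term would be minimized alongside reconstruction losses.
align_loss = np.mean((c_oct - c_fun) ** 2)
```

In the full method this split would be learned end to end; the sketch only shows the structure of the common/specific factorization.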
Interpretable Machine Learning for Medical Diagnosis
2024
Tsinghua University School of Medicine
Supervised by Professor Xiaoyun Xie
This project focuses on developing interpretable ML models for early detection of diabetic complications.
Key Achievements:
- Developed predictive models for diabetic peripheral neuropathy (DPN) and lower extremity arterial disease (LEAD)
- Implemented advanced feature engineering techniques
- Utilized SHAP values for identifying critical risk factors
- Published findings in BMC Medical Informatics and Decision Making
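A risk-factor ranking of this kind can be illustrated on synthetic tabular data. SHAP itself requires the `shap` package, so this hedged sketch substitutes scikit-learn's permutation importance for the same feature-ranking role; the data and model are entirely synthetic stand-ins for clinical features.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
# Toy stand-in for clinical tabular data: only the first two features
# actually drive the label; the rest are noise.
X = rng.normal(size=(400, 5))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)

# Rank features by how much shuffling each one degrades held-out accuracy.
result = permutation_importance(model, X_te, y_te, n_repeats=10, random_state=0)
ranking = np.argsort(result.importances_mean)[::-1]
```

With real patient data, SHAP values would additionally give per-patient attributions rather than only a global ranking.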
Past Research
Deep Belief Network-Based Model for Modality Completion
September 2023 - November 2023
XJTLU
Supervised by Professor Xiaobo Jin
Research Highlights:
- Developed an encoder-decoder framework for incomplete multi-modal data
- Implemented novel loss functions for data completion and integration
- Combined DBNs with attention mechanisms
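A full DBN with attention is beyond a few lines, but the completion objective can be sketched with a minimal linear encoder-decoder in NumPy: one modality is masked out at the input, and the loss asks the decoder to reconstruct both modalities. All shapes, weights, and the masking scheme are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 6))
X[:, 3:] = X[:, :3] @ rng.normal(size=(3, 3))  # second modality correlated with first

mask = np.ones_like(X)
mask[:, 3:] = 0  # simulate a missing second modality at the input

W_enc = rng.normal(size=(6, 4)) * 0.1
W_dec = rng.normal(size=(4, 6)) * 0.1
mse0 = np.mean(((X * mask) @ W_enc @ W_dec - X) ** 2)  # loss before training

lr = 0.01
for _ in range(500):
    Z = (X * mask) @ W_enc   # encode only the observed modality
    X_hat = Z @ W_dec        # decode both modalities
    err = X_hat - X          # completion loss targets the full input
    W_dec -= lr * Z.T @ err / len(X)
    W_enc -= lr * (X * mask).T @ (err @ W_dec.T) / len(X)

final_mse = np.mean((X_hat - X) ** 2)
```

Because the toy second modality is a linear function of the first, the encoder-decoder can recover it from the masked input, which is the essence of the completion objective.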
Multi-modal Time Series Analysis with Spiking Neural Networks
June 2023 - September 2023
XJTLU
Supervised by Professor Shuliang Zhao
Project Overview:
- Created a multi-modal pulse peak network for heart rate anomaly detection
- Integrated image and time series data analysis
- Enhanced computational efficiency in medical diagnosis
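The pulse-peak idea can be illustrated with a single leaky integrate-and-fire neuron applied to a toy heartbeat waveform: the membrane potential integrates the signal and emits a spike at each pulse peak. The waveform, time constant, and threshold below are toy assumptions, not the network from the project.

```python
import numpy as np

dt = 0.01
t = np.arange(0, 10, dt)
# Toy pulse waveform: sharp periodic peaks standing in for heartbeats at 1.2 Hz.
signal = np.maximum(0.0, np.sin(2 * np.pi * 1.2 * t)) ** 8

tau, threshold = 0.05, 0.5
v, spikes = 0.0, []
for i, x in enumerate(signal):
    # Leaky integrate-and-fire: leak the membrane potential toward the input,
    # then spike and reset whenever the threshold is crossed.
    v += dt / tau * (x - v)
    if v >= threshold:
        spikes.append(i)
        v = 0.0

# Group spikes separated by more than 0.2 s into distinct detected beats.
spikes = np.array(spikes)
beats = 1 + int(np.sum(np.diff(spikes) * dt > 0.2)) if len(spikes) else 0
```

Anomaly detection would then flag irregular inter-beat intervals; here the spike train simply recovers the beat count of the clean toy signal.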
Additional Research Contributions
ARIF Framework
Contributed to the development of an adaptive attention-based cross-modal representation integration framework.
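Attention-based cross-modal integration of this general kind can be sketched as scaled dot-product attention between two modalities' token sets: tokens from one modality query the other, and the attention weights fuse in the most relevant cross-modal context. The token shapes and names here are hypothetical, not ARIF's actual design.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

rng = np.random.default_rng(0)
d = 8
img_tokens = rng.normal(size=(4, d))  # e.g. image-patch features
ts_tokens = rng.normal(size=(6, d))   # e.g. time-series-step features

# Cross-modal attention: image tokens query the time-series tokens,
# so each fused token mixes in the most relevant temporal context.
scores = img_tokens @ ts_tokens.T / np.sqrt(d)
weights = softmax(scores, axis=-1)  # shape (4, 6), rows sum to 1
fused = weights @ ts_tokens         # shape (4, d)
```

A learned version would add query/key/value projections and train the weights end to end; the sketch shows only the fusion mechanics.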
Research Interests
- Multi-modal Learning
- Medical Image Analysis
- Deep Learning
- Machine Learning Interpretability
- Neural Network Architectures
Research Skills
- Programming: Python, PyTorch, TensorFlow
- Data Analysis: Scikit-learn, Pandas, NumPy
- Visualization: Matplotlib, Seaborn
- Medical Imaging: OCT, Fundus Photography
- Machine Learning: Deep Learning, Statistical Analysis
Contact for Research Collaboration
If you’re interested in collaborating on research projects, please feel free to contact me at: Z.Luo21@student.liverpool.ac.uk