
Hao Chen (陳浩)    

Ph.D., Associate Professor

School of Computer Science and Engineering, Southeast University

I am a member of the PAttern Learning and Mining (PALM) Lab.

Email: haochen303@seu.edu.cn

Office: Room 150, School of Computer Science and Engineering, Southeast University Jiulonghu Campus, Nanjing, Jiangsu, China.

 

Brief Biography

Dr. Hao Chen joined the PALM Lab in the School of Computer Science and Engineering, Southeast University, as an associate professor in 2021. He received his Ph.D. from the Robotic Vision Lab at City University of Hong Kong in 2019. From 2019 to 2020, he worked as a research fellow at Nanyang Technological University, Singapore.

Hao Chen, Ph.D., is an associate professor and doctoral supervisor in the PALM Lab, School of Computer Science and Engineering, Southeast University. He is a Jiangsu Province "Shuangchuang" (Innovation and Entrepreneurship) Doctor, a Nanjing selected overseas-returnee talent, and a Southeast University Zijin Young Scholar. He received his Ph.D. from the Robotic Vision Lab at City University of Hong Kong in 2019 and worked as a postdoctoral research fellow at Nanyang Technological University, Singapore, from 2019 to 2020. He has published more than 20 papers in leading international journals and conferences in computer vision and robotics, including IJCV, TIP, CVPR, and IROS, three of which have been recognized as ESI Highly Cited Papers (top 1%). He has led projects funded by the National Natural Science Foundation of China and the Natural Science Foundation of Jiangsu Province, and serves as a long-term reviewer for well-known international journals and conferences in computer vision and robotics.

Research Interests

My research interests mainly focus on computer vision and multi-modal systems, specifically on developing multi-modal learning and interpretation schemes, multi-modal scene-understanding models, and controllable unimodal/multi-modal AIGC. These include:

·        Developing methods for learning, selecting, and fusing multi-modal data (such as RGB-D, 3D point clouds, event data, and vision-language) to improve the accuracy and generalization ability of multi-modal systems, such as those used in autonomous driving (a toy fusion sketch follows this list).

·        Developing transfer learning, weakly-supervised learning, and self-supervised learning schemes for multi-modal data.

·        Conducting multi-modal interpretation to gain insights into the working rules of multi-modal systems.

·        Tackling downstream scene understanding and generation tasks, including computational visual attention modeling, semantic segmentation, action recognition, and controllable unimodal/multi-modal AIGC for images/videos.
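For readers outside the field, the cross-modal fusion theme in the first bullet can be illustrated with a small sketch. The code below is purely illustrative and is not taken from any publication listed on this page; it assumes PyTorch is installed and uses a hypothetical module name (GatedRGBDFusion) to show one simple way of gating a depth feature stream into an RGB feature stream.

```python
# Hypothetical illustration only: a per-channel gated fusion of RGB and
# depth feature maps, in the spirit of "learn, select and fuse".
import torch
import torch.nn as nn


class GatedRGBDFusion(nn.Module):
    """Fuse RGB and depth feature maps with a learned per-channel gate."""

    def __init__(self, channels: int):
        super().__init__()
        # The gate sees both modalities and predicts how strongly each
        # channel of the depth stream should contribute to the fusion.
        self.gate = nn.Sequential(
            nn.Conv2d(2 * channels, channels, kernel_size=1),
            nn.Sigmoid(),
        )

    def forward(self, rgb_feat: torch.Tensor, depth_feat: torch.Tensor) -> torch.Tensor:
        g = self.gate(torch.cat([rgb_feat, depth_feat], dim=1))
        # RGB is kept as the base stream; depth is selectively gated in.
        return rgb_feat + g * depth_feat


if __name__ == "__main__":
    fusion = GatedRGBDFusion(channels=64)
    rgb = torch.randn(1, 64, 32, 32)    # dummy RGB feature map
    depth = torch.randn(1, 64, 32, 32)  # dummy depth feature map
    print(fusion(rgb, depth).shape)     # torch.Size([1, 64, 32, 32])
```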

 

  • Master's students entering in 2024 are welcome to join; both academic-degree and professional-degree quotas are still available. See 招生介紹.pdf for admission information, and please get in touch as soon as possible!

  • The lab welcomes interested second- and third-year undergraduates to join for research training at any time!

  • At present, I have no quota for international students.



Selected Publications (# co-first author, * corresponding author)

 

1. Yuhan Liu, Yongjian Deng, Hao Chen, Zhen Yang. Video Frame Interpolation via Direct Synthesis with the Event-based Reference. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2024. (CCF A)

2. Hao Chen, Feihong Shen, Ding Ding, Yongjian Deng, Chao Li. Disentangled Cross-modal Transformer for RGB-D Salient Object Detection and Beyond. IEEE Transactions on Image Processing, 2024. (JCR Q1, CCF A)

3. Yongjian Deng, Hao Chen*, and Youfu Li. A Dynamic GCN with Cross-Representation Distillation for Event-Based Learning. In Annual AAAI Conference on Artificial Intelligence (AAAI), 2024. (CCF A)

4.  Yongjian Deng, Hao Chen*, and Youfu Li*. A Voxel Graph CNN for Object Classification with Event Cameras. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2022. (CCF A)

5.   Hao Chen, Youfu Li*, Yongjian Deng, and Guosheng Lin. CNN-based RGB-D salient object detection: learn, select and fuse. International Journal of Computer Vision, 2021, 129 (7), 2076-2096. (JCR Q1, CCF A)

6.  Yongjian Deng#, Hao Chen#, and Youfu Li*. Learning from Images: A Distillation Learning Framework for Event Cameras. IEEE Transactions on Image Processing, 2021, 30, 4919-4931. (JCR Q1, CCF A)

7. Yongjian Deng#, Hao Chen#, and Youfu Li*. MVF-Net: A multi-view fusion network for event-based object classification. IEEE Transactions on Circuits and Systems for Video Technology, 2021. (JCR Q1)

8.  Hao Chen, Youfu Li*, and Dan Su. Discriminative cross-modal transfer learning and densely cross-level feedback fusion for RGB-D salient object detection. IEEE Transactions on Cybernetics, 50(11): 4808-4820, 2020. (JCR Q1)

9.  Hao Chen, Yongjian Deng, Youfu Li*, Tzu-Yi Hung, and Guosheng Lin*. RGBD salient object detection via disentangled cross-modal fusion. IEEE Transactions on Image Processing, 29:8407–8416, 2020. (JCR Q1, CCF A)

10. Hao Chen and Youfu Li*. Three-stream attention-aware network for RGB-D salient object detection. IEEE Transactions on Image Processing, 28(6):2825–2835, 2019. (JCR Q1, CCF A, ESI Highly Cited Paper)

11. Hao Chen, Youfu Li*, and Dan Su. Multi-modal fusion network with multi-scale multi-path and cross-modal interactions for RGB-D salient object detection. Pattern Recognition, 86:376–385, 2019. (JCR Q1, ESI Highly Cited Paper)

12. Hao Chen and Youfu Li*. Progressively complementarity-aware fusion network for RGB-D salient object detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 3051–3060, 2018. (CCF A)

13. Hao Chen, You-Fu Li*, and Dan Su. Attention-aware cross-modal cross-level fusion network for RGB-D salient object detection. In 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pages 6821–6826. IEEE, 2018. (Top Conference on Robotics)

14. Junwei Han*, Hao Chen, Nian Liu, Chenggang Yan, and Xuelong Li. CNNs-based RGB-D saliency detection via cross-view transfer and multi-view fusion. IEEE Transactions on Cybernetics, 48(11): 3171-3183, 2017. (JCR Q1, ESI Highly Cited Paper)

 

Projects

1. National Natural Science Foundation of China, Young Scientists Fund, 2022.01–2024.12, Principal Investigator

2. Natural Science Foundation of Jiangsu Province, Youth Fund, 2021.07–2024.06, Principal Investigator

3. National Natural Science Foundation of China, International (Regional) Cooperation and Exchange Project, 2023.01–2026.12, Lead of a sub-grant

4. Nanjing Science and Technology Innovation Project for Returned Overseas Students, 2023.01–2023.12, Principal Investigator

5. Innovation Special Zone Project, 2023.12–2024.12, Principal Investigator


 

