Yufei Cui

Yufei Cui, Postdoc,
School of Computer Science,
McGill University


Email:
yufei dot cui at mail dot mcgill dot ca
yufei dot cui at mila dot quebec

Google Scholar, DBLP

About

I am a postdoc at McGill University and Mila, advised by Prof. Xue Liu.

I am an advisor to the medical AI research lab at Bingli Tech, Guangzhou.

From July 2021 to June 2022, I worked as a postdoc in MLab at CityU HK. Prior to that, I obtained my PhD from CityU HK, advised by Prof. Chun Jason Xue, Prof. Antoni B. Chan, and Prof. Tei-Wei Kuo. I received an MS in Telecommunications from HKUST and a BE in Communication Engineering from Shandong University.

My research interests include:

  1. Probabilistic deep learning, with applications in:

    1. Medical imaging and histopathology.

    2. Lightweight neural networks and embedded AI.

    3. Data compression and learned indexes.

  2. Retrieval-augmented language models.

News

Our paper “Improving Natural Language Understanding with Computation-Efficient Retrieval Representation Fusion” was accepted at ICLR 2024. (Preprint, Poster)

Our team, HugeRabbit, won second place at the 2023 ACM/IEEE TinyML Design Contest. (News, Code)

Our paper “Improving Natural Language Understanding with Computation-Efficient Retrieval Representation Fusion” was accepted at ENLSP-NeurIPS 2023. (Preprint, Poster)

Our paper “Retrieval-Augmented Multiple Instance Learning” was accepted at NeurIPS 2023. (Paper, Code)

Our paper “Faster and Stronger Lossless Compression with Optimized Autoregressive Framework” was accepted at DAC 2023.

Our paper “Bayes-MIL: A New Probabilistic Perspective on Attention-based Multiple Instance Learning for Whole Slide Images” was accepted at ICLR 2023. (Paper, Code)

Our paper “Variational Nested Dropout” was accepted by IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI).

Our paper “Precise Augmentation and Counting of Helicobacter Pylori in Histology Image” was accepted at MED-NEURIPS 2022. (Dataset)

Our paper “Bits-Ensemble: Towards Light-Weight Robust Deep Ensemble by Bits-Sharing” was accepted at CASES 2022 and in TCAD.
Sharing the less-significant quantization bits across ensemble members substantially reduces ensemble size without loss of performance.
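
A minimal NumPy sketch of the bits-sharing idea (the toy weights and the fixed 4/4 bit split are my assumptions, not the paper's setup): each ensemble member keeps its own most-significant bits, while the least-significant bits are stored once and shared.

```python
import numpy as np

def quantize(w, n_bits=8):
    # Uniform quantization of weights in [-1, 1] to unsigned n-bit codes.
    levels = 2 ** n_bits - 1
    return np.round((w + 1) / 2 * levels).astype(np.uint8)

rng = np.random.default_rng(0)
members = [rng.uniform(-1, 1, 1000) for _ in range(4)]  # toy 4-member ensemble
codes = [quantize(w) for w in members]

LSB = 4                                        # assumed 4 MSBs / 4 LSBs split
shared_lsb = codes[0] & ((1 << LSB) - 1)       # low-order bits stored once
per_member_msb = [c >> LSB for c in codes]     # high-order bits kept per member

# Reconstruction: each member = its own MSBs plus the shared LSBs.
recon = [(m << LSB) | shared_lsb for m in per_member_msb]

# Storage per weight position: 4 * 4 + 4 = 20 bits instead of 4 * 8 = 32.
```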

Our paper “Accelerating General-purpose Lossless Compression via Simple and Scalable Parameterization” was accepted at ACM MM 2022.
Explicitly modeling the local dependency of symbols with variants of Variational Nested Dropout helps build an efficient data compressor.
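
A toy Python illustration of the principle at work (the order-2 adaptive byte model and the sample string are mine, not the paper's architecture): an autoregressive model assigns each symbol a probability given its recent context, and an entropy coder then spends about -log2 p bits on it, so sharper local modeling directly shortens the code.

```python
import numpy as np

data = np.frombuffer(b"abracadabra abracadabra", dtype=np.uint8)
CONTEXT = 2                                 # local-dependency window (assumed)
tables = {}                                 # context -> adaptive symbol counts
total_bits = 0.0
for i, s in enumerate(data):
    ctx = bytes(data[max(0, i - CONTEXT):i])
    counts = tables.setdefault(ctx, np.ones(256))     # Laplace-smoothed counts
    total_bits += -np.log2(counts[s] / counts.sum())  # ideal coding cost
    counts[s] += 1                          # adapt after coding the symbol
print(f"{total_bits / len(data):.2f} bits/symbol (raw bytes use 8)")
```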

Our preprint “Variational Nested Dropout” is available on arXiv and is under review as a journal paper. (Preprint, Code)
It proposes a new formulation of the variational autoencoder with an ordered latent space.
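
A minimal PyTorch sketch of the nested-dropout mechanism behind the ordered latent space (the fixed geometric prior here is the classic formulation; the paper's variational treatment makes the ordering learnable): a cut-off index is sampled per example and every latent dimension after it is zeroed, forcing earlier dimensions to carry the most important information.

```python
import torch

def nested_dropout(z, p=0.2):
    """z: (batch, dim) latent codes; zero all dims after a sampled cut-off."""
    batch, dim = z.shape
    # One cut-off index per example, drawn from a geometric distribution.
    b = torch.distributions.Geometric(probs=torch.full((batch,), p)).sample()
    b = b.long().clamp(max=dim - 1)
    idx = torch.arange(dim, device=z.device).expand(batch, dim)
    mask = (idx <= b.unsqueeze(1)).to(z.dtype)
    return z * mask

z = torch.randn(8, 16)
z_ordered = nested_dropout(z)  # later dimensions are dropped more often
```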

Our paper “NFL: Robust Learned Index via Distribution Transformation” was accepted at VLDB 2022. (Preprint, Code)
Transforming the distribution of keys with a normalizing flow helps build an efficient learned index.
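
A rough NumPy sketch of the distribution-transformation idea (an empirical-CDF transform stands in for the paper's normalizing flow, and the key data is synthetic): mapping skewed keys toward a uniform distribution makes the key-to-position relationship nearly linear, so a simple model can predict positions with a small, bounded error.

```python
import numpy as np

rng = np.random.default_rng(0)
keys = np.sort(rng.lognormal(size=10_000))       # skewed key distribution
pos = np.arange(keys.size)

# Stand-in for the flow: an empirical CDF fitted on a small sample of keys.
sample = np.sort(rng.choice(keys, 512))
u = np.searchsorted(sample, keys) / sample.size  # keys mapped toward uniform

a, b = np.polyfit(u, pos, 1)                     # linear "learned index"
err = np.abs(a * u + b - pos)
print(f"max position error: {err.max():.0f} of {keys.size} keys")
# Lookup: predict a position, then search only within the max-error window.
```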

Our paper “A Fast Transformer-based General-Purpose Lossless Compressor” was accepted at TheWebConf 2022. (Paper, Code)

Our paper “CacheSifter: Sifting Cache Files for Boosted Mobile Performance and Lifetime” was accepted at FAST 2022. (Paper)

Serving as a reviewer for CVPR-22, NeurIPS-22, and ICML-22.

Our paper “Online Rare Category Identification and Data Diversification for Edge Computing” was accepted by TCAD. (Paper)

Code for “Bayesian Nested Neural Networks for Uncertainty Calibration and Adaptive Compression”, which uses the proposed variational nested dropout, is released. (Code)

Our paper “FlashEmbedding: Storing Embedding Tables in SSD for Large-Scale Recommender Systems” was accepted at APSys 2021. (Paper)

Our paper “Improve Generalization and Robustness of Neural Networks via Weight Scale Shifting Invariant Regularizations” was accepted at the ICML 2021 Workshop on Adversarial Machine Learning. (Paper)

Our paper “Bayesian Nested Neural Networks for Uncertainty Calibration and Adaptive Compression” was accepted at CVPR 2021. (Paper, Code)