Yufei Cui

Yufei Cui, Postdoc,
School of Computer Science,
McGill University


Email: yufei dot cui at mail dot mcgill dot ca
Google Scholar, DBLP

About

I am a postdoc at McGill University, advised by Prof. Xue Liu. I also lead the AI research lab at Bingli Tech, Guangzhou.

From July 2021 to June 2022, I worked as a postdoc in MLab at CityU HK. Prior to that, I obtained my PhD from CityU HK, advised by Prof. Chun Jason Xue, Prof. Antoni B. Chan, and Prof. Tei-Wei Kuo. I received an MS in Telecommunications from HKUST and a BE in Communication Engineering from Shandong University.

My research interests include:

  1. Probabilistic deep learning, with applications in:

    1. Medical imaging and histopathology.

    2. Lightweight neural networks and embedded AI.

    3. Data engineering: data compression and learned indexes.

  2. Storage: optimization for flash reliability.

News

Our paper “Bits-Ensemble: Towards Light-Weight Robust Deep Ensemble by Bits-Sharing” was accepted at CASES 2022 and in TCAD.
Sharing the less significant quantization bits across ensemble members significantly reduces the ensemble's size without loss of performance.
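The storage saving behind bit-sharing can be illustrated with a toy NumPy sketch. This is my own illustration under assumed parameters (8-bit quantization, a 4-member ensemble, and a 4-bit private / 4-bit shared split), not the paper's actual scheme:

```python
import numpy as np

def quantize(w, n_bits=8):
    """Uniformly quantize weights in [-1, 1) to unsigned n-bit codes."""
    q = np.floor((w + 1.0) / 2.0 * (1 << n_bits))
    return np.clip(q, 0, (1 << n_bits) - 1).astype(np.uint8)

rng = np.random.default_rng(0)
n_weights = 1000
members = [rng.uniform(-1, 1, size=n_weights) for _ in range(4)]
codes = [quantize(w) for w in members]

# Keep the 4 most significant bits per member; share one copy of the
# 4 least significant bits across the whole ensemble.
msbs = [c >> 4 for c in codes]
shared_lsb = codes[0] & 0x0F

# Each member is reconstructed from its private MSBs plus the shared LSBs.
recon = [(m << 4) | shared_lsb for m in msbs]

# Storage per weight: 4 members x 4 private bits + 4 shared bits,
# versus 4 members x 8 bits when each member is stored independently.
bits_shared = (4 * 4 + 4) * n_weights   # 20,000 bits
bits_naive = 4 * 8 * n_weights          # 32,000 bits
```

In this toy setup every member's reconstruction differs from its original code only in the low-order bits, which is the intuition behind shrinking the ensemble without hurting performance.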

Our paper “Accelerating General-purpose Lossless Compression via Simple and Scalable Parameterization” was accepted at ACM MM 2022.
Explicitly modeling local dependency of symbols using variants of Variational Nested Dropout helps build an efficient data compressor.

Our preprint “Variational Nested Dropout” is available on arXiv and is under review as a journal paper. (Preprint, Code)
It proposes a new formulation of the variational autoencoder with an ordered latent space.
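The ordering mechanism builds on classic nested dropout (Rippel et al., 2014): sample a truncation index and zero every latent unit after it, so earlier units are pushed to carry more information. A minimal sketch of that masking step (my own illustration, not the paper's variational formulation):

```python
import numpy as np

def nested_dropout(z, rng):
    """Sample a truncation index b and zero all latent units after it.

    Units with smaller indices survive more often across samples, so
    training with this mask orders the latent dimensions by importance.
    """
    d = z.shape[-1]
    # Uniform truncation index here for simplicity; a geometric prior
    # is the typical choice in the nested dropout literature.
    b = rng.integers(1, d + 1)
    mask = np.zeros(d)
    mask[:b] = 1.0
    return z * mask

rng = np.random.default_rng(0)
z = rng.normal(size=8)              # latent code from an encoder
z_trunc = nested_dropout(z, rng)    # leading prefix kept, tail zeroed
```

At test time, keeping only the first b dimensions then gives a graceful accuracy/size trade-off, since the tail dimensions were trained to be least important.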

Our paper “NFL: Robust Learned Index via Distribution Transformation” was accepted at VLDB 2022. (Preprint, Code)
Transforming the distribution of keys with a normalizing flow helps build an efficient learned index.
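The general mechanism can be sketched as follows: a learned index approximates the keys' CDF to predict each key's position, and transforming keys toward a uniform distribution makes that model trivial. In this illustration of mine, a hand-written Gaussian CDF stands in for the learned normalizing flow; all names and parameters are made up:

```python
import math
import numpy as np

def gaussian_cdf(keys, mu, sigma):
    """Monotone map sending roughly-Gaussian keys to near-uniform [0, 1].
    (A hand-picked stand-in for the learned distribution transformation.)"""
    return np.array([0.5 * (1.0 + math.erf((k - mu) / (sigma * math.sqrt(2.0))))
                     for k in keys])

rng = np.random.default_rng(0)
keys = np.sort(rng.normal(loc=100.0, scale=15.0, size=10_000))

# Once keys are mapped to near-uniform values u, position ~ u * n, so a
# trivial linear model predicts each key's slot in the sorted array.
u = gaussian_cdf(keys, keys.mean(), keys.std())
pred = np.clip((u * len(keys)).astype(int), 0, len(keys) - 1)
max_err = int(np.max(np.abs(pred - np.arange(len(keys)))))  # search window size
```

A lookup then only needs a local search (e.g. binary search) within ±max_err positions of the prediction, instead of a full tree traversal.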

Our paper “A Fast Transformer-based General-Purpose Lossless Compressor” was accepted at TheWebConf 2022. (Paper, Code)

Our paper “CacheSifter: Sifting Cache Files for Boosted Mobile Performance and Lifetime” was accepted at FAST 2022. (Paper)

Serving as a reviewer for CVPR 2022, NeurIPS 2022, and ICML 2022.

Our paper “Online Rare Category Identification and Data Diversification for Edge Computing” was accepted in TCAD. (Paper)

Code for “Bayesian Nested Neural Networks for Uncertainty Calibration and Adaptive Compression” using the proposed variational nested dropout is released. (Code)

Our paper “FlashEmbedding: Storing Embedding Tables in SSD for Large-scale Recommender Systems” was accepted at APSys 2021. (Paper)

Our paper “Improve Generalization and Robustness of Neural Networks via Weight Scale Shifting Invariant Regularizations” was accepted at the ICML Workshop on Adversarial Machine Learning 2021. (Paper)

Our paper “Bayesian Nested Neural Networks for Uncertainty Calibration and Adaptive Compression” was accepted at CVPR 2021. (Paper, Code)