I am interested in building machine learning models that are Robust, Safe, and ideally as Sample Efficient as humans. I have broad interests in Deep Learning, Natural Language Processing, and Signal Processing.
Currently I work as a Principal Researcher at Microsoft Research and an Affiliate Associate Professor at the University of Washington, and I serve as an advisor for Koidra and as Area Editor for the IEEE Signal Processing Magazine Newsletter. I have had the privilege to be a Senior Member of IEEE, to serve on the organizing committee of ACL 2020 as one of the virtual infrastructure chairs, to work as a mentor at the Microsoft AI School from 2017 to 2019 and the Microsoft AI Residency Program from 2019 to 2020, and to collaborate on R&D with various Microsoft product teams in Microsoft Azure (image captioning, hate speech detection), Office (document recommendation), Bing (text-image retrieval), and CELA (developing an initial NLP system to understand legal contracts).
I have a PhD in Electrical and Computer Engineering from the University of British Columbia, advised by Rabab Ward and Li Deng, where I worked on Sparse Decomposition and Compressive Sensing, and on Deep Sentence Representations for Web Search Engines and Information Retrieval; the latter received the 2018 IEEE Signal Processing Society Best Paper Award. My MSc and BSc were in Electrical Engineering.
For general inquiries, please contact me at hamidpalangi [at] ieee [dot] org; for Microsoft-related inquiries, please contact me at hpalangi [at] microsoft [dot] com.
For references, please refer to the recommendations section of my LinkedIn profile or send me an email.
2022 NaturalAdversaries: Can Naturalistic Adversaries Be as Effective as Artificial Adversaries? To appear at EMNLP 2022: Paper Code
2022 Robustness analysis for vision-language models will appear at NeurIPS 2022: Paper Website
2022 How robust are video action recognition models to real-world perturbations? Paper Website
2022 (De)ToxiGen: we are releasing a dataset (ToxiGen) covering 13 minority groups; a tool (ALICE), an adversarial decoding method to stress-test and improve any given off-the-shelf hate speech detector; two hate speech detection checkpoints; and the source code for all of the above. Please stop by if you are at ACL 2022: Paper Code
2022 Our work on creating more robust hate speech detection tools has been accepted at ACL 2022 as a long paper.
2022 We will be giving a tutorial at AAAI 2022 on “Neuro-Symbolic Methods for Language and Vision”.
2022 We are organizing a workshop at CVPR 2022 on “Robustness in Sequential Data”.
2022 During 2021, I conducted a series of non-technical interviews with industry leaders in signal processing and machine learning. The questions were the same for all interviewees, to learn from their journeys. Here are the links for interviews with Yoshua Bengio, Tomas Mikolov, Max Welling, Xuedong Huang, Dong Yu, Luna Dong, Henrique Malvar and Greg Mori.
2021 “NICE: Neural Image Commenting with Empathy” will appear at EMNLP 2021.
2021 “Enriching Transformers with Structured Tensor-Product Representations for Abstractive Summarization” will appear at NAACL 2021.
2021 Will be serving as Associate Editor for IEEE Signal Processing Magazine Newsletter.
2021 “Compositional Processing Emerges in Neural Networks Solving Math Problems” will appear at CogSci 2021.
2021 Will be participating in the young professionals panel at ICASSP 2021 as a panelist; if you are attending ICASSP, please stop by.
2021 “Structural Biases for Improving Transformers on Translation into Morphologically Rich Languages” will appear at LoResMT 2021.
2020 “Neuro-Symbolic Representations for Video Captioning: A Case for Leveraging Inductive Biases for Vision and Language” has been released as a technical report to arxiv.
2020 Do the tasks/datasets that machine learning researchers use for natural language reasoning grounded in vision (visual question answering) actually measure the reasoning capability of the models? Or are they more of a competition over who has the better vision backbone (perception), which may or may not be publicly available to others? Our recent work, proposing a Neuro-Symbolic approach that disentangles reasoning from perception to address this issue, has been accepted at ICML 2020.
2020 My interview with IEEE Signal Processing Newsletter.
2020 Leveraging Neuro-Symbolic representations to solve math problems helps us better understand neural models and impose necessary discrete inductive biases on them. What are the necessary ingredients for these types of structures to be effective at such reasoning tasks? In our recent effort, we propose an approach that will be presented at ICML 2020.
2020 Will be serving as Area Chair for Multimodality track at ACL 2020.
2020 How can we leverage the large number of image-text pairs available on the web to mimic the way people improve their scene and language understanding through weak supervision? Our work on large-scale Vision and Language Pretraining is a step in this direction. Two short posts related to this work are available at MSFT Blog and VentureBeat. We will be presenting this work at AAAI 2020 (spotlight).
2020 Will be serving as a Member of Organizing Committee of ACL 2020.
2019 “Mapping Natural-language Problems to Formal-language Solutions Using Structured Neural Representations” received best paper award at NeurIPS 2019 KR2ML workshop, congrats to our intern and all the authors.
2019 “Learning Visual Relation Priors for Image-Text Matching and Image Captioning with Neural Scene Graph Generators” has been released as a technical report to arxiv.
2019 Our work “Deep Sentence Embedding Using Long Short-Term Memory Networks: Analysis and Application to Information Retrieval” has been selected for IEEE Signal Processing Society Best Paper Award for 2018 (Test of Time). It was announced during ICASSP 2019 in Brighton, UK. Congratulations to the team and wonderful collaborators!
2019 “HUBERT Untangles BERT to Improve Transfer across NLP Tasks” has been released as a technical report to arxiv.
2019 I have written a short recap about ICASSP 2019.
2018 Epilepsy is one of the most common neurological disorders in the world, affecting over 70 million people globally, 30 to 40 per cent of whom do not respond to medication. Our recent work published in Clinical Neurophysiology proposes optimized DNN architectures to detect epileptic seizures.
2018 Our work on perceptually de-hashing image hashes for similarity retrieval will appear in Signal Processing: Image Communication.
2018 Our work on robust detection of epileptic seizures will be presented at ICASSP 2018.
2018 Our work on leveraging Neuro-Symbolic representations to design DL models for the Question Answering task with greater interpretability, using Paul Smolensky’s Tensor Product Representations (TPRs), was accepted at AAAI 2018 (Oral Presentation). Two short posts related to this work are available here and here.
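As a side note for readers unfamiliar with TPRs, the core binding idea can be sketched in a few lines of NumPy. This is my own minimal illustration of Smolensky's general scheme, not the model from the paper: each symbol (filler) is bound to a structural position (role) by an outer product, and the bindings are summed into a single distributed representation; with orthonormal roles, a filler can be recovered exactly by unbinding.

```python
import numpy as np

# Minimal Tensor Product Representation (TPR) sketch.
# Fillers are symbol vectors; roles are structural-position vectors.
rng = np.random.default_rng(0)

d_f, d_r = 4, 3                       # filler and role dimensions
fillers = {"cat": rng.normal(size=d_f), "sat": rng.normal(size=d_f)}
roles = np.eye(d_r)                   # orthonormal roles allow exact unbinding

# Bind "cat" to role 0 and "sat" to role 1, then superpose the bindings.
T = np.outer(fillers["cat"], roles[0]) + np.outer(fillers["sat"], roles[1])

# Unbind: contracting the TPR with a role vector recovers its filler.
recovered = T @ roles[0]
assert np.allclose(recovered, fillers["cat"])
```

With non-orthogonal roles, unbinding is only approximate, which is part of what makes learning good role embeddings interesting in neural models.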
2017 Presented a tutorial at IEEE GlobalSIP 2017 in Montreal about various Deep Learning frameworks (PyTorch, TensorFlow, MXNet, Chainer, Theano, …).
2017 Our recent results leveraging Neuro-Symbolic representations in deep NLP models will be presented at NIPS 2017 Explainable AI Workshop.
2016 I have summarized what I learned from the Deep Learning Summer School 2016 in Montreal; check it out here.
2016 I defended my PhD!
2016 Our recent work on sentence embedding for web search and IR will appear in IEEE/ACM Transactions on Audio, Speech, and Language Processing.
2016 Our recent work proposing a deep learning approach for distributed compressive sensing will appear in IEEE Transactions on Signal Processing. Check out the paper and a post about it at Nuit Blanche. The code is open source, give it a try! For more information about compressive sensing, check out here.
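For readers new to compressive sensing, the basic setting can be sketched in a few lines. This is a generic illustration of the classical problem, assuming a standard greedy recovery method (Orthogonal Matching Pursuit), not the deep learning approach from the paper: a sparse signal is recovered from far fewer random measurements than its length.

```python
import numpy as np

# Classical compressive sensing sketch: recover a k-sparse signal x
# of length n from m < n random linear measurements y = A @ x.
rng = np.random.default_rng(0)
n, m, k = 50, 30, 2                       # signal length, measurements, sparsity

x = np.zeros(n)
x[rng.choice(n, size=k, replace=False)] = rng.normal(size=k)

A = rng.normal(size=(m, n)) / np.sqrt(m)  # Gaussian sensing matrix
y = A @ x                                 # compressed measurements

# Orthogonal Matching Pursuit: greedily select the column most
# correlated with the residual, then re-fit coefficients on the
# selected support by least squares.
support, residual = [], y.copy()
for _ in range(k):
    support.append(int(np.argmax(np.abs(A.T @ residual))))
    coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
    residual = y - A[:, support] @ coef

x_hat = np.zeros(n)
x_hat[support] = coef
```

Deep learning approaches like the one in the paper replace such hand-designed iterative solvers with a learned mapping from measurements back to the signal, which can exploit structure beyond plain sparsity.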