Welcome

My research interests are in the areas of Artificial Intelligence, Machine Learning and Natural Language Processing. At the moment, I am passionate about contributing to the understanding, control, and training of foundation models.

I am a Principal Researcher at Microsoft Research, an Affiliate Associate Professor at the University of Washington, and an advisor for Koidra. I am a Senior Member of IEEE and have had the privilege to serve as Area Editor for the IEEE Signal Processing Magazine Newsletter, to be part of the organizing committee of ACL 2020 as one of the virtual infrastructure chairs, and to work as a mentor at the Microsoft AI School from 2017 to 2019 and the Microsoft AI Residency Program from 2019 to 2020. I have had R&D collaborations with various Microsoft teams in Cognitive Services (image captioning), Azure (hate speech detection, which resulted in ToxiGen; it has been used in Llama2, Code Llama, Orca1, Orca2, phi-1.5, phi-2, Llama2 Long-Context, Google’s Gemma, and also to detect toxicity in Laws and Econ Forums [1, 2]), Office (document recommendation), Bing (New Bing, text-image retrieval), and CELA (an initial NLP system to understand legal contracts).

I hold a Ph.D. in Electrical and Computer Engineering, where I worked on Sparse Decomposition and Compressive Sensing and on Deep Sentence Representations for Web Search Engines and Information Retrieval; the latter work received the 2018 IEEE Signal Processing Society Best Paper Award. My M.Sc. and B.Sc. were in Electrical Engineering.

For general inquiries, please contact me at hamidpalangi [at] ieee [dot] org; for Microsoft-related inquiries, please contact me at hpalangi [at] microsoft [dot] com.


News:

2023 Gave a keynote at AACL 2023: Mind the Gaps: Adversarial Testing in Generative AI, Challenges and Opportunities
2023 Aging with GRACE: Lifelong Model Editing with Discrete Key-Value Adaptors will appear in NeurIPS 2023: Paper MSFT Blog Code
2023 We have released Orca 2, its detailed evaluation, and the checkpoints: Paper MSFT Blog Model: 13B ckpt 7B ckpt
2023 Can we use synthetic tasks for which hallucination is simple to detect automatically, optimize the language model to hallucinate less given this clear evaluation scheme, and then transfer the behavior to the same language model on a real-world task? We made several interesting observations and report them here: Paper
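To make the setup concrete, here is a minimal sketch of a synthetic lookup task where hallucination is machine-checkable (all names and the task itself are hypothetical illustrations, not the paper's actual tasks): the model is handed a small key-value table, is queried for one key, and any answer that differs from the stored value counts as a hallucination.

```python
import random

# Hypothetical sketch: a synthetic lookup task where hallucination is
# trivially detectable. Not the paper's actual setup.

def make_task(rng, n_items=5):
    """Build a small key-value table and pick one key to query."""
    table = {f"key{i}": f"value{rng.randrange(1000)}" for i in range(n_items)}
    query = rng.choice(list(table))
    return table, query

def is_hallucination(table, query, model_answer):
    """The answer hallucinates iff it differs from the stored gold value."""
    return model_answer != table[query]

# Usage with a stub "model" that sometimes invents a value:
rng = random.Random(0)
table, query = make_task(rng)
answer = table[query] if rng.random() < 0.8 else "made-up-value"
print(query, answer,
      "hallucinated" if is_hallucination(table, query, answer) else "faithful")
```

Because the check is exact, such tasks give a clean optimization signal before attempting transfer to real-world tasks.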
2023 A Framework for Automated Measurement of Responsible AI Harms in Generative AI Applications: Paper
2023 Gave an invited talk at CVPR 2023: Lost in Translation: The Difficulty of Evaluating Image Captioning: Slides
2023 Does Diversity of Thought Improve Reasoning Abilities of Large Language Models?: Paper
2023 Attention Satisfies? A Constraint-Satisfaction Lens on Factual Errors of Language Models: Paper
2023 Evaluating Cognitive Maps and Planning in Large Language Models with CogEval will appear in NeurIPS 2023: Paper
2023 Gender-tuning: Empowering Fine-tuning for Debiasing Pre-trained Language Models will appear in ACL 2023: Paper
2023 Improving the Reusability of Pre-trained Language Models in Real-world Applications received the Best Paper Award in IEEE IRI 2023: Paper
2023 Orca: a student 13B language model learning how to perform well on various tasks from its teachers ChatGPT and GPT-4: Paper
2023 Mitigating Spurious Correlations in Multi-modal Models during Fine-tuning will appear in ICML 2023: Paper
2023 How can we find important examples from the training data contributing the most to gender fairness of the models? Here is a possible approach that will appear in AAAI 2023: Paper
2023 Sparks of Artificial General Intelligence? Early experiments with GPT-4: Paper
2023 What are the existing metrics to measure representation harms in Large Language Models? Can we use existing datasets to measure their effectiveness? ACL 2023 Paper Code & Dataset
2023 Robustness analysis of video action recognition models against real world perturbations will appear at CVPR 2023: Paper Website
2022 Do Text to Image generation models understand simple spatial relationships? We studied DALL·E, Stable Diffusion, CogView, Composable Diffusion, DALL·E-mini and GLIDE using a dataset that we created (SR2D) and a metric that we proposed (VISOR) and the answer is most probably not! Paper Code & Dataset
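As a rough illustration of how such spatial relationships can be verified automatically (this is only a sketch; the actual VISOR metric is defined in the paper), one can run an object detector on the generated image and compare bounding-box centers. The box format below, (x0, y0, x1, y1) in pixels, is an assumption for the example.

```python
# Hedged sketch: checking "A to the left of B" from detected bounding boxes.
# Illustrative only; see the paper for the actual VISOR definition.

def centroid(box):
    """Center point of an (x0, y0, x1, y1) box."""
    x0, y0, x1, y1 = box
    return ((x0 + x1) / 2, (y0 + y1) / 2)

def left_of(box_a, box_b):
    """True if A's center lies to the left of B's center."""
    return centroid(box_a)[0] < centroid(box_b)[0]

# Example: a dog near the left edge and a cat near the right edge.
dog = (10, 100, 110, 200)
cat = (400, 100, 500, 200)
print(left_of(dog, cat))  # True -> "a dog to the left of a cat" holds
```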
2022 NaturalAdversaries: Can Naturalistic Adversaries Be as Effective as Artificial Adversaries? To appear at EMNLP 2022: Paper Code
2022 Robustness analysis for vision-language models will appear at NeurIPS 2022: Paper Website
2022 (De)ToxiGen: we are releasing a dataset (ToxiGen) covering 13 minority groups, a tool (ALICE), which is an adversarial decoding method to stress test and improve any given off-the-shelf hate speech detector, two hate speech detection checkpoints, and the source code for all of the above. Please stop by if you are at ACL 2022: Paper Code & Dataset
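The toy sketch below conveys the flavor of putting a classifier in the decoding loop (this is not ALICE's actual algorithm, and both stubs are hypothetical): candidate next tokens proposed by a generator are re-scored with the detector so that generation is steered relative to the detector's decision boundary.

```python
# Toy sketch of classifier-in-the-loop decoding. Both the "LM" and the
# "detector" are hypothetical stubs, not the released models.

def lm_next_tokens(prefix):
    """Stub LM: propose candidate next tokens with probabilities."""
    return {"friendly": 0.5, "neutral": 0.3, "hostile": 0.2}

def detector_score(text):
    """Stub detector: probability the text gets flagged (keyword-based)."""
    return 0.9 if "hostile" in text else 0.1

def guided_step(prefix, weight=1.0):
    """Pick the token maximizing LM probability minus a detector penalty."""
    candidates = lm_next_tokens(prefix)
    def score(tok):
        return candidates[tok] - weight * detector_score(prefix + " " + tok)
    return max(candidates, key=score)

print(guided_step("the stranger seemed"))  # picks "friendly": steered away
```

Flipping the sign of the penalty steers generation toward text the detector misses, which is the stress-testing direction.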
2022 Our work on creating more robust hate speech detection tools has been accepted at ACL 2022 as a long paper.
2022 We will be giving a tutorial at AAAI 2022 on “Neuro-Symbolic Methods for Language and Vision”. Slides
2022 We are organizing a workshop at CVPR 2022 on “Robustness in Sequential Data”.
2022 During 2021 I conducted a series of non-technical interviews with industry leaders in signal processing and machine learning. The questions were the same for all interviewees, to learn from their journeys. Here are the links for the interviews with Yoshua Bengio, Tomas Mikolov, Max Welling, Xuedong Huang, Dong Yu, Luna Dong, Henrique Malvar and Greg Mori.
2021 “NICE: Neural Image Commenting with Empathy” will appear at EMNLP 2021.
2021 “Enriching Transformers with Structured Tensor-Product Representations for Abstractive Summarization” will appear at NAACL 2021.
2021 Will be serving as Associate Editor for IEEE Signal Processing Magazine Newsletter.
2021 “Compositional Processing Emerges in Neural Networks Solving Math Problems” will appear at CogSci 2021.
2021 Will be participating in the young professionals panel at ICASSP 2021 as a panelist; if you are attending ICASSP, please stop by.
2021 “Structural Biases for Improving Transformers on Translation into Morphologically Rich Languages” will appear at LoResMT 2021.
2020 “Neuro-Symbolic Representations for Video Captioning: A Case for Leveraging Inductive Biases for Vision and Language” has been released as a technical report on arXiv.
2020 Do the tasks/datasets that machine learning researchers use for natural language reasoning grounded in vision (visual question answering) actually measure the reasoning capability of the models? Or are they more of a competition about who has a better vision backbone (perception), which may or may not be publicly available? Our recent work, which proposes a Neuro-Symbolic approach that disentangles reasoning from perception to address this issue, has been accepted at ICML 2020.
2020 My interview with IEEE Signal Processing Newsletter.
2020 Leveraging Neuro-Symbolic representations to solve math problems helps us better understand neural models and impose necessary discrete inductive biases on them. What are the necessary ingredients for these types of structures to be effective on such reasoning tasks? In our recent effort we propose an approach that will be presented at ICML 2020.
2020 Will be serving as Area Chair for Multimodality track at ACL 2020.
2020 How can we leverage the large number of image-text pairs available on the web to mimic the way people improve their scene and language understanding through weak supervision? Our work on large-scale Vision and Language Pretraining is a step in this direction. Two short posts related to this work are available at MSFT Blog and VentureBeat. We will be presenting this work at AAAI 2020 (spotlight).
2020 Will be serving as a Member of Organizing Committee of ACL 2020.
2019 “Mapping Natural-language Problems to Formal-language Solutions Using Structured Neural Representations” received Best Paper Award at NeurIPS 2019 KR2ML workshop, congrats to our intern and all the authors.
2019 “Learning Visual Relation Priors for Image-Text Matching and Image Captioning with Neural Scene Graph Generators” has been released as a technical report on arXiv.
2019 Our work “Deep Sentence Embedding Using Long Short-Term Memory Networks: Analysis and Application to Information Retrieval” has been selected for IEEE Signal Processing Society Best Paper Award for 2018 (Test of Time). It was announced during ICASSP 2019 in Brighton, UK. Congratulations to the team and wonderful collaborators!
2019 “HUBERT Untangles BERT to Improve Transfer across NLP Tasks” has been released as a technical report on arXiv.
2019 I have written a short recap about ICASSP 2019.
2018 Epilepsy is one of the most common neurological disorders in the world, affecting over 70 million people globally, 30 to 40 percent of whom do not respond to medication. Our recent work, published in Clinical Neurophysiology, proposes optimized DNN architectures to detect epileptic seizures.
2018 Our work on perceptually de-hashing image hashes for similarity retrieval will appear in Signal Processing: Image Communication.
2018 Our work on robust detection of epileptic seizures will be presented at ICASSP 2018.
2018 Our work on leveraging Neuro-Symbolic representations to design more interpretable DL models for the Question Answering task, using Paul Smolensky’s Tensor Product Representations (TPRs), was accepted at AAAI 2018 (Oral Presentation). Two short posts related to this work are available here and here.
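For readers unfamiliar with TPRs: a symbolic structure is encoded by binding each filler vector to a role vector with an outer product and summing the bindings; with orthonormal roles (an assumption made here for illustration), a filler is recovered exactly by contracting the tensor with its role. A minimal numpy sketch, with dimensions chosen arbitrarily:

```python
import numpy as np

# Minimal Tensor Product Representation demo: bind fillers to orthonormal
# roles via outer products, then unbind by contracting with a role vector.
rng = np.random.default_rng(0)
fillers = {"subject": rng.normal(size=4), "object": rng.normal(size=4)}
roles = {"subject": np.array([1.0, 0.0]), "object": np.array([0.0, 1.0])}

# T = sum_i f_i (outer product) r_i -- the whole structure in one tensor.
T = sum(np.outer(fillers[k], roles[k]) for k in fillers)

# With orthonormal roles, T @ r_i recovers filler f_i exactly.
recovered = T @ roles["subject"]
print(np.allclose(recovered, fillers["subject"]))  # True
```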
2017 Presented a tutorial at IEEE GlobalSIP 2017 in Montreal about various Deep Learning frameworks (PyTorch, TensorFlow, MXNet, Chainer, Theano, …).
2017 Our recent results leveraging Neuro-Symbolic representations in deep NLP models will be presented at NIPS 2017 Explainable AI Workshop.
2016 I have summarized what I learned from the Deep Learning Summer School 2016 in Montreal; check it out here.
2016 I defended my PhD!
2016 Our recent work on sentence embedding for web search and IR will appear in IEEE/ACM Transactions on Audio, Speech, and Language Processing.
2016 Our recent work proposing a deep learning approach for distributed compressive sensing will appear in IEEE Transactions on Signal Processing. Check out the paper and a post about it at Nuit Blanche. The code is open source; give it a try! For more information about compressive sensing, check here.
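For background, compressive sensing recovers a sparse signal x from underdetermined measurements y = Ax, classically via L1-regularized least squares; the paper instead learns the inverse mapping with a deep network. A minimal ISTA baseline sketch (an illustrative classical solver, not the paper's method):

```python
import numpy as np

# Minimal ISTA sketch: recover a k-sparse x from m < n measurements y = A x.
rng = np.random.default_rng(0)
n, m, k = 100, 40, 5                      # signal dim, measurements, sparsity
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.normal(size=k)
A = rng.normal(size=(m, n)) / np.sqrt(m)  # random Gaussian sensing matrix
y = A @ x_true

lam = 0.05                                # L1 regularization strength
L = np.linalg.norm(A, 2) ** 2             # Lipschitz constant of the gradient
x = np.zeros(n)
for _ in range(500):
    z = x - A.T @ (A @ x - y) / L         # gradient step on 0.5*||Ax - y||^2
    x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft-threshold

print(np.linalg.norm(x - x_true) / np.linalg.norm(x_true))  # small rel. error
```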