🤗 Workshop Website: https://knowledge-nlp.github.io/aaai2023 (accepted papers are available now. Workshop videos and keynote slides will be uploaded soon!)
🤗 Follow us on Twitter: https://twitter.com/knowledgenlp
The field of NLP has seen remarkable advancements in recent years, as demonstrated by large-scale models such as ChatGPT [1] [2]. Large language models have proven effective at capturing linguistic patterns in text and generating high-quality, context-aware representations. However, because they are trained solely on input-output pairs, they have limited ability to incorporate external knowledge (e.g., updated world facts, trending news) [3], which often leads to hallucinations [4] in generated content. To reach higher levels of intelligence, models need knowledge that cannot be obtained through statistical learning of input text patterns alone.
At AAAI 2023 (Feb 7-14, 2023), six researchers (Chenguang Zhu, Shuohang Wang, Meng Jiang, Wenhao Yu, Lu Wang, Huan Sun) from four institutions (Microsoft Cognitive Research, University of Notre Dame, University of Michigan, Ohio State University) held the first workshop on knowledge-augmented methods for NLP. The workshop attracted more than 50 in-person attendees and 20 virtual ones, making it one of the most popular events at AAAI!

The workshop invited four keynote speakers from academia and industry: Dr. Scott Wen-tau Yih (Meta AI - FAIR), Prof. Amit Sheth (University of South Carolina), Prof. Jordan Boyd-Graber (University of Maryland), and Prof. Chandan Reddy (Virginia Tech). In addition, we held a panel discussion on various topics spanning knowledge, NLP, and large language models. Five panelists shared their insights and experiences on how to incorporate knowledge into NLP models to make them more efficient, scalable, and intelligent.
The workshop received 35 submissions from 50 institutions (the top three being Amazon, UIUC, and Stanford), and 26 papers [link to papers] were accepted, covering a wide range of topics including retrieval-, knowledge graph-, and commonsense-augmented models, knowledge-enhanced language model pre-training, new benchmark datasets, and survey papers.
At the start of the event, Prof. Amit Sheth delivered a keynote speech on the topic of "From NLP to NLU: Why we need varied, comprehensive, and stratified knowledge, and how to use it for Neuro-symbolic AI". In his talk, he emphasized the three crucial dimensions of Why, What, and How in the utilization of knowledge in neuro-symbolic AI systems.

Next, Prof. Chandan Reddy's presentation focused on "Deep Learning for Code Understanding and Generation: Challenges and Opportunities". He discussed the potential of programming language models (PLMs) pre-trained on large code repositories for various code-related tasks.