Ta-Chung Chi 🐸
I earned my PhD from the Language Technologies Institute in the School of Computer Science at Carnegie Mellon University, where I was advised by Alexander I. Rudnicky. My research interests include length-extrapolatable Transformers, dialogue systems, and related NLP topics.
Publications
[1] TC Chi, TH Fan, AI Rudnicky. Attention Alignment and Flexible Positional Embeddings Improve Transformer Length Extrapolation.
arXiv
[2] TH Fan*, TC Chi*, AI Rudnicky. Advancing Regular Language Reasoning in Linear Recurrent Neural Networks.
arXiv
[3] YS Wang, TC Chi, R Zhang, Y Yang. PESCO: Prompt-enhanced Self Contrastive Learning for Zero-shot Text Classification.
The 61st Annual Meeting of the Association for Computational Linguistics (ACL’23)
[4] TC Chi, TH Fan, LW Chen, AI Rudnicky, PJ Ramadge. Latent Positional Information is in the Self-Attention Variance of Transformer Language Models Without Positional Embeddings.
The 61st Annual Meeting of the Association for Computational Linguistics (ACL’23)
[5] TC Chi, TH Fan, AI Rudnicky, PJ Ramadge. Dissecting Transformer Length Extrapolation via the Lens of Receptive Field Analysis.
The 61st Annual Meeting of the Association for Computational Linguistics (ACL’23)
[6] TC Chi, TH Fan, AI Rudnicky, PJ Ramadge. Transformer Working Memory Enables Regular Language Reasoning and Natural Language Length Extrapolation.
Findings of the Association for Computational Linguistics: EMNLP 2023
[7] TC Chi*, TH Fan*, PJ Ramadge, AI Rudnicky. KERPLE: Kernelized Relative Positional Embedding for Length Extrapolation.
Advances in Neural Information Processing Systems 35 (2022)
[8] TH Fan*, TC Chi*, AI Rudnicky, PJ Ramadge. Training Discrete Deep Generative Models via Gapped Straight-Through Estimator.
Proceedings of the International Conference on Machine Learning (ICML) 2022
[9] TC Chi, AI Rudnicky. Zero-Shot Dialogue Disentanglement by Self-Supervised Entangled Response Selection.
Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing (EMNLP)
[10] TR Chiang, YT Yeh, TC Chi, YS Wang. Are You Doing What I Say? On Modalities Alignment in ALFRED.
arXiv preprint arXiv:2110.05665, 2021
[11] TC Chi, M Shen, M Eric, S Kim, D Hakkani-Tur. Just Ask: An Interactive Learning Framework for Vision and Language Navigation.
Proceedings of the AAAI Conference on Artificial Intelligence 34 (03), 2459-2466, 2020
[12] TC Chi, CY Shih, YN Chen. BCWS: Bilingual Contextual Word Similarity.
arXiv preprint arXiv:1810.08951, 2018
[13] TC Chi, YN Chen. CLUSE: Cross-Lingual Unsupervised Sense Embeddings.
Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing (EMNLP)
[14] TY Chang, TC Chi, SC Tsai, YN Chen. xSense: Learning Sense-Separated Sparse Representations and Textual Definitions for Explainable Word Sense Networks.
arXiv preprint arXiv:1809.03348, 2018
[15] PC Chen, TC Chi, SY Su, YN Chen. Dynamic Time-Aware Attention to Speaker Roles and Contexts for Spoken Language Understanding.
2017 IEEE Automatic Speech Recognition and Understanding Workshop (ASRU)
[16] TC Chi, PC Chen, SY Su, YN Chen. Speaker Role Contextual Modeling for Language Understanding and Dialogue Policy Learning.
IJCNLP 2017