<p data-role="original-title" style="display:none">Original title: ACL 2020 accepted-paper list released, with a 25.2% acceptance rate — did your paper make the list?</p>
<p style="text-align: center;"><strong>Contributor: 魔王</strong></p>
<blockquote>
<p>ACL 2020, a top conference in natural language processing, will be held online July 5-10. Acceptance notifications went out some time ago, but the organizers did not release a complete paper list at the time. The list of accepted papers has now been published — let's take a look at which papers made it in.</p>
</blockquote>
<p>ACL received 3,088 submissions this year, a slight increase over last year's 2,906. A total of 779 papers were accepted — 571 long papers and 208 short papers — for an acceptance rate of 25.2%.</p>
<p style="text-align: center;"><img src="/public/uploads/article/2020/05/19/fb0a696e560aed7a9cae0dcd.jpeg" /></p>
<p>Many familiar names appear among the accepted papers:</p>
<p>Christopher D. Manning (Professor at Stanford University, Director of the Stanford AI Lab):</p>
<ul>
<li>Finding Universal Grammatical Relations in Multilingual BERT</li>
<li>Optimizing the Factual Correctness of a Summary: A Study of Summarizing Radiology Reports</li>
<li>Syn-QG: Syntactic and Shallow Semantic Rules for Question Generation</li>
</ul>
<p>Yoshua Bengio (Canadian computer scientist, Professor at the Université de Montréal):</p>
<ul>
<li>Exploiting Syntactic Structure for Better Language Modeling: A Syntactic Distance Approach</li>
</ul>
<p>Yoav Goldberg (Senior Lecturer, Department of Computer Science, Bar-Ilan University, Israel):</p>
<ul>
<li>A Formal Hierarchy of RNN Architectures</li>
<li>Null It Out: Guarding Protected Attributes by Iterative Nullspace Projection</li>
<li>Simple, Interpretable and Stable Method for Detecting Words with Usage Change across Corpora</li>
<li>Unsupervised Domain Clusters in Pretrained Language Models</li>
<li>A Two-Stage Masked LM Method for Term Set Expansion</li>
<li>Towards Faithfully Interpretable NLP Systems: How should we define and evaluate faithfulness?</li>
</ul>
<p>Noah A. Smith (Professor of Computer Science &amp; Engineering, University of Washington):</p>
<ul>
<li>A Formal Hierarchy of RNN Architectures</li>
<li>A Mixture of h − 1 Heads is Better than h Heads</li>
<li>Don't Stop Pretraining: Adapt Language Models to Domains and Tasks</li>
<li>Improving Transformer Models by Reordering their Sublayers</li>
<li>Social Bias Frames: Reasoning about Social and Power Implications of Language</li>
<li>The Right Tool for the Job: Matching Model and Instance Complexities</li>
<li>Recollection versus Imagination: Exploring Human Memory and Cognition via Neural Language Models</li>
</ul>
<p>Percy Liang (Associate Professor of Computer Science at Stanford University, member of the Stanford AI Lab):</p>
<ul>
<li>Robust Encodings: A Framework for Combating Adversarial Typos</li>
<li>Selective Question Answering under Domain Shift</li>
<li>Enabling Language Models to Fill in the Blanks</li>
<li>ExpBERT: Representation Engineering with Natural Language Explanations</li>
<li>Shaping Visual Representations with Language for Few-Shot Classification</li>
</ul>
<p>Sebastian Ruder (Research Scientist at DeepMind):</p>
<ul>
<li>A Call for More Rigor in Unsupervised Cross-lingual Learning</li>
<li>On the Cross-lingual Transferability of Monolingual Representations</li>
</ul>
<p>周明 (Ming Zhou, Deputy Managing Director of Microsoft Research Asia and President of the Association for Computational Linguistics (ACL)):</p>
<ul>
<li>A Graph-based Coarse-to-fine Method for Unsupervised Bilingual Lexicon Induction</li>
<li>Curriculum Pre-training for End-to-End Speech Translation</li>
<li>Document Modeling with Graph Attention Networks for Multi-grained Machine Reading Comprehension</li>
<li>Evidence-Aware Inferential Text Generation with Vector Quantised Variational AutoEncoder</li>
<li>Graph Neural News Recommendation with Unsupervised Preference Disentanglement</li>
<li>Improving Neural Machine Translation with Soft Template Prediction</li>
<li>LogicalFactChecker: Leveraging Logical Operations for Fact Checking with Graph Module Network</li>
<li>MIND: A Large-scale Dataset for News Recommendation</li>
<li>MuTual: A Dataset for Multi-Turn Dialogue Reasoning</li>
<li>Reasoning Over Semantic-Level Graph for Fact Checking</li>
<li>A Retrieve-and-Rewrite Initialization Method for Unsupervised Machine Translation</li>
<li>A Simple and Effective Unified Encoder for Document-Level Machine Translation</li>
</ul>
<p>刘铁岩 (Tie-Yan Liu, Deputy Managing Director of Microsoft Research Asia):</p>
<ul>
<li>A Study of Non-autoregressive Model for Sequence Generation</li>
<li>SEEK: Segmented Embedding of Knowledge Graphs</li>
<li>SimulSpeech: End-to-End Simultaneous Speech to Text Translation</li>
</ul>
<p>刘群 (Qun Liu, Chief Scientist of Speech and Language Computing at Huawei Noah's Ark Lab):</p>
<ul>
<li>Perturbed Masking: Parameter-free Probing for Analyzing and Interpreting BERT</li>
<li>Probabilistically Masked Language Model Capable of Autoregressive Generation in Arbitrary Word Order</li>
<li>Word-level Textual Adversarial Attacking as Combinatorial Optimization</li>
</ul>
<p>宗成庆 (Chengqing Zong, Professor at the Institute of Automation, Chinese Academy of Sciences):</p>
<ul>
<li>Attend, Translate and Summarize: An Efficient Method for Neural Cross-Lingual Summarization</li>
</ul>
<p>孙茂松 (Maosong Sun, Professor, Department of Computer Science and Technology, Tsinghua University):</p>
<ul>
<li>Continual Relation Learning via Episodic Memory Activation and Reconsolidation</li>
<li>Fine-grained Fact Verification with Kernel Graph Attention Network</li>
<li>How Does NLP Benefit Legal System: A Summary of Legal Artificial Intelligence</li>
<li>Word-level Textual Adversarial Attacking as Combinatorial Optimization</li>
</ul>
<p>刘知远 (Zhiyuan Liu, Associate Professor, Department of Computer Science and Technology, Tsinghua University):</p>
<ul>
<li>Continual Relation Learning via Episodic Memory Activation and Reconsolidation</li>
<li>Expertise Style Transfer: A New Task Towards Better Communication between Experts and Laymen</li>
<li>Fine-grained Fact Verification with Kernel Graph Attention Network</li>
<li>Grounded Conversation Generation as Guided Traverses in Commonsense Knowledge Graphs</li>
<li>How Does NLP Benefit Legal System: A Summary of Legal Artificial Intelligence</li>
<li>Word-level Textual Adversarial Attacking as Combinatorial Optimization</li>
<li>MOOCCube: A Large-scale Data Repository for NLP Applications in MOOCs</li>
</ul>
<p>黄民烈 (Minlie Huang, Associate Professor, Department of Computer Science and Technology, Tsinghua University):</p>
<ul>
<li>A Self-Training Method for Machine Reading Comprehension with Soft Evidence Extraction</li>
<li>KdConv: A Chinese Multi-domain Dialogue Dataset Towards Multi-turn Knowledge-driven Conversation</li>
<li>Multi-Agent Task-Oriented Dialog Policy Learning with Role-Aware Reward Decomposition</li>
</ul>
<p>万小军 (Xiaojun Wan, Professor at the Institute of Computer Science and Technology, Peking University):</p>
<ul>
<li>Automatic Generation of Citation Texts in Scholarly Papers: A Pilot Study</li>
<li>Heterogeneous Graph Transformer for Graph-to-Sequence Learning</li>
<li>Jointly Learning to Align and Summarize for Neural Cross-Lingual Summarization</li>
<li>Learning to Ask More: Semi-Autoregressive Sequential Question Generation under Dual-Graph Interaction</li>
<li>Multi-Granularity Interaction Network for Extractive and Abstractive Multi-Document Summarization</li>
<li>Semantic Parsing for English as a Second Language</li>
<li>Multimodal Transformer for Multimodal Machine Translation</li>
</ul>
<p>邱锡鹏 (Xipeng Qiu, Professor, School of Computer Science, Fudan University):</p>
<ul>
<li>Extractive Summarization as Text Matching</li>
<li>Heterogeneous Graph Neural Networks for Extractive Document Summarization</li>
<li>Improving Image Captioning with Better Use of Caption</li>
<li>FLAT: Chinese NER Using Flat-Lattice Transformer</li>
</ul>
<p>韩松 (Song Han, Assistant Professor, Department of Electrical Engineering and Computer Science, MIT):</p>
<ul>
<li>HAT: Hardware-Aware Transformers for Efficient Natural Language Processing</li>
</ul>
<p>If your paper was accepted to ACL 2020, we welcome you to leave a comment. 机器之心 will continue to recommend more high-quality papers.</p>
<p>The full list of ACL 2020 accepted papers is available at: https://acl2020.org/program/accepted/#long-papers</p>
<p><strong>This article is a 机器之心 report. For reprint permission, please contact the official account.</strong></p>