CoNLL, the Conference on Computational Natural Language Learning, is SIGNLL's annual meeting.
All papers published at the conference are available in the ACL Anthology.
In addition to its yearly special themes, CoNLL accepts contributions on language learning topics, including, but not limited to:
- Computational models of human language acquisition
- Computational models of the origins and evolution of language
- Machine learning methods applied to natural language processing tasks (speech processing, phonology, morphology, syntax, semantics, discourse processing, language engineering applications)
- Symbolic learning methods (Rule Induction and Decision Tree Learning, Lazy Learning, Inductive Logic Programming, Analytical Learning, Transformation-based Error-driven Learning)
- Biologically-inspired methods (Neural Networks, Evolutionary Computing)
- Statistical methods (Bayesian Learning, HMM, maximum entropy, SNoW, Support Vector Machines)
- Reinforcement Learning
- Active learning, ensemble methods, meta-learning
- Computational Learning Theory analyses of language learning
- Empirical and theoretical comparisons of language learning methods
- Models of induction and analogy in Linguistics
Since 1999, CoNLL has included a shared task in which the organizers provide training and test data, allowing participating systems to be evaluated and compared in a systematic way. Descriptions of the participating systems and an evaluation of their performance are presented both at the conference and in the proceedings.
A list of all editions of the conference, with links to the conference home pages, as well as a list of all shared tasks, can be found on the CoNLL website.