There will be four tutorials, to be given by
leading experts in language and speech processing. The tutorials will take
place on July 7. The titles and presenters are:
Finite state automata are well understood, and they are inherently compact and efficient models of simple languages. In addition, finite state automata can be combined in various interesting ways, with the guarantee that the result is again a finite state automaton.
In the introductory part of the tutorial, we introduce finite state acceptors and finite state transducers (both weighted and unweighted) and briefly review their formal and computational properties.
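To make the compactness point concrete, here is a minimal sketch (a hypothetical illustration, not material from the tutorial itself): a deterministic finite state acceptor is nothing more than a transition table and a set of final states.

```python
# A deterministic finite state acceptor as a transition table.
# States are integers, 0 is the start state, FINALS are accepting states.
# This acceptor recognises the simple language (ab)+ .
TRANSITIONS = {
    (0, "a"): 1,
    (1, "b"): 2,
    (2, "a"): 1,
}
FINALS = {2}

def accepts(word):
    """Run the acceptor on `word`; return True iff it ends in a final state."""
    state = 0
    for symbol in word:
        nxt = TRANSITIONS.get((state, symbol))
        if nxt is None:          # no transition defined: reject immediately
            return False
        state = nxt
    return state in FINALS

print(accepts("abab"))  # True: "abab" is in (ab)+
print(accepts("aba"))   # False: the input ends mid-pattern
```

A weighted acceptor would simply attach a weight to each transition and combine the weights along the path; the data structure and traversal stay the same.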
In the second part of the tutorial, we illustrate the use of finite state methods in dictionary construction. In particular, we present an application of perfect hash automata in tuple dictionaries. Tuple dictionaries provide a very compact representation of the very large language models typically used in NLP applications (including N-gram language models).
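The core idea can be sketched as follows (a hypothetical toy example, with a made-up lexicon): a perfect hash automaton maps every word of a fixed lexicon to a unique integer in 0..V-1, its lexicographic rank; an N-gram model over integer tuples is then far more compact than one over the strings themselves. Here the automaton's ranking is simulated with binary search over the sorted lexicon.

```python
import bisect

# Hypothetical toy lexicon; a perfect hash automaton would compute each
# word's lexicographic rank while traversing it. We simulate that mapping
# with binary search over the sorted word list.
LEXICON = sorted(["a", "black", "cat", "dog", "saw", "the"])

def word_id(word):
    """Perfect hash: the word's rank in the sorted lexicon, or None."""
    i = bisect.bisect_left(LEXICON, word)
    if i < len(LEXICON) and LEXICON[i] == word:
        return i
    return None

# A tuple dictionary then stores the N-gram model over integer tuples.
bigram_counts = {}
tokens = ["the", "cat", "saw", "the", "dog"]
for w1, w2 in zip(tokens, tokens[1:]):
    key = (word_id(w1), word_id(w2))
    bigram_counts[key] = bigram_counts.get(key, 0) + 1

print(bigram_counts[(word_id("the"), word_id("cat"))])  # 1
```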
In the third part of the tutorial we focus on regular expressions for NLP. The type of regular expressions used in modern NLP applications has evolved dramatically from the regular expressions found in standard Computer Science textbooks. In recent years, various high level regular expression operators have been introduced (such as contexted replacement operators). The availability of increasingly abstract operators makes the regular expression notation more and more attractive. The tutorial provides an introduction to the regular expression calculus. The examples use the notation of the Fsa Utilities toolkit: a freely available implementation of the regular expression calculus. We introduce various regular expression operators for acceptors and transducers. We then show how new regular expression operators can be defined.
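The tutorial itself uses the Fsa Utilities notation, but the spirit of defining new operators from primitive ones can be mimicked with any regular expression library. A hedged Python sketch (the operator names here are invented for illustration): a "contains" operator, defined as Sigma* E Sigma*, and a simple replacement helper in the style of replacement operators.

```python
import re

def contains(expr):
    """Accept every string with a substring matching expr: Sigma* E Sigma*."""
    return re.compile(".*(?:%s).*" % expr, re.DOTALL)

def replace(expr, repl, text):
    """Rewrite every match of expr in text (an unconditional replacement)."""
    return re.sub(expr, repl, text)

print(bool(contains("ab+a").fullmatch("xxabbbayy")))   # True
print(replace("colou?r", "color", "colour or color"))  # color or color
```

A contexted replacement operator, as covered in the tutorial, additionally restricts where the rewrite may apply; expressing that cleanly is exactly what the higher-level transducer calculus buys you over a plain regex library.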
In the last part of the tutorial, we focus in more detail on regular expression operators that have proved useful for describing certain aspects of phonology using ideas from Optimality Theory. This part of the tutorial describes the lenient composition operator of Karttunen, and the optimality operator of Gerdemann and van Noord, as well as a number of alternatives (Eisner, Jaeger).
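The idea behind lenient composition can be sketched over plain candidate sets rather than transducers (a deliberate simplification, not the tutorial's formulation): apply a constraint to the candidate set, but if the constraint would eliminate every candidate, pass the set through unchanged. Optimality-Theoretic evaluation then applies the ranked constraints in order. The toy constraint below is invented for illustration.

```python
def leniently_apply(candidates, constraint):
    """Filter by the constraint, unless that would leave no candidates."""
    survivors = {c for c in candidates if constraint(c)}
    return survivors if survivors else candidates

def evaluate(candidates, ranked_constraints):
    """OT-style evaluation: apply constraints in ranking order, leniently."""
    for constraint in ranked_constraints:
        candidates = leniently_apply(candidates, constraint)
    return candidates

# Toy constraint: candidates must end in a vowel.
no_final_consonant = lambda s: s[-1] in "aeiou"

print(evaluate({"pata", "pat", "pa"}, [no_final_consonant]))
print(evaluate({"pat"}, [no_final_consonant]))  # violated by all, so ignored
```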
Maxent Models, Conditional Estimation, and Optimization without the Magic
Dan Klein and Christopher Manning (Stanford University, U.S.A.)
This tutorial presents the foundations of maximum entropy models, optimization methods to learn them, and various issues in the use of graphical models more complex than simple naive-Bayes (NB) or HMM models. The focus is on intuition and understanding, using visual illustrations and simple examples rather than detailed derivations whenever possible.
Maximum Entropy Models: What maximum entropy models are, from first principles, what they can and cannot do, and how they behave. Lots of examples. The equivalence of maxent models and maximum-likelihood exponential models. The relationship between maxent models and other classifiers. Smoothing methods for maxent models.
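The equivalence of maxent models and maximum-likelihood exponential models can be seen in a toy binary case, where the maxent model is exactly logistic regression. A hypothetical sketch (the feature names and data are invented): features are binary, and weights are fit by gradient ascent on the conditional log-likelihood, which is concave.

```python
import math

# Toy data: each example is a dict of binary features plus a class label.
# "caps" might stand for "token is capitalised"; names are illustrative.
DATA = [
    ({"bias": 1, "caps": 1}, 1),
    ({"bias": 1, "caps": 1}, 1),
    ({"bias": 1, "caps": 0}, 0),
    ({"bias": 1, "caps": 0}, 0),
]
FEATURES = ["bias", "caps"]

def prob(weights, x):
    """P(class=1 | x) under the exponential model."""
    score = sum(weights[f] * x.get(f, 0) for f in FEATURES)
    return 1.0 / (1.0 + math.exp(-score))

def train(data, steps=500, lr=0.5):
    """Gradient ascent on the conditional log-likelihood."""
    w = {f: 0.0 for f in FEATURES}
    for _ in range(steps):
        for x, y in data:
            p = prob(w, x)
            for f in FEATURES:       # gradient: (empirical - expected) counts
                w[f] += lr * (y - p) * x.get(f, 0)
    return w

w = train(DATA)
print(prob(w, {"bias": 1, "caps": 1}))  # close to 1
print(prob(w, {"bias": 1, "caps": 0}))  # close to 0
```

The gradient term (y - p) is the maxent condition in miniature: at the optimum, the model's expected feature counts match the empirical counts.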
Basic Optimization: Unconstrained optimization: convexity, gradient methods (both simple descent and more practical conjugate methods). Constrained optimization: Lagrange multipliers and several ways of turning them into a concrete optimization system. Other fun things to do with optimization. Specialized iterative scaling methods vs. general optimization.
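The simplest unconstrained method listed above, plain gradient descent, fits in a few lines. A hypothetical sketch on a convex one-dimensional objective (chosen for illustration, not taken from the tutorial): minimising f(x) = (x - 3)^2, whose gradient is 2(x - 3) and whose minimiser is x = 3.

```python
def gradient_descent(grad, x0, lr=0.1, steps=200):
    """Repeatedly step against the gradient; converges for convex f
    when the learning rate is small enough."""
    x = x0
    for _ in range(steps):
        x -= lr * grad(x)
    return x

# Minimise f(x) = (x - 3)^2, i.e. follow grad f(x) = 2(x - 3) downhill.
x_min = gradient_descent(lambda x: 2 * (x - 3), x0=0.0)
print(round(x_min, 6))  # 3.0
```

Conjugate methods improve on this by choosing better directions than the raw gradient, but the outer loop has the same shape.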
Model Structures: Conditional independence in graphical models (focusing on NB, HMMs, and PCFGs). Practical ramifications of various independence assumptions. Label and observation biases in conditional structures. Survey of sequence models (HMMs, MEMMs, CRFs, and dependency networks).
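A toy HMM makes the conditional-independence point concrete (the states, observations, and probabilities below are invented for illustration): given the current state, the observation and the next state are independent of everything earlier, and the forward algorithm exploits exactly this to compute the total observation probability in time linear in the sequence length.

```python
# A tiny two-state HMM with invented parameters.
STATES = ["Rainy", "Sunny"]
START = {"Rainy": 0.6, "Sunny": 0.4}
TRANS = {"Rainy": {"Rainy": 0.7, "Sunny": 0.3},
         "Sunny": {"Rainy": 0.4, "Sunny": 0.6}}
EMIT = {"Rainy": {"walk": 0.1, "shop": 0.4, "clean": 0.5},
        "Sunny": {"walk": 0.6, "shop": 0.3, "clean": 0.1}}

def forward(observations):
    """Total probability of the observation sequence under the HMM."""
    # alpha[s] = P(observations so far, current state = s)
    alpha = {s: START[s] * EMIT[s][observations[0]] for s in STATES}
    for obs in observations[1:]:
        alpha = {s: EMIT[s][obs] * sum(alpha[p] * TRANS[p][s] for p in STATES)
                 for s in STATES}
    return sum(alpha.values())

print(forward(["walk", "shop", "clean"]))
```

An MEMM would condition each transition on the observation instead of generating it, which is precisely where the label bias issue mentioned above enters.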
Prerequisites: Familiarity with basic calculus and a working knowledge of NB and HMMs are required. Existent but possibly vague knowledge of general Bayes' nets or basic information theory is a plus. Most importantly: a low tolerance for conceptual black boxes labeled "magic here".
Roxana Girju (Baylor University, U.S.A.)
Knowledge Discovery is a fast growing area of research and commercial interest. While knowledge may be discovered from many sources of information, this tutorial focuses on the discovery of knowledge from open texts, the largest source of knowledge. The problem of Knowledge Discovery from Text (KDT) is to extract explicit and implicit concepts and semantic relations between concepts using Natural Language Processing techniques. The discovery process is guided by the notion of context specified either by seed concepts or in some other more formal way.
KDT, while deeply rooted in NLP, actually draws on methods from statistics, machine learning, reasoning, information extraction, knowledge management, cognitive science and others for its discovery process. The emphasis here is on the automatic discovery of new concepts and on the large number of semantic relations that link them. This tutorial presents recent results from KDT research and system implementations.
Since the goal of KDT is to get insights into large quantities of text data and bring to bear text semantics, it plays an increasingly significant role in emerging applications, such as Question Answering, Summarization, Text Understanding and Ontology Development.
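A deliberately tiny illustration of pattern-based relation discovery, in the spirit of Hearst-style lexico-syntactic patterns (a hypothetical sketch, not a system presented in the tutorial): the pattern "X such as Y" suggests the semantic relation is-a(Y, X), and seed concepts would narrow such matching to a context of interest.

```python
import re

# One lexico-syntactic pattern; a real system would use many, plus
# parsing and statistics to filter spurious matches.
PATTERN = re.compile(r"(\w+) such as (\w+)")

def discover_isa(text):
    """Return (hyponym, hypernym) pairs suggested by the pattern."""
    return [(y, x) for x, y in PATTERN.findall(text)]

text = "The report covers fruits such as apples and vehicles such as trucks."
print(discover_isa(text))  # [('apples', 'fruits'), ('trucks', 'vehicles')]
```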
This tutorial is aimed at researchers, practitioners, educators, and research planners who want to keep in sync with the newly emerging KDT technology.
The advent of talking and listening machines has long been hailed as "the next big thing" in human-machine interaction. Indeed only recently, the IEEE Spectrum magazine (September 2002) named speech as one of five technologies likely to reap big market rewards in the next five years. Certainly, the frequency with which members of the general public come across speech-enabled applications in their everyday lives does seem to be on the increase, and the marketplace is currently able to support a number of sizeable commercial companies who are supplying speech-based products and services - as well as a growing academic community of speech scientists and engineers. This apparent progress has been fuelled by a number of key developments: the relentless increase in available computing power, the introduction of 'data-driven' techniques for speech pattern modelling, and the institution of public system evaluations.
This tutorial will chart the main advances that have been made in spoken language processing algorithms and applications over the past few years. The key enabling technologies of 'automatic speech recognition', 'text-to-speech synthesis' and 'spoken language dialogue' will be explained in some detail, with emphasis being placed on how the technology works and, perhaps more importantly, why it sometimes doesn't. Insight will also be given into the linguistic/paralinguistic properties of speech signals and human spoken language, and comparisons will be drawn between the capabilities of 'automatic' and 'natural' spoken language processing systems.
The tutorial is aimed at both specialists and non-specialists in the language processing field, and will be of great interest to anyone who is keen to develop a greater understanding of the main issues involved in spoken language processing. Prof. Moore will cover theoretical and practical aspects of the inner workings of state-of-the-art spoken language systems, as well as providing a balanced overview of their capabilities in relation to other modes of human-machine interaction.
The tutorial will incorporate question-and-answer opportunities, and will conclude with a survey of open research issues and some predictions for the future.