91 found
Disambiguations
Christopher D. Manning [49] · Christopher Manning [42] · Christopher D Manning [1]
  1. Probabilistic models of language processing and acquisition. Nick Chater & Christopher D. Manning - 2006 - Trends in Cognitive Sciences 10 (7): 335–344.
    Probabilistic methods are providing new explanatory approaches to fundamental cognitive science questions of how humans structure, process and acquire language. This review examines probabilistic models defined over traditional symbolic structures. Language comprehension and production involve probabilistic inference in such models; and acquisition involves choosing the best model, given innate constraints and linguistic and other input. Probabilistic models can account for the learning and processing of language, while maintaining the sophistication of symbolic models. A recent burgeoning of theoretical developments and online (...)
    44 citations
  2. Accurate Unlexicalized Parsing. Dan Klein & Christopher D. Manning - unknown
    We demonstrate that an unlexicalized PCFG can parse much more accurately than previously shown, by making use of simple, linguistically motivated state splits, which break down false independence assumptions latent in a vanilla treebank grammar. Indeed, its performance of 86.36% (LP/LR F1) is better than that of early lexicalized PCFG models, and surprisingly close to the current state-of-the-art. This result has potential uses beyond establishing a strong lower bound on the maximum possible accuracy of unlexicalized models: an unlexicalized PCFG is (...)
    25 citations
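One of the "linguistically motivated state splits" this abstract alludes to is parent annotation (vertical markovization). The sketch below is illustrative only, not the paper's implementation; the tuple tree encoding and function name are assumptions.

```python
# Hypothetical sketch of one state split: parent annotation, which
# refines each nonterminal with its parent's label so PCFG rules are
# no longer assumed independent of their context.
# Trees are (label, child, ...) tuples; leaves are plain strings.

def annotate_parents(tree, parent="ROOT"):
    """Return a copy of `tree` with every nonterminal label
    rewritten as LABEL^PARENT."""
    if isinstance(tree, str):          # a leaf (terminal word)
        return tree
    label, *children = tree
    return (f"{label}^{parent}",
            *(annotate_parents(c, label) for c in children))

tree = ("S", ("NP", "dogs"), ("VP", ("V", "bark")))
print(annotate_parents(tree))
# ('S^ROOT', ('NP^S', 'dogs'), ('VP^S', ('V^VP', 'bark')))
```

Training a vanilla PCFG over such annotated trees recovers some context sensitivity without any lexicalization.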
  3. An Introduction to Information Retrieval. Christopher D. Manning - unknown
    Contents: 1 Boolean retrieval (1); 2 The term vocabulary and postings lists (19); 3 Dictionaries and tolerant retrieval (49); 4 Index construction (67); 5 Index compression (85); 6 Scoring, term weighting and the vector space model (109); 7 Computing scores in a complete search system (135); 8 Evaluation in information retrieval (151); 9 Relevance feedback and query expansion (177); 10 XML retrieval (195); 11 Probabilistic information retrieval (219); 12 Language models for information retrieval (237); 13 Text classification and Naive Bayes (253) (...)
    18 citations
  4. Accurate unlexicalized parsing. Christopher Manning - manuscript
    (...) assumptions latent in a vanilla treebank grammar. Indeed, its performance of 86.36% (LP/LR F1) is (...)
    11 citations
  5. Generating Typed Dependency Parses from Phrase Structure Parses. Christopher Manning - unknown
    This paper describes a system for extracting typed dependency parses of English sentences from phrase structure parses. In order to capture inherent relations occurring in corpus texts that can be critical in real-world applications, many NP relations are included in the set of grammatical relations used. We provide a comparison of our system with Minipar and the Link parser. The typed dependency extraction facility described here is integrated in the Stanford Parser, available for download.
    8 citations
  6. Probabilistic syntax. Christopher Manning - manuscript
    “Everyone knows that language is variable.” This is the bald sentence with which Sapir (1921:147) begins his chapter on language as an historical product. He goes on to emphasize how two speakers’ usage is bound to differ “in choice of words, in sentence structure, in the relative frequency with which particular forms or combinations of words are used”. I should add that much sociolinguistic and historical linguistic research has shown that the same speaker’s usage is also variable (Labov 1966, Kroch (...)
    8 citations
  7. Studying the History of Ideas Using Topic Models. David Hall & Christopher D. Manning - unknown
    How can the development of ideas in a scientific field be studied over time? We apply unsupervised topic modeling to the ACL Anthology to analyze historical trends in the field of Computational Linguistics from 1978 to 2006. We induce topic clusters using Latent Dirichlet Allocation, and examine the strength of each topic over time. Our methods find trends in the field including the rise of probabilistic methods starting in 1988, a steady increase in applications, and a sharp decline of research (...)
    7 citations
  8. Learning to recognize features of valid textual entailments. Christopher Manning - unknown
    (...) separated from evaluating entailment. Current approaches to semantic inference in question answering (...)
    7 citations
  9. Feature-rich part-of-speech tagging with a cyclic dependency network. Christopher Manning - manuscript
    We present a new part-of-speech tagger that demonstrates the following ideas: (i) explicit use of both preceding and following tag contexts (...). In a first-order HMM, the current tag t0 is predicted based on the previous tag t−1 (and the current word); the backward interaction between t0 and the next tag t+1 shows up implicitly later, when t+1 is generated in turn. While unidirectional models are therefore able to capture both (...)
    7 citations
  10. Natural Logic for Textual Inference. Christopher D. Manning - unknown
    This paper presents the first use of a computational model of natural logic—a system of logical inference which operates over natural language—for textual inference. Most current approaches to the PASCAL RTE textual inference task achieve robustness by sacrificing semantic precision; while broadly effective, they are easily confounded by ubiquitous inferences involving monotonicity. At the other extreme, systems which rely on first-order logic and theorem proving are precise, but excessively brittle. This work aims at a middle way. Our system finds (...)
    6 citations
  11. Labeled LDA: A supervised topic model for credit attribution in multi-labeled corpora. David Hall & Christopher D. Manning - unknown
    A significant portion of the world’s text is tagged by readers on social bookmarking websites. Credit attribution is an inherent problem in these corpora because most pages have multiple tags, but the tags do not always apply with equal specificity across the whole document. Solving the credit attribution problem requires associating each word in a document with the most appropriate tags and vice versa. This paper introduces Labeled LDA, a topic model that constrains Latent Dirichlet Allocation by defining a one-to-one (...)
    4 citations
  12. Fast Exact Inference with a Factored Model for Natural Language Parsing. Dan Klein & Christopher D. Manning - unknown
    We present a novel generative model for natural language tree structures in which semantic (lexical dependency) and syntactic (PCFG) structures are scored with separate models. This factorization provides conceptual simplicity, straightforward opportunities for separately improving the component models, and a level of performance comparable to similar, non-factored models. Most importantly, unlike other modern parsing models, the factored model admits an extremely effective A* parsing algorithm, which enables efficient, exact inference.
    5 citations
  13. Natural Language Grammar Induction using a Constituent-Context Model. Dan Klein & Christopher D. Manning - unknown
    This paper presents a novel approach to the unsupervised learning of syntactic analyses of natural language text. Most previous work has focused on maximizing likelihood according to generative PCFG models. In contrast, we employ a simpler probabilistic model over trees based directly on constituent identity and linear context, and use an EM-like iterative procedure to induce structure. This method produces much higher quality analyses, giving the best published results on the ATIS dataset.
    5 citations
  14. Ergativity: Argument Structure and Grammatical Relations. Christopher D. Manning - unknown
    I wish to present a codification of syntactic approaches to dealing with ergative languages and argue for the correctness of one particular approach, which I will call the Inverse Grammatical Relations hypothesis.1 I presume familiarity with the term 'ergativity', but, briefly, many languages have ergative case marking, such as Burushaski in (1), in contrast to the accusative case marking of Latin in (2). More generally, if we follow Dixon (1979) and use A to mark the agent-like argument of (...)
    5 citations
  15. Incorporating Non-local Information into Information Extraction Systems by Gibbs Sampling. Christopher Manning - unknown
    Most current statistical natural language processing models use only local features so as to permit dynamic programming in inference, but this makes them unable to fully account for the long distance structure that is prevalent in language use. We show how to solve this dilemma with Gibbs sampling, a simple Monte Carlo method used to perform approximate inference in factored probabilistic models. By using simulated annealing in place of Viterbi decoding in sequence models such as HMMs, CMMs, and CRFs, (...)
    5 citations
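The abstract's core idea—Gibbs sampling with simulated annealing over a model with a non-local factor—can be sketched on a toy problem. Everything below (the labels, the scoring function, the cooling schedule) is an illustrative assumption, not the paper's model:

```python
import math, random

# Toy sketch: annealed Gibbs sampling for sequence labeling where the
# score has a non-local "same word, same label" factor that
# Viterbi-style dynamic programming cannot handle directly.

LABELS = ["O", "NAME"]

def score(words, tags):
    s = 0.0
    for w, t in zip(words, tags):
        # local evidence: capitalized words look like names
        s += 2.0 if w.istitle() == (t == "NAME") else -2.0
    for i in range(len(words)):            # non-local consistency
        for j in range(i + 1, len(words)):
            if words[i] == words[j]:
                s += 1.0 if tags[i] == tags[j] else -1.0
    return s

def annealed_gibbs(words, sweeps=50, seed=0):
    rng = random.Random(seed)
    tags = [rng.choice(LABELS) for _ in words]
    for sweep in range(sweeps):
        temp = max(0.05, 1.0 - sweep / sweeps)   # cooling schedule
        for i in range(len(words)):
            logs = [score(words, tags[:i] + [lab] + tags[i + 1:]) / temp
                    for lab in LABELS]
            m = max(logs)                        # avoid exp overflow
            weights = [math.exp(l - m) for l in logs]
            r, acc = rng.random() * sum(weights), 0.0
            for lab, w in zip(LABELS, weights):
                acc += w
                if r <= acc:
                    tags[i] = lab
                    break
    return tags

print(annealed_gibbs(["Smith", "said", "Smith"]))
# with this schedule the chain settles on ['NAME', 'O', 'NAME']
```

As the temperature drops, sampling approaches coordinate-wise maximization, which is the annealing-in-place-of-Viterbi idea the abstract describes.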
  16. An extended model of natural logic. Christopher D. Manning & Bill MacCartney - unknown
    We propose a model of natural language inference which identifies valid inferences by their lexical and syntactic features, without full semantic interpretation. We extend past work in natural logic, which has focused on semantic containment and monotonicity, by incorporating both semantic exclusion and implicativity. Our model decomposes an inference problem into a sequence of atomic edits linking premise to hypothesis; predicts a lexical semantic relation for each edit; propagates these relations upward through a semantic composition tree according to properties of (...)
    5 citations
  17. A Generative Constituent-Context Model for Improved Grammar Induction. Dan Klein & Christopher D. Manning - unknown
    We present a generative distributional model for the unsupervised induction of natural language syntax which explicitly models constituent yields and contexts. Parameter search with EM produces higher quality analyses than previously exhibited by unsupervised systems, giving the best published unsupervised parsing results on the ATIS corpus. Experiments on Penn treebank sentences of comparable length show an even higher F1 of 71% on nontrivial brackets. We compare distributionally induced and actual part-of-speech tags as input data, and examine extensions to the basic (...)
    4 citations
  18. Learning to distinguish valid textual entailments. Christopher D. Manning & Daniel Cer - unknown
    This paper proposes a new architecture for textual inference in which finding a good alignment is separated from evaluating entailment. Current approaches to semantic inference in question answering and textual entailment have approximated the entailment problem as that of computing the best alignment of the hypothesis to the text, using a locally decomposable matching score. While this formulation is adequate for representing local (word-level) phenomena such as synonymy, it is incapable of representing global interactions, such as that between verb negation (...)
    4 citations
  19. Enriching the knowledge sources used in a maximum entropy part-of-speech tagger. Christopher Manning - manuscript
    Kristina Toutanova & Christopher D. Manning, Departments of Computer Science and Linguistics, Gates Bldg 4A, 353 Serra Mall, Stanford, CA 94305–9040, USA.
    4 citations
  20. Learning Alignments and Leveraging Natural Logic. Nathanael Chambers, Daniel Cer, Trond Grenager, David Hall, Chloe Kiddon, Bill MacCartney, Marie-Catherine de Marneffe, Daniel Ramage, Eric Yeh & Christopher D. Manning - unknown
    We describe an approach to textual inference that improves alignments at both the typed dependency level and at a deeper semantic level. We present a machine learning approach to alignment scoring, a stochastic search procedure, and a new tool that finds deeper semantic alignments, allowing rapid development of semantic features over the aligned graphs. Further, we describe a complementary semantic component based on natural logic, which shows an added gain of 3.13% accuracy on the RTE3 test set.
    3 citations
  21. Parsing with Treebank Grammars: Empirical Bounds, Theoretical Models, and the Structure of the Penn Treebank. Dan Klein & Christopher D. Manning - unknown
    This paper presents empirical studies and closely corresponding theoretical models of the performance of a chart parser exhaustively parsing the Penn Treebank with the Treebank’s own CFG grammar. We show how performance is dramatically affected by rule representation and tree transformations, but little by top-down vs. bottom-up strategies. We discuss grammatical saturation, including analysis of the strongly connected components of the phrasal nonterminals in the Treebank, and model how, as sentence length increases, the effective grammar rule size increases as regions (...)
    3 citations
  22. Modeling Semantic Containment and Exclusion in Natural Language Inference. Christopher D. Manning - unknown
    We propose an approach to natural language inference based on a model of natural logic, which identifies valid inferences by their lexical and syntactic features, without full semantic interpretation. We greatly extend past work in natural logic, which has focused solely on semantic containment and monotonicity, to incorporate both semantic exclusion and implicativity. Our system decomposes an inference problem into a sequence of atomic edits linking premise to hypothesis; predicts a lexical entailment relation for each edit using a statistical classifier; (...)
    3 citations
  23. Robust Textual Inference via Graph Matching. Christopher D. Manning - unknown
    We present a system for deciding whether a given sentence can be inferred from text. Each sentence is represented as a directed graph (extracted from a dependency parser) in which the nodes represent words or phrases, and the links represent syntactic and semantic relationships. We develop a learned graph matching model to approximate entailment by the amount of the sentence’s semantic content which is contained in the text. We present results on the Recognizing Textual Entailment dataset (Dagan et al., 2005), (...)
    3 citations
  24. Finding contradictions in text. Christopher Manning - manuscript
    Marie-Catherine de Marneffe, Anna N. Rafferty & Christopher D. Manning, Linguistics Department and Computer Science Department, Stanford University, Stanford, CA 94305.
    3 citations
  25. Learning alignments and leveraging natural logic. Christopher Manning - manuscript
    Nathanael Chambers, Daniel Cer, Trond Grenager, David Hall, Chloe Kiddon, Bill MacCartney, Marie-Catherine de Marneffe, Daniel Ramage, Eric Yeh & Christopher D. Manning, Computer Science Department, Stanford University, Stanford, CA 94305.
    3 citations
  26. Part-of-Speech Tagging from 97% to 100%: Is It Time for Some Linguistics? Christopher D. Manning - unknown
    I examine what would be necessary to move part-of-speech tagging performance from its current level of about 97.3% token accuracy (56% sentence accuracy) to close to 100% accuracy. I suggest that it must still be possible to greatly increase tagging performance and examine some useful improvements that have recently been made to the Stanford Part-of-Speech Tagger. However, an error analysis of some of the remaining errors suggests that there is limited further mileage to be had either from better machine learning (...)
    2 citations
  27. Conditional Structure versus Conditional Estimation in NLP Models. Dan Klein & Christopher D. Manning - unknown
    This paper separates conditional parameter estimation, which consistently raises test set accuracy on statistical NLP tasks, from conditional model structures, such as the conditional Markov model used for maximum-entropy tagging, which tend to lower accuracy. Error analysis on part-of-speech tagging shows that the actual tagging errors made by the conditionally structured model derive not only from label bias, but also from other ways in which the independence assumptions of the conditional model structure are unsuited to linguistic sequences. The (...)
    2 citations
  28. Automatic Acquisition of a Large Subcategorization Dictionary From Corpora. Christopher D. Manning - unknown
    This paper presents a new method for producing a dictionary of subcategorization frames from unlabelled text corpora. It is shown that statistical filtering of the results of a finite state parser running on the output of a stochastic tagger produces high quality results, despite the error rates of the tagger and the parser. Further, it is argued that this method can be used to learn all subcategorization frames, whereas previous methods are not extensible to a general solution to the problem.
    2 citations
  29. Efficient, Feature-based, Conditional Random Field Parsing. Christopher D. Manning - unknown
    Discriminative feature-based methods are widely used in natural language processing, but sentence parsing is still dominated by generative methods. While prior feature-based dynamic programming parsers have restricted training and evaluation to artificially short sentences, we present the first general, feature-rich discriminative parser, based on a conditional random field model, which has been successfully scaled to the full WSJ parsing data. Our efficiency is primarily due to the use of stochastic optimization techniques, as well as parallelization and chart prefiltering. On WSJ15, (...)
    2 citations
  30. Soft Constraints Mirror Hard Constraints: Voice and Person in English and Lummi. Christopher D. Manning - unknown
    The same categorical phenomena which are attributed to hard grammatical constraints in some languages continue to show up as statistical preferences in other languages, motivating a grammatical model that can account for soft constraints. The effects of a hierarchy of person (1st, 2nd, 3rd) on grammar are categorical in some languages, most famously in languages with inverse systems, but also in languages with person restrictions on passivization. In Lummi, for example, the person (...)
    2 citations
  31. The Lexical Integrity of Japanese Causatives. Christopher D. Manning & Ivan A. Sag - unknown
    Grammatical theory has long wrestled with the fact that causative constructions exhibit properties of both single words and complex phrases. However, as Paul Kiparsky has observed, the distribution of such properties of causatives is not arbitrary: ‘construal’ phenomena such as honorification, anaphor and pronominal binding, and quantifier ‘floating’ typically behave as they would if causatives were syntactically complex, embedding constructions; whereas case marking, agreement and word order phenomena all point to the analysis of causatives as single lexical items.1 Although an (...)
    2 citations
  32. Fast exact inference with a factored model for natural language parsing. Christopher Manning - manuscript
    We present a novel generative model for natural language tree structures in which semantic (lexical dependency) and syntactic (PCFG) structures are scored with separate models. This factorization provides conceptual simplicity, straightforward opportunities for separately improving the component models, and a level of performance comparable to similar, non-factored models. Most importantly, unlike other modern parsing models, the factored model admits an extremely effective A* parsing algorithm, which enables efficient, exact inference.
    2 citations
  33. Optimizing Chinese word segmentation for machine translation performance. Christopher Manning - unknown
    Pi-Chuan Chang, Michel Galley & Christopher D. Manning, Computer Science Department, Stanford University, Stanford, CA 94305.
    2 citations
  34. A dictionary database template for. Brett Baker & Christopher Manning - unknown
    Dictionary-making is an increasingly important avenue for cultural preservation and maintenance for Aboriginal people. It is also one of the main jobs performed by linguists working in Aboriginal communities. However, current tools for making dictionaries are either not specifically designed for the purpose (Word, Nisus), with the result that dictionaries written in them are difficult to maintain, to keep consistent, and to manipulate automatically, or are too complex for many people to use (Shoebox), and are thereby wasted as potential resources. (...)
  35. An exploration of sentiment summarization. Philip Beineke & Christopher Manning - unknown
    The website Rotten Tomatoes, located at www.rottentomatoes.com, is primarily an online repository of movie reviews. For each movie review document, the site provides a link to the full review, along with a brief description of its sentiment. The description consists of a rating (“fresh” or “rotten”) and a short quotation from the review. Other research (Pang, Lee, & Vaithyanathan 2002) has predicted a movie review’s rating from its text. In this paper, we focus on the quotation, which is a main (...)
  36. Aligning Semantic Graphs for Textual Inference and Machine Reading. Marie-Catherine de Marneffe, Trond Grenager, Bill MacCartney, Daniel Cer, Daniel Ramage, Chloe Kiddon & Christopher D. Manning - unknown
    This paper presents our work on textual inference and situates it within the context of the larger goals of machine reading. The textual inference task is to determine if the meaning of one text can be inferred from the meaning of another and from background knowledge. Our system generates semantic graphs as a representation of the meaning of a text. This paper presents new results for aligning pairs of semantic graphs, and proposes the application of natural logic to derive inference (...)
    1 citation
  37. Bilingual Dictionaries for Australian Languages: User studies on the place of paper and electronic dictionaries. Miriam Corris, Christopher Manning, Susan Poetsch & Jane Simpson - unknown
    Dictionaries have long been seen as an essential contribution by linguists to work on endangered languages. We report on preliminary investigations of actual dictionary usage and usability by 76 speakers, semi-speakers and learners of Australian Aboriginal languages. The dictionaries include: electronic and printed bilingual Warlpiri-English dictionaries, a printed trilingual Alawa-Kriol-English dictionary, and a printed bilingual Warumungu-English dictionary. We examine competing demands for completeness of coverage and ease of access, and focus on the prospects of electronic dictionaries for solving many (...)
  38. Dictionaries and endangered languages. Miriam Corris, Christopher Manning, Susan Poetsch & Jane Simpson - unknown
    Linguists have seen creating dictionaries of endangered languages as a key activity in language maintenance and revival work. However, like any approach to language engineering, there are concerns to address. The first is the tension between language documentation and language maintenance2. The second is the role of literacy. A lot of effort has been put into vernacular literacy, on the assumption that it assists language maintenance, as well as language documentation. In some respects this is a dubious assumption, because writing (...)
  39. Template Sampling for Leveraging Domain Knowledge in Information Extraction. Christopher Cox, Christopher Manning & Pat Langley - unknown
    We initially describe a feature-rich discriminative Conditional Random Field (CRF) model for Information Extraction in the workshop announcements domain, which offers good baseline performance in the PASCAL shared task. We then propose a method for leveraging domain knowledge in Information Extraction tasks, scoring candidate document labellings as one-value-per-field templates according to domain feasibility after generating sample labellings from a trained sequence classifier. Our relational models evaluate these templates according to our intuitions about agreement in the domain: workshop acronyms should resemble (...)
  40. NP Subject Detection in Verb-Initial Arabic Clauses. Spence Green & Christopher D. Manning - unknown
    Phrase re-ordering is a well-known obstacle to robust machine translation for language pairs with significantly different word orderings. For Arabic-English, two languages that usually differ in the ordering of subject and verb, the subject and its modifiers must be accurately moved to produce a grammatical translation. This operation requires more than base phrase chunking and often defies current phrase-based statistical decoders. We present a conditional random field sequence classifier that detects the full scope of Arabic noun phrase subjects in (...)
  41. An O(n³) Agenda-Based Chart Parser for Arbitrary Probabilistic Context-Free Grammars. Dan Klein & Christopher D. Manning - unknown
    While O(n³) methods for parsing probabilistic context-free grammars (PCFGs) are well known, a tabular parsing framework for arbitrary PCFGs, which allows for bottom-up, top-down, and other parsing strategies, has not yet been provided. This paper presents such an algorithm, and shows its correctness and advantages over prior work. The paper finishes by bringing out the connections between the algorithm and work on hypergraphs, which permits us to extend the presented Viterbi (best parse) algorithm to an inside (total probability) (...)
    1 citation
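For context, the bottom-up baseline that such agenda-based chart parsers generalize is the standard CKY Viterbi algorithm for grammars in Chomsky Normal Form. The sketch below uses an illustrative toy grammar, not anything from the paper:

```python
from collections import defaultdict

# CKY Viterbi parsing for a tiny PCFG in Chomsky Normal Form.
BINARY = {("S", ("NP", "VP")): 1.0,   # A -> B C : rule probability
          ("VP", ("V", "NP")): 1.0}
LEXICAL = {("NP", "she"): 0.6, ("NP", "fish"): 0.4,
           ("V", "eats"): 1.0}

def cky(words):
    n = len(words)
    best = defaultdict(float)          # (i, j, label) -> best inside prob
    for i, w in enumerate(words):      # width-1 spans from the lexicon
        for (a, word), p in LEXICAL.items():
            if word == w:
                best[i, i + 1, a] = p
    for span in range(2, n + 1):       # build longer spans from shorter
        for i in range(n - span + 1):
            j = i + span
            for k in range(i + 1, j):  # split point
                for (a, (b, c)), p in BINARY.items():
                    q = p * best[i, k, b] * best[k, j, c]
                    if q > best[i, j, a]:
                        best[i, j, a] = q
    return best[0, n, "S"]

print(cky(["she", "eats", "fish"]))    # 0.24 = 0.6 * 1.0 * 0.4 * 1.0 * 1.0
```

The three nested span loops give the O(n³) bound in the title; the agenda-based formulation replaces this fixed bottom-up order with a priority-driven one.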
  42. A* parsing: Fast exact Viterbi parse selection. Dan Klein & Christopher D. Manning - unknown
    A* PCFG parsing can dramatically reduce the time required to find the exact Viterbi parse by conservatively estimating outside Viterbi probabilities. We discuss various estimates and give efficient algorithms for computing them. On Penn treebank sentences, our most detailed estimate reduces the total number of edges processed to less than 3% of that required by exhaustive parsing, and even a simpler estimate which can be pre-computed in under a minute still reduces the work by a factor of 5. The algorithm (...)
    1 citation
  43. Combining Heterogeneous Classifiers for Word-Sense Disambiguation. Dan Klein, Christopher D. Manning & Kristina Toutanova - unknown
    This paper discusses ensembles of simple but heterogeneous classifiers for word-sense disambiguation, examining the Stanford-CS224N system entered in the SENSEVAL-2 English lexical sample task. First-order classifiers are combined by a second-order classifier, which variously uses majority voting, weighted voting, or a maximum entropy model. While individual first-order classifiers perform comparably to middle-scoring teams’ systems, the combination achieves high performance. We discuss trade-offs and empirical performance. Finally, we present an analysis of the combination, examining how ensemble performance depends on error independence (...)
  44. Distributional Phrase Structure Induction. Dan Klein & Christopher D. Manning - unknown
    Unsupervised grammar induction systems commonly judge potential constituents on the basis of their effects on the likelihood of the data. Linguistic justifications of constituency, on the other hand, rely on notions such as substitutability and varying external contexts. We describe two systems for distributional grammar induction which operate on such principles, using part-of-speech tags as the contextual features. The advantages and disadvantages of these systems are examined, including precision/recall trade-offs, error analysis, and extensibility.
     
    1 citation
  45. From instance-level constraints to space-level constraints: Making the most of prior knowledge in data clustering. Dan Klein & Christopher D. Manning - unknown
    We present an improved method for clustering in the presence of very limited supervisory information, given as pairwise instance constraints. By allowing instance-level constraints to have space-level inductive implications, we are able to successfully incorporate constraints for a wide range of data set types. Our method greatly improves on the previously studied constrained k-means algorithm, generally requiring less than half as many constraints to achieve a given accuracy on a range of real-world data, while also being more robust when over-constrained. (...)
    1 citation
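The instance-level baseline this abstract compares against is constrained k-means (COP-KMeans style), where each point takes the nearest centroid that does not violate a must-link or cannot-link constraint. A minimal sketch, with illustrative data and function names, under the assumption that constraints are checked only against points already assigned in the current pass:

```python
import random

def dist2(p, q):
    return sum((x - y) ** 2 for x, y in zip(p, q))

def violates(i, c, assigned, must, cannot):
    """Would assigning point i to cluster c break a pairwise constraint?"""
    for a, b in must:
        j = b if a == i else (a if b == i else None)
        if j is not None and assigned.get(j, c) != c:
            return True
    for a, b in cannot:
        j = b if a == i else (a if b == i else None)
        if j is not None and assigned.get(j) == c:
            return True
    return False

def cop_kmeans(points, k, must=(), cannot=(), iters=10, seed=0):
    centers = random.Random(seed).sample(points, k)
    assigned = {}
    for _ in range(iters):
        assigned = {}
        for i, p in enumerate(points):
            # nearest feasible centroid
            for c in sorted(range(k), key=lambda c: dist2(p, centers[c])):
                if not violates(i, c, assigned, must, cannot):
                    assigned[i] = c
                    break
            else:
                raise ValueError(f"constraints unsatisfiable at point {i}")
        for c in range(k):   # move each centroid to its members' mean
            members = [points[i] for i, a in assigned.items() if a == c]
            if members:
                centers[c] = tuple(sum(xs) / len(members)
                                   for xs in zip(*members))
    return [assigned[i] for i in range(len(points))]

pts = [(0.0, 0.0), (0.0, 1.0), (10.0, 10.0), (10.0, 11.0)]
print(cop_kmeans(pts, 2))                   # two natural clusters
print(cop_kmeans(pts, 2, cannot=[(0, 1)]))  # points 0 and 1 forced apart
```

The paper's point is that such constraints act only on the named instances; its space-level extension lets each constraint also warp distances for nearby points.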
  46. Interpreting and extending classical agglomerative clustering algorithms using a model-based approach. Dan Klein & Christopher D. Manning - unknown
    (...) agglomerative clustering. First, we show formally that the common heuristic agglomerative clustering algorithms – Ward’s method, single-link, complete-link, and a variant of group-average – are each equivalent to a hierarchical model-based method. This interpretation gives a theoretical explanation of the empirical behavior of these algorithms, as well as a principled approach to resolving practical issues, such as number of clusters or the choice of method. Second, we show how a model-based viewpoint can suggest variations on these basic agglomerative algorithms. We (...)
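One of the heuristic algorithms the abstract names, single-link agglomerative clustering, is small enough to sketch directly; the data and function below are illustrative, not the paper's model-based formulation:

```python
# Single-link agglomerative clustering: repeatedly merge the two
# clusters whose closest cross-cluster members are nearest, until the
# requested number of clusters remains.

def single_link(points, num_clusters):
    clusters = [[p] for p in points]

    def gap(a, b):  # single-link distance: closest cross-cluster pair
        return min(sum((x - y) ** 2 for x, y in zip(p, q))
                   for p in a for q in b)

    while len(clusters) > num_clusters:
        i, j = min(((i, j) for i in range(len(clusters))
                    for j in range(i + 1, len(clusters))),
                   key=lambda ij: gap(clusters[ij[0]], clusters[ij[1]]))
        clusters[i] += clusters.pop(j)   # merge the closest pair
    return clusters

print(single_link([(0,), (1,), (9,), (10,)], 2))
# [[(0,), (1,)], [(9,), (10,)]]
```

The paper's model-based reading would replace `gap` with the change in a probabilistic model's score, which is exactly where the equivalence results it proves come in.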
  47. Parsing and Hypergraphs. Dan Klein & Christopher D. Manning - unknown
    While symbolic parsers can be viewed as deduction systems, this view is less natural for probabilistic parsers. We present a view of parsing as directed hypergraph analysis which naturally covers both symbolic and probabilistic parsing. We illustrate the approach by showing how a dynamic extension of Dijkstra’s algorithm can be used to construct a probabilistic chart parser with an O(n³) time bound for arbitrary PCFGs, while preserving as much of the flexibility of symbolic chart parsers as allowed by the inherent (...)
    1 citation
  48. An O(n³) agenda-based chart parser for arbitrary probabilistic context-free grammars. Christopher Manning - manuscript
    Most PCFG parsing work has used the bottom-up CKY algorithm (Kasami, 1965; Younger, 1967) with Chomsky Normal Form grammars (Baker, 1979; Je (...)). (...) the “fundamental rule” in an order-independent manner, such that the same basic algorithm supports top-down and bottom-up parsing, and the parser deals correctly with the difficult cases of left-recursive rules, empty elements, and unary rules, in a natural way.
    1 citation
  49. A Conditional Random Field Word Segmenter. Christopher Manning - unknown
    We present a Chinese word segmentation system submitted to the closed track of Sighan bakeoff 2005. Our segmenter was built using a conditional random field sequence model that provides a framework to use a large number of linguistic features such as character identity, morphological and character reduplication features. Because our morphological features were extracted from the training corpora automatically, our system was not biased toward any particular variety of Mandarin. Thus, our system does not overfit the variety of Mandarin most (...)
    1 citation
  50. An Effective Two-Stage Model for Exploiting Non-Local Dependencies in Named Entity Recognition. Christopher D. Manning - unknown
    This paper shows that a simple two-stage approach to handle non-local dependencies in Named Entity Recognition (NER) can outperform existing approaches that handle non-local dependencies, while being much more computationally efficient. NER systems typically use sequence models for tractable inference, but this makes them unable to capture the long distance structure present in text. We use a conditional random field (...)
Showing 1–50 of 91