• Stanford CoreNLP


    Stanford CoreNLP

    Stanford CoreNLP provides a set of natural language processing tools. These tools take raw English text as input and produce the base forms of words, their part-of-speech tags, whether words are names of companies, people, etc., normalized dates, times, and numeric quantities, the syntactic parse tree and word dependencies of each sentence, and an indication of which noun phrases refer to the same entities. Stanford CoreNLP is an integrated framework, which makes it very easy to apply a subset of the toolkit to a piece of text. Starting from plain text, you can run all the tools on it with just two lines of code.

    Stanford CoreNLP combines a part-of-speech tagger, a named entity recognizer, a coreference resolution system, and a sentiment analysis tool, and provides model files for analyzing English. The goal of the project is to let people obtain complete linguistic annotations of text quickly and painlessly. It is designed to be flexible and extensible: with a single option you can choose which tools to enable and which to disable.

    Stanford CoreNLP is written in Java and licensed under the GNU General Public License. It requires Java 1.6+.

    Download link: http://nlp.stanford.edu/software/stanford-corenlp-full-2014-01-04.zip

    Usage

    Parsing a file and saving the output as XML

    Before using Stanford CoreNLP, you usually create a configuration file (a Java properties file). At a minimum, this file should contain the "annotators" property, whose value is a comma-separated list of annotators to run.

    For example: annotators = tokenize, ssplit, pos, lemma, ner, parse, dcoref

    This enables tokenization, sentence splitting (required by most annotators), POS tagging, lemmatization, NER, syntactic parsing, and coreference resolution.
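    As a sketch, a minimal properties file containing just that setting might look like the following (the file name shown matches the default that the tool looks for on the classpath, described below; the comments are illustrative):

    ```
    # StanfordCoreNLP.properties
    # Comma-separated list of annotators, run in order.
    # tokenize and ssplit must come first, since later annotators depend on them.
    annotators = tokenize, ssplit, pos, lemma, ner, parse, dcoref
    ```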

    However, if you only need to specify one or two properties, you can set them on the command line instead.

    To process a file with Stanford CoreNLP, use a command line of the following form:

    java -cp stanford-corenlp-YYYY-MM-DD.jar:stanford-corenlp-YYYY-MM-DD-models.jar:xom.jar:joda-time.jar:jollyday.jar:ejml-VV.jar -Xmx3g edu.stanford.nlp.pipeline.StanfordCoreNLP [ -props <YOUR CONFIGURATION FILE> ] -file <YOUR INPUT FILE>

    For example, to process the sample file input.txt, you can run the following command from the distribution directory:

    java -cp stanford-corenlp-3.3.1.jar:stanford-corenlp-3.3.1-models.jar:xom.jar:joda-time.jar:jollyday.jar:ejml-0.23.jar
    -Xmx3g edu.stanford.nlp.pipeline.StanfordCoreNLP -annotators tokenize,ssplit,pos,lemma,ner,parse,dcoref -file input.txt

    Notes:

    • Stanford CoreNLP requires Java version 1.6 or later.
    • -Xmx3g specifies the amount of RAM; on a 64-bit machine, Stanford CoreNLP typically needs about 3GB of memory to run.
    • The command above works on OS X and Linux; on Windows, change the colons (:) to semicolons (;). If you are not running from the distribution directory, prepend the directory path to each jar.
    • The -annotators argument is optional; if you omit it, the code uses the annotators specified in the properties file.
    • Processing a small file like this is inefficient, because loading the models takes a few minutes; files should be processed in batches.

    If you want to process a list of files, use the following command line:

    java -cp stanford-corenlp-YYYY-MM-DD.jar:stanford-corenlp-models-YYYY-MM-DD.jar:xom.jar:joda-time.jar:jollyday.jar:ejml-VV.jar -Xmx3g edu.stanford.nlp.pipeline.StanfordCoreNLP [ -props <YOUR CONFIGURATION FILE> ] -filelist <YOUR LIST OF FILES>

    Here, the -filelist parameter points to a file whose contents are the list of files to process.

    The -props parameter is optional. By default, the code looks for StanfordCoreNLP.properties on your classpath, and otherwise uses the default settings included in the distribution.

    By default, output files are written to the current directory; you can specify an alternative path with the -outputDirectory parameter. Output file names match the input file names, with an extension (by default .xml) appended; this extension can be changed with -outputExtension. By default, existing output files are overwritten; use the -noClobber option to avoid this. Use -replaceExtension to replace the input file's extension rather than append to it.
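    Combining these options, an invocation might look like the following (a sketch only: the jar versions, the input file, and the out/ directory are illustrative, and on Windows the colons become semicolons):

    ```
    java -cp stanford-corenlp-3.3.1.jar:stanford-corenlp-3.3.1-models.jar:xom.jar:joda-time.jar:jollyday.jar:ejml-0.23.jar \
         -Xmx3g edu.stanford.nlp.pipeline.StanfordCoreNLP \
         -annotators tokenize,ssplit,pos,lemma,ner \
         -file input.txt \
         -outputDirectory out \
         -outputExtension .xml \
         -noClobber
    ```

    This writes out/input.txt.xml and leaves it untouched on later runs because of -noClobber.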

    Stanford CoreNLP can also strip XML tags. For example, if you run with the following annotator list

    annotators = tokenize, cleanxml, ssplit, pos, lemma, ner, parse, dcoref

    on the following text,

    <xml>Stanford University is located in California. It is a great university.</xml>

    the output will have the XML tags removed.

    Using the Stanford CoreNLP API

    The backbone of CoreNLP consists of two classes: Annotation and Annotator. Annotations are the data structures that hold annotation results; they are basically maps from keys to pieces of annotation. Annotators are more like functions that act on Annotations: they perform tasks such as parsing and named entity recognition. Annotations and Annotators are integrated by AnnotationPipelines, which create sequences of Annotators.
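    The "two lines of code" mentioned in the introduction correspond roughly to building a pipeline and calling annotate. Below is a minimal sketch, not verbatim official sample code; it assumes the CoreNLP jar and the models jar are on the classpath, and the class name PipelineDemo and the sample sentence are made up for illustration:

    ```java
    import edu.stanford.nlp.ling.CoreAnnotations;
    import edu.stanford.nlp.pipeline.Annotation;
    import edu.stanford.nlp.pipeline.StanfordCoreNLP;
    import edu.stanford.nlp.util.CoreMap;

    import java.util.Properties;

    public class PipelineDemo {
        public static void main(String[] args) {
            // Configure which annotators to run (same list as the properties file above)
            Properties props = new Properties();
            props.setProperty("annotators", "tokenize, ssplit, pos, lemma, ner, parse, dcoref");

            // Building the pipeline loads the models (slow); reuse one pipeline for many documents
            StanfordCoreNLP pipeline = new StanfordCoreNLP(props);

            // An Annotation is the map-like container that accumulates results
            Annotation document = new Annotation("Stanford University is located in California.");
            pipeline.annotate(document);

            // Read back one of the generated annotations: the split sentences
            for (CoreMap sentence : document.get(CoreAnnotations.SentencesAnnotation.class)) {
                System.out.println(sentence.toString());
            }
        }
    }
    ```

    The same pattern reads back any annotation in the table below by its key class, e.g. per-token PartOfSpeechAnnotation values from each sentence's TokensAnnotation.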

    The table below lists the currently supported Annotators and the Annotations they generate.

    Property name — Annotator class name — Generated Annotation — Description

    • tokenize — PTBTokenizerAnnotator
      Generates: TokensAnnotation (list of tokens), and CharacterOffsetBeginAnnotation, CharacterOffsetEndAnnotation, TextAnnotation (for each token).
      Tokenizes the text. This component started as a PTB-style tokenizer, but was extended since then to handle noisy and web text. The tokenizer saves the character offsets of each token in the input text, as CharacterOffsetBeginAnnotation and CharacterOffsetEndAnnotation.

    • cleanxml — CleanXmlAnnotator
      Generates: XmlContextAnnotation.
      Removes xml tokens from the document.

    • ssplit — WordToSentenceAnnotator
      Generates: SentencesAnnotation.
      Splits a sequence of tokens into sentences.

    • pos — POSTaggerAnnotator
      Generates: PartOfSpeechAnnotation.
      Labels tokens with their POS tag. For more details see this page.

    • lemma — MorphaAnnotator
      Generates: LemmaAnnotation.
      Generates the word lemmas for all tokens in the corpus.

    • ner — NERClassifierCombiner
      Generates: NamedEntityTagAnnotation and NormalizedNamedEntityTagAnnotation.
      Recognizes named (PERSON, LOCATION, ORGANIZATION, MISC) and numerical entities (DATE, TIME, MONEY, NUMBER). Named entities are recognized using a combination of three CRF sequence taggers trained on various corpora, such as ACE and MUC. Numerical entities are recognized using a rule-based system. Numerical entities that require normalization, e.g., dates, are normalized to NormalizedNamedEntityTagAnnotation. For more details on the CRF tagger see this page.

    • regexner — RegexNERAnnotator
      Generates: NamedEntityTagAnnotation.
      Implements a simple, rule-based NER over token sequences using Java regular expressions. The goal of this Annotator is to provide a simple framework to incorporate NE labels that are not annotated in traditional NL corpora. For example, the default list of regular expressions that we distribute in the models file recognizes ideologies (IDEOLOGY), nationalities (NATIONALITY), religions (RELIGION), and titles (TITLE). Here is a simple example of how to use RegexNER. For more complex applications, you might consider TokensRegex.

    • sentiment — SentimentAnnotator
      Generates: SentimentCoreAnnotations.AnnotatedTree.
      Implements Socher et al's sentiment model. Attaches a binarized tree of the sentence to the sentence level CoreMap. The nodes of the tree then contain the annotations from RNNCoreAnnotations indicating the predicted class and scores for that subtree. See the sentiment page for more information about this project.

    • truecase — TrueCaseAnnotator
      Generates: TrueCaseAnnotation and TrueCaseTextAnnotation.
      Recognizes the true case of tokens in text where this information was lost, e.g., all upper case text. This is implemented with a discriminative model implemented using a CRF sequence tagger. The true case label, e.g., INIT_UPPER is saved in TrueCaseAnnotation. The token text adjusted to match its true case is saved as TrueCaseTextAnnotation.

    • parse — ParserAnnotator
      Generates: TreeAnnotation, BasicDependenciesAnnotation, CollapsedDependenciesAnnotation, CollapsedCCProcessedDependenciesAnnotation.
      Provides full syntactic analysis, using both the constituent and the dependency representations. The constituent-based output is saved in TreeAnnotation. We generate three dependency-based outputs, as follows: basic, uncollapsed dependencies, saved in BasicDependenciesAnnotation; collapsed dependencies saved in CollapsedDependenciesAnnotation; and collapsed dependencies with processed coordinations, in CollapsedCCProcessedDependenciesAnnotation. Most users of our parser will prefer the latter representation. For more details on the parser, please see this page. For more details about the dependencies, please refer to this page.

    • dcoref — DeterministicCorefAnnotator
      Generates: CorefChainAnnotation.
      Implements both pronominal and nominal coreference resolution. The entire coreference graph (with head words of mentions as nodes) is saved in CorefChainAnnotation. For more details on the underlying coreference resolution algorithm, see this page.
  • Original post: https://www.cnblogs.com/Dream-Fish/p/3706446.html