Ways to use Stanford CoreNLP under Eclipse


Source: the CoreNLP official website.

The currently released CoreNLP version 3.5.0 only supports Java 1.8 and above, so you may need to add a JDK 1.8 configuration to Eclipse. The configuration method is as follows:

    • First, go to the Oracle official website to download Java 1.8 and complete the installation.
    • Open Eclipse and choose Window -> Preferences -> Java -> Installed JREs to configure it.
    • Click "Add" on the right side of the window, choose "Standard VM", then click "Next".
    • On the "JRE home" line, click "Directory..." on the right and locate your Java installation path, for example "C:\Program Files\Java\jdk1.8".

This way, your Eclipse supports JDK 1.8.
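To confirm which Java version your code actually runs on, a minimal stdlib-only check (no CoreNLP jars needed; the class name CheckJava is just for illustration) is to print the java.version system property:

```java
public class CheckJava {
    public static void main(String[] args) {
        // CoreNLP 3.5.0 requires Java 1.8 or newer
        String version = System.getProperty("java.version");
        System.out.println("java.version = " + version);
    }
}
```

If this prints 1.7 or lower, revisit the Installed JREs setting described above.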

1. Create a new Java project; note that the compiler compliance level should be set to 1.8.

2. Download and extract the CoreNLP release into the project, and import the required jar packages,

such as stanford-corenlp-3.5.0.jar, stanford-corenlp-3.5.0-javadoc.jar, stanford-corenlp-3.5.0-models.jar, stanford-corenlp-3.5.0-sources.jar, xom.jar, etc.

The process for importing a jar package is: right-click the project -> Properties -> Java Build Path -> Libraries, click "Add JARs" and select the appropriate jar package in the path.
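A quick way to verify that the jars were added correctly is a Class.forName lookup, which throws ClassNotFoundException when the jar is missing from the build path. This is only a sanity-check sketch (the class name ClasspathCheck is made up; the looked-up class is CoreNLP's real pipeline entry point):

```java
public class ClasspathCheck {
    public static void main(String[] args) {
        try {
            // resolves only if stanford-corenlp-3.5.0.jar is on the build path
            Class.forName("edu.stanford.nlp.pipeline.StanfordCoreNLP");
            System.out.println("CoreNLP found on the classpath");
        } catch (ClassNotFoundException e) {
            System.out.println("CoreNLP missing from the classpath");
        }
    }
}
```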

3. Create a new TestCoreNLP class with the following code:

package test;

import java.util.List;
import java.util.Map;
import java.util.Properties;

import edu.stanford.nlp.dcoref.CorefChain;
import edu.stanford.nlp.dcoref.CorefCoreAnnotations.CorefChainAnnotation;
import edu.stanford.nlp.ling.CoreAnnotations.LemmaAnnotation;
import edu.stanford.nlp.ling.CoreAnnotations.NamedEntityTagAnnotation;
import edu.stanford.nlp.ling.CoreAnnotations.PartOfSpeechAnnotation;
import edu.stanford.nlp.ling.CoreAnnotations.SentencesAnnotation;
import edu.stanford.nlp.ling.CoreAnnotations.TextAnnotation;
import edu.stanford.nlp.ling.CoreAnnotations.TokensAnnotation;
import edu.stanford.nlp.ling.CoreLabel;
import edu.stanford.nlp.pipeline.Annotation;
import edu.stanford.nlp.pipeline.StanfordCoreNLP;
import edu.stanford.nlp.semgraph.SemanticGraph;
import edu.stanford.nlp.semgraph.SemanticGraphCoreAnnotations.CollapsedCCProcessedDependenciesAnnotation;
import edu.stanford.nlp.trees.Tree;
import edu.stanford.nlp.trees.TreeCoreAnnotations.TreeAnnotation;
import edu.stanford.nlp.util.CoreMap;

public class TestCoreNLP {
    public static void main(String[] args) {
        // create a StanfordCoreNLP object, with POS tagging, lemmatization,
        // NER, parsing, and coreference resolution
        Properties props = new Properties();
        props.put("annotators", "tokenize, ssplit, pos, lemma, ner, parse, dcoref");
        StanfordCoreNLP pipeline = new StanfordCoreNLP(props);

        // read some text into the text variable
        String text = "Add your text here: Beijing sings Lenovo";

        // create an empty Annotation just with the given text
        Annotation document = new Annotation(text);

        // run all annotators on this text
        pipeline.annotate(document);

        // these are all the sentences in this document
        // a CoreMap is essentially a Map that uses class objects as keys
        // and has values with custom types
        List<CoreMap> sentences = document.get(SentencesAnnotation.class);

        System.out.println("Word\tPOS\tLemma\tNER");
        for (CoreMap sentence : sentences) {
            // traversing the words in the current sentence
            // a CoreLabel is a CoreMap with additional token-specific methods
            for (CoreLabel token : sentence.get(TokensAnnotation.class)) {
                // this is the text of the token
                String word = token.get(TextAnnotation.class);
                // this is the POS tag of the token
                String pos = token.get(PartOfSpeechAnnotation.class);
                // this is the NER label of the token
                String ne = token.get(NamedEntityTagAnnotation.class);
                String lemma = token.get(LemmaAnnotation.class);

                System.out.println(word + "\t" + pos + "\t" + lemma + "\t" + ne);
            }
            // this is the parse tree of the current sentence
            Tree tree = sentence.get(TreeAnnotation.class);

            // this is the Stanford dependency graph of the current sentence
            SemanticGraph dependencies = sentence.get(CollapsedCCProcessedDependenciesAnnotation.class);
        }
        // this is the coreference link graph
        // each chain stores a set of mentions,
        // along with a method for getting the most representative mention
        // both sentence and token offsets start at 1!
        Map<Integer, CorefChain> graph = document.get(CorefChainAnnotation.class);
    }
}

PS: The idea of the code is to hand the text string to Stanford CoreNLP for processing. The StanfordCoreNLP pipeline's components (annotators) process it in order: tokenize (tokenization), ssplit (sentence splitting), pos (part-of-speech tagging), lemma (lemmatization), ner (named entity recognition), parse (syntactic parsing), and dcoref (coreference resolution).
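The annotator order matters: each stage consumes the output of the stages before it (for example, pos needs the tokens produced by tokenize and ssplit, and dcoref needs the parse). A stdlib-only sketch of how the pipeline property is declared and split into its ordered stages (the class name AnnotatorOrder is just for illustration):

```java
import java.util.Properties;

public class AnnotatorOrder {
    public static void main(String[] args) {
        Properties props = new Properties();
        // the same property string the TestCoreNLP class passes to StanfordCoreNLP
        props.put("annotators", "tokenize, ssplit, pos, lemma, ner, parse, dcoref");
        // each annotator runs after, and depends on, the ones listed before it
        for (String annotator : props.getProperty("annotators").split(",\\s*")) {
            System.out.println(annotator);
        }
    }
}
```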

After processing, List<CoreMap> sentences = document.get(SentencesAnnotation.class) contains all the analysis results, and the results can be read by traversing it.
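The "class objects as keys" idea behind CoreMap can be illustrated with plain JDK collections. This is only a sketch of the pattern, not CoreNLP's actual implementation (CoreMap's get is type-safe through its annotation key classes; the names ClassKeyedMap and the sample values here are made up):

```java
import java.util.HashMap;
import java.util.Map;

public class ClassKeyedMap {
    // values are stored against a class object and cast back to that class on lookup
    private static final Map<Class<?>, Object> token = new HashMap<>();

    static <T> T get(Class<T> key) {
        return key.cast(token.get(key));
    }

    public static void main(String[] args) {
        token.put(String.class, "Beijing");  // e.g. the token's text
        token.put(Integer.class, 20);        // e.g. a character offset (made up)
        System.out.println(get(String.class) + "\t" + get(Integer.class));
    }
}
```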

It simply prints out the word, part of speech, lemma, and named entity label. See the official website for the rest of the usage (e.g. sentiment, parse, relation, etc.).

4. Execution results: the program prints one tab-separated line (word, POS tag, lemma, NER label) for each token in the input text.
