The official Stanford Parser website is inconvenient to use: the parameters are not documented in much detail, and I could not find good references for it. So I decided to drive the constituency parser and the dependency parser from Python through NLTK.

I. Install Python
Operating system: Windows 10
JDK: 1.8.0_151
Anaconda: 4.4.0 (Python 3.6.1)
I will skip the details of this step.

II. Install NLTK

pip install nltk
After the installation completes, start Python and run:

import nltk
nltk.download()
(screenshot omitted)
A download window pops up. I do not fully understand every option in it yet; it seems to list the available resource bundles, so for now I simply downloaded everything.
(screenshot omitted)
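If you would rather not click through the GUI, NLTK can also fetch resources by name from a script. A minimal sketch (the package name 'punkt' is only an illustration; note that the Stanford Parser jars are not distributed through nltk.download() at all and have to be downloaded separately from the Stanford NLP site):

import nltk
# Download one named resource instead of everything; pick whichever bundles you actually need.
nltk.download('punkt')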
That completes the setup.

III. Stanford Parser and NLTK

Without setting any CLASSPATH, here are a few simple demos of the Stanford Parser.

1. Constituency Parser
# -*- coding: utf-8 -*-
import os
from nltk.parse.stanford import StanfordParser

os.environ['STANFORD_PARSER'] = './model/stanford-parser.jar'
os.environ['STANFORD_MODELS'] = './model/stanford-parser-3.8.0-models.jar'

parser = StanfordParser(model_path="edu/stanford/nlp/models/lexparser/englishPCFG.ser.gz")
sentences = parser.raw_parse("The quick brown fox jumps over the lazy dog.")

# Print each parse tree in bracketed form.
# for line in sentences:
#     for t in line:
#         print(t)

# Open each parse tree in the NLTK GUI viewer.
for line in sentences:
    for sentence in line:
        sentence.draw()
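If you do not want a GUI window to open for every sentence, the same tree can be printed as text instead. A minimal sketch that reuses the parser configured above:

# Assumes 'parser' is the StanfordParser instance created above.
for tree in parser.raw_parse("The quick brown fox jumps over the lazy dog."):
    tree.pretty_print()  # draws the tree as ASCII art in the console
    print(tree)          # bracketed (S (NP ...) (VP ...)) form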
2. Dependency Parser
# -*- coding: utf-8 -*-
import os
from nltk.parse.stanford import StanfordDependencyParser

os.environ['STANFORD_PARSER'] = './model/stanford-parser.jar'
os.environ['STANFORD_MODELS'] = './model/stanford-parser-3.8.0-models.jar'

parser = StanfordDependencyParser(model_path="edu/stanford/nlp/models/lexparser/englishPCFG.ser.gz")
sentences = parser.raw_parse("The quick brown fox jumps over the lazy dog")

# raw_parse() returns dependency graphs; print them directly if you like.
# for line in sentences:
#     print(line)

# parse() takes a pre-tokenized sentence; each result exposes its dependency triples.
res = list(parser.parse("The quick brown fox jumps over the lazy dog.".split()))
for row in res[0].triples():
    print(row)
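Besides triples(), each parse result is an NLTK DependencyGraph, which can also be serialized; that is what the final script below relies on. A minimal sketch that reuses the dependency parser configured above:

# Assumes 'parser' is the StanfordDependencyParser instance created above.
dep, = parser.raw_parse("The quick brown fox jumps over the lazy dog.")
print(dep.to_conll(4))  # one token per line: word, POS tag, head index, relation
print(dep.tree())       # the same analysis projected onto an nltk Tree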
Finally, here is the full script I actually used:
# -*- coding: utf-8 -*-
import os
from nltk.parse.stanford import StanfordDependencyParser

os.environ['STANFORD_PARSER'] = './model/stanford-parser.jar'
os.environ['STANFORD_MODELS'] = './model/stanford-parser-3.8.0-models.jar'

parser = StanfordDependencyParser(model_path="edu/stanford/nlp/models/lexparser/englishPCFG.ser.gz")

fin = open("./data/raw.clean.test", encoding="utf-8")
fout = open("./result/test.txt", "w+", encoding="utf-8")

i = 0
for line in fin.readlines():
    if line is None or line.strip() == "":
        pass
    else:
        # The text before "||||" is the sentence; split it into tokens and parse.
        sentences, = parser.parse(line.strip().split("||||")[0].split(" "))
        # print(sentences.to_conll(4))
        fout.write(sentences.to_conll(4))
        fout.write('\n')
        fout.flush()
        i += 1
        print(i)

fin.close()
fout.close()
The output is exactly what I needed.
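One possible improvement: as far as I can tell, every call to parse() launches a separate Java process, so for a large file it should be noticeably faster to hand the parser many sentences at once with parse_sents(). A hedged sketch of that variant, assuming the same file layout and "||||" separator as above:

# Batch variant (sketch): collect all token lists first, then parse them in a single call.
with open("./data/raw.clean.test", encoding="utf-8") as fin:
    token_lists = [line.strip().split("||||")[0].split(" ")
                   for line in fin if line.strip() != ""]

with open("./result/test.txt", "w", encoding="utf-8") as fout:
    for parses in parser.parse_sents(token_lists):  # one inner iterator per input sentence
        graph = next(parses)                        # keep only the best parse
        fout.write(graph.to_conll(4))
        fout.write("\n")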
Done.