Software Overview
THULAC (THU Lexical Analyzer for Chinese) is a Chinese lexical analysis toolkit developed by the Natural Language Processing and Social Humanities Computing Laboratory of Tsinghua University. It provides Chinese word segmentation and part-of-speech tagging. THULAC has the following features:
Strong capability. Its models are trained on the world's largest manually segmented and part-of-speech-tagged Chinese corpus (about 58 million characters), which gives them strong tagging capability.
High accuracy. On the standard Chinese Treebank (CTB5) dataset, the toolkit reaches an F1 of 97.3% for word segmentation and 92.9% for part-of-speech tagging, on par with the best published methods on that dataset.
Fast. Performing word segmentation and part-of-speech tagging together, it runs at about 300 KB/s, roughly 150,000 characters per second; segmentation alone reaches about 1.3 MB/s.
Software address:
http://thulac.thunlp.org/
Python version example:
First import the thulac module and construct a thulac.thulac(args) instance, where args are the program parameters; then call its cut() method to perform word segmentation.
```python
"""
test using thulac
"""
import thulac


def thulac_use():
    """
    Word segmentation and part-of-speech tagging
    :return:
    """
    content = '南京长江大桥'
    th = thulac.thulac()
    res = th.cut(content, text=True)

    print(res)


if __name__ == '__main__':
    thulac_use()
```
Result:
南京_ns 长江_ns 大桥_n
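With text=True, cut() returns a plain string of space-separated word_tag tokens, as shown above. A minimal sketch in pure Python (independent of THULAC; the function name parse_tagged is ours, not part of the library) of turning such a string into (word, tag) pairs:

```python
def parse_tagged(text):
    """Split a THULAC-style tagged string, e.g. '南京_ns 长江_ns 大桥_n',
    into a list of (word, tag) pairs."""
    pairs = []
    for token in text.split():
        # rpartition splits on the LAST underscore, guarding against
        # words that themselves contain '_'
        word, _, tag = token.rpartition('_')
        pairs.append((word, tag))
    return pairs


print(parse_tagged('南京_ns 长江_ns 大桥_n'))
# → [('南京', 'ns'), ('长江', 'ns'), ('大桥', 'n')]
```

Structured pairs like these are easier to filter by tag (for example, keeping only the ns place-name tokens) than the raw string.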