For (much) more power and flexibility, use a dedicated spellchecking library like PyEnchant. There's a tutorial, or you could just dive straight in:
>>> import enchant
>>> d = enchant.Dict("en_US")
>>> d.check("Hello")
True
>>> d.check("Helo")
False
>>> d.suggest("Helo")
['He lo', 'He-lo', 'Hello', 'Helot', 'help', 'Halo', 'Hell', 'Held', 'Helm', 'Hero', "He'll"]
>>>
PyEnchant comes with a few dictionaries (en_GB, en_US, de_DE, fr_FR), but can use any of the OpenOffice ones if you want more languages.
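If you want to see which dictionaries your installation actually provides, here is a minimal sketch using PyEnchant's list_languages and dict_exists helpers (assuming a standard PyEnchant install; the available tags depend on your system):

import enchant

# List the language tags PyEnchant can find on this system
print(enchant.list_languages())      # e.g. ['en_GB', 'en_US', ...]

# Check for a specific dictionary before constructing it
if enchant.dict_exists("en_GB"):
    d = enchant.Dict("en_GB")
    print(d.check("colour"))         # True with the British dictionary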
Use a set to store the word list, because looking words up in a set is faster:
WithOpen("English_words.txt") AsWord_file:English_words=Set(Word.strip (). () for Word in< Span class= "PLN" > Word_file) def Is_ English_word (word Word. () in english_words print Is_english_word ( "Ham" ) # should is true if you have a good english_words.txt span>
To answer the second part of the question, the plurals would already be in a good word list, and if you wanted to specifically exclude those from such a list for some reason, you could indeed write a function to handle it. But English pluralization rules are tricky enough that I'd just include the plurals in the word list to begin with.
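If you did want to go the other way and also accept a plural whose singular form is in the list, a naive sketch might look like the hypothetical helper below; it only handles regular "-s" / "-es" plurals and deliberately ignores irregular forms:

def is_english_word_or_plural(word, english_words):
    # Hypothetical helper: accept the word itself or a crude singular form.
    # Irregular plurals (mice, geese, ...) are not handled.
    w = word.lower()
    if w in english_words:
        return True
    if w.endswith("es") and w[:-2] in english_words:
        return True
    if w.endswith("s") and w[:-1] in english_words:
        return True
    return False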
As to where to find English word lists, I found several just by googling "English word list". Here's one: http://www.sil.org/linguistics/wordlists/english/wordlist/wordsen.txt You could google for British or American English if you want one of those dialects specifically.
Using NLTK:
from nltk.corpus import wordnet

word_to_test = "ham"  # example word

if not wordnet.synsets(word_to_test):
    pass  # not an English word
else:
    pass  # an English word
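Note that the WordNet corpus has to be downloaded once before this works, and that WordNet mainly covers content words (nouns, verbs, adjectives, adverbs), so some perfectly valid function words may be missed. A minimal sketch wrapping the same check in a reusable function (the download call is a no-op after the first run):

import nltk
from nltk.corpus import wordnet

# One-time download of the WordNet data (skipped if already present)
nltk.download("wordnet", quiet=True)

def is_english_word(word):
    # WordNet returns an empty synset list for words it does not know
    return bool(wordnet.synsets(word))

print(is_english_word("ham"))     # True
print(is_english_word("asdfgh"))  # False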
Refer to the linked article if you have trouble installing WordNet or want to try other approaches.
Excerpt from: https://stackoverflow.com/questions/3788870/how-to-check-if-a-word-is-an-english-word-with-python
Python: how to determine whether a word is a valid English word (three approaches)