The project has two parts. The first is lossless text compression; the second is sentence-level text summarization, treated as lossy text compression.
Don't set expectations too high for the second part: there is a good chance it won't get finished, since I have no prior experience with that field.
Lossless Text Compression
Overall introduction. The internet produces a huge amount of text (or is that a trivial claim?); without compression, storing and transmitting it is uneconomical. When I installed the NLTK corpora, downloading the nearly 300 MB of text took quite a while. (Add more examples, e.g. web text, to motivate text compression.)
Information theory. Give the project a mathematical backbone: mainly coding and information entropy. Introduce Huffman coding and use it to measure the compression ratio of the Brown corpus under character-level and word-level Huffman codes. Then describe arithmetic coding and demonstrate its advantage over Huffman coding.
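To make the Huffman-vs-entropy comparison concrete, a minimal sketch on a toy string (the project would run this over Brown corpus characters or words instead; the function name and sample text are just placeholders):

```python
# Minimal sketch: character-level Huffman code lengths vs. Shannon entropy.
import heapq
from collections import Counter
from math import log2

def huffman_code_lengths(freqs):
    """Return {symbol: codeword length} for a Huffman tree over freqs."""
    # Heap items: (weight, tiebreaker, {symbol: depth in current subtree}).
    # The tiebreaker keeps comparisons from ever reaching the dict.
    heap = [(w, i, {s: 0}) for i, (s, w) in enumerate(freqs.items())]
    heapq.heapify(heap)
    if len(heap) == 1:          # degenerate case: one symbol, one-bit code
        return {s: 1 for s in freqs}
    count = len(heap)
    while len(heap) > 1:
        w1, _, d1 = heapq.heappop(heap)
        w2, _, d2 = heapq.heappop(heap)
        merged = {s: d + 1 for s, d in {**d1, **d2}.items()}
        heapq.heappush(heap, (w1 + w2, count, merged))
        count += 1
    return heap[0][2]

text = "this is an example of huffman coding on a small text"
freqs = Counter(text)
n = len(text)
lengths = huffman_code_lengths(freqs)

entropy = -sum(w / n * log2(w / n) for w in freqs.values())
avg_code_len = sum(freqs[s] * lengths[s] for s in freqs) / n
print(f"entropy          : {entropy:.3f} bits/char")
print(f"Huffman avg. len : {avg_code_len:.3f} bits/char")
```

The average Huffman code length always lands in [H, H+1) bits per symbol, wasting up to a bit per symbol to codeword-length rounding; arithmetic coding spreads symbols over fractional bits and approaches H with only a constant overhead for the whole message, which is exactly the superiority to demonstrate.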
A brief overview of data compression. Present data compression = coding + modeling. Then argue that coding is essentially a solved problem, while modeling is more of an AI problem. Refer to the expert's book on this.
PPM. Use Python to implement PPM and compress the Brown corpus (or the entire corpus collection, for comparison with the ~300 MB above, lol). Note that NLTK helps here: use its n-gram utilities instead of rolling our own, and FreqDist for the statistics. (I suspect the complexity will blow up.)
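A simplified, dependency-free sketch of the PPM idea (not a full PPM with arithmetic coding): an order-2 character model that escapes to shorter contexts, estimating compressed size as the sum of -log2(p) over the text. In the real project, NLTK's FreqDist/ConditionalFreqDist would replace the plain Counters, and the escape/smoothing scheme here is a crude stand-in for the standard PPM escape methods:

```python
# PPM-style prediction sketch: longest-matching context first, with a
# rough "escape" charge when falling back to a shorter context.
from collections import Counter, defaultdict
from math import log2

def ppm_estimate_bits(text, order=2, alphabet_size=256):
    contexts = defaultdict(Counter)      # context string -> next-char counts
    total_bits = 0.0
    for i, ch in enumerate(text):
        prob = None
        # Try contexts from longest to shortest (the PPM escape mechanism).
        for k in range(min(order, i), -1, -1):
            ctx = text[i - k:i]
            counts = contexts[ctx]
            seen = sum(counts.values())
            if counts[ch] > 0:
                # Crudely smoothed probability within this context.
                prob = counts[ch] / (seen + 1)
                break
            # Charge for signalling the fallback to a shorter context.
            total_bits += -log2(1 / (seen + 1))
        if prob is None:
            prob = 1 / alphabet_size     # order -1: uniform over bytes
        total_bits += -log2(prob)
        # Update every context order with the observed character.
        for k in range(min(order, i) + 1):
            contexts[text[i - k:i]][ch] += 1
    return total_bits

text = "the quick brown fox jumps over the lazy dog " * 20
bits = ppm_estimate_bits(text)
print(f"~{bits / 8:.0f} bytes estimated vs. {len(text)} bytes raw")
```

On repetitive text the estimate drops far below the raw size, which is the effect the Brown corpus experiment should show at scale.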
Introduce text prediction with LSTMs. Survey the existing material, e.g. the Stanford work, then look at how cmix and PAQ8 do it; this ground has been covered by predecessors. Run benchmarks, e.g. compressing enwik8. Reaching state of the art would be more than I could hope for.
Several LSTM implementations:
https://github.com/kedartatwawadi/NN_compression — this seems to be that Stanford person's project ...
https://github.com/byronknoll/lstm-compress — the cmix implementation; surprisingly it does backpropagation by hand ... Orz
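Whatever the predictor (PPM, LSTM), the link to compression is the same: an arithmetic coder spends about -log2(p) bits on a symbol the model assigned probability p, so total size is the model's cross-entropy on the text. A toy sketch with a hypothetical order-0 adaptive predictor standing in for the LSTM:

```python
# Prediction -> compression: total compressed size is approximately the
# sum of -log2(p) over the model's per-symbol probabilities, which an
# arithmetic coder achieves to within a few bits for the whole message.
# The predictor here is a toy: adaptive symbol counts with add-one smoothing.
from collections import Counter
from math import log2

def compressed_size_bits(text, alphabet_size=256):
    counts = Counter()
    bits = 0.0
    for ch in text:
        # Model's probability for the symbol actually observed next.
        p = (counts[ch] + 1) / (sum(counts.values()) + alphabet_size)
        bits += -log2(p)   # ideal arithmetic-coding cost of this symbol
        counts[ch] += 1    # adapt after coding, as a real coder would
    return bits

text = "abracadabra" * 50
size = compressed_size_bits(text)
print(f"~{size / 8:.1f} bytes vs. {len(text)} bytes raw")
```

Swapping in an LSTM's next-character distribution for the Counter is, in essence, what lstm-compress and the cmix mixer do.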
Lossy Text Compression
This part is sentence-level text summarization.
Two approaches. One is the traditional method: analyze sentence structure, using already-annotated corpora, and simplify the structure.
The other is neural attention models. Reference: "When deep learning meets automatic text summarization".
Natural Language Processing Project Outline