Source: http://blog.csdn.net/chaijunkun/article/details/17279565. I revise and improve my posts from time to time, so I strongly recommend reading this article at its original source.
I have been writing a technical blog for more than half a year, and at the end of the year I think it is worth writing something to sum up my experience and share it with you. Recently I have been working on a data synchronization project: after receiving the export files that the data center distributes on a schedule, I parse them line by line according to fixed field definitions and import the results into my own database. The requirement is simple. Let's look at an example:
^_^21635265^_^test title^_^10^_^20^_^15
Assume that the line above is one record of text data. The column separator here is ^_^ (note that it is multi-character), and the fields are defined as:
release date^_^article ID^_^article title^_^comment count^_^click count^_^top count
Since we trust the data center, we skip validity checks on fields such as the release date, article ID, comment count, click count, and top count. The focus is on parsing the title, because the title is entered by the user and may contain any visible characters. We therefore have to handle the case where the title itself contains our delimiter: after calling data.split(), the algorithm takes the first two fields from the front, then the top count, click count, and comment count from the back; whatever remains in the middle is the title. However, this only handles titles of the following forms:
"test title^_^", "test^_^title", and "test title"
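The front-two/back-three splitting strategy described above can be sketched in Java as follows. This is a minimal sketch under my own assumptions: the class and field names are mine, and the sample date value is a placeholder, not real export data.

```java
import java.util.Arrays;
import java.util.regex.Pattern;

public class RecordParser {
    static final String SEP = "^_^";

    // split() takes a regex, so the '^' characters must be quoted.
    public static String[] parse(String line) {
        String[] parts = line.split(Pattern.quote(SEP), -1);
        // Two fields from the front, three from the back; everything in
        // between is rejoined as the title, so a title containing the
        // complete delimiter is still recovered correctly.
        String releaseDate = parts[0];
        String articleId   = parts[1];
        String comments    = parts[parts.length - 3];
        String clicks      = parts[parts.length - 2];
        String top         = parts[parts.length - 1];
        String title = String.join(SEP,
                Arrays.copyOfRange(parts, 2, parts.length - 3));
        return new String[] { releaseDate, articleId, title, comments, clicks, top };
    }

    public static void main(String[] args) {
        // The title "test^_^title" contains the full delimiter but survives.
        String[] r = parse("20131210^_^21635265^_^test^_^title^_^10^_^20^_^15");
        System.out.println(r[2]); // test^_^title
    }
}
```

This works because a title containing the *complete* delimiter simply produces extra middle parts, which the rejoin step restores.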
What it does not handle is this case:
test title^_
That is to say, the title ends with the first half of the separator, so that this partial separator and the real separator that follows merge into what looks like a complete separator, for example:
^_^21635265^_^test title^_^_^10^_^20^_^15
When this line is split, the comment-count field comes out as "_^10", which cannot be converted to an Integer, so parsing fails with an error.
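The failure can be reproduced directly. A minimal sketch, using placeholder field values of my own (the date and counts are illustrative, not the real export data):

```java
import java.util.regex.Pattern;

public class SplitAmbiguityDemo {
    public static void main(String[] args) {
        String sep = "^_^";
        // A title ending with the first half of the delimiter.
        String title = "test title^_";
        String line = "20131210" + sep + "21635265" + sep + title
                + sep + "10" + sep + "20" + sep + "15";
        // line is now "...test title^_^_^10...": the title's trailing "^_"
        // plus the real separator's leading "^" form a false delimiter,
        // and split() matches it one position too early.
        String[] parts = line.split(Pattern.quote(sep));
        System.out.println(parts[3]); // prints "_^10", not "10"
        Integer.parseInt(parts[3]);   // throws NumberFormatException
    }
}
```

Because split() scans left to right for the first match, the false delimiter wins and the leftover "_^" is pushed into the next field.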
The separator used in this project was decided by other colleagues long ago. Only after this problem appeared did it become clear that it should be changed to a single character, which leaves no room for ambiguity.
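A single-character separator removes the ambiguity entirely: the character is either present at a position or it is not, so no partial match can fuse with a real separator. A minimal sketch, assuming a non-printable control character as the replacement (my choice for illustration, not the project's actual decision):

```java
import java.util.regex.Pattern;

public class SingleCharDelimiter {
    // '\u0001' is a control character users cannot normally type,
    // so it should never appear inside a title (an assumption of this sketch).
    static final String SEP = "\u0001";

    public static String join(String... fields) {
        return String.join(SEP, fields);
    }

    public static String[] parse(String line) {
        // limit -1 keeps trailing empty fields
        return line.split(Pattern.quote(SEP), -1);
    }

    public static void main(String[] args) {
        // A title ending in "^_" no longer causes a false split.
        String line = join("20131210", "21635265", "test title^_", "10", "20", "15");
        String[] parts = parse(line);
        System.out.println(parts[2]); // test title^_
    }
}
```

With a single-character separator there is no "half separator" to worry about, so the front-two/back-three workaround is no longer needed.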
Later, when I imported some other data into Excel for analysis, I found that Excel had long been aware of this problem: when you specify a custom separator, only a single character is allowed.