I am a beginner, and I can only get at data in simple ways: databases, XML, and text files. If I were running a portal site myself, I would have to update the site's news and other daily-changing content by hand. Every day, tedious technical chores just to collect some data and insert it into the database. Some people might tell me to use a crawler to collect the data, but I still find that a bit troublesome. So today I had a whim. Let me give an example to illustrate. A normal project uses a database; with MSSQL, each operation gets its data through a simple SQL statement, for example select * from table. My idea is to design a custom syntax for collecting any resource on the local machine or on the network.
Let's talk about the process:
1. Assume Baidu's database servers store the site's data.
2. Baidu's programmers write a set of programs to present that data.
3. Users visit Baidu's domain name, see nicely rendered pages, and get the information they want.
Given those three points, do you think it is possible for us to connect to Baidu's database and take the data for our own use? Obviously not. But we can easily get the data Baidu displays to its users, namely the web page. From the page's source code, we can write code to analyse it and finally turn it back into raw data. Okay, we have the data; how do we use it?
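To make that step concrete, here is a minimal sketch in Python (the real SearchData code is linked at the end of this post; the names and structure below are mine, purely for illustration). It downloads a page's source over HTTP and turns its `<a>` tags back into plain rows of data:

```python
# Sketch only: fetch a page and turn its <a> tags into plain data rows.
from html.parser import HTMLParser
from urllib.request import urlopen


class LinkCollector(HTMLParser):
    """Collects each hyperlink as a row: url, title, id, name, target, body."""

    def __init__(self):
        super().__init__()
        self.rows = []
        self._current = None   # the <a> tag we are currently inside

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            a = dict(attrs)
            self._current = {
                "url": a.get("href") or "",
                "title": a.get("title") or "",
                "id": a.get("id") or "",
                "name": a.get("name") or "",
                "target": a.get("target") or "",
                "body": "",
            }

    def handle_data(self, data):
        if self._current is not None:
            self._current["body"] += data

    def handle_endtag(self, tag):
        if tag == "a" and self._current is not None:
            self.rows.append(self._current)
            self._current = None


def fetch_raw_data(url, encoding="gb2312"):   # gb2312 is this post's default encoding
    """Download a page and give back its source plus the extracted link rows."""
    source = urlopen(url).read().decode(encoding, errors="replace")
    collector = LinkCollector()
    collector.feed(source)
    return source, collector.rows
```

Calling `fetch_raw_data("http://www.cnblogs.com/")` would return the raw source plus one row per hyperlink, which is exactly the kind of raw data the rest of this post wants to query.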
My goal is to build a framework that lets you obtain the raw data behind a website's pages in a simple way. The framework is only an intermediate layer that handles the analysis and processing; its users never see how the data is actually obtained.
As far as I know, there is no mature work of this kind on the Internet yet, is there?
The ultimate purpose of this framework is to turn the Internet into one super-large database: whatever you want, just query for it. Hey, not a bad idea, right?
To help you understand the idea better, I put together a simple example. If you know of related work, or have better implementation ideas, please remind me; thank you.
In this example, we can use custom statements to obtain the source code and hyperlinks of a webpage.
Select link.url, link.body from [URL: http://www.cnblogs.com/; encoding=utf-8]
Brief syntax description
Select * from [URL: http://URL; encoding=page encoding] gets the source code of a page.
Select link.* from [URL: http://URL; encoding=page encoding] gets all the hyperlinks of a page.
The encoding part is optional; the default page encoding is gb2312.
Available hyperlink attributes:
link.url, link.title, link.id, link.name, link.target, link.body
Currently, only two simple syntaxes are available for testing.
In the future, various query conditions will be added to make data queries more flexible.
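For anyone curious how the two statement forms might be recognized, here is a rough parsing sketch in Python. The regular expression and function names are mine and are not taken from the SearchData download; they only illustrate the statement-to-(URL, encoding, field list) step:

```python
# Sketch only: split the two supported statements into their parts.
import re

# Matches:  Select <fields> from [URL: <url>]  or  [URL: <url>; encoding=<enc>]
STATEMENT = re.compile(
    r"select\s+(?P<fields>.+?)\s+from\s+\[url:\s*(?P<url>[^;\]]+?)\s*"
    r"(?:;\s*encoding\s*=\s*(?P<enc>[^\]]+?)\s*)?\]",
    re.IGNORECASE,
)


def parse_statement(statement):
    """Split one of the custom SELECT statements into (url, encoding, fields)."""
    m = STATEMENT.match(statement.strip())
    if m is None:
        raise ValueError("unsupported statement: " + statement)
    url = m.group("url")
    encoding = m.group("enc") or "gb2312"   # gb2312 is the default page encoding
    fields = [f.strip().lower() for f in m.group("fields").split(",")]
    return url, encoding, fields


# Syntax 1: the whole page source.
print(parse_statement("Select * from [URL: http://www.cnblogs.com/]"))
# Syntax 2: all hyperlinks of the page, with an explicit encoding.
print(parse_statement("Select link.* from [URL: http://www.cnblogs.com/; encoding=utf-8]"))
```

The parsed pieces would then be handed to the fetch-and-analyse step sketched earlier: * returns the page source, link.* or a list of link. fields returns hyperlink rows, and the future query conditions mentioned above would simply become filters applied to those rows.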
Source code: http://files.cnblogs.com/dirain/SearchData.rar
The way I wrote the code is probably the clumsiest possible. I hope you will leave a comment and help me with better implementation ideas. Thank you.