How My New Website Got Indexed by the Search Engines Within a Day of Going Live


Recently I have been working on a website to fill my spare time; after all, every evening after work is pretty dull. My girlfriend is deep into the "non-mainstream" subculture, so to cater to her tastes, I decided to build a site around non-mainstream culture.

The upfront work can be roughly divided into the following steps:

1. Choosing a CMS system: I went back and forth between Drupal and DedeCMS, and finally chose the homegrown DedeCMS. Its localization is well done, and by comparison Drupal, although unusually powerful, is far too complicated for my application. So I set up a local WAMP environment, installed DedeCMS, got a preliminary feel for how the system operates, and planned out the site's sections (columns).

2. Choosing a collector: To be honest, as a solo webmaster I really don't have the time or energy to produce original non-mainstream material, so I considered collecting data automatically. I tried DedeCMS's built-in collection function and didn't think much of it: it is web-based, and on my poor connection the page would simply hang. Collection software should be far more powerful now than it was a few years ago, so I was sure something stronger existed. A search on Google (GG) did indeed turn up a piece of software called the Train Collector (LocoySpider), so I downloaded it and got familiar with it. As an aside, this software is honestly not great: it consumes a lot of memory, and much of its UI and UX design is simply baffling. Fortunately, after some playing around, the basic functions proved sufficient, and its stability and crawl speed are clearly stronger than DedeCMS's. Once I was fully familiar with it, I began setting up collection for each of the sections I had planned.

3. Domain name registration and hosting: I searched online through a lot of related domain names; most were already registered, and I finally settled on www.17feizl.com, meaning "non-mainstream together" (in Chinese, "17" is a homophone for "together"), which is a bit of a wordplay. The hosting plan is 10 GB of space with a 1 GB MySQL database and a dedicated IP, and the speed is acceptable. The only catch is that there is no hotlink protection, which is not ideal for an image-oriented site.
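
Since the host offers no hotlink protection, one application-level workaround (my own suggestion, not something described in this post) is to serve images through a small PHP gate that checks the Referer header. A minimal sketch, assuming a hypothetical uploads/ directory and JPEG-only images:

    <?php
    // img.php: a minimal referer-based hotlink guard. The uploads/
    // directory and the JPEG-only assumption are illustrative, not the
    // site's actual layout. Usage: <img src="img.php?f=photo.jpg">
    $file = basename(isset($_GET['f']) ? $_GET['f'] : ''); // strip path tricks
    $path = __DIR__ . '/uploads/' . $file;

    $referer = isset($_SERVER['HTTP_REFERER']) ? $_SERVER['HTTP_REFERER'] : '';
    $ownSite = preg_match('~^https?://(www\.)?17feizl\.com/~i', $referer);

    // Note: some browsers send no Referer at all; this strict check
    // blocks those requests too, which may or may not be acceptable.
    if (!$ownSite || !is_file($path)) {
        header('HTTP/1.1 403 Forbidden');
        exit;
    }

    header('Content-Type: image/jpeg');
    header('Content-Length: ' . filesize($path));
    readfile($path);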

4. ICP filing: The IDC I chose is fairly formal, so its management is strict: without an ICP record number, you are not allowed to bind a domain name. I had the IDC file the record on my behalf, since the filing cycle is notoriously slow. I was prepared to wait three weeks and planned to use that time to work on the DedeCMS template modifications and program adjustments in parallel. Unexpectedly, my ICP application passed review just two days after submission. My takeaway is that the IDC that filed for me has high credibility with the ICP authorities; had I filed under my identity as an individual webmaster, the cycle would probably have been much longer. One small note: the ID card number I had registered with previously was the old 15-digit form; this time I entered the 18-digit form, and it passed review all the same.

5. Template modification: Since the ICP filing passed review so quickly, my plan was thrown off a little, so I worked overtime on the DedeCMS template changes. The difficulty was nothing much; it was mostly a series of CSS adjustments. There is, however, a "free list" feature that left me baffled for a while: the official Dede site's introduction to it is very vague, and the forums are full of people asking how on earth it is meant to be used. Through repeated fumbling and research, I finally understood it. In fact, a free list can to some extent replace both the article list page and the smart tag: it can apply a different list template and style to each list, which the list page cannot do but the smart tag can, while the smart tag cannot paginate and a free list can. The Dede developers really didn't think this through; it needn't be so complicated. I won't elaborate here on how to use free lists, but one small tip: when a free list replaces a column's list page, you must update the free list each time after generating that column's static pages, otherwise the change has no effect. And remember not to regenerate the column's static list page afterward, otherwise the free list will again have no effect.

6. Processing the collected data: Dede's article summary behavior is strange: it automatically extracts the first n characters of the article, which is superfluous for someone like me who needs custom summaries. In addition, the alt attributes of some of the collected images needed replacing. So I wrote a plugin that can review the keywords and description of every article in each column and amend them, and that can also batch-detect and correct things like the alt text of images inside articles. I also made some hacks to the Dede program itself, so that the summary and keywords of every newly added article are generated automatically by a routine I had written in advance.
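
For reference, here is a rough sketch of the batch alt-text correction described above. The table and column names (dede_archives, dede_addonarticle) follow stock DedeCMS conventions, but the author's actual plugin code isn't shown in the post, so treat this as an illustration only:

    <?php
    // Batch-fill image alt attributes in collected article bodies, using
    // each article's title as the alt text. Connection details and the
    // DedeCMS table layout are assumptions for illustration.
    $db = new mysqli('localhost', 'user', 'pass', 'dedecms');

    $rs = $db->query(
        'SELECT a.id, a.title, b.body
           FROM dede_archives a JOIN dede_addonarticle b ON b.aid = a.id'
    );

    while ($row = $rs->fetch_assoc()) {
        $alt = htmlspecialchars($row['title'], ENT_QUOTES);

        $body = preg_replace_callback('/<img\b[^>]*>/i', function ($m) use ($alt) {
            $tag = $m[0];
            if (preg_match('/\balt\s*=/i', $tag)) {
                // Overwrite the collected (often irrelevant) alt value.
                return preg_replace('/\balt\s*=\s*("[^"]*"|\'[^\']*\')/i',
                                    'alt="' . $alt . '"', $tag);
            }
            // No alt attribute at all: insert one before the closing bracket.
            return preg_replace('/\s*\/?>$/', ' alt="' . $alt . '" />', $tag);
        }, $row['body']);

        $stmt = $db->prepare('UPDATE dede_addonarticle SET body = ? WHERE aid = ?');
        $stmt->bind_param('si', $body, $row['id']);
        $stmt->execute();
    }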

7. Pseudo-original content: For picture articles, my approach is to rewrite the title, changing it almost beyond recognition while not straying from the theme of the pictures. For mixed text-and-image articles, I change the title, add an original closing paragraph, and also try to adjust whatever passages in the body are semantically easy to rephrase, reducing the similarity between the two versions as far as possible.
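
To judge whether a rewrite has drifted far enough from its source, PHP's built-in similar_text() gives a quick percentage. A toy sketch; the 60% threshold is an arbitrary illustration rather than a figure from this post, and since similar_text() compares bytes, it is only a rough gauge for Chinese text:

    <?php
    // Rough check of how close a pseudo-original rewrite still is to the
    // collected source. The 60% threshold is arbitrary, for illustration.
    function needsMoreRewriting($original, $rewritten, $maxSimilarity = 60.0)
    {
        similar_text($original, $rewritten, $percent);
        return $percent > $maxSimilarity;
    }

    $src = 'Collected article text ...';
    $new = 'Rewritten title, adjusted body, plus an original closing paragraph ...';

    if (needsMoreRewriting($src, $new)) {
        echo "Still too similar to the source; keep editing.\n";
    }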

8. Deployment: I don't advocate dumping all the crawled data onto a site in one go. First, when a spider sees a huge volume of data appear in an instant, it can easily judge the site to be a junk station; second, a site that has only just gone live, with that much content, who is it for? My practice is to generate only a few dozen articles at launch and set the rest to "pending review" in the backend, so that they are skipped during static generation. Then every day I pick twenty or thirty articles from the pending pool and publish them, so that to a spider the site looks like it is updating naturally, when in fact all the data was prepared a week earlier and is just one click away. The premise, of course, is that the data has been through pseudo-original processing first; otherwise... But then I found another problem: if the data was captured on August 5 and published on August 9, and the file directories are named by date, the files published on the 9th get saved into the August 5 folder. That is not pretty, and I don't know whether it affects SEO. So I steeled myself, read the Dede source code, and modified it so that every time an article is saved, its sortrank and senddate fields both take the current timestamp. That guarantees the article is published into the current date's folder and carries the correct publication date. I update the archives and arctiny tables synchronously; what the consequence is of updating only the archives table, I haven't tried.
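
The timestamp hack boils down to stamping the article with the current time in both tables whenever it is released. A sketch of the idea, assuming stock DedeCMS field names (arcrank -1 = pending review, 0 = published); the author's actual patch inside the Dede source isn't reproduced in the post:

    <?php
    // Release a pending article and stamp it with "now" in BOTH
    // dede_archives and dede_arctiny, so static generation drops it into
    // the current date's folder. Field names follow stock DedeCMS
    // conventions; treat this as an illustration of the idea only.
    function releaseArticle(mysqli $db, $aid)
    {
        $now = time();

        foreach (array('dede_archives', 'dede_arctiny') as $table) {
            // arcrank 0 = published, -1 = pending review in DedeCMS.
            $stmt = $db->prepare(
                "UPDATE $table SET arcrank = 0, sortrank = ?, senddate = ? WHERE id = ?"
            );
            $stmt->bind_param('iii', $now, $now, $aid);
            $stmt->execute();
        }
    }

Updating only the main table would leave the lightweight index table out of step with it, which is presumably why the author touches both.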

9. Going live: I submitted the site to the major search engines, submitting twice each to Google (GG) and Baidu. In the middle of the night I replied to a post on a forum, appended the domain name with a hyperlink, and went to sleep. When I woke up during the day there was no movement, so I posted entries on NetEase, Sohu, and Sina blogs, with plenty of mentions of the site name and hyperlinks. In the afternoon, watching the logs, I finally found Google's spider. Baidu still hadn't come, so I went to Baidu Knows, replied to one question and asked another, giving the questioner a link to the site as a non-mainstream reference, again with the domain name and hyperlink. That afternoon the Baidu spider came. I then verified the site in Google Webmaster Tools and applied for Google AdSense. After a good dinner I opened Google and found the site already indexed: only the home page, but indexed at last. Entering some of my article titles into Google also turned up the addresses of list pages containing those articles; whether that counts as being indexed or not, I'm not very clear. Baidu, however, still showed no movement.

Then, looking at the logs, I found a lot of 404s. I checked for a long time without working out where the spiders were finding them, and after another hour finally found the reason: before the site's official static generation, I had once generated all the static pages as a test and then deleted them all, thinking that would leave things clean. What I hadn't thought of was that I forgot to delete or update the sitemap and RSS files, and inside those two files sat a large number of links to the previously generated pages! After I updated the two files, the spiders crawled more smoothly and the logs showed plenty of 200s, but 404s were still mixed in; presumably the earlier links had already been crawled into the index, so the spiders kept revisiting those dead pages. Frustrating, and I can only blame my own carelessness for overlooking this detail. So take this as a warning: webmaster friends, be sure to pay attention to those two files.
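
That sitemap/RSS pitfall is easy to guard against: before resubmitting, verify that every URL listed actually resolves. A quick-and-dirty sketch using PHP's get_headers(); the /sitemap.xml path is the conventional location, assumed here:

    <?php
    // Flag stale entries in the sitemap before search engines crawl them;
    // leftover links like these are exactly what produced the 404s above.
    $xml = file_get_contents('http://www.17feizl.com/sitemap.xml');
    preg_match_all('~<loc>(.*?)</loc>~', $xml, $m); // crude <loc> extraction

    foreach ($m[1] as $loc) {
        $headers = @get_headers($loc);             // e.g. "HTTP/1.1 200 OK"
        $status  = $headers ? $headers[0] : 'no response';

        if (strpos($status, '200') === false) {
            echo "STALE: $loc -> $status\n";       // remove or regenerate
        }
    }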

As for Google indexing the site so quickly, luck was certainly a factor, but in summary a few points are still worth pondering: backlinks from blog posts do have a certain effect, and Google Webmaster Tools and Google AdSense are Google's own products, so I believe they carry some weight as well. As for Baidu, that really is down to fortune.

Well, I've talked about a lot, but not to discuss specifically how to choose a CMS, how to buy a domain and hosting space, how to use a collector, how to modify Dede, or how to do SEO. I simply wanted to share my site-building process: in it you can see that some steps can be done ahead of time and some in parallel, and you can also see the problems and traps the process may present. This article is nothing more than a modest contribution; I hope it helps novice webmasters building their first site see more clearly what to do at each step, and what to avoid doing.

You are welcome to visit the new site mentioned in this article, "Together Non-mainstream" (www.17feizl.com). Contact: naozifangde@gmail.com.
