Update Twitter and FriendFeed from the Linux Command Line
Learn how to use GNU Wget and cURL to send status updates to Twitter and FriendFeed without a desktop Twitter client, and how to follow feeds from Twitter and FriendFeed using the Linux command line.
People choose an operating system like Linux for its overall utility: it is stable, fast, inexpensive, and runs on all kinds of hardware. It has also been highly flexible from the start, thanks largely to its powerful command-line interface (CLI), the shell.
This article focuses on two tools, GNU Wget and cURL. You will learn how to use them to send status updates to social networking sites without a Twitter desktop application, and how to follow feeds from Twitter and FriendFeed from the command line.
Do you need to know the API details? This article does not go into the APIs in depth. Both Twitter and FriendFeed expose APIs that can be accessed easily through a Representational State Transfer (REST) interface.
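To give a flavor of what such a REST call looks like, here is a minimal sketch of posting a status update with cURL against Twitter's original basic-auth endpoint. The username, password, and message are placeholders, and that API has since been retired, so the command is echoed as a dry run rather than executed:

```shell
#!/bin/sh
# Illustrative placeholders -- not real credentials.
USER="alice"
PASS="secret"
STATUS="Posting from the Linux command line"

# Twitter's original REST API accepted an HTTP POST with a 'status'
# field over basic authentication (-u sets the credentials, -d the
# POST body). Echo the command instead of running it, since this
# endpoint no longer exists:
echo "curl -u $USER:$PASS -d \"status=$STATUS\" http://twitter.com/statuses/update.xml"
```

FriendFeed's API followed the same REST pattern, which is why a single tool like cURL can talk to both services.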
GNU Wget history
GNU Wget is a flexible tool for retrieving data (such as HTML files, MP3s, and images) from servers. Its non-interactive, robust, and recursive operation makes it very popular. It is mainly used to capture content from Web sites, for example to read HTML pages offline (links in the downloaded pages are automatically adjusted to support this).
For example, to obtain a page found in a specific URL, run the following command:
wget http://wikipedia.org/
This command downloads the Wikipedia home page found at that URL to your computer as a file named index.html, because that is the page GNU Wget finds there. The tool does not follow any links found on that page, but following them is easy:
wget -r http://wikipedia.org/
In this command, the -r switch tells GNU Wget to recursively follow every link on that page, so the tool captures the entire site. You probably do not want to use this switch on a site like Wikipedia, however, because downloading the whole site for local access would take a very long time (depending on the available bandwidth).
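If you do want a partial mirror, wget can bound the recursion. A minimal sketch, again echoed as a dry run rather than executed; the URL is only an example:

```shell
#!/bin/sh
# -r turns on recursive retrieval; -l 1 caps the recursion at one level
# of links from the start page; --no-parent prevents wget from climbing
# above the starting URL. Echoed instead of executed to avoid a live
# download:
CMD="wget -r -l 1 --no-parent http://wikipedia.org/"
echo "$CMD"
```

With -l 1, only the start page and the pages it links to directly are fetched, which keeps the download to a manageable size.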
Naming tools
GNU Wget is based on Geturl, a program developed by Hrvoje Nikšić. Nikšić renamed his tool Wget to distinguish it from GetURL, an existing Amiga tool with similar functionality written in ARexx.