Notes on Social Network Data Mining

Source: Internet
Author: User
Tags: oauth, string tags, gmail, mail, couchdb, idf, nltk

Social networks have moved from fad to mainstream, and some suggest that the World Wide Web (WWW) will give way to a Giant Global Graph (GGG). Further, the semantic web (see the FOAF project, www.foaf-project.org) is widely seen as the direction the web is heading.

 

The Natural Language Toolkit (NLTK) provides a large collection of tools for text analysis, including calculation of common metrics, information extraction, and NLP. The simplest way to answer "what are people discussing?" is basic word-frequency analysis. Graphviz is a staple of the visualization community; DOT is the simple text-based graph format Graphviz uses. Canviz (http://code.google.com/p/canviz) renders Graphviz diagrams on the <canvas> element of a web browser.

 

Microformats (http://www.microformats.org) provide an effective mechanism for embedding "smarter data" into web pages, and are easy for content authors to implement. A microformat is simply a convention for explicitly including structured data in web pages in an entirely value-added way. Typical microformats include XFN (http://gmpg.org/xfn), geo (http://microformats.org/wiki/geo), hRecipe (http://microformats.org/wiki/hrecipe), and hReview (http://microformats.org/wiki/hreview). Of these, geo is particularly noteworthy; KML (http://code.google.com/apis/kml/documentation) output may be the simplest way to visualize geo data.

 

The BeautifulSoup package makes simple web scraping straightforward. The two criteria for evaluating a crawling algorithm are performance and quality. The Social Graph Node Mapper (http://code.google.com/p/google-sgnodemapper) open source project normalizes URLs, for example handling the presence or absence of a leading "www".
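The normalization idea can be sketched as below. This is only illustrative of the general approach (lowercasing, dropping "www", trimming trailing slashes); the real Node Mapper project applies many more site-specific rules:

```python
from urllib.parse import urlparse, urlunparse

def normalize_url(url):
    """Canonicalize a URL so equivalent spellings compare equal:
    lowercase scheme and host, drop a leading 'www.', strip trailing
    slashes, and discard query/fragment (a simplifying assumption)."""
    parts = urlparse(url)
    host = parts.netloc.lower()
    if host.startswith("www."):
        host = host[4:]
    path = parts.path.rstrip("/") or "/"
    return urlunparse((parts.scheme.lower(), host, path, "", "", ""))

# Both spellings collapse to http://example.com/profile
print(normalize_url("HTTP://WWW.Example.com/profile/"))
print(normalize_url("http://example.com/profile"))
```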

 

Captured data can be imported into CouchDB (http://couchdb.apache.org), a document-oriented database. It provides map/reduce functions that can be used to index data, and a fully REST-based interface (http://en.wikipedia.org/wiki/Representational_State_Transfer) that lets it integrate into any web architecture; its replication capabilities make it easy for others to copy and analyze your database. CouchOne provides binary downloads, and Cloudant provides online hosting.
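The map/reduce indexing idea is worth pinning down. CouchDB views are normally written in JavaScript and queried over its REST interface; the plain-Python sketch below (with made-up documents) just simulates what a view with the built-in `_sum` reduce computes:

```python
# Toy documents, shaped like CouchDB docs
docs = [
    {"_id": "1", "screen_name": "alice", "followers": 120},
    {"_id": "2", "screen_name": "bob", "followers": 45},
    {"_id": "3", "screen_name": "alice", "followers": 130},
]

def map_fn(doc):
    # emit(key, value) pairs, as a CouchDB map function would
    yield doc["screen_name"], doc["followers"]

def reduce_fn(values):
    # equivalent of CouchDB's built-in _sum reduce
    return sum(values)

# Build the index: map over every document, group emitted rows by key
index = {}
for doc in docs:
    for key, value in map_fn(doc):
        index.setdefault(key, []).append(value)

# Reduce each key's values to the final view result
view = {key: reduce_fn(vals) for key, vals in index.items()}
print(view)  # {'alice': 250, 'bob': 45}
```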

 

Lucene (http://lucene.apache.org/java/docs/index.html) is a high-performance, Java-based full-text indexing and search engine library for adding keyword search to applications. The couchdb-lucene project (http://github.com/rnewson/couchdb-lucene) is a web-service wrapper around Lucene's core functionality that can index CouchDB documents.

 

SIMILE Timeline (http://simile-widgets.org/wiki/Timeline) is a powerful, easy-to-use tool for visualizing event-centric data, and is especially useful for exploring mail data. getmail, poplib, and imaplib are all good mail-oriented Python tools, and the Graph Your Inbox Chrome extension (http://graphyourinbox.com) provides authorized access to mail data.
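The bridge from a mailbox to a timeline is just parsing message headers into event records. A minimal sketch with the stdlib `email` module (the raw message here is invented; in practice poplib/imaplib would supply it from a real mailbox):

```python
from email import message_from_string
from email.utils import parsedate_to_datetime

raw = """\
From: alice@example.com
To: bob@example.com
Subject: Meeting notes
Date: Mon, 01 Mar 2010 09:30:00 +0000

Here are the notes from this morning.
"""

msg = message_from_string(raw)

# The fields an event-centric visualization like SIMILE Timeline needs:
# who, what, and an ISO 8601 timestamp for the event's start
event = {
    "sender": msg["From"],
    "title": msg["Subject"],
    "start": parsedate_to_datetime(msg["Date"]).isoformat(),
}
print(event)
```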

 

OAuth 2.0 (http://tools.ietf.org/html/draft-ietf-oauth-v2-10) is an emerging authorization standard that promotes a better user experience (http://hueniverse.com/2010/05/introducing-oauth-2-0): client applications are authorized to access protected resources without ever handling usernames and passwords.

 

Redis (http://code.google.com/p/redis) is a data structure server: fast, easy to install, well documented, and served by capable Python clients. "Redis: Under the Hood" (http://pauladamsmith.com/articles/redis_under_the_hood.html) is a good article. Redis provides native operations on common collection types.

 

Infochimps (http://infochimps.org) is an organization that provides large data catalogs, including a Strong Links API useful for Twitter measurement and analysis. Ubigraph (http://ubigraphlab.net/ubigraph) is a 3D interactive graph visualization tool with Python bindings.

 

Classifying things and deriving hierarchies from them is a basic form of human intelligence. A taxonomy is essentially a hierarchy that arranges elements into parent/child relationships. A folksonomy, by contrast, describes the collaborative tagging and social indexing efforts that have emerged across various web ecosystems; essentially, it is a term for the decentralized universe of tags that emerges as a collective intelligence mechanism. For more about finding commonalities, see http://radar.oreilly.com/2010/07/data-science-democratized.

 

A tag cloud is the most obvious choice for visualizing entities extracted from social data. The open source rotating tag cloud WP-Cumulus (http://code.google.com/p/world-culumns-goog-vis/wiki/Userguide) is a good choice. Kevin Hoffman's paper "In Search of the Perfect Tag Cloud" (http://files.blog-city.com/files/J05/88284/B/insearchofperfecttagcloud) provides a good overview of design strategies for building tag clouds.

 

LinkedIn firmly believes that a person's professional network data is private. You can obtain authorization credentials at http://developer.linkedin.com and use the LinkedIn API to mine the full richness of the available data, subject to rate limiting.

 

Intelligent clustering can enable a remarkable user experience. Common similarity metrics used for clustering:

1) Edit distance (Levenshtein distance): http://en.wikipedia.org/wiki/Vladimir_Levenshtein

2) n-gram similarity: compute all possible n-grams of two strings and measure similarity by counting the n-grams they have in common.

3) Jaccard distance: measures the dissimilarity between two sets, computed as the number of items not shared by the two sets divided by the total number of distinct items (i.e., one minus the Jaccard similarity).

4) MASI distance: http://www.cs.columbia.edu/~Becky/pubs/lrec06masi.pdf

Greedy clustering approaches are based mainly on the MASI metric. Hierarchical clustering computes the full matrix of distances between all items and iteratively merges the items that fall within a minimum distance threshold. k-means clustering takes n points in a multidimensional space and partitions them into k clusters.
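The first and third metrics above are simple enough to sketch in plain Python (NLTK ships ready-made versions as `nltk.metrics.distance.edit_distance`, `jaccard_distance`, and `masi_distance`; the strings and sets here are illustrative):

```python
def edit_distance(a, b):
    """Levenshtein distance via the classic dynamic-programming recurrence."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]

def jaccard_distance(s1, s2):
    """1 - |intersection| / |union| of two sets."""
    return 1 - len(s1 & s2) / len(s1 | s2)

def ngrams(s, n=2):
    """All character n-grams of a string, as a set."""
    return {s[i:i + n] for i in range(len(s) - n + 1)}

print(edit_distance("kitten", "sitting"))   # 3
print(jaccard_distance({"a", "b"}, {"b", "c"}))
# n-gram similarity: compare two tags via the bigrams they share
print(jaccard_distance(ngrams("social"), ngrams("socialist")))
```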

 

The k-means method is generally useful for clustering geographic information, and geocoders are readily available, e.g. via geopy (http://code.google.com/p/geopy/wiki/GettingStarted). The Dorling cartogram in Protovis is essentially a bubble chart over geographic clusters. The open source project geodict is a good tool for extracting locations from text (http://petewarden.typepad.com/searchbrowser/2010/10/geodict-an-open-source-tool-for-extracting-locations-from-text.html).
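A minimal k-means sketch over (lat, lon) points, with made-up coordinates around two cities. It assumes Euclidean distance is an acceptable stand-in for geodesic distance at small scales, and uses a deterministic initialization for simplicity (real implementations use random restarts or k-means++):

```python
import math

def kmeans(points, k, iterations=10):
    """Partition points into k clusters by alternating assignment to the
    nearest centroid and recomputation of centroids as cluster means."""
    # deterministic init for the sketch: spread centroids across the input
    centroids = [points[i * len(points) // k] for i in range(k)]
    for _ in range(iterations):
        clusters = [[] for _ in range(k)]
        for p in points:
            i = min(range(k), key=lambda c: math.dist(p, centroids[c]))
            clusters[i].append(p)
        centroids = [
            tuple(sum(x) / len(c) for x in zip(*c)) if c else centroids[i]
            for i, c in enumerate(clusters)
        ]
    return clusters

# Two obvious geographic groups: points near San Francisco and near New York
sf = [(37.77, -122.42), (37.80, -122.27), (37.69, -122.31)]
ny = [(40.71, -74.01), (40.73, -73.93), (40.65, -74.08)]
clusters = kmeans(sf + ny, k=2)
print(sorted(len(c) for c in clusters))  # [3, 3]
```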

 

Positioned somewhere between Twitter and blogs, Google Buzz provides a RESTful API (http://code.google.com/apis/buzz/v1/using-rest.html). Zipf's law, an empirical law of natural language, asserts that a word's frequency in a corpus is inversely proportional to its rank in the frequency table (http://en.wikipedia.org/wiki/Zipf's_law). The Brown Corpus (http://en.wikipedia.org/wiki/Brown_Corpus) is a reasonable starting point for experimentation.

 

TF-IDF (term frequency-inverse document frequency) scores a word as the product of its term frequency and its inverse document frequency, yielding a normalized measure of the word's relative importance in a document that can be used to query a corpus: TF-IDF = TF * IDF. TF captures the importance of a word within a specific document; IDF captures the importance (rarity) of the word across the entire corpus.
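A direct sketch of that formula, using the common choices of TF as term count normalized by document length and IDF as log(N / document frequency); the tiny corpus is invented, and the function assumes the term occurs in at least one document:

```python
import math

def tf_idf(term, doc, corpus):
    """TF-IDF = TF * IDF for a term in one document of a corpus.
    doc and corpus entries are token lists."""
    tf = doc.count(term) / len(doc)                 # term frequency
    df = sum(1 for d in corpus if term in d)        # document frequency
    idf = math.log(len(corpus) / df)                # inverse document frequency
    return tf * idf

corpus = [
    "the graph grows daily".split(),
    "mining the social graph".split(),
    "zipf law of word frequency".split(),
]
doc = corpus[1]
print(tf_idf("social", doc, corpus))  # rare in the corpus -> higher score
print(tf_idf("the", doc, corpus))     # common in the corpus -> lower score
```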

 

The TF-IDF model treats a document as an unordered bag of words. Another way to model documents is the vector space model: each document is a vector in a multidimensional space, and the distance between two vectors indicates the similarity of the corresponding documents. To compare two documents, generate a term vector for each and compute the dot product of their unit vectors. This cosine similarity makes it easy to compare documents.
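Cosine similarity as described above, sketched with raw term counts as the vector components (in practice the components would typically be TF-IDF weights); the sample documents are illustrative:

```python
import math
from collections import Counter

def cosine_similarity(doc1, doc2):
    """Dot product of the unit term vectors of two tokenized documents."""
    v1, v2 = Counter(doc1), Counter(doc2)
    dot = sum(v1[t] * v2[t] for t in v1.keys() & v2.keys())
    norm1 = math.sqrt(sum(c * c for c in v1.values()))
    norm2 = math.sqrt(sum(c * c for c in v2.values()))
    return dot / (norm1 * norm2)

a = "mining the social web".split()
b = "mining the semantic web".split()
print(cosine_similarity(a, b))  # 0.75: three of four terms shared
print(cosine_similarity(a, a))  # 1.0 for identical documents
```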

 

The xoauth tool (http://code.google.com/p/google-mail-xoauth-tools/wiki/XoauthDotPyRunThrough) provides access to Gmail; the xoauth.py script can generate OAuth tokens and secrets for anonymous users. Dumbo is a project that lets you write and run Hadoop programs in Python. Scrapy (http://scrapy.org) is an easy-to-use, sophisticated web crawling and scraping framework.

 

A typical NLP pipeline with NLTK:

1) End-of-sentence (EOS) detection

2) Tokenization

3) Part-of-speech (POS) tagging

4) Chunking

5) Extraction

Sentences can be parsed with regular expressions; for details see "Unsupervised Multilingual Sentence Boundary Detection" (http://www.linguistics.ruhr-uni-bochum.de/~Strunk/ks2005final.pdf). The premise of Luhn's summarization algorithm is that the key sentences of a document are those containing the most frequently occurring words; Luhn makes no attempt to understand the data at a deeper semantic level. For sentence-centric entity analysis, refer to the Penn Treebank tags (http://bulba.sdsu.edu/jeanette/thesis/PennTags.html).
Word stemming in NLTK helps when analyzing semantic triples, and WordNet (http://wordnet.princeton.edu) can look up additional senses of the terms in a triple.
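Luhn's premise can be sketched very simply: score each sentence by the corpus-wide frequency of its (non-stopword) words and pick the top scorer. Real implementations also cluster frequent-word positions within each sentence; this sketch (with an invented three-sentence text and a toy stopword list) just sums frequencies:

```python
import re
from collections import Counter

STOPWORDS = {"the", "a", "of", "and", "to", "is", "in"}

def luhn_top_sentence(text):
    """Return the sentence whose words are most frequent across the text."""
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    words = [w for w in re.findall(r"[a-z]+", text.lower())
             if w not in STOPWORDS]
    freq = Counter(words)

    def score(sentence):
        # stopwords score 0 because they were excluded from freq
        return sum(freq[w] for w in re.findall(r"[a-z]+", sentence.lower()))

    return max(sentences, key=score)

text = ("Graphs model social data. "
        "Social graphs connect people to people. "
        "The weather was pleasant.")
print(luhn_top_sentence(text))  # the middle sentence: densest in frequent words
```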

 

Facebook applications must be hosted in your own server environment; the development process starts at http://facebook.com/developers. Facebook supports the Open Graph protocol (http://opengraphprotocol.org). For interactive visualization, the JavaScript InfoVis Toolkit (http://thejit.org) is a good option; its sunburst is a space-filling visualization of hierarchical (tree) data.

 

Web 3.0 looks set to be the semantic web, and FuXi is a powerful logical reasoning system for it. FuXi uses a technique called forward chaining (http://en.wikipedia.org/wiki/Forward_chaining) to deduce new information from existing information.
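Forward chaining itself is easy to illustrate: keep applying if-then rules to a set of facts until no rule produces anything new. The toy facts and rules below are invented; FuXi applies the same fixed-point idea to RDF triples and N3 rules:

```python
def forward_chain(facts, rules):
    """Repeatedly apply rules (premises -> conclusion) until a fixed point:
    no rule can add a fact that is not already known."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if set(premises) <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

rules = [
    # knows implies known_by (an inverse-property rule)
    ((("alice", "knows", "bob"),), ("bob", "known_by", "alice")),
    # a two-premise rule that only fires after the first rule has fired
    ((("bob", "known_by", "alice"), ("bob", "knows", "carol")),
     ("alice", "reaches", "carol")),
]
facts = {("alice", "knows", "bob"), ("bob", "knows", "carol")}
derived = forward_chain(facts, rules)
print(("alice", "reaches", "carol") in derived)  # True
```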

 

Do not throw away what you have in pursuit of what you lack; remember that what you have now was once what you hoped for.
