The Technology Behind Instagram's 5 Legendary Engineers (PPT)

Published 2013-03-28 | Source: CSDN | Author: Guo Shemei | Keywords: PostgreSQL, Redis, memcached, Instagram, open source, AWS

Summary: Instagram, developer of a photo-sharing app for iOS and Android, ran on a uniquely lean operating philosophy: a team of just 13 people, only 5 of them engineers, when it was sold to Facebook for roughly $715 million. Behind the miracle sit three simple ideas: minimize the operational burden, monitor everything, and keep the technology simple.

Instagram is the developer of a social photo-sharing app for iOS and Android. With a unique operating philosophy, it attracted 14 million users within about a year of its inception in March 2010. Then, riding the shift to mobile photography — image-processing upgrades, flexible social interaction with Facebook, and an Android release — its user base quickly hit 30 million, and in September 2012 the completed acquisition by Facebook valued it at about $715 million. By the end of this February, its active users had passed 100 million.


Instagram's two founders

In contrast to this rapid growth, Instagram's staffing has been extremely lean: from founding by Kevin Systrom and Mike Krieger, to 4 employees when it raised a $7 million round in 2011, to the 13-person team at the time of the acquisition.

That such a small team could keep pace with explosive user growth while still shipping innovative features is nothing short of another Silicon Valley legend. When the Instagram technical team published "What Powers Instagram: Hundreds of Instances, Dozens of Technologies," it drew an enthusiastic response from startup CTOs. At the time, Instagram was still advertising for "a devops engineer who can tame herds of EC2 instances."

No one expected the acquisition to come so quickly. On April 10, 2012, Facebook announced it was acquiring Instagram. Two days later, co-founder Mike Krieger gave a public presentation on how to become a billion-dollar company, laying out for the first time the Instagram startup story and the technical "secrets" behind it. The full translation of that presentation below should help other technical teams understand the technology a 13-person Instagram team relied on to work miracles:

Instagram technical team:

2010: 2 engineers; 2011: 3 engineers; 2012: 5 engineers

Instagram core principles:

1. Simplicity (keep it minimal)
2. Optimize for minimal operational burden
3. Instrument everything (monitor everything)

I. Startup stage:

The two founders had no back-end production experience;

The product ran on a single hosted machine somewhere in Los Angeles (less powerful than a MacBook Pro);

Storage used CouchDB (Apache CouchDB, a document-oriented database management system);

The product had 25,000 registered users on its first day.

II. Launch stage:

A forgotten favicon.ico file caused a flood of 404 errors in Django.

That was the first lesson learned. Others followed:

`ulimit -n`: sets the maximum number of file descriptors a process may have open at once (e.g. 4096).

`memcached -t 4`: sets the number of threads used to service requests.

prefork vs. postfork: whether worker processes load the application before or after forking.
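The file-descriptor lesson can be seen from inside Python too. The sketch below inspects and (where allowed) raises the per-process limit via the standard `resource` module; the target of 4096 is an example value, not Instagram's actual setting.

```python
# Inspect and raise the per-process open-file-descriptor limit, the same
# knob as `ulimit -n`. The 4096 ceiling is an illustrative assumption.
import resource

soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
print(f"file-descriptor limit: soft={soft}, hard={hard}")

# The soft limit can be raised up to the hard limit without privileges.
ceiling = 4096 if hard == resource.RLIM_INFINITY else min(4096, hard)
if soft < ceiling:
    resource.setrlimit(resource.RLIMIT_NOFILE, (ceiling, hard))
```

Under load, a server that exhausts this limit starts refusing new sockets, which is exactly the kind of failure that surfaces only after launch.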

Clearly, most scaling problems are tedious and hard-won. Amid this constant cycle of problems and fixes, Instagram decided to relocate to Amazon's EC2.

III. Migration stage:

"Let's move to EC2" felt like "replacing every part of a car while it's doing 100 miles an hour."

Specific analysis:

1. Scaling the database

Early days: Django ORM + PostgreSQL (with PostGIS)

PostgreSQL was chosen largely because of PostGIS, which adds spatial-data storage and management to the object-relational database PostgreSQL — roughly the equivalent of Oracle's spatial features — and can be deployed on a standalone server.

As photo volume exploded, EC2 instances topping out at 68 GB of memory clearly could not keep up.

The change: perform vertical partitioning, made easy by Django's database routers.

For example, photos were mapped to a dedicated 'photodb':

```python
def db_for_read(self, model, **hints):
    if model._meta.app_label == 'photos':
        return 'photodb'
    return None
```

A few months later, photodb exceeded 60 GB, and they moved to horizontal partitioning (implemented with "sharding").

But sharding brings problems of its own:

1. Data retrieval (in most cases it is hard to know users' primary access patterns in advance);

2. What happens when a shard grows too large?


A range-based sharding strategy (as in MongoDB) can be used;


3. Performance degrades, especially when bound by EC2 instance disk IO. The solution: pre-splitting, i.e. creating thousands of logical shards in advance and mapping them onto a much smaller number of physical nodes.
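The pre-split idea can be sketched in a few lines: IDs hash into a large, fixed number of logical shards, and a separate table maps logical shards onto physical databases. Rebalancing later only touches the mapping, never the routing code. The shard counts and node names below are illustrative assumptions, not Instagram's actual configuration.

```python
# Pre-splitting: many fixed logical shards mapped onto a few physical
# database nodes. Counts and node names are illustrative only.
LOGICAL_SHARDS = 4096          # fixed forever once chosen
PHYSICAL_NODES = ["db0", "db1", "db2", "db3"]

# Initially, logical shards are spread evenly over the physical nodes.
shard_to_node = {
    shard: PHYSICAL_NODES[shard % len(PHYSICAL_NODES)]
    for shard in range(LOGICAL_SHARDS)
}

def node_for(user_id: int) -> str:
    """Route a user to a physical node via their logical shard."""
    return shard_to_node[user_id % LOGICAL_SHARDS]

# Rebalancing later means moving one logical shard's data and flipping
# one mapping entry; application routing code never changes.
shard_to_node[42] = "db4_new"
```

The payoff is that "a shard got too big" becomes a data-migration task rather than a re-hashing of every key in the system.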



2. Choose the right tool

Caching / denormalized data design

When a user uploads a photo:

1. The user uploads the photo with caption and (optional) geolocation information;

2. It is written synchronously to that user's database shard;

3. The rest is handed off to queues:

a. If geolocation is present, the photo's information is sent via an asynchronous POST request to Solr (the full-text search server Instagram uses for its geo-search API).

b. Follower fan-out: every follower is told that I posted a new photo. How? Each user has a follower list, and on upload the photo ID is pushed to every user on that list. Redis is superb for this workload: fast inserts, fast subset fetches.

c. When a feed needs to be generated, those IDs are used to fetch the denormalized photo data directly from memcached.
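The fan-out-on-write in step (b) can be sketched with plain Python structures standing in for Redis lists (in production a Redis LPUSH/LTRIM pair would do this); user names, the photo ID, and the feed cap below are all illustrative.

```python
# Fan-out-on-write: push a new photo ID onto every follower's feed list.
# Plain dicts/deques stand in for Redis lists here; names are examples.
from collections import defaultdict, deque

FEED_MAX = 500  # illustrative cap, like LTRIM keeping feeds bounded

followers = defaultdict(set)                       # who follows whom
feeds = defaultdict(lambda: deque(maxlen=FEED_MAX))  # per-user feed of photo IDs

def publish_photo(author: str, photo_id: int) -> None:
    """Fan out a new photo ID to every follower's feed (like LPUSH)."""
    for follower in followers[author]:
        feeds[follower].appendleft(photo_id)

followers["alice"] = {"bob", "carol"}
publish_photo("alice", 101)
print(list(feeds["bob"]))   # → [101]
```

Reading a feed then costs one list fetch plus a batch memcached lookup, with no joins at request time.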

Which scenarios is Redis right for?

1. The data structures involved are relatively simple and bounded;

2. Caching complex objects that are fetched frequently;

Don't box yourself in: an in-memory database should not be your primary storage strategy.

On the follow graph

First version: a simple database table (source_id, target_id, status).

It needed to answer queries like: Whom do I follow? Who follows me? Do I follow X? Does X follow me?

As database pressure mounted, Instagram moved the follow graph into Redis — which raised consistency problems between the two stores; inconsistencies are handled by invalidating the cache.
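A Redis-backed follow graph typically keeps two sets per user (following and followers), so all four queries above become cheap set operations. The sketch below uses Python sets as a stand-in for Redis SADD/SISMEMBER/SMEMBERS; the user names are illustrative.

```python
# Two sets per user — the shape a Redis follow graph usually takes.
# Python sets stand in for Redis sets; names are illustrative.
from collections import defaultdict

following = defaultdict(set)   # following[u] = users that u follows
followers = defaultdict(set)   # followers[u] = users who follow u

def follow(src: str, dst: str) -> None:
    """Record src following dst in both directions (like two SADDs)."""
    following[src].add(dst)
    followers[dst].add(src)

follow("me", "alice")
follow("bob", "me")

print(sorted(following["me"]))      # whom do I follow? → ['alice']
print(sorted(followers["me"]))      # who follows me? → ['bob']
print("alice" in following["me"])   # do I follow alice? → True
print("me" in following["alice"])   # does alice follow me? → False
```

The duplication across the two sets is deliberate denormalization: each query stays O(1) or O(set size), at the cost of keeping both sides consistent on every write.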

PostGIS, combined with lightweight memcached caching, supported tens of thousands of requests.

Points to note:

1. Keep the core data stores on common, well-supported components, such as Redis;

2. Never use two tools to do the same job;

3. Staying agile

1. Extensive unit and functional testing

2. Adhere to the DRY (Don't Repeat Yourself) principle

3. Use notification/signal mechanisms for decoupling

4. Do most of the work in Python; drop to C only when absolutely necessary

5. Frequent code reviews, to keep knowledge shared as widely as possible

6. Extensive system monitoring
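The notification/signal decoupling in point 3 is the pattern Django's signals provide: the code raising an event never imports the code reacting to it. A minimal stand-in dispatcher (a simplification for illustration, not Django's actual implementation) looks like this:

```python
# A minimal signal dispatcher illustrating decoupling via notifications.
# Django's signals have the same shape (connect a receiver, send an
# event); this sketch is a simplification, not Django's code.
class Signal:
    def __init__(self):
        self._receivers = []

    def connect(self, receiver):
        """Register a callable to be invoked on send()."""
        self._receivers.append(receiver)

    def send(self, **kwargs):
        """Notify every receiver; collect their return values."""
        return [receiver(**kwargs) for receiver in self._receivers]

photo_uploaded = Signal()

# Feed code subscribes without the upload code knowing it exists.
photo_uploaded.connect(lambda photo_id, **kw: f"fan-out {photo_id}")

results = photo_uploaded.send(photo_id=101)
print(results)  # → ['fan-out 101']
```

New side effects (search indexing, notifications) then become new receivers, leaving the upload path untouched.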

4. Scaling to Android

Keys to adding 1 million new users in 12 hours:

1. Great tools make reads more scalable, e.g. Redis's `SLAVEOF <host> <port>` (a command for dynamically reconfiguring replication while Redis is running);

2. Shorter iteration cycle;

3. Don't reinvent the wheel — e.g. there is no need to write your own monitoring daemon; HAProxy is fully up to the job;

4. Find a strong technical advisor;

5. The technical team maintains an open atmosphere and gives positive feedback to the open source world;

6. Focus on optimization: find ways to make the system twice as fast.

7. Stay agile;

8. Use the fewest moving parts and the cleanest solution;

9. Don't over-optimize unless you know in advance how your system will scale.

The slides keep to Instagram's "simplicity" philosophy: even the technical content is pared to the bone. To help more readers, extra details have been drawn from the engineers' blog to round out the picture — lessons distilled by those 5 engineers, including the very useful open-source tools they rely on.

IV. Other technical details

1. Operating System/Host

They run Ubuntu Linux 11.04 ("Natty Narwhal") on Amazon EC2; this release has proven stable enough on EC2, whereas earlier releases hit all kinds of unpredictable problems under high traffic.

2. Load Balancing

Every request to Instagram's servers goes through load-balancing machines. They started with 2 nginx machines and DNS round-robin; the drawback is that when one machine goes out of service, DNS takes time to update. More recently they switched to Amazon's Elastic Load Balancer (ELB), with 3 nginx instances behind it that can be swapped in and out (when an nginx instance fails its health check, it is automatically pulled from rotation). SSL also terminates at the ELB, easing the CPU load on nginx. DNS is served by Amazon's Route53, for which the AWS console has recently added a good GUI.

3. Application Server

Django runs on Amazon High-CPU Extra-Large instances; as users grew, they reached 25 Django instances (luckily, being stateless, these are easy to scale horizontally). They found their workload compute-bound rather than IO-bound, so the High-CPU Extra-Large type provides just the right ratio of CPU to memory.

They use Gunicorn as the WSGI server. They previously used mod_wsgi under Apache, but found Gunicorn easier to configure and lighter on CPU. Deployments are accelerated with Fabric, which recently gained a parallel mode, so a deploy takes only seconds.
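A Gunicorn deployment of a Django app is usually driven by a small Python config file. The sketch below is a generic example — every value in it is an assumption for illustration, not Instagram's actual configuration.

```python
# gunicorn.conf.py — a generic sketch, not Instagram's actual config.
import multiprocessing

bind = "0.0.0.0:8000"                          # address the WSGI server listens on
workers = multiprocessing.cpu_count() * 2 + 1  # common worker-count rule of thumb
worker_class = "sync"                          # classic Django views are synchronous
timeout = 30                                   # recycle workers stuck for 30 s
```

It would be started with something like `gunicorn -c gunicorn.conf.py myproject.wsgi` (the project name is hypothetical).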

4. Data storage

Most data (user information, photo metadata, tags, etc.) lives in PostgreSQL, split across different Postgres instances. The main shard cluster consists of 12 Quadruple Extra-Large high-memory instances (with 12 replicas in a different zone);

Amazon's network disk system (EBS) could not deliver enough seeks per second, so keeping all the working data in memory became especially important. To get reasonable IO performance, they built software RAID over the volumes, managed with the mdadm tool;

Here, vmtouch proved an excellent tool for managing which data is resident in memory, especially when failing over from one machine to another that has no warmed-up memory profile. They published a script that parses the vmtouch output from a running machine and prints the corresponding vmtouch commands to execute on another machine so that its memory state matches;
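The idea of that script — read vmtouch's verbose output on one box and emit commands that warm the same files elsewhere — can be sketched as below. The output format assumed here (a path line followed by a `[chart] resident/total` line) and the sample paths are assumptions; a real parser would have to match the exact format of the installed vmtouch version.

```python
# Parse assumed `vmtouch -v`-style output (path line, then a
# "[chart] resident/total" line) and print vmtouch commands that would
# warm the same files on another machine. Format is an assumption.
import re

SAMPLE = """\
/data/pg/base/16384/1234
[OOOOoooo] 4/8
/data/pg/base/16384/5678
[OOOOOOOO] 8/8
"""

def warm_commands(vmtouch_output: str):
    commands = []
    lines = vmtouch_output.splitlines()
    for path_line, chart_line in zip(lines[::2], lines[1::2]):
        m = re.search(r"\[(.*)\]\s+(\d+)/(\d+)", chart_line)
        if m and int(m.group(2)) > 0:
            # -t asks vmtouch to touch (load) the file's pages into memory
            commands.append(f"vmtouch -vt {path_line}")
    return commands

for cmd in warm_commands(SAMPLE):
    print(cmd)
```

Running the printed commands on the standby machine pre-faults the same files into its page cache before failover.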

All PostgreSQL instances run in master-replica mode using streaming replication, and EBS snapshots are used to back up the systems frequently. To guarantee snapshot consistency (an approach originally inspired by ec2-consistent-snapshot), they use XFS as the filesystem: XFS lets them freeze and thaw the RAID array while a snapshot is taken. For streaming replication, their favorite tool is repmgr.

For connections from the application servers, they adopted PgBouncer for connection pooling very early on, to great effect on performance. Christophe Pettus's blog is a rich resource on Django, PostgreSQL and PgBouncer secrets.

Photos are stored directly on Amazon S3 — several terabytes of them so far — with Amazon CloudFront as the CDN, which speeds up photo load times for users around the world.

They ran the geo-search API on PostgreSQL for many months, but later migrated it to Apache Solr. Solr exposes a simple JSON interface, so as far as the application is concerned it is just another API to call.

Finally, like any modern web service, they cache with memcached — currently 6 memcached instances, connected to via pylibmc & libmemcached. Amazon recently launched its Elastic Cache service, but it is no cheaper than running their own instances, so they have not switched.

5. Task Queues & Push Notifications

When a user shares an Instagram photo to Twitter or Facebook, or when a realtime subscriber must be notified of a new photo, the task is pushed to Gearman, a task-queue system originally written at Danga. Handling this asynchronously means the media upload itself finishes as quickly as possible, while the "heavy lifting" runs in the background. About 200 workers (all written in Python) consume the queue, dispatching tasks to the appropriate services. The feed fan-out also runs through Gearman, so posting stays responsive even for users with many followers.
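The shape of that pipeline — enqueue fast on the request path, let background workers drain the queue — can be sketched with the standard library (Gearman plays this role in production; the worker count and task names below are illustrative):

```python
# A toy asynchronous task pipeline: the "upload path" only enqueues;
# background workers do the heavy lifting. Task names are illustrative.
import queue
import threading

task_queue = queue.Queue()
done = []

def worker():
    """Drain tasks (e.g. cross-posting, feed fan-out) in the background."""
    while True:
        task = task_queue.get()
        if task is None:          # sentinel: shut this worker down
            break
        done.append(f"handled {task}")
        task_queue.task_done()

# Two workers here; Instagram ran ~200 Gearman workers in production.
threads = [threading.Thread(target=worker) for _ in range(2)]
for t in threads:
    t.start()

# The upload path returns as soon as its tasks are enqueued.
task_queue.put("share_to_twitter:101")
task_queue.put("fanout_feed:101")
task_queue.join()                 # wait for background work (demo only)

for _ in threads:
    task_queue.put(None)
for t in threads:
    t.join()
print(sorted(done))
```

A real queue like Gearman adds what this sketch lacks: persistence, cross-machine distribution, and retries.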

For push notifications, the most cost-effective solution they found is an open-source Twisted service that has handled more than a billion notifications for them, absolutely reliably.

6. Monitoring

They graph metrics with Munin and have written many custom Munin plugins on top of python-munin to graph things that are not system-level (signups per minute, photos posted per second, etc.). Pingdom serves as external monitoring, and PagerDuty handles incident notifications.

For Python error reporting they use Sentry, an awesome open-source Django app written by a Disqus engineer. At any moment, they can see in real time what errors the system is throwing.

The story doesn't end here. For an Instagram that has now crossed the 100-million line, the creed is unchanged: optimize for minimal operational burden, use every available (open source) tool and cloud platform, and keep the technology minimal. If you doubt it, look at the latest news — Instagram's load-balancing weapon, Eureka, fills a big gap in Amazon Web Services — as the team continues to write its own technical saga. (With analysis contributions from @Heart Li; revised by Zhonghao)

Link: Mike Krieger's presentation "How to Become a Billion-Dollar Company" (PowerPoint)



