"Editor's note" in the famous tweet debate: MicroServices vs. Monolithic, we shared the debate on the microservices of Netflix, Thougtworks and Etsy engineers. After watching the whole debate, perhaps a large majority of people will agree with the service-oriented architecture. In fact, however, MicroServices's implementation is not simple. So how do you build an efficient service-oriented architecture? Here we can look at the sharing with Steve Robbins, MixRadio's chief architect.
The following is the translation:
MixRadio provides a free music streaming service that delivers customized radio stations by learning the user's listening habits; all the user needs to do is tap a few simple buttons. With extremely simple interactions and a mobile-first approach, MixRadio offers an incredible level of customization. It works for anyone discovering new music, not just passionate music enthusiasts. Using MixRadio is like turning on a radio, but one that is entirely in your hands: tap Play Me, and your personal radio station starts. MixRadio includes hundreds of hand-crafted mixes based on regional genres and styles, lets you create your own mixes, and of course supports favorites and offline playback, so even without the internet the music can keep playing.
The service is now available on Windows Phone, Windows 8, Nokia Asha phones, and the Web, and the back-end systems that support these applications have been polished over several years. Let's look at the architecture of the system.
Architecture Overview
In 2009, we got the chance to refactor the back-end systems. Evolving since then, the back end is now composed of a series of RESTful services, often called "microservices." These systems differ in function, size, development language, and data storage, but they share some essential commonalities, such as well-defined RESTful APIs, independent scalability, and monitoring capabilities. Around these core services, the system also has two similar proxy services, each configured to expose a subset of the RESTful resources for a different purpose. The "Internal Tooling API Proxy" exposes internal APIs to tools for customer service, content management, song publishing, monitoring, and other scenarios. Client applications and third-party developers use the RESTful API via the "External API Auth Layer", which is also responsible for enforcing the appropriate licensing restrictions and authorization schemes, such as OAuth2.
For end users, there is a wide range of applications serviced through the API. We offer a separate HTML5 web site, as well as applications for Nokia phones and Windows 8 devices, and we also open some of the same APIs to third-party developers. We won't go into too much architectural detail here; if you want more information, you can read the articles previously published by my colleague Nathan. We will review the main parts of the system, and the figure below shows the components of the system, which consists of more than 50 microservices:
Technology used
We build on open source and are pragmatic in choosing the right tools for the task. Here is the heavily used technology stack in the system:
Language
Microservices are developed in Clojure; device applications and media delivery services are developed in C#; the web side uses HTML5, CSS, and JavaScript
Storage
MySQL, Solr, MongoDB, Redis, Elasticsearch, ZooKeeper, SQL Server
Infrastructure
Linux; AWS for the microservices; media storage and media serving run on Windows Azure; GitHub Enterprise for source control; Packer for building machine images; Puppet for provisioning and checking host images
Monitoring
Nagios, Graphite, Tasseo, Seyren, Campfire, PagerDuty, Keynote, Logstash, Kibana
Architectural Principles
To keep the APIs of more than 50 microservices consistent, we have set standards for URLs, structure, paging, sorting, packaging, language codes, and so on, but in an open culture we usually rely on principles rather than rigid rules to maintain consistency. Our services need to follow these terms:
Loosely coupled and stateless, providing a JSON RESTful interface over HTTP. Deployed independently and owning their own data, meaning other services access that data through the API, not a database connection; this keeps services independently scalable even when persistence technology and data structures change. Not too big, which leads to bloat; not too small, which wastes resources. We use separate host instances for each service. Each service implements a health-check API for monitoring and determining health status. Never break consumers. We adopted the standard of adding a version number to the resource path early on (for example, /1.x/products/12345/), so that if a breaking change had to be made, the new version could be deployed alongside the old one and adopted by consumers at their own pace. Although we still retain this ability, it has not been needed for several years.
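The versioned-path and health-check conventions above can be sketched briefly. The article's services are written in Clojure; this is an illustrative Python sketch, and the route pattern and payload shape are assumptions, not MixRadio's actual code.

```python
import re

# Versioned resource paths in the style the article describes,
# e.g. /1.x/products/12345/ (pattern details are illustrative).
ROUTE = re.compile(r"^/(?P<version>\d+\.x)/(?P<resource>[a-z]+)/(?P<id>\d+)/?$")

def parse_path(path):
    """Split a versioned resource path into (version, resource, id), or None."""
    m = ROUTE.match(path)
    if not m:
        return None
    return m.group("version"), m.group("resource"), m.group("id")

def health_check():
    """Minimal health-check payload a load balancer or monitor could poll."""
    return {"status": "ok"}

print(parse_path("/1.x/products/12345/"))  # ('1.x', 'products', '12345')
```

Because the version lives in the path, /1.x/ and /2.x/ handlers can be deployed side by side while consumers migrate.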
Following these internal principles, we have some additional criteria for externally exposed APIs:
The API is mobile-optimized: we use JSON because, compared with XML, it takes very little memory to parse on low-powered devices; whenever possible, responses use gzip encoding; and, most importantly, data is not returned if it is not needed. That last point is balanced against API consistency. Use caching as much as possible: the API returns appropriate cache headers so content can be cached on end-user devices and browsers, and we use content delivery networks (CDNs) to keep content as close to consumers as possible. Host logic and data in the cloud as much as possible, to minimize duplicated logic in applications and deliver a consistent experience. Third parties, web, mobile, and desktop clients use the same APIs; however, to adapt to different needs and screen sizes, we use a number of techniques. For example, we often use an "itemsperpage" query parameter to adjust the amount of content returned. Another point of RESTful API design concerns the isolation of resource responses: content is often aggregated into containers we call "views", which reduces the number of requests.
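The "itemsperpage" idea above is just server-side slicing of a result list. A minimal Python sketch, assuming a page-number parameter alongside itemsperpage (the parameter pairing is an assumption for illustration):

```python
def paginate(items, items_per_page, page=1):
    """Return one page of results, mirroring an 'itemsperpage' query parameter."""
    start = (page - 1) * items_per_page
    return items[start:start + items_per_page]

tracks = [f"track-{i}" for i in range(1, 8)]  # seven fake results
print(paginate(tracks, items_per_page=3, page=2))  # ['track-4', 'track-5', 'track-6']
```

A phone client might request itemsperpage=10 where a desktop client requests 50, against the same endpoint.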
An example of using the View API is the artist details page in the application, with data drawn from multiple sources: artist profile, pictures, tweets, gigs, mixes, popular songs, songs that friends have listened to, and similar artists. By putting these together in a "view", the application can fetch around 5,000 bytes of data in a single request.
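The view aggregation described above amounts to fanning out to several source fetchers on the server and merging the results into one response. A hedged Python sketch; the fetcher names and view name are invented for the example:

```python
# Each fetcher stands in for a call to a separate backend source.
def fetch_bio(artist_id):        return {"bio": f"bio-{artist_id}"}
def fetch_images(artist_id):     return {"images": [f"img-{artist_id}"]}
def fetch_top_tracks(artist_id): return {"top_tracks": [f"track-{artist_id}"]}

# A view is a named bundle of fetchers; one client request triggers them all.
VIEWS = {"artist-details": [fetch_bio, fetch_images, fetch_top_tracks]}

def render_view(name, artist_id):
    """Run every fetcher in the view and merge the results into one payload."""
    result = {}
    for fetch in VIEWS[name]:
        result.update(fetch(artist_id))
    return result

print(render_view("artist-details", 42))
```

The client makes one request for the whole page instead of one per source, which is what keeps the artist page near a single 5 KB round trip.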
Making microservices faster
In recent years, we've been rewriting microservices from Java to Clojure. Clojure is a dynamic language that still runs on the Java Virtual Machine (JVM), allowing access to Java frameworks. The back-end team chose Clojure mostly because of speed, both in development and at runtime. Clojure is far more concise than Java; for example, one microservice written in Java shrank from 44,000 lines of code to 4,000 (including all configuration, tests, and components) after being rewritten in Clojure. We use Leiningen to speed up development; Leiningen supports custom project templates, and we have a template called "cljskel" that serves as the skeleton for all our services. We will describe this template in detail in a later article; in terms of usage, we can run the following command and get a working RESTful service complete with a monitoring API:
lein new cljskel <project name>
If you're interested in why we went with Clojure, you might want to watch the two talks our engineers gave in London in 2013.
Data
Our two largest data sources are content metadata for more than 32 million tracks (including related artists, albums, mixes, and so on) and usage data from the applications (such as playback, thumbs up/down, and browsing events).
The Catalogue service provides content metadata and search capabilities for the consumer experience. The Master Catalogue stores metadata from multiple data sources, such as record labels, the in-house content team, and internet resources like Wikipedia. A configuration-driven data model specifies how resources are merged (for example, preferring the content team's change to a given field over other sources), which fields can be searched, and which are returned to callers. For different use cases we can return different fields: sometimes we don't need the track listing in search results, but we do need that field when showing track list details. The Master Catalogue service does not serve user traffic directly; instead, the Search and Query APIs are the interface for the rest of the back-end systems. The Search and Query service is built on Apache Solr, and an indexing daemon crawls the Master Catalogue for changes and pushes them to the Solr search index.
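The configuration-driven merge described above can be pictured as a per-field source priority list. This is a simplified Python sketch under that assumption; the field and source names are made up for the example, not MixRadio's actual configuration:

```python
# Per-field priority: the first source in the list that has a value wins.
FIELD_PRIORITY = {
    "title":   ["content_team", "record_label", "wikipedia"],
    "summary": ["wikipedia", "record_label"],
}

def merge_resource(field_priority, sources):
    """Merge partial records (source name -> record) field by field."""
    merged = {}
    for field, priority in field_priority.items():
        for source in priority:
            value = sources.get(source, {}).get(field)
            if value is not None:
                merged[field] = value
                break  # first available source wins for this field
    return merged

sources = {
    "record_label": {"title": "Album (Label)", "summary": "From the label"},
    "wikipedia":    {"summary": "From Wikipedia"},
}
print(merge_resource(FIELD_PRIORITY, sources))
# {'title': 'Album (Label)', 'summary': 'From Wikipedia'}
```

Because the priority lives in configuration rather than code, a change such as "always prefer the content team's edits" is a config edit, not a redeploy.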
Collecting usage and analytics data is critical for the customized experience, A/B testing, CRM, and computing business metrics. As data is continuously uploaded from the various applications to the back end, many services need simultaneous access to the same data. For example, when a user thumbs down a song, that matters to the mix that is playing now, helps capture the user's taste, and repeated thumbs-downs suggest that the artist may not be to the user's liking. To handle the expected data, we identified the characteristics we wanted in a publish/subscribe system:
High availability, with no single point of failure; high scalability for the whole system through decoupling, plus the ability to pause message ingestion; message-format agnostic; low write latency; a simple publish/subscribe API; and subscribers that can consume output quickly, each potentially with a different schema and processing at different speeds and schedules, such as real-time processing and batch aggregation (or archiving).
We chose Apache Kafka, which came out of LinkedIn, because it catered almost perfectly to these needs. As a durable messaging system, it is designed to support consumers in different states (such as different read positions in the data), rather than assuming all consumers are always present and consuming data at the same speed.
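The property that made Kafka a fit, per the paragraph above, is that the broker keeps an append-only log and each consumer tracks its own read position, so a slow batch consumer and a fast real-time consumer can share one topic. A toy Python sketch of that offset model (not Kafka's API, just the idea):

```python
class Topic:
    """Toy append-only log with per-consumer read offsets."""

    def __init__(self):
        self.log = []      # messages are only ever appended
        self.offsets = {}  # consumer name -> next index to read

    def publish(self, message):
        self.log.append(message)

    def poll(self, consumer, max_messages=10):
        """Return the next batch for this consumer and advance its offset."""
        start = self.offsets.get(consumer, 0)
        batch = self.log[start:start + max_messages]
        self.offsets[consumer] = start + len(batch)
        return batch

plays = Topic()
for i in range(5):
    plays.publish({"event": "play", "track": i})

print(len(plays.poll("realtime", max_messages=5)))  # 5: reads everything
print(len(plays.poll("batch", max_messages=2)))     # 2: independent position
```

Each consumer's position is independent, which is exactly the "consumers in different states" behavior the article calls out.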
Monitoring
The system targets a latency of no more than 0.8 seconds for the main use cases, and four-nines availability measured over a 90-day window, equivalent to 4.3 minutes of downtime per month. So when errors occur, we need to find and deal with the problem quickly. We use multiple monitoring layers to alert development and operations engineers.
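The downtime figure follows directly from the availability target; a quick check, assuming a 30-day month:

```python
availability = 0.9999              # "four nines"
minutes_per_month = 30 * 24 * 60   # 43,200 minutes in a 30-day month
downtime = (1 - availability) * minutes_per_month
print(round(downtime, 1))          # 4.3 minutes of allowed downtime per month
```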
At the bottom, we use Nagios to check the health of the virtual machines, with PagerDuty alerting the operations engineers. Each microservice in the system implements a health-check API that allows the AWS load balancer to determine whether a host needs to be restarted (you can learn more about the AWS load balancer in previous reports). Graphite is used to collect operating-system-level metrics, such as CPU usage and disk space, while each microservice also records whatever metrics its engineers require. Service metrics span different levels of abstraction, from low-level HTTP 500 error counts to high-level measures such as the number of active subscriptions; we measure everything we need. Here is a screenshot of the Graphite dashboard:
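The mix of low-level and high-level service metrics above boils down to named counters that get shipped to Graphite. A minimal Python sketch using Graphite's dotted metric-name style; the metric names here are invented for the example:

```python
from collections import Counter

# In-process metric counters; a real service would periodically flush
# these to Graphite rather than keep them only in memory.
metrics = Counter()

def record(metric, value=1):
    """Increment a named metric by the given amount."""
    metrics[metric] += value

record("api.http_500")                         # low-level: error count
record("business.active_subscriptions", 120)   # high-level: business metric

print(metrics["business.active_subscriptions"])  # 120
```

Keeping both kinds of metric in one namespace is what lets a single dashboard show an HTTP 500 spike next to a dip in active subscriptions.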
On top of Graphite we use Tasseo, which provides a friendlier summarized view of the data, and we use Seyren to alert us when thresholds are crossed. Tasseo was first brought to us by several engineers who had used it on the 2012 Obama re-election campaign.
At the highest level, we use Keynote to measure use cases and response times worldwide:
Finally, for more detailed diagnosis, we ship logs centrally so that we avoid having to connect to any particular server. System, request, error, and application-specific logs are collected through Logstash, and we use Kibana with custom dashboards to track specific errors or trends. The example below is a dashboard we customized a few years ago to reduce application error noise:
Continuous delivery
Continuous delivery is a practice of automated deployment and testing, focused on releasing software quickly and repeatably. Over the years we've been improving at this, and we are now transitioning to Netflix's "red/black" deployment model on AWS. Our engineer Joe shared this at the Continuous Delivery Meetup in London this June.
You can see the improvement in our process from the number of software releases we have made over the past five years:
Original link: MixRadio Architecture - Playing with an Eclectic Mix of Services (translator: Dongyang / editor: Zhonghao)