A blog system based on Microsoft Azure, ASP.NET Core and Docker




In November 2008, I registered my personal account at Blog Park (cnblogs) and published my first post there. Of course, I did not start blogging only in 2008: in earlier years I published a number of programming and development articles on CSDN and in my personal space on the System Analyst Association site (later known as the "Greek net"). From the beginning until now, I have been happy to share what I learn with readers, hoping that more friends in the same industry will succeed in their careers, and contributing my own strength to our software business. This has been the vision of my Blog Park blog: to be professional, pragmatic, and helpful in resolving doubts. Therefore, when I write a post, I try to explain technical knowledge with an objective and rigorous attitude, to organize the content so as to improve the article's readability, and to answer readers' questions as far as possible. Many followers on Blog Park have told me that my blog updates too slowly, or that some of my article series were left unfinished. To these questions I have only one answer: my spare time is very limited, and there is no way around that. Within that limited spare time, and on the premise of guaranteeing quality, I try to provide as much support to the community as I can. On this point I chose quality over quantity: I would rather have a longer release cycle than share an article of poor quality. On the other hand, I have also released a number of open source projects. Some are code backups of my personal gadgets; others are real projects, such as Apworks, Byteart Retail, WeText, and even the source code of my new blog daxnet.me, the daxnet-blog project. All of them are in my GitHub repositories.
Honestly, I really don't have the time to write a blog post about the details of every project, so my practice is this: once I complete an open source project that I am reasonably satisfied with, I write a post introducing its content and the technology it uses, while encouraging readers to clone the project code directly to study it, or simply fork it (don't worry too much about licensing: except for the Raspkate project, most of my projects use the MIT or Apache license). All in all, regardless of the form, I have never given up the original vision.



Out of this same insistence, and hoping to organize my blog posts, and even some other original works, in a more focused and efficient way to give readers a better learning and communication experience, I have always wanted to build my own blog service, and I have made many attempts. Back in 2012 I used WordPress on a foreign hosting site to set up a blog, but the site could not be maintained because of problems with the hosting provider. Afterwards I made several more attempts, including with BlogEngine.NET and other open source projects, but none of them worked out well. Out of enthusiasm for and pursuit of technology, this time I finally determined to use my own knowledge, relying on the Microsoft .NET platform, to develop and deploy my own new blog system: http://daxnet.me.


Site features


First, a brief introduction to the current site features. The homepage of the site is deliberately concise: I did not use too many fancy pictures, nor flashy effects. A discerning eye will recognize this as a web application developed with ASP.NET MVC and Bootstrap. Basically correct! It should be emphasized that both the blog site and the backend RESTful services are based on the ASP.NET Core runtime version 1.1.0, running in Docker containers. But there I go talking about technology again before introducing the features. Speaking of features, the current set is very simple: the homepage lists all of my original or translated articles; readers can register user accounts; registered users can post comments and can also change their nicknames on the user management page. That is all for now. It may not look like much, but it took me more than two months of spare time to reach even this state. Of course, I will continue to update the site to make its functions more complete.



Does the mention of ASP.NET Core whet your appetite for the technology? Don't worry: next I will introduce the technology selection for each part of the site. After reading it, perhaps you will understand why I spent two months of spare time to complete such a simple thing.


Site technology introduction


Overall architecture


All of the infrastructure used by the site runs in the Microsoft cloud (Windows Azure), using some managed resources and some unmanaged Azure VMs. The general layout is as follows:


    • Image storage: hosted by the Azure Blob Storage service

    • Database: managed by Azure SQL Database (geo-replication not enabled, because of cost)

    • Mail service: hosted via an Azure SendGrid account (pricing tier F1, free for 25,000 messages per month)

    • Application server: an Ubuntu 16.04.1 LTS virtual machine built on Azure, running two Docker containers, blog-web and blog-service, which host the front-end web site and the back-end RESTful services respectively. The backend RESTful API service has no authentication or authorization of its own; the web site accesses it through the internal subnet, and the Docker containers run in this unmanaged environment.

    • Continuous integration system: Jenkins, with a Windows Server R2 VM built on Azure as the master and an Ubuntu 16.04.1 LTS VM as the slave. The front end and back end of the site are compiled, packaged, and released on the latter (Ubuntu), enabling one-step deployment.

    • Code repository: GitHub


Someone may ask: why run an application in an unmanaged Azure VM environment? I have thought about this too. In theory, a cloud-based system architecture is best served by managed PaaS services, which not only provide natural high availability (including disaster recovery, such as AWS's cross-AZ deployment and the cross-region availability of some services, plus load balancing) but also professional technical support. IaaS services should be considered mainly when an old system needs to migrate to the cloud and its specific operating-environment requirements must be catered to. Although resources such as virtual machines are created and run by Azure, and at that level Azure can guarantee the availability of the VM, the state of any programs running inside the VM, and the data they use, are unknown to a cloud service such as Azure, and monitoring that part becomes cumbersome. For security reasons, cloud service providers generally do not, and should not, have access to the operating data of customer programs inside a virtual machine, and the customer bears the risk of running programs on the VM service. This is also known as the shared responsibility model.



It may look as though running an app in a virtual machine is not a good idea, yet I chose to do so anyway, for several reasons:


    • Why not use Azure Web App? On the one hand, for Jenkins to do automated deployment, pushing a compiled app directly into Azure Web App seemed to require writing some PowerShell code, and my build machine runs Linux. Admittedly there is now a Linux version of PowerShell, and the Azure command-line interface also has a Linux version, so this reason is a bit far-fetched; the more honest explanation is: I simply didn't know how! On the other hand, I do not have authentication and authorization on the server side; the backend provides services only through the subnet, so I wanted my web app to run inside the subnet and expose only port 80 to the outside. So how would Azure Web App deploy into my own subnet? This is a technical problem, and I believe there must be a solution, but I did not have the time and energy to work out how to achieve it. My first reaction was simply to deploy both the front end and the back end in Azure Web App and then enable the authentication mechanism on the backend, but that would take some extra effort. Well, so that's the reason: I didn't know how.

    • Why not use Azure Container Service? Azure Container Service creates a complete set of network deployments in the resource group you specify, including several virtual machines, public IPs, two load balancers, and so on. I think you can guess why I didn't choose Azure Container Service. The reasons: effort and money.


Is that a good enough reason? These services offered by Microsoft Azure are great, and my not choosing them does not mean they are bad; I simply weighed them against my own practical considerations:


    1. The learning cost of some services

    2. The economic cost

    3. No need for 99.99999% availability

    4. Even if the app goes down, the cost of recovery is small: the data does not need to be recovered at all, since the managed SQL Database and Blob Storage ensure my data is not lost, and application recovery is simple: rerun the Docker containers and it's done.


OK, that is my choice for the overall architecture. It is not necessarily completely correct, but I think it is at least appropriate; take it as a reference only. The overall structure of the site is shown below.






A few notes:


    1. The three VMs are located in the same virtual network subnet. Each VM has a separate network security group (NSG) on its network card, and the IP addresses and ports allowed by the inbound/outbound rules on the NSG are strictly restricted. The three VMs access each other using subnet IP addresses.

    2. The Windows Server VM hosts the Jenkins master, as well as the Seq log service. It exposes ports 8080 and 5342 to the public network, for access to the Jenkins service and the Seq management interface respectively.

    3. The first Ubuntu VM runs the Jenkins slave. It does not expose any ports to the public; it exposes only port 22, to the Jenkins master machine, for the execution scheduling of the Jenkins slave agent.

    4. The second Ubuntu VM runs the two Docker containers of the blogging system: the front-end application blog-web and the back-end RESTful API service blog-service. The web container accesses the service via the subnet IP address; the VM exposes only port 80 to the public, and the backend service is not accessible from the public network.

    5. The two Docker containers running the apps (blog-web and blog-service) access the managed Azure SQL Database, Azure Blob Storage, and SendGrid account services.

    6. The topology of the entire deployment may not be ideal: there is no load balancing, no managed application hosting service (such as Azure Web App or Container Service), and no scale set. Because there is no need, and no money.


Next, back to the code. I'll introduce some of the framework's technology choices, as well as several open source libraries usable with ASP.NET Core.


Front end


Today's front-end technology changes rapidly, with various JavaScript frameworks and JSX-style technologies making front-end development more convenient and efficient, and the user experience better and better: for example Angular (both versions 1 and 2), React + Redux, Knockout.js, Backbone, and so on. In actual projects we also use most of these technologies. In my blog system, however, I did not adopt a single-page-application solution, but stayed with front-end Razor plus back-end C# code. Yes, this is ASP.NET Core MVC! I am not using any MVVM framework, just Bootstrap and jQuery. For me, there are several reasons for this choice:


    1. I am relatively familiar with ASP.NET MVC, so it is easier to complete the development tasks quickly

    2. The site logic itself is not complex, and there is no need for these front-end frameworks for now

    3. I wanted to experience the new features of ASP.NET Core


Of course, in order to implement some specific functions, I still chose some open source code and frameworks. Here is a general introduction.


Pagination implementation of the home page


The homepage implements server-side paging of blog posts, requesting only a limited amount of data from the server at a time. The paging control is implemented by a set of algorithms I wrote myself, with the Bootstrap pager style applied to achieve a responsive user experience. The paging control uses the new tag helper technology of ASP.NET Core. The page numbers are segmented according to the page size and the total number of posts, giving the whole paging feature a good user experience.
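For illustration, the page-number segmentation described above can be sketched roughly as follows. This is a hypothetical reconstruction, not the actual code from the daxnet-blog project; the class and method names are made up for the example.

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// Illustrative paging math: given the total item count, the page size, and
// the current page, compute the window of page numbers the pager renders.
public static class PagerMath
{
    public static IReadOnlyList<int> VisiblePages(
        int totalItems, int pageSize, int currentPage, int windowSize = 5)
    {
        // Total number of pages, rounding up for a partially filled last page.
        var totalPages = (int)Math.Ceiling(totalItems / (double)pageSize);
        if (totalPages == 0) return Array.Empty<int>();

        // Center the window on the current page, clamped to valid bounds.
        var start = Math.Max(1, currentPage - windowSize / 2);
        var end = Math.Min(totalPages, start + windowSize - 1);
        start = Math.Max(1, end - windowSize + 1);

        return Enumerable.Range(start, end - start + 1).ToList();
    }
}
```

For 42 posts at 10 per page on page 1, this yields the window 1 through 5; on page 10 of a 200-post list it yields 8 through 12, keeping the current page centered.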





About Verification Code generation


Generating verification codes is very easy in classic ASP.NET. A classic ASP.NET application is based on the full .NET Framework, runs on IIS on Windows, and can rely on Microsoft's graphics libraries, which make it easy to produce images. An ASP.NET Core application is completely different: in order to be cross-platform, you cannot use the types under the System.Drawing namespace (you can, of course, target net45 in your ASP.NET Core application, but then it no longer runs cross-platform). Here I use the CoreCompat.System.Drawing library, which can be found through NuGet. It depends on the Microsoft.Win32.Primitives library, which defines some drawing-related data structures but does not itself provide a graphics implementation. Interested readers may wish to give it a try.
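As a rough sketch of what image generation looks like with that package, assuming it mirrors the classic System.Drawing API (Bitmap, Graphics, ImageFormat), something like the following renders a code string to a PNG. It is illustrative only, not the site's actual captcha code, and on Linux it additionally needs the native libgdiplus library.

```csharp
using System.Drawing;            // supplied by CoreCompat.System.Drawing on .NET Core
using System.Drawing.Imaging;
using System.IO;

// Minimal captcha-style image rendering; a real captcha would add noise,
// rotation, and distortion to defeat OCR.
public static class CaptchaImage
{
    public static byte[] Render(string code)
    {
        using (var bitmap = new Bitmap(120, 40))
        using (var graphics = Graphics.FromImage(bitmap))
        using (var font = new Font(FontFamily.GenericSansSerif, 20, FontStyle.Bold))
        using (var stream = new MemoryStream())
        {
            graphics.Clear(Color.White);
            graphics.DrawString(code, font, Brushes.DarkBlue, 10, 5);
            bitmap.Save(stream, ImageFormat.Png);
            return stream.ToArray();   // PNG bytes, ready to return from a controller
        }
    }
}
```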


About the Reply editor


Nothing much to say here: I use the famous CKEditor as the editor; of course, I selectively enabled or disabled some of its features.


About code highlighting in a blog post


I use Alex Gorbatchev's famous SyntaxHighlighter. Blog Park also uses this library, though I may not be using the latest version.


About the time information in the reply


Below each blog post, users' replies are displayed, including the relationship between the reply time and the current time, for example:






For example, a reply might be shown as having been posted 25 days ago. Don't underestimate this little feature: I implemented it with a library called Humanizer. This library is very interesting and provides some very useful APIs: give it an English noun and it can return the plural form; give it a date and it can return an expression closer to natural human language. It has many other interesting features that you can take a look at.
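A few of the Humanizer calls alluded to above look like this (these are real APIs from the library; the wrapper class is just for the example):

```csharp
using System;
using Humanizer;  // NuGet package "Humanizer"

public static class HumanizerDemo
{
    // English noun pluralization, e.g. "comment" -> "comments".
    public static string PluralOf(string noun) => noun.Pluralize();

    // Relative time: a UTC date 25 days in the past humanizes to a
    // natural-language phrase such as "25 days ago".
    public static string RelativeTime(DateTime utc) => utc.Humanize(utcDate: true);

    // Numbers to words, e.g. 3 -> "three".
    public static string NumberToWords(int n) => n.ToWords();
}
```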


About the MetaWeblog API for blog posting


The blog system supports publishing posts from Windows Live Writer; it implements the MetaWeblog API through the WilderMinds.MetaWeblog package provided by Shawn Wildermuth. You can add the site directly to your Windows Live Writer account:






That covers the main technologies and third-party frameworks used on the front end. Next, let's look at the technology selection on the back end.


Back-end database and data access components


As mentioned above, the back end of the new blog system uses Azure SQL Database, the managed SQL Server relational database. Why choose SQL Server instead of a currently popular NoSQL option such as MongoDB? For a blog site, I could not find a reason to choose NoSQL. There is also a managed MongoDB service on Azure, only it is operated by Bitnami. On the other hand, although I chose Azure SQL Database, I did not use any third-party data access framework and did not use an ORM, including the currently popular Dapper. The reason for not choosing an ORM is, on the one hand, that an ORM still feels too heavy for this scenario, and on the other hand, that when I was doing the technology selection, Entity Framework Core could not meet my needs: at the very least, it could not support many-to-many mappings from the perspective of the domain model. Then why not choose Dapper? The main reason is the same: it could not meet my needs. The native Dapper library requires writing some SQL scripts, which, although lightweight, do not support code refactoring; Dapper.Contrib adds some friendlier APIs, but still did not meet my needs.



After some thought, I decided to write a small framework that supports my own simple domain model definitions, lambda-based query syntax, database transactions, asynchronous APIs, and multiple types of relational database. The code of this small framework lives under the DaxnetBlog.Common.Storage namespace and uses some rather ingenious tricks: for example, developers use lambda expressions to define query conditions, and the framework turns the lambda expression into a SQL statement via an ExpressionVisitor (the visitor pattern). The following code shows how the framework is used:

    var rowsAffected = await this.storage.ExecuteAsync(async (connection, transaction, cancellationToken) =>
    {
        var account = (await this.accountStore.SelectAsync(
            connection,
            acct => acct.UserName == userName,
            transaction: transaction,
            cancellationToken: cancellationToken)).FirstOrDefault();

        if (account == null)
        {
            throw new ServiceException(HttpStatusCode.NotFound, Reason.EntityNotFound,
                $"Failed to find user account with account name {userName}.");
        }

        account.DateLastLogin = DateTime.UtcNow;

        return await this.accountStore.UpdateAsync(
            account,
            connection,
            acct => acct.UserName == userName,
            new Expression<Func<Account, object>>[] { acct => acct.DateLastLogin },
            transaction,
            cancellationToken);
    });


This code updates the last-login time of the user with the specified account name, and it contains no interspersed SQL statements; the conditions are expressed with lambda expressions. The storage object in the code represents the relational database itself, while accountStore represents the store of a single entity (here, the Account entity), somewhat like the repository concept in domain-driven design. This design achieves separation of concerns: accountStore does not depend on the implementation of storage (that is, on the type of relational database).
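To illustrate the ExpressionVisitor trick the framework relies on, here is a heavily simplified sketch of how a predicate lambda can be walked and turned into a SQL WHERE fragment. The real DaxnetBlog.Common.Storage implementation handles parameterization, captured variables, and table/column mapping; the names below, including the sample Account class, are illustrative only.

```csharp
using System;
using System.Linq.Expressions;
using System.Text;

// Illustrative entity; the blog's real Account type has more members.
public class Account
{
    public int Id { get; set; }
    public string UserName { get; set; }
}

// Walks a predicate expression tree and emits a SQL-like WHERE fragment.
public class WhereClauseBuilder : ExpressionVisitor
{
    private readonly StringBuilder sql = new StringBuilder();

    public string Build<T>(Expression<Func<T, bool>> predicate)
    {
        sql.Clear();
        Visit(predicate.Body);
        return sql.ToString();
    }

    protected override Expression VisitBinary(BinaryExpression node)
    {
        sql.Append("(");
        Visit(node.Left);
        switch (node.NodeType)
        {
            case ExpressionType.Equal:       sql.Append(" = ");   break;
            case ExpressionType.AndAlso:     sql.Append(" AND "); break;
            case ExpressionType.GreaterThan: sql.Append(" > ");   break;
            default: throw new NotSupportedException(node.NodeType.ToString());
        }
        Visit(node.Right);
        sql.Append(")");
        return node;
    }

    protected override Expression VisitMember(MemberExpression node)
    {
        // Treat a property access (acct.UserName) as a column name.
        sql.Append(node.Member.Name);
        return node;
    }

    protected override Expression VisitConstant(ConstantExpression node)
    {
        // Naive literal quoting for the sketch; real code must use SQL
        // parameters to avoid injection.
        if (node.Value is string s)
            sql.Append('\'').Append(s).Append('\'');
        else
            sql.Append(node.Value);
        return node;
    }
}
```

With this sketch, `new WhereClauseBuilder().Build<Account>(a => a.UserName == "daxnet")` produces `(UserName = 'daxnet')`, which the framework can then splice into a SELECT or UPDATE statement.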


Logging


I use Serilog as the logging framework and push the logs to the Seq system, from both the front end and the back end. I will cover the details in another blog post, so I will not introduce much here. Below is the log output of this blog; to save money, when the Docker container starts, the log level is set to Warning through an environment variable.
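Taking the minimum level from an environment variable can be wired up with Serilog roughly as follows. The variable name LOG_LEVEL and the Seq URL are assumptions for illustration, not necessarily what the blog's containers use; the Serilog, Serilog.Sinks.Console, and Serilog.Sinks.Seq packages are assumed.

```csharp
using System;
using Serilog;
using Serilog.Events;

public static class LoggingSetup
{
    // Parse the level from the environment, defaulting to Warning
    // (mirroring the cost-saving default described above).
    public static LogEventLevel ResolveLevel() =>
        Enum.TryParse<LogEventLevel>(
            Environment.GetEnvironmentVariable("LOG_LEVEL") ?? "Warning",
            ignoreCase: true, out var level)
            ? level
            : LogEventLevel.Warning;

    public static void Configure()
    {
        Log.Logger = new LoggerConfiguration()
            .MinimumLevel.Is(ResolveLevel())
            .WriteTo.Console()
            .WriteTo.Seq("http://seq-host:5341")  // hypothetical Seq ingestion endpoint
            .CreateLogger();
    }
}
```

Setting `LOG_LEVEL=Information` on the container would then raise the verbosity without rebuilding the image.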





API documentation


Not much to say: Swagger. I will also introduce the specific implementation in another article.
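For reference, minimal Swagger wiring in an ASP.NET Core 1.x-era service with the Swashbuckle.AspNetCore package looks roughly like this; it is a generic sketch of the library's usage, not the blog's actual configuration, and the API title is made up.

```csharp
using Microsoft.AspNetCore.Builder;
using Microsoft.Extensions.DependencyInjection;
using Swashbuckle.AspNetCore.Swagger;

public class Startup
{
    public void ConfigureServices(IServiceCollection services)
    {
        services.AddMvc();
        services.AddSwaggerGen(c =>
        {
            // Registers one OpenAPI document for the service.
            c.SwaggerDoc("v1", new Info { Title = "Blog Service API", Version = "v1" });
        });
    }

    public void Configure(IApplicationBuilder app)
    {
        app.UseSwagger();   // serves the generated document at /swagger/v1/swagger.json
        app.UseSwaggerUI(c =>
            c.SwaggerEndpoint("/swagger/v1/swagger.json", "Blog Service API v1"));
        app.UseMvc();
    }
}
```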





Cache


Caching is not used for the moment; adding it is the next step.



Well, that covers the overall blog architecture and the front-end and back-end technology. If I wanted to delve into every detail of the technical practice, I estimate that several series of articles would not be enough. As mentioned at the beginning of this article, the blog code is open source, so we can learn from and discuss it together. In the future I will still try to write more articles to introduce the relevant technology.


Will I continue to blog at Blog Park?


Of course I will! Blog Park has always been my main place to communicate with you, and it will remain so in the future. Understandably, in order to provide more high-quality content, a blog platform will impose some restrictions on bloggers' articles, and blog topics will be subject to some constraints. For me personally, on my own blog I can probably express my technical career in more ways, even share some of my own thoughts on things in life, which may be a kind of inspiration for others' technical development, and I can continue to improve myself through the feedback and replies I receive. Content of that kind I will publish on my own blog; of course, I think my own blog will still be based on technical articles.



For now, the new blog displays the posts I once published at Blog Park (of course, just to fill things out so that the homepage does not look so monotonous; all the pictures still keep their Blog Park links). I am going to run a three-month trial operation for the new blog, during which I will watch the health of the system, summarize the usage of the Microsoft Azure cloud, and, most importantly, measure whether I can afford the operating cost. I will continue to add more features to the system throughout the trial phase.



If the operation fails, please forgive me; just take it as my having contributed one more open source project to the community.


Summary


This article first described some of the real contributions I have made to the community, and then introduced my own hand-built blog system based on ASP.NET Core. The overall architecture and deployment of the system, as well as some of the technology selection at the front and back ends, were briefly covered. The new year is almost here, and it will soon be time for my MVP renewal; regardless of whether the renewal succeeds (my contribution last year was not too high), I will continue to contribute to the community, and truly be professional, pragmatic, and helpful in resolving doubts.



Original address: http://www.cnblogs.com/daxnet/p/6139317.html


