Full Scale 180 handles database scaling on Windows Azure

Today's post, written by Trent Swanson of Full Scale 180, describes how the company uses Windows Azure and database partitioning to build scalable solutions for its customers.

Full Scale 180 is a Redmond, Washington consulting firm specializing in cloud computing solutions, offering professional services that range from architecture consulting to solution delivery. The Full Scale 180 team has built a reputation for solving seemingly impossible problems with innovative cloud computing solutions on Windows Azure. Full Scale 180 works with customers across a wide range of industries, and although each project is unique, the solutions often share many common considerations and requirements.

Designing and implementing projects for different customers presents some very interesting challenges, and it has led to some very cool solutions on Windows Azure. One challenge we encounter again and again is database scaling.

To make full use of data storage, you need to focus on two points:

where the data is stored
the best way to access that data

In software development, layers of complexity and higher-level abstraction are an interesting thing. When you use a facility (the word here stands in for many different concepts, such as APIs, libraries, programming paradigms, class libraries, and frameworks), you end up in a trade-off: either you build the higher-level abstractions yourself, or you adopt decisions made by someone else, a choice commonly called build versus buy. Data storage is no exception to this pattern. When working with a relational store such as SQL Azure, you have to operate within the rules set by the system.

Data storage location

When using SQL Azure, the physical components of the data store are no longer your concern, so you do not have to worry about data files, filegroups, or disks filling up. What you do have to consider are the resource limits of the service itself. SQL Azure currently caps an individual database at 150 GB.

Over time, an application's database consumption is generally expected to grow. Unlike an on-premises database, the only dimension you control is the amount of additional database capacity you purchase from Windows Azure. There are two ways to handle this: either plan the growth ahead of time and purchase new capacity in advance (which somewhat defeats the purpose of running on the cloud), or expand on demand automatically, based on policy. If you choose the latter, you need a way to partition the data across multiple databases.
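To make this concrete, here is a minimal sketch of application-level partitioning in Python, assuming a hand-maintained shard map that routes each customer ID to one of several SQL Azure databases by key range. The database names, connection strings, and ranges are hypothetical and not Full Scale 180's actual implementation.

    # Hypothetical shard map: route a customer ID to one of several SQL Azure
    # databases by key range. Names, ranges, and connection strings are
    # illustrative only.
    from bisect import bisect_right

    # Each entry: (exclusive upper bound of the key range, connection string).
    SHARD_MAP = [
        (100000, "Server=tcp:myserver.database.windows.net;Database=app_shard_0"),
        (200000, "Server=tcp:myserver.database.windows.net;Database=app_shard_1"),
        (300000, "Server=tcp:myserver.database.windows.net;Database=app_shard_2"),
    ]

    def connection_string_for(customer_id):
        """Return the connection string of the shard that owns this customer."""
        bounds = [upper for upper, _ in SHARD_MAP]
        index = bisect_right(bounds, customer_id)
        if index == len(SHARD_MAP):
            raise KeyError("No shard configured for customer %d" % customer_id)
        return SHARD_MAP[index][1]

    print(connection_string_for(150000))  # -> the app_shard_1 connection string

With a map like this, reads and writes for a given customer always land on that customer's database, and purchasing another database under the policy is simply a matter of appending a new range to the map.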

Optimizing data transfer and sharding

Aside from managing space, we need to make sure that data can move into and out of storage quickly. On non-cloud systems you can tune network and disk speeds, but on a cloud platform those are generally not yours to tune, so a different approach is needed. Typically, that approach is parallel data access.
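As a rough sketch of what that parallel access can look like (assuming Python with the pyodbc driver and a list of per-shard connection strings, all hypothetical), the example below fans the same query out to every shard on a thread pool and merges the rows:

    # Hypothetical fan-out query: run the same SQL against every shard in
    # parallel and merge the results. Assumes the pyodbc driver is available.
    from concurrent.futures import ThreadPoolExecutor

    import pyodbc

    def query_shard(connection_string, sql, params=()):
        """Run one query against a single shard and return all of its rows."""
        conn = pyodbc.connect(connection_string)
        try:
            cursor = conn.cursor()
            cursor.execute(sql, params)
            return cursor.fetchall()
        finally:
            conn.close()

    def query_all_shards(shard_connection_strings, sql, params=()):
        """Run the same query against every shard concurrently, merging rows."""
        with ThreadPoolExecutor(max_workers=len(shard_connection_strings)) as pool:
            futures = [pool.submit(query_shard, cs, sql, params)
                       for cs in shard_connection_strings]
            rows = []
            for future in futures:
                rows.extend(future.result())
            return rows

Because each shard is a separate database, its size and throughput limits apply independently, which is what makes the parallel approach pay off.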

Data storage requirements will keep growing, yet the largest database the platform allows is fixed by its rules, so we have to design solutions around those limits on scale and throughput. Whether the constraint is the number of connections to the data store, the physical storage throughput, or the maximum size of a single database, scaling beyond these per-unit limits usually has to be designed into the solution. If we had a mechanism that stored our data across a collection of smaller databases, and we could access those smaller databases in parallel, we could optimize both the size and the speed of our data storage solution. Ideally that mechanism would also handle data partitioning and database provisioning automatically. One common approach is sharding. With sharding, the way data is managed and accessed changes regardless of the specific method used. SQL Azure Federations provides an out-of-the-box sharding implementation for SQL Azure.

Working with several of our customers, we found SQL Azure Federations to be a good fit. Beyond simply scaling past the 150 GB single-database limit, we have found Federations effective for multi-tenant cloud solutions as well.
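A minimal sketch of that pattern is shown below, again driven from Python through pyodbc. The federation name, distribution key, and Orders table are hypothetical; the statements themselves (CREATE FEDERATION, USE FEDERATION, FEDERATED ON) are the T-SQL surface that Federations exposes.

    # Hypothetical SQL Azure Federations usage through pyodbc. The federation,
    # key, and table names are illustrative only.
    import pyodbc

    ROOT_DB = ("Driver={SQL Server};"
               "Server=tcp:myserver.database.windows.net;Database=appdb;")

    conn = pyodbc.connect(ROOT_DB, autocommit=True)
    cursor = conn.cursor()

    # One-time setup: a federation keyed on a BIGINT customer ID, plus a table
    # whose rows are distributed across federation members by that key.
    cursor.execute("CREATE FEDERATION CustomerFederation (cust_id BIGINT RANGE)")
    cursor.execute("USE FEDERATION CustomerFederation (cust_id = 0) "
                   "WITH RESET, FILTERING = OFF")
    cursor.execute(
        "CREATE TABLE Orders ("
        "  OrderId BIGINT NOT NULL,"
        "  CustomerId BIGINT NOT NULL,"
        "  Amount DECIMAL(10, 2),"
        "  PRIMARY KEY (OrderId, CustomerId)"
        ") FEDERATED ON (cust_id = CustomerId)")

    # Query time: route the connection to the member that owns customer 42.
    # With FILTERING = ON the connection sees only that customer's rows,
    # so no explicit tenant filter is needed in the SELECT.
    cursor.execute("USE FEDERATION CustomerFederation (cust_id = 42) "
                   "WITH RESET, FILTERING = ON")
    cursor.execute("SELECT OrderId, Amount FROM Orders")
    for row in cursor.fetchall():
        print(row)

    conn.close()

The FILTERING = ON option is what makes Federations attractive for multi-tenant solutions: the connection is scoped to a single federation key value, so a tenant's queries see only that tenant's rows without every statement having to repeat the tenant filter.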
