This series of articles is a learning record of the fundamentals of Azure services development. Because of time constraints, the intent is to explore, starting from scratch, how to develop basic programs against Azure services. Each topic could go much deeper, and I hope to find time to do that in other articles. The positioning of this series is basic: take 20-30 minutes, download the code first, follow the article, and run it to get hands-on experience.
The previous article was about Azure queue storage; this one is about table storage. In theory, table storage should be the most familiar way of storing data: it is structured storage, similar to our usual relational database storage. But the first thing to emphasize here is the change in programming mindset. The Windows Azure platform is a cloud computing platform, so from the start its storage was designed to be managed, fully open, and distributed. The same is true for table storage: in the Azure services, the entire table store is a multi-tenant design, and many readers will think of the concept of multi-tenancy in SaaS and start to associate Azure storage with it. The storage design of the Windows Azure platform is very ingenious. So let's turn the focus back to how the Azure services define and operate the table store, and which ideas we need to know.
Rule 1: Azure table storage also belongs to an account. Several tables can be created under one account, and, as with queues, the table name falls under the scope of the account name. You can see the pattern in the REST path: http://<Account>.table.core.windows.net/<tablename>. For example, an account named myaccount with a table named Contact would be addressed as http://myaccount.table.core.windows.net/Contact.
Rule 2: Data is stored in tables. A table is a collection of entities and holds the information for a set of entities; each entity corresponds to a row. An entity is a collection of key attributes and associated attributes and holds the information for one entity; each attribute is a name/value pair and corresponds to a column. In short: entities <-> rows, properties <-> columns.
Basically, the concept of Azure table storage is as follows:
Picture source: http://blogs.msdn.com/jnak/archive/2008/10/28/walkthrough-simple-table-storage.aspx
Applying Rule 2 above: a Contact table is a collection of contact entities that holds the information for a group of contacts, and each contact corresponds to a row. A contact is a collection of key attributes and associated attributes for that contact and holds the information for one contact's attributes (Name, Address); each attribute is a name/value pair (Name/Xiao Wang, Address/Beijing Wangfujing) and corresponds to a column.
Rule 3: Every entity has two fixed key properties, and together these two key properties uniquely identify the entity. The first key property is PartitionKey, the partition identifier/partition keyword for the entity within the table; the second is RowKey, the keyword that uniquely identifies the entity within a partition.
The definition in Rule 3 can be seen clearly in the following figure.
Picture source: ES07.pptx of PDC 2008 - Windows Azure Tables: Programming Cloud Table Storage
Documents is an Azure table that holds a set of document entities. Each document corresponds to a row, and each row contains the key properties (Document, Version) and some related properties (ChangedOn, Description); each property is a name/value pair, corresponding to a column.
According to Rule 3, PartitionKey is actually a subdivision, or granularity, of the documents, used as a classification/partition indicator - whether the document is an FAQ document or a boilerplate document - while the RowKey property uniquely identifies and navigates to a document within that category. For example, V2.0.1 plus the category uniquely locates the V2.0.1 version of the Examples document.
How this subdivision, or granularity, is chosen is up to the application. As you can see from the figure above, you can define several different document classification keywords in one table, or you can give the whole table one and the same PartitionKey, so that the partition expands to cover the entire table and the version number becomes the real keyword. From Rule 3 we can see the design principles of Azure table storage: first, stay close to the business scenario and its entity objects; second, make queries convenient.
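To make Rule 3 concrete, here is a minimal sketch of how the Document entity in the figure could be modeled with the StorageClient sample library used later in this article. The class and property names are my own assumptions for illustration, not code from the PDC deck:

using System;
using Microsoft.Samples.ServiceHosting.StorageClient;

// Illustrative sketch only: names and shapes are assumed from the figure above.
public class DocumentEntity : TableStorageEntity
{
    // Parameterless constructor, needed when query results are materialized.
    public DocumentEntity() { }

    public DocumentEntity(string documentName, string version)
    {
        PartitionKey = documentName;   // e.g. "FAQ Doc" - the classification/partition keyword
        RowKey = version;              // e.g. "V2.0.1" - unique within that partition
    }

    public DateTime ChangedOn { get; set; }
    public string Description { get; set; }
}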
Rule 4: The Azure runtime environment aggregates and indexes entity data according to PartitionKey. PartitionKey is the primary keyword of your entity queries - the most important "where" in your SQL-style query. For scalability, the Azure runtime environment places the entities/data of different partitions on different storage nodes, trying to keep the data of one partition (the same PartitionKey value) on a single storage node rather than spread over multiple nodes. Partitions are also automatically load balanced across storage nodes (automatically load balance partitions).
Picture source: ES07.pptx of PDC 2008 - Windows Azure Tables: Programming Cloud Table Storage
From the figure above we can draw the following conclusion from Rule 4: because the search stays within a single partition, querying all FAQ Doc documents will be fast; by contrast, querying all documents changed since May 30, 2008 will be slow, because that query needs to retrieve multiple partitions, and if those partitions fall on different storage nodes it will take even more time.
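To make the difference concrete, here is a minimal LINQ sketch. It assumes a data service context with a Documents property exposing the DocumentEntity sketch above (analogous to the context class built later in this article); the names are hypothetical:

// Fast: the filter pins down PartitionKey, so only one partition (one storage node) is touched.
var faqDocs = from d in context.Documents
              where d.PartitionKey == "FAQ Doc"
              select d;

// Slower: the filter uses only a non-key property, so the query must scan
// across partitions, which may live on different storage nodes.
var recentDocs = from d in context.Documents
                 where d.ChangedOn >= new DateTime(2008, 5, 30)
                 select d;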
Now suppose you want to design an Azure version of an application for China Mobile. In terms of table storage design, putting all of China Mobile's customers into one partition is probably not realistic, so you will certainly consider partitioning. PartitionKey could be a geographic attribute - for example, partition by province and use the mobile phone number as RowKey - or you could set PartitionKey to the mobile brand, again with the phone number as RowKey. Perhaps you make a bigger decision and put one province's customer data into its own table, so the entity becomes a Guangdong mobile customer; then PartitionKey can become the prefecture-level city within the province, such as Guangzhou or Shenzhen, with the phone number as RowKey. If you still think that is not fine-grained enough, you can put the Guangzhou users into their own table, with the entity being a Guangzhou mobile customer, PartitionKey becoming the district within the city, such as Baiyun District or Tianhe District, and RowKey becoming the business type - and you can subdivide further if you want. So, as said above, how finely the data is subdivided, and at what granularity, depends on the application; but you should understand Windows Azure's design and rules.
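As a hedged sketch of one of those choices (partition by province, phone number as RowKey), again with hypothetical names:

using Microsoft.Samples.ServiceHosting.StorageClient;

// Hypothetical entity: one row per mobile customer, grouped into partitions by province.
public class MobileCustomer : TableStorageEntity
{
    public MobileCustomer() { }

    public MobileCustomer(string province, string phoneNumber)
    {
        PartitionKey = province;    // e.g. "Guangdong" - keeps one province's customers together
        RowKey = phoneNumber;       // unique within the province partition
    }

    public string Brand { get; set; }
    public string City { get; set; }
}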
Rule 4 may also lead to topics such as spreading one entity over a single table or multiple tables, one entity corresponding to multiple tables, multiple entities corresponding to one table, an entity having a different number of (dynamic) attributes in different states, and so on. I will not go into these here.
Rule 5: Each entity has at most 255 properties, but only 253 of them can be customized, or even fewer, because every entity carries the two fixed key properties PartitionKey and RowKey. The definition of the remaining properties is quite flexible and unrestricted; for example, two entities with different sets of properties can be defined in the same table.
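As a conceptual sketch of that flexibility (the class names are made up, and this says nothing about whether the client library makes mixing them convenient), two shapes like these could in principle be stored in the same table, since each carries the two key properties from its base class:

using Microsoft.Samples.ServiceHosting.StorageClient;

// Two entity shapes with different custom properties; table storage itself
// does not enforce a fixed column schema across rows.
public class PersonContact : TableStorageEntity
{
    public string Name { get; set; }
    public string Address { get; set; }
}

public class CompanyContact : TableStorageEntity
{
    public string CompanyName { get; set; }
    public string Hotline { get; set; }
}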
Understanding the rules above only completes our first task: the concepts and design ideas of Azure table storage. Actually defining a table and accessing the table store is what we do next.
The access technology for Azure table storage uses the most dazzling and up-to-date pieces: it is fully compatible with ADO.NET Data Services, accessed through the .NET client class library in the latest .NET 3.5 SP1, and the query language is LINQ. The Windows Azure platform's development model covers Microsoft's latest technologies, so the effort of followers and fans from .NET 1.0 to .NET 4.0 is not wasted.
Next I will use the simplest Azure table application to drill into some of the basic operations and programming techniques of Azure table storage. If you understand the rules and design points above, I think you have basically mastered the key to Azure table storage technology; what follows is mostly a matter of looking at the screenshots and doing some physical activity with the keyboard and mouse.
Next you can choose to follow Jim Nakashima's article, Windows Azure Walkthrough: Simple Table Storage, or you can continue with the steps below. My steps are taken from one of the hands-on labs in the Azure Services Training Kit - PDC Preview. The difference is that Jim Nakashima's article covers more topics, such as creating tables only once, because Jim decided he should provide high-quality sample code.
This exercise is mainly about creating and accessing table storage. When it is finished, the application looks like the following figure:
For such a simple chat message board, the data table is simple too: there is basically one chat message entity with two attributes/fields, the name of the person chatting (Name) and the chat content (Body) - plus, of course, according to the rules above, the two fixed key fields PartitionKey and RowKey.
The definition and configuration of the Azure service are the same as in the earlier queue article: define the TableStorageEndpoint, AccountName, and AccountSharedKey settings, respectively.
The service definition file ServiceDefinition.csdef is shown below. This example also declares a worker role, although the exercise does not actually use it:
<?xml version= "1.0" encoding= "Utf-8"
<servicedefinition name= "workingwithtables" xmlns= "http://" Schemas.microsoft.com/servicehosting/2008/10/servicedefinition "
<webrole name=" WebRole "
< Configurationsettings>
<setting name= "Tablestorageendpoint"/>
<setting name= "AccountName"/>
<setting name= "Accountsharedkey"/>
</configurationsettings>
<inputendpoints>
<!--moment-in use port to HTTP and port 443 for HTTPS when running in the cloud
<inputendpoint name= "Httpin" Protocol= "http" port= "/>"
</inputendpoints>
</webrole>
<workerrole name= "Workerrole
<configurationsettings>
<setting name= "Tablestorageendpoint"/>
<setting name= " AccountName "/>
<setting name=" Accountsharedkey "/>
</configurationsettings>
</ Workerrole>
</servicedefinition>
The service configuration file ServiceConfiguration.cscfg defines the values for each setting declared in the service definition file:
<?xml version= "1.0"
<serviceconfiguration servicename= "workingwithtables" xmlns= "http://" Schemas.microsoft.com/servicehosting/2008/10/serviceconfiguration "
<role name=" WebRole "
< Instances count= "1"/>
<configurationsettings>
<setting name= "tablestorageendpoint" value= "http ://127.0.0.1:10002 "/>
<setting name=" AccountName "value=" Devstoreaccount1 "/>
<setting name=" Accountsharedkey "Value=" EBY8VDM02XNOCQFLQUWJPLLMETLCDXJ1OUZFT50USRZ6IFSUFQ2UVERCZ4I6TQ/K1SZFPTOTR/KBHBEKSOGMGW = = "/>
</configurationsettings>
</role>
<role name=" Workerrole "
<instances Count= "1"/>
<configurationsettings>
<setting name= "Tablestorageendpoint" value= 127.0.0.1:10002 "/>
<setting name=" AccountName "value=" Devstoreaccount1 "/>
<setting name=" Accountsharedkey "Value=" EBY8VDM02XNOCQFLQUWJPLLMETLCDXJ1OUZFT50USRZ6IFSUFQ2UVERCZ4I6TQ/K1SZFPTOTR/KBHBEKSOGMGW = = "/>
</coNfigurationsettings>
</role>
</serviceconfiguration>
For the table storage access API of the Azure service we still use the StorageClient class library in the SDK, so add a StorageClient reference to the web role and worker role projects, respectively.
The next step is the more important piece of table storage work: defining the entity class, which serves as the "script" for the table model.
To put it another way, we do not have to write a database table script to build the table store in the Azure storage system. Instead, according to the rules above, we tell Windows Azure what our entities are, what PartitionKey and RowKey are, and how they are assigned - that is, we satisfy Rule 2 and Rule 3 - and Windows Azure automatically creates the tables, indexes, and even the storage nodes for us. We can then access these tables through the exposed API.
Concretely, we derive our chat message entity class from Microsoft.Samples.ServiceHosting.StorageClient.TableStorageEntity. The definition of this class is effectively the script for the entity and table model. After that we can tell Windows Azure to build the table store for this entity either with the development tools or by executing code. The difference is that with the development tools the table is created at design time, while with code the table can be created at run time, before the table store is accessed. This is also where the "create tables only once" question comes in, because there is the question of the best time to create a table. In this exercise, the development tools are used to build the table storage.
The chat message entity class is defined as follows:
using System;
using Microsoft.Samples.ServiceHosting.StorageClient;

namespace WorkingWithTables_WebRole
{
    public class MyMessages : TableStorageEntity
    {
        public MyMessages()
        {
            PartitionKey = "MyMessages";

            // Reverse ticks (max ticks minus now) so that newer messages sort first,
            // plus a GUID to keep the RowKey unique within the partition.
            RowKey = string.Format("{0:10}_{1}", DateTime.MaxValue.Ticks - DateTime.Now.Ticks, Guid.NewGuid());
        }

        public string Name { get; set; }

        public string Body { get; set; }
    }
}
In the constructor we define the most critical pieces: how PartitionKey and RowKey are assigned. Because this table is very simple, the PartitionKey I assign is a fixed value and is not optimized according to Rule 4 and Rule 5. After that come our custom properties, Name and Body.
Then define an access class for the table, much like a DAL class: the class above is only the table definition, but our application ultimately needs to access the data in the table store. For this we derive a class from Microsoft.Samples.ServiceHosting.StorageClient.TableStorageDataServiceContext to create our own data query and access class:
using System;
using System.Linq;
using Microsoft.Samples.ServiceHosting.StorageClient;

public class MessageDataServiceContext : TableStorageDataServiceContext
{
    public MessageDataServiceContext(StorageAccountInfo accountInfo)
        : base(accountInfo)
    {
    }

    // Exposes the "Messages" table as a LINQ-queryable collection.
    public IQueryable<MyMessages> Messages
    {
        get
        {
            return this.CreateQuery<MyMessages>("Messages");
        }
    }

    public int AddMessage(string name, string body)
    {
        int ret = -1;
        try
        {
            this.AddObject("Messages", new MyMessages { Name = name, Body = body });
            this.SaveChanges();
            ret = 0;
        }
        catch (Exception)
        {
            throw;
        }
        return ret;
    }
}
Then create the table storage in Azure's development environment.
The steps are: select your project in Visual Studio, then right-click and click "Create Test Storage Tables".
The following figure:
The following balloon indicates that the table was created successfully.
You can view the Visual Studio Output window:
You will find that Visual Studio uses the command-line tool DevtableGen.exe from the SDK to execute a command similar to the following:
"C:\Program Files\Windows Azure SDK\v1.0\bin\DevtableGen.exe" "/forcecreate" "/server:localhost\sqlexpress" "/database:WorkingWithTables" "obj\Debug\WorkingWithTables_WebRole\bin\StorageClient.dll;obj\Debug\WorkingWithTables_WebRole\bin\WorkingWithTables_WebRole.dll;C:\myproject\WorkingWithTables\WorkingWithTables\WorkingWithTables_WorkerRole\bin\Debug\StorageClient.dll;C:\myproject\WorkingWithTables\WorkingWithTables\WorkingWithTables_WorkerRole\bin\Debug\WorkingWithTables_WorkerRole.dll"
We might as well open SQL Server and look at the database and the tables it generated.
Finally, we add some simple code in the web role to access the table store.
Query and data binding
protected void Page_Load(object sender, EventArgs e)
{
    string statusMessage = String.Empty;

    try
    {
        StorageAccountInfo accountInfo = StorageAccountInfo.GetAccountInfoFromConfiguration("TableStorageEndpoint");

        // Dynamically create the tables from the data service context model.
        TableStorage.CreateTablesFromModel(typeof(MessageDataServiceContext), accountInfo);

        MessageDataServiceContext context = new MessageDataServiceContext(accountInfo);

        // Bind the ten most recent messages (the RowKey sorts newest first) to the list control.
        this.messageList.DataSource = context.Messages.Take(10);
        this.messageList.DataBind();
    }
    catch (DataServiceRequestException ex)
    {
        statusMessage = "Unable to connect to the table storage server. Please check that the service is running.<br>"
            + ex.Message;
    }
}
Adding records to table storage
protected void SubmitButton_Click(object sender, EventArgs e)
{
    StorageAccountInfo accountInfo = StorageAccountInfo.GetAccountInfoFromConfiguration("TableStorageEndpoint");

    MessageDataServiceContext context = new MessageDataServiceContext(accountInfo);

    context.AddMessage(this.nameBox.Text, this.messageBox.Text);
}
Once you press F5 to run, check that the table storage service in the development environment has started.
After it runs successfully, we can also look at the situation in the SQL Server database, for example how the Azure development environment understands and assigns PartitionKey and RowKey.
Through the experience above, we found that defining and operating on Azure table storage is still somewhat different from traditional table definitions and operations; apart from that, the programming experience is basically fully integrated with the style and technology of ADO.NET Data Services and the ADO.NET Entity Framework.
However, we need to be clear that Windows Azure may currently implement only a subset of ADO.NET Data Services and the ADO.NET Entity Framework. If you are familiar with the ADO.NET Entity Framework, you may be very interested to know how much of its feature set is implemented on Azure; that is something we need to explore later.
Finally, let's take a moment to discuss the main point of Jim Nakashima's article Windows Azure Walkthrough: Simple Table Storage: when you actually deploy to the real Windows Azure runtime environment, the more popular approach is the second way mentioned above, which is to create the table store at run time, preferably only once. In that article this is mainly achieved with the TableStorage.CreateTablesFromModel method. Jim explains that creating the tables programmatically is the way to go in the runtime environment, but in Azure's development environment (local development storage) using the tool to create them ahead of time is a necessary step - a limitation of the development environment. I think it is very likely that in the Windows Azure environment, table storage is not as simple as creating a table in one SQL database; the process can be quite complex and magical. Mark Seemann uses yet another method, which is to use PowerShell to create the tables in the Azure runtime environment - basically invoking the TableStorage.CreateTablesFromModel method from a shell script.
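As a hedged sketch of that "create once at run time" idea (placing the call in Global.asax is my assumption, not code from Jim's or Mark's articles), the table could be created a single time when the web application starts, before any request touches the table store:

using System;
using Microsoft.Samples.ServiceHosting.StorageClient;

public class Global : System.Web.HttpApplication
{
    protected void Application_Start(object sender, EventArgs e)
    {
        // Runs once per application lifetime, so the table is not re-created on every request.
        StorageAccountInfo accountInfo =
            StorageAccountInfo.GetAccountInfoFromConfiguration("TableStorageEndpoint");
        TableStorage.CreateTablesFromModel(typeof(MessageDataServiceContext), accountInfo);
    }
}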
The naming rules for tables in Azure table storage follow the queue naming rules described in rule 1 of the previous article. The table store of the Azure services is kept in the database of the Azure runtime environment and is likewise reliable and persistent storage (durable storage).