This series records the thinking methods and class design process behind basic library development in real work. It also covers how to use a Scrum-like process to organize such work, and how to apply TDD to it.
The world is full of things called DbHelper. Microsoft's EntLib has been updated continuously for years, and all kinds of DbHelper classes released by unknown authors can be found everywhere; many programmers keep their own version, complete or not, as proprietary intellectual property. This is actually quite painful, because too many choices are always confusing.
So one day, I decided to put an end to this mess.
First, why do we use ADO.NET and a DbHelper at all? We actually have plenty of options: the framework and the community offer many ORM tools, yet most programmers still use raw ADO.NET. There are two reasons: performance and programming complexity. Seen from this angle, Microsoft's EntLib, LINQ to SQL, and the ADO.NET Entity Framework have not brought anything particularly useful.
Going further, why a DbHelper? Writing less code (that is, reusing code) and switching seamlessly between multiple databases are the two original motivations. But that is not the real answer. The real answer is that the design of ADO.NET itself is chaotic.
A good design should let its users, the programmers who use the system, work with it very easily. This shows in two ways: first, as few concepts as possible; second, as little code as possible. This is easy to understand: the fewer steps a product demands of its end user, the better, which is a matter of user experience.
When the end user is a programmer, the design and development of a framework should follow the same principle.
What I want to do has two layers. First, ease of understanding: programmers should only need to understand connections, commands, and a data reader to develop against ADO.NET, with no need for concepts such as DbProviderFactory, DbProviderFactories, DbDataAdapter, DataSet, and DataTable. Second, a simple way to define entity objects and entity collections, so that in most cases you do not need to write data access code directly.
Of course, the premise is to keep ADO.NET's level of performance.
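To make the first goal concrete, here is a sketch of the kind of calling code it implies. The `Db` class is the library being designed in this series, not an existing API; the table and column names are illustrative.

```csharp
using System;

class Example
{
    static void Main()
    {
        // Only three concepts appear: a connection, a command, and a reader.
        // new Db() would pick up the provider and connection string from config.
        using (var connection = new Db())
        {
            connection.Open();
            var command = connection.CreateCommand();
            command.CommandText = "SELECT Id, Title FROM Post";
            using (var reader = command.ExecuteReader())
            {
                while (reader.Read())
                {
                    Console.WriteLine("{0}: {1}", reader["Id"], reader["Title"]);
                }
            }
        }
    }
}
```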
Fortunately, the job turned out to be far less difficult than imagined. A few months ago, after five working days, the goal above was achieved with surprising ease. First, define a class that inherits from DbConnection, which eliminates DbProviderFactory. Second, use an indexer to convert between entity objects and DataRow, which avoids reflection and preserves performance.
Entity classes can be produced with a code generator, so in most cases you do not need to write any data access code by hand.
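A minimal sketch of the indexer idea described above: the entity exposes its fields by column name, so a DataRow (or a DbDataReader) can be copied into it with plain assignments, no reflection. The `Post` class and its columns are illustrative, not the library's actual generated code.

```csharp
using System;
using System.Data;

// A generated entity class would carry an indexer like this one.
public class Post
{
    public int Id;
    public string Title;

    public object this[string column]
    {
        get
        {
            switch (column)
            {
                case "Id": return Id;
                case "Title": return Title;
                default: throw new ArgumentException("Unknown column: " + column);
            }
        }
        set
        {
            switch (column)
            {
                case "Id": Id = (int)value; break;
                case "Title": Title = (string)value; break;
                default: throw new ArgumentException("Unknown column: " + column);
            }
        }
    }
}
```

Filling an entity from a DataRow then costs one assignment per column, with no `GetProperty`/`SetValue` reflection calls: `foreach (DataColumn c in row.Table.Columns) post[c.ColumnName] = row[c];`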
The first problem we ran into was DbProviderFactory.
The Db* classes are essentially abstract classes; for example, both SqlConnection and OleDbConnection inherit from DbConnection. To switch between databases as easily as possible, we have to program against the Db* classes. But because they are abstract, you cannot instantiate them yourself; you can only create them through a DbProviderFactory, and the factory must be used for connections, adapters, even parameters. Anyone familiar with design patterns will recognize this as a mix of the provider pattern and the factory pattern; Microsoft's engineers really went to great lengths.
A DbProviderFactory is obtained through DbProviderFactories.GetFactory("provider name"). You then have to keep a DbProviderFactory object around in your code, and the clumsy consequence is that every place that needs a Db* object has to call the factory's CreateXxx methods. I hate this. Why? See for yourself:
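This is the verbose pattern the article is objecting to, written out. The connection string is a placeholder; the API calls themselves are the real ADO.NET factory interface.

```csharp
using System.Data.Common;

class FactoryBoilerplate
{
    static void Main()
    {
        // Every Db* object must be created through the factory,
        // which itself is looked up by a magic provider name string.
        DbProviderFactory factory =
            DbProviderFactories.GetFactory("System.Data.SqlClient");

        DbConnection connection = factory.CreateConnection();
        connection.ConnectionString =
            "Data Source=.;Initial Catalog=MiniBlog;Integrated Security=True;";

        DbCommand command = factory.CreateCommand();
        command.Connection = connection;

        DbParameter parameter = factory.CreateParameter();
        // ...and likewise factory.CreateDataAdapter(), factory.CreateCommandBuilder(), etc.
    }
}
```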
Wouldn't this be better: DbConnection connection = new Db(databaseType, connectionString)? And if your system does not actually need Oracle and SQL Server at the same time, you can keep the database type and connection string in the configuration file, so that no parameters at all need to be considered when creating a connection.
You would not need to understand DbProviderFactory, DbProviderFactories, or "provider name" (things that are hard to even express in plain natural language), and each instantiation would take much less code. That is exactly our goal: fewer concepts, less work.
The purpose is clear. So, as Ah Q said, let's make revolution.
For project management I was relatively lazy: Team Foundation Server with Microsoft's Scrum 1.0 template, on which we created a team project named Faster. By convention, we should first state the project vision and list the product backlog, which is really a collection of stories, use cases, and user scenarios. I am not satisfied with the usual translations of these terms, so I simply call them the "project goal" and the "feature list".
Project goal: handle data access in the simplest possible way, eliminating most coding work related to data access.
1. Simplify the way DbConnection is created: effort estimate 2, priority 1000
2. Obtain metadata from the database: effort estimate 5, priority 2000
3. Automatically generate CRUD commands: effort estimate 3, priority 3000
4. Implement a generic data access class: effort estimate 8, priority 4000
5. Build a code generator: effort estimate 3, priority 5000
6. Handle one-to-many foreign key relationships: effort estimate 2, priority 6000
Defining the project goal clearly is very important: in a sentence or two, it answers the question "what is this project supposed to do".
The feature list must be written in the user's language, each feature meeting a specific user need. Only the titles are listed here for simplicity; in actual work I write an introduction of under two hundred words for each feature and never produce huge documents. There is no need, and programmers will not really read them. Note that each title is written without a subject, in verb-object form: "do what". Priorities should be set by the user, and effort estimates should be agreed by the team's programmers through discussion. The unit of the effort estimate is relative; you can think of it as an "ideal working day", and it only measures the relative size of each feature. I first find the smallest item in the feature list and define it as 2; the second item looks like two to three times the size of the first, so it becomes 5.
Then we assign these features to an "iteration", which I translate as a "stage": the first sprint of the first release. Keep it simple: I believe these tasks can be finished in one stage, so there is no need to split them. For a larger project, you should divide the work into multiple stages by priority and effort estimate, keeping the total workload of each stage roughly balanced. The release concept in Microsoft's Scrum template can be ignored here.
Now open the product backlog: nothing is left in it.
Open the first sprint, that is, the first stage: all the features have just been moved here.
Next, each feature is broken down into "tasks". For example, the first feature, simplifying the creation of DbConnection, I split into the following tasks:
1. Create a test database: priority 1005, 1 hour
2. Create a database class inheriting from DbConnection: priority 1010, 2 hours
3. Create a connection: priority 1015, 1 hour
4. Create a connection using the default connection string in the configuration file: priority 1020, 1 hour
5. Implement the other two ways of creating a connection: priority 1025, 1 hour
Six hours in total, about one working day. The status of each task is To Do, and the Assigned To field is left blank.
Then I assign the first task to myself (with several people working, you would assign one task to each person), get ready to start it, and set its status to In Progress.
Note that at any moment, each programmer on the team faces only one task. This point matters.
Then, create a test database.
First, I put a short description of the task in its Description field, beginning with a line in this format: started at [time] on [date], expected [n] hours. As the work proceeds I record what was done, plus the end time and any interruptions. This is just my personal working habit.
First, create a SQL Server database. I created three tables: Post, Tag, and PostForTag. This should look familiar: it is a simple microblog database that only handles posting; there is no user system, and no comments, forwarding, or other features. We add an image field to Post, mainly for testing convenience, but it also lets the Post table cover photo handling.
Then define the foreign key relationships between the three tables. Because Post is the table with the largest amount of data, we also define indexes on the fields most frequently used in queries.
Generate a script for the database, add the script to the test project, and keep the database script under version control.
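One way the versioned script could be replayed from the test project's setup is sketched below. The file name, connection string, and batch-splitting convention (SQL Server scripts separate batches with `GO`) are assumptions for illustration, not the article's actual code.

```csharp
using System;
using System.Data.SqlClient;
using System.IO;

class TestDatabaseSetup
{
    static void Main()
    {
        // The schema script kept under version control in the test project.
        string script = File.ReadAllText("MiniBlog.sql");

        using (var connection = new SqlConnection(
            "Data Source=.;Initial Catalog=MiniBlog;Integrated Security=True;"))
        {
            connection.Open();
            // Naive GO-splitting: good enough for a generated schema script.
            foreach (var batch in script.Split(
                new[] { "\r\nGO", "\nGO" }, StringSplitOptions.RemoveEmptyEntries))
            {
                using (var command = new SqlCommand(batch, connection))
                {
                    command.ExecuteNonQuery();
                }
            }
        }
    }
}
```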
We also add a simple Excel file: to test OleDb, we need at least two different database providers to verify that the Db* objects work properly.
The very existence of a create-a-test-database task means we do not use mock objects when writing unit tests. I have always rejected them: mocks actually add workload and logic to unit testing, while the benefit is quite limited. That does not fit the "simple" principle.
This task was completed on schedule. Change its status to Done and set the second task to In Progress...
I first tried to solve the problem with extension methods, but since extension method syntax does not support properties and fields, I fell back on the plain approach: define a Db class that inherits from DbConnection. Implementing such a class sounds hard, because the abstract class has many methods and properties to override. I dodged that work with a trick: the class holds a private DbConnection field, connection, and every method, property, and event to be overridden simply forwards to it. Like copying from a book, a custom DbConnection subclass was finished in a few minutes, and of course it is no longer abstract. The class also holds a private field of type DbProviderFactory, so programmers using it never feel the existence of DbProviderFactory.
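A sketch of that delegation trick, assuming the private `connection` and `factory` fields just described. The member list below covers DbConnection's abstract surface; the class name `Db` and the constructor shape are the library under design.

```csharp
using System.Data;
using System.Data.Common;

public class Db : DbConnection
{
    // The factory is hidden inside; callers never touch it.
    private readonly DbProviderFactory factory;
    private readonly DbConnection connection;

    public Db(string connectionString, string providerName)
    {
        factory = DbProviderFactories.GetFactory(providerName);
        connection = factory.CreateConnection();
        connection.ConnectionString = connectionString;
    }

    // Every abstract member is a one-line pass-through to the inner connection.
    public override string ConnectionString
    {
        get { return connection.ConnectionString; }
        set { connection.ConnectionString = value; }
    }
    public override string Database { get { return connection.Database; } }
    public override string DataSource { get { return connection.DataSource; } }
    public override string ServerVersion { get { return connection.ServerVersion; } }
    public override ConnectionState State { get { return connection.State; } }

    public override void Open() { connection.Open(); }
    public override void Close() { connection.Close(); }
    public override void ChangeDatabase(string databaseName)
    {
        connection.ChangeDatabase(databaseName);
    }

    protected override DbCommand CreateDbCommand()
    {
        return connection.CreateCommand();
    }
    protected override DbTransaction BeginDbTransaction(IsolationLevel isolationLevel)
    {
        return connection.BeginTransaction(isolationLevel);
    }
}
```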
This is what encapsulation means. In fact, more than 70% of so-called object orientation in practice is just "encapsulation": hide the internal details and expose a simple service interface.
But what I am really doing is eliminating the provider and factory patterns from ADO.NET; in other words, patching Microsoft's ADO.NET and tearing down their sophisticated design patterns. Given how mundane and low-level the work is, it feels a bit embarrassing to invoke object orientation here.
The second task went by so quickly that there was no need to write any unit tests; it really was just copying work.
On to the third task. First, the Db class needs three constructors for creating a connection:
Db(): creates a connection using the default provider and connection string from the configuration file. We simply name this configuration item "ApplicationServices", because ASP.NET and ASP.NET MVC projects already contain an item by that name.
Db(connectionString, providerName)
Db(configurationName): looks up the connection string and provider name from the named item in the configuration file, and creates a connection from them.
Also, in desktop applications we often use a single connection for the whole lifetime of the program (I mean desktop databases such as SQLite), so we provide a static Default property that returns a default Db object.
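The three constructors and the Default property could look like this, assuming the private `factory` and `connection` fields described earlier. `ConfigurationManager.ConnectionStrings` is the standard .NET API for the connectionStrings config section; everything else is a sketch of the design, not the article's literal code.

```csharp
using System.Configuration;      // reference System.Configuration.dll
using System.Data.Common;

public partial class Db // constructor section of the Db class
{
    private readonly DbProviderFactory factory;
    private readonly DbConnection connection;

    // Db(): default configuration item, the name ASP.NET projects already use.
    public Db() : this("ApplicationServices") { }

    // Db(configurationName): read both values from the config file.
    public Db(string configurationName)
    {
        var settings = ConfigurationManager.ConnectionStrings[configurationName];
        factory = DbProviderFactories.GetFactory(settings.ProviderName);
        connection = factory.CreateConnection();
        connection.ConnectionString = settings.ConnectionString;
    }

    // Db(connectionString, providerName): everything explicit.
    public Db(string connectionString, string providerName)
    {
        factory = DbProviderFactories.GetFactory(providerName);
        connection = factory.CreateConnection();
        connection.ConnectionString = connectionString;
    }

    // One shared connection for desktop scenarios (e.g. SQLite).
    private static Db defaultDb;
    public static Db Default
    {
        get { return defaultDb ?? (defaultDb = new Db()); }
    }
}
```

Making `Default` lazily initialized keeps console and WinForms apps from paying for a connection they may never open; thread safety is ignored here for brevity.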
With this, in most working scenarios a connection can be used in the following two steps:
1. In the connectionStrings section of the configuration file, define the connection string and database type using the standard syntax:

<add name="ApplicationServices"
     connectionString="Data Source=.;Initial Catalog=MiniBlog;Integrated Security=True;"
     providerName="System.Data.SqlClient" />
<add name="RunTime"
     connectionString="Data Source=.;Initial Catalog=MiniBlog;Integrated Security=True;"
     providerName="System.Data.SqlClient" />

2. Create the connection with new Db() (or new Db("RunTime") for a named item).
The second entry, RunTime, is the connection string used to access the Excel file; as you can see, this lives in the App.config of the unit test project.
Along the way, it is worth noting how to write unit tests in VS2010 with TFS.
The general TDD flow is: write a unit test first, compile, write just enough code to make the test pass, then write the next unit test. The problem is that IntelliSense cannot work while you are writing the test, because the code under test does not exist yet.
So instead: first add the method on the class diagram, then use "Create Unit Tests" on that method, then write the unit test, then implement it; and on to the next method or property. Throughout this process, never unit test private members. VS2010 can test private members by generating accessors, but that is pointless: tests of the public members necessarily cover the private ones, and if a private member's logic is complex enough to need its own tests, it needs refactoring.
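A first unit test in that style might look like the following MSTest sketch: it exercises only the public surface of the Db class being built, against the "ApplicationServices" config item mentioned earlier. The test name and assertion are illustrative.

```csharp
using System.Data;
using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
public class DbTest
{
    [TestMethod]
    public void DefaultConstructor_OpensConnection()
    {
        // Uses the "ApplicationServices" item from the test project's App.config.
        using (var db = new Db())
        {
            db.Open();
            Assert.AreEqual(ConnectionState.Open, db.State);
        }
    }
}
```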
The next article will walk through the third task in detail and show how TDD works step by step. The class diagram is as follows: