Last week, we described how to evaluate the code used to extend cloud applications. Now we're going to look at the coding and system-change strategies that make a system less vulnerable over time. Given the seemingly never-ending development demands on a CRM system, the durability of our code is a key factor in keeping these systems running smoothly over the long term.
But before I begin, I should note that the usages and terms I cite apply only to the salesforce.com environment; other applications and platforms use different protocols (and even different abstractions), and I don't know them well enough to comment. So please don't mistake me for a paid shill hyping salesforce.com.
The trade-offs between declarative development and writing code
Cloud-computing platforms favor what vendors call "declarative development" because it is clear, easy to learn, and easier to control in a SaaS environment. Most cloud applications ship with field validation rules, field and picklist constraints, and workflows for their objects. Historically this approach has been quite effective, providing a surprising amount of processing power and functionality.
The trouble comes when we need to add new code, especially code that creates new records. When developing new triggers, classes, or integrated "listener" services, coders typically work in a dedicated development or sandbox environment whose configuration may not match the production system. When the code is promoted to production, error conditions tend to emerge that often cannot be reproduced in the development environment. Unfortunately, the resulting error messages are not just annoying to users; they frequently fail to provide enough clues for troubleshooting.
First set of tips:
1. Make sure development work is done in a freshly refreshed sandbox, so developers aren't left chasing configuration differences between the sandbox and production.
2. Where possible, enable as many integration adapters and other plug-ins in the sandbox as you can, so developers can see the consequences of state changes, especially error states "mapped in" from external sources.
3. Once you start developing extensions and functionality for your objects, remove the validation rules and reimplement them in code, so that you can anticipate pitfalls and control error conditions.
4. For the same reason, reimplement in code any workflow that triggers a field update.
5. Create a set of administrative rules that make it difficult to add new validation rules, or new workflows that cause field updates.
6. Make sure the code respects field and picklist constraints; pre-checking values will spare you some hard-to-diagnose problems.
7. Check that every field is non-NULL, and that every set, list, or map is non-empty, before you try to use it in an expression (yes, even in error-handling expressions).
8. As mentioned in the earlier article "Error handling in cloud computing," write classes that trap all application errors in real time and send them as messages to a centralized error-logging service in the cloud.
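Tips 3, 7, and 8 can be sketched together in Apex. This is a minimal illustration, not a production pattern: the Opportunity fields, the default stage value, and the `ErrorLogger` class are all assumptions for the sake of the example.

```apex
// Sketch: a former validation rule reimplemented in code (tip 3),
// defensive null/blank checks (tip 7), and centralized error logging
// (tip 8). ErrorLogger is a hypothetical logging class.
trigger OpportunityGuard on Opportunity (before insert, before update) {
    for (Opportunity opp : Trigger.new) {
        try {
            // Tip 3: the constraint is now explicit and debuggable,
            // instead of hidden in a declarative validation rule.
            if (opp.Amount == null || opp.Amount < 0) {
                opp.Amount.addError('Amount must be a non-negative number.');
                continue;
            }
            // Tip 7: never assume a field is populated before using it.
            if (String.isBlank(opp.StageName)) {
                opp.StageName = 'Prospecting';  // assumed safe default
            }
        } catch (Exception e) {
            // Tip 8: trap the error and hand it to a central logger,
            // then show the user something better than a raw stack trace.
            ErrorLogger.log('OpportunityGuard', e);
            opp.addError('This record could not be saved; the error has been logged.');
        }
    }
}
```

The point of the `try`/`catch` is that the user sees a deliberate message while the full exception goes to the logging service for troubleshooting.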
Put as much as possible into lookup lists
As everyone knows, hard-coding values in a class or trigger is a bad idea, so at a minimum these parameters should be declared in each module's declaration section. Better still, move the variables into a lookup list, resource file, or similar store that is read whenever the code runs.
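The first half of that advice, hoisting literals into the declaration section, can be sketched in Apex. The class, constant names, and values here are purely illustrative:

```apex
// Instead of burying magic numbers and strings inside method bodies,
// hoist them into the class's declaration section: one obvious place
// to find and change them. (A lookup object read at run time is the
// next step up from this.)
public class DiscountPolicy {
    // Declaration section: assumed example values.
    private static final Decimal MAX_DISCOUNT_PCT = 15.0;
    private static final String  APPROVER_QUEUE   = 'Sales_Ops';

    public static Boolean needsApproval(Decimal discountPct) {
        return discountPct != null && discountPct > MAX_DISCOUNT_PCT;
    }
}
```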
Although databases are increasingly standardized and almost anything can be pushed into a lookup list, this approach can become too abstract and general. Excessive indirection makes the code hard for anyone but the original developer to understand, and can make the application run slowly (and even hit governor limits in a cloud environment). So the following tips matter:
9. Put configuration parameters (picklist assignments, permitted statuses, configuration options, and the like) into lookup lists. Include comment rows in every lookup list so that others can read them to understand the semantics, behavior, and update history of the lists and their values. If your cloud platform supports it, also keep the list in memory ("custom settings") to avoid the latency of disk reads.
10. Place these lookup lists under configuration control. At a minimum, lock down write access and make sure the lists are backed up regularly.
11. Don't be lazy when naming lists and fields; a moment's convenience can cost you hours of troubleshooting.
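On Salesforce, tip 9's in-memory lookup maps naturally onto a list custom setting. A minimal sketch, assuming a hypothetical `OrderStatus__c` custom setting with an `Active__c` checkbox field:

```apex
// Sketch of tip 9: permitted statuses held in a list custom setting
// instead of hard-coded in the class. Custom setting reads come from
// the application cache, not from a database query, so they don't
// count against SOQL governor limits.
public class StatusLookup {
    public static Boolean isPermitted(String status) {
        // getAll() returns every row of the setting, keyed by name.
        Map<String, OrderStatus__c> statuses = OrderStatus__c.getAll();
        return statuses.containsKey(status)
            && statuses.get(status).Active__c == true;
    }
}
```

Adding a new permitted status then becomes a data change an administrator can make, rather than a code deployment.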
Cloud computing calls for an agile, XP, or TDD (test-driven development) coding style
I don't know of anything in the cloud environment that strictly precludes big modules, waterfall development, or deeply nested and branching code; you'll still run into examples occasionally. But these practices should be discarded once and for all, because they are not conducive to building strong, durable code.
12. Objects are not just for the UI. They exist to support understanding, reuse, and code refactoring. But don't get carried away: all of an object's other benefits rest on the premise of understandability, and without understanding, those benefits vanish.
13. Keep modules small, simple, and separable. Taking the KISS principle seriously will also make testing and tuning easier.
Don't be impatient: pay attention to the platform's limits
Cloud platforms impose limits on specific kinds of execution, such as database queries or in-memory list creation. So when you develop the first version of a feature, make sure that first release consumes no more than 50 percent of any resource limit, because before long you will inevitably face new requirements and emergency fixes that drive resource consumption higher.
14. Work on data cached in memory wherever possible (using techniques such as "bulkification" and dynamic SOQL) instead of going to the database every time. Take advantage of future and batch classes to handle large workloads and datasets.
15. Unless you have a hard design reason not to, make sure your test code achieves 100 percent code coverage. Rather than running drills over idle code, actually test the logical results (validating with both positive and negative tests). And don't put no-op statements into the code just to artificially inflate the coverage statistics.
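Tips 14 and 15 can be sketched together: a bulkified helper that issues one SOQL query for a whole batch of records, plus a test that asserts on actual results rather than padding coverage. The `Account` field choices and the region rule are illustrative assumptions:

```apex
// Sketch of tip 14: one query for N records, instead of one query
// per record inside a loop (which burns through governor limits).
public class AccountRegionService {
    public static Map<Id, String> regionsFor(Set<Id> accountIds) {
        Map<Id, String> regionById = new Map<Id, String>();
        for (Account a : [SELECT Id, BillingCountry
                          FROM Account WHERE Id IN :accountIds]) {
            // Assumed toy rule: US accounts are AMER, all else INTL.
            regionById.put(a.Id,
                a.BillingCountry == 'US' ? 'AMER' : 'INTL');
        }
        return regionById;
    }
}

// Sketch of tip 15: the test checks logical outcomes, positive and
// negative, instead of merely executing lines to inflate coverage.
@isTest
private class AccountRegionServiceTest {
    @isTest static void classifiesRegions() {
        Account us = new Account(Name = 'A', BillingCountry = 'US');
        Account de = new Account(Name = 'B', BillingCountry = 'DE');
        insert new List<Account>{ us, de };

        Map<Id, String> regions =
            AccountRegionService.regionsFor(new Set<Id>{ us.Id, de.Id });

        System.assertEquals('AMER', regions.get(us.Id));
        System.assertEquals('INTL', regions.get(de.Id));
    }
}
```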
Original link:
http://www.infoworld.com/d/cloud-computing/how-head-coding-errors-in-the-cloud-176632?page=0,0