Redux Advanced (i)

The immutability of state brings trouble

Dealing with deep, complex data in Redux brings some problems. Because of how JS works, copying an object normally only copies a reference to it, unless you deep-copy it. Redux asks you to return a new state every time instead of mutating the old one, which forces us to copy the relevant parts of the original state, and that often leads to complex operations (lookups, merges). The simple cases are handled with the spread operator or Object.assign:

```js
return {
    ...state,
    data: {
        ...state.data,
        id: 5
    }
};
```

This is convenient for simple data, but it becomes weak and verbose as the data structure gets deeper. Worse, what do we do when we are dealing with an array of objects? There is probably no way around copying the array and then modifying the copy. And crucially, once you replace the entire array, every UI component bound to that data re-renders automatically, even if its own data didn't change: modify one row, and the whole table is re-rendered on screen:
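To make the problem concrete, here is a small plain-JS sketch (the data is illustrative) showing that spreading an array only copies the array itself; the row objects inside are still shared, so mutating one silently mutates the "old" state too:

```javascript
// Shallow-copying an array copies references, not the objects inside it.
const rows = [{ key: 11, age: 32 }, { key: 12, age: 42 }];

const copy = [...rows];   // new array, but the SAME row objects
copy[0].age = 33;         // mutates the object shared with `rows`

console.log(rows[0].age);         // 33 -- the original state was mutated too
console.log(copy[0] === rows[0]); // true -- the elements are shared
```

This is exactly why Redux reducers need either a deep copy or an immutability helper when the state is nested.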

```js
const tableSource = {
    query: 'Tables',
    tableId: 10,
    data: [{
        key: 11,
        name: 'Yanbin',
        age: 32, // I only want to modify this -- do I really have to copy the whole array and then modify the copy?
        address: 'West L. District Lake Park No. 1'
    }, {
        key: 12,
        name: 'Hu Yanzu',
        age: 42,
        address: 'West L. District Lake Bottom Park No. 1'
    }]
};
```

One solution, normalizing the data, is mentioned in the official Redux documentation: reduce the nesting, index by a unique ID, and build the data structure the way a back-end designs its tables. The most important principles are flattening and association. In the end, we need to convert the data into the following form:

```json
{
  "entities": {
    "byKey": {
      "11": {
        "key": 11,
        "name": "Yanbin",
        "age": 32,
        "address": "West L. District Lake Park No. 1"
      },
      "12": {
        "key": 12,
        "name": "Hu Yanzu",
        "age": 42,
        "address": "West L. District Lake Bottom Park No. 1"
      }
    },
    "table": {
      "10": {
        "query": "Tables",
        "tableId": 10,
        "data": [11, 12]
      }
    }
  },
  "result": 10
}
```

Put plainly, normalizing the data just means slimming the object down: flatten the deeply nested hierarchy, which reduces the performance cost of searching the state, and then build index tables that express the connections between the data sets. So how do we produce the data we want?
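The payoff of this shape is worth spelling out. In the sketch below (hand-written to match the normalized structure above), looking up a row is a constant-time property access instead of an array scan, and updating one row only replaces the objects along its path, so every other row keeps its identity:

```javascript
// Hand-written normalized state matching the shape shown above.
const state = {
  entities: {
    byKey: {
      11: { key: 11, name: 'Yanbin', age: 32 },
      12: { key: 12, name: 'Hu Yanzu', age: 42 }
    },
    table: { 10: { query: 'Tables', tableId: 10, data: [11, 12] } }
  },
  result: 10
};

// O(1) lookup by id -- no array scan needed
const row = state.entities.byKey[12];
console.log(row.name); // 'Hu Yanzu'

// Updating one row replaces only that row and the byKey map;
// every other row object keeps its reference identity.
const next = {
  ...state,
  entities: {
    ...state.entities,
    byKey: {
      ...state.entities.byKey,
      11: { ...state.entities.byKey[11], age: 33 }
    }
  }
};
console.log(next.entities.byKey[12] === state.entities.byKey[12]); // true
```

Because untouched rows keep their identity, a UI layer that compares references can skip re-rendering them.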

Normalizr Method Usage Guide

The official docs most often recommend the normalizr module, and its usage does take some time to learn. Let's use the data above as an example to illustrate it:

```js
// $ npm i normalizr -S   (install the module)
import { normalize, schema } from 'normalizr';

// raw data
const tableSource = {
    query: 'Tables',
    tableId: 10,
    data: [{
        key: 11,
        name: 'Yanbin',
        age: 32,
        address: 'West L. District Lake Bottom Park No. 1'
    }, {
        key: 12,
        name: 'Hu Yanzu',
        age: 42,
        address: 'West L. District Lake Park No. 1'
    }]
};

// Create an entity named byKey. Its second parameter is undefined,
// which indicates it is the innermost level of the object.
const byKey = new schema.Entity('byKey', undefined, { idAttribute: 'key' });

// Create an entity named table, indexed by tableId.
const table = new schema.Entity('table', {
    // This declares the relationship between the entities: data in the source
    // is an array of byKey entities; normalizr will collect those rows into an
    // object keyed by each row's `key` and leave only the keys in `data`.
    data: [byKey]
}, {
    idAttribute: 'tableId' // use tableId as the index
});

const normalizedData = normalize(tableSource, table); // produce the new data structure
```

Explanation: the first parameter of new schema.Entity is the name of the entity you are creating. The second parameter describes its relationship to the other entities you created; when it is omitted, the entity is treated as the innermost level, with nothing nested below it. The third parameter contains idAttribute, which names the field to index by (the default is "id"); it can also be a function that returns a unique value of your own construction -- remember, it must be unique. With this, you can build the flattened data structure you want, no matter how deep the source data is. We end up with the structure we hoped for:
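If you want to see what normalize is doing under the hood, here is a toy hand-rolled equivalent for this one specific shape (illustrative only -- the real library handles arbitrary nesting and entity graphs):

```javascript
// A toy normalizer for exactly the tableSource shape above (illustrative only).
function normalizeTable(source) {
  const byKey = {};
  for (const row of source.data) {
    byKey[row.key] = row; // index each row by its unique key
  }
  return {
    entities: {
      byKey,
      table: {
        [source.tableId]: {
          query: source.query,
          tableId: source.tableId,
          data: source.data.map(row => row.key) // keep only the keys
        }
      }
    },
    result: source.tableId
  };
}

const tableSource = {
  query: 'Tables',
  tableId: 10,
  data: [
    { key: 11, name: 'Yanbin', age: 32 },
    { key: 12, name: 'Hu Yanzu', age: 42 }
  ]
};

const normalized = normalizeTable(tableSource);
console.log(normalized.entities.table[10].data); // [11, 12]
console.log(normalized.result); // 10
```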

```js
{
  "entities": {
    "byKey": {                  // entity name
      "11": {                   // the unique "key" we configured earlier
        "key": 11,
        "name": "Yanbin",
        "age": 32,
        "address": "West L. District Lake Park No. 1"
      },
      "12": {
        "key": 12,
        "name": "Hu Yanzu",
        "age": 42,
        "address": "West L. District Lake Bottom Park No. 1"
      }
    },
    "table": {                  // entity name
      "10": {                   // indexed by tableId
        "query": "Tables",
        "tableId": 10,
        "data": [11, 12]        // data is now a list of keys, because we declared
                                // the relationship between the two entities as data: [byKey]
      }
    }
  },
  "result": 10                  // the index into the table entity
}
```

Detailed official documentation is available on GitHub; this is only a brief explanation of the usage, and you can study the documentation more closely when you have the time. So far so good -- the first step is done.

How to convert the normalized data back into business data

What? You need to convert again, into the data structure you actually want? I'm sorry to tell you: yes. Normalized data only makes the Redux state easy to maintain; the data structure the interface renders is often different from what we have stored. For example, the table components in Bootstrap or Ant Design expect data shaped like this:

```js
const dataSource = [{
    key: '1',
    name: 'Yanbin',
    age: 32,
    address: 'West L. District Lake Bottom Park No. 1'
}, {
    key: '2',
    name: 'Hu Yanzu',
    age: 42,
    address: 'West L. District Lake Bottom Park No. 1'
}];
```

So when the interface reads data from the state, we need an intermediary that transforms the normalized data back into the business data structure. This step is very simple -- just write a small converter:

```js
const transform = (source) => {
    const data = source.entities.byKey;
    return Object.keys(data).map(v => data[v]);
};

const mapStateToProps = (state, ownProps) => ({
    table: transform(state)
});

export default connect(mapStateToProps)(View);
```

If you set a breakpoint in mapStateToProps, you will find that every dispatch forces mapStateToProps to run again, so that the objects it returns are always fresh (unless you are referencing primitive values). In other words, no matter what the interface does, the connected data is recomputed once each time; the interface may not change, but the JS performance clearly suffers, especially for complex processing of deep objects. That is why the official docs recommend creating memoized functions to efficiently compute derived data from the Redux store.
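A memoized function caches its last inputs and result, and recomputes only when an input changes. A minimal single-slot sketch of the idea (not reselect's actual implementation, and the selector names below are illustrative) looks like this:

```javascript
// Minimal single-slot memoization in the spirit of reselect (sketch only).
function createSelector(inputSelectors, compute) {
  let lastInputs = null;
  let lastResult;
  return (state) => {
    const inputs = inputSelectors.map(sel => sel(state));
    const same = lastInputs !== null &&
      inputs.every((v, i) => v === lastInputs[i]); // shallow reference check
    if (!same) {
      lastResult = compute(...inputs);             // recompute only on change
      lastInputs = inputs;
    }
    return lastResult;
  };
}

// Usage: the result is recomputed only when the referenced slices change.
const getKeys = state => state.keys;
const getByKey = state => state.byKey;
const getRows = createSelector([getKeys, getByKey],
  (keys, byKey) => keys.map(k => byKey[k]));

const state = { keys: [1], byKey: { 1: { id: 1 } } };
console.log(getRows(state) === getRows(state)); // true -- same cached array
```

Because the cached array keeps its identity, downstream reference checks (and React's shallow comparisons) can skip work.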

Reselect Method Usage Guide
```js
import { createSelector } from 'reselect';

// cache the index array stored in table.data
const reNormalDataSource = (state, props) => state.app.entities.table['10'].data;

// cache the base data inside byKey
const reNormal = (state, props) => state.app.entities.byKey;

// cache the computed result
const createNormalTableData = createSelector(
    [reNormalDataSource, reNormal],
    (keys, source) => keys.map(item => source[item])
);

// Each time mapStateToProps re-runs, the selector returns the stored result of
// the last computation; it recomputes only when its inputs change, so unrelated
// state changes cost nothing.
const mapStateToProps = (state, own) => ({
    source: createNormalTableData(state)
});
```

I've used a small trick here: I follow the table.data index array to find the interface's business data. This way we only have to care about the simple one-dimensional table.data array, which is especially useful when deleting or adding a record.

Finally, to compute the new state we use the dot-prop-immutable module, an immutability helper that is very efficient for data updates; on top of it I used the dot-prop-immutable-chain module, which adds a chained API to dot-prop-immutable. I won't go into dot-prop-immutable's API in detail -- the examples given below are clear at a glance, and it is covered in the official documentation. Now let's show the full solution by editing, adding, and deleting a table row.
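The core operation these helpers provide is an immutable "set along a dotted path": copy only the nodes on the path, and leave every sibling untouched. A minimal sketch of that idea (illustrative only; it handles plain objects, while the real library also supports arrays and provides get/merge/delete) looks like this:

```javascript
// A minimal immutable set-by-path, in the spirit of dot-prop-immutable (sketch only).
// Handles plain objects only; the real library also handles arrays.
function setIn(obj, path, value) {
  const [head, ...rest] = path.split('.');
  return {
    ...obj, // copy this node; siblings keep their identity
    [head]: rest.length ? setIn(obj[head], rest.join('.'), value) : value
  };
}

const state = { entities: { byKey: { 11: { age: 32 }, 12: { age: 42 } } } };
const next = setIn(state, 'entities.byKey.11.age', 33);

console.log(next.entities.byKey[11].age);                        // 33
console.log(state.entities.byKey[11].age);                       // 32 -- original untouched
console.log(next.entities.byKey[12] === state.entities.byKey[12]); // true -- sibling shared
```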

```js
import { normalize, schema } from 'normalizr';
import dotProp from 'dot-prop-immutable-chain';

// uid and model are defined elsewhere in the module
const reducer = (state = normalizedData, action) => {
    switch (action.type) {
        // modify one piece of data
        case 'EDITOR':
            return dotProp(state)
                .set(`entities.byKey.${action.key}.age`, action.age)
                .value();
        // add a piece of data
        case 'add': {
            const newId = uid++;
            return dotProp(state)
                .set(`entities.byKey.${newId}`, Object.assign({}, model, { key: newId })) // add the new record
                .merge('entities.table.10.data', [newId]) // reference the new record in the index array
                .value();
        }
        // delete a piece of data
        case 'delete': {
            // Because the interface data is driven by the items in table.data,
            // we only need to operate on this simple one-dimensional array,
            // which is obviously easier to maintain.
            const index = state.entities.table['10'].data.indexOf(Number(action.key));
            return dotProp(state)
                .delete(`entities.table.10.data.${index}`)
                .value();
        }
    }
    return state;
};
```

Look, that's the whole reducer. It has become much easier to maintain, and thanks to the normalized data structure and the immutability helper module, it not only improves computational performance and reduces interface re-rendering, but also conforms to Redux's principle that state must not be mutated.

Conclusion

There are many places where a React + Redux application needs optimization in practice; this article only shows one small point. Although React already does a good job of computing DOM changes, that doesn't mean we can blame every rendering problem on the interface layer. React takes on most of the interface-update computation and leaves more of the data handling to front-end developers, so it is worth spending more of our project time at that layer.

Resources

Redux documentation (Chinese)

Organizing your Redux store like a database

normalizr on GitHub

Reselect on GitHub

dot-prop-immutable on GitHub
