PHP Hash Table principle

Source: Internet
Author: User

Brief introduction

Hash tables are used in almost every C program. Since the C language only allows integers as array keys, PHP implements a hash table that maps string keys, through a hashing algorithm, onto an array of bounded size. Collisions are unavoidable in such a mapping, and PHP resolves them with linked lists.
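As an illustration of that mapping, the sketch below implements a times-33 string hash of the kind PHP's own string hashing builds on (the DJBX33A family); the function name str_hash is hypothetical, and this is a simplified model, not Zend's exact code.

```c
#include <stddef.h>
#include <stdint.h>

/* Simplified times-33 string hash (DJBX33A family), the scheme that
 * PHP's string hashing is based on. Illustrative only: not the exact
 * Zend implementation. */
static uint64_t str_hash(const char *key, size_t len)
{
    uint64_t hash = 5381;
    while (len-- > 0) {
        hash = ((hash << 5) + hash) + (unsigned char)*key++; /* hash * 33 + c */
    }
    return hash;
}
```

The integer produced this way is then reduced to a slot in the bounded array, which is where collisions arise.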

There is no perfect way to implement a hash table. Each design focuses on a particular concern: some minimize CPU usage, some use memory more sparingly, and others scale well across threads.

Hash table implementations are so diverse precisely because each one can only excel at its own point of focus and cannot cover every concern at once.

Data

Before we begin, we need to declare some things beforehand:

The hash table's keys may be strings or integers. A string key is declared as zend_string; an integer key as zend_ulong.

The order of the hash tables follows the order in which elements in the table are inserted.

The capacity of the hash table is automatically scaled.

Internally, the hash table's capacity is always a power of 2.

Each element in the hash table must be a zval type of data.
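Because the capacity is always a power of two, reducing a hash to a slot never needs a division: a bitwise AND with size - 1 is equivalent to the modulo. A minimal sketch, with a hypothetical slot_index helper:

```c
#include <stdint.h>

/* For a power-of-two table_size, h % table_size == h & (table_size - 1),
 * which is far cheaper than an integer division. */
static uint32_t slot_index(uint64_t h, uint32_t table_size)
{
    return (uint32_t)(h & (table_size - 1));
}
```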

The following is the structure of the HashTable:

struct _zend_array {
    zend_refcounted_h gc;
    union {
        struct {
            ZEND_ENDIAN_LOHI_4(
                zend_uchar    flags,
                zend_uchar    nApplyCount,
                zend_uchar    nIteratorsCount,
                zend_uchar    reserve)
        } v;
        uint32_t flags;
    } u;
    uint32_t          nTableMask;
    Bucket           *arData;
    uint32_t          nNumUsed;
    uint32_t          nNumOfElements;
    uint32_t          nTableSize;
    uint32_t          nInternalPointer;
    zend_long         nNextFreeElement;
    dtor_func_t       pDestructor;
};

This structure occupies 56 bytes.

The most important of these fields is arData, a pointer to an array of Bucket structures. Bucket is defined as follows:

typedef struct _Bucket {
    zval              val;
    zend_ulong        h;                /* hash value (or numeric index)   */
    zend_string      *key;              /* string key or NULL for numerics */
} Bucket;

Note that a Bucket embeds a zval directly rather than a pointer to one. In PHP 7, zvals are no longer heap-allocated, because data that itself requires heap allocation (such as a PHP string) is stored as a pointer inside the zval structure.

Here is how arData is laid out in memory:

We note that all buckets are stored sequentially.

Inserting elements

PHP guarantees that array elements are stored in the order they were inserted, so a foreach loop traverses the array in insertion order. Say we have an array like this:

$a = [9 => "foo", 2 => 42, []];
var_dump($a);

array(3) {
  [9]=>
  string(3) "foo"
  [2]=>
  int(42)
  [10]=>
  array(0) {
  }
}

All of the data is contiguous in memory.

This makes the iteration logic for the hash table quite simple: just traverse the arData array directly. Traversing adjacent memory makes excellent use of the CPU caches; since the cache can hold the whole arData region, accessing each element is extremely fast.

size_t i;
Bucket p;
zval val;

for (i = 0; i < ht->nTableSize; i++) {
    p   = ht->arData[i];
    val = p.val;
    /* do something with val */
}

As you can see, buckets are stored sequentially in arData. To implement such a structure, we need to track the position of the next available slot; that position is stored in the nNumUsed field of the array structure.

Whenever we add a new element, we store it and execute ht->nNumUsed++. When nNumUsed reaches nTableSize (the total number of slots), the "compact or grow" algorithm is triggered.
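The choice between compacting and growing can be sketched as below, modeled on the heuristic in PHP 7's zend_hash_do_resize (the helper name should_compact is hypothetical): compaction is only worthwhile when enough slots are deleted holes.

```c
#include <stdint.h>

/* Modeled on PHP 7's zend_hash_do_resize heuristic: when the table is
 * full (nNumUsed == nTableSize), compact if enough deleted holes exist
 * to amortize the compaction cost; otherwise double the table size.
 * Simplified sketch, not the real implementation. */
static int should_compact(uint32_t nNumUsed, uint32_t nNumOfElements)
{
    return nNumUsed > nNumOfElements + (nNumOfElements >> 5);
}
```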

The following is an example of a simple implementation of inserting elements into a hash table:

idx = ht->nNumUsed++;    /* take the next available slot number */
ht->nNumOfElements++;    /* increment number of elements */
/* ... */
p = ht->arData + idx;    /* get the bucket in this slot from arData */
p->key = key;            /* affect to it the key we want to insert at */
/* ... */
p->h = h = ZSTR_H(key);  /* save the hash of the current key into the bucket */
ZVAL_COPY_VALUE(&p->val, pData); /* copy the value into the bucket's value: add operation */

As we can see, insertions always happen at the end of the arData array, never into slots freed by deletion.

Deleting elements

When an element is deleted, the hash table does not shrink the allocated data space; instead, it sets the bucket's zval to UNDEF, marking the node as deleted.


Therefore, when iterating over array elements, empty nodes must be checked for explicitly:

size_t i;
Bucket p;
zval val;

for (i = 0; i < ht->nTableSize; i++) {
    p   = ht->arData[i];
    val = p.val;
    if (Z_TYPE(val) == IS_UNDEF) { /* empty hole? */
        continue;                  /* skip it */
    }
    /* do something with val */
}

Even for a very large hash table, looping over every node and skipping the deleted ones is very fast, thanks to arData's nodes always being contiguous in memory.

Locating elements by hash

Given a string key, we must run it through the hash algorithm, and the resulting hash value must let us find the index of the corresponding element in arData.

We cannot use the hash value directly as an index into arData, because then elements would no longer be stored in insertion order.

For example: suppose I insert the key foo first and then bar, that foo hashes to 5, and that bar hashes to 3. If we stored foo in arData[5] and bar in arData[3], bar would come before foo, which is exactly the opposite of the order in which we inserted them.

So once a key has been hashed, we need a conversion table that maps the hash result to the index where the node is actually stored.

Cleverly, the conversion table is stored mirror-mapped right before the arData start pointer. This requires no separate storage: the conversion table is allocated together with the arData space.

The following is the data structure of a hash table with 8 elements, plus its conversion table:

Now, to access the element foo refers to, we run the hash algorithm on the key, reduce the result by the table mask, and read from the conversion table the index of the node where the value is stored.

As we can see, a node's index in the conversion table is the negative counterpart of its index in the data area: nTableMask equals the negative of the hash table size, and OR-ing the hash with it yields a number between -8 and -1, locating the conversion-table slot that holds the element's index. In summary, when we allocate storage for arData, the size is computed as tablesize * sizeof(Bucket) + tablesize * sizeof(uint32_t).
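The index computation can be checked with a small sketch (translation_index is a hypothetical helper): with nTableSize = 8, nTableMask is -8 (0xFFFFFFF8), and OR-ing any hash with it keeps the low three bits while setting all high bits, yielding a signed index between -8 and -1.

```c
#include <stdint.h>

/* Computes the conversion-table index the way PHP 7 does:
 * nIndex = h | nTableMask, with nTableMask = -nTableSize.
 * The result is a negative offset from arData. */
static int32_t translation_index(uint64_t h, uint32_t nTableSize)
{
    uint32_t nTableMask = (uint32_t)-(int32_t)nTableSize; /* e.g. -8 */
    return (int32_t)((uint32_t)h | nTableMask);
}
```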

The two areas are clearly separated in the source code as well:

#define HT_HASH_SIZE(nTableMask) \
    (((size_t)(uint32_t)-(int32_t)(nTableMask)) * sizeof(uint32_t))
#define HT_DATA_SIZE(nTableSize) \
    ((size_t)(nTableSize) * sizeof(Bucket))
#define HT_SIZE_EX(nTableSize, nTableMask) \
    (HT_DATA_SIZE((nTableSize)) + HT_HASH_SIZE((nTableMask)))
#define HT_SIZE(ht) HT_SIZE_EX((ht)->nTableSize, (ht)->nTableMask)

Bucket *arData;
arData = emalloc(HT_SIZE(ht)); /* now alloc this */

Expanding the macros gives:

(((size_t)(((ht)->nTableSize)) * sizeof(Bucket)) +
 (((size_t)(uint32_t)-(int32_t)(((ht)->nTableMask))) * sizeof(uint32_t)))
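As a worked example, assuming the common 64-bit layout where sizeof(Bucket) is 32 and sizeof(uint32_t) is 4: a table of 8 slots needs 8 * 32 + 8 * 4 = 288 bytes. The hypothetical ht_alloc_size helper below reproduces the arithmetic:

```c
#include <stddef.h>
#include <stdint.h>

/* Reproduces the HT_SIZE arithmetic for a power-of-two table size,
 * assuming the 64-bit layout where sizeof(Bucket) == 32. */
static size_t ht_alloc_size(uint32_t nTableSize)
{
    size_t bucket_area = (size_t)nTableSize * 32;               /* HT_DATA_SIZE */
    size_t hash_area   = (size_t)nTableSize * sizeof(uint32_t); /* HT_HASH_SIZE */
    return bucket_area + hash_area;
}
```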

Collision resolution

Next, let's look at how the hash table's collision problem is solved. Several keys may hash to the same conversion-table slot, so after following the translated index we must compare the bucket's key with the one we are looking for. If it does not match, we read the next bucket in the chain through the zval.u2.next field.

Note that the chain here is not scattered across memory like a traditional linked list. Instead of following pointers to nodes allocated all over the heap, we keep reading within the single arData array.

This is an important source of PHP 7's performance improvement: data locality lets the CPU avoid frequent trips to slow main memory and read most of the data straight from its L1 cache.

So adding an element to the hash table looks like this:

idx = ht->nNumUsed++;
ht->nNumOfElements++;
if (ht->nInternalPointer == HT_INVALID_IDX) {
    ht->nInternalPointer = idx;
}
zend_hash_iterators_update(ht, HT_INVALID_IDX, idx);
p = ht->arData + idx;
p->key = key;
if (!ZSTR_IS_INTERNED(key)) {
    zend_string_addref(key);
    ht->u.flags &= ~HASH_FLAG_STATIC_KEYS;
    zend_string_hash_val(key);
}
p->h = h = ZSTR_H(key);
ZVAL_COPY_VALUE(&p->val, pData);
nIndex = h | ht->nTableMask;
Z_NEXT(p->val) = HT_HASH(ht, nIndex);
HT_HASH(ht, nIndex) = HT_IDX_TO_HASH(idx);

The same rules apply to deleting elements:

#define HT_HASH_TO_BUCKET_EX(data, idx) ((data) + (idx))
#define HT_HASH_TO_BUCKET(ht, idx) HT_HASH_TO_BUCKET_EX((ht)->arData, idx)

h = zend_string_hash_val(key); /* get the hash from the key (assuming string key here) */
nIndex = h | ht->nTableMask;   /* get the translation table index */

idx = HT_HASH(ht, nIndex);     /* get the slot corresponding to that translation index */
while (idx != HT_INVALID_IDX) {         /* if there is a corresponding slot */
    p = HT_HASH_TO_BUCKET(ht, idx);     /* get the bucket from that slot */
    if ((p->key == key) ||              /* is it the right bucket? same key pointer? */
        (p->h == h &&                   /* ... or same hash */
         p->key &&                      /* and a key (string key based) */
         ZSTR_LEN(p->key) == ZSTR_LEN(key) &&                            /* and same key length */
         memcmp(ZSTR_VAL(p->key), ZSTR_VAL(key), ZSTR_LEN(key)) == 0)) { /* and same key content? */
        _zend_hash_del_el_ex(ht, idx, p, prev); /* that's us! delete us */
        return SUCCESS;
    }
    prev = p;
    idx = Z_NEXT(p->val); /* get the next corresponding slot from the current one */
}
return FAILURE;

Initialization of conversion tables and hashes

HT_INVALID_IDX is a special marker meaning, in the conversion table: this slot points to no valid data; skip it.

Thanks to its two-step initialization, the hash table greatly reduces the cost of the many arrays that are created empty. When a new hash table is created, only a two-slot conversion table is allocated, with both slots set to HT_INVALID_IDX.

#define HT_MIN_MASK ((uint32_t) -2)
#define HT_HASH_SIZE(nTableMask) \
    (((size_t)(uint32_t)-(int32_t)(nTableMask)) * sizeof(uint32_t))
#define HT_SET_DATA_ADDR(ht, ptr) do { \
        (ht)->arData = (Bucket*)(((char*)(ptr)) + HT_HASH_SIZE((ht)->nTableMask)); \
    } while (0)

static const uint32_t uninitialized_bucket[-HT_MIN_MASK] =
    {HT_INVALID_IDX, HT_INVALID_IDX};

/* hash lazy init */
ZEND_API void ZEND_FASTCALL _zend_hash_init(HashTable *ht, uint32_t nSize,
        dtor_func_t pDestructor, zend_bool persistent ZEND_FILE_LINE_DC)
{
    /* ... */
    ht->nTableSize = zend_hash_check_size(nSize);
    ht->nTableMask = HT_MIN_MASK;
    HT_SET_DATA_ADDR(ht, &uninitialized_bucket);
    ht->nNumUsed = 0;
    ht->nNumOfElements = 0;
}

Note that no heap allocation is needed at this point; a static memory area is used instead, which is lighter.

Then, when the first element is inserted, the hash table is fully initialized: the required conversion-table space is created (8 slots by default if the array size cannot be determined), and this time memory is allocated from the heap.

#define HT_HASH_EX(data, idx) ((uint32_t*)(data))[(int32_t)(idx)]
#define HT_HASH(ht, idx) HT_HASH_EX((ht)->arData, idx)

(ht)->nTableMask = -(ht)->nTableSize;
HT_SET_DATA_ADDR(ht, pemalloc(HT_SIZE(ht), (ht)->u.flags & HASH_FLAG_PERSISTENT));
memset(&HT_HASH(ht, (ht)->nTableMask), HT_INVALID_IDX, HT_HASH_SIZE((ht)->nTableMask));

The HT_HASH macro accesses conversion-table slots using negative offsets. A hash table's mask is always negative, because conversion-table indexes are the negatives of arData indexes. This is part of the beauty of C programming: a pointer can address memory in either direction without any extra bookkeeping or performance cost.
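The negative addressing can be reproduced outside PHP with ordinary pointer arithmetic. The sketch below (the toy_* names are hypothetical stand-ins, not Zend code) makes one allocation holding the conversion table followed by the buckets, and returns a pointer to the boundary so conversion slots are read at negative indexes, just like HT_HASH:

```c
#include <stdint.h>
#include <stdlib.h>
#include <string.h>

typedef struct { uint64_t h; int val; } toy_bucket; /* stand-in, not Zend's Bucket */

#define TOY_INVALID_IDX ((uint32_t)-1) /* plays the role of HT_INVALID_IDX */

/* Allocate [conversion table][buckets] as one block; the returned
 * pointer sits at the boundary, so the conversion table is addressed
 * with negative uint32_t indexes from it. */
static toy_bucket *toy_alloc(uint32_t size)
{
    char *base = malloc(size * sizeof(uint32_t) + size * sizeof(toy_bucket));
    memset(base, 0xFF, size * sizeof(uint32_t)); /* every slot = TOY_INVALID_IDX */
    return (toy_bucket *)(base + size * sizeof(uint32_t));
}

/* Mirror of HT_HASH: read conversion slot nIndex, where nIndex < 0. */
static uint32_t toy_hash_slot(const toy_bucket *arData, int32_t nIndex)
{
    return ((const uint32_t *)arData)[nIndex];
}

static void toy_free(toy_bucket *arData, uint32_t size)
{
    free((char *)arData - size * sizeof(uint32_t));
}
```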

The following is the structure of a lazily initialized hash table:

Fragmentation, resizing, and compaction of hash tables

When the hash table is full and an element must be inserted, the table has to be resized. The size of a hash table always doubles. When growing, we pre-allocate a C array of Bucket and store UNDEF zvals in the empty slots. This wastes (new_size - old_size) * sizeof(Bucket) bytes until those slots receive data.

If an element is added to a full hash table of 1024 slots, it grows to 2048 slots, of which 1023 sit empty; that consumes 1023 * 32 bytes ≈ 32KB. This is a flaw in the way the PHP hash table is implemented, and there is no perfect solution for it.
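The figure follows from simple arithmetic, assuming 32-byte buckets on a 64-bit build; the grow_waste_bytes helper below is hypothetical, counting the slots left empty right after a doubling (one new slot is taken by the insert that triggered the growth):

```c
#include <stddef.h>
#include <stdint.h>

/* Bytes sitting empty right after a full table of old_size slots
 * doubles, assuming sizeof(Bucket) == 32 and that the triggering
 * insert consumes one of the new slots. */
static size_t grow_waste_bytes(uint32_t old_size)
{
    uint32_t new_size = old_size * 2;
    return (size_t)(new_size - old_size - 1) * 32;
}
```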

Programming is a continual process of designing compromises; in low-level programming, the trade-off is between CPU and memory.

A hash table may also consist largely of UNDEF nodes. If many elements are inserted and then deleted, the table becomes fragmented: since data is never inserted into intermediate arData slots, many UNDEF holes can accumulate.


Compacting arData reintegrates the fragmented elements. When the hash table needs to be resized, it first compacts itself. It then checks whether growth is required: if so, it doubles in size; if not, the data is simply repacked into the existing slots. This algorithm does not run on every deletion, because it consumes a lot of CPU.

The following is the compacted array:

The compaction algorithm iterates over all elements in arData and fills each UNDEF hole with the next live bucket, as shown below:

Bucket *p;
uint32_t nIndex, i;

HT_HASH_RESET(ht);
i = 0;
p = ht->arData;

do {
    if (UNEXPECTED(Z_TYPE(p->val) == IS_UNDEF)) {
        uint32_t j = i;
        Bucket *q = p;

        while (++i < ht->nNumUsed) {
            p++;
            if (EXPECTED(Z_TYPE_INFO(p->val) != IS_UNDEF)) {
                ZVAL_COPY_VALUE(&q->val, &p->val);
                q->h = p->h;
                nIndex = q->h | ht->nTableMask;
                q->key = p->key;
                Z_NEXT(q->val) = HT_HASH(ht, nIndex);
                HT_HASH(ht, nIndex) = HT_IDX_TO_HASH(j);
                if (UNEXPECTED(ht->nInternalPointer == i)) {
                    ht->nInternalPointer = j;
                }
                q++;
                j++;
            }
        }
        ht->nNumUsed = j;
        break;
    }
    nIndex = p->h | ht->nTableMask;
    Z_NEXT(p->val) = HT_HASH(ht, nIndex);
    HT_HASH(ht, nIndex) = HT_IDX_TO_HASH(i);
    p++;
} while (++i < ht->nNumUsed);

Conclusion

That concludes this introduction to the implementation of PHP's hash table. Some advanced material on hash tables was left untranslated, since next I plan to share other PHP kernel topics; readers interested in the hash table can consult the original article.
