Xenstore: usage, structure and principle (4. Monitoring: xs_watch)

The watch facility of xenstore is very useful: any change to a watched xenstore node notifies the watch registrant. The backend drivers of a Xen virtual machine use watches to detect changes on the corresponding frontend devices.

Note:

(1) You do not need to start a transaction to register a watch; opening the connection with xs_open is enough. In the kernel you can call register_xenbus_watch directly.

(2) As soon as a watch is registered, xenstored immediately fires an event. This is by design; if you do not want to handle this initial event, you must ignore it explicitly.

Watch usage:

Kernel space: (from http://wiki.xen.org/xenwiki/XenBus)

[CPP]
    static struct xenbus_watch xb_watch = {
        .node     = "memory/target",
        .callback = watch_target,
    };

    ret = register_xenbus_watch(&xb_watch);
    if (ret)
        IPRINTK("failed to initialize balloon watcher\n");
    else
        IPRINTK("balloon xenbus watcher initialized\n");

User space: user space differs slightly from the kernel. You use the xs_watch function to register a watch, but you cannot attach a callback; instead you call xs_read_watch, which blocks until an event arrives, as in the following code:

[CPP]
    struct xs_handle *xh = xs_open(0);

    assert(xs_watch(xh, path, token));
    char **result = NULL;
    unsigned int num;
    result = xs_read_watch(xh, &num);   /* blocks until a watch fires */
    free(result);
    xs_unwatch(xh, path, token);

If you don't want to block, you have to spawn a thread to do the listening. But there is a bug here (Xen 4.1.3): xs_read_watch is NOT thread-safe. If you use pthread_cancel to terminate the thread while xs_read_watch is blocked, xs_read_watch does not release all of its resources (locks). When the main thread later calls xs_close to close the connection to xenstore, it deadlocks because those locks can never be acquired. The analysis is as follows:

tools/xenstore/xs.c

[CPP]
    char **xs_read_watch(struct xs_handle *h, unsigned int *num)
    {
        struct xs_stored_msg *msg;
        char **ret, *strings, c = 0;
        unsigned int num_strings, i;

        mutex_lock(&h->watch_mutex);  /* ------------------> takes the lock */
        /* A pthread_cleanup_push(pthread_mutex_unlock, &h->watch_mutex)
         * should be added here. */
    #ifdef USE_PTHREAD
        /* Wait on the condition variable for a watch to fire.
         * If the reader thread doesn't exist yet, then that's because
         * we haven't called xs_watch. Presumably the application
         * will do so later; in the meantime we just block.
         */
        while (list_empty(&h->watch_list) && h->fd != -1)
            condvar_wait(&h->watch_condvar, &h->watch_mutex);
    #else /* !defined(USE_PTHREAD) */
        /* Read from comms channel ourselves if there are no threads
         * and therefore no reader thread. */
        assert(!read_thread_exists(h)); /* not threadsafe but worth a check */
        if (read_message(h) == -1)  /* <------ blocks here when no message is pending */
            return NULL;

xs_read_watch takes the lock at the beginning and then calls read_message, which blocks inside the read() call. If the main thread calls pthread_cancel on the thread running xs_read_watch, h->watch_mutex is never released, and any subsequent attempt to take it deadlocks.

Solution: before calling xs_unwatch and xs_close, check the internal locks of the xs_handle and forcibly unlock them.

The xs_handle struct is only forward-declared in the header file; its full definition lives in xs.c and is used internally by the xenstore library. To reach the members inside an xs_handle, you must first replicate its full definition:

[CPP]
    /* Used to reach (>_<) the internal members of xs_handle. */
    typedef struct list_head {
        struct list_head *next, *prev;
    } list_head_struct;

    typedef struct {
        int fd;
        pthread_t read_thr;
        int read_thr_exists;

        struct list_head watch_list;
        pthread_mutex_t watch_mutex;    /* guards the watch list; this is the lock that deadlocks */
        pthread_cond_t watch_condvar;
        int watch_pipe[2];

        struct list_head reply_list;
        pthread_mutex_t reply_mutex;    /* guards the reply list */
        pthread_cond_t reply_condvar;

        pthread_mutex_t request_mutex;  /* serializes requests */
    } my_xs_handle;

    #if __GNUC__ > 3
    #define offsetof(a, b) __builtin_offsetof(a, b)         /* member-offset macro */
    #else
    #define offsetof(a, b) ((unsigned long)&(((a *)0)->b))
    #endif

Then extract the internal members of xh as follows:

    pthread_mutex_t *pm_watch =
        (pthread_mutex_t *)((void *)xh + offsetof(my_xs_handle, watch_mutex));

The force-unlock macro (it could just as well be written as a function):

[CPP]
    /* pthread_mutex_trylock takes the lock if it is free; otherwise it returns EBUSY. */
    #define check_xh_lock(x) do {                                              \
        pthread_mutex_t *pm = (pthread_mutex_t *)                              \
            ((void *)pthis->xh + offsetof(my_xs_handle, x));                   \
        if (pthread_mutex_trylock(pm) == EBUSY) {                              \
            cout << "thread_cleanup-> " #x " is already locked!" << endl;      \
            if (0 != pthread_mutex_unlock(pm))                                 \
                cout << "thread_cleanup-> error unlocking!" << endl;           \
            else                                                               \
                cout << "thread_cleanup-> unlocking " #x << endl;              \
        } else                                                                 \
            assert(pthread_mutex_unlock(pm) == 0);                             \
    } while (0)
[CPP]
    check_xh_lock(watch_mutex);
    check_xh_lock(request_mutex);
    check_xh_lock(reply_mutex);
    cout << "----- unwatch -----" << endl;
    xs_unwatch(pthis->xh, pthis->path.c_str(),
               map_path("watch", pthis->path).c_str());
    check_xh_lock(watch_mutex);
    check_xh_lock(request_mutex);
    check_xh_lock(reply_mutex);
    cout << "----- close -----" << endl;
    xs_close(pthis->xh);
    pthis->xh = 0;

The test run looks like this:

[Plain]
    viktor@buxiang-OptiPlex-330:~/proj/xc_map$ sudo ./domu.cpp
    watch-> add watch on path mmap/domu-cpp callback func 0x8049aad         // main thread starts the monitoring thread
    thread_func-> begin watch thread path=mmap/domu-cpp                     // monitoring thread registers the watch
    thread_func-> watch event path=mmap/domu-cpp token=watch/mmap/domu-cpp  // registration fires an event; ignore it
    watch_callback-> entering
    // (blocked here; press Ctrl+C)
    ^C main received signal interrupt
    unwatch-> **stopping work thread. waiting here... work thread already stopped.  // main thread cancels the monitoring thread with pthread_cancel
    thread_cleanup-> path=mmap/domu-cpp thread=b6d6eb70                     // monitoring thread has exited; main thread starts cleanup
    ----- unwatch -----
    map_path-> watch/mmap/domu-cpp
    thread_cleanup-> watch_mutex is already locked!                         // watch_mutex is still held: xs_read_watch never released it
    thread_cleanup-> unlocking watch_mutex                                  // force unlock
    ----- close -----                                                       // xs_unwatch and xs_close now complete without deadlock
