Chrome Source--Threading model

Source: Internet
Author: User
Tags: message queue, traits
Many people like Chrome: they like how clean it is, and they like how "fast" it is. "Clean" is obvious at a glance, so the emphasis here is on "fast." What does "fast" mean? Many people's first reaction is probably the JavaScript benchmark articles that regularly appear on sites like Cnbeta: "Chrome is so fast in those runs!" (In fact, every time I open one of those articles I feel sorry for our poor classmate IE, though IE9 does look promising.) But does JavaScript execution speed really matter that much? Can we really feel a big gap? Honestly, I can't. So why does Chrome feel fast? Chrome is fast in UI responsiveness. Put plainly: double-click the desktop icon and it starts up quickly (Firefox users will know the feeling); click a hyperlink and a new page pops up in an instant (IE users have surely watched the browser hang instead). Chrome, in other words, is more than a benchmark machine. How does it achieve such good UI responsiveness? That is thanks to its threading model.

To avoid reinventing the wheel, I will not describe the overall framework of the Chrome threading model, but will focus on its implementation instead. Readers unfamiliar with the framework can read Duguguiyu's article on it. If you are able, download a copy of the Chromium source code, or browse it online. Enough chatter; let's go!

No.1 Smart Pointers

The entire Chromium source tree contains on the order of a hundred thousand files. Facing such a vast sea of code, where should we start? As the proverb says, "to do a good job, one must first sharpen one's tools." So let's start with a couple of smart pointers from Chromium.

First, look at scoped_ptr (see base/scoped_ptr.h). This is a smart pointer that automatically deletes the held object in its destructor. Its behavior is essentially the same as boost::scoped_ptr (TR1 has no scoped_ptr), so there is not much more to say.

A more interesting smart pointer in Chromium is scoped_refptr (see base/ref_counted.h). Unlike scoped_ptr, scoped_refptr can only hold types that explicitly implement a reference-counting interface:

template <class T>
class scoped_refptr {
 public:
  scoped_refptr(T* p) : ptr_(p) {
    if (ptr_)
      ptr_->AddRef();
  }
  ~scoped_refptr() {
    if (ptr_)
      ptr_->Release();
  }
 protected:
  T* ptr_;
};

As shown above, class T must explicitly implement the AddRef and Release interface, and scoped_refptr invokes them in its constructor and destructor to increase or decrease the reference count. When an object's reference count reaches zero, class T's Release method is responsible for deleting the object. Many people cannot help but ask: isn't writing such a class T a hassle? Don't worry, Google provides two default implementations, RefCounted and RefCountedThreadSafe (both in base/ref_counted.h); we can inherit from them to make our own classes reference-counted.

RefCounted is simpler, but note that it does not guarantee thread safety and can therefore only be used for objects held by a single thread. What if you need an object that can be referenced from multiple threads? That is where RefCountedThreadSafe comes in handy (personally I dislike the name; perhaps ThreadSafeRefCounted would be more fitting):

template <class T, typename Traits = DefaultRefCountedThreadSafeTraits<T> >
class RefCountedThreadSafe : public subtle::RefCountedThreadSafeBase {
 public:
  void AddRef() {
    subtle::RefCountedThreadSafeBase::AddRef();
  }
  void Release() {
    if (subtle::RefCountedThreadSafeBase::Release()) {
      Traits::Destruct(static_cast<T*>(this));
    }
  }
};

void RefCountedThreadSafeBase::AddRef() {
  AtomicRefCountInc(&ref_count_);
}

We can see that RefCountedThreadSafe uses atomic operations to ensure thread safety when incrementing and decrementing the reference count.

Of course, if you think that is all RefCountedThreadSafe does, you underestimate it. Look carefully: besides the class T, the template takes a Traits parameter. What is it for? Note the Release method, which uses Traits to delete the object when the reference count hits zero. This gives us a chance to customize how objects are destroyed, which is useful in many situations. For example, many UI objects must not be destroyed outside the main thread. Once we hand such an object over to scoped_refptr to manage, there is no guarantee on which thread its reference count will reach zero. All we need to do is supply custom Traits that post a task to the main thread, making the main thread responsible for deleting the object; this avoids a lot of trouble.

Writing this, I cannot help comparing scoped_refptr with std::tr1::shared_ptr a little. Both use reference counting to delete objects automatically, but scoped_refptr requires the referenced object to explicitly implement the AddRef and Release interface, typically with the object's destructor made private and the reference-counting base declared a friend; shared_ptr only requires the object to provide a public destructor. At first glance, shared_ptr seems much more graceful. But it is exactly this superficial elegance that makes it easy, intentionally or not, to overlook that the object may be used in a multithreaded environment, which brings endless trouble. Compared with that elegance, I personally prefer scoped_refptr: its semantics are clear, and the design is hard to misuse. Another point is the Traits described above, a very useful feature that shared_ptr cannot quite match. (To be fair, shared_ptr does accept a custom deleter at construction time, but that policy is chosen per instance by each caller, whereas Traits bakes it into the type itself.) Had I not read this code, I might never have realized shared_ptr has so many unsatisfying aspects, so I also strongly recommend reading more high-quality code like this.

There is also WeakPtr (base/weak_ptr.h), which I will not describe further; readers who need it can read its code themselves.

No.2 Tasks

With smart pointers as a foundation, let's take a look at how tasks are implemented in Chrome.

The Task class (base/task.h) is a simple wrapper:

class Task : public tracked_objects::Tracked {
 public:
  virtual ~Task();

  // Tasks are automatically deleted after Run is called.
  virtual void Run() = 0;
};

Worth mentioning is that it inherits from the Tracked class, which is used for debugging and performance tracking. Tracked records when a task was created and destroyed, and from where it was posted to a thread's message loop (via __LINE__, __FILE__, and other compiler-defined macros). For details, see base/tracked.h and the related code. The others, such as CancelableTask, DeleteTask, and their subclasses, are minor players with little to say about them.

Looking at the implementation above, many people may ask: if I want to dispatch a task to another thread, do I really have to write a whole class implementing the Task interface every time? How annoying. If you share this worry, congratulations: you think like Chromium's designers. task.h also provides a family of NewRunnableMethod functions that let us easily construct a task from an object and one of its methods:

template <class T, class Method, class A>
inline CancelableTask* NewRunnableMethod(T* object, Method method, const A& a) {
  return new RunnableMethod<T, Method, Tuple1<A> >(object, method, MakeTuple(a));
}

template <class T, class Method, class Params>
class RunnableMethod : public CancelableTask {
 public:
  RunnableMethod(T* obj, Method meth, const Params& params)
      : obj_(obj), meth_(meth), params_(params) {
    traits_.RetainCallee(obj_);
    COMPILE_ASSERT((MethodUsesScopedRefptrCorrectly<Method, Params>::value),
                   badrunnablemethodparams);
  }
 private:
  T* obj_;
  Method meth_;
  Params params_;
  RunnableMethodTraits<T> traits_;
};

Note this line: COMPILE_ASSERT((MethodUsesScopedRefptrCorrectly<Method, Params>::value), badrunnablemethodparams). It means that any raw-pointer parameters of the method being bound must instead be scoped_refptr, or the compiler will report an error. (For how to implement a compile-time assert, check the source code or see "Modern C++ Design.") Careful readers may also notice there is a traits here too. As the name suggests, traits are really just small tricks: RunnableMethodTraits does nothing more than check, in multithreaded use, that class T implements the RefCountedThreadSafe interface (and only in debug builds). In addition, the method parameters are copied (pointers are copied by value, not the pointed-to objects); if one of your custom types needs special copy semantics, you can write a template specialization of TupleTraits for it.

With these, is our task machinery complete? The answer is obviously no. We are likely to write something like NewRunnableMethod(new T, &T::SomeMethod) to build a task and dispatch it to a given thread. The problem is: what if the object has already been destroyed by the time the task runs? The task would then hold a dangling pointer, which causes serious problems such as access violations. So Chromium offers another facility: scoped factories. In the spirit of not reinventing the wheel, readers who want the details should read Optman's article on them.

No.3 Cross-thread Request

All threads in Chrome have a clear division of labor: for example, the UI thread is responsible for responding to user input, and the file thread is responsible for accessing the file system.

A common scenario: the user wants to see the contents of a file. In Chrome, completing this requires the UI thread to first add a file-reading task to the file thread's message queue; the task then executes on the file thread (if the file thread has other tasks pending, it may have to wait its turn). When that task finishes, it adds another task to the UI thread's message queue to display the file's contents, and only when that second task has executed does the file actually appear. It looks complicated, and throughout the process we must also ensure that none of the objects involved is destroyed prematurely. Of course, the designers could not expose such complexity directly to users of the code; the CancelableRequest class encapsulates it. Thanks again to Optman, whose article explains the CancelableRequest implementation in detail.

However, if we only learn how it is done without asking why, we fall into what the sages called "learning without thinking," and the consequence is confusion. Do not assume that, because Chrome does so much extra thread scheduling and task queuing instead of simply reading the file on the UI thread, it is tossing things around blindly. Savor it, and it makes perfect sense: it is precisely this design that gives Chrome its good UI responsiveness. We know that the main culprit behind an unresponsive UI is time-consuming work performed on the UI thread, and the most immediate consequence of reading a file there is that the user feels the interface stutter. You may object: if I read only a very small file, surely the user feels nothing at all. But when your antivirus is grinding away at the hard disk, does your program still respond well under that extreme disk I/O? Then look at Chrome: no matter how hard the disk is thrashing, the UI thread simply posts a task to the file thread and immediately returns to handling user input, while the file thread reads the file and posts the update back as soon as it can. It may take longer for the file's contents to actually appear, but in the meantime the user can still switch tabs, open new pages, and so on without a hitch. None of this is possible if the UI thread accesses the file directly.

No.4 Sample

Although I have not covered MessageLoop, Thread, and other classes, that need not stop us from looking at a real example. Many people may still wonder how Chrome's threading model avoids the use of locks. The answer lies in the design requirement that each object lives on only one thread. Maybe that is still too abstract. As the old saying goes, "draw a ladle by copying the gourd." So let's take out a gourd and see what it looks like, using Chrome's history management as the example. (For the code, see history.h and history.cc in the chrome/browser/history directory.)

history.h contains an important class, HistoryService, which lives on the UI thread and serves the UI. It has two important members, thread_ and history_backend_. thread_ is a background thread it creates for history access (the history thread), while history_backend_ works diligently on that thread, doing the real data access on HistoryService's behalf. This is what "an object lives on only one thread" means: HistoryService lives on the UI thread, while history_backend_ lives on the history thread. Notice how history_backend_ is initialized and deleted:

class HistoryService : public CancelableRequestProvider,
                       public NotificationObserver,
                       public base::RefCountedThreadSafe<HistoryService> {
  ......
 private:
  scoped_refptr<history::HistoryBackend> history_backend_;
  base::Thread* thread_;
};

HistoryService::HistoryService()
    : thread_(new base::Thread(kHistoryThreadName)) ... {
  ......
}

bool HistoryService::Init(...) {
  if (!thread_->Start()) {
    Cleanup();
    return false;
  }
  ......
  // Create the history backend.
  LoadBackendIfNecessary();
  return true;
}

void HistoryService::LoadBackendIfNecessary() {
  if (!thread_ || history_backend_)
    return;  // Failed to init, or already started loading.

  scoped_refptr<HistoryBackend> backend(new HistoryBackend(...));
  history_backend_.swap(backend);

  ScheduleAndForget(PRIORITY_UI, &HistoryBackend::Init, no_db_);
}

HistoryService::~HistoryService() {
  Cleanup();
}

void HistoryService::Cleanup() {
  if (!thread_) {
    // We've already cleaned up.
    return;
  }
  // Unload the backend.
  UnloadBackend();
  base::Thread* thread = thread_;
  thread_ = NULL;
  delete thread;
}

void HistoryService::UnloadBackend() {
  if (!history_backend_)
    return;  // Already unloaded.
  Task* closing_task =
      NewRunnableMethod(history_backend_.get(), &HistoryBackend::Closing);
  history_backend_ = NULL;
  ScheduleTask(PRIORITY_NORMAL, closing_task);
}

What needs explaining is that the HistoryBackend constructor merely sets its pointer members to null and performs no real initialization (that happens in the HistoryBackend::Init method). This lets the UI thread finish constructing HistoryService quickly without waiting, and is also necessary because the initialization must run on the thread represented by thread_. Whenever HistoryService needs to call a HistoryBackend method, it does so asynchronously by posting a task to thread_'s message loop. For how HistoryService dispatches tasks to thread_ and initiates cross-thread requests, please refer to the code; it basically uses the techniques described above and is fairly easy to follow.


That was a bit of a ramble. I have no special insight to offer, only the feeling that this code is genuinely high quality and very refined; with a little adaptation it could be used in our own projects. Of course, please remember to play by the rules of the open source game. Readers who want to know more can read the code themselves. My own level is limited, so if anything above is wrong, please do not hesitate to point it out.
