
2012/04/13

[Repost] Fixing TCriticalSection


TCriticalSection (along with TMonitor*) suffers from a severe design flaw in which entering/leaving different TCriticalSection instances can end up serializing your threads, and the whole application can even end up performing worse than if your threads had been serialized outright.
This is because it’s a small, dynamically allocated object, so several TCriticalSection instances can end up in the same CPU cache line, and when that happens, you’ll have cache conflicts aplenty between the cores running the threads.
How severe can that be? Well, it depends on how many cores you have, but the more cores you have, the more severe it can get. On a quad core, a bad case of contention can easily result in a 200% slowdown on top of the serialization. And it won’t always be reproducible, since it’s related to dynamic memory allocation.
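To make the failure mode concrete, here is a minimal sketch (my own illustration, not code from the original post; CS1, CS2, HammerLock and Demo are hypothetical names): each thread only ever locks its own critical section, yet if the allocator happens to place the two small instances in the same cache line, the cores still fight over that line.

uses
  Classes, SyncObjs; // System.Classes / System.SyncObjs in unit-scoped RTLs

var
  CS1, CS2: TCriticalSection; // two small heap objects: they may land in one cache line

procedure HammerLock(CS: TCriticalSection);
var
  i: Integer;
begin
  // Each thread only ever touches the critical section it was given...
  for i := 1 to 10000000 do
  begin
    CS.Acquire;
    CS.Release;
  end;
end;

procedure Demo;
var
  T1, T2: TThread;
begin
  CS1 := TCriticalSection.Create;
  CS2 := TCriticalSection.Create;
  // ...yet the two threads can still slow each other down through false sharing.
  T1 := TThread.CreateAnonymousThread(procedure begin HammerLock(CS1); end);
  T2 := TThread.CreateAnonymousThread(procedure begin HammerLock(CS2); end);
  T1.FreeOnTerminate := False;
  T2.FreeOnTerminate := False;
  T1.Start;
  T2.Start;
  T1.WaitFor;
  T2.WaitFor;
  T1.Free;
  T2.Free;
  CS1.Free;
  CS2.Free;
end;

Whether the slowdown shows up on a given run depends on where the memory manager happens to place CS1 and CS2, which is exactly why the problem is hard to reproduce.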
Thankfully, there is a simple fix for that: use TFixedCriticalSection:
uses
  SyncObjs; // declares TCriticalSection (System.SyncObjs in unit-scoped RTLs)

type
  TFixedCriticalSection = class(TCriticalSection)
  private
    // Padding: pushes the instance size past a CPU cache line so that
    // two distinct critical sections never share the same line.
    FDummy: array [0..95] of Byte;
  end;
That’s it, folks. This makes sure the instance size is larger than 96 bytes, which means it will be larger than the cache line of all current CPUs, so there is no more serialization across distinct critical section instances.
As a bonus, it also ends up using one of the larger, better-aligned FastMM buckets, which seems to improve critical section code performance by about 7%. The downside is that you use more RAM… but how many critical sections do you really have?
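Usage is otherwise a drop-in replacement for TCriticalSection; a minimal sketch (UseLock is a hypothetical name used only for illustration):

procedure UseLock;
var
  Lock: TFixedCriticalSection;
begin
  Lock := TFixedCriticalSection.Create;
  try
    Lock.Acquire;
    try
      // ... work that must not run concurrently ...
    finally
      Lock.Release;
    end;
  finally
    Lock.Free;
  end;
end;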
* (11-12-01): as noted by Allen Bauer in the comments, the issue is fixed for TMonitor in XE2.
