First up is the critical section object. This lock is used heavily by countless applications but has a sordid history. When I first started using critical sections, they were really simple. To create such a lock, all you needed was to allocate a CRITICAL_SECTION structure and call the InitializeCriticalSection function to prepare it for use. This function doesn’t return a value, implying that it can’t fail. Back in those days, however, it was necessary for this function to create various system resources, notably a kernel event object, and it was possible that in extremely low-memory situations this would fail, resulting in a structured exception being raised. Still, this was rather rare, so most developers ignored this possibility.
With the popularity of COM, the use of critical sections skyrocketed because many COM classes used critical sections for synchronization, but in many cases there was little to no actual contention to speak of. When multiprocessor computers became more widespread, the critical section’s internal event saw even less usage because the critical section would briefly spin in user mode while waiting to acquire the lock. A small spin count meant that many short-lived periods of contention could avoid a kernel transition, greatly improving performance.
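The spin count can also be controlled explicitly through the documented InitializeCriticalSectionAndSpinCount and SetCriticalSectionSpinCount functions. Here's a minimal sketch of the idea; the count of 4000 is only an illustrative figure, and an appropriate value really depends on the workload:

#include <windows.h>

CRITICAL_SECTION cs;

void init()
{
    // On a multiprocessor machine, EnterCriticalSection will spin in user
    // mode up to roughly this many times before falling back to a kernel
    // wait. On a single processor the spin count is simply ignored.
    InitializeCriticalSectionAndSpinCount(&cs, 4000);
}

void adjust()
{
    // The spin count can also be tuned after the fact; the previous
    // count is returned.
    SetCriticalSectionSpinCount(&cs, 1000);
}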
Around this time, some kernel developers realized that they could dramatically improve the scalability of Windows by deferring the creation of a critical section's event object until there was enough contention to necessitate its presence. This seemed like a good idea until they realized that although InitializeCriticalSection could no longer fail, the EnterCriticalSection function (used to wait for lock ownership) was no longer reliable. Developers could not as easily ignore this, because it introduced a variety of failure conditions that would have made critical sections all but impossible to use correctly and broken countless applications. Still, the scalability wins were too compelling to pass up.
A kernel developer finally arrived at a solution in the form of a new, and undocumented, kernel event object called a keyed event. You can read a little about it in the book "Windows Internals" by Mark E. Russinovich, David A. Solomon and Alex Ionescu (Microsoft Press, 2012), but the gist is that instead of requiring an event object for every critical section, a single keyed event can serve all of the critical sections in the system. This works because a keyed event object is just that: it relies on a key, a pointer-sized identifier that's naturally local to the address space.
There was surely a temptation to update critical sections to use keyed events exclusively, but because many debuggers and other tools rely on the internals of critical sections, the keyed event was only used as a last resort if the kernel failed to allocate a regular event object.
This may sound like a lot of irrelevant history, except that the performance of keyed events was significantly improved during the Windows Vista development cycle, and this led to the introduction of a completely new lock object that was both simpler and faster. But more on that in a minute.
Because the critical section object can no longer fail due to low-memory conditions, it really is very straightforward to use. Figure 1 provides a simple wrapper.
Figure 1 The Critical Section Lock
class lock
{
    CRITICAL_SECTION h;

    lock(lock const &);
    lock const & operator=(lock const &);

public:

    lock()
    {
        InitializeCriticalSection(&h);
    }

    ~lock()
    {
        DeleteCriticalSection(&h);
    }

    void enter()
    {
        EnterCriticalSection(&h);
    }

    bool try_enter()
    {
        return 0 != TryEnterCriticalSection(&h);
    }

    void exit()
    {
        LeaveCriticalSection(&h);
    }

    CRITICAL_SECTION * handle()
    {
        return &h;
    }
};
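To give a sense of how the wrapper might be used in practice, here's a short sketch. The lock_guard class below is a hypothetical convenience, not part of Figure 1; it simply pairs enter and exit so the critical section is always released, even if the protected code returns early or throws:

// A hypothetical RAII guard (not part of Figure 1) for the Figure 1 lock.
class lock_guard
{
    lock & m_lock;

    lock_guard(lock_guard const &);
    lock_guard const & operator=(lock_guard const &);

public:

    explicit lock_guard(lock & l) : m_lock(l)
    {
        m_lock.enter();
    }

    ~lock_guard()
    {
        m_lock.exit();
    }
};

lock l;
int counter;

void increment()
{
    lock_guard guard(l); // enter() here, exit() when guard is destroyed
    ++counter;
}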