Question:

I have a simple scenario with two threads: the first thread permanently reads some data and enqueues that data into a queue. The second thread first peeks at a single object from that queue and makes some conditional checks; if these are good, the single object is dequeued and passed to some processing.

I have tried to use ConcurrentQueue, which is a thread-safe implementation of a simple queue, but the problem with this one is that all calls are blocking. This means that if the first thread is enqueuing an object, the second thread can't peek or dequeue an object. In my situation I need to enqueue at the end and dequeue from the beginning of the queue at the same time. So my question is whether it is possible to do both of these operations in parallel, in a thread-safe way, without them blocking each other.

These are my first tries, a simplified example of my problem:

```csharp
using System;
using System.Collections.Concurrent;

private static readonly object syncRoot = new object();
private readonly ConcurrentQueue<int> someData = new ConcurrentQueue<int>();

// producer side:
someData.Enqueue(newData);
Console.WriteLine("Enqueued " + newData);

// consumer side:
if (singleData > someValue || singleData == 1 || singleData == 99)
{
    // dequeue singleData and pass it to processing
}
```

Answer:

First off: I strongly encourage you to reconsider whether your technique of having multiple threads and a shared-memory data structure is even the right approach at all. Code that has multiple threads of control sharing access to data structures is hard to get right, and failures can be subtle, catastrophic, and hard to debug.

Second: if you are bent upon multiple threads and a shared-memory data structure, I strongly encourage you to use designed-by-experts data types like concurrent queues, rather than rolling your own.

Third: I would not consider myself to be competent to implement the scheme I'm about to describe, not without the help of someone who is actually an expert on the memory model. It is sufficiently complicated that you should obtain the services of an expert on the C# memory model to verify the correctness of your solution if you go with this.

Now that I've got those warnings out of the way, here is a way to address your concern. The goal is to have a queue that supports simultaneous enqueue and dequeue operations with low lock contention. What you want is two immutable stack variables, called the enqueue stack and the dequeue stack, each with its own lock.

The enqueue operation is:

- Take the enqueue lock.
- Push the item onto the enqueue stack; this produces a new stack in O(1) time.
- Assign the newly produced stack to the enqueue stack variable.
- Release the enqueue lock.

The dequeue operation is:

- Take the dequeue lock.
- If the dequeue stack is empty:
  - Take the enqueue lock, enumerate the enqueue stack, and use it to build a new stack. This reverses the enqueue stack, which maintains the property we want: that the first in is the first out.
  - Assign an empty immutable stack to the enqueue stack variable, and release the enqueue lock.
  - Assign the newly built stack to the dequeue stack variable.
- If the dequeue stack is still empty: throw, or abandon and retry later, or sleep until signaled by the enqueue operation, or whatever the right thing to do here is.
- Pop an item from the dequeue stack, which produces a new stack in O(1).
- Assign the new stack to the dequeue stack variable.
- Release the dequeue lock.

Note that of course if there is only one thread dequeuing, then we don't need the dequeue lock at all; but with this scheme there can be many threads dequeuing.

Why does this keep contention low? Suppose there are 1000 items on the enqueue stack and zero on the dequeue stack. The first time we dequeue, we do an expensive O(n) operation, reversing the enqueue stack once, but now we have 1000 items on the dequeue stack. Once the dequeue stack is big, the dequeuing thread can spend most of its time processing, while the enqueuing thread spends most of its time enqueuing. Contention on the enqueue lock is rare, but expensive when it happens.
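The scheme above can be sketched concretely. What follows is an illustrative Java translation (the answer targets C#, where `ImmutableStack<T>` from System.Collections.Immutable would be the natural building block); the class and method names are mine, and, per the warnings above, a real implementation should be vetted by a memory-model expert. Here `synchronized` blocks provide the per-stack locks:

```java
import java.util.NoSuchElementException;

// An immutable stack: push and pop never modify an existing stack,
// they produce a new one, each in O(1) time and O(1) extra space.
final class ImmutableStack<T> {
    private static final ImmutableStack<Object> EMPTY = new ImmutableStack<>(null, null);
    final T head;
    final ImmutableStack<T> tail;

    private ImmutableStack(T head, ImmutableStack<T> tail) {
        this.head = head;
        this.tail = tail;
    }

    @SuppressWarnings("unchecked")
    static <T> ImmutableStack<T> empty() { return (ImmutableStack<T>) EMPTY; }

    boolean isEmpty() { return tail == null; }

    ImmutableStack<T> push(T item) { return new ImmutableStack<>(item, this); }
    // "Pop" is: read head, then use tail as the new stack.
}

// The queue from the answer: two immutable stacks, each guarded by its
// own lock. Enqueuers and dequeuers contend only in the rare case where
// the dequeue stack runs dry and the enqueue stack must be reversed.
final class TwoStackQueue<T> {
    private final Object enqueueLock = new Object();
    private final Object dequeueLock = new Object();
    private ImmutableStack<T> enqueueStack = ImmutableStack.empty();
    private ImmutableStack<T> dequeueStack = ImmutableStack.empty();

    void enqueue(T item) {
        synchronized (enqueueLock) {
            enqueueStack = enqueueStack.push(item);   // O(1)
        }
    }

    T dequeue() {
        synchronized (dequeueLock) {
            if (dequeueStack.isEmpty()) {
                ImmutableStack<T> grabbed;
                synchronized (enqueueLock) {          // the rare contention point
                    grabbed = enqueueStack;
                    enqueueStack = ImmutableStack.empty();
                }
                // Reversing the grabbed stack puts the oldest item on top,
                // preserving first-in-first-out order.
                ImmutableStack<T> reversed = ImmutableStack.empty();
                while (!grabbed.isEmpty()) {
                    reversed = reversed.push(grabbed.head);
                    grabbed = grabbed.tail;
                }
                dequeueStack = reversed;
            }
            if (dequeueStack.isEmpty())
                throw new NoSuchElementException("queue is empty"); // or retry, or wait
            T item = dequeueStack.head;
            dequeueStack = dequeueStack.tail;         // O(1) pop
            return item;
        }
    }
}

class TwoStackQueueDemo {
    public static void main(String[] args) {
        TwoStackQueue<Integer> q = new TwoStackQueue<>();
        for (int i = 1; i <= 5; i++) q.enqueue(i);
        StringBuilder out = new StringBuilder();
        for (int i = 0; i < 5; i++) out.append(q.dequeue()).append(' ');
        System.out.println(out.toString().trim());    // prints: 1 2 3 4 5
    }
}
```

Two design points worth noting: the locks are only ever nested in the order dequeue lock, then enqueue lock, so the nesting cannot deadlock; and because each stack variable is read and written only while holding its own lock, the `synchronized` blocks also supply the visibility guarantees that the memory-model warning above is about.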
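On the "use designed-by-experts data types" advice: the blocking the questioner observed is not inherent to concurrent queues. Java's ConcurrentLinkedQueue, for example, is lock-free: `offer`, `peek` and `poll` never block, so a producer enqueuing cannot stall a consumer peeking. A small illustrative sketch (the names and numbers are mine, not from the question) of the peek-then-conditionally-dequeue pattern with a single consumer, where nothing can remove the peeked item between `peek()` and `poll()`:

```java
import java.util.concurrent.ConcurrentLinkedQueue;

class PeekThenPollDemo {
    public static void main(String[] args) throws InterruptedException {
        ConcurrentLinkedQueue<Integer> queue = new ConcurrentLinkedQueue<>();

        // Producer: enqueues 0..99; offer() is lock-free and never blocks.
        Thread producer = new Thread(() -> {
            for (int i = 0; i < 100; i++) queue.offer(i);
        });

        // Single consumer: peek, run a conditional check, then dequeue.
        Thread consumer = new Thread(() -> {
            int matched = 0;
            while (matched < 50) {
                Integer head = queue.peek();   // non-blocking; null when empty
                if (head == null) { Thread.yield(); continue; }
                boolean matches = head % 2 == 0 || head == 1 || head == 99;
                queue.poll();                  // removes the item we just peeked
                if (matched < 50 && matches) matched++;  // "process" matching items
            }
            System.out.println("matched " + matched + " items");
        });

        producer.start();
        consumer.start();
        producer.join();
        consumer.join();                       // prints: matched 50 items
    }
}
```

With two or more consumers, peek-then-poll is a check-then-act race (another consumer may poll the peeked item first), which is exactly the kind of subtle failure the answer warns about; a multi-consumer design should act on the value returned by `poll` itself.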