Show Posts

This section allows you to view all posts made by this member. Note that you can only see posts made in areas you currently have access to.


Messages - Anthony Williams

1
Here as well, we could use some more documentation and explanation of how to handle this situation while iterating several million records, such that we can skip the node if it has been deleted.

Iteration automatically skips deleted nodes. If you hang on to your iterator for too long, then another thread might delete the element, but the iterator remains valid: it can still be dereferenced and still be used as a normal iterator. The following code "just works":

Code:
jss::concurrent_map<int,unsigned> map;
for(jss::concurrent_map<int,unsigned>::iterator it=map.begin(),end=map.end();it!=end;++it){
  do_something_with(*it);
}

If the element is deleted after you obtained an iterator to it, you can detect this by calling it->is_deleted_node(), but in most cases this isn't necessary --- just proceeding as normal behaves as if the iteration was done immediately prior to the delete.
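
For illustration (it->is_deleted_node() is the call mentioned above; the surrounding code is assumed):

Code:
if(!it->is_deleted_node()){
  do_something_with(*it);   // element was still present when we looked
}
// otherwise it was erased after we obtained the iterator, so just skip it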

2
General Discussion about just::thread / Re: MVCC
« on: March 05, 2014, 05:27:37 PM »
Is there anywhere I can read more about how the map handles concurrency internally? We are trying to implement multi-version concurrency control. Without looking at the internal implementation of how the data structure implements lock-free writing, it would be impossible for us to try to implement MVCC. Your thoughts on this will be very helpful. Let me know if this is something we can discuss offline.

Underneath it all, the hash-map is a linked-list with bucket markers.

Inserts are done by a simple atomic compare-exchange on the relevant entry.

Deletes are done by adding a flag to the "next" pointer of the node being deleted, doing a CAS on the next pointer of the previous node to remove it from the list, and then putting the node aside to be collected later by the clean-up code.

Replaces are done by creating a new node for the required value, copying the next pointer from the existing node for that value into the new node, and then doing a compare-exchange to set the next pointer in the node being replaced with a pointer to the new node marked with the "node deleted" flag. The old value is now marked as deleted and the new node is in the list. The delete clean up code is now run to finish the job.

So, values are either in the map and visible to all, marked as deleted and thus ignored by all, or not in the map.
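
As a rough sketch of the marked-pointer technique described above (illustrative only, not the actual just::thread source --- the node layout and function names are assumptions):

Code:
#include <atomic>
#include <cstdint>

struct node {
    int key;
    unsigned value;
    std::atomic<std::uintptr_t> next{0}; // pointer bits, low bit = "deleted" flag
};

constexpr std::uintptr_t deleted_flag=1;

bool is_deleted(std::uintptr_t v){ return (v&deleted_flag)!=0; }

// Step 1 of a delete: flag the victim's own next pointer so that readers
// treat the node as deleted from this point on.
bool mark_deleted(node* victim){
    std::uintptr_t old_next=victim->next.load();
    while(!is_deleted(old_next)){
        if(victim->next.compare_exchange_weak(old_next,old_next|deleted_flag))
            return true;  // we won the race and marked the node
    }
    return false;         // another thread already deleted it
}

// Step 2: unlink the marked node with a CAS on the previous node's next
// pointer; the node itself is put aside for the clean-up code rather than
// being freed here.
bool unlink(node* prev,node* victim){
    std::uintptr_t expected=reinterpret_cast<std::uintptr_t>(victim);
    std::uintptr_t successor=victim->next.load()&~deleted_flag;
    return prev->next.compare_exchange_strong(expected,successor);
}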

What do you need for MVCC?

3
Does the jss::concurrent_map handle the issue with iterator invalidation upon calling insert or erase?

Yes. Iterators remain valid until they are destroyed. If you compare an iterator that refers to a deleted node with another iterator, then a jss::concurrent_modification exception is thrown unless you anchored one of the iterators (by calling anchor() on it).
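
For illustration (the exact anchoring behaviour sketched here is an assumption based on the description above):

Code:
jss::concurrent_map<int,unsigned> map;
jss::concurrent_map<int,unsigned>::iterator it=map.begin(),end=map.end();
end.anchor();               // anchor one of the iterators so the comparison
                            // below does not throw even if *it has since been
                            // deleted by another thread
if(it!=end){
  do_something_with(*it);
}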

4
General Discussion about just::thread / Re: MVCC
« on: March 05, 2014, 09:18:30 AM »
Is there any documentation on how to implement MVCC in a jss::concurrent_map? The whole idea is to not lock the map for writing when there might be multiple threads reading from the map.

jss::concurrent_map does not lock the map for writing whilst a read is taking place. The whole thing is lock-free, except it uses new and delete, which might therefore use a lock.

5
I read through the justthread documentation for the jss::concurrent_map and it said that it can allow concurrent access from 200 connections. What imposes the 200 limit? Can that be increased?

The 200 accessor limit is due to the size of the required data structure. Increasing the limit will increase the size of the data structure, and in turn increase the time spent checking it during erase.

It is just a constant in the header, and could in principle be changed, or even made configurable via a macro.
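
For example, making it configurable might look something like this (JSS_MAX_CONCURRENT_ACCESSORS is a made-up name for the sketch, not an existing just::thread option):

Code:
// hypothetical configuration knob --- not an existing just::thread macro
#ifndef JSS_MAX_CONCURRENT_ACCESSORS
#define JSS_MAX_CONCURRENT_ACCESSORS 200
#endif
namespace jss {
    unsigned const max_concurrent_accessors=JSS_MAX_CONCURRENT_ACCESSORS;
}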

Why do you need more than 200 concurrent accesses?

6
Is there anyone on the forums who knows how to serialize a jss::concurrent_map? Are there any built in functions to do this?

By "serialize", I assume you mean write out to a stream. No, there are no functions supplied to do this. You could use the iteration interface:

Code:
jss::concurrent_map<int,unsigned> map;
for(jss::concurrent_map<int,unsigned>::iterator it=map.begin(),end=map.end();it!=end;++it){
    std::cout<<it->first<<":"<<it->second<<std::endl;
}

Of course, if another thread modifies the map in the meantime then you may or may not see their modifications.

7
Excellent. I'll roll that out to everyone.

8
After running your test program many times I managed to encounter a problem. It appeared to be a race condition in the internal logic of std::async. I have rewritten the relevant section, and have not encountered any further problems after running the program many more times.

I've added a v1.8.3.1 preview to your package. Please can you download and test it.

Cheers,

Anthony

9
The Intel C++ Compiler is not currently supported. There are plans to port to it, but we have no definite time frame --- early port attempts demonstrated that it is more complicated than expected.

10
Thank you for the information. I am attempting to recreate the problem.

11
I am sorry to hear that you are experiencing problems.

What OS are you running? Which build of VS2012 are you using? (Open a VS2012 command prompt and type "cl") What compiler options are you using?

I cannot reproduce the problem on Windows 7 x64 with VS 2012, build 17.00.50727.1, on a 6-core machine.

I don't have access to a 24-core system just now, but I do have an 8-core one (also running Windows 7). I don't see that the number of cores should make a difference, but I'll try it out in case it does.

12
Operations with memory_order_relaxed can be freely reordered with other operations. In particular, they can be reordered with memory_order_seq_cst loads unless both operations are on the same atomic variable. A load does not introduce ordering constraints on earlier relaxed operations in the same thread; the full sequential-consistency guarantees only apply between operations that are all memory_order_seq_cst.
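
A minimal illustration of the permitted reordering (an assumed example for illustration):

Code:
#include <atomic>

std::atomic<int> x(0),y(0);

int thread_a(){
    x.store(1,std::memory_order_relaxed);      // relaxed store to x
    return y.load(std::memory_order_seq_cst);  // another thread may observe this
                                               // load as happening before the
                                               // store to x becomes visible:
                                               // they are different variables
                                               // and not both seq_cst
}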

Therefore, I don't believe there is a problem with Just::Thread.

13
OK, new packages have been uploaded. Please let me know if you have any further problems.

14
Gah! Packaging error!

I'll update the packages and post here when the new versions are available.

Sorry for the inconvenience.

15
I believe that the VS2012 implementation is being overly conservative. If the appropriate synchronization is used on the store, then an atomic<int>::load need only be a MOV on x86.
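
For example (this is the usual x86 mapping, not VS2012's actual output):

Code:
#include <atomic>

std::atomic<int> flag(0);

void producer(){
    flag.store(1,std::memory_order_seq_cst);     // typically XCHG (or MOV+MFENCE)
                                                 // on x86: the store carries the
                                                 // full fence
}

int consumer(){
    return flag.load(std::memory_order_seq_cst); // with the fence paid for on the
                                                 // store side, this need only be
                                                 // a plain MOV
}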

Microsoft is trying to phase out the use of volatile for synchronization, and is encouraging people to use std::atomic instead, hence the compiler switch. It is a shame if their library generates suboptimal code in this case.

Just::Thread does not rely on the special semantics of volatile; where our source code uses volatile it is just to force the compiler to issue the load --- the _ReadWriteBarrier() intrinsic is used to restrict reordering by the compiler.
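
A rough sketch of that pattern (illustrative only and MSVC-specific; this is not the actual just::thread source):

Code:
#include <intrin.h>

int load_with_compiler_barrier(int const volatile* p){
    _ReadWriteBarrier();   // compiler-only barrier: stops the compiler moving
    int value=*p;          // operations across this point; the volatile read
    _ReadWriteBarrier();   // forces the load to actually be issued
    return value;
}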
