hg: obtain lock when creating share from pooled repo (issue5104)

There are race conditions between clients performing a shared clone to pooled storage:

1) Clients race to create the new shared repo in the pool directory.
2) One client is seeding the repo in the pool directory and another goes to share it before it is fully cloned.

We prevent these race conditions by obtaining a lock in the pool directory that is derived from the name of the repo we will be accessing.

To test this, a simple, generic "lockdelay" extension has been added. The extension inserts an optional, configurable delay before or after lock acquisition. In the test, we delay 2 seconds after lock acquisition in the first process and 1 second before lock acquisition in the second process, which gives the first process 1 second to obtain the lock. There is still a race condition here: if we encounter it in the wild, we could change the dummy extension to wait for the lock file to appear instead of relying on timing, but that is more complicated. Let's see what happens first.
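
The timing-based test described above is easiest to picture with a concrete sketch. Below is a minimal, hypothetical version of such a "lockdelay" extension, not the one shipped in Mercurial's test suite: the environment variable names and the choice to hook repo.lock() via reposetup() are assumptions made for illustration.

# Hypothetical sketch of a "lockdelay"-style test extension. The environment
# variable names are made up for illustration; reposetup() and repo.lock()
# are the only real Mercurial extension points relied on here.
import os
import time

def reposetup(ui, repo):
    class delayedlockrepo(repo.__class__):
        def lock(self, wait=True):
            # Optional sleep before trying to take the lock, so another
            # process can be forced to win the race deterministically.
            time.sleep(float(os.environ.get('SHARE_PRELOCK_DELAY', '0')))
            res = super(delayedlockrepo, self).lock(wait=wait)
            # Optional sleep while holding the lock, widening the window in
            # which a second client must wait on it instead of racing.
            time.sleep(float(os.environ.get('SHARE_POSTLOCK_DELAY', '0')))
            return res
    repo.__class__ = delayedlockrepo

With something like this enabled, the first clone can be started with a two-second post-lock delay and the second with a one-second pre-lock delay, reproducing the scenario described in the message above.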

File last commit: r19008:9d33d6e0 (default); r28289:d493d647 (3.7.2 stable)
wirestore.py
# Copyright 2010-2011 Fog Creek Software
#
# This software may be used and distributed according to the terms of the
# GNU General Public License version 2 or any later version.

'''largefile store working over Mercurial's wire protocol'''

import lfutil
import remotestore

class wirestore(remotestore.remotestore):
    def __init__(self, ui, repo, remote):
        # The server advertises largefiles support as a comma-separated
        # capability string; it must include 'serve' for us to use it
        # as a store.
        cap = remote.capable('largefiles')
        if not cap:
            raise lfutil.storeprotonotcapable([])
        storetypes = cap.split(',')
        if 'serve' not in storetypes:
            raise lfutil.storeprotonotcapable(storetypes)
        self.remote = remote
        super(wirestore, self).__init__(ui, repo, remote.url())

    def _put(self, hash, fd):
        return self.remote.putlfile(hash, fd)

    def _get(self, hash):
        return self.remote.getlfile(hash)

    def _stat(self, hashes):
        '''For each hash, return 0 if it is available, other values if not.
        It is usually 2 if the largefile is missing, but might be 1 if the
        server has a corrupted copy.'''
        # Issue all statlfile calls as a single wire-protocol batch, then
        # read the results back out of the returned futures.
        batch = self.remote.batch()
        futures = {}
        for hash in hashes:
            futures[hash] = batch.statlfile(hash)
        batch.submit()
        retval = {}
        for hash in hashes:
            retval[hash] = futures[hash].value
        return retval
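
For context, a caller of _stat() gets back a plain dict mapping each hash to one of the codes described in its docstring. A small hypothetical helper (not part of wirestore.py) that interprets such a result could look like this:

# Hypothetical helper, not part of wirestore.py: split the {hash: code} dict
# returned by wirestore._stat() into the three documented categories
# (0 = available, 1 = corrupted on the server, 2 = missing).
def summarize_stat(stats):
    available = sorted(h for h, code in stats.items() if code == 0)
    corrupted = sorted(h for h, code in stats.items() if code == 1)
    missing = sorted(h for h, code in stats.items() if code == 2)
    return available, corrupted, missing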