hg: obtain lock when creating share from pooled repo (issue5104)

There are race conditions between clients performing a shared clone to pooled storage:

1) Clients race to create the new shared repo in the pool directory.
2) One client is seeding the repo in the pool directory and another goes to share it before it is fully cloned.

We prevent these race conditions by obtaining a lock in the pool directory that is derived from the name of the repo we will be accessing.

To test this, a simple generic "lockdelay" extension has been added. The extension inserts an optional, configurable delay before or after lock acquisition. In the test, we delay 2 seconds after lock acquisition in the first process and 1 second before lock acquisition in the second process. This means the first process has 1 second to obtain the lock. There is a race condition here. If we encounter it in the wild, we could change the dummy extension to wait on the lock file to appear instead of relying on timing. But that's more complicated. Let's see what happens first.
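For illustration, here is a minimal sketch of what such a generic "lockdelay" test extension could look like. It assumes the delays are passed through environment variables; the names HGPRELOCKDELAY and HGPOSTLOCKDELAY and the exact hook used are assumptions for this sketch, not necessarily what the real test extension does.

    # Sketch of a dummy extension that delays lock acquisition so a test can
    # widen the race window deterministically. Delay values are read from
    # environment variables (hypothetical names used here).

    import os
    import time

    def reposetup(ui, repo):
        class delayedlockrepo(repo.__class__):
            def lock(self, wait=True):
                # optional sleep before trying to take the lock
                delay = float(os.environ.get('HGPRELOCKDELAY', '0.0'))
                if delay:
                    time.sleep(delay)
                res = super(delayedlockrepo, self).lock(wait=wait)
                # optional sleep while the lock is held
                delay = float(os.environ.get('HGPOSTLOCKDELAY', '0.0'))
                if delay:
                    time.sleep(delay)
                return res
        repo.__class__ = delayedlockrepo

In the scenario described above, the first client would run with a 2-second post-lock delay and the second with a 1-second pre-lock delay, so the second client reliably finds the pool repo locked while it is still being seeded.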

posplit
#!/usr/bin/env python
#
# posplit - split messages in paragraphs on .po/.pot files
#
# license: MIT/X11/Expat
#
import re
import sys
import polib

def addentry(po, entry, cache):
    e = cache.get(entry.msgid)
    if e:
        e.occurrences.extend(entry.occurrences)
    else:
        po.append(entry)
        cache[entry.msgid] = entry

def mkentry(orig, delta, msgid, msgstr):
    entry = polib.POEntry()
    entry.merge(orig)
    entry.msgid = msgid or orig.msgid
    entry.msgstr = msgstr or orig.msgstr
    entry.occurrences = [(p, int(l) + delta) for (p, l) in orig.occurrences]
    return entry

if __name__ == "__main__":
    po = polib.pofile(sys.argv[1])

    cache = {}
    entries = po[:]
    po[:] = []
    findd = re.compile(r' *\.\. (\w+)::') # for finding directives
    for entry in entries:
        msgids = entry.msgid.split(u'\n\n')
        if entry.msgstr:
            msgstrs = entry.msgstr.split(u'\n\n')
        else:
            msgstrs = [u''] * len(msgids)

        if len(msgids) != len(msgstrs):
            # places the whole existing translation as a fuzzy
            # translation for each paragraph, to give the
            # translator a chance to recover part of the old
            # translation - erasing extra paragraphs is
            # probably better than retranslating all from start
            if 'fuzzy' not in entry.flags:
                entry.flags.append('fuzzy')
            msgstrs = [entry.msgstr] * len(msgids)

        delta = 0
        for msgid, msgstr in zip(msgids, msgstrs):
            if msgid and msgid != '::':
                newentry = mkentry(entry, delta, msgid, msgstr)
                mdirective = findd.match(msgid)
                if mdirective:
                    if not msgid[mdirective.end():].rstrip():
                        # only directive, nothing to translate here
                        continue
                    directive = mdirective.group(1)
                    if directive in ('container', 'include'):
                        if msgid.rstrip('\n').count('\n') == 0:
                            # only rst syntax, nothing to translate
                            continue
                        else:
                            # lines following directly, unexpected
                            print 'Warning: text follows line with directive' \
                                  ' %s' % directive
                    comment = 'do not translate: .. %s::' % directive
                    if not newentry.comment:
                        newentry.comment = comment
                    elif comment not in newentry.comment:
                        newentry.comment += '\n' + comment
                addentry(po, newentry, cache)
            delta += 2 + msgid.count('\n')
    po.save()
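As a usage sketch (the .pot path and the source file/line numbers below are made up for illustration), posplit is run on a message catalog and rewrites it in place:

    python posplit i18n/hg.pot

An entry whose msgid contains two paragraphs, such as

    #: mercurial/commands.py:1200
    msgid ""
    "first paragraph\n"
    "\n"
    "second paragraph"
    msgstr ""

is replaced by one entry per paragraph, with each occurrence line shifted by the running delta (2 plus the number of newlines in every preceding paragraph):

    #: mercurial/commands.py:1200
    msgid "first paragraph"
    msgstr ""

    #: mercurial/commands.py:1202
    msgid "second paragraph"
    msgstr ""

Identical paragraphs from different entries are merged by addentry, which only appends their occurrences to the entry already cached for that msgid.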