bookmarks: merge rollback support into localrepo
Matt Mackall
r13356:d96db730 default
hgext/bookmarks.py
@@ -1,423 +1,414 @@
# Mercurial extension to provide the 'hg bookmark' command
#
# Copyright 2008 David Soria Parra <dsp@php.net>
#
# This software may be used and distributed according to the terms of the
# GNU General Public License version 2 or any later version.

'''track a line of development with movable markers

Bookmarks are local movable markers to changesets. Every bookmark
points to a changeset identified by its hash. If you commit a
changeset that is based on a changeset that has a bookmark on it, the
bookmark shifts to the new changeset.

It is possible to use bookmark names in every revision lookup (e.g.
:hg:`merge`, :hg:`update`).

By default, when several bookmarks point to the same changeset, they
will all move forward together. It is possible to obtain a more
git-like experience by adding the following configuration option to
your configuration file::

  [bookmarks]
  track.current = True

This will cause Mercurial to track the bookmark that you are currently
using, and only update it. This is similar to git's approach to
branching.
'''
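The movement rule described in the docstring (a bookmark on the parent of a new commit shifts onto that commit) can be sketched with a toy model. This is plain Python 3 with hypothetical names, not Mercurial's actual data structures:

```python
# Toy model of bookmark movement: bookmarks map names to commit ids;
# committing on top of a bookmarked commit moves every bookmark that
# pointed at the parent onto the new commit.

def commit(bookmarks, parent, new_node):
    """Move all bookmarks pointing at `parent` onto `new_node`."""
    for name, node in bookmarks.items():
        if node == parent:
            bookmarks[name] = new_node
    return bookmarks

marks = {'feature': 'abc1', 'stable': 'def2'}
commit(marks, 'abc1', 'abc2')
assert marks == {'feature': 'abc2', 'stable': 'def2'}
```

Bookmarks not pointing at the commit's parent (here `stable`) stay put, which is why several bookmarks on the same changeset all move together by default.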

from mercurial.i18n import _
from mercurial.node import nullid, nullrev, bin, hex, short
from mercurial import util, commands, repair, extensions, pushkey, hg, url
from mercurial import revset, encoding
from mercurial import bookmarks
import os

def bookmark(ui, repo, mark=None, rev=None, force=False, delete=False, rename=None):
    '''track a line of development with movable markers

    Bookmarks are pointers to certain commits that move when
    committing. Bookmarks are local. They can be renamed, copied and
    deleted. It is possible to use bookmark names in :hg:`merge` and
    :hg:`update` to merge and update respectively to a given bookmark.

    You can use :hg:`bookmark NAME` to set a bookmark on the working
    directory's parent revision with the given name. If you specify
    a revision using -r REV (where REV may be an existing bookmark),
    the bookmark is assigned to that revision.

    Bookmarks can be pushed and pulled between repositories (see :hg:`help
    push` and :hg:`help pull`). This requires the bookmark extension to be
    enabled for both the local and remote repositories.
    '''
    hexfn = ui.debugflag and hex or short
    marks = repo._bookmarks
    cur = repo.changectx('.').node()

    if rename:
        if rename not in marks:
            raise util.Abort(_("a bookmark of this name does not exist"))
        if mark in marks and not force:
            raise util.Abort(_("a bookmark of the same name already exists"))
        if mark is None:
            raise util.Abort(_("new bookmark name required"))
        marks[mark] = marks[rename]
        del marks[rename]
        if repo._bookmarkcurrent == rename:
            bookmarks.setcurrent(repo, mark)
        bookmarks.write(repo)
        return

    if delete:
        if mark is None:
            raise util.Abort(_("bookmark name required"))
        if mark not in marks:
            raise util.Abort(_("a bookmark of this name does not exist"))
        if mark == repo._bookmarkcurrent:
            bookmarks.setcurrent(repo, None)
        del marks[mark]
        bookmarks.write(repo)
        return

    if mark is not None:
        if "\n" in mark:
            raise util.Abort(_("bookmark name cannot contain newlines"))
        mark = mark.strip()
        if not mark:
            raise util.Abort(_("bookmark names cannot consist entirely of "
                               "whitespace"))
        if mark in marks and not force:
            raise util.Abort(_("a bookmark of the same name already exists"))
        if ((mark in repo.branchtags() or mark == repo.dirstate.branch())
            and not force):
            raise util.Abort(
                _("a bookmark cannot have the name of an existing branch"))
        if rev:
            marks[mark] = repo.lookup(rev)
        else:
            marks[mark] = repo.changectx('.').node()
        bookmarks.setcurrent(repo, mark)
        bookmarks.write(repo)
        return

    if mark is None:
        if rev:
            raise util.Abort(_("bookmark name required"))
        if len(marks) == 0:
            ui.status(_("no bookmarks set\n"))
        else:
            for bmark, n in marks.iteritems():
                if ui.configbool('bookmarks', 'track.current'):
                    current = repo._bookmarkcurrent
                    if bmark == current and n == cur:
                        prefix, label = '*', 'bookmarks.current'
                    else:
                        prefix, label = ' ', ''
                else:
                    if n == cur:
                        prefix, label = '*', 'bookmarks.current'
                    else:
                        prefix, label = ' ', ''

                if ui.quiet:
                    ui.write("%s\n" % bmark, label=label)
                else:
                    ui.write(" %s %-25s %d:%s\n" % (
                        prefix, bmark, repo.changelog.rev(n), hexfn(n)),
                        label=label)
        return

def _revstostrip(changelog, node):
    srev = changelog.rev(node)
    tostrip = [srev]
    saveheads = []
    for r in xrange(srev, len(changelog)):
        parents = changelog.parentrevs(r)
        if parents[0] in tostrip or parents[1] in tostrip:
            tostrip.append(r)
            if parents[1] != nullrev:
                for p in parents:
                    if p not in tostrip and p > srev:
                        saveheads.append(p)
    return [r for r in tostrip if r not in saveheads]

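`_revstostrip` walks revisions from the stripped revision upward, collecting every descendant, while remembering merge parents above `srev` that must survive as heads. The same walk over a plain parent table, as a standalone Python 3 sketch (hypothetical data, not the real changelog API):

```python
NULLREV = -1  # stand-in for mercurial's nullrev sentinel

def revs_to_strip(parentrevs, srev):
    """parentrevs[r] -> (p1, p2). Collect srev and its descendants,
    sparing merge parents above srev that are not themselves stripped."""
    tostrip = [srev]
    saveheads = []
    for r in range(srev, len(parentrevs)):
        parents = parentrevs[r]
        if parents[0] in tostrip or parents[1] in tostrip:
            tostrip.append(r)
            if parents[1] != NULLREV:  # r is a merge: one parent may survive
                for p in parents:
                    if p not in tostrip and p > srev:
                        saveheads.append(p)
    return [r for r in tostrip if r not in saveheads]

# Linear history 0-1-2-3: stripping 1 also strips 2 and 3.
table = [(NULLREV, NULLREV), (0, NULLREV), (1, NULLREV), (2, NULLREV)]
assert revs_to_strip(table, 1) == [1, 2, 3]
```

The `saveheads` step matters when a stripped revision was merged with a branch that itself stays: that branch head must not be reported as strippable.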
def strip(oldstrip, ui, repo, node, backup="all"):
    """Strip bookmarks if revisions are stripped using
    the mercurial.strip method. This usually happens during
    qpush and qpop"""
    revisions = _revstostrip(repo.changelog, node)
    marks = repo._bookmarks
    update = []
    for mark, n in marks.iteritems():
        if repo.changelog.rev(n) in revisions:
            update.append(mark)
    oldstrip(ui, repo, node, backup)
    if len(update) > 0:
        for m in update:
            marks[m] = repo.changectx('.').node()
        bookmarks.write(repo)

def reposetup(ui, repo):
    if not repo.local():
        return

    class bookmark_repo(repo.__class__):
-        def rollback(self, dryrun=False):
-            if os.path.exists(self.join('undo.bookmarks')):
-                if not dryrun:
-                    util.rename(self.join('undo.bookmarks'), self.join('bookmarks'))
-            elif not os.path.exists(self.sjoin("undo")):
-                # avoid "no rollback information available" message
-                return 0
-            return super(bookmark_repo, self).rollback(dryrun)
-
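The `rollback` override deleted above (and moved into localrepo by this commit) restored `.hg/undo.bookmarks` over `.hg/bookmarks` before delegating to the base class. The file-restore step in isolation, sketched in plain Python 3 on a hypothetical directory layout:

```python
import os
import tempfile

def restore_bookmarks(repodir, dryrun=False):
    """If an undo.bookmarks backup exists, move it back over bookmarks."""
    undo = os.path.join(repodir, 'undo.bookmarks')
    if os.path.exists(undo) and not dryrun:
        # atomic rename, like mercurial's util.rename
        os.replace(undo, os.path.join(repodir, 'bookmarks'))

d = tempfile.mkdtemp()
with open(os.path.join(d, 'undo.bookmarks'), 'w') as f:
    f.write('abc1 feature\n')  # pre-transaction bookmark state
restore_bookmarks(d)
with open(os.path.join(d, 'bookmarks')) as f:
    assert f.read() == 'abc1 feature\n'
assert not os.path.exists(os.path.join(d, 'undo.bookmarks'))
```

The extra `sjoin("undo")` check in the removed method existed so that a repository with only a bookmark backup (and no changelog undo journal) would not emit "no rollback information available".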
        def lookup(self, key):
            if key in self._bookmarks:
                key = self._bookmarks[key]
            return super(bookmark_repo, self).lookup(key)

        def commitctx(self, ctx, error=False):
            """Add a revision to the repository and
            move the bookmark"""
            wlock = self.wlock() # do both commit and bookmark with lock held
            try:
                node = super(bookmark_repo, self).commitctx(ctx, error)
                if node is None:
                    return None
                parents = self.changelog.parents(node)
                if parents[1] == nullid:
                    parents = (parents[0],)

                bookmarks.update(self, parents, node)
                return node
            finally:
                wlock.release()

        def pull(self, remote, heads=None, force=False):
            result = super(bookmark_repo, self).pull(remote, heads, force)

            self.ui.debug("checking for updated bookmarks\n")
            rb = remote.listkeys('bookmarks')
            changed = False
            for k in rb.keys():
                if k in self._bookmarks:
                    nr, nl = rb[k], self._bookmarks[k]
                    if nr in self:
                        cr = self[nr]
                        cl = self[nl]
                        if cl.rev() >= cr.rev():
                            continue
                        if cr in cl.descendants():
                            self._bookmarks[k] = cr.node()
                            changed = True
                            self.ui.status(_("updating bookmark %s\n") % k)
                        else:
                            self.ui.warn(_("not updating divergent"
                                           " bookmark %s\n") % k)
            if changed:
                bookmarks.write(repo)

            return result

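The `pull` wrapper only advances a local bookmark when the remote position is a descendant of the local one; anything else is reported as divergent and left alone. That fast-forward rule on a toy ancestry map (hypothetical helper, not the real changectx API):

```python
def update_bookmark(local, remote, ancestors):
    """Return the bookmark's new position, or None if divergent.
    ancestors[n] is the set of ancestors of node n, including n itself."""
    if local in ancestors[remote]:   # remote descends from local: fast-forward
        return remote
    if remote in ancestors[local]:   # local is already ahead: keep it
        return local
    return None                      # divergent heads: do not move

# graph: a -> b and a -> c (b and c are siblings)
anc = {'a': {'a'}, 'b': {'a', 'b'}, 'c': {'a', 'c'}}
assert update_bookmark('a', 'b', anc) == 'b'   # fast-forward
assert update_bookmark('b', 'a', anc) == 'b'   # already ahead
assert update_bookmark('b', 'c', anc) is None  # divergent
```

Refusing to move a divergent bookmark is what keeps a pull from silently rewinding local work; the user must reconcile the two heads explicitly.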
        def push(self, remote, force=False, revs=None, newbranch=False):
            result = super(bookmark_repo, self).push(remote, force, revs,
                                                     newbranch)

            self.ui.debug("checking for updated bookmarks\n")
            rb = remote.listkeys('bookmarks')
            for k in rb.keys():
                if k in self._bookmarks:
                    nr, nl = rb[k], hex(self._bookmarks[k])
                    if nr in self:
                        cr = self[nr]
                        cl = self[nl]
                        if cl in cr.descendants():
                            r = remote.pushkey('bookmarks', k, nr, nl)
                            if r:
                                self.ui.status(_("updating bookmark %s\n") % k)
                            else:
                                self.ui.warn(_('updating bookmark %s'
                                               ' failed!\n') % k)

            return result

        def addchangegroup(self, *args, **kwargs):
            result = super(bookmark_repo, self).addchangegroup(*args, **kwargs)
            if result > 1:
                # We have more heads than before
                return result
            node = self.changelog.tip()
            parents = self.dirstate.parents()
            bookmarks.update(self, parents, node)
            return result

        def _findtags(self):
            """Merge bookmarks with normal tags"""
            (tags, tagtypes) = super(bookmark_repo, self)._findtags()
            tags.update(self._bookmarks)
            return (tags, tagtypes)

        if hasattr(repo, 'invalidate'):
            def invalidate(self):
                super(bookmark_repo, self).invalidate()
                for attr in ('_bookmarks', '_bookmarkcurrent'):
                    if attr in self.__dict__:
                        delattr(self, attr)

    repo.__class__ = bookmark_repo

def pull(oldpull, ui, repo, source="default", **opts):
    # translate bookmark args to rev args for actual pull
    if opts.get('bookmark'):
        # this is an unpleasant hack as pull will do this internally
        source, branches = hg.parseurl(ui.expandpath(source),
                                       opts.get('branch'))
        other = hg.repository(hg.remoteui(repo, opts), source)
        rb = other.listkeys('bookmarks')

        for b in opts['bookmark']:
            if b not in rb:
                raise util.Abort(_('remote bookmark %s not found!') % b)
            opts.setdefault('rev', []).append(b)

    result = oldpull(ui, repo, source, **opts)

    # update specified bookmarks
    if opts.get('bookmark'):
        for b in opts['bookmark']:
            # explicit pull overrides local bookmark if any
            ui.status(_("importing bookmark %s\n") % b)
            repo._bookmarks[b] = repo[rb[b]].node()
        bookmarks.write(repo)

    return result

def push(oldpush, ui, repo, dest=None, **opts):
    dopush = True
    if opts.get('bookmark'):
        dopush = False
        for b in opts['bookmark']:
            if b in repo._bookmarks:
                dopush = True
                opts.setdefault('rev', []).append(b)

    result = 0
    if dopush:
        result = oldpush(ui, repo, dest, **opts)

    if opts.get('bookmark'):
        # this is an unpleasant hack as push will do this internally
        dest = ui.expandpath(dest or 'default-push', dest or 'default')
        dest, branches = hg.parseurl(dest, opts.get('branch'))
        other = hg.repository(hg.remoteui(repo, opts), dest)
        rb = other.listkeys('bookmarks')
        for b in opts['bookmark']:
            # explicit push overrides remote bookmark if any
            if b in repo._bookmarks:
                ui.status(_("exporting bookmark %s\n") % b)
                new = repo[b].hex()
            elif b in rb:
                ui.status(_("deleting remote bookmark %s\n") % b)
                new = '' # delete
            else:
                ui.warn(_('bookmark %s does not exist on the local '
                          'or remote repository!\n') % b)
                return 2
            old = rb.get(b, '')
            r = other.pushkey('bookmarks', b, old, new)
            if not r:
                ui.warn(_('updating bookmark %s failed!\n') % b)
                if not result:
                    result = 2

    return result

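Both push paths go through `pushkey('bookmarks', key, old, new)`, which behaves as a compare-and-swap: the remote applies the update only if its current value still equals `old`, and an empty `new` deletes the bookmark. A minimal sketch of that contract against a toy in-memory store (not the actual wire protocol):

```python
def pushkey(store, key, old, new):
    """Set store[key] to new only if its current value equals old.
    Empty old means the key must not exist yet; empty new deletes it."""
    if store.get(key, '') != old:
        return False  # stale view of the remote: reject the update
    if new == '':
        store.pop(key, None)
    else:
        store[key] = new
    return True

bm = {'feature': 'abc1'}
assert pushkey(bm, 'feature', 'abc1', 'abc2')      # matching old: accepted
assert not pushkey(bm, 'feature', 'abc1', 'abc3')  # stale old: rejected
assert pushkey(bm, 'feature', 'abc2', '') and 'feature' not in bm
```

The compare-and-swap is why a failed `pushkey` above produces only a warning: it usually means someone else moved the remote bookmark since we listed the keys, and a fresh pull is the safe recovery.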
def incoming(oldincoming, ui, repo, source="default", **opts):
    if opts.get('bookmarks'):
        source, branches = hg.parseurl(ui.expandpath(source), opts.get('branch'))
        other = hg.repository(hg.remoteui(repo, opts), source)
        ui.status(_('comparing with %s\n') % url.hidepassword(source))
        return bookmarks.diff(ui, repo, other)
    else:
        return oldincoming(ui, repo, source, **opts)

def outgoing(oldoutgoing, ui, repo, dest=None, **opts):
    if opts.get('bookmarks'):
        dest = ui.expandpath(dest or 'default-push', dest or 'default')
        dest, branches = hg.parseurl(dest, opts.get('branch'))
        other = hg.repository(hg.remoteui(repo, opts), dest)
        ui.status(_('comparing with %s\n') % url.hidepassword(dest))
        return bookmarks.diff(ui, other, repo)
    else:
        return oldoutgoing(ui, repo, dest, **opts)

def uisetup(ui):
    extensions.wrapfunction(repair, "strip", strip)
    if ui.configbool('bookmarks', 'track.current'):
        extensions.wrapcommand(commands.table, 'update', updatecurbookmark)

    entry = extensions.wrapcommand(commands.table, 'pull', pull)
    entry[1].append(('B', 'bookmark', [],
                     _("bookmark to import"),
                     _('BOOKMARK')))
    entry = extensions.wrapcommand(commands.table, 'push', push)
    entry[1].append(('B', 'bookmark', [],
                     _("bookmark to export"),
                     _('BOOKMARK')))
    entry = extensions.wrapcommand(commands.table, 'incoming', incoming)
    entry[1].append(('B', 'bookmarks', False,
                     _("compare bookmark")))
    entry = extensions.wrapcommand(commands.table, 'outgoing', outgoing)
    entry[1].append(('B', 'bookmarks', False,
                     _("compare bookmark")))

def updatecurbookmark(orig, ui, repo, *args, **opts):
    '''Set the current bookmark

    If the user updates to a bookmark we update the .hg/bookmarks.current
    file.
    '''
    res = orig(ui, repo, *args, **opts)
    rev = opts['rev']
    if not rev and len(args) > 0:
        rev = args[0]
    bookmarks.setcurrent(repo, rev)
    return res

def bmrevset(repo, subset, x):
    """``bookmark([name])``
    The named bookmark or all bookmarks.
    """
    # i18n: "bookmark" is a keyword
    args = revset.getargs(x, 0, 1, _('bookmark takes one or no arguments'))
    if args:
        bm = revset.getstring(args[0],
                              # i18n: "bookmark" is a keyword
                              _('the argument to bookmark must be a string'))
        bmrev = bookmarks.listbookmarks(repo).get(bm, None)
        if bmrev:
            bmrev = repo.changelog.rev(bin(bmrev))
        return [r for r in subset if r == bmrev]
    bms = set([repo.changelog.rev(bin(r))
               for r in bookmarks.listbookmarks(repo).values()])
    return [r for r in subset if r in bms]

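`bmrevset` implements the ``bookmark([name])`` revset predicate: it narrows the candidate subset to bookmarked revisions, preserving subset order. The filtering shape in isolation, with toy revision numbers (hypothetical helper, not the revset module's API):

```python
def bookmark_revset(subset, bookmark_revs, name=None):
    """Filter subset to bookmarked revisions; name selects one bookmark."""
    if name is not None:
        target = bookmark_revs.get(name)   # None if the bookmark is unknown
        return [r for r in subset if r == target]
    bms = set(bookmark_revs.values())      # all bookmarked revision numbers
    return [r for r in subset if r in bms]

revs = {'feature': 5, 'stable': 2}
assert bookmark_revset(range(8), revs) == [2, 5]
assert bookmark_revset(range(8), revs, 'feature') == [5]
assert bookmark_revset(range(8), revs, 'missing') == []
```

Filtering the incoming subset (rather than returning bookmark revisions directly) is what lets the predicate compose with other revset operators such as `and` and `not`.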
def extsetup(ui):
    revset.symbols['bookmark'] = bmrevset

cmdtable = {
    "bookmarks":
        (bookmark,
         [('f', 'force', False, _('force')),
          ('r', 'rev', '', _('revision'), _('REV')),
          ('d', 'delete', False, _('delete a given bookmark')),
          ('m', 'rename', '', _('rename a given bookmark'), _('NAME'))],
         _('hg bookmarks [-f] [-d] [-m NAME] [-r REV] [NAME]')),
}

colortable = {'bookmarks.current': 'green'}

# tell hggettext to extract docstrings from these functions:
i18nfunctions = [bmrevset]
mercurial/localrepo.py
@@ -1,1952 +1,1955 @@
# localrepo.py - read/write repository class for mercurial
#
# Copyright 2005-2007 Matt Mackall <mpm@selenic.com>
#
# This software may be used and distributed according to the terms of the
# GNU General Public License version 2 or any later version.

from node import bin, hex, nullid, nullrev, short
from i18n import _
import repo, changegroup, subrepo, discovery, pushkey
import changelog, dirstate, filelog, manifest, context, bookmarks
import lock, transaction, store, encoding
import util, extensions, hook, error
import match as matchmod
import merge as mergemod
import tags as tagsmod
import url as urlmod
from lock import release
import weakref, errno, os, time, inspect
propertycache = util.propertycache

class localrepository(repo.repository):
    capabilities = set(('lookup', 'changegroupsubset', 'branchmap', 'pushkey'))
    supportedformats = set(('revlogv1', 'parentdelta'))
    supported = supportedformats | set(('store', 'fncache', 'shared',
                                        'dotencode'))

    def __init__(self, baseui, path=None, create=0):
        repo.repository.__init__(self)
        self.root = os.path.realpath(util.expandpath(path))
        self.path = os.path.join(self.root, ".hg")
        self.origroot = path
        self.auditor = util.path_auditor(self.root, self._checknested)
        self.opener = util.opener(self.path)
        self.wopener = util.opener(self.root)
        self.baseui = baseui
        self.ui = baseui.copy()

        try:
            self.ui.readconfig(self.join("hgrc"), self.root)
            extensions.loadall(self.ui)
        except IOError:
            pass

        if not os.path.isdir(self.path):
            if create:
                if not os.path.exists(path):
                    util.makedirs(path)
                os.mkdir(self.path)
                requirements = ["revlogv1"]
                if self.ui.configbool('format', 'usestore', True):
                    os.mkdir(os.path.join(self.path, "store"))
                    requirements.append("store")
                    if self.ui.configbool('format', 'usefncache', True):
                        requirements.append("fncache")
                        if self.ui.configbool('format', 'dotencode', True):
                            requirements.append('dotencode')
                    # create an invalid changelog
                    self.opener("00changelog.i", "a").write(
                        '\0\0\0\2' # represents revlogv2
                        ' dummy changelog to prevent using the old repo layout'
                    )
                if self.ui.configbool('format', 'parentdelta', False):
                    requirements.append("parentdelta")
            else:
                raise error.RepoError(_("repository %s not found") % path)
        elif create:
            raise error.RepoError(_("repository %s already exists") % path)
        else:
            # find requirements
            requirements = set()
            try:
                requirements = set(self.opener("requires").read().splitlines())
            except IOError, inst:
                if inst.errno != errno.ENOENT:
                    raise
            for r in requirements - self.supported:
                raise error.RepoError(_("requirement '%s' not supported") % r)

        self.sharedpath = self.path
        try:
            s = os.path.realpath(self.opener("sharedpath").read())
            if not os.path.exists(s):
                raise error.RepoError(
                    _('.hg/sharedpath points to nonexistent directory %s') % s)
            self.sharedpath = s
        except IOError, inst:
87 except IOError, inst:
88 if inst.errno != errno.ENOENT:
88 if inst.errno != errno.ENOENT:
89 raise
89 raise
90
90
91 self.store = store.store(requirements, self.sharedpath, util.opener)
91 self.store = store.store(requirements, self.sharedpath, util.opener)
92 self.spath = self.store.path
92 self.spath = self.store.path
93 self.sopener = self.store.opener
93 self.sopener = self.store.opener
94 self.sjoin = self.store.join
94 self.sjoin = self.store.join
95 self.opener.createmode = self.store.createmode
95 self.opener.createmode = self.store.createmode
96 self._applyrequirements(requirements)
96 self._applyrequirements(requirements)
97 if create:
97 if create:
98 self._writerequirements()
98 self._writerequirements()
99
99
100 # These two define the set of tags for this repository. _tags
100 # These two define the set of tags for this repository. _tags
101 # maps tag name to node; _tagtypes maps tag name to 'global' or
101 # maps tag name to node; _tagtypes maps tag name to 'global' or
102 # 'local'. (Global tags are defined by .hgtags across all
102 # 'local'. (Global tags are defined by .hgtags across all
103 # heads, and local tags are defined in .hg/localtags.) They
103 # heads, and local tags are defined in .hg/localtags.) They
104 # constitute the in-memory cache of tags.
104 # constitute the in-memory cache of tags.
105 self._tags = None
105 self._tags = None
106 self._tagtypes = None
106 self._tagtypes = None
107
107
108 self._branchcache = None
108 self._branchcache = None
109 self._branchcachetip = None
109 self._branchcachetip = None
110 self.nodetagscache = None
110 self.nodetagscache = None
111 self.filterpats = {}
111 self.filterpats = {}
112 self._datafilters = {}
112 self._datafilters = {}
113 self._transref = self._lockref = self._wlockref = None
113 self._transref = self._lockref = self._wlockref = None
114
114
115 def _applyrequirements(self, requirements):
115 def _applyrequirements(self, requirements):
116 self.requirements = requirements
116 self.requirements = requirements
117 self.sopener.options = {}
117 self.sopener.options = {}
118 if 'parentdelta' in requirements:
118 if 'parentdelta' in requirements:
119 self.sopener.options['parentdelta'] = 1
119 self.sopener.options['parentdelta'] = 1
120
120
121 def _writerequirements(self):
121 def _writerequirements(self):
122 reqfile = self.opener("requires", "w")
122 reqfile = self.opener("requires", "w")
123 for r in self.requirements:
123 for r in self.requirements:
124 reqfile.write("%s\n" % r)
124 reqfile.write("%s\n" % r)
125 reqfile.close()
125 reqfile.close()
126
126
127 def _checknested(self, path):
127 def _checknested(self, path):
128 """Determine if path is a legal nested repository."""
128 """Determine if path is a legal nested repository."""
129 if not path.startswith(self.root):
129 if not path.startswith(self.root):
130 return False
130 return False
131 subpath = path[len(self.root) + 1:]
131 subpath = path[len(self.root) + 1:]
132
132
133 # XXX: Checking against the current working copy is wrong in
133 # XXX: Checking against the current working copy is wrong in
134 # the sense that it can reject things like
134 # the sense that it can reject things like
135 #
135 #
136 # $ hg cat -r 10 sub/x.txt
136 # $ hg cat -r 10 sub/x.txt
137 #
137 #
138 # if sub/ is no longer a subrepository in the working copy
138 # if sub/ is no longer a subrepository in the working copy
139 # parent revision.
139 # parent revision.
140 #
140 #
141 # However, it can of course also allow things that would have
141 # However, it can of course also allow things that would have
142 # been rejected before, such as the above cat command if sub/
142 # been rejected before, such as the above cat command if sub/
143 # is a subrepository now, but was a normal directory before.
143 # is a subrepository now, but was a normal directory before.
144 # The old path auditor would have rejected by mistake since it
144 # The old path auditor would have rejected by mistake since it
145 # panics when it sees sub/.hg/.
145 # panics when it sees sub/.hg/.
146 #
146 #
147 # All in all, checking against the working copy seems sensible
147 # All in all, checking against the working copy seems sensible
148 # since we want to prevent access to nested repositories on
148 # since we want to prevent access to nested repositories on
149 # the filesystem *now*.
149 # the filesystem *now*.
150 ctx = self[None]
150 ctx = self[None]
151 parts = util.splitpath(subpath)
151 parts = util.splitpath(subpath)
152 while parts:
152 while parts:
153 prefix = os.sep.join(parts)
153 prefix = os.sep.join(parts)
154 if prefix in ctx.substate:
154 if prefix in ctx.substate:
155 if prefix == subpath:
155 if prefix == subpath:
156 return True
156 return True
157 else:
157 else:
158 sub = ctx.sub(prefix)
158 sub = ctx.sub(prefix)
159 return sub.checknested(subpath[len(prefix) + 1:])
159 return sub.checknested(subpath[len(prefix) + 1:])
160 else:
160 else:
161 parts.pop()
161 parts.pop()
162 return False
162 return False
163
163
164 @util.propertycache
164 @util.propertycache
165 def _bookmarks(self):
165 def _bookmarks(self):
166 return bookmarks.read(self)
166 return bookmarks.read(self)
167
167
168 @util.propertycache
168 @util.propertycache
169 def _bookmarkcurrent(self):
169 def _bookmarkcurrent(self):
170 return bookmarks.readcurrent(self)
170 return bookmarks.readcurrent(self)
171
171
172 @propertycache
172 @propertycache
173 def changelog(self):
173 def changelog(self):
174 c = changelog.changelog(self.sopener)
174 c = changelog.changelog(self.sopener)
175 if 'HG_PENDING' in os.environ:
175 if 'HG_PENDING' in os.environ:
176 p = os.environ['HG_PENDING']
176 p = os.environ['HG_PENDING']
177 if p.startswith(self.root):
177 if p.startswith(self.root):
178 c.readpending('00changelog.i.a')
178 c.readpending('00changelog.i.a')
179 self.sopener.options['defversion'] = c.version
179 self.sopener.options['defversion'] = c.version
180 return c
180 return c
181
181
182 @propertycache
182 @propertycache
183 def manifest(self):
183 def manifest(self):
184 return manifest.manifest(self.sopener)
184 return manifest.manifest(self.sopener)
185
185
186 @propertycache
186 @propertycache
187 def dirstate(self):
187 def dirstate(self):
188 warned = [0]
188 warned = [0]
189 def validate(node):
189 def validate(node):
190 try:
190 try:
191 r = self.changelog.rev(node)
191 r = self.changelog.rev(node)
192 return node
192 return node
193 except error.LookupError:
193 except error.LookupError:
194 if not warned[0]:
194 if not warned[0]:
195 warned[0] = True
195 warned[0] = True
196 self.ui.warn(_("warning: ignoring unknown"
196 self.ui.warn(_("warning: ignoring unknown"
197 " working parent %s!\n") % short(node))
197 " working parent %s!\n") % short(node))
198 return nullid
198 return nullid
199
199
200 return dirstate.dirstate(self.opener, self.ui, self.root, validate)
200 return dirstate.dirstate(self.opener, self.ui, self.root, validate)
201
201
202 def __getitem__(self, changeid):
202 def __getitem__(self, changeid):
203 if changeid is None:
203 if changeid is None:
204 return context.workingctx(self)
204 return context.workingctx(self)
205 return context.changectx(self, changeid)
205 return context.changectx(self, changeid)
206
206
207 def __contains__(self, changeid):
207 def __contains__(self, changeid):
208 try:
208 try:
209 return bool(self.lookup(changeid))
209 return bool(self.lookup(changeid))
210 except error.RepoLookupError:
210 except error.RepoLookupError:
211 return False
211 return False
212
212
213 def __nonzero__(self):
213 def __nonzero__(self):
214 return True
214 return True
215
215
216 def __len__(self):
216 def __len__(self):
217 return len(self.changelog)
217 return len(self.changelog)
218
218
219 def __iter__(self):
219 def __iter__(self):
220 for i in xrange(len(self)):
220 for i in xrange(len(self)):
221 yield i
221 yield i
222
222
223 def url(self):
223 def url(self):
224 return 'file:' + self.root
224 return 'file:' + self.root
225
225
226 def hook(self, name, throw=False, **args):
226 def hook(self, name, throw=False, **args):
227 return hook.hook(self.ui, self, name, throw, **args)
227 return hook.hook(self.ui, self, name, throw, **args)
228
228
229 tag_disallowed = ':\r\n'
229 tag_disallowed = ':\r\n'
230
230
231 def _tag(self, names, node, message, local, user, date, extra={}):
231 def _tag(self, names, node, message, local, user, date, extra={}):
232 if isinstance(names, str):
232 if isinstance(names, str):
233 allchars = names
233 allchars = names
234 names = (names,)
234 names = (names,)
235 else:
235 else:
236 allchars = ''.join(names)
236 allchars = ''.join(names)
237 for c in self.tag_disallowed:
237 for c in self.tag_disallowed:
238 if c in allchars:
238 if c in allchars:
239 raise util.Abort(_('%r cannot be used in a tag name') % c)
239 raise util.Abort(_('%r cannot be used in a tag name') % c)
240
240
241 branches = self.branchmap()
241 branches = self.branchmap()
242 for name in names:
242 for name in names:
243 self.hook('pretag', throw=True, node=hex(node), tag=name,
243 self.hook('pretag', throw=True, node=hex(node), tag=name,
244 local=local)
244 local=local)
245 if name in branches:
245 if name in branches:
246 self.ui.warn(_("warning: tag %s conflicts with existing"
246 self.ui.warn(_("warning: tag %s conflicts with existing"
247 " branch name\n") % name)
247 " branch name\n") % name)
248
248
249 def writetags(fp, names, munge, prevtags):
249 def writetags(fp, names, munge, prevtags):
250 fp.seek(0, 2)
250 fp.seek(0, 2)
251 if prevtags and prevtags[-1] != '\n':
251 if prevtags and prevtags[-1] != '\n':
252 fp.write('\n')
252 fp.write('\n')
253 for name in names:
253 for name in names:
254 m = munge and munge(name) or name
254 m = munge and munge(name) or name
255 if self._tagtypes and name in self._tagtypes:
255 if self._tagtypes and name in self._tagtypes:
256 old = self._tags.get(name, nullid)
256 old = self._tags.get(name, nullid)
257 fp.write('%s %s\n' % (hex(old), m))
257 fp.write('%s %s\n' % (hex(old), m))
258 fp.write('%s %s\n' % (hex(node), m))
258 fp.write('%s %s\n' % (hex(node), m))
259 fp.close()
259 fp.close()
260
260
261 prevtags = ''
261 prevtags = ''
262 if local:
262 if local:
263 try:
263 try:
264 fp = self.opener('localtags', 'r+')
264 fp = self.opener('localtags', 'r+')
265 except IOError:
265 except IOError:
266 fp = self.opener('localtags', 'a')
266 fp = self.opener('localtags', 'a')
267 else:
267 else:
268 prevtags = fp.read()
268 prevtags = fp.read()
269
269
270 # local tags are stored in the current charset
270 # local tags are stored in the current charset
271 writetags(fp, names, None, prevtags)
271 writetags(fp, names, None, prevtags)
272 for name in names:
272 for name in names:
273 self.hook('tag', node=hex(node), tag=name, local=local)
273 self.hook('tag', node=hex(node), tag=name, local=local)
274 return
274 return
275
275
276 try:
276 try:
277 fp = self.wfile('.hgtags', 'rb+')
277 fp = self.wfile('.hgtags', 'rb+')
278 except IOError:
278 except IOError:
279 fp = self.wfile('.hgtags', 'ab')
279 fp = self.wfile('.hgtags', 'ab')
280 else:
280 else:
281 prevtags = fp.read()
281 prevtags = fp.read()
282
282
283 # committed tags are stored in UTF-8
283 # committed tags are stored in UTF-8
284 writetags(fp, names, encoding.fromlocal, prevtags)
284 writetags(fp, names, encoding.fromlocal, prevtags)
285
285
286 if '.hgtags' not in self.dirstate:
286 if '.hgtags' not in self.dirstate:
287 self[None].add(['.hgtags'])
287 self[None].add(['.hgtags'])
288
288
289 m = matchmod.exact(self.root, '', ['.hgtags'])
289 m = matchmod.exact(self.root, '', ['.hgtags'])
290 tagnode = self.commit(message, user, date, extra=extra, match=m)
290 tagnode = self.commit(message, user, date, extra=extra, match=m)
291
291
292 for name in names:
292 for name in names:
293 self.hook('tag', node=hex(node), tag=name, local=local)
293 self.hook('tag', node=hex(node), tag=name, local=local)
294
294
295 return tagnode
295 return tagnode
296
296
297 def tag(self, names, node, message, local, user, date):
297 def tag(self, names, node, message, local, user, date):
298 '''tag a revision with one or more symbolic names.
298 '''tag a revision with one or more symbolic names.
299
299
300 names is a list of strings or, when adding a single tag, names may be a
300 names is a list of strings or, when adding a single tag, names may be a
301 string.
301 string.
302
302
303 if local is True, the tags are stored in a per-repository file.
303 if local is True, the tags are stored in a per-repository file.
304 otherwise, they are stored in the .hgtags file, and a new
304 otherwise, they are stored in the .hgtags file, and a new
305 changeset is committed with the change.
305 changeset is committed with the change.
306
306
307 keyword arguments:
307 keyword arguments:
308
308
309 local: whether to store tags in non-version-controlled file
309 local: whether to store tags in non-version-controlled file
310 (default False)
310 (default False)
311
311
312 message: commit message to use if committing
312 message: commit message to use if committing
313
313
314 user: name of user to use if committing
314 user: name of user to use if committing
315
315
316 date: date tuple to use if committing'''
316 date: date tuple to use if committing'''
317
317
318 if not local:
318 if not local:
319 for x in self.status()[:5]:
319 for x in self.status()[:5]:
320 if '.hgtags' in x:
320 if '.hgtags' in x:
321 raise util.Abort(_('working copy of .hgtags is changed '
321 raise util.Abort(_('working copy of .hgtags is changed '
322 '(please commit .hgtags manually)'))
322 '(please commit .hgtags manually)'))
323
323
324 self.tags() # instantiate the cache
324 self.tags() # instantiate the cache
325 self._tag(names, node, message, local, user, date)
325 self._tag(names, node, message, local, user, date)
326
326
327 def tags(self):
327 def tags(self):
328 '''return a mapping of tag to node'''
328 '''return a mapping of tag to node'''
329 if self._tags is None:
329 if self._tags is None:
330 (self._tags, self._tagtypes) = self._findtags()
330 (self._tags, self._tagtypes) = self._findtags()
331
331
332 return self._tags
332 return self._tags
333
333
334 def _findtags(self):
334 def _findtags(self):
335 '''Do the hard work of finding tags. Return a pair of dicts
335 '''Do the hard work of finding tags. Return a pair of dicts
336 (tags, tagtypes) where tags maps tag name to node, and tagtypes
336 (tags, tagtypes) where tags maps tag name to node, and tagtypes
337 maps tag name to a string like \'global\' or \'local\'.
337 maps tag name to a string like \'global\' or \'local\'.
338 Subclasses or extensions are free to add their own tags, but
338 Subclasses or extensions are free to add their own tags, but
339 should be aware that the returned dicts will be retained for the
339 should be aware that the returned dicts will be retained for the
340 duration of the localrepo object.'''
340 duration of the localrepo object.'''
341
341
342 # XXX what tagtype should subclasses/extensions use? Currently
342 # XXX what tagtype should subclasses/extensions use? Currently
343 # mq and bookmarks add tags, but do not set the tagtype at all.
343 # mq and bookmarks add tags, but do not set the tagtype at all.
344 # Should each extension invent its own tag type? Should there
344 # Should each extension invent its own tag type? Should there
345 # be one tagtype for all such "virtual" tags? Or is the status
345 # be one tagtype for all such "virtual" tags? Or is the status
346 # quo fine?
346 # quo fine?
347
347
348 alltags = {} # map tag name to (node, hist)
348 alltags = {} # map tag name to (node, hist)
349 tagtypes = {}
349 tagtypes = {}
350
350
351 tagsmod.findglobaltags(self.ui, self, alltags, tagtypes)
351 tagsmod.findglobaltags(self.ui, self, alltags, tagtypes)
352 tagsmod.readlocaltags(self.ui, self, alltags, tagtypes)
352 tagsmod.readlocaltags(self.ui, self, alltags, tagtypes)
353
353
354 # Build the return dicts. Have to re-encode tag names because
354 # Build the return dicts. Have to re-encode tag names because
355 # the tags module always uses UTF-8 (in order not to lose info
355 # the tags module always uses UTF-8 (in order not to lose info
356 # writing to the cache), but the rest of Mercurial wants them in
356 # writing to the cache), but the rest of Mercurial wants them in
357 # local encoding.
357 # local encoding.
358 tags = {}
358 tags = {}
359 for (name, (node, hist)) in alltags.iteritems():
359 for (name, (node, hist)) in alltags.iteritems():
360 if node != nullid:
360 if node != nullid:
361 tags[encoding.tolocal(name)] = node
361 tags[encoding.tolocal(name)] = node
362 tags['tip'] = self.changelog.tip()
362 tags['tip'] = self.changelog.tip()
363 tagtypes = dict([(encoding.tolocal(name), value)
363 tagtypes = dict([(encoding.tolocal(name), value)
364 for (name, value) in tagtypes.iteritems()])
364 for (name, value) in tagtypes.iteritems()])
365 return (tags, tagtypes)
365 return (tags, tagtypes)
366
366
367 def tagtype(self, tagname):
367 def tagtype(self, tagname):
368 '''
368 '''
369 return the type of the given tag. result can be:
369 return the type of the given tag. result can be:
370
370
371 'local' : a local tag
371 'local' : a local tag
372 'global' : a global tag
372 'global' : a global tag
373 None : tag does not exist
373 None : tag does not exist
374 '''
374 '''
375
375
376 self.tags()
376 self.tags()
377
377
378 return self._tagtypes.get(tagname)
378 return self._tagtypes.get(tagname)
379
379
380 def tagslist(self):
380 def tagslist(self):
381 '''return a list of tags ordered by revision'''
381 '''return a list of tags ordered by revision'''
382 l = []
382 l = []
383 for t, n in self.tags().iteritems():
383 for t, n in self.tags().iteritems():
384 try:
384 try:
385 r = self.changelog.rev(n)
385 r = self.changelog.rev(n)
386 except:
386 except:
387 r = -2 # sort to the beginning of the list if unknown
387 r = -2 # sort to the beginning of the list if unknown
388 l.append((r, t, n))
388 l.append((r, t, n))
389 return [(t, n) for r, t, n in sorted(l)]
389 return [(t, n) for r, t, n in sorted(l)]
390
390
391 def nodetags(self, node):
391 def nodetags(self, node):
392 '''return the tags associated with a node'''
392 '''return the tags associated with a node'''
393 if not self.nodetagscache:
393 if not self.nodetagscache:
394 self.nodetagscache = {}
394 self.nodetagscache = {}
395 for t, n in self.tags().iteritems():
395 for t, n in self.tags().iteritems():
396 self.nodetagscache.setdefault(n, []).append(t)
396 self.nodetagscache.setdefault(n, []).append(t)
397 for tags in self.nodetagscache.itervalues():
397 for tags in self.nodetagscache.itervalues():
398 tags.sort()
398 tags.sort()
399 return self.nodetagscache.get(node, [])
399 return self.nodetagscache.get(node, [])
400
400
401 def _branchtags(self, partial, lrev):
401 def _branchtags(self, partial, lrev):
402 # TODO: rename this function?
402 # TODO: rename this function?
403 tiprev = len(self) - 1
403 tiprev = len(self) - 1
404 if lrev != tiprev:
404 if lrev != tiprev:
405 ctxgen = (self[r] for r in xrange(lrev + 1, tiprev + 1))
405 ctxgen = (self[r] for r in xrange(lrev + 1, tiprev + 1))
406 self._updatebranchcache(partial, ctxgen)
406 self._updatebranchcache(partial, ctxgen)
407 self._writebranchcache(partial, self.changelog.tip(), tiprev)
407 self._writebranchcache(partial, self.changelog.tip(), tiprev)
408
408
409 return partial
409 return partial
410
410
411 def updatebranchcache(self):
411 def updatebranchcache(self):
412 tip = self.changelog.tip()
412 tip = self.changelog.tip()
413 if self._branchcache is not None and self._branchcachetip == tip:
413 if self._branchcache is not None and self._branchcachetip == tip:
414 return self._branchcache
414 return self._branchcache
415
415
416 oldtip = self._branchcachetip
416 oldtip = self._branchcachetip
417 self._branchcachetip = tip
417 self._branchcachetip = tip
418 if oldtip is None or oldtip not in self.changelog.nodemap:
418 if oldtip is None or oldtip not in self.changelog.nodemap:
419 partial, last, lrev = self._readbranchcache()
419 partial, last, lrev = self._readbranchcache()
420 else:
420 else:
421 lrev = self.changelog.rev(oldtip)
421 lrev = self.changelog.rev(oldtip)
422 partial = self._branchcache
422 partial = self._branchcache
423
423
424 self._branchtags(partial, lrev)
424 self._branchtags(partial, lrev)
425 # this private cache holds all heads (not just tips)
425 # this private cache holds all heads (not just tips)
426 self._branchcache = partial
426 self._branchcache = partial
427
427
428 def branchmap(self):
428 def branchmap(self):
429 '''returns a dictionary {branch: [branchheads]}'''
429 '''returns a dictionary {branch: [branchheads]}'''
430 self.updatebranchcache()
430 self.updatebranchcache()
431 return self._branchcache
431 return self._branchcache
432
432
433 def branchtags(self):
433 def branchtags(self):
434 '''return a dict where branch names map to the tipmost head of
434 '''return a dict where branch names map to the tipmost head of
435 the branch, open heads come before closed'''
435 the branch, open heads come before closed'''
436 bt = {}
436 bt = {}
437 for bn, heads in self.branchmap().iteritems():
437 for bn, heads in self.branchmap().iteritems():
438 tip = heads[-1]
438 tip = heads[-1]
439 for h in reversed(heads):
439 for h in reversed(heads):
440 if 'close' not in self.changelog.read(h)[5]:
440 if 'close' not in self.changelog.read(h)[5]:
441 tip = h
441 tip = h
442 break
442 break
443 bt[bn] = tip
443 bt[bn] = tip
444 return bt
444 return bt
445
445
446 def _readbranchcache(self):
446 def _readbranchcache(self):
447 partial = {}
447 partial = {}
448 try:
448 try:
449 f = self.opener("cache/branchheads")
449 f = self.opener("cache/branchheads")
450 lines = f.read().split('\n')
450 lines = f.read().split('\n')
451 f.close()
451 f.close()
452 except (IOError, OSError):
452 except (IOError, OSError):
453 return {}, nullid, nullrev
453 return {}, nullid, nullrev
454
454
455 try:
455 try:
456 last, lrev = lines.pop(0).split(" ", 1)
456 last, lrev = lines.pop(0).split(" ", 1)
457 last, lrev = bin(last), int(lrev)
457 last, lrev = bin(last), int(lrev)
458 if lrev >= len(self) or self[lrev].node() != last:
458 if lrev >= len(self) or self[lrev].node() != last:
459 # invalidate the cache
459 # invalidate the cache
460 raise ValueError('invalidating branch cache (tip differs)')
460 raise ValueError('invalidating branch cache (tip differs)')
461 for l in lines:
461 for l in lines:
462 if not l:
462 if not l:
463 continue
463 continue
464 node, label = l.split(" ", 1)
464 node, label = l.split(" ", 1)
465 label = encoding.tolocal(label.strip())
465 label = encoding.tolocal(label.strip())
466 partial.setdefault(label, []).append(bin(node))
466 partial.setdefault(label, []).append(bin(node))
467 except KeyboardInterrupt:
467 except KeyboardInterrupt:
468 raise
468 raise
469 except Exception, inst:
469 except Exception, inst:
470 if self.ui.debugflag:
470 if self.ui.debugflag:
471 self.ui.warn(str(inst), '\n')
471 self.ui.warn(str(inst), '\n')
472 partial, last, lrev = {}, nullid, nullrev
472 partial, last, lrev = {}, nullid, nullrev
473 return partial, last, lrev
473 return partial, last, lrev
474
474
475 def _writebranchcache(self, branches, tip, tiprev):
475 def _writebranchcache(self, branches, tip, tiprev):
476 try:
476 try:
477 f = self.opener("cache/branchheads", "w", atomictemp=True)
477 f = self.opener("cache/branchheads", "w", atomictemp=True)
478 f.write("%s %s\n" % (hex(tip), tiprev))
478 f.write("%s %s\n" % (hex(tip), tiprev))
479 for label, nodes in branches.iteritems():
479 for label, nodes in branches.iteritems():
480 for node in nodes:
480 for node in nodes:
481 f.write("%s %s\n" % (hex(node), encoding.fromlocal(label)))
481 f.write("%s %s\n" % (hex(node), encoding.fromlocal(label)))
482 f.rename()
482 f.rename()
483 except (IOError, OSError):
483 except (IOError, OSError):
484 pass
484 pass
485
485
486 def _updatebranchcache(self, partial, ctxgen):
486 def _updatebranchcache(self, partial, ctxgen):
487 # collect new branch entries
487 # collect new branch entries
488 newbranches = {}
488 newbranches = {}
489 for c in ctxgen:
489 for c in ctxgen:
490 newbranches.setdefault(c.branch(), []).append(c.node())
490 newbranches.setdefault(c.branch(), []).append(c.node())
491 # if older branchheads are reachable from new ones, they aren't
491 # if older branchheads are reachable from new ones, they aren't
492 # really branchheads. Note checking parents is insufficient:
492 # really branchheads. Note checking parents is insufficient:
493 # 1 (branch a) -> 2 (branch b) -> 3 (branch a)
493 # 1 (branch a) -> 2 (branch b) -> 3 (branch a)
494 for branch, newnodes in newbranches.iteritems():
494 for branch, newnodes in newbranches.iteritems():
495 bheads = partial.setdefault(branch, [])
495 bheads = partial.setdefault(branch, [])
496 bheads.extend(newnodes)
496 bheads.extend(newnodes)
497 if len(bheads) <= 1:
497 if len(bheads) <= 1:
498 continue
498 continue
499 # starting from tip means fewer passes over reachable
499 # starting from tip means fewer passes over reachable
500 while newnodes:
500 while newnodes:
501 latest = newnodes.pop()
501 latest = newnodes.pop()
502 if latest not in bheads:
502 if latest not in bheads:
503 continue
503 continue
504 minbhrev = self[min([self[bh].rev() for bh in bheads])].node()
504 minbhrev = self[min([self[bh].rev() for bh in bheads])].node()
505 reachable = self.changelog.reachable(latest, minbhrev)
505 reachable = self.changelog.reachable(latest, minbhrev)
506 reachable.remove(latest)
506 reachable.remove(latest)
507 bheads = [b for b in bheads if b not in reachable]
507 bheads = [b for b in bheads if b not in reachable]
508 partial[branch] = bheads
508 partial[branch] = bheads
509
509
510 def lookup(self, key):
510 def lookup(self, key):
511 if isinstance(key, int):
511 if isinstance(key, int):
512 return self.changelog.node(key)
512 return self.changelog.node(key)
513 elif key == '.':
513 elif key == '.':
514 return self.dirstate.parents()[0]
514 return self.dirstate.parents()[0]
515 elif key == 'null':
515 elif key == 'null':
516 return nullid
516 return nullid
517 elif key == 'tip':
517 elif key == 'tip':
518 return self.changelog.tip()
518 return self.changelog.tip()
519 n = self.changelog._match(key)
519 n = self.changelog._match(key)
520 if n:
520 if n:
521 return n
521 return n
522 if key in self.tags():
522 if key in self.tags():
523 return self.tags()[key]
523 return self.tags()[key]
524 if key in self.branchtags():
524 if key in self.branchtags():
525 return self.branchtags()[key]
525 return self.branchtags()[key]
526 n = self.changelog._partialmatch(key)
526 n = self.changelog._partialmatch(key)
527 if n:
527 if n:
528 return n
528 return n
529
529
530 # can't find key, check if it might have come from damaged dirstate
530 # can't find key, check if it might have come from damaged dirstate
531 if key in self.dirstate.parents():
531 if key in self.dirstate.parents():
532 raise error.Abort(_("working directory has unknown parent '%s'!")
532 raise error.Abort(_("working directory has unknown parent '%s'!")
533 % short(key))
533 % short(key))
534 try:
534 try:
535 if len(key) == 20:
535 if len(key) == 20:
536 key = hex(key)
536 key = hex(key)
537 except:
537 except:
538 pass
538 pass
539 raise error.RepoLookupError(_("unknown revision '%s'") % key)
539 raise error.RepoLookupError(_("unknown revision '%s'") % key)
540
540
541 def lookupbranch(self, key, remote=None):
541 def lookupbranch(self, key, remote=None):
542 repo = remote or self
542 repo = remote or self
543 if key in repo.branchmap():
543 if key in repo.branchmap():
544 return key
544 return key
545
545
546 repo = (remote and remote.local()) and remote or self
546 repo = (remote and remote.local()) and remote or self
547 return repo[key].branch()
547 return repo[key].branch()
548
548
549 def local(self):
    def local(self):
        return True

    def join(self, f):
        return os.path.join(self.path, f)

    def wjoin(self, f):
        return os.path.join(self.root, f)

    def file(self, f):
        if f[0] == '/':
            f = f[1:]
        return filelog.filelog(self.sopener, f)

    def changectx(self, changeid):
        return self[changeid]

    def parents(self, changeid=None):
        '''get list of changectxs for parents of changeid'''
        return self[changeid].parents()

    def filectx(self, path, changeid=None, fileid=None):
        """changeid can be a changeset revision, node, or tag.
        fileid can be a file revision or node."""
        return context.filectx(self, path, changeid, fileid)

    def getcwd(self):
        return self.dirstate.getcwd()

    def pathto(self, f, cwd=None):
        return self.dirstate.pathto(f, cwd)

    def wfile(self, f, mode='r'):
        return self.wopener(f, mode)

    def _link(self, f):
        return os.path.islink(self.wjoin(f))

    def _loadfilter(self, filter):
        if filter not in self.filterpats:
            l = []
            for pat, cmd in self.ui.configitems(filter):
                if cmd == '!':
                    continue
                mf = matchmod.match(self.root, '', [pat])
                fn = None
                params = cmd
                for name, filterfn in self._datafilters.iteritems():
                    if cmd.startswith(name):
                        fn = filterfn
                        params = cmd[len(name):].lstrip()
                        break
                if not fn:
                    fn = lambda s, c, **kwargs: util.filter(s, c)
                # Wrap old filters not supporting keyword arguments
                if not inspect.getargspec(fn)[2]:
                    oldfn = fn
                    fn = lambda s, c, **kwargs: oldfn(s, c)
                l.append((mf, fn, params))
            self.filterpats[filter] = l
        return self.filterpats[filter]

    def _filter(self, filterpats, filename, data):
        for mf, fn, cmd in filterpats:
            if mf(filename):
                self.ui.debug("filtering %s through %s\n" % (filename, cmd))
                data = fn(data, cmd, ui=self.ui, repo=self, filename=filename)
                break

        return data

    @propertycache
    def _encodefilterpats(self):
        return self._loadfilter('encode')

    @propertycache
    def _decodefilterpats(self):
        return self._loadfilter('decode')

    def adddatafilter(self, name, filter):
        self._datafilters[name] = filter

    def wread(self, filename):
        if self._link(filename):
            data = os.readlink(self.wjoin(filename))
        else:
            data = self.wopener(filename, 'r').read()
        return self._filter(self._encodefilterpats, filename, data)

    def wwrite(self, filename, data, flags):
        data = self._filter(self._decodefilterpats, filename, data)
        if 'l' in flags:
            self.wopener.symlink(data, filename)
        else:
            self.wopener(filename, 'w').write(data)
            if 'x' in flags:
                util.set_flags(self.wjoin(filename), False, True)

    def wwritedata(self, filename, data):
        return self._filter(self._decodefilterpats, filename, data)

    def transaction(self, desc):
        tr = self._transref and self._transref() or None
        if tr and tr.running():
            return tr.nest()

        # abort here if the journal already exists
        if os.path.exists(self.sjoin("journal")):
            raise error.RepoError(
                _("abandoned transaction found - run hg recover"))

        # save dirstate for rollback
        try:
            ds = self.opener("dirstate").read()
        except IOError:
            ds = ""
        self.opener("journal.dirstate", "w").write(ds)
        self.opener("journal.branch", "w").write(
            encoding.fromlocal(self.dirstate.branch()))
        self.opener("journal.desc", "w").write("%d\n%s\n" % (len(self), desc))

        renames = [(self.sjoin("journal"), self.sjoin("undo")),
                   (self.join("journal.dirstate"), self.join("undo.dirstate")),
                   (self.join("journal.branch"), self.join("undo.branch")),
                   (self.join("journal.desc"), self.join("undo.desc"))]
        tr = transaction.transaction(self.ui.warn, self.sopener,
                                     self.sjoin("journal"),
                                     aftertrans(renames),
                                     self.store.createmode)
        self._transref = weakref.ref(tr)
        return tr

    def recover(self):
        lock = self.lock()
        try:
            if os.path.exists(self.sjoin("journal")):
                self.ui.status(_("rolling back interrupted transaction\n"))
                transaction.rollback(self.sopener, self.sjoin("journal"),
                                     self.ui.warn)
                self.invalidate()
                return True
            else:
                self.ui.warn(_("no interrupted transaction available\n"))
                return False
        finally:
            lock.release()

    def rollback(self, dryrun=False):
        wlock = lock = None
        try:
            wlock = self.wlock()
            lock = self.lock()
            if os.path.exists(self.sjoin("undo")):
                try:
                    args = self.opener("undo.desc", "r").read().splitlines()
                    if len(args) >= 3 and self.ui.verbose:
                        desc = _("rolling back to revision %s"
                                 " (undo %s: %s)\n") % (
                                 int(args[0]) - 1, args[1], args[2])
                    elif len(args) >= 2:
                        desc = _("rolling back to revision %s (undo %s)\n") % (
                               int(args[0]) - 1, args[1])
                except IOError:
                    desc = _("rolling back unknown transaction\n")
                self.ui.status(desc)
                if dryrun:
                    return
                transaction.rollback(self.sopener, self.sjoin("undo"),
                                     self.ui.warn)
                util.rename(self.join("undo.dirstate"), self.join("dirstate"))
                if os.path.exists(self.join('undo.bookmarks')):
                    util.rename(self.join('undo.bookmarks'),
                                self.join('bookmarks'))
                try:
                    branch = self.opener("undo.branch").read()
                    self.dirstate.setbranch(branch)
                except IOError:
                    self.ui.warn(_("Named branch could not be reset, "
                                   "current branch still is: %s\n")
                                 % self.dirstate.branch())
                self.invalidate()
                self.dirstate.invalidate()
                self.destroyed()
            else:
                self.ui.warn(_("no rollback information available\n"))
                return 1
        finally:
            release(lock, wlock)

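    # Illustrative sketch (comment only, not part of the module): how the
    # rollback path above is driven from a repo object. The constructor
    # call shown is an assumption for illustration; the undo.* file names
    # are the ones used in rollback() above.
    #
    #   repo = hg.repository(ui, path)       # hypothetical caller setup
    #   repo.rollback(dryrun=True)           # only prints what would be undone
    #   repo.rollback()                      # restores store, dirstate, branch
    #
    # The bookmarks handling is the new part here: if .hg/undo.bookmarks
    # exists, it is renamed back over .hg/bookmarks, so bookmark positions
    # revert together with the changelog and dirstate.
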
    def invalidatecaches(self):
        self._tags = None
        self._tagtypes = None
        self.nodetagscache = None
        self._branchcache = None # in UTF-8
        self._branchcachetip = None

    def invalidate(self):
        for a in ("changelog", "manifest"):
            if a in self.__dict__:
                delattr(self, a)
        self.invalidatecaches()

    def _lock(self, lockname, wait, releasefn, acquirefn, desc):
        try:
            l = lock.lock(lockname, 0, releasefn, desc=desc)
        except error.LockHeld, inst:
            if not wait:
                raise
            self.ui.warn(_("waiting for lock on %s held by %r\n") %
                         (desc, inst.locker))
            # default to 600 seconds timeout
            l = lock.lock(lockname, int(self.ui.config("ui", "timeout", "600")),
                          releasefn, desc=desc)
        if acquirefn:
            acquirefn()
        return l

    def lock(self, wait=True):
        '''Lock the repository store (.hg/store) and return a weak reference
        to the lock. Use this before modifying the store (e.g. committing or
        stripping). If you are opening a transaction, get a lock as well.)'''
        l = self._lockref and self._lockref()
        if l is not None and l.held:
            l.lock()
            return l

        l = self._lock(self.sjoin("lock"), wait, None, self.invalidate,
                       _('repository %s') % self.origroot)
        self._lockref = weakref.ref(l)
        return l

    def wlock(self, wait=True):
        '''Lock the non-store parts of the repository (everything under
        .hg except .hg/store) and return a weak reference to the lock.
        Use this before modifying files in .hg.'''
        l = self._wlockref and self._wlockref()
        if l is not None and l.held:
            l.lock()
            return l

        l = self._lock(self.join("wlock"), wait, self.dirstate.write,
                       self.dirstate.invalidate, _('working directory of %s') %
                       self.origroot)
        self._wlockref = weakref.ref(l)
        return l

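    # Lock-ordering sketch (comment only, illustrative): callers that need
    # both locks take the working-directory lock before the store lock and
    # release in reverse order, mirroring rollback() and commit():
    #
    #   wlock = repo.wlock()
    #   lock = repo.lock()
    #   try:
    #       ...  # modify the store and .hg files
    #   finally:
    #       release(lock, wlock)
    #
    # Taking wlock first avoids deadlock against other writers that follow
    # the same convention.
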
    def _filecommit(self, fctx, manifest1, manifest2, linkrev, tr, changelist):
        """
        commit an individual file as part of a larger transaction
        """

        fname = fctx.path()
        text = fctx.data()
        flog = self.file(fname)
        fparent1 = manifest1.get(fname, nullid)
        fparent2 = fparent2o = manifest2.get(fname, nullid)

        meta = {}
        copy = fctx.renamed()
        if copy and copy[0] != fname:
            # Mark the new revision of this file as a copy of another
            # file.  This copy data will effectively act as a parent
            # of this new revision.  If this is a merge, the first
            # parent will be the nullid (meaning "look up the copy data")
            # and the second one will be the other parent.  For example:
            #
            # 0 --- 1 --- 3   rev1 changes file foo
            #   \       /     rev2 renames foo to bar and changes it
            #    \- 2 -/      rev3 should have bar with all changes and
            #                      should record that bar descends from
            #                      bar in rev2 and foo in rev1
            #
            # this allows this merge to succeed:
            #
            # 0 --- 1 --- 3   rev4 reverts the content change from rev2
            #   \       /     merging rev3 and rev4 should use bar@rev2
            #    \- 2 --- 4        as the merge base
            #

            cfname = copy[0]
            crev = manifest1.get(cfname)
            newfparent = fparent2

            if manifest2: # branch merge
                if fparent2 == nullid or crev is None: # copied on remote side
                    if cfname in manifest2:
                        crev = manifest2[cfname]
                        newfparent = fparent1

            # find source in nearest ancestor if we've lost track
            if not crev:
                self.ui.debug(" %s: searching for copy revision for %s\n" %
                              (fname, cfname))
                for ancestor in self[None].ancestors():
                    if cfname in ancestor:
                        crev = ancestor[cfname].filenode()
                        break

            if crev:
                self.ui.debug(" %s: copy %s:%s\n" % (fname, cfname, hex(crev)))
                meta["copy"] = cfname
                meta["copyrev"] = hex(crev)
                fparent1, fparent2 = nullid, newfparent
            else:
                self.ui.warn(_("warning: can't find ancestor for '%s' "
                               "copied from '%s'!\n") % (fname, cfname))

        elif fparent2 != nullid:
            # is one parent an ancestor of the other?
            fparentancestor = flog.ancestor(fparent1, fparent2)
            if fparentancestor == fparent1:
                fparent1, fparent2 = fparent2, nullid
            elif fparentancestor == fparent2:
                fparent2 = nullid

        # is the file changed?
        if fparent2 != nullid or flog.cmp(fparent1, text) or meta:
            changelist.append(fname)
            return flog.add(text, meta, tr, linkrev, fparent1, fparent2)

        # are just the flags changed during merge?
        if fparent1 != fparent2o and manifest1.flags(fname) != fctx.flags():
            changelist.append(fname)

        return fparent1

    def commit(self, text="", user=None, date=None, match=None, force=False,
               editor=False, extra={}):
        """Add a new revision to current repository.

        Revision information is gathered from the working directory,
        match can be used to filter the committed files. If editor is
        supplied, it is called to get a commit message.
        """

        def fail(f, msg):
            raise util.Abort('%s: %s' % (f, msg))

        if not match:
            match = matchmod.always(self.root, '')

        if not force:
            vdirs = []
            match.dir = vdirs.append
            match.bad = fail

        wlock = self.wlock()
        try:
            wctx = self[None]
            merge = len(wctx.parents()) > 1

            if (not force and merge and match and
                (match.files() or match.anypats())):
                raise util.Abort(_('cannot partially commit a merge '
                                   '(do not specify files or patterns)'))

            changes = self.status(match=match, clean=force)
            if force:
                changes[0].extend(changes[6]) # mq may commit unchanged files

            # check subrepos
            subs = []
            removedsubs = set()
            for p in wctx.parents():
                removedsubs.update(s for s in p.substate if match(s))
            for s in wctx.substate:
                removedsubs.discard(s)
                if match(s) and wctx.sub(s).dirty():
                    subs.append(s)
            if (subs or removedsubs):
                if (not match('.hgsub') and
                    '.hgsub' in (wctx.modified() + wctx.added())):
                    raise util.Abort(_("can't commit subrepos without .hgsub"))
                if '.hgsubstate' not in changes[0]:
                    changes[0].insert(0, '.hgsubstate')

            # make sure all explicit patterns are matched
            if not force and match.files():
                matched = set(changes[0] + changes[1] + changes[2])

                for f in match.files():
                    if f == '.' or f in matched or f in wctx.substate:
                        continue
                    if f in changes[3]: # missing
                        fail(f, _('file not found!'))
                    if f in vdirs: # visited directory
                        d = f + '/'
                        for mf in matched:
                            if mf.startswith(d):
                                break
                        else:
                            fail(f, _("no match under directory!"))
                    elif f not in self.dirstate:
                        fail(f, _("file not tracked!"))

            if (not force and not extra.get("close") and not merge
                and not (changes[0] or changes[1] or changes[2])
                and wctx.branch() == wctx.p1().branch()):
                return None

            ms = mergemod.mergestate(self)
            for f in changes[0]:
                if f in ms and ms[f] == 'u':
                    raise util.Abort(_("unresolved merge conflicts "
                                       "(see hg resolve)"))

            cctx = context.workingctx(self, text, user, date, extra, changes)
            if editor:
                cctx._text = editor(self, cctx, subs)
            edited = (text != cctx._text)

            # commit subs
            if subs or removedsubs:
                state = wctx.substate.copy()
                for s in sorted(subs):
                    sub = wctx.sub(s)
                    self.ui.status(_('committing subrepository %s\n') %
                                   subrepo.subrelpath(sub))
                    sr = sub.commit(cctx._text, user, date)
                    state[s] = (state[s][0], sr)
                subrepo.writestate(self, state)

            # Save commit message in case this transaction gets rolled back
            # (e.g. by a pretxncommit hook).  Leave the content alone on
            # the assumption that the user will use the same editor again.
            msgfile = self.opener('last-message.txt', 'wb')
            msgfile.write(cctx._text)
            msgfile.close()

            p1, p2 = self.dirstate.parents()
            hookp1, hookp2 = hex(p1), (p2 != nullid and hex(p2) or '')
            try:
                self.hook("precommit", throw=True, parent1=hookp1, parent2=hookp2)
                ret = self.commitctx(cctx, True)
            except:
                if edited:
                    msgfn = self.pathto(msgfile.name[len(self.root)+1:])
                    self.ui.write(
                        _('note: commit message saved in %s\n') % msgfn)
                raise

            # update dirstate and mergestate
            for f in changes[0] + changes[1]:
                self.dirstate.normal(f)
            for f in changes[2]:
                self.dirstate.forget(f)
            self.dirstate.setparents(ret)
            ms.reset()
        finally:
            wlock.release()

        self.hook("commit", node=hex(ret), parent1=hookp1, parent2=hookp2)
        return ret

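    # Hook ordering for the commit path above (comment only, illustrative):
    #
    #   precommit    - fired by commit() before commitctx(); a failure
    #                  (throw=True) aborts before anything is written
    #   pretxncommit - fired by commitctx() inside the open store
    #                  transaction; a failure rolls the transaction back
    #   commit       - fired by commit() after the transaction has closed
    #                  and wlock has been released
    #
    # This is why commit() saves last-message.txt first: a pretxncommit
    # failure discards the new changeset but keeps the edited message.
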
    def commitctx(self, ctx, error=False):
        """Add a new revision to current repository.
        Revision information is passed via the context argument.
        """

        tr = lock = None
        removed = list(ctx.removed())
        p1, p2 = ctx.p1(), ctx.p2()
        m1 = p1.manifest().copy()
        m2 = p2.manifest()
        user = ctx.user()

        lock = self.lock()
        try:
            tr = self.transaction("commit")
            trp = weakref.proxy(tr)

            # check in files
            new = {}
            changed = []
            linkrev = len(self)
            for f in sorted(ctx.modified() + ctx.added()):
                self.ui.note(f + "\n")
                try:
                    fctx = ctx[f]
                    new[f] = self._filecommit(fctx, m1, m2, linkrev, trp,
                                              changed)
                    m1.set(f, fctx.flags())
                except OSError, inst:
                    self.ui.warn(_("trouble committing %s!\n") % f)
                    raise
                except IOError, inst:
                    errcode = getattr(inst, 'errno', errno.ENOENT)
                    if error or errcode and errcode != errno.ENOENT:
                        self.ui.warn(_("trouble committing %s!\n") % f)
                        raise
                    else:
                        removed.append(f)

            # update manifest
            m1.update(new)
            removed = [f for f in sorted(removed) if f in m1 or f in m2]
            drop = [f for f in removed if f in m1]
            for f in drop:
                del m1[f]
            mn = self.manifest.add(m1, trp, linkrev, p1.manifestnode(),
                                   p2.manifestnode(), (new, drop))

            # update changelog
            self.changelog.delayupdate()
            n = self.changelog.add(mn, changed + removed, ctx.description(),
                                   trp, p1.node(), p2.node(),
                                   user, ctx.date(), ctx.extra().copy())
            p = lambda: self.changelog.writepending() and self.root or ""
            xp1, xp2 = p1.hex(), p2 and p2.hex() or ''
            self.hook('pretxncommit', throw=True, node=hex(n), parent1=xp1,
                      parent2=xp2, pending=p)
            self.changelog.finalize(trp)
            tr.close()

            if self._branchcache:
                self.updatebranchcache()
            return n
        finally:
            if tr:
                tr.release()
            lock.release()

1068 def destroyed(self):
1071 def destroyed(self):
1069 '''Inform the repository that nodes have been destroyed.
1072 '''Inform the repository that nodes have been destroyed.
1070 Intended for use by strip and rollback, so there's a common
1073 Intended for use by strip and rollback, so there's a common
1071 place for anything that has to be done after destroying history.'''
1074 place for anything that has to be done after destroying history.'''
1072 # XXX it might be nice if we could take the list of destroyed
1075 # XXX it might be nice if we could take the list of destroyed
1073 # nodes, but I don't see an easy way for rollback() to do that
1076 # nodes, but I don't see an easy way for rollback() to do that
1074
1077
1075 # Ensure the persistent tag cache is updated. Doing it now
1078 # Ensure the persistent tag cache is updated. Doing it now
1076 # means that the tag cache only has to worry about destroyed
1079 # means that the tag cache only has to worry about destroyed
1077 # heads immediately after a strip/rollback. That in turn
1080 # heads immediately after a strip/rollback. That in turn
1078 # guarantees that "cachetip == currenttip" (comparing both rev
1081 # guarantees that "cachetip == currenttip" (comparing both rev
1079 # and node) always means no nodes have been added or destroyed.
1082 # and node) always means no nodes have been added or destroyed.
1080
1083
1081 # XXX this is suboptimal when qrefresh'ing: we strip the current
1084 # XXX this is suboptimal when qrefresh'ing: we strip the current
1082 # head, refresh the tag cache, then immediately add a new head.
1085 # head, refresh the tag cache, then immediately add a new head.
1083 # But I think doing it this way is necessary for the "instant
1086 # But I think doing it this way is necessary for the "instant
1084 # tag cache retrieval" case to work.
1087 # tag cache retrieval" case to work.
1085 self.invalidatecaches()
1088 self.invalidatecaches()
1086
1089
1087 def walk(self, match, node=None):
1090 def walk(self, match, node=None):
1088 '''
1091 '''
1089 walk recursively through the directory tree or a given
1092 walk recursively through the directory tree or a given
1090 changeset, finding all files matched by the match
1093 changeset, finding all files matched by the match
1091 function
1094 function
1092 '''
1095 '''
1093 return self[node].walk(match)
1096 return self[node].walk(match)
1094
1097
    def status(self, node1='.', node2=None, match=None,
               ignored=False, clean=False, unknown=False,
               listsubrepos=False):
        """return status of files between two nodes or node and working directory

        If node1 is None, use the first dirstate parent instead.
        If node2 is None, compare node1 with working directory.
        """

        def mfmatches(ctx):
            mf = ctx.manifest().copy()
            for fn in mf.keys():
                if not match(fn):
                    del mf[fn]
            return mf

        if isinstance(node1, context.changectx):
            ctx1 = node1
        else:
            ctx1 = self[node1]
        if isinstance(node2, context.changectx):
            ctx2 = node2
        else:
            ctx2 = self[node2]

        working = ctx2.rev() is None
        parentworking = working and ctx1 == self['.']
        match = match or matchmod.always(self.root, self.getcwd())
        listignored, listclean, listunknown = ignored, clean, unknown

        # load earliest manifest first for caching reasons
        if not working and ctx2.rev() < ctx1.rev():
            ctx2.manifest()

        if not parentworking:
            def bad(f, msg):
                if f not in ctx1:
                    self.ui.warn('%s: %s\n' % (self.dirstate.pathto(f), msg))
            match.bad = bad

        if working: # we need to scan the working dir
            subrepos = []
            if '.hgsub' in self.dirstate:
                subrepos = ctx1.substate.keys()
            s = self.dirstate.status(match, subrepos, listignored,
                                     listclean, listunknown)
            cmp, modified, added, removed, deleted, unknown, ignored, clean = s

            # check for any possibly clean files
            if parentworking and cmp:
                fixup = []
                # do a full compare of any files that might have changed
                for f in sorted(cmp):
                    if (f not in ctx1 or ctx2.flags(f) != ctx1.flags(f)
                        or ctx1[f].cmp(ctx2[f])):
                        modified.append(f)
                    else:
                        fixup.append(f)

                # update dirstate for files that are actually clean
                if fixup:
                    if listclean:
                        clean += fixup

                    try:
                        # updating the dirstate is optional
                        # so we don't wait on the lock
                        wlock = self.wlock(False)
                        try:
                            for f in fixup:
                                self.dirstate.normal(f)
                        finally:
                            wlock.release()
                    except error.LockError:
                        pass

        if not parentworking:
            mf1 = mfmatches(ctx1)
            if working:
                # we are comparing working dir against non-parent
                # generate a pseudo-manifest for the working dir
                mf2 = mfmatches(self['.'])
                for f in cmp + modified + added:
                    mf2[f] = None
                    mf2.set(f, ctx2.flags(f))
                for f in removed:
                    if f in mf2:
                        del mf2[f]
            else:
                # we are comparing two revisions
                deleted, unknown, ignored = [], [], []
                mf2 = mfmatches(ctx2)

            modified, added, clean = [], [], []
            for fn in mf2:
                if fn in mf1:
                    if (mf1.flags(fn) != mf2.flags(fn) or
                        (mf1[fn] != mf2[fn] and
                         (mf2[fn] or ctx1[fn].cmp(ctx2[fn])))):
                        modified.append(fn)
                    elif listclean:
                        clean.append(fn)
                    del mf1[fn]
                else:
                    added.append(fn)
            removed = mf1.keys()

        r = modified, added, removed, deleted, unknown, ignored, clean

        if listsubrepos:
            for subpath, sub in subrepo.itersubrepos(ctx1, ctx2):
                if working:
                    rev2 = None
                else:
                    rev2 = ctx2.substate[subpath][1]
                try:
                    submatch = matchmod.narrowmatcher(subpath, match)
                    s = sub.status(rev2, match=submatch, ignored=listignored,
                                   clean=listclean, unknown=listunknown,
                                   listsubrepos=True)
                    for rfiles, sfiles in zip(r, s):
                        rfiles.extend("%s/%s" % (subpath, f) for f in sfiles)
                except error.LookupError:
                    self.ui.status(_("skipping missing subrepository: %s\n")
                                   % subpath)

        [l.sort() for l in r]
        return r

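# The mfmatches() helper above narrows a manifest to the files accepted by
# the match predicate. A minimal standalone sketch of the same idea, with a
# plain dict standing in for Mercurial's manifest object (the name
# filter_manifest is illustrative, not part of the Mercurial API):

```python
def filter_manifest(manifest, match):
    """Return a copy of `manifest` keeping only paths accepted by `match`.

    `manifest` is a plain dict mapping file path -> node id, standing in
    for the real manifest; `match` is any predicate on file paths.
    """
    mf = manifest.copy()
    for fn in list(mf):  # list() so we can delete while iterating
        if not match(fn):
            del mf[fn]
    return mf
```

# e.g. filter_manifest({'a.py': 'n1', 'b.txt': 'n2'},
#                      lambda f: f.endswith('.py')) keeps only 'a.py'.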
    def heads(self, start=None):
        heads = self.changelog.heads(start)
        # sort the output in rev descending order
        return sorted(heads, key=self.changelog.rev, reverse=True)

    def branchheads(self, branch=None, start=None, closed=False):
        '''return a (possibly filtered) list of heads for the given branch

        Heads are returned in topological order, from newest to oldest.
        If branch is None, use the dirstate branch.
        If start is not None, return only heads reachable from start.
        If closed is True, return heads that are marked as closed as well.
        '''
        if branch is None:
            branch = self[None].branch()
        branches = self.branchmap()
        if branch not in branches:
            return []
        # the cache returns heads ordered lowest to highest
        bheads = list(reversed(branches[branch]))
        if start is not None:
            # filter out the heads that cannot be reached from startrev
            fbheads = set(self.changelog.nodesbetween([start], bheads)[2])
            bheads = [h for h in bheads if h in fbheads]
        if not closed:
            bheads = [h for h in bheads if
                      ('close' not in self.changelog.read(h)[5])]
        return bheads

    def branches(self, nodes):
        if not nodes:
            nodes = [self.changelog.tip()]
        b = []
        for n in nodes:
            t = n
            while 1:
                p = self.changelog.parents(n)
                if p[1] != nullid or p[0] == nullid:
                    b.append((t, n, p[0], p[1]))
                    break
                n = p[0]
        return b

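# The branches() loop above follows first parents from each node until it
# hits a merge (second parent set) or a root. A minimal standalone sketch of
# one such walk over a toy parent map (NULLID and branch_segment are
# illustrative names, not part of the Mercurial API):

```python
NULLID = b"\0" * 20  # stand-in for nullid: "no parent"

def branch_segment(parents, start):
    """Follow first parents from `start` until hitting a merge or a root.

    `parents` maps node -> (p1, p2). Returns a (start, end, p1, p2)
    tuple shaped like the ones branches() appends.
    """
    t = n = start
    while True:
        p1, p2 = parents[n]
        if p2 != NULLID or p1 == NULLID:
            return (t, n, p1, p2)
        n = p1
```

# For a linear chain a -> b -> c (c a root), branch_segment(parents, 'a')
# walks all the way down and reports ('a', 'c', NULLID, NULLID).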
    def between(self, pairs):
        r = []

        for top, bottom in pairs:
            n, l, i = top, [], 0
            f = 1

            while n != bottom and n != nullid:
                p = self.changelog.parents(n)[0]
                if i == f:
                    l.append(n)
                    f = f * 2
                n = p
                i += 1

            r.append(l)

        return r

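# The between() loop above records the nodes at exponentially growing
# first-parent distances from top (1, 2, 4, 8, ...), which keeps the sample
# logarithmic in the chain length. The same sampling as a standalone sketch
# over a toy first-parent map (sample_between is an illustrative name, not
# part of the Mercurial API):

```python
def sample_between(parents, top, bottom):
    """Walk first parents from `top` toward `bottom`, keeping the nodes
    at distances 1, 2, 4, 8, ... from `top`.

    `parents` maps node -> first parent; the walk stops when it reaches
    `bottom`, mirroring the nullid/bottom check in between().
    """
    n, sampled, i, f = top, [], 0, 1
    while n != bottom and n is not None:
        if i == f:
            sampled.append(n)
            f *= 2
        n = parents[n]
        i += 1
    return sampled
```

# On a 10-node chain 9 -> 8 -> ... -> 0 with bottom 0, the nodes at
# distances 1, 2, 4 and 8 from the top are kept: [8, 7, 5, 1].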
    def pull(self, remote, heads=None, force=False):
        lock = self.lock()
        try:
            tmp = discovery.findcommonincoming(self, remote, heads=heads,
                                               force=force)
            common, fetch, rheads = tmp
            if not fetch:
                self.ui.status(_("no changes found\n"))
                return 0

            if heads is None and fetch == [nullid]:
                self.ui.status(_("requesting all changes\n"))
            elif heads is None and remote.capable('changegroupsubset'):
                # issue1320, avoid a race if remote changed after discovery
                heads = rheads

            if heads is None:
                cg = remote.changegroup(fetch, 'pull')
            else:
                if not remote.capable('changegroupsubset'):
                    raise util.Abort(_("partial pull cannot be done because "
                                       "other repository doesn't support "
                                       "changegroupsubset."))
                cg = remote.changegroupsubset(fetch, heads, 'pull')
            return self.addchangegroup(cg, 'pull', remote.url(), lock=lock)
        finally:
            lock.release()

    def checkpush(self, force, revs):
        """Extensions can override this function if additional checks have
        to be performed before pushing, or call it if they override push
        command.
        """
        pass

    def push(self, remote, force=False, revs=None, newbranch=False):
        '''Push outgoing changesets (limited by revs) from the current
        repository to remote. Return an integer:
          - 0 means HTTP error *or* nothing to push
          - 1 means we pushed and remote head count is unchanged *or*
            we have outgoing changesets but refused to push
          - other values as described by addchangegroup()
        '''
        # there are two ways to push to remote repo:
        #
        # addchangegroup assumes local user can lock remote
        # repo (local filesystem, old ssh servers).
        #
        # unbundle assumes local user cannot lock remote repo (new ssh
        # servers, http servers).

        self.checkpush(force, revs)
        lock = None
        unbundle = remote.capable('unbundle')
        if not unbundle:
            lock = remote.lock()
        try:
            ret = discovery.prepush(self, remote, force, revs, newbranch)
            if ret[0] is None:
                # and here we return 0 for "nothing to push" or 1 for
                # "something to push but I refuse"
                return ret[1]

            cg, remote_heads = ret
            if unbundle:
                # local repo finds heads on server, finds out what revs it must
                # push. once revs transferred, if server finds it has
                # different heads (someone else won commit/push race), server
                # aborts.
                if force:
                    remote_heads = ['force']
                # ssh: return remote's addchangegroup()
                # http: return remote's addchangegroup() or 0 for error
                return remote.unbundle(cg, remote_heads, 'push')
            else:
                # we return an integer indicating remote head count change
                return remote.addchangegroup(cg, 'push', self.url(), lock=lock)
        finally:
            if lock is not None:
                lock.release()

    def changegroupinfo(self, nodes, source):
        if self.ui.verbose or source == 'bundle':
            self.ui.status(_("%d changesets found\n") % len(nodes))
        if self.ui.debugflag:
            self.ui.debug("list of changesets:\n")
            for node in nodes:
                self.ui.debug("%s\n" % hex(node))

    def changegroupsubset(self, bases, heads, source, extranodes=None):
        """Compute a changegroup consisting of all the nodes that are
        descendants of any of the bases and ancestors of any of the heads.
        Return a chunkbuffer object whose read() method will return
        successive changegroup chunks.

        It is fairly complex as determining which filenodes and which
        manifest nodes need to be included for the changeset to be complete
        is non-trivial.

        Another wrinkle is doing the reverse, figuring out which changeset in
        the changegroup a particular filenode or manifestnode belongs to.

        The caller can specify some nodes that must be included in the
        changegroup using the extranodes argument. It should be a dict
        where the keys are the filenames (or 1 for the manifest), and the
        values are lists of (node, linknode) tuples, where node is a wanted
        node and linknode is the changelog node that should be transmitted as
        the linkrev.
        """

        # Set up some initial variables
        # Make it easy to refer to self.changelog
        cl = self.changelog
        # Compute the list of changesets in this changegroup.
        # Some bases may turn out to be superfluous, and some heads may be
        # too. nodesbetween will return the minimal set of bases and heads
        # necessary to re-create the changegroup.
        if not bases:
            bases = [nullid]
        msng_cl_lst, bases, heads = cl.nodesbetween(bases, heads)

        if extranodes is None:
            # can we go through the fast path ?
            heads.sort()
            allheads = self.heads()
            allheads.sort()
            if heads == allheads:
                return self._changegroup(msng_cl_lst, source)

        # slow path
        self.hook('preoutgoing', throw=True, source=source)

        self.changegroupinfo(msng_cl_lst, source)

        # We assume that all ancestors of bases are known
        commonrevs = set(cl.ancestors(*[cl.rev(n) for n in bases]))

        # Make it easy to refer to self.manifest
        mnfst = self.manifest
        # We don't know which manifests are missing yet
        msng_mnfst_set = {}
        # Nor do we know which filenodes are missing.
        msng_filenode_set = {}

        # A changeset always belongs to itself, so the changenode lookup
        # function for a changenode is identity.
        def identity(x):
            return x

        # A function-generating function that sets up the initial environment
        # for the inner function.
        def filenode_collector(changedfiles):
            # This gathers information from each manifestnode included in the
            # changegroup about which filenodes the manifest node references
            # so we can include those in the changegroup too.
            #
            # It also remembers which changenode each filenode belongs to. It
            # does this by assuming that a filenode belongs to the changenode
            # that the first manifest referencing it belongs to.
            def collect_msng_filenodes(mnfstnode):
                r = mnfst.rev(mnfstnode)
                if mnfst.deltaparent(r) in mnfst.parentrevs(r):
                    # If the previous rev is one of the parents,
                    # we only need to see a diff.
                    deltamf = mnfst.readdelta(mnfstnode)
                    # For each line in the delta
                    for f, fnode in deltamf.iteritems():
                        # And if the file is in the list of files we care
                        # about.
                        if f in changedfiles:
                            # Get the changenode this manifest belongs to
                            clnode = msng_mnfst_set[mnfstnode]
                            # Create the set of filenodes for the file if
                            # there isn't one already.
                            ndset = msng_filenode_set.setdefault(f, {})
                            # And set the filenode's changelog node to the
                            # manifest's if it hasn't been set already.
                            ndset.setdefault(fnode, clnode)
                else:
                    # Otherwise we need a full manifest.
                    m = mnfst.read(mnfstnode)
                    # For every file we care about.
                    for f in changedfiles:
                        fnode = m.get(f, None)
                        # If it's in the manifest
                        if fnode is not None:
                            # See comments above.
                            clnode = msng_mnfst_set[mnfstnode]
                            ndset = msng_filenode_set.setdefault(f, {})
                            ndset.setdefault(fnode, clnode)
            return collect_msng_filenodes

        # If we determine that a particular file or manifest node must be a
        # node that the recipient of the changegroup will already have, we can
        # also assume the recipient will have all the parents. This function
        # prunes them from the set of missing nodes.
        def prune(revlog, missingnodes):
            hasset = set()
            # If a 'missing' filenode thinks it belongs to a changenode we
            # assume the recipient must have, then the recipient must have
            # that filenode.
            for n in missingnodes:
                clrev = revlog.linkrev(revlog.rev(n))
                if clrev in commonrevs:
                    hasset.add(n)
            for n in hasset:
                missingnodes.pop(n, None)
            for r in revlog.ancestors(*[revlog.rev(n) for n in hasset]):
                missingnodes.pop(revlog.node(r), None)

        # Add the nodes that were explicitly requested.
        def add_extra_nodes(name, nodes):
            if not extranodes or name not in extranodes:
                return

            for node, linknode in extranodes[name]:
                if node not in nodes:
                    nodes[node] = linknode

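# The prune() helper above drops every "missing" node whose linkrev the
# recipient is assumed to have, together with that node's ancestors. A
# simplified standalone sketch of the same pruning over plain dicts (the
# name prune_known and the precomputed ancestor sets are illustrative
# assumptions; the real code asks the revlog for ancestors):

```python
def prune_known(linkrevs, ancestors, commonrevs, missingnodes):
    """Drop nodes the recipient already has, plus their ancestors.

    `linkrevs` maps node -> changelog rev, `ancestors` maps node -> set
    of ancestor nodes, `missingnodes` maps node -> linknode (mutated in
    place, like prune()'s missingnodes dict).
    """
    hasset = {n for n in missingnodes if linkrevs[n] in commonrevs}
    for n in hasset:
        missingnodes.pop(n, None)
        for a in ancestors[n]:
            missingnodes.pop(a, None)
```

# If 'x' links to a common rev and 'y' is an ancestor of 'x', both are
# pruned and only unrelated nodes survive.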
        # Now that we have all these utility functions to help out and
        # logically divide up the task, generate the group.
        def gengroup():
            # The set of changed files starts empty.
            changedfiles = set()
            collect = changegroup.collector(cl, msng_mnfst_set, changedfiles)

            # Create a changenode group generator that will call our functions
            # back to lookup the owning changenode and collect information.
            group = cl.group(msng_cl_lst, identity, collect)
            for cnt, chnk in enumerate(group):
                yield chnk
                # revlog.group yields three entries per node, so
                # dividing by 3 gives an approximation of how many
                # nodes have been processed.
                self.ui.progress(_('bundling'), cnt / 3,
                                 unit=_('changesets'))
            changecount = cnt / 3
            self.ui.progress(_('bundling'), None)

            prune(mnfst, msng_mnfst_set)
            add_extra_nodes(1, msng_mnfst_set)
            msng_mnfst_lst = msng_mnfst_set.keys()
            # Sort the manifestnodes by revision number.
            msng_mnfst_lst.sort(key=mnfst.rev)
            # Create a generator for the manifestnodes that calls our lookup
            # and data collection functions back.
            group = mnfst.group(msng_mnfst_lst,
                                lambda mnode: msng_mnfst_set[mnode],
                                filenode_collector(changedfiles))
            efiles = {}
            for cnt, chnk in enumerate(group):
                if cnt % 3 == 1:
                    mnode = chnk[:20]
                    efiles.update(mnfst.readdelta(mnode))
                yield chnk
                # see above comment for why we divide by 3
                self.ui.progress(_('bundling'), cnt / 3,
                                 unit=_('manifests'), total=changecount)
            self.ui.progress(_('bundling'), None)
            efiles = len(efiles)

            # These are no longer needed, dereference and toss the memory for
            # them.
            msng_mnfst_lst = None
            msng_mnfst_set.clear()

            if extranodes:
                for fname in extranodes:
                    if isinstance(fname, int):
                        continue
                    msng_filenode_set.setdefault(fname, {})
                    changedfiles.add(fname)
            # Go through all our files in order sorted by name.
            for idx, fname in enumerate(sorted(changedfiles)):
                filerevlog = self.file(fname)
                if not len(filerevlog):
                    raise util.Abort(_("empty or missing revlog for %s") % fname)
                # Toss out the filenodes that the recipient isn't really
                # missing.
                missingfnodes = msng_filenode_set.pop(fname, {})
                prune(filerevlog, missingfnodes)
                add_extra_nodes(fname, missingfnodes)
                # If any filenodes are left, generate the group for them,
                # otherwise don't bother.
                if missingfnodes:
                    yield changegroup.chunkheader(len(fname))
                    yield fname
                    # Sort the filenodes by their revision # (topological order)
                    nodeiter = list(missingfnodes)
                    nodeiter.sort(key=filerevlog.rev)
                    # Create a group generator and only pass in a changenode
                    # lookup function as we need to collect no information
                    # from filenodes.
1581 # from filenodes.
1579 group = filerevlog.group(nodeiter,
1582 group = filerevlog.group(nodeiter,
1580 lambda fnode: missingfnodes[fnode])
1583 lambda fnode: missingfnodes[fnode])
1581 for chnk in group:
1584 for chnk in group:
1582 # even though we print the same progress on
1585 # even though we print the same progress on
1583 # most loop iterations, put the progress call
1586 # most loop iterations, put the progress call
1584 # here so that time estimates (if any) can be updated
1587 # here so that time estimates (if any) can be updated
1585 self.ui.progress(
1588 self.ui.progress(
1586 _('bundling'), idx, item=fname,
1589 _('bundling'), idx, item=fname,
1587 unit=_('files'), total=efiles)
1590 unit=_('files'), total=efiles)
1588 yield chnk
1591 yield chnk
1589 # Signal that no more groups are left.
1592 # Signal that no more groups are left.
1590 yield changegroup.closechunk()
1593 yield changegroup.closechunk()
1591 self.ui.progress(_('bundling'), None)
1594 self.ui.progress(_('bundling'), None)
1592
1595
1593 if msng_cl_lst:
1596 if msng_cl_lst:
1594 self.hook('outgoing', node=hex(msng_cl_lst[0]), source=source)
1597 self.hook('outgoing', node=hex(msng_cl_lst[0]), source=source)
1595
1598
1596 return changegroup.unbundle10(util.chunkbuffer(gengroup()), 'UN')
1599 return changegroup.unbundle10(util.chunkbuffer(gengroup()), 'UN')
1597
1600
1598 def changegroup(self, basenodes, source):
1601 def changegroup(self, basenodes, source):
1599 # to avoid a race we use changegroupsubset() (issue1320)
1602 # to avoid a race we use changegroupsubset() (issue1320)
1600 return self.changegroupsubset(basenodes, self.heads(), source)
1603 return self.changegroupsubset(basenodes, self.heads(), source)
1601
1604
1602 def _changegroup(self, nodes, source):
1605 def _changegroup(self, nodes, source):
1603 """Compute the changegroup of all nodes that we have that a recipient
1606 """Compute the changegroup of all nodes that we have that a recipient
1604 doesn't. Return a chunkbuffer object whose read() method will return
1607 doesn't. Return a chunkbuffer object whose read() method will return
1605 successive changegroup chunks.
1608 successive changegroup chunks.
1606
1609
1607 This is much easier than the previous function as we can assume that
1610 This is much easier than the previous function as we can assume that
1608 the recipient has any changenode we aren't sending them.
1611 the recipient has any changenode we aren't sending them.
1609
1612
1610 nodes is the set of nodes to send"""
1613 nodes is the set of nodes to send"""
1611
1614
1612 self.hook('preoutgoing', throw=True, source=source)
1615 self.hook('preoutgoing', throw=True, source=source)
1613
1616
1614 cl = self.changelog
1617 cl = self.changelog
1615 revset = set([cl.rev(n) for n in nodes])
1618 revset = set([cl.rev(n) for n in nodes])
1616 self.changegroupinfo(nodes, source)
1619 self.changegroupinfo(nodes, source)
1617
1620
1618 def identity(x):
1621 def identity(x):
1619 return x
1622 return x
1620
1623
1621 def gennodelst(log):
1624 def gennodelst(log):
1622 for r in log:
1625 for r in log:
1623 if log.linkrev(r) in revset:
1626 if log.linkrev(r) in revset:
1624 yield log.node(r)
1627 yield log.node(r)
1625
1628
1626 def lookuplinkrev_func(revlog):
1629 def lookuplinkrev_func(revlog):
1627 def lookuplinkrev(n):
1630 def lookuplinkrev(n):
1628 return cl.node(revlog.linkrev(revlog.rev(n)))
1631 return cl.node(revlog.linkrev(revlog.rev(n)))
1629 return lookuplinkrev
1632 return lookuplinkrev
1630
1633
1631 def gengroup():
1634 def gengroup():
1632 '''yield a sequence of changegroup chunks (strings)'''
1635 '''yield a sequence of changegroup chunks (strings)'''
1633 # construct a list of all changed files
1636 # construct a list of all changed files
1634 changedfiles = set()
1637 changedfiles = set()
1635 mmfs = {}
1638 mmfs = {}
1636 collect = changegroup.collector(cl, mmfs, changedfiles)
1639 collect = changegroup.collector(cl, mmfs, changedfiles)
1637
1640
1638 for cnt, chnk in enumerate(cl.group(nodes, identity, collect)):
1641 for cnt, chnk in enumerate(cl.group(nodes, identity, collect)):
1639 # revlog.group yields three entries per node, so
1642 # revlog.group yields three entries per node, so
1640 # dividing by 3 gives an approximation of how many
1643 # dividing by 3 gives an approximation of how many
1641 # nodes have been processed.
1644 # nodes have been processed.
1642 self.ui.progress(_('bundling'), cnt / 3, unit=_('changesets'))
1645 self.ui.progress(_('bundling'), cnt / 3, unit=_('changesets'))
1643 yield chnk
1646 yield chnk
1644 changecount = cnt / 3
1647 changecount = cnt / 3
1645 self.ui.progress(_('bundling'), None)
1648 self.ui.progress(_('bundling'), None)
1646
1649
1647 mnfst = self.manifest
1650 mnfst = self.manifest
1648 nodeiter = gennodelst(mnfst)
1651 nodeiter = gennodelst(mnfst)
1649 efiles = {}
1652 efiles = {}
1650 for cnt, chnk in enumerate(mnfst.group(nodeiter,
1653 for cnt, chnk in enumerate(mnfst.group(nodeiter,
1651 lookuplinkrev_func(mnfst))):
1654 lookuplinkrev_func(mnfst))):
1652 if cnt % 3 == 1:
1655 if cnt % 3 == 1:
1653 mnode = chnk[:20]
1656 mnode = chnk[:20]
1654 efiles.update(mnfst.readdelta(mnode))
1657 efiles.update(mnfst.readdelta(mnode))
1655 # see above comment for why we divide by 3
1658 # see above comment for why we divide by 3
1656 self.ui.progress(_('bundling'), cnt / 3,
1659 self.ui.progress(_('bundling'), cnt / 3,
1657 unit=_('manifests'), total=changecount)
1660 unit=_('manifests'), total=changecount)
1658 yield chnk
1661 yield chnk
1659 efiles = len(efiles)
1662 efiles = len(efiles)
1660 self.ui.progress(_('bundling'), None)
1663 self.ui.progress(_('bundling'), None)
1661
1664
1662 for idx, fname in enumerate(sorted(changedfiles)):
1665 for idx, fname in enumerate(sorted(changedfiles)):
1663 filerevlog = self.file(fname)
1666 filerevlog = self.file(fname)
1664 if not len(filerevlog):
1667 if not len(filerevlog):
1665 raise util.Abort(_("empty or missing revlog for %s") % fname)
1668 raise util.Abort(_("empty or missing revlog for %s") % fname)
1666 nodeiter = gennodelst(filerevlog)
1669 nodeiter = gennodelst(filerevlog)
1667 nodeiter = list(nodeiter)
1670 nodeiter = list(nodeiter)
1668 if nodeiter:
1671 if nodeiter:
1669 yield changegroup.chunkheader(len(fname))
1672 yield changegroup.chunkheader(len(fname))
1670 yield fname
1673 yield fname
1671 lookup = lookuplinkrev_func(filerevlog)
1674 lookup = lookuplinkrev_func(filerevlog)
1672 for chnk in filerevlog.group(nodeiter, lookup):
1675 for chnk in filerevlog.group(nodeiter, lookup):
1673 self.ui.progress(
1676 self.ui.progress(
1674 _('bundling'), idx, item=fname,
1677 _('bundling'), idx, item=fname,
1675 total=efiles, unit=_('files'))
1678 total=efiles, unit=_('files'))
1676 yield chnk
1679 yield chnk
1677 self.ui.progress(_('bundling'), None)
1680 self.ui.progress(_('bundling'), None)
1678
1681
1679 yield changegroup.closechunk()
1682 yield changegroup.closechunk()
1680
1683
1681 if nodes:
1684 if nodes:
1682 self.hook('outgoing', node=hex(nodes[0]), source=source)
1685 self.hook('outgoing', node=hex(nodes[0]), source=source)
1683
1686
1684 return changegroup.unbundle10(util.chunkbuffer(gengroup()), 'UN')
1687 return changegroup.unbundle10(util.chunkbuffer(gengroup()), 'UN')
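The divide-by-3 progress arithmetic in both generators above relies on `revlog.group` emitting three chunks per node, so the enumeration counter divided by three approximates how many nodes have been handled. A standalone sketch of that accounting (the `fake_group` generator and its chunk payloads are invented for illustration; only the three-chunks-per-node shape comes from the source):

```python
# Hypothetical stand-in for revlog.group: yields three chunks per node,
# so cnt // 3 approximates the number of nodes processed so far.
def fake_group(nodes):
    for n in nodes:
        yield "chunk1-%s" % n
        yield "chunk2-%s" % n
        yield "chunk3-%s" % n

processed = []
for cnt, chunk in enumerate(fake_group(["a", "b", "c"])):
    # this is the value ui.progress would be shown on each iteration
    processed.append(cnt // 3)
```

In the Python 2 source above, `cnt / 3` is already integer division; `//` just makes that intent explicit in modern Python.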

    def addchangegroup(self, source, srctype, url, emptyok=False, lock=None):
        """Add the changegroup returned by source.read() to this repo.
        srctype is a string like 'push', 'pull', or 'unbundle'. url is
        the URL of the repo where this changegroup is coming from.
        If lock is not None, the function takes ownership of the lock
        and releases it after the changegroup is added.

        Return an integer summarizing the change to this repo:
        - nothing changed or no source: 0
        - more heads than before: 1+added heads (2..n)
        - fewer heads than before: -1-removed heads (-2..-n)
        - number of heads stays the same: 1
        """
        def csmap(x):
            self.ui.debug("add changeset %s\n" % short(x))
            return len(cl)

        def revmap(x):
            return cl.rev(x)

        if not source:
            return 0

        self.hook('prechangegroup', throw=True, source=srctype, url=url)

        changesets = files = revisions = 0
        efiles = set()

        # write changelog data to temp files so concurrent readers will not see
        # inconsistent view
        cl = self.changelog
        cl.delayupdate()
        oldheads = len(cl.heads())

        tr = self.transaction("\n".join([srctype, urlmod.hidepassword(url)]))
        try:
            trp = weakref.proxy(tr)
            # pull off the changeset group
            self.ui.status(_("adding changesets\n"))
            clstart = len(cl)
            class prog(object):
                step = _('changesets')
                count = 1
                ui = self.ui
                total = None
                def __call__(self):
                    self.ui.progress(self.step, self.count, unit=_('chunks'),
                                     total=self.total)
                    self.count += 1
            pr = prog()
            source.callback = pr

            if (cl.addgroup(source, csmap, trp) is None
                and not emptyok):
                raise util.Abort(_("received changelog group is empty"))
            clend = len(cl)
            changesets = clend - clstart
            for c in xrange(clstart, clend):
                efiles.update(self[c].files())
            efiles = len(efiles)
            self.ui.progress(_('changesets'), None)

            # pull off the manifest group
            self.ui.status(_("adding manifests\n"))
            pr.step = _('manifests')
            pr.count = 1
            pr.total = changesets # manifests <= changesets
            # no need to check for empty manifest group here:
            # if the result of the merge of 1 and 2 is the same in 3 and 4,
            # no new manifest will be created and the manifest group will
            # be empty during the pull
            self.manifest.addgroup(source, revmap, trp)
            self.ui.progress(_('manifests'), None)

            needfiles = {}
            if self.ui.configbool('server', 'validate', default=False):
                # validate incoming csets have their manifests
                for cset in xrange(clstart, clend):
                    mfest = self.changelog.read(self.changelog.node(cset))[0]
                    mfest = self.manifest.readdelta(mfest)
                    # store file nodes we must see
                    for f, n in mfest.iteritems():
                        needfiles.setdefault(f, set()).add(n)

            # process the files
            self.ui.status(_("adding file changes\n"))
            pr.step = 'files'
            pr.count = 1
            pr.total = efiles
            source.callback = None

            while 1:
                f = source.chunk()
                if not f:
                    break
                self.ui.debug("adding %s revisions\n" % f)
                pr()
                fl = self.file(f)
                o = len(fl)
                if fl.addgroup(source, revmap, trp) is None:
                    raise util.Abort(_("received file revlog group is empty"))
                revisions += len(fl) - o
                files += 1
                if f in needfiles:
                    needs = needfiles[f]
                    for new in xrange(o, len(fl)):
                        n = fl.node(new)
                        if n in needs:
                            needs.remove(n)
                    if not needs:
                        del needfiles[f]
            self.ui.progress(_('files'), None)

            for f, needs in needfiles.iteritems():
                fl = self.file(f)
                for n in needs:
                    try:
                        fl.rev(n)
                    except error.LookupError:
                        raise util.Abort(
                            _('missing file data for %s:%s - run hg verify') %
                            (f, hex(n)))

            newheads = len(cl.heads())
            heads = ""
            if oldheads and newheads != oldheads:
                heads = _(" (%+d heads)") % (newheads - oldheads)

            self.ui.status(_("added %d changesets"
                             " with %d changes to %d files%s\n")
                           % (changesets, revisions, files, heads))

            if changesets > 0:
                p = lambda: cl.writepending() and self.root or ""
                self.hook('pretxnchangegroup', throw=True,
                          node=hex(cl.node(clstart)), source=srctype,
                          url=url, pending=p)

            # make changelog see real files again
            cl.finalize(trp)

            tr.close()
        finally:
            tr.release()
            if lock:
                lock.release()

        if changesets > 0:
            # forcefully update the on-disk branch cache
            self.ui.debug("updating the branch cache\n")
            self.updatebranchcache()
            self.hook("changegroup", node=hex(cl.node(clstart)),
                      source=srctype, url=url)

            for i in xrange(clstart, clend):
                self.hook("incoming", node=hex(cl.node(i)),
                          source=srctype, url=url)

        # never return 0 here:
        if newheads < oldheads:
            return newheads - oldheads - 1
        else:
            return newheads - oldheads + 1
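The return convention documented in addchangegroup's docstring can be isolated into a pure function (a sketch for clarity, not part of Mercurial's API):

```python
def summarize_heads(oldheads, newheads):
    # Mirrors the tail of addchangegroup: never return 0 for a processed
    # changegroup; the sign encodes whether heads were added or removed.
    if newheads < oldheads:
        return newheads - oldheads - 1   # fewer heads: -1 - removed (-2..-n)
    return newheads - oldheads + 1       # same or more heads: 1 + added (1..n)
```

Note how an unchanged head count maps to 1 rather than 0, so callers can always distinguish "nothing happened" (0) from "changesets were added without new heads".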


    def stream_in(self, remote, requirements):
        fp = remote.stream_out()
        l = fp.readline()
        try:
            resp = int(l)
        except ValueError:
            raise error.ResponseError(
                _('Unexpected response from remote server:'), l)
        if resp == 1:
            raise util.Abort(_('operation forbidden by server'))
        elif resp == 2:
            raise util.Abort(_('locking the remote repository failed'))
        elif resp != 0:
            raise util.Abort(_('the server sent an unknown error code'))
        self.ui.status(_('streaming all changes\n'))
        l = fp.readline()
        try:
            total_files, total_bytes = map(int, l.split(' ', 1))
        except (ValueError, TypeError):
            raise error.ResponseError(
                _('Unexpected response from remote server:'), l)
        self.ui.status(_('%d files to transfer, %s of data\n') %
                       (total_files, util.bytecount(total_bytes)))
        start = time.time()
        for i in xrange(total_files):
            # XXX doesn't support '\n' or '\r' in filenames
            l = fp.readline()
            try:
                name, size = l.split('\0', 1)
                size = int(size)
            except (ValueError, TypeError):
                raise error.ResponseError(
                    _('Unexpected response from remote server:'), l)
            self.ui.debug('adding %s (%s)\n' % (name, util.bytecount(size)))
            # for backwards compat, name was partially encoded
            ofp = self.sopener(store.decodedir(name), 'w')
            for chunk in util.filechunkiter(fp, limit=size):
                ofp.write(chunk)
            ofp.close()
        elapsed = time.time() - start
        if elapsed <= 0:
            elapsed = 0.001
        self.ui.status(_('transferred %s in %.1f seconds (%s/sec)\n') %
                       (util.bytecount(total_bytes), elapsed,
                        util.bytecount(total_bytes / elapsed)))

        # new requirements = old non-format requirements + new format-related
        # requirements from the streamed-in repository
        requirements.update(set(self.requirements) - self.supportedformats)
        self._applyrequirements(requirements)
        self._writerequirements()

        self.invalidate()
        return len(self.heads()) + 1
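stream_in consumes a simple line-oriented framing: an integer status line, a `<total_files> <total_bytes>` line, then for each file a `<name>\0<size>` line followed by exactly `size` bytes of data. A minimal sketch of a reader for that framing, against an in-memory buffer (the payload bytes below are invented for illustration; the real code streams the data in chunks rather than reading each file whole):

```python
import io

# Fabricated example payload: status 0, one file of 4 bytes named "data/a.i".
payload = b"0\n1 4\ndata/a.i\x004\nabcd"
fp = io.BytesIO(payload)

resp = int(fp.readline().decode())       # 0 = OK; 1/2 are server errors
total_files, total_bytes = map(int, fp.readline().decode().split(' ', 1))

files = {}
for _ in range(total_files):
    # per-file header: name and byte count separated by NUL
    name, size = fp.readline().decode().split('\0', 1)
    files[name] = fp.read(int(size))     # exactly <size> bytes follow
```

The `XXX` comment in the source notes the weakness of this framing: a filename containing `\n` or `\r` would break the `readline`-based parse.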

    def clone(self, remote, heads=[], stream=False):
        '''clone remote repository.

        keyword arguments:
        heads: list of revs to clone (forces use of pull)
        stream: use streaming clone if possible'''

        # now, all clients that can request uncompressed clones can
        # read repo formats supported by all servers that can serve
        # them.

        # if revlog format changes, client will have to check version
        # and format flags on "stream" capability, and use
        # uncompressed only if compatible.

        if stream and not heads:
            # 'stream' means remote revlog format is revlogv1 only
            if remote.capable('stream'):
                return self.stream_in(remote, set(('revlogv1',)))
            # otherwise, 'streamreqs' contains the remote revlog format
            streamreqs = remote.capable('streamreqs')
            if streamreqs:
                streamreqs = set(streamreqs.split(','))
                # if we support it, stream in and adjust our requirements
                if not streamreqs - self.supportedformats:
                    return self.stream_in(remote, streamreqs)
        return self.pull(remote, heads)

    def pushkey(self, namespace, key, old, new):
        return pushkey.push(self, namespace, key, old, new)

    def listkeys(self, namespace):
        return pushkey.list(self, namespace)

# used to avoid circular references so destructors work
def aftertrans(files):
    renamefiles = [tuple(t) for t in files]
    def a():
        for src, dest in renamefiles:
            util.rename(src, dest)
    return a

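aftertrans is a module-level closure rather than a repository method: the transaction's post-close callback captures only the rename list, never the repo, so (per the comment above) no reference cycle keeps destructors from running. The pattern in isolation, with `os.rename` standing in for `util.rename`:

```python
import os
import tempfile

def aftertrans(files):
    # capture the (src, dest) pairs in a closure; return a zero-argument
    # callable to run once the transaction is finished
    renamefiles = [tuple(t) for t in files]
    def a():
        for src, dest in renamefiles:
            os.rename(src, dest)
    return a

# demonstrate the deferred rename (journal -> undo, as a repo would do)
d = tempfile.mkdtemp()
src = os.path.join(d, 'journal')
dest = os.path.join(d, 'undo')
open(src, 'w').close()

finish = aftertrans([(src, dest)])
finish()  # performs the queued renames
```

After `finish()` runs, `journal` has been renamed to `undo`; until then, nothing on disk has moved.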
def instance(ui, path, create):
    return localrepository(ui, util.drop_scheme('file', path), create)

def islocal(path):
    return True
@@ -1,100 +1,102
  $ echo "[extensions]" >> $HGRCPATH
  $ echo "bookmarks=" >> $HGRCPATH
  $ echo "mq=" >> $HGRCPATH

  $ hg init

  $ echo qqq>qqq.txt

rollback dry run without rollback information

  $ hg rollback
  no rollback information available
  [1]

add file

  $ hg add
  adding qqq.txt

commit first revision

  $ hg ci -m 1

set bookmark

  $ hg book test

  $ echo www>>qqq.txt

commit second revision

  $ hg ci -m 2

set bookmark

  $ hg book test2

update to -2

  $ hg update -r -2
  1 files updated, 0 files merged, 0 files removed, 0 files unresolved

  $ echo eee>>qqq.txt

commit new head

  $ hg ci -m 3
  created new head

bookmarks updated?

  $ hg book
     test                      1:25e1ee7a0081
     test2                     1:25e1ee7a0081

strip to revision 1

  $ hg strip 1
  saved backup bundle to $TESTTMP/.hg/strip-backup/*-backup.hg (glob)

list bookmarks

  $ hg book
   * test                      1:8cf31af87a2b
   * test2                     1:8cf31af87a2b

immediate rollback and reentrancy issue

  $ echo "mq=!" >> $HGRCPATH
  $ hg init repo
  $ cd repo
  $ echo a > a
  $ hg ci -Am adda
  adding a
  $ echo b > b
  $ hg ci -Am addb
  adding b
  $ hg bookmarks markb
  $ hg rollback
  rolling back to revision 0 (undo commit)

are you there?

  $ hg bookmarks
  no bookmarks set

can you be added again?

  $ hg bookmarks markb
  $ hg bookmarks
   * markb                    0:07f494440405

rollback dry run with rollback information

  $ hg rollback -n
  no rollback information available
  [1]
  $ hg bookmarks
   * markb                    0:07f494440405

  $ cd ..
