bundlespec: move computing the bundle contentopts in parsebundlespec...
Boris Feld
r37182:6c7a6b04 default
@@ -1,393 +1,391 @@
# lfs - hash-preserving large file support using Git-LFS protocol
#
# Copyright 2017 Facebook, Inc.
#
# This software may be used and distributed according to the terms of the
# GNU General Public License version 2 or any later version.

"""lfs - large file support (EXPERIMENTAL)

This extension allows large files to be tracked outside of the normal
repository storage and stored on a centralized server, similar to the
``largefiles`` extension. The ``git-lfs`` protocol is used when
communicating with the server, so existing git infrastructure can be
harnessed. Even though the files are stored outside of the repository,
they are still integrity checked in the same manner as normal files.

The files stored outside of the repository are downloaded on demand,
which reduces the time to clone, and possibly the local disk usage.
This changes fundamental workflows in a DVCS, so careful thought
should be given before deploying it. :hg:`convert` can be used to
convert LFS repositories to normal repositories that no longer
require this extension, and do so without changing the commit hashes.
This allows the extension to be disabled if the centralized workflow
becomes burdensome. However, the pre and post convert clones will
not be able to communicate with each other unless the extension is
enabled on both.

To start a new repository, or to add LFS files to an existing one, just
create an ``.hglfs`` file as described below in the root directory of
the repository. Typically, this file should be put under version
control, so that the settings will propagate to other repositories with
push and pull. During any commit, Mercurial will consult this file to
determine if an added or modified file should be stored externally. The
type of storage depends on the characteristics of the file at each
commit. A file that is near a size threshold may switch back and forth
between LFS and normal storage, as needed.

Alternately, both normal repositories and largefile controlled
repositories can be converted to LFS by using :hg:`convert` and the
``lfs.track`` config option described below. The ``.hglfs`` file
should then be created and added, to control subsequent LFS selection.
The hashes are also unchanged in this case. The LFS and non-LFS
repositories can be distinguished because the LFS repository will
abort any command if this extension is disabled.

Committed LFS files are held locally, until the repository is pushed.
Prior to pushing the normal repository data, the LFS files that are
tracked by the outgoing commits are automatically uploaded to the
configured central server. No LFS files are transferred on
:hg:`pull` or :hg:`clone`. Instead, the files are downloaded on
demand as they need to be read, if a cached copy cannot be found
locally. Both committing and downloading an LFS file will link the
file to a usercache, to speed up future access. See the `usercache`
config setting described below.

.hglfs::

    The extension reads its configuration from a versioned ``.hglfs``
    configuration file found in the root of the working directory. The
    ``.hglfs`` file uses the same syntax as all other Mercurial
    configuration files. It uses a single section, ``[track]``.

    The ``[track]`` section specifies which files are stored as LFS (or
    not). Each line is keyed by a file pattern, with a predicate value.
    The first file pattern match is used, so put more specific patterns
    first. The available predicates are ``all()``, ``none()``, and
    ``size()``. See "hg help filesets.size" for the latter.

    Example versioned ``.hglfs`` file::

      [track]
      # No Makefile or python file, anywhere, will be LFS
      **Makefile = none()
      **.py = none()

      **.zip = all()
      **.exe = size(">1MB")

      # Catchall for everything not matched above
      ** = size(">10MB")

Configs::

    [lfs]
    # Remote endpoint. Multiple protocols are supported:
    # - http(s)://user:pass@example.com/path
    #   git-lfs endpoint
    # - file:///tmp/path
    #   local filesystem, usually for testing
    # if unset, lfs will prompt setting this when it must use this value.
    # (default: unset)
    url = https://example.com/repo.git/info/lfs

    # Which files to track in LFS. Path tests are "**.extname" for file
    # extensions, and "path:under/some/directory" for path prefix. Both
    # are relative to the repository root.
    # File size can be tested with the "size()" fileset, and tests can be
    # joined with fileset operators. (See "hg help filesets.operators".)
    #
    # Some examples:
    # - all()                       # everything
    # - none()                      # nothing
    # - size(">20MB")               # larger than 20MB
    # - !**.txt                     # anything not a *.txt file
    # - **.zip | **.tar.gz | **.7z  # some types of compressed files
    # - path:bin                    # files under "bin" in the project root
    # - (**.php & size(">2MB")) | (**.js & size(">5MB")) | **.tar.gz
    #     | (path:bin & !path:/bin/README) | size(">1GB")
    # (default: none())
    #
    # This is ignored if there is a tracked '.hglfs' file, and this setting
    # will eventually be deprecated and removed.
    track = size(">10M")

    # how many times to retry before giving up on transferring an object
    retry = 5

    # the local directory to store lfs files for sharing across local clones.
    # If not set, the cache is located in an OS specific cache location.
    usercache = /path/to/global/cache
"""

from __future__ import absolute_import

from mercurial.i18n import _

from mercurial import (
    bundle2,
    changegroup,
    cmdutil,
    config,
    context,
    error,
    exchange,
    extensions,
    filelog,
    fileset,
    hg,
    localrepo,
    minifileset,
    node,
    pycompat,
    registrar,
    revlog,
    scmutil,
    templateutil,
    upgrade,
    util,
    vfs as vfsmod,
    wireproto,
    wireprotoserver,
)

from . import (
    blobstore,
    wireprotolfsserver,
    wrapper,
)

# Note for extension authors: ONLY specify testedwith = 'ships-with-hg-core' for
# extensions which SHIP WITH MERCURIAL. Non-mainline extensions should
# be specifying the version(s) of Mercurial they are tested with, or
# leave the attribute unspecified.
testedwith = 'ships-with-hg-core'

configtable = {}
configitem = registrar.configitem(configtable)

configitem('experimental', 'lfs.user-agent',
    default=None,
)
configitem('experimental', 'lfs.worker-enable',
    default=False,
)

configitem('lfs', 'url',
    default=None,
)
configitem('lfs', 'usercache',
    default=None,
)
# Deprecated
configitem('lfs', 'threshold',
    default=None,
)
configitem('lfs', 'track',
    default='none()',
)
configitem('lfs', 'retry',
    default=5,
)

cmdtable = {}
command = registrar.command(cmdtable)

templatekeyword = registrar.templatekeyword()
filesetpredicate = registrar.filesetpredicate()

def featuresetup(ui, supported):
    # don't die on seeing a repo with the lfs requirement
    supported |= {'lfs'}

def uisetup(ui):
    localrepo.featuresetupfuncs.add(featuresetup)

def reposetup(ui, repo):
    # Nothing to do with a remote repo
    if not repo.local():
        return

    repo.svfs.lfslocalblobstore = blobstore.local(repo)
    repo.svfs.lfsremoteblobstore = blobstore.remote(repo)

    class lfsrepo(repo.__class__):
        @localrepo.unfilteredmethod
        def commitctx(self, ctx, error=False):
            repo.svfs.options['lfstrack'] = _trackedmatcher(self)
            return super(lfsrepo, self).commitctx(ctx, error)

    repo.__class__ = lfsrepo

    if 'lfs' not in repo.requirements:
        def checkrequireslfs(ui, repo, **kwargs):
            if 'lfs' not in repo.requirements:
                last = kwargs.get(r'node_last')
                _bin = node.bin
                if last:
                    s = repo.set('%n:%n', _bin(kwargs[r'node']), _bin(last))
                else:
                    s = repo.set('%n', _bin(kwargs[r'node']))
                match = repo.narrowmatch()
                for ctx in s:
                    # TODO: is there a way to just walk the files in the commit?
                    if any(ctx[f].islfs() for f in ctx.files()
                           if f in ctx and match(f)):
                        repo.requirements.add('lfs')
                        repo._writerequirements()
                        repo.prepushoutgoinghooks.add('lfs', wrapper.prepush)
                        break

        ui.setconfig('hooks', 'commit.lfs', checkrequireslfs, 'lfs')
        ui.setconfig('hooks', 'pretxnchangegroup.lfs', checkrequireslfs, 'lfs')
    else:
        repo.prepushoutgoinghooks.add('lfs', wrapper.prepush)

def _trackedmatcher(repo):
    """Return a function (path, size) -> bool indicating whether or not to
    track a given file with lfs."""
    if not repo.wvfs.exists('.hglfs'):
        # No '.hglfs' in wdir. Fallback to config for now.
        trackspec = repo.ui.config('lfs', 'track')

        # deprecated config: lfs.threshold
        threshold = repo.ui.configbytes('lfs', 'threshold')
        if threshold:
            fileset.parse(trackspec)  # make sure syntax errors are confined
            trackspec = "(%s) | size('>%d')" % (trackspec, threshold)

        return minifileset.compile(trackspec)

    data = repo.wvfs.tryread('.hglfs')
    if not data:
        return lambda p, s: False

    # Parse errors here will abort with a message that points to the .hglfs file
    # and line number.
    cfg = config.config()
    cfg.parse('.hglfs', data)

    try:
        rules = [(minifileset.compile(pattern), minifileset.compile(rule))
                 for pattern, rule in cfg.items('track')]
    except error.ParseError as e:
        # The original exception gives no indicator that the error is in the
        # .hglfs file, so add that.

        # TODO: See if the line number of the file can be made available.
        raise error.Abort(_('parse error in .hglfs: %s') % e)

    def _match(path, size):
        for pat, rule in rules:
            if pat(path, size):
                return rule(path, size)

        return False

    return _match

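The `_trackedmatcher` helper above compiles each `[track]` line into a (pattern, rule) pair and applies the first pattern that matches. A standalone sketch of that first-match-wins evaluation, with plain-Python stand-ins for the ``none()``, ``all()`` and ``size()`` predicates (this is an illustration, not Mercurial's `minifileset` implementation):

```python
# Minimal sketch of first-match-wins LFS tracking rules, mirroring _match
# above. Each rule pairs a path-pattern predicate with a storage predicate;
# the first pattern that matches decides, and later rules are never consulted.

def compile_rules(rules):
    """rules: list of (pattern_predicate, storage_predicate) pairs."""
    def match(path, size):
        for pat, rule in rules:
            if pat(path, size):
                return rule(path, size)
        return False  # no pattern matched: not tracked by LFS
    return match

# Hand-rolled stand-ins for the ``none()``, ``all()`` and ``size()`` predicates.
none_ = lambda path, size: False
all_ = lambda path, size: True
larger_than = lambda limit: (lambda path, size: size > limit)

# Roughly equivalent to an .hglfs [track] section of:
#   **.py = none()
#   **.zip = all()
#   ** = size(">10MB")
match = compile_rules([
    (lambda p, s: p.endswith('.py'), none_),
    (lambda p, s: p.endswith('.zip'), all_),
    (lambda p, s: True, larger_than(10 * 1024 * 1024)),
])
```

Because the `**.py` rule comes first, even a gigabyte-sized `.py` file stays in normal storage, exactly the "put more specific patterns first" behaviour the docstring describes.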
def wrapfilelog(filelog):
    wrapfunction = extensions.wrapfunction

    wrapfunction(filelog, 'addrevision', wrapper.filelogaddrevision)
    wrapfunction(filelog, 'renamed', wrapper.filelogrenamed)
    wrapfunction(filelog, 'size', wrapper.filelogsize)

def extsetup(ui):
    wrapfilelog(filelog.filelog)

    wrapfunction = extensions.wrapfunction

    wrapfunction(cmdutil, '_updatecatformatter', wrapper._updatecatformatter)
    wrapfunction(scmutil, 'wrapconvertsink', wrapper.convertsink)

    wrapfunction(upgrade, '_finishdatamigration',
                 wrapper.upgradefinishdatamigration)

    wrapfunction(upgrade, 'preservedrequirements',
                 wrapper.upgraderequirements)

    wrapfunction(upgrade, 'supporteddestrequirements',
                 wrapper.upgraderequirements)

    wrapfunction(changegroup,
                 'allsupportedversions',
                 wrapper.allsupportedversions)

    wrapfunction(exchange, 'push', wrapper.push)
    wrapfunction(wireproto, '_capabilities', wrapper._capabilities)
    wrapfunction(wireprotoserver, 'handlewsgirequest',
                 wireprotolfsserver.handlewsgirequest)

    wrapfunction(context.basefilectx, 'cmp', wrapper.filectxcmp)
    wrapfunction(context.basefilectx, 'isbinary', wrapper.filectxisbinary)
    context.basefilectx.islfs = wrapper.filectxislfs

    revlog.addflagprocessor(
        revlog.REVIDX_EXTSTORED,
        (
            wrapper.readfromstore,
            wrapper.writetostore,
            wrapper.bypasscheckhash,
        ),
    )

    wrapfunction(hg, 'clone', wrapper.hgclone)
    wrapfunction(hg, 'postshare', wrapper.hgpostshare)

    scmutil.fileprefetchhooks.add('lfs', wrapper._prefetchfiles)

    # Make bundle choose changegroup3 instead of changegroup2. This affects
    # "hg bundle" command. Note: it does not cover all bundle formats like
    # "packed1". Using "packed1" with lfs will likely cause trouble.
-    names = [k for k, v in exchange._bundlespeccgversions.items() if v == '02']
-    for k in names:
-        exchange._bundlespeccgversions[k] = '03'
+    exchange._bundlespeccontentopts["v2"]["cg.version"] = "03"

    # bundlerepo uses "vfsmod.readonlyvfs(othervfs)", we need to make sure lfs
    # options and blob stores are passed from othervfs to the new readonlyvfs.
    wrapfunction(vfsmod.readonlyvfs, '__init__', wrapper.vfsinit)

    # when writing a bundle via "hg bundle" command, upload related LFS blobs
    wrapfunction(bundle2, 'writenewbundle', wrapper.writenewbundle)

@filesetpredicate('lfs()', callstatus=True)
def lfsfileset(mctx, x):
    """File that uses LFS storage."""
    # i18n: "lfs" is a keyword
    fileset.getargs(x, 0, 0, _("lfs takes no arguments"))
    return [f for f in mctx.subset
            if wrapper.pointerfromctx(mctx.ctx, f, removed=True) is not None]

@templatekeyword('lfs_files', requires={'ctx'})
def lfsfiles(context, mapping):
    """List of strings. All files modified, added, or removed by this
    changeset."""
    ctx = context.resource(mapping, 'ctx')

    pointers = wrapper.pointersfromctx(ctx, removed=True)  # {path: pointer}
    files = sorted(pointers.keys())

    def pointer(v):
        # In the file spec, version is first and the other keys are sorted.
        sortkeyfunc = lambda x: (x[0] != 'version', x)
        items = sorted(pointers[v].iteritems(), key=sortkeyfunc)
        return util.sortdict(items)

    makemap = lambda v: {
        'file': v,
        'lfsoid': pointers[v].oid() if pointers[v] else None,
        'lfspointer': templateutil.hybriddict(pointer(v)),
    }

    # TODO: make the separator ', '?
    f = templateutil._showcompatlist(context, mapping, 'lfs_file', files)
    return templateutil.hybrid(f, files, makemap, pycompat.identity)

@command('debuglfsupload',
         [('r', 'rev', [], _('upload large files introduced by REV'))])
def debuglfsupload(ui, repo, **opts):
    """upload lfs blobs added by the working copy parent or given revisions"""
    revs = opts.get(r'rev', [])
    pointers = wrapper.extractpointers(repo, scmutil.revrange(repo, revs))
    wrapper.uploadblobs(repo, pointers)
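The one-line hunk in `extsetup` above is the part of this commit that touches lfs: the changegroup version for a bundlespec now lives in the per-spec `_bundlespeccontentopts` table rather than the flat `_bundlespeccgversions` map, so the override becomes a single targeted assignment. A toy model of the two layouts (illustrative dicts only, not Mercurial's actual tables or key sets):

```python
# Old shape: one flat map from bundlespec name to changegroup version.
# Forcing changegroup3 meant scanning for every name mapped to '02' and
# rewriting each entry.
bundlespeccgversions = {'v1': '01', 'v2': '02', 'bundle2': '02'}
names = [k for k, v in bundlespeccgversions.items() if v == '02']
for k in names:
    bundlespeccgversions[k] = '03'

# New shape: content options are grouped per bundlespec, so one assignment
# upgrades the "v2" spec without iterating over unrelated entries.
bundlespeccontentopts = {
    'v1': {'cg.version': '01'},
    'v2': {'cg.version': '02'},
}
bundlespeccontentopts['v2']['cg.version'] = '03'
```

Grouping the options per spec also leaves room for other per-spec content options (the point of the parsebundlespec refactor named in the commit message) instead of one parallel map per option.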
@@ -1,2225 +1,2228 @@
1 # bundle2.py - generic container format to transmit arbitrary data.
1 # bundle2.py - generic container format to transmit arbitrary data.
2 #
2 #
3 # Copyright 2013 Facebook, Inc.
3 # Copyright 2013 Facebook, Inc.
4 #
4 #
5 # This software may be used and distributed according to the terms of the
5 # This software may be used and distributed according to the terms of the
6 # GNU General Public License version 2 or any later version.
6 # GNU General Public License version 2 or any later version.
7 """Handling of the new bundle2 format
7 """Handling of the new bundle2 format
8
8
9 The goal of bundle2 is to act as an atomically packet to transmit a set of
9 The goal of bundle2 is to act as an atomically packet to transmit a set of
10 payloads in an application agnostic way. It consist in a sequence of "parts"
10 payloads in an application agnostic way. It consist in a sequence of "parts"
11 that will be handed to and processed by the application layer.
11 that will be handed to and processed by the application layer.
12
12
13
13
14 General format architecture
14 General format architecture
15 ===========================
15 ===========================
16
16
17 The format is architectured as follow
17 The format is architectured as follow
18
18
19 - magic string
19 - magic string
20 - stream level parameters
20 - stream level parameters
21 - payload parts (any number)
21 - payload parts (any number)
22 - end of stream marker.
22 - end of stream marker.
23
23
24 the Binary format
24 the Binary format
25 ============================
25 ============================
26
26
27 All numbers are unsigned and big-endian.
27 All numbers are unsigned and big-endian.
28
28
29 stream level parameters
29 stream level parameters
30 ------------------------
30 ------------------------
31
31
32 Binary format is as follow
32 Binary format is as follow
33
33
:params size: int32

  The total number of bytes used by the parameters

:params value: arbitrary number of bytes

  A blob of `params size` containing the serialized version of all stream level
  parameters.

The blob contains a space separated list of parameters. Parameters with a
value are stored in the form `<name>=<value>`. Both name and value are
urlquoted.

Empty names are forbidden.

A name MUST start with a letter. If this first letter is lower case, the
parameter is advisory and can be safely ignored. However, when the first
letter is capital, the parameter is mandatory and the bundling process MUST
stop if it is unable to process it.

Stream parameters use a simple textual format for two main reasons:

- Stream level parameters should remain simple, and we want to discourage
  overly complex usage.
- Textual data allows easy human inspection of a bundle2 header in case of
  trouble.

Any application level options MUST go into a bundle2 part instead.
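The stream-parameter rules above (space separated, urlquoted, leading-capital
means mandatory) can be sketched as a small standalone parser. This is an
illustration only, not this module's actual code; the function name
`parse_stream_params` is invented for the example, and `urllib.parse` stands
in for the module's `urlreq` wrapper.

```python
from urllib.parse import unquote

def parse_stream_params(blob):
    """Split a stream-parameter blob into (mandatory, advisory) dicts.

    Entries are space separated, urlquoted `<name>` or `<name>=<value>`
    tokens; a name starting with a capital letter marks it as mandatory.
    Parameters without a value map to None.
    """
    mandatory, advisory = {}, {}
    for entry in blob.split(' '):
        if not entry:
            continue
        name, sep, value = entry.partition('=')
        name, value = unquote(name), unquote(value)
        if not name or not name[0].isalpha():
            # names must start with a letter per the format above
            raise ValueError('malformed stream parameter: %r' % entry)
        target = mandatory if name[0].isupper() else advisory
        target[name] = value if sep else None
    return mandatory, advisory
```

Because the format is textual, a header like `'Compression=BZ checkheads'`
stays readable to a human while still round-tripping cleanly through urlquote.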

Payload part
------------------------

The binary format is as follows:

:header size: int32

  The total number of bytes used by the part header. When the header is empty
  (size = 0) this is interpreted as the end of stream marker.

:header:

  The header defines how to interpret the part. It contains two pieces of
  data: the part type, and the part parameters.

  The part type is used to route to an application level handler that can
  interpret the payload.

  Part parameters are passed to the application level handler. They are
  meant to convey information that will help the application level object
  interpret the part payload.

  The binary format of the header is as follows:

  :typesize: (one byte)

  :parttype: alphanumerical part name (restricted to [a-zA-Z0-9_:-]*)

  :partid: A 32-bit integer (unique in the bundle) that can be used to refer
           to this part.

  :parameters:

    A part's parameters may have arbitrary content; the binary structure is::

        <mandatory-count><advisory-count><param-sizes><param-data>

    :mandatory-count: 1 byte, number of mandatory parameters

    :advisory-count: 1 byte, number of advisory parameters

    :param-sizes:

      N pairs of bytes, where N is the total number of parameters. Each
      pair contains (<size-of-key>, <size-of-value>) for one parameter.

    :param-data:

      A blob of bytes from which each parameter key and value can be
      retrieved using the list of size pairs stored in the previous
      field.

      Mandatory parameters come first, then the advisory ones.

      Each parameter's key MUST be unique within the part.

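The header layout above maps directly onto the big-endian struct formats this
module defines (`>B`, `>I`, `>BB`, ...). As a minimal sketch, assuming the
header blob has already been read off the stream (the `header size` int32
stripped), it can be decoded like this; `parse_part_header` is an invented
name for illustration, not the module's own unbundling code:

```python
import struct

def parse_part_header(header):
    """Decode a part header blob into (parttype, partid, params).

    params maps key -> (value, is_mandatory); mandatory parameters are
    stored first in the blob, so the first `mancount` entries are mandatory.
    """
    offset = 0
    typesize = struct.unpack_from('>B', header, offset)[0]   # :typesize:
    offset += 1
    parttype = header[offset:offset + typesize].decode('ascii')
    offset += typesize
    partid = struct.unpack_from('>I', header, offset)[0]     # :partid:
    offset += 4
    mancount, advcount = struct.unpack_from('>BB', header, offset)
    offset += 2
    total = mancount + advcount
    # one (key size, value size) byte pair per parameter
    sizes = struct.unpack_from('>' + 'BB' * total, header, offset)
    offset += 2 * total
    params = {}
    for i in range(total):
        ksize, vsize = sizes[2 * i], sizes[2 * i + 1]
        key = header[offset:offset + ksize].decode('ascii')
        offset += ksize
        value = header[offset:offset + vsize]
        offset += vsize
        params[key] = (value, i < mancount)
    return parttype, partid, params
```

Note how the variable-width `param-sizes` field forces the format string to
be built dynamically, which is exactly what `_makefpartparamsizes` below does.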
:payload:

    The payload is a series of `<chunksize><chunkdata>`.

    `chunksize` is an int32, `chunkdata` are plain bytes (as many as
    `chunksize` says). The payload part is concluded by a zero size chunk.

    The current implementation always produces either zero or one chunk.
    This is an implementation limitation that will ultimately be lifted.

    `chunksize` can be negative to trigger special case processing. No such
    processing is in place yet.
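The chunk framing can be sketched as a small reader. This is an illustrative
helper, not part of this module; it assumes a file-like object positioned at
the first chunk, and rejects the negative `chunksize` special cases since no
such processing exists yet:

```python
import io
import struct

def iter_payload_chunks(fh):
    """Yield payload chunks from `fh` until the zero-size terminator.

    Each chunk is a big-endian int32 size followed by that many bytes;
    a size of zero ends the payload.
    """
    while True:
        chunksize = struct.unpack('>i', fh.read(4))[0]
        if chunksize == 0:
            # end-of-payload marker
            return
        if chunksize < 0:
            raise ValueError('special chunk processing not supported')
        data = fh.read(chunksize)
        if len(data) != chunksize:
            raise EOFError('truncated chunk')
        yield data
```

The zero-size terminator lets a consumer skip an unknown part without any
out-of-band length information, which is what keeps the channel usable after
an abort (see "Bundle processing" below).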

Bundle processing
============================

Each part is processed in order using a "part handler". Handlers are
registered for a certain part type.

The matching of a part to its handler is case insensitive. The case of the
part type is used to know if a part is mandatory or advisory. If the part
type contains any uppercase character it is considered mandatory. When no
handler is known for a mandatory part, the process is aborted and an
exception is raised. If the part is advisory and no handler is known, the
part is ignored. When the process is aborted, the full bundle is still read
from the stream to keep the channel usable. But none of the parts read after
an abort are processed. In the future, dropping the stream may become an
option for channels we do not care to preserve.
"""

from __future__ import absolute_import, division

import collections
import errno
import os
import re
import string
import struct
import sys

from .i18n import _
from . import (
    bookmarks,
    changegroup,
    encoding,
    error,
    node as nodemod,
    obsolete,
    phases,
    pushkey,
    pycompat,
    streamclone,
    tags,
    url,
    util,
)
from .utils import (
    stringutil,
)

urlerr = util.urlerr
urlreq = util.urlreq

_pack = struct.pack
_unpack = struct.unpack

_fstreamparamsize = '>i'
_fpartheadersize = '>i'
_fparttypesize = '>B'
_fpartid = '>I'
_fpayloadsize = '>i'
_fpartparamcount = '>BB'

preferedchunksize = 32768

_parttypeforbidden = re.compile('[^a-zA-Z0-9_:-]')

def outdebug(ui, message):
    """debug regarding output stream (bundling)"""
    if ui.configbool('devel', 'bundle2.debug'):
        ui.debug('bundle2-output: %s\n' % message)

def indebug(ui, message):
    """debug on input stream (unbundling)"""
    if ui.configbool('devel', 'bundle2.debug'):
        ui.debug('bundle2-input: %s\n' % message)

def validateparttype(parttype):
    """raise ValueError if a parttype contains invalid characters"""
    if _parttypeforbidden.search(parttype):
        raise ValueError(parttype)

def _makefpartparamsizes(nbparams):
    """return a struct format to read part parameter sizes

    The number of parameters is variable so we need to build that format
    dynamically.
    """
    return '>' + ('BB' * nbparams)

parthandlermapping = {}

def parthandler(parttype, params=()):
    """decorator that registers a function as a bundle2 part handler

    eg::

        @parthandler('myparttype', ('mandatory', 'param', 'handled'))
        def myparttypehandler(...):
            '''process a part of type "my part".'''
            ...
    """
    validateparttype(parttype)
    def _decorator(func):
        lparttype = parttype.lower() # enforce lower case matching.
        assert lparttype not in parthandlermapping
        parthandlermapping[lparttype] = func
        func.params = frozenset(params)
        return func
    return _decorator

class unbundlerecords(object):
    """keep a record of what happens during an unbundle

    New records are added using `records.add('cat', obj)`, where 'cat' is a
    category of record and obj is an arbitrary object.

    `records['cat']` will return all entries of this category 'cat'.

    Iterating on the object itself will yield `('category', obj)` tuples
    for all entries.

    All iteration happens in chronological order.
    """

    def __init__(self):
        self._categories = {}
        self._sequences = []
        self._replies = {}

    def add(self, category, entry, inreplyto=None):
        """add a new record of a given category.

        The entry can then be retrieved in the list returned by
        self['category']."""
        self._categories.setdefault(category, []).append(entry)
        self._sequences.append((category, entry))
        if inreplyto is not None:
            self.getreplies(inreplyto).add(category, entry)

    def getreplies(self, partid):
        """get the records that are replies to a specific part"""
        return self._replies.setdefault(partid, unbundlerecords())

    def __getitem__(self, cat):
        return tuple(self._categories.get(cat, ()))

    def __iter__(self):
        return iter(self._sequences)

    def __len__(self):
        return len(self._sequences)

    def __nonzero__(self):
        return bool(self._sequences)

    __bool__ = __nonzero__

class bundleoperation(object):
    """an object that represents a single bundling process

    Its purpose is to carry unbundle-related objects and states.

    A new object should be created at the beginning of each bundle processing.
    The object is to be returned by the processing function.

    The object has very little content now; it will ultimately contain:
    * access to the repo the bundle is applied to,
    * a ui object,
    * a way to retrieve a transaction to add changes to the repo,
    * a way to record the result of processing each part,
    * a way to construct a bundle response when applicable.
    """

    def __init__(self, repo, transactiongetter, captureoutput=True):
        self.repo = repo
        self.ui = repo.ui
        self.records = unbundlerecords()
        self.reply = None
        self.captureoutput = captureoutput
        self.hookargs = {}
        self._gettransaction = transactiongetter
        # carries values that can modify part behavior
        self.modes = {}

    def gettransaction(self):
        transaction = self._gettransaction()

        if self.hookargs:
            # the ones added to the transaction supersede those added
            # to the operation.
            self.hookargs.update(transaction.hookargs)
            transaction.hookargs = self.hookargs

            # mark the hookargs as flushed. further attempts to add to
            # hookargs will result in an abort.
            self.hookargs = None

        return transaction

    def addhookargs(self, hookargs):
        if self.hookargs is None:
            raise error.ProgrammingError('attempted to add hookargs to '
                                         'operation after transaction started')
        self.hookargs.update(hookargs)

class TransactionUnavailable(RuntimeError):
    pass

def _notransaction():
    """default method to get a transaction while processing a bundle

    Raise an exception to highlight the fact that no transaction was expected
    to be created"""
    raise TransactionUnavailable()

def applybundle(repo, unbundler, tr, source=None, url=None, **kwargs):
    # transform me into unbundler.apply() as soon as the freeze is lifted
    if isinstance(unbundler, unbundle20):
        tr.hookargs['bundle2'] = '1'
        if source is not None and 'source' not in tr.hookargs:
            tr.hookargs['source'] = source
        if url is not None and 'url' not in tr.hookargs:
            tr.hookargs['url'] = url
        return processbundle(repo, unbundler, lambda: tr)
    else:
        # the transactiongetter won't be used, but we might as well set it
        op = bundleoperation(repo, lambda: tr)
        _processchangegroup(op, unbundler, tr, source, url, **kwargs)
        return op

class partiterator(object):
    def __init__(self, repo, op, unbundler):
        self.repo = repo
        self.op = op
        self.unbundler = unbundler
        self.iterator = None
        self.count = 0
        self.current = None

    def __enter__(self):
        def func():
            itr = enumerate(self.unbundler.iterparts())
            for count, p in itr:
                self.count = count
                self.current = p
                yield p
                p.consume()
                self.current = None
        self.iterator = func()
        return self.iterator

    def __exit__(self, type, exc, tb):
        if not self.iterator:
            return

        # Only gracefully abort in a normal exception situation. User aborts
        # like Ctrl+C throw a KeyboardInterrupt, which is not a subclass of
        # Exception, and should not be gracefully cleaned up.
        if isinstance(exc, Exception):
            # Any exceptions seeking to the end of the bundle at this point are
            # almost certainly related to the underlying stream being bad.
            # And, chances are that the exception we're handling is related to
            # getting in that bad state. So, we swallow the seeking error and
            # re-raise the original error.
            seekerror = False
            try:
                if self.current:
                    # consume the part content to not corrupt the stream.
                    self.current.consume()

                for part in self.iterator:
                    # consume the bundle content
                    part.consume()
            except Exception:
                seekerror = True

            # Small hack to let caller code distinguish exceptions from bundle2
            # processing from processing the old format. This is mostly needed
            # to handle different return codes to unbundle according to the
            # type of bundle. We should probably clean up or drop this return
            # code craziness in a future version.
            exc.duringunbundle2 = True
            salvaged = []
            replycaps = None
            if self.op.reply is not None:
                salvaged = self.op.reply.salvageoutput()
                replycaps = self.op.reply.capabilities
            exc._replycaps = replycaps
            exc._bundle2salvagedoutput = salvaged

            # Re-raising from a variable loses the original stack. So only use
            # that form if we need to.
            if seekerror:
                raise exc

        self.repo.ui.debug('bundle2-input-bundle: %i parts total\n' %
                           self.count)

def processbundle(repo, unbundler, transactiongetter=None, op=None):
    """This function processes a bundle, applying its effects to/from a repo

    It iterates over each part, then searches for and uses the proper handling
    code to process the part. Parts are processed in order.

    An unknown mandatory part will abort the process.

    It is temporarily possible to provide a prebuilt bundleoperation to the
    function. This is used to ensure output is properly propagated in case of
    an error during the unbundling. This output capturing part will likely be
    reworked and this ability will probably go away in the process.
    """
    if op is None:
        if transactiongetter is None:
            transactiongetter = _notransaction
        op = bundleoperation(repo, transactiongetter)
    # todo:
    # - replace this with an init function soon.
    # - exception catching
    unbundler.params
    if repo.ui.debugflag:
        msg = ['bundle2-input-bundle:']
        if unbundler.params:
            msg.append(' %i params' % len(unbundler.params))
        if op._gettransaction is None or op._gettransaction is _notransaction:
            msg.append(' no-transaction')
        else:
            msg.append(' with-transaction')
        msg.append('\n')
        repo.ui.debug(''.join(msg))

    processparts(repo, op, unbundler)

    return op

def processparts(repo, op, unbundler):
    with partiterator(repo, op, unbundler) as parts:
        for part in parts:
            _processpart(op, part)

def _processchangegroup(op, cg, tr, source, url, **kwargs):
    ret = cg.apply(op.repo, tr, source, url, **kwargs)
    op.records.add('changegroup', {
        'return': ret,
    })
    return ret

def _gethandler(op, part):
    status = 'unknown' # used by debug output
    try:
        handler = parthandlermapping.get(part.type)
        if handler is None:
            status = 'unsupported-type'
            raise error.BundleUnknownFeatureError(parttype=part.type)
        indebug(op.ui, 'found a handler for part %s' % part.type)
        unknownparams = part.mandatorykeys - handler.params
        if unknownparams:
            unknownparams = list(unknownparams)
            unknownparams.sort()
            status = 'unsupported-params (%s)' % ', '.join(unknownparams)
            raise error.BundleUnknownFeatureError(parttype=part.type,
                                                  params=unknownparams)
        status = 'supported'
    except error.BundleUnknownFeatureError as exc:
        if part.mandatory: # mandatory parts
            raise
        indebug(op.ui, 'ignoring unsupported advisory part %s' % exc)
        return # skip to part processing
    finally:
        if op.ui.debugflag:
            msg = ['bundle2-input-part: "%s"' % part.type]
            if not part.mandatory:
                msg.append(' (advisory)')
            nbmp = len(part.mandatorykeys)
            nbap = len(part.params) - nbmp
            if nbmp or nbap:
                msg.append(' (params:')
                if nbmp:
                    msg.append(' %i mandatory' % nbmp)
                if nbap:
                    msg.append(' %i advisory' % nbap)
                msg.append(')')
            msg.append(' %s\n' % status)
            op.ui.debug(''.join(msg))

    return handler

def _processpart(op, part):
    """process a single part from a bundle

    The part is guaranteed to have been fully consumed when the function exits
    (even if an exception is raised)."""
    handler = _gethandler(op, part)
    if handler is None:
        return

    # handler is called outside the above try block so that we don't
    # risk catching KeyErrors from anything other than the
    # parthandlermapping lookup (any KeyError raised by handler()
    # itself represents a defect of a different variety).
    output = None
    if op.captureoutput and op.reply is not None:
        op.ui.pushbuffer(error=True, subproc=True)
        output = ''
    try:
        handler(op, part)
    finally:
        if output is not None:
            output = op.ui.popbuffer()
        if output:
            outpart = op.reply.newpart('output', data=output,
                                       mandatory=False)
            outpart.addparam(
                'in-reply-to', pycompat.bytestr(part.id), mandatory=False)

def decodecaps(blob):
    """decode a bundle2 caps bytes blob into a dictionary

    The blob is a list of capabilities (one per line).
    Capabilities may have values using a line of the form::

        capability=value1,value2,value3

    The values are always a list."""
    caps = {}
    for line in blob.splitlines():
        if not line:
            continue
        if '=' not in line:
            key, vals = line, ()
        else:
            key, vals = line.split('=', 1)
            vals = vals.split(',')
        key = urlreq.unquote(key)
        vals = [urlreq.unquote(v) for v in vals]
        caps[key] = vals
    return caps

def encodecaps(caps):
    """encode a bundle2 caps dictionary into a bytes blob"""
    chunks = []
    for ca in sorted(caps):
        vals = caps[ca]
        ca = urlreq.quote(ca)
        vals = [urlreq.quote(v) for v in vals]
        if vals:
            ca = "%s=%s" % (ca, ','.join(vals))
        chunks.append(ca)
    return '\n'.join(chunks)
577
577
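# Illustrative round-trip through the two helpers above. The capability
# names here are made up for the example, not a statement about what any
# real server advertises:
#
#     decodecaps(b'bundle2\nchangegroup=01,02')
#         -> {'bundle2': [], 'changegroup': ['01', '02']}
#     encodecaps({'bundle2': [], 'changegroup': ['01', '02']})
#         -> b'bundle2\nchangegroup=01,02'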
bundletypes = {
    "": ("", 'UN'), # only when using unbundle on ssh and old http servers
                    # since the unification ssh accepts a header but there
                    # is no capability signaling it.
    "HG20": (), # special-cased below
    "HG10UN": ("HG10UN", 'UN'),
    "HG10BZ": ("HG10", 'BZ'),
    "HG10GZ": ("HG10GZ", 'GZ'),
}

# hgweb uses this list to communicate its preferred type
bundlepriority = ['HG10GZ', 'HG10BZ', 'HG10UN']

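# Illustrative reading of the table above: each bundletypes entry maps a
# bundle spec name to an (on-disk header magic, compression type) pair,
# e.g. "HG10BZ" means a bundle written with header "HG10" and 'BZ' (bzip2)
# compression.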
class bundle20(object):
    """represent an outgoing bundle2 container

    Use the `addparam` method to add stream level parameters, and `newpart`
    to populate it. Then call `getchunks` to retrieve all the binary chunks
    of data that compose the bundle2 container."""

    _magicstring = 'HG20'

    def __init__(self, ui, capabilities=()):
        self.ui = ui
        self._params = []
        self._parts = []
        self.capabilities = dict(capabilities)
        self._compengine = util.compengines.forbundletype('UN')
        self._compopts = None
        # If compression is being handled by a consumer of the raw
        # data (e.g. the wire protocol), unsetting this flag tells
        # consumers that the bundle is best left uncompressed.
        self.prefercompressed = True

    def setcompression(self, alg, compopts=None):
        """setup core part compression to <alg>"""
        if alg in (None, 'UN'):
            return
        assert not any(n.lower() == 'compression' for n, v in self._params)
        self.addparam('Compression', alg)
        self._compengine = util.compengines.forbundletype(alg)
        self._compopts = compopts

    @property
    def nbparts(self):
        """total number of parts added to the bundler"""
        return len(self._parts)

    # methods used to define the bundle2 content
    def addparam(self, name, value=None):
        """add a stream level parameter"""
        if not name:
            raise ValueError(r'empty parameter name')
        if name[0:1] not in pycompat.bytestr(string.ascii_letters):
            raise ValueError(r'non letter first character: %s' % name)
        self._params.append((name, value))

    def addpart(self, part):
        """add a new part to the bundle2 container

        Parts contain the actual applicative payload."""
        assert part.id is None
        part.id = len(self._parts) # very cheap counter
        self._parts.append(part)

    def newpart(self, typeid, *args, **kwargs):
        """create a new part and add it to the container

        The part is directly added to the container. For now, this means
        that any failure to properly initialize the part after calling
        ``newpart`` should result in a failure of the whole bundling process.

        You can still fall back to manually creating and adding a part if you
        need better control."""
        part = bundlepart(typeid, *args, **kwargs)
        self.addpart(part)
        return part

    # methods used to generate the bundle2 stream
    def getchunks(self):
        if self.ui.debugflag:
            msg = ['bundle2-output-bundle: "%s",' % self._magicstring]
            if self._params:
                msg.append(' (%i params)' % len(self._params))
            msg.append(' %i parts total\n' % len(self._parts))
            self.ui.debug(''.join(msg))
        outdebug(self.ui, 'start emission of %s stream' % self._magicstring)
        yield self._magicstring
        param = self._paramchunk()
        outdebug(self.ui, 'bundle parameter: %s' % param)
        yield _pack(_fstreamparamsize, len(param))
        if param:
            yield param
        for chunk in self._compengine.compressstream(self._getcorechunk(),
                                                     self._compopts):
            yield chunk

    def _paramchunk(self):
        """return an encoded version of all stream parameters"""
        blocks = []
        for par, value in self._params:
            par = urlreq.quote(par)
            if value is not None:
                value = urlreq.quote(value)
                par = '%s=%s' % (par, value)
            blocks.append(par)
        return ' '.join(blocks)

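    # Illustrative encoding produced by _paramchunk above: parameter names
    # and values are percent-quoted and space separated (the parameter list
    # here is made up for the example):
    #
    #     [('Compression', 'BZ'), ('check', None)] -> 'Compression=BZ check'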
    def _getcorechunk(self):
        """yield chunks for the core part of the bundle

        (all but headers and parameters)"""
        outdebug(self.ui, 'start of parts')
        for part in self._parts:
            outdebug(self.ui, 'bundle part: "%s"' % part.type)
            for chunk in part.getchunks(ui=self.ui):
                yield chunk
        outdebug(self.ui, 'end of bundle')
        yield _pack(_fpartheadersize, 0)


    def salvageoutput(self):
        """return a list with a copy of all output parts in the bundle

        This is meant to be used during error handling to make sure we
        preserve server output"""
        salvaged = []
        for part in self._parts:
            if part.type.startswith('output'):
                salvaged.append(part.copy())
        return salvaged


class unpackermixin(object):
    """A mixin to extract bytes and struct data from a stream"""

    def __init__(self, fp):
        self._fp = fp

    def _unpack(self, format):
        """unpack this struct format from the stream

        This method is meant for internal usage by the bundle2 protocol only.
        It directly manipulates the low level stream, including bundle2 level
        instructions.

        Do not use it to implement higher-level logic or methods."""
        data = self._readexact(struct.calcsize(format))
        return _unpack(format, data)

    def _readexact(self, size):
        """read exactly <size> bytes from the stream

        This method is meant for internal usage by the bundle2 protocol only.
        It directly manipulates the low level stream, including bundle2 level
        instructions.

        Do not use it to implement higher-level logic or methods."""
        return changegroup.readexactly(self._fp, size)

def getunbundler(ui, fp, magicstring=None):
    """return a valid unbundler object for a given magicstring"""
    if magicstring is None:
        magicstring = changegroup.readexactly(fp, 4)
    magic, version = magicstring[0:2], magicstring[2:4]
    if magic != 'HG':
        ui.debug(
            "error: invalid magic: %r (version %r), should be 'HG'\n"
            % (magic, version))
        raise error.Abort(_('not a Mercurial bundle'))
    unbundlerclass = formatmap.get(version)
    if unbundlerclass is None:
        raise error.Abort(_('unknown bundle version %s') % version)
    unbundler = unbundlerclass(ui, fp)
    indebug(ui, 'start processing of %s stream' % magicstring)
    return unbundler

class unbundle20(unpackermixin):
    """interpret a bundle2 stream

    This class is fed with a binary stream and yields parts through its
    `iterparts` method."""

    _magicstring = 'HG20'

    def __init__(self, ui, fp):
        """If header is specified, we do not read it out of the stream."""
        self.ui = ui
        self._compengine = util.compengines.forbundletype('UN')
        self._compressed = None
        super(unbundle20, self).__init__(fp)

    @util.propertycache
    def params(self):
        """dictionary of stream level parameters"""
        indebug(self.ui, 'reading bundle2 stream parameters')
        params = {}
        paramssize = self._unpack(_fstreamparamsize)[0]
        if paramssize < 0:
            raise error.BundleValueError('negative bundle param size: %i'
                                         % paramssize)
        if paramssize:
            params = self._readexact(paramssize)
            params = self._processallparams(params)
        return params

    def _processallparams(self, paramsblock):
        """process all stream level parameters from a space separated block"""
        params = util.sortdict()
        for p in paramsblock.split(' '):
            p = p.split('=', 1)
            p = [urlreq.unquote(i) for i in p]
            if len(p) < 2:
                p.append(None)
            self._processparam(*p)
            params[p[0]] = p[1]
        return params


    def _processparam(self, name, value):
        """process a parameter, applying its effect if needed

        Parameters starting with a lower case letter are advisory and will
        be ignored when unknown. Those starting with an upper case letter
        are mandatory; this function will raise a KeyError when one is
        unknown.

        Note: no options are currently supported. Any input will either be
        ignored or fail.
        """
        if not name:
            raise ValueError(r'empty parameter name')
        if name[0:1] not in pycompat.bytestr(string.ascii_letters):
            raise ValueError(r'non letter first character: %s' % name)
        try:
            handler = b2streamparamsmap[name.lower()]
        except KeyError:
            if name[0:1].islower():
                indebug(self.ui, "ignoring unknown parameter %s" % name)
            else:
                raise error.BundleUnknownFeatureError(params=(name,))
        else:
            handler(self, name, value)

    def _forwardchunks(self):
        """utility to transfer a bundle2 as binary

        This is made necessary by the fact that the 'getbundle' command over
        'ssh' has no way to know when the reply ends, relying on the bundle
        being interpreted to know its end. This is terrible and we are sorry,
        but we needed to move forward to get general delta enabled.
        """
        yield self._magicstring
        assert 'params' not in vars(self)
        paramssize = self._unpack(_fstreamparamsize)[0]
        if paramssize < 0:
            raise error.BundleValueError('negative bundle param size: %i'
                                         % paramssize)
        yield _pack(_fstreamparamsize, paramssize)
        if paramssize:
            params = self._readexact(paramssize)
            self._processallparams(params)
            yield params
        assert self._compengine.bundletype == 'UN'
        # From there, payload might need to be decompressed
        self._fp = self._compengine.decompressorreader(self._fp)
        emptycount = 0
        while emptycount < 2:
            # so we can brainlessly loop
            assert _fpartheadersize == _fpayloadsize
            size = self._unpack(_fpartheadersize)[0]
            yield _pack(_fpartheadersize, size)
            if size:
                emptycount = 0
            else:
                emptycount += 1
                continue
            if size == flaginterrupt:
                continue
            elif size < 0:
                raise error.BundleValueError('negative chunk size: %i' % size)
            yield self._readexact(size)


    def iterparts(self, seekable=False):
        """yield all parts contained in the stream"""
        cls = seekableunbundlepart if seekable else unbundlepart
        # make sure params have been loaded
        self.params
        # From there, payload needs to be decompressed
        self._fp = self._compengine.decompressorreader(self._fp)
        indebug(self.ui, 'start extraction of bundle2 parts')
        headerblock = self._readpartheader()
        while headerblock is not None:
            part = cls(self.ui, headerblock, self._fp)
            yield part
            # Ensure part is fully consumed so we can start reading the next
            # part.
            part.consume()

            headerblock = self._readpartheader()
        indebug(self.ui, 'end of bundle2 stream')

    def _readpartheader(self):
        """read a part header size and return the bytes blob

        returns None if empty"""
        headersize = self._unpack(_fpartheadersize)[0]
        if headersize < 0:
            raise error.BundleValueError('negative part header size: %i'
                                         % headersize)
        indebug(self.ui, 'part header size: %i' % headersize)
        if headersize:
            return self._readexact(headersize)
        return None

    def compressed(self):
        self.params # load params
        return self._compressed

    def close(self):
        """close underlying file"""
        if util.safehasattr(self._fp, 'close'):
            return self._fp.close()

formatmap = {'20': unbundle20}

b2streamparamsmap = {}

def b2streamparamhandler(name):
    """register a handler for a stream level parameter"""
    def decorator(func):
        assert name not in b2streamparamsmap
        b2streamparamsmap[name] = func
        return func
    return decorator

914 @b2streamparamhandler('compression')
914 @b2streamparamhandler('compression')
915 def processcompression(unbundler, param, value):
915 def processcompression(unbundler, param, value):
916 """read compression parameter and install payload decompression"""
916 """read compression parameter and install payload decompression"""
917 if value not in util.compengines.supportedbundletypes:
917 if value not in util.compengines.supportedbundletypes:
918 raise error.BundleUnknownFeatureError(params=(param,),
918 raise error.BundleUnknownFeatureError(params=(param,),
919 values=(value,))
919 values=(value,))
920 unbundler._compengine = util.compengines.forbundletype(value)
920 unbundler._compengine = util.compengines.forbundletype(value)
921 if value is not None:
921 if value is not None:
922 unbundler._compressed = True
922 unbundler._compressed = True
923
923
924 class bundlepart(object):
924 class bundlepart(object):
925 """A bundle2 part contains application level payload
925 """A bundle2 part contains application level payload
926
926
927 The part `type` is used to route the part to the application level
927 The part `type` is used to route the part to the application level
928 handler.
928 handler.
929
929
930 The part payload is contained in ``part.data``. It could be raw bytes or a
930 The part payload is contained in ``part.data``. It could be raw bytes or a
931 generator of byte chunks.
931 generator of byte chunks.
932
932
933 You can add parameters to the part using the ``addparam`` method.
933 You can add parameters to the part using the ``addparam`` method.
934 Parameters can be either mandatory (default) or advisory. Remote side
934 Parameters can be either mandatory (default) or advisory. Remote side
935 should be able to safely ignore the advisory ones.
935 should be able to safely ignore the advisory ones.
936
936
937 Both data and parameters cannot be modified after the generation has begun.
937 Both data and parameters cannot be modified after the generation has begun.
938 """
938 """
939
939
940 def __init__(self, parttype, mandatoryparams=(), advisoryparams=(),
940 def __init__(self, parttype, mandatoryparams=(), advisoryparams=(),
941 data='', mandatory=True):
941 data='', mandatory=True):
942 validateparttype(parttype)
942 validateparttype(parttype)
943 self.id = None
943 self.id = None
944 self.type = parttype
944 self.type = parttype
945 self._data = data
945 self._data = data
946 self._mandatoryparams = list(mandatoryparams)
946 self._mandatoryparams = list(mandatoryparams)
947 self._advisoryparams = list(advisoryparams)
947 self._advisoryparams = list(advisoryparams)
948 # checking for duplicated entries
948 # checking for duplicated entries
949 self._seenparams = set()
949 self._seenparams = set()
950 for pname, __ in self._mandatoryparams + self._advisoryparams:
950 for pname, __ in self._mandatoryparams + self._advisoryparams:
951 if pname in self._seenparams:
951 if pname in self._seenparams:
952 raise error.ProgrammingError('duplicated params: %s' % pname)
952 raise error.ProgrammingError('duplicated params: %s' % pname)
953 self._seenparams.add(pname)
953 self._seenparams.add(pname)
954 # status of the part's generation:
954 # status of the part's generation:
955 # - None: not started,
955 # - None: not started,
956 # - False: currently generated,
956 # - False: currently generated,
957 # - True: generation done.
957 # - True: generation done.
958 self._generated = None
958 self._generated = None
959 self.mandatory = mandatory
959 self.mandatory = mandatory
960
960
961 def __repr__(self):
961 def __repr__(self):
962 cls = "%s.%s" % (self.__class__.__module__, self.__class__.__name__)
962 cls = "%s.%s" % (self.__class__.__module__, self.__class__.__name__)
963 return ('<%s object at %x; id: %s; type: %s; mandatory: %s>'
963 return ('<%s object at %x; id: %s; type: %s; mandatory: %s>'
964 % (cls, id(self), self.id, self.type, self.mandatory))
964 % (cls, id(self), self.id, self.type, self.mandatory))
965
965
966 def copy(self):
966 def copy(self):
967 """return a copy of the part
967 """return a copy of the part
968
968
969 The new part have the very same content but no partid assigned yet.
969 The new part have the very same content but no partid assigned yet.
970 Parts with generated data cannot be copied."""
970 Parts with generated data cannot be copied."""
971 assert not util.safehasattr(self.data, 'next')
971 assert not util.safehasattr(self.data, 'next')
972 return self.__class__(self.type, self._mandatoryparams,
972 return self.__class__(self.type, self._mandatoryparams,
973 self._advisoryparams, self._data, self.mandatory)
973 self._advisoryparams, self._data, self.mandatory)
974
974
975 # methods used to defines the part content
975 # methods used to defines the part content
976 @property
976 @property
977 def data(self):
977 def data(self):
978 return self._data
978 return self._data
979
979
980 @data.setter
980 @data.setter
981 def data(self, data):
981 def data(self, data):
982 if self._generated is not None:
982 if self._generated is not None:
983 raise error.ReadOnlyPartError('part is being generated')
983 raise error.ReadOnlyPartError('part is being generated')
984 self._data = data
984 self._data = data
985
985
986 @property
986 @property
987 def mandatoryparams(self):
987 def mandatoryparams(self):
988 # make it an immutable tuple to force people through ``addparam``
988 # make it an immutable tuple to force people through ``addparam``
989 return tuple(self._mandatoryparams)
989 return tuple(self._mandatoryparams)
990
990
991 @property
991 @property
992 def advisoryparams(self):
992 def advisoryparams(self):
993 # make it an immutable tuple to force people through ``addparam``
993 # make it an immutable tuple to force people through ``addparam``
994 return tuple(self._advisoryparams)
994 return tuple(self._advisoryparams)
995
995
996 def addparam(self, name, value='', mandatory=True):
996 def addparam(self, name, value='', mandatory=True):
997 """add a parameter to the part
997 """add a parameter to the part
998
998
999 If 'mandatory' is set to True, the remote handler must claim support
999 If 'mandatory' is set to True, the remote handler must claim support
1000 for this parameter or the unbundling will be aborted.
1000 for this parameter or the unbundling will be aborted.
1001
1001
1002 The 'name' and 'value' cannot exceed 255 bytes each.
1002 The 'name' and 'value' cannot exceed 255 bytes each.
1003 """
1003 """
1004 if self._generated is not None:
1004 if self._generated is not None:
1005 raise error.ReadOnlyPartError('part is being generated')
1005 raise error.ReadOnlyPartError('part is being generated')
1006 if name in self._seenparams:
1006 if name in self._seenparams:
1007 raise ValueError('duplicated params: %s' % name)
1007 raise ValueError('duplicated params: %s' % name)
1008 self._seenparams.add(name)
1008 self._seenparams.add(name)
1009 params = self._advisoryparams
1009 params = self._advisoryparams
1010 if mandatory:
1010 if mandatory:
1011 params = self._mandatoryparams
1011 params = self._mandatoryparams
1012 params.append((name, value))
1012 params.append((name, value))
1013
1013
1014 # methods used to generates the bundle2 stream
1014 # methods used to generates the bundle2 stream
1015 def getchunks(self, ui):
1015 def getchunks(self, ui):
1016 if self._generated is not None:
1016 if self._generated is not None:
1017 raise error.ProgrammingError('part can only be consumed once')
1017 raise error.ProgrammingError('part can only be consumed once')
1018 self._generated = False
1018 self._generated = False
1019
1019
        if ui.debugflag:
            msg = ['bundle2-output-part: "%s"' % self.type]
            if not self.mandatory:
                msg.append(' (advisory)')
            nbmp = len(self.mandatoryparams)
            nbap = len(self.advisoryparams)
            if nbmp or nbap:
                msg.append(' (params:')
                if nbmp:
                    msg.append(' %i mandatory' % nbmp)
                if nbap:
                    msg.append(' %i advisory' % nbap)
                msg.append(')')
            if not self.data:
                msg.append(' empty payload')
            elif (util.safehasattr(self.data, 'next')
                  or util.safehasattr(self.data, '__next__')):
                msg.append(' streamed payload')
            else:
                msg.append(' %i bytes payload' % len(self.data))
            msg.append('\n')
            ui.debug(''.join(msg))

        #### header
        if self.mandatory:
            parttype = self.type.upper()
        else:
            parttype = self.type.lower()
        outdebug(ui, 'part %s: "%s"' % (pycompat.bytestr(self.id), parttype))
        ## parttype
        header = [_pack(_fparttypesize, len(parttype)),
                  parttype, _pack(_fpartid, self.id),
                  ]
        ## parameters
        # count
        manpar = self.mandatoryparams
        advpar = self.advisoryparams
        header.append(_pack(_fpartparamcount, len(manpar), len(advpar)))
        # size
        parsizes = []
        for key, value in manpar:
            parsizes.append(len(key))
            parsizes.append(len(value))
        for key, value in advpar:
            parsizes.append(len(key))
            parsizes.append(len(value))
        paramsizes = _pack(_makefpartparamsizes(len(parsizes) // 2), *parsizes)
        header.append(paramsizes)
        # key, value
        for key, value in manpar:
            header.append(key)
            header.append(value)
        for key, value in advpar:
            header.append(key)
            header.append(value)
        ## finalize header
        try:
            headerchunk = ''.join(header)
        except TypeError:
            raise TypeError(r'Found a non-bytes trying to '
                            r'build bundle part header: %r' % header)
        outdebug(ui, 'header chunk size: %i' % len(headerchunk))
        yield _pack(_fpartheadersize, len(headerchunk))
        yield headerchunk
        ## payload
        try:
            for chunk in self._payloadchunks():
                outdebug(ui, 'payload chunk size: %i' % len(chunk))
                yield _pack(_fpayloadsize, len(chunk))
                yield chunk
        except GeneratorExit:
            # GeneratorExit means that nobody is listening for our
            # results anyway, so just bail quickly rather than trying
            # to produce an error part.
            ui.debug('bundle2-generatorexit\n')
            raise
        except BaseException as exc:
            bexc = stringutil.forcebytestr(exc)
            # backup exception data for later
            ui.debug('bundle2-input-stream-interrupt: encoding exception %s'
                     % bexc)
            tb = sys.exc_info()[2]
            msg = 'unexpected error: %s' % bexc
            interpart = bundlepart('error:abort', [('message', msg)],
                                   mandatory=False)
            interpart.id = 0
            yield _pack(_fpayloadsize, -1)
            for chunk in interpart.getchunks(ui=ui):
                yield chunk
            outdebug(ui, 'closing payload chunk')
            # abort current part payload
            yield _pack(_fpayloadsize, 0)
            pycompat.raisewithtb(exc, tb)
        # end of payload
        outdebug(ui, 'closing payload chunk')
        yield _pack(_fpayloadsize, 0)
        self._generated = True

    def _payloadchunks(self):
        """yield chunks of the part payload

        Exists to handle the different methods to provide data to a part."""
        # we only support fixed size data now.
        # This will be improved in the future.
        if (util.safehasattr(self.data, 'next')
            or util.safehasattr(self.data, '__next__')):
            buff = util.chunkbuffer(self.data)
            chunk = buff.read(preferedchunksize)
            while chunk:
                yield chunk
                chunk = buff.read(preferedchunksize)
        elif len(self.data):
            yield self.data


flaginterrupt = -1

class interrupthandler(unpackermixin):
    """read one part and process it with restricted capability

    This allows transmitting an exception raised on the producer side during
    part iteration while the consumer is reading a part.

    Parts processed in this manner only have access to a ui object."""

    def __init__(self, ui, fp):
        super(interrupthandler, self).__init__(fp)
        self.ui = ui

    def _readpartheader(self):
        """reads a part header size and returns the bytes blob

        returns None if empty"""
        headersize = self._unpack(_fpartheadersize)[0]
        if headersize < 0:
            raise error.BundleValueError('negative part header size: %i'
                                         % headersize)
        indebug(self.ui, 'part header size: %i\n' % headersize)
        if headersize:
            return self._readexact(headersize)
        return None

    def __call__(self):

        self.ui.debug('bundle2-input-stream-interrupt:'
                      ' opening out of band context\n')
        indebug(self.ui, 'bundle2 stream interruption, looking for a part.')
        headerblock = self._readpartheader()
        if headerblock is None:
            indebug(self.ui, 'no part found during interruption.')
            return
        part = unbundlepart(self.ui, headerblock, self._fp)
        op = interruptoperation(self.ui)
        hardabort = False
        try:
            _processpart(op, part)
        except (SystemExit, KeyboardInterrupt):
            hardabort = True
            raise
        finally:
            if not hardabort:
                part.consume()
        self.ui.debug('bundle2-input-stream-interrupt:'
                      ' closing out of band context\n')

class interruptoperation(object):
    """A limited operation to be used by part handlers during interruption

    It only has access to a ui object.
    """

    def __init__(self, ui):
        self.ui = ui
        self.reply = None
        self.captureoutput = False

    @property
    def repo(self):
        raise error.ProgrammingError('no repo access from stream interruption')

    def gettransaction(self):
        raise TransactionUnavailable('no repo access from stream interruption')

def decodepayloadchunks(ui, fh):
    """Reads bundle2 part payload data into chunks.

    Part payload data consists of framed chunks. This function takes
    a file handle and emits those chunks.
    """
    dolog = ui.configbool('devel', 'bundle2.debug')
    debug = ui.debug

    headerstruct = struct.Struct(_fpayloadsize)
    headersize = headerstruct.size
    unpack = headerstruct.unpack

    readexactly = changegroup.readexactly
    read = fh.read

    chunksize = unpack(readexactly(fh, headersize))[0]
    indebug(ui, 'payload chunk size: %i' % chunksize)

    # changegroup.readexactly() is inlined below for performance.
    while chunksize:
        if chunksize >= 0:
            s = read(chunksize)
            if len(s) < chunksize:
                raise error.Abort(_('stream ended unexpectedly '
                                    ' (got %d bytes, expected %d)') %
                                  (len(s), chunksize))

            yield s
        elif chunksize == flaginterrupt:
            # Interrupt "signal" detected. The regular stream is interrupted
            # and a bundle2 part follows. Consume it.
            interrupthandler(ui, fh)()
        else:
            raise error.BundleValueError(
                'negative payload chunk size: %s' % chunksize)

        s = read(headersize)
        if len(s) < headersize:
            raise error.Abort(_('stream ended unexpectedly '
                                ' (got %d bytes, expected %d)') %
                              (len(s), headersize))

        chunksize = unpack(s)[0]

        # indebug() inlined for performance.
        if dolog:
            debug('bundle2-input: payload chunk size: %i\n' % chunksize)

class unbundlepart(unpackermixin):
    """a bundle part read from a bundle"""

    def __init__(self, ui, header, fp):
        super(unbundlepart, self).__init__(fp)
        self._seekable = (util.safehasattr(fp, 'seek') and
                          util.safehasattr(fp, 'tell'))
        self.ui = ui
        # unbundle state attr
        self._headerdata = header
        self._headeroffset = 0
        self._initialized = False
        self.consumed = False
        # part data
        self.id = None
        self.type = None
        self.mandatoryparams = None
        self.advisoryparams = None
        self.params = None
        self.mandatorykeys = ()
        self._readheader()
        self._mandatory = None
        self._pos = 0

    def _fromheader(self, size):
        """return the next <size> bytes from the header"""
        offset = self._headeroffset
        data = self._headerdata[offset:(offset + size)]
        self._headeroffset = offset + size
        return data

    def _unpackheader(self, format):
        """read given format from header

        This automatically computes the size of the format to read."""
        data = self._fromheader(struct.calcsize(format))
        return _unpack(format, data)

    def _initparams(self, mandatoryparams, advisoryparams):
        """internal function to setup all logic related parameters"""
        # make it read only to prevent people touching it by mistake.
        self.mandatoryparams = tuple(mandatoryparams)
        self.advisoryparams = tuple(advisoryparams)
        # user friendly UI
        self.params = util.sortdict(self.mandatoryparams)
        self.params.update(self.advisoryparams)
        self.mandatorykeys = frozenset(p[0] for p in mandatoryparams)

    def _readheader(self):
        """read the header and setup the object"""
        typesize = self._unpackheader(_fparttypesize)[0]
        self.type = self._fromheader(typesize)
        indebug(self.ui, 'part type: "%s"' % self.type)
        self.id = self._unpackheader(_fpartid)[0]
        indebug(self.ui, 'part id: "%s"' % pycompat.bytestr(self.id))
        # extract mandatory bit from type
        self.mandatory = (self.type != self.type.lower())
        self.type = self.type.lower()
        ## reading parameters
        # param count
        mancount, advcount = self._unpackheader(_fpartparamcount)
        indebug(self.ui, 'part parameters: %i' % (mancount + advcount))
        # param size
        fparamsizes = _makefpartparamsizes(mancount + advcount)
        paramsizes = self._unpackheader(fparamsizes)
        # make it a list of couples again
        paramsizes = list(zip(paramsizes[::2], paramsizes[1::2]))
        # split mandatory from advisory
        mansizes = paramsizes[:mancount]
        advsizes = paramsizes[mancount:]
        # retrieve param value
        manparams = []
        for key, value in mansizes:
            manparams.append((self._fromheader(key), self._fromheader(value)))
        advparams = []
        for key, value in advsizes:
            advparams.append((self._fromheader(key), self._fromheader(value)))
        self._initparams(manparams, advparams)
        ## part payload
        self._payloadstream = util.chunkbuffer(self._payloadchunks())
        # we read the data, tell it
        self._initialized = True

    def _payloadchunks(self):
        """Generator of decoded chunks in the payload."""
        return decodepayloadchunks(self.ui, self._fp)

    def consume(self):
        """Read the part payload until completion.

        By consuming the part data, the underlying stream read offset will
        be advanced to the next part (or end of stream).
        """
        if self.consumed:
            return

        chunk = self.read(32768)
        while chunk:
            self._pos += len(chunk)
            chunk = self.read(32768)

    def read(self, size=None):
        """read payload data"""
        if not self._initialized:
            self._readheader()
        if size is None:
            data = self._payloadstream.read()
        else:
            data = self._payloadstream.read(size)
        self._pos += len(data)
        if size is None or len(data) < size:
            if not self.consumed and self._pos:
                self.ui.debug('bundle2-input-part: total payload size %i\n'
                              % self._pos)
            self.consumed = True
        return data

class seekableunbundlepart(unbundlepart):
    """A bundle2 part in a bundle that is seekable.

    Regular ``unbundlepart`` instances can only be read once. This class
    extends ``unbundlepart`` to enable bi-directional seeking within the
    part.

    Bundle2 part data consists of framed chunks. Offsets when seeking
    refer to the decoded data, not the offsets in the underlying bundle2
    stream.

    To facilitate quickly seeking within the decoded data, instances of this
    class maintain a mapping between offsets in the underlying stream and
    the decoded payload. This mapping will consume memory in proportion
    to the number of chunks within the payload (which almost certainly
    increases in proportion with the size of the part).
    """
    def __init__(self, ui, header, fp):
        # (payload, file) offsets for chunk starts.
        self._chunkindex = []

        super(seekableunbundlepart, self).__init__(ui, header, fp)

    def _payloadchunks(self, chunknum=0):
        '''seek to specified chunk and start yielding data'''
        if len(self._chunkindex) == 0:
            assert chunknum == 0, 'Must start with chunk 0'
            self._chunkindex.append((0, self._tellfp()))
        else:
            assert chunknum < len(self._chunkindex), \
                   'Unknown chunk %d' % chunknum
            self._seekfp(self._chunkindex[chunknum][1])

        pos = self._chunkindex[chunknum][0]

        for chunk in decodepayloadchunks(self.ui, self._fp):
            chunknum += 1
            pos += len(chunk)
            if chunknum == len(self._chunkindex):
                self._chunkindex.append((pos, self._tellfp()))

            yield chunk

    def _findchunk(self, pos):
        '''for a given payload position, return a chunk number and offset'''
        for chunk, (ppos, fpos) in enumerate(self._chunkindex):
            if ppos == pos:
                return chunk, 0
            elif ppos > pos:
                return chunk - 1, pos - self._chunkindex[chunk - 1][0]
        raise ValueError('Unknown chunk')

    def tell(self):
        return self._pos

    def seek(self, offset, whence=os.SEEK_SET):
        if whence == os.SEEK_SET:
            newpos = offset
        elif whence == os.SEEK_CUR:
            newpos = self._pos + offset
        elif whence == os.SEEK_END:
            if not self.consumed:
                # Can't use self.consume() here because it advances self._pos.
                chunk = self.read(32768)
                while chunk:
                    chunk = self.read(32768)
            newpos = self._chunkindex[-1][0] - offset
        else:
            raise ValueError('Unknown whence value: %r' % (whence,))

        if newpos > self._chunkindex[-1][0] and not self.consumed:
            # Can't use self.consume() here because it advances self._pos.
            chunk = self.read(32768)
            while chunk:
                chunk = self.read(32768)

        if not 0 <= newpos <= self._chunkindex[-1][0]:
            raise ValueError('Offset out of range')

        if self._pos != newpos:
            chunk, internaloffset = self._findchunk(newpos)
            self._payloadstream = util.chunkbuffer(self._payloadchunks(chunk))
            adjust = self.read(internaloffset)
            if len(adjust) != internaloffset:
                raise error.Abort(_('Seek failed\n'))
            self._pos = newpos

    def _seekfp(self, offset, whence=0):
        """move the underlying file pointer

        This method is meant for internal usage by the bundle2 protocol only.
        It directly manipulates the low-level stream, including bundle2-level
        instructions.

        Do not use it to implement higher-level logic or methods."""
        if self._seekable:
            return self._fp.seek(offset, whence)
        else:
            raise NotImplementedError(_('File pointer is not seekable'))

    def _tellfp(self):
        """return the file offset, or None if the file is not seekable

        This method is meant for internal usage by the bundle2 protocol only.
        It directly manipulates the low-level stream, including bundle2-level
        instructions.

        Do not use it to implement higher-level logic or methods."""
        if self._seekable:
            try:
                return self._fp.tell()
            except IOError as e:
                if e.errno == errno.ESPIPE:
                    self._seekable = False
                else:
                    raise
        return None

# These are only the static capabilities.
# Check the 'getrepocaps' function for the rest.
capabilities = {'HG20': (),
                'bookmarks': (),
                'error': ('abort', 'unsupportedcontent', 'pushraced',
                          'pushkey'),
                'listkeys': (),
                'pushkey': (),
                'digests': tuple(sorted(util.DIGESTS.keys())),
                'remote-changegroup': ('http', 'https'),
                'hgtagsfnodes': (),
                'rev-branch-cache': (),
                'phases': ('heads',),
                'stream': ('v2',),
               }

def getrepocaps(repo, allowpushback=False, role=None):
    """return the bundle2 capabilities for a given repo

    Exists to allow extensions (like evolution) to mutate the capabilities.

    The returned value is used for servers advertising their capabilities as
    well as clients advertising their capabilities to servers as part of
    bundle2 requests. The ``role`` argument specifies which is which.
    """
    if role not in ('client', 'server'):
        raise error.ProgrammingError('role argument must be client or server')

    caps = capabilities.copy()
    caps['changegroup'] = tuple(sorted(
        changegroup.supportedincomingversions(repo)))
    if obsolete.isenabled(repo, obsolete.exchangeopt):
        supportedformat = tuple('V%i' % v for v in obsolete.formats)
        caps['obsmarkers'] = supportedformat
    if allowpushback:
        caps['pushback'] = ()
    cpmode = repo.ui.config('server', 'concurrent-push-mode')
    if cpmode == 'check-related':
        caps['checkheads'] = ('related',)
    if 'phases' in repo.ui.configlist('devel', 'legacy.exchange'):
        caps.pop('phases')

    # Don't advertise stream clone support in server mode if not configured.
    if role == 'server':
        streamsupported = repo.ui.configbool('server', 'uncompressed',
                                             untrusted=True)
        featuresupported = repo.ui.configbool('experimental', 'bundle2.stream')

        if not streamsupported or not featuresupported:
            caps.pop('stream')
1537 # Else always advertise support on client, because payload support
1537 # Else always advertise support on client, because payload support
1538 # should always be advertised.
1538 # should always be advertised.
1539
1539
1540 return caps
1540 return caps
1541
1541
1542 def bundle2caps(remote):
1542 def bundle2caps(remote):
1543 """return the bundle capabilities of a peer as dict"""
1543 """return the bundle capabilities of a peer as dict"""
1544 raw = remote.capable('bundle2')
1544 raw = remote.capable('bundle2')
1545 if not raw and raw != '':
1545 if not raw and raw != '':
1546 return {}
1546 return {}
1547 capsblob = urlreq.unquote(remote.capable('bundle2'))
1547 capsblob = urlreq.unquote(remote.capable('bundle2'))
1548 return decodecaps(capsblob)
1548 return decodecaps(capsblob)
1549
1549
1550 def obsmarkersversion(caps):
1550 def obsmarkersversion(caps):
1551 """extract the list of supported obsmarkers versions from a bundle2caps dict
1551 """extract the list of supported obsmarkers versions from a bundle2caps dict
1552 """
1552 """
1553 obscaps = caps.get('obsmarkers', ())
1553 obscaps = caps.get('obsmarkers', ())
1554 return [int(c[1:]) for c in obscaps if c.startswith('V')]
1554 return [int(c[1:]) for c in obscaps if c.startswith('V')]
1555
1555
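The version extraction above can be sketched standalone, outside of Mercurial (the function name here is illustrative, not part of bundle2):

```python
# Standalone sketch of the obsmarkers capability parsing above: capability
# values are strings such as 'V0' or 'V1'; only 'V'-prefixed entries are
# kept, and their numeric suffixes are returned as integers.
def parse_obsmarkers_versions(caps):
    obscaps = caps.get('obsmarkers', ())
    return [int(c[1:]) for c in obscaps if c.startswith('V')]

# Unknown entries ('X9') are silently ignored, matching the original.
print(parse_obsmarkers_versions({'obsmarkers': ('V0', 'V1', 'X9')}))  # [0, 1]
```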
def writenewbundle(ui, repo, source, filename, bundletype, outgoing, opts,
                   vfs=None, compression=None, compopts=None):
    if bundletype.startswith('HG10'):
        cg = changegroup.makechangegroup(repo, outgoing, '01', source)
        return writebundle(ui, cg, filename, bundletype, vfs=vfs,
                           compression=compression, compopts=compopts)
    elif not bundletype.startswith('HG20'):
        raise error.ProgrammingError('unknown bundle type: %s' % bundletype)

    caps = {}
    if 'obsolescence' in opts:
        caps['obsmarkers'] = ('V1',)
    bundle = bundle20(ui, caps)
    bundle.setcompression(compression, compopts)
    _addpartsfromopts(ui, repo, bundle, source, outgoing, opts)
    chunkiter = bundle.getchunks()

    return changegroup.writechunks(ui, chunkiter, filename, vfs=vfs)

def _addpartsfromopts(ui, repo, bundler, source, outgoing, opts):
    # We should eventually reconcile this logic with the one behind
    # 'exchange.getbundle2partsgenerator'.
    #
    # The type of input from 'getbundle' and 'writenewbundle' is a bit
    # different right now. So we keep them separated for now for the sake of
    # simplicity.

    # we might not always want a changegroup in such a bundle, for example in
    # stream bundles
    if opts.get('changegroup', True):
        cgversion = opts.get('cg.version')
        if cgversion is None:
            cgversion = changegroup.safeversion(repo)
        cg = changegroup.makechangegroup(repo, outgoing, cgversion, source)
        part = bundler.newpart('changegroup', data=cg.getchunks())
        part.addparam('version', cg.version)
        if 'clcount' in cg.extras:
            part.addparam('nbchanges', '%d' % cg.extras['clcount'],
                          mandatory=False)
        if opts.get('phases') and repo.revs('%ln and secret()',
                                            outgoing.missingheads):
            part.addparam('targetphase', '%d' % phases.secret, mandatory=False)

    if opts.get('tagsfnodescache', True):
        addparttagsfnodescache(repo, bundler, outgoing)

    if opts.get('revbranchcache', True):
        addpartrevbranchcache(repo, bundler, outgoing)

    if opts.get('obsolescence', False):
        obsmarkers = repo.obsstore.relevantmarkers(outgoing.missing)
        buildobsmarkerspart(bundler, obsmarkers)

    if opts.get('phases', False):
        headsbyphase = phases.subsetphaseheads(repo, outgoing.missing)
        phasedata = phases.binaryencode(headsbyphase)
        bundler.newpart('phase-heads', data=phasedata)

def addparttagsfnodescache(repo, bundler, outgoing):
    # we include the tags fnode cache for the bundle changesets
    # (as an optional part)
    cache = tags.hgtagsfnodescache(repo.unfiltered())
    chunks = []

    # .hgtags fnodes are only relevant for head changesets. While we could
    # transfer values for all known nodes, there will likely be little to
    # no benefit.
    #
    # We don't bother using a generator to produce output data because
    # a) we only have 40 bytes per head and even esoteric numbers of heads
    # consume little memory (1M heads is 40MB) b) we don't want to send the
    # part if we don't have entries and knowing if we have entries requires
    # cache lookups.
    for node in outgoing.missingheads:
        # Don't compute missing, as this may slow down serving.
        fnode = cache.getfnode(node, computemissing=False)
        if fnode is not None:
            chunks.extend([node, fnode])

    if chunks:
        bundler.newpart('hgtagsfnodes', data=''.join(chunks))

def addpartrevbranchcache(repo, bundler, outgoing):
    # we include the rev branch cache for the bundle changesets
    # (as an optional part)
    cache = repo.revbranchcache()
    cl = repo.unfiltered().changelog
    branchesdata = collections.defaultdict(lambda: (set(), set()))
    for node in outgoing.missing:
        branch, close = cache.branchinfo(cl.rev(node))
        branchesdata[branch][close].add(node)

    def generate():
        for branch, (nodes, closed) in sorted(branchesdata.items()):
            utf8branch = encoding.fromlocal(branch)
            yield rbcstruct.pack(len(utf8branch), len(nodes), len(closed))
            yield utf8branch
            for n in sorted(nodes):
                yield n
            for n in sorted(closed):
                yield n

    bundler.newpart('cache:rev-branch-cache', data=generate())

def buildobsmarkerspart(bundler, markers):
    """add an obsmarkers part to the bundler with <markers>

    No part is created if markers is empty.
    Raises ValueError if the bundler doesn't support any known obsmarker format.
    """
    if not markers:
        return None

    remoteversions = obsmarkersversion(bundler.capabilities)
    version = obsolete.commonversion(remoteversions)
    if version is None:
        raise ValueError('bundler does not support common obsmarker format')
    stream = obsolete.encodemarkers(markers, True, version=version)
    return bundler.newpart('obsmarkers', data=stream)

def writebundle(ui, cg, filename, bundletype, vfs=None, compression=None,
                compopts=None):
    """Write a bundle file and return its filename.

    Existing files will not be overwritten.
    If no filename is specified, a temporary file is created.
    bz2 compression can be turned off.
    The bundle file will be deleted in case of errors.
    """

    if bundletype == "HG20":
        bundle = bundle20(ui)
        bundle.setcompression(compression, compopts)
        part = bundle.newpart('changegroup', data=cg.getchunks())
        part.addparam('version', cg.version)
        if 'clcount' in cg.extras:
            part.addparam('nbchanges', '%d' % cg.extras['clcount'],
                          mandatory=False)
        chunkiter = bundle.getchunks()
    else:
        # compression argument is only for the bundle2 case
        assert compression is None
        if cg.version != '01':
            raise error.Abort(_('old bundle types only supports v1 '
                                'changegroups'))
        header, comp = bundletypes[bundletype]
        if comp not in util.compengines.supportedbundletypes:
            raise error.Abort(_('unknown stream compression type: %s')
                              % comp)
        compengine = util.compengines.forbundletype(comp)
        def chunkiter():
            yield header
            for chunk in compengine.compressstream(cg.getchunks(), compopts):
                yield chunk
        chunkiter = chunkiter()

    # parse the changegroup data, otherwise we will block
    # in case of sshrepo because we don't know the end of the stream
    return changegroup.writechunks(ui, chunkiter, filename, vfs=vfs)

def combinechangegroupresults(op):
    """logic to combine 0 or more addchangegroup results into one"""
    results = [r.get('return', 0)
               for r in op.records['changegroup']]
    changedheads = 0
    result = 1
    for ret in results:
        # If any changegroup result is 0, return 0
        if ret == 0:
            result = 0
            break
        if ret < -1:
            changedheads += ret + 1
        elif ret > 1:
            changedheads += ret - 1
    if changedheads > 0:
        result = 1 + changedheads
    elif changedheads < 0:
        result = -1 + changedheads
    return result

@parthandler('changegroup', ('version', 'nbchanges', 'treemanifest',
                             'targetphase'))
def handlechangegroup(op, inpart):
    """apply a changegroup part on the repo

    This is a very early implementation that will see massive rework before
    being inflicted on any end-user.
    """
    tr = op.gettransaction()
    unpackerversion = inpart.params.get('version', '01')
    # We should raise an appropriate exception here
    cg = changegroup.getunbundler(unpackerversion, inpart, None)
    # the source and url passed here are overwritten by the one contained in
    # the transaction.hookargs argument. So 'bundle2' is a placeholder
    nbchangesets = None
    if 'nbchanges' in inpart.params:
        nbchangesets = int(inpart.params.get('nbchanges'))
    if ('treemanifest' in inpart.params and
            'treemanifest' not in op.repo.requirements):
        if len(op.repo.changelog) != 0:
            raise error.Abort(_(
                "bundle contains tree manifests, but local repo is "
                "non-empty and does not use tree manifests"))
        op.repo.requirements.add('treemanifest')
        op.repo._applyopenerreqs()
        op.repo._writerequirements()
    extrakwargs = {}
    targetphase = inpart.params.get('targetphase')
    if targetphase is not None:
        extrakwargs[r'targetphase'] = int(targetphase)
    ret = _processchangegroup(op, cg, tr, 'bundle2', 'bundle2',
                              expectedtotal=nbchangesets, **extrakwargs)
    if op.reply is not None:
        # This is definitely not the final form of this
        # return. But one needs to start somewhere.
        part = op.reply.newpart('reply:changegroup', mandatory=False)
        part.addparam(
            'in-reply-to', pycompat.bytestr(inpart.id), mandatory=False)
        part.addparam('return', '%i' % ret, mandatory=False)
    assert not inpart.read()

_remotechangegroupparams = tuple(['url', 'size', 'digests'] +
                                 ['digest:%s' % k for k in util.DIGESTS.keys()])
@parthandler('remote-changegroup', _remotechangegroupparams)
def handleremotechangegroup(op, inpart):
    """apply a bundle10 on the repo, given a url and validation information

    All the information about the remote bundle to import is given as
    parameters. The parameters include:
    - url: the url to the bundle10.
    - size: the bundle10 file size. It is used to validate that what was
      retrieved by the client matches the server's knowledge about the bundle.
    - digests: a space separated list of the digest types provided as
      parameters.
    - digest:<digest-type>: the hexadecimal representation of the digest with
      that name. Like the size, it is used to validate that what was retrieved
      by the client matches what the server knows about the bundle.

    When multiple digest types are given, all of them are checked.
    """
    try:
        raw_url = inpart.params['url']
    except KeyError:
        raise error.Abort(_('remote-changegroup: missing "%s" param') % 'url')
    parsed_url = util.url(raw_url)
    if parsed_url.scheme not in capabilities['remote-changegroup']:
        raise error.Abort(_('remote-changegroup does not support %s urls') %
                          parsed_url.scheme)

    try:
        size = int(inpart.params['size'])
    except ValueError:
        raise error.Abort(_('remote-changegroup: invalid value for param "%s"')
                          % 'size')
    except KeyError:
        raise error.Abort(_('remote-changegroup: missing "%s" param') % 'size')

    digests = {}
    for typ in inpart.params.get('digests', '').split():
        param = 'digest:%s' % typ
        try:
            value = inpart.params[param]
        except KeyError:
            raise error.Abort(_('remote-changegroup: missing "%s" param') %
                              param)
        digests[typ] = value

    real_part = util.digestchecker(url.open(op.ui, raw_url), size, digests)

    tr = op.gettransaction()
    from . import exchange
    cg = exchange.readbundle(op.repo.ui, real_part, raw_url)
    if not isinstance(cg, changegroup.cg1unpacker):
        raise error.Abort(_('%s: not a bundle version 1.0') %
                          util.hidepassword(raw_url))
    ret = _processchangegroup(op, cg, tr, 'bundle2', 'bundle2')
    if op.reply is not None:
        # This is definitely not the final form of this
        # return. But one needs to start somewhere.
        part = op.reply.newpart('reply:changegroup')
        part.addparam(
            'in-reply-to', pycompat.bytestr(inpart.id), mandatory=False)
        part.addparam('return', '%i' % ret, mandatory=False)
    try:
        real_part.validate()
    except error.Abort as e:
        raise error.Abort(_('bundle at %s is corrupted:\n%s') %
                          (util.hidepassword(raw_url), str(e)))
    assert not inpart.read()

@parthandler('reply:changegroup', ('return', 'in-reply-to'))
def handlereplychangegroup(op, inpart):
    ret = int(inpart.params['return'])
    replyto = int(inpart.params['in-reply-to'])
    op.records.add('changegroup', {'return': ret}, replyto)

@parthandler('check:bookmarks')
def handlecheckbookmarks(op, inpart):
    """check location of bookmarks

    This part is used to detect push races on bookmarks. It contains
    binary-encoded (bookmark, node) tuples. If the local state does not match
    the one in the part, a PushRaced exception is raised.
    """
    bookdata = bookmarks.binarydecode(inpart)

    msgstandard = ('repository changed while pushing - please try again '
                   '(bookmark "%s" move from %s to %s)')
    msgmissing = ('repository changed while pushing - please try again '
                  '(bookmark "%s" is missing, expected %s)')
    msgexist = ('repository changed while pushing - please try again '
                '(bookmark "%s" set on %s, expected missing)')
    for book, node in bookdata:
        currentnode = op.repo._bookmarks.get(book)
        if currentnode != node:
            if node is None:
                finalmsg = msgexist % (book, nodemod.short(currentnode))
            elif currentnode is None:
                finalmsg = msgmissing % (book, nodemod.short(node))
            else:
                finalmsg = msgstandard % (book, nodemod.short(node),
                                          nodemod.short(currentnode))
            raise error.PushRaced(finalmsg)

@parthandler('check:heads')
def handlecheckheads(op, inpart):
    """check that the heads of the repo did not change

    This is used to detect a push race when using unbundle.
    This replaces the "heads" argument of unbundle."""
    h = inpart.read(20)
    heads = []
    while len(h) == 20:
        heads.append(h)
        h = inpart.read(20)
    assert not h
    # Trigger a transaction so that we are guaranteed to have the lock now.
    if op.ui.configbool('experimental', 'bundle2lazylocking'):
        op.gettransaction()
    if sorted(heads) != sorted(op.repo.heads()):
        raise error.PushRaced('repository changed while pushing - '
                              'please try again')

1899
1897 @parthandler('check:updated-heads')
1900 @parthandler('check:updated-heads')
1898 def handlecheckupdatedheads(op, inpart):
1901 def handlecheckupdatedheads(op, inpart):
1899 """check for race on the heads touched by a push
1902 """check for race on the heads touched by a push
1900
1903
1901 This is similar to 'check:heads' but focus on the heads actually updated
1904 This is similar to 'check:heads' but focus on the heads actually updated
1902 during the push. If other activities happen on unrelated heads, it is
1905 during the push. If other activities happen on unrelated heads, it is
1903 ignored.
1906 ignored.
1904
1907
1905 This allow server with high traffic to avoid push contention as long as
1908 This allow server with high traffic to avoid push contention as long as
1906 unrelated parts of the graph are involved."""
1909 unrelated parts of the graph are involved."""
1907 h = inpart.read(20)
1910 h = inpart.read(20)
1908 heads = []
1911 heads = []
1909 while len(h) == 20:
1912 while len(h) == 20:
1910 heads.append(h)
1913 heads.append(h)
1911 h = inpart.read(20)
1914 h = inpart.read(20)
1912 assert not h
1915 assert not h
1913 # trigger a transaction so that we are guaranteed to have the lock now.
1916 # trigger a transaction so that we are guaranteed to have the lock now.
1914 if op.ui.configbool('experimental', 'bundle2lazylocking'):
1917 if op.ui.configbool('experimental', 'bundle2lazylocking'):
1915 op.gettransaction()
1918 op.gettransaction()
1916
1919
1917 currentheads = set()
1920 currentheads = set()
1918 for ls in op.repo.branchmap().itervalues():
1921 for ls in op.repo.branchmap().itervalues():
1919 currentheads.update(ls)
1922 currentheads.update(ls)
1920
1923
1921 for h in heads:
1924 for h in heads:
1922 if h not in currentheads:
1925 if h not in currentheads:
1923 raise error.PushRaced('repository changed while pushing - '
1926 raise error.PushRaced('repository changed while pushing - '
1924 'please try again')
1927 'please try again')
1925
1928
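The payload of both 'check:heads' and 'check:updated-heads' is nothing more than the 20-byte nodes concatenated back to back; the reader stops at the first short read, so no count prefix is needed. A minimal standalone Python 3 sketch of that framing, independent of Mercurial's internals (the function names here are hypothetical, not Mercurial API):

```python
import io

def encodeheads(heads):
    # The payload is simply the 20-byte nodes concatenated; no length
    # prefix is needed because the reader stops at the first short read.
    assert all(len(h) == 20 for h in heads)
    return b''.join(heads)

def decodeheads(payload):
    # Mirror of the loop in the handlers above: read 20-byte chunks
    # until fewer than 20 bytes come back.
    stream = io.BytesIO(payload)
    heads = []
    h = stream.read(20)
    while len(h) == 20:
        heads.append(h)
        h = stream.read(20)
    assert not h  # a trailing partial node would indicate corruption
    return heads
```

A trailing partial chunk trips the same `assert not h` check the real handlers use.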
@parthandler('check:phases')
def handlecheckphases(op, inpart):
    """check that phase boundaries of the repository did not change

    This is used to detect a push race.
    """
    phasetonodes = phases.binarydecode(inpart)
    unfi = op.repo.unfiltered()
    cl = unfi.changelog
    phasecache = unfi._phasecache
    msg = ('repository changed while pushing - please try again '
           '(%s is %s expected %s)')
    for expectedphase, nodes in enumerate(phasetonodes):
        for n in nodes:
            actualphase = phasecache.phase(unfi, cl.rev(n))
            if actualphase != expectedphase:
                finalmsg = msg % (nodemod.short(n),
                                  phases.phasenames[actualphase],
                                  phases.phasenames[expectedphase])
                raise error.PushRaced(finalmsg)

@parthandler('output')
def handleoutput(op, inpart):
    """forward output captured on the server to the client"""
    for line in inpart.read().splitlines():
        op.ui.status(_('remote: %s\n') % line)

@parthandler('replycaps')
def handlereplycaps(op, inpart):
    """Notify that a reply bundle should be created

    The payload contains the capabilities information for the reply"""
    caps = decodecaps(inpart.read())
    if op.reply is None:
        op.reply = bundle20(op.ui, caps)

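The 'output' part carries captured server output verbatim; the client splits it into lines and echoes each with a `remote: ` prefix. A tiny standalone sketch of that forwarding step (the function name is hypothetical, and `write` stands in for `op.ui.status`):

```python
def forwardoutput(data, write):
    # Mirrors handleoutput(): each line of server output is echoed to
    # the client prefixed with 'remote: ', one write per line.
    for line in data.splitlines():
        write(b'remote: %s\n' % line)
```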
class AbortFromPart(error.Abort):
    """Sub-class of Abort that denotes an error from a bundle2 part."""

@parthandler('error:abort', ('message', 'hint'))
def handleerrorabort(op, inpart):
    """Used to transmit abort error over the wire"""
    raise AbortFromPart(inpart.params['message'],
                        hint=inpart.params.get('hint'))

@parthandler('error:pushkey', ('namespace', 'key', 'new', 'old', 'ret',
                               'in-reply-to'))
def handleerrorpushkey(op, inpart):
    """Used to transmit failure of a mandatory pushkey over the wire"""
    kwargs = {}
    for name in ('namespace', 'key', 'new', 'old', 'ret'):
        value = inpart.params.get(name)
        if value is not None:
            kwargs[name] = value
    raise error.PushkeyFailed(inpart.params['in-reply-to'],
                              **pycompat.strkwargs(kwargs))

@parthandler('error:unsupportedcontent', ('parttype', 'params'))
def handleerrorunsupportedcontent(op, inpart):
    """Used to transmit unknown content error over the wire"""
    kwargs = {}
    parttype = inpart.params.get('parttype')
    if parttype is not None:
        kwargs['parttype'] = parttype
    params = inpart.params.get('params')
    if params is not None:
        kwargs['params'] = params.split('\0')

    raise error.BundleUnknownFeatureError(**pycompat.strkwargs(kwargs))

@parthandler('error:pushraced', ('message',))
def handleerrorpushraced(op, inpart):
    """Used to transmit push race error over the wire"""
    raise error.ResponseError(_('push failed:'), inpart.params['message'])

@parthandler('listkeys', ('namespace',))
def handlelistkeys(op, inpart):
    """retrieve pushkey namespace content stored in a bundle2"""
    namespace = inpart.params['namespace']
    r = pushkey.decodekeys(inpart.read())
    op.records.add('listkeys', (namespace, r))

@parthandler('pushkey', ('namespace', 'key', 'old', 'new'))
def handlepushkey(op, inpart):
    """process a pushkey request"""
    dec = pushkey.decode
    namespace = dec(inpart.params['namespace'])
    key = dec(inpart.params['key'])
    old = dec(inpart.params['old'])
    new = dec(inpart.params['new'])
    # Grab the transaction to ensure that we have the lock before performing
    # the pushkey.
    if op.ui.configbool('experimental', 'bundle2lazylocking'):
        op.gettransaction()
    ret = op.repo.pushkey(namespace, key, old, new)
    record = {'namespace': namespace,
              'key': key,
              'old': old,
              'new': new}
    op.records.add('pushkey', record)
    if op.reply is not None:
        rpart = op.reply.newpart('reply:pushkey')
        rpart.addparam(
            'in-reply-to', pycompat.bytestr(inpart.id), mandatory=False)
        rpart.addparam('return', '%i' % ret, mandatory=False)
    if inpart.mandatory and not ret:
        kwargs = {}
        for key in ('namespace', 'key', 'new', 'old', 'ret'):
            if key in inpart.params:
                kwargs[key] = inpart.params[key]
        raise error.PushkeyFailed(partid='%d' % inpart.id,
                                  **pycompat.strkwargs(kwargs))

@parthandler('bookmarks')
def handlebookmark(op, inpart):
    """transmit bookmark information

    The part contains binary encoded bookmark information.

    The exact behavior of this part can be controlled by the 'bookmarks' mode
    on the bundle operation.

    When mode is 'apply' (the default) the bookmark information is applied
    as is to the unbundling repository. Make sure a 'check:bookmarks' part
    is issued earlier to check for push races in such an update. This
    behavior is suitable for pushing.

    When mode is 'records', the information is recorded into the 'bookmarks'
    records of the bundle operation. This behavior is suitable for pulling.
    """
    changes = bookmarks.binarydecode(inpart)

    pushkeycompat = op.repo.ui.configbool('server', 'bookmarks-pushkey-compat')
    bookmarksmode = op.modes.get('bookmarks', 'apply')

    if bookmarksmode == 'apply':
        tr = op.gettransaction()
        bookstore = op.repo._bookmarks
        if pushkeycompat:
            allhooks = []
            for book, node in changes:
                hookargs = tr.hookargs.copy()
                hookargs['pushkeycompat'] = '1'
                hookargs['namespace'] = 'bookmarks'
                hookargs['key'] = book
                hookargs['old'] = nodemod.hex(bookstore.get(book, ''))
                hookargs['new'] = nodemod.hex(node if node is not None else '')
                allhooks.append(hookargs)

            for hookargs in allhooks:
                op.repo.hook('prepushkey', throw=True,
                             **pycompat.strkwargs(hookargs))

        bookstore.applychanges(op.repo, op.gettransaction(), changes)

        if pushkeycompat:
            def runhook():
                for hookargs in allhooks:
                    op.repo.hook('pushkey', **pycompat.strkwargs(hookargs))
            op.repo._afterlock(runhook)

    elif bookmarksmode == 'records':
        for book, node in changes:
            record = {'bookmark': book, 'node': node}
            op.records.add('bookmarks', record)
    else:
        raise error.ProgrammingError('unknown bookmark mode: %s' % bookmarksmode)

@parthandler('phase-heads')
def handlephases(op, inpart):
    """apply phases from bundle part to repo"""
    headsbyphase = phases.binarydecode(inpart)
    phases.updatephases(op.repo.unfiltered(), op.gettransaction, headsbyphase)

@parthandler('reply:pushkey', ('return', 'in-reply-to'))
def handlepushkeyreply(op, inpart):
    """retrieve the result of a pushkey request"""
    ret = int(inpart.params['return'])
    partid = int(inpart.params['in-reply-to'])
    op.records.add('pushkey', {'return': ret}, partid)

@parthandler('obsmarkers')
def handleobsmarker(op, inpart):
    """add a stream of obsmarkers to the repo"""
    tr = op.gettransaction()
    markerdata = inpart.read()
    if op.ui.config('experimental', 'obsmarkers-exchange-debug'):
        op.ui.write(('obsmarker-exchange: %i bytes received\n')
                    % len(markerdata))
    # The mergemarkers call will crash if marker creation is not enabled.
    # We want to avoid this if the part is advisory.
    if not inpart.mandatory and op.repo.obsstore.readonly:
        op.repo.ui.debug('ignoring obsolescence markers, feature not enabled\n')
        return
    new = op.repo.obsstore.mergemarkers(tr, markerdata)
    op.repo.invalidatevolatilesets()
    if new:
        op.repo.ui.status(_('%i new obsolescence markers\n') % new)
    op.records.add('obsmarkers', {'new': new})
    if op.reply is not None:
        rpart = op.reply.newpart('reply:obsmarkers')
        rpart.addparam(
            'in-reply-to', pycompat.bytestr(inpart.id), mandatory=False)
        rpart.addparam('new', '%i' % new, mandatory=False)


@parthandler('reply:obsmarkers', ('new', 'in-reply-to'))
def handleobsmarkerreply(op, inpart):
    """retrieve the result of an obsmarkers part"""
    ret = int(inpart.params['new'])
    partid = int(inpart.params['in-reply-to'])
    op.records.add('obsmarkers', {'new': ret}, partid)

@parthandler('hgtagsfnodes')
def handlehgtagsfnodes(op, inpart):
    """Applies .hgtags fnodes cache entries to the local repo.

    Payload is pairs of 20 byte changeset nodes and filenodes.
    """
    # Grab the transaction so we ensure that we have the lock at this point.
    if op.ui.configbool('experimental', 'bundle2lazylocking'):
        op.gettransaction()
    cache = tags.hgtagsfnodescache(op.repo.unfiltered())

    count = 0
    while True:
        node = inpart.read(20)
        fnode = inpart.read(20)
        if len(node) < 20 or len(fnode) < 20:
            op.ui.debug('ignoring incomplete received .hgtags fnodes data\n')
            break
        cache.setfnode(node, fnode)
        count += 1

    cache.write()
    op.ui.debug('applied %i hgtags fnodes cache entries\n' % count)

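As the docstring says, the 'hgtagsfnodes' payload is a flat sequence of (changeset node, filenode) pairs, each member exactly 20 bytes, with a short read marking the end. A standalone Python 3 sketch of that framing, detached from Mercurial's cache objects (the function names are hypothetical):

```python
import io

def encodefnodes(entries):
    # entries is a list of (changeset node, .hgtags filenode) pairs,
    # each exactly 20 bytes, written back to back with no delimiter.
    chunks = []
    for node, fnode in entries:
        assert len(node) == 20 and len(fnode) == 20
        chunks.append(node + fnode)
    return b''.join(chunks)

def decodefnodes(payload):
    # Mirrors handlehgtagsfnodes(): read 20+20 byte pairs until a short
    # read, which marks the end of the payload.
    stream = io.BytesIO(payload)
    entries = []
    while True:
        node = stream.read(20)
        fnode = stream.read(20)
        if len(node) < 20 or len(fnode) < 20:
            break
        entries.append((node, fnode))
    return entries
```

The real handler treats a trailing incomplete pair as ignorable rather than fatal, which the `break` reproduces.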
rbcstruct = struct.Struct('>III')

@parthandler('cache:rev-branch-cache')
def handlerbc(op, inpart):
    """receive a rev-branch-cache payload and update the local cache

    The payload is a series of data related to each branch

    1) branch name length
    2) number of open heads
    3) number of closed heads
    4) open heads nodes
    5) closed heads nodes
    """
    total = 0
    rawheader = inpart.read(rbcstruct.size)
    cache = op.repo.revbranchcache()
    cl = op.repo.unfiltered().changelog
    while rawheader:
        header = rbcstruct.unpack(rawheader)
        total += header[1] + header[2]
        utf8branch = inpart.read(header[0])
        branch = encoding.tolocal(utf8branch)
        for x in xrange(header[1]):
            node = inpart.read(20)
            rev = cl.rev(node)
            cache.setdata(branch, rev, node, False)
        for x in xrange(header[2]):
            node = inpart.read(20)
            rev = cl.rev(node)
            cache.setdata(branch, rev, node, True)
        rawheader = inpart.read(rbcstruct.size)
    cache.write()

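The five-field layout in the docstring above maps directly onto the `struct.Struct('>III')` header: three big-endian 32-bit counts, then the UTF-8 branch name, then the open and closed head nodes. A standalone Python 3 round-trip sketch of that wire layout, with the repository-update side left out (encoder/decoder names are hypothetical, not Mercurial API):

```python
import io
import struct

# Header: branch name length, open head count, closed head count.
rbcstruct = struct.Struct('>III')

def encoderbc(branches):
    # branches is a list of (utf8 branch name, (openheads, closedheads))
    # where each heads list holds 20-byte nodes; layout follows the
    # docstring above: header, name, open nodes, closed nodes.
    chunks = []
    for utf8branch, (openheads, closedheads) in branches:
        chunks.append(rbcstruct.pack(len(utf8branch),
                                     len(openheads), len(closedheads)))
        chunks.append(utf8branch)
        chunks.extend(openheads)
        chunks.extend(closedheads)
    return b''.join(chunks)

def decoderbc(payload):
    # Mirror of the loop in handlerbc(), minus the cache updates: an
    # empty header read terminates the series.
    stream = io.BytesIO(payload)
    result = []
    rawheader = stream.read(rbcstruct.size)
    while rawheader:
        namelen, nopen, nclosed = rbcstruct.unpack(rawheader)
        utf8branch = stream.read(namelen)
        openheads = [stream.read(20) for _ in range(nopen)]
        closedheads = [stream.read(20) for _ in range(nclosed)]
        result.append((utf8branch, (openheads, closedheads)))
        rawheader = stream.read(rbcstruct.size)
    return result
```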
@parthandler('pushvars')
def bundle2getvars(op, part):
    '''unbundle a bundle2 containing shellvars on the server'''
    # An option to disable unbundling on server-side for security reasons
    if op.ui.configbool('push', 'pushvars.server'):
        hookargs = {}
        for key, value in part.advisoryparams:
            key = key.upper()
            # We want pushed variables to have USERVAR_ prepended so we know
            # they came from the --pushvar flag.
            key = "USERVAR_" + key
            hookargs[key] = value
        op.addhookargs(hookargs)

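The key transformation in the loop above is small but worth isolating: every pushed variable is upper-cased and namespaced with `USERVAR_` so hooks can tell user-supplied values apart from Mercurial's own hook arguments. A minimal standalone sketch of just that step (the function name is hypothetical):

```python
def pushvarstohookargs(advisoryparams):
    # Mirrors the loop in bundle2getvars(): each (key, value) pair from
    # --pushvar is upper-cased and given a USERVAR_ prefix.
    hookargs = {}
    for key, value in advisoryparams:
        hookargs['USERVAR_' + key.upper()] = value
    return hookargs
```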
@parthandler('stream2', ('requirements', 'filecount', 'bytecount'))
def handlestreamv2bundle(op, part):

    requirements = urlreq.unquote(part.params['requirements']).split(',')
    filecount = int(part.params['filecount'])
    bytecount = int(part.params['bytecount'])

    repo = op.repo
    if len(repo):
        msg = _('cannot apply stream clone to non empty repository')
        raise error.Abort(msg)

    repo.ui.debug('applying stream bundle\n')
    streamclone.applybundlev2(repo, part, filecount, bytecount,
                              requirements)
@@ -1,5638 +1,5639 @@
# commands.py - command processing for mercurial
#
# Copyright 2005-2007 Matt Mackall <mpm@selenic.com>
#
# This software may be used and distributed according to the terms of the
# GNU General Public License version 2 or any later version.

from __future__ import absolute_import

import difflib
import errno
import os
import re
import sys

from .i18n import _
from .node import (
    hex,
    nullid,
    nullrev,
    short,
)
from . import (
    archival,
    bookmarks,
    bundle2,
    changegroup,
    cmdutil,
    copies,
    debugcommands as debugcommandsmod,
    destutil,
    dirstateguard,
    discovery,
    encoding,
    error,
    exchange,
    extensions,
    formatter,
    graphmod,
    hbisect,
    help,
    hg,
    lock as lockmod,
    logcmdutil,
    merge as mergemod,
    obsolete,
    obsutil,
    patch,
    phases,
    pycompat,
    rcutil,
    registrar,
    revsetlang,
    rewriteutil,
    scmutil,
    server,
    streamclone,
    tags as tagsmod,
    templatekw,
    ui as uimod,
    util,
    wireprotoserver,
)
from .utils import (
    dateutil,
    procutil,
    stringutil,
)

release = lockmod.release

table = {}
table.update(debugcommandsmod.command._table)

command = registrar.command(table)
readonly = registrar.command.readonly

# common command options

globalopts = [
    ('R', 'repository', '',
     _('repository root directory or name of overlay bundle file'),
     _('REPO')),
    ('', 'cwd', '',
     _('change working directory'), _('DIR')),
    ('y', 'noninteractive', None,
     _('do not prompt, automatically pick the first choice for all prompts')),
    ('q', 'quiet', None, _('suppress output')),
    ('v', 'verbose', None, _('enable additional output')),
    ('', 'color', '',
     # i18n: 'always', 'auto', 'never', and 'debug' are keywords
     # and should not be translated
     _("when to colorize (boolean, always, auto, never, or debug)"),
     _('TYPE')),
    ('', 'config', [],
     _('set/override config option (use \'section.name=value\')'),
     _('CONFIG')),
    ('', 'debug', None, _('enable debugging output')),
    ('', 'debugger', None, _('start debugger')),
    ('', 'encoding', encoding.encoding, _('set the charset encoding'),
     _('ENCODE')),
    ('', 'encodingmode', encoding.encodingmode,
     _('set the charset encoding mode'), _('MODE')),
    ('', 'traceback', None, _('always print a traceback on exception')),
    ('', 'time', None, _('time how long the command takes')),
    ('', 'profile', None, _('print command execution profile')),
    ('', 'version', None, _('output version information and exit')),
    ('h', 'help', None, _('display help and exit')),
    ('', 'hidden', False, _('consider hidden changesets')),
    ('', 'pager', 'auto',
     _("when to paginate (boolean, always, auto, or never)"), _('TYPE')),
]

dryrunopts = cmdutil.dryrunopts
remoteopts = cmdutil.remoteopts
walkopts = cmdutil.walkopts
commitopts = cmdutil.commitopts
commitopts2 = cmdutil.commitopts2
formatteropts = cmdutil.formatteropts
templateopts = cmdutil.templateopts
logopts = cmdutil.logopts
diffopts = cmdutil.diffopts
diffwsopts = cmdutil.diffwsopts
diffopts2 = cmdutil.diffopts2
mergetoolopts = cmdutil.mergetoolopts
similarityopts = cmdutil.similarityopts
subrepoopts = cmdutil.subrepoopts
debugrevlogopts = cmdutil.debugrevlogopts

# Commands start here, listed alphabetically

@command('^add',
    walkopts + subrepoopts + dryrunopts,
    _('[OPTION]... [FILE]...'),
    inferrepo=True)
def add(ui, repo, *pats, **opts):
    """add the specified files on the next commit

    Schedule files to be version controlled and added to the
    repository.

    The files will be added to the repository at the next commit. To
    undo an add before that, see :hg:`forget`.

    If no names are given, add all files to the repository (except
    files matching ``.hgignore``).

    .. container:: verbose

       Examples:

         - New (unknown) files are added
           automatically by :hg:`add`::

             $ ls
             foo.c
             $ hg status
             ? foo.c
             $ hg add
             adding foo.c
             $ hg status
             A foo.c

         - Specific files to be added can be specified::

             $ ls
             bar.c foo.c
             $ hg status
             ? bar.c
             ? foo.c
             $ hg add bar.c
             $ hg status
             A bar.c
             ? foo.c

    Returns 0 if all files are successfully added.
    """

    m = scmutil.match(repo[None], pats, pycompat.byteskwargs(opts))
    rejected = cmdutil.add(ui, repo, m, "", False, **opts)
    return rejected and 1 or 0

@command('addremove',
    similarityopts + subrepoopts + walkopts + dryrunopts,
    _('[OPTION]... [FILE]...'),
    inferrepo=True)
def addremove(ui, repo, *pats, **opts):
    """add all new files, delete all missing files

    Add all new files and remove all missing files from the
    repository.

    Unless names are given, new files are ignored if they match any of
    the patterns in ``.hgignore``. As with add, these changes take
    effect at the next commit.

    Use the -s/--similarity option to detect renamed files. This
    option takes a percentage between 0 (disabled) and 100 (files must
    be identical) as its parameter. With a parameter greater than 0,
200 this compares every removed file with every added file and records
200 this compares every removed file with every added file and records
201 those similar enough as renames. Detecting renamed files this way
201 those similar enough as renames. Detecting renamed files this way
202 can be expensive. After using this option, :hg:`status -C` can be
202 can be expensive. After using this option, :hg:`status -C` can be
203 used to check which files were identified as moved or renamed. If
203 used to check which files were identified as moved or renamed. If
204 not specified, -s/--similarity defaults to 100 and only renames of
204 not specified, -s/--similarity defaults to 100 and only renames of
205 identical files are detected.
205 identical files are detected.
206
206
207 .. container:: verbose
207 .. container:: verbose
208
208
209 Examples:
209 Examples:
210
210
211 - A number of files (bar.c and foo.c) are new,
211 - A number of files (bar.c and foo.c) are new,
212 while foobar.c has been removed (without using :hg:`remove`)
212 while foobar.c has been removed (without using :hg:`remove`)
213 from the repository::
213 from the repository::
214
214
215 $ ls
215 $ ls
216 bar.c foo.c
216 bar.c foo.c
217 $ hg status
217 $ hg status
218 ! foobar.c
218 ! foobar.c
219 ? bar.c
219 ? bar.c
220 ? foo.c
220 ? foo.c
221 $ hg addremove
221 $ hg addremove
222 adding bar.c
222 adding bar.c
223 adding foo.c
223 adding foo.c
224 removing foobar.c
224 removing foobar.c
225 $ hg status
225 $ hg status
226 A bar.c
226 A bar.c
227 A foo.c
227 A foo.c
228 R foobar.c
228 R foobar.c
229
229
230 - A file foobar.c was moved to foo.c without using :hg:`rename`.
230 - A file foobar.c was moved to foo.c without using :hg:`rename`.
231 Afterwards, it was edited slightly::
231 Afterwards, it was edited slightly::
232
232
233 $ ls
233 $ ls
234 foo.c
234 foo.c
235 $ hg status
235 $ hg status
236 ! foobar.c
236 ! foobar.c
237 ? foo.c
237 ? foo.c
238 $ hg addremove --similarity 90
238 $ hg addremove --similarity 90
239 removing foobar.c
239 removing foobar.c
240 adding foo.c
240 adding foo.c
241 recording removal of foobar.c as rename to foo.c (94% similar)
241 recording removal of foobar.c as rename to foo.c (94% similar)
242 $ hg status -C
242 $ hg status -C
243 A foo.c
243 A foo.c
244 foobar.c
244 foobar.c
245 R foobar.c
245 R foobar.c
246
246
247 Returns 0 if all files are successfully added.
247 Returns 0 if all files are successfully added.
248 """
248 """
249 opts = pycompat.byteskwargs(opts)
249 opts = pycompat.byteskwargs(opts)
250 try:
250 try:
251 sim = float(opts.get('similarity') or 100)
251 sim = float(opts.get('similarity') or 100)
252 except ValueError:
252 except ValueError:
253 raise error.Abort(_('similarity must be a number'))
253 raise error.Abort(_('similarity must be a number'))
254 if sim < 0 or sim > 100:
254 if sim < 0 or sim > 100:
255 raise error.Abort(_('similarity must be between 0 and 100'))
255 raise error.Abort(_('similarity must be between 0 and 100'))
256 matcher = scmutil.match(repo[None], pats, opts)
256 matcher = scmutil.match(repo[None], pats, opts)
257 return scmutil.addremove(repo, matcher, "", opts, similarity=sim / 100.0)
257 return scmutil.addremove(repo, matcher, "", opts, similarity=sim / 100.0)
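
The -s/--similarity threshold used above boils down to a content-similarity score computed between every removed and every added file, with the best match over the threshold recorded as a rename. As an illustrative stand-in (Mercurial's `scmutil.addremove` uses its own bdiff-based similarity code, not `difflib`; these helper names are hypothetical), a minimal sketch of that matching:

```python
import difflib

def similarity_score(old_data, new_data):
    # Ratio of matching content between two files, in [0.0, 1.0].
    # Mirrors the idea behind `hg addremove -s`: 1.0 means identical,
    # 0 disables rename detection entirely.
    return difflib.SequenceMatcher(None, old_data, new_data).ratio()

def detect_renames(removed, added, threshold):
    # removed/added: dicts mapping filename -> file content.
    # threshold: the -s/--similarity percentage scaled to [0.0, 1.0],
    # matching the `similarity=sim / 100.0` call above.
    renames = {}
    for new_name, new_data in added.items():
        best = None
        for old_name, old_data in removed.items():
            score = similarity_score(old_data, new_data)
            if score >= threshold and (best is None or score > best[1]):
                best = (old_name, score)
        if best:
            renames[new_name] = best[0]
    return renames
```

With a threshold of 0.9, a removed/added pair whose contents are 94% alike would be recorded as a rename, matching the `(94% similar)` message in the example above.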

@command('^annotate|blame',
    [('r', 'rev', '', _('annotate the specified revision'), _('REV')),
    ('', 'follow', None,
     _('follow copies/renames and list the filename (DEPRECATED)')),
    ('', 'no-follow', None, _("don't follow copies and renames")),
    ('a', 'text', None, _('treat all files as text')),
    ('u', 'user', None, _('list the author (long with -v)')),
    ('f', 'file', None, _('list the filename')),
    ('d', 'date', None, _('list the date (short with -q)')),
    ('n', 'number', None, _('list the revision number (default)')),
    ('c', 'changeset', None, _('list the changeset')),
    ('l', 'line-number', None, _('show line number at the first appearance')),
    ('', 'skip', [], _('revision to not display (EXPERIMENTAL)'), _('REV')),
    ] + diffwsopts + walkopts + formatteropts,
    _('[-r REV] [-f] [-a] [-u] [-d] [-n] [-c] [-l] FILE...'),
    inferrepo=True)
def annotate(ui, repo, *pats, **opts):
    """show changeset information by line for each file

    List changes in files, showing the revision id responsible for
    each line.

    This command is useful for discovering when a change was made and
    by whom.

    If you include --file, --user, or --date, the revision number is
    suppressed unless you also include --number.

    Without the -a/--text option, annotate will avoid processing files
    it detects as binary. With -a, annotate will annotate the file
    anyway, although the results will probably be neither useful
    nor desirable.

    Returns 0 on success.
    """
    opts = pycompat.byteskwargs(opts)
    if not pats:
        raise error.Abort(_('at least one filename or pattern is required'))

    if opts.get('follow'):
        # --follow is deprecated and now just an alias for -f/--file
        # to mimic the behavior of Mercurial before version 1.5
        opts['file'] = True

    rev = opts.get('rev')
    if rev:
        repo = scmutil.unhidehashlikerevs(repo, [rev], 'nowarn')
    ctx = scmutil.revsingle(repo, rev)

    rootfm = ui.formatter('annotate', opts)
    if ui.quiet:
        datefunc = dateutil.shortdate
    else:
        datefunc = dateutil.datestr
    if ctx.rev() is None:
        def hexfn(node):
            if node is None:
                return None
            else:
                return rootfm.hexfunc(node)
        if opts.get('changeset'):
            # omit "+" suffix which is appended to node hex
            def formatrev(rev):
                if rev is None:
                    return '%d' % ctx.p1().rev()
                else:
                    return '%d' % rev
        else:
            def formatrev(rev):
                if rev is None:
                    return '%d+' % ctx.p1().rev()
                else:
                    return '%d ' % rev
        def formathex(hex):
            if hex is None:
                return '%s+' % rootfm.hexfunc(ctx.p1().node())
            else:
                return '%s ' % hex
    else:
        hexfn = rootfm.hexfunc
        formatrev = formathex = pycompat.bytestr

    opmap = [('user', ' ', lambda x: x.fctx.user(), ui.shortuser),
             ('number', ' ', lambda x: x.fctx.rev(), formatrev),
             ('changeset', ' ', lambda x: hexfn(x.fctx.node()), formathex),
             ('date', ' ', lambda x: x.fctx.date(), util.cachefunc(datefunc)),
             ('file', ' ', lambda x: x.fctx.path(), pycompat.bytestr),
             ('line_number', ':', lambda x: x.lineno, pycompat.bytestr),
            ]
    fieldnamemap = {'number': 'rev', 'changeset': 'node'}

    if (not opts.get('user') and not opts.get('changeset')
        and not opts.get('date') and not opts.get('file')):
        opts['number'] = True

    linenumber = opts.get('line_number') is not None
    if linenumber and (not opts.get('changeset')) and (not opts.get('number')):
        raise error.Abort(_('at least one of -n/-c is required for -l'))

    ui.pager('annotate')

    if rootfm.isplain():
        def makefunc(get, fmt):
            return lambda x: fmt(get(x))
    else:
        def makefunc(get, fmt):
            return get
    funcmap = [(makefunc(get, fmt), sep) for op, sep, get, fmt in opmap
               if opts.get(op)]
    funcmap[0] = (funcmap[0][0], '') # no separator in front of first column
    fields = ' '.join(fieldnamemap.get(op, op) for op, sep, get, fmt in opmap
                      if opts.get(op))

    def bad(x, y):
        raise error.Abort("%s: %s" % (x, y))

    m = scmutil.match(ctx, pats, opts, badfn=bad)

    follow = not opts.get('no_follow')
    diffopts = patch.difffeatureopts(ui, opts, section='annotate',
                                     whitespace=True)
    skiprevs = opts.get('skip')
    if skiprevs:
        skiprevs = scmutil.revrange(repo, skiprevs)

    for abs in ctx.walk(m):
        fctx = ctx[abs]
        rootfm.startitem()
        rootfm.data(abspath=abs, path=m.rel(abs))
        if not opts.get('text') and fctx.isbinary():
            rootfm.plain(_("%s: binary file\n")
                         % ((pats and m.rel(abs)) or abs))
            continue

        fm = rootfm.nested('lines')
        lines = fctx.annotate(follow=follow, skiprevs=skiprevs,
                              diffopts=diffopts)
        if not lines:
            fm.end()
            continue
        formats = []
        pieces = []

        for f, sep in funcmap:
            l = [f(n) for n in lines]
            if fm.isplain():
                sizes = [encoding.colwidth(x) for x in l]
                ml = max(sizes)
                formats.append([sep + ' ' * (ml - w) + '%s' for w in sizes])
            else:
                formats.append(['%s' for x in l])
            pieces.append(l)

        for f, p, n in zip(zip(*formats), zip(*pieces), lines):
            fm.startitem()
            fm.context(fctx=n.fctx)
            fm.write(fields, "".join(f), *p)
            if n.skip:
                fmt = "* %s"
            else:
                fmt = ": %s"
            fm.write('line', fmt, n.text)

        if not lines[-1].text.endswith('\n'):
            fm.plain('\n')
        fm.end()

    rootfm.end()

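
In the plain-formatter path above, each annotate column is right-aligned to the width of its widest cell (the `formats.append([sep + ' ' * (ml - w) + '%s' for w in sizes])` step), and the separator in front of the first column is dropped. A self-contained sketch of the same padding scheme, using `len` in place of `encoding.colwidth` and hypothetical helper names (not Mercurial API):

```python
def align_columns(columns, seps):
    # columns: list of columns, each a list of cell strings (one per line).
    # seps: separator printed before each column; the first is dropped,
    # matching "no separator in front of first column" above.
    formats = []
    for i, col in enumerate(columns):
        widths = [len(cell) for cell in col]
        ml = max(widths)
        sep = '' if i == 0 else seps[i]
        # Right-align: pad each cell with spaces up to the column max.
        formats.append([sep + ' ' * (ml - w) + '%s' for w in widths])
    # Transpose: one format/cell tuple per output line, then join columns.
    lines = []
    for fmts, cells in zip(zip(*formats), zip(*columns)):
        lines.append(''.join(f % c for f, c in zip(fmts, cells)))
    return lines
```

For example, a user column `['alice', 'bob']` and a revision column `['12', '3']` come out as `'alice 12'` and `'  bob  3'`: every column lines up on its right edge regardless of cell width.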
@command('archive',
    [('', 'no-decode', None, _('do not pass files through decoders')),
    ('p', 'prefix', '', _('directory prefix for files in archive'),
     _('PREFIX')),
    ('r', 'rev', '', _('revision to distribute'), _('REV')),
    ('t', 'type', '', _('type of distribution to create'), _('TYPE')),
    ] + subrepoopts + walkopts,
    _('[OPTION]... DEST'))
def archive(ui, repo, dest, **opts):
    '''create an unversioned archive of a repository revision

    By default, the revision used is the parent of the working
    directory; use -r/--rev to specify a different revision.

    The archive type is automatically detected based on file
    extension (to override, use -t/--type).

    .. container:: verbose

      Examples:

      - create a zip file containing the 1.0 release::

          hg archive -r 1.0 project-1.0.zip

      - create a tarball excluding .hg files::

          hg archive project.tar.gz -X ".hg*"

    Valid types are:

    :``files``: a directory full of files (default)
    :``tar``: tar archive, uncompressed
    :``tbz2``: tar archive, compressed using bzip2
    :``tgz``: tar archive, compressed using gzip
    :``uzip``: zip archive, uncompressed
    :``zip``: zip archive, compressed using deflate

    The exact name of the destination archive or directory is given
    using a format string; see :hg:`help export` for details.

    Each member added to an archive file has a directory prefix
    prepended. Use -p/--prefix to specify a format string for the
    prefix. The default is the basename of the archive, with suffixes
    removed.

    Returns 0 on success.
    '''

    opts = pycompat.byteskwargs(opts)
    rev = opts.get('rev')
    if rev:
        repo = scmutil.unhidehashlikerevs(repo, [rev], 'nowarn')
    ctx = scmutil.revsingle(repo, rev)
    if not ctx:
        raise error.Abort(_('no working directory: please specify a revision'))
    node = ctx.node()
    dest = cmdutil.makefilename(ctx, dest)
    if os.path.realpath(dest) == repo.root:
        raise error.Abort(_('repository root cannot be destination'))

    kind = opts.get('type') or archival.guesskind(dest) or 'files'
    prefix = opts.get('prefix')

    if dest == '-':
        if kind == 'files':
            raise error.Abort(_('cannot archive plain files to stdout'))
        dest = cmdutil.makefileobj(ctx, dest)
        if not prefix:
            prefix = os.path.basename(repo.root) + '-%h'

    prefix = cmdutil.makefilename(ctx, prefix)
    match = scmutil.match(ctx, [], opts)
    archival.archive(repo, dest, node, kind, not opts.get('no_decode'),
                     match, prefix, subrepos=opts.get('subrepos'))

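
The line `kind = opts.get('type') or archival.guesskind(dest) or 'files'` above falls back from an explicit -t/--type to extension-based detection, and finally to `files`. A hedged sketch of such an extension mapping, built only from the type table in the docstring (illustrative and incomplete; not the actual `archival.guesskind` implementation):

```python
# Suffix -> archive kind, per the "Valid types" table in the docstring.
# Longer suffixes come first so '.tar.gz' wins over '.tar'.
_KINDS = [
    ('.tar.bz2', 'tbz2'), ('.tbz2', 'tbz2'),
    ('.tar.gz', 'tgz'), ('.tgz', 'tgz'),
    ('.tar', 'tar'),
    ('.zip', 'zip'),
]

def guess_kind(dest):
    # Return the archive kind implied by dest's extension, or None if
    # unrecognized, mirroring the `or 'files'` fallback above.
    for suffix, kind in _KINDS:
        if dest.endswith(suffix):
            return kind
    return None
```

So `project-1.0.zip` maps to `zip` and `project.tar.gz` to `tgz`, while a bare directory name yields `None` and falls through to `files`.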
@command('backout',
    [('', 'merge', None, _('merge with old dirstate parent after backout')),
    ('', 'commit', None,
     _('commit if no conflicts were encountered (DEPRECATED)')),
    ('', 'no-commit', None, _('do not commit')),
    ('', 'parent', '',
     _('parent to choose when backing out merge (DEPRECATED)'), _('REV')),
    ('r', 'rev', '', _('revision to backout'), _('REV')),
    ('e', 'edit', False, _('invoke editor on commit messages')),
    ] + mergetoolopts + walkopts + commitopts + commitopts2,
    _('[OPTION]... [-r] REV'))
def backout(ui, repo, node=None, rev=None, **opts):
    '''reverse effect of earlier changeset

    Prepare a new changeset with the effect of REV undone in the
    current working directory. If no conflicts were encountered,
    it will be committed immediately.

    If REV is the parent of the working directory, then this new changeset
    is committed automatically (unless --no-commit is specified).

    .. note::

       :hg:`backout` cannot be used to fix either an unwanted or
       incorrect merge.

    .. container:: verbose

      Examples:

      - Reverse the effect of the parent of the working directory.
        This backout will be committed immediately::

          hg backout -r .

      - Reverse the effect of previous bad revision 23::

          hg backout -r 23

      - Reverse the effect of previous bad revision 23 and
        leave changes uncommitted::

          hg backout -r 23 --no-commit
          hg commit -m "Backout revision 23"

      By default, the pending changeset will have one parent,
      maintaining a linear history. With --merge, the pending
      changeset will instead have two parents: the old parent of the
      working directory and a new child of REV that simply undoes REV.

      Before version 1.7, the behavior without --merge was equivalent
      to specifying --merge followed by :hg:`update --clean .` to
      cancel the merge and leave the child of REV as a head to be
      merged separately.

    See :hg:`help dates` for a list of formats valid for -d/--date.

    See :hg:`help revert` for a way to restore files to the state
    of another revision.

    Returns 0 on success, 1 if nothing to backout or there are unresolved
    files.
    '''
    wlock = lock = None
    try:
        wlock = repo.wlock()
        lock = repo.lock()
        return _dobackout(ui, repo, node, rev, **opts)
    finally:
        release(lock, wlock)

def _dobackout(ui, repo, node=None, rev=None, **opts):
    opts = pycompat.byteskwargs(opts)
    if opts.get('commit') and opts.get('no_commit'):
        raise error.Abort(_("cannot use --commit with --no-commit"))
    if opts.get('merge') and opts.get('no_commit'):
        raise error.Abort(_("cannot use --merge with --no-commit"))

    if rev and node:
        raise error.Abort(_("please specify just one revision"))

    if not rev:
        rev = node

    if not rev:
        raise error.Abort(_("please specify a revision to backout"))

    date = opts.get('date')
    if date:
        opts['date'] = dateutil.parsedate(date)

    cmdutil.checkunfinished(repo)
    cmdutil.bailifchanged(repo)
    node = scmutil.revsingle(repo, rev).node()

    op1, op2 = repo.dirstate.parents()
    if not repo.changelog.isancestor(node, op1):
        raise error.Abort(_('cannot backout change that is not an ancestor'))

    p1, p2 = repo.changelog.parents(node)
    if p1 == nullid:
        raise error.Abort(_('cannot backout a change with no parents'))
    if p2 != nullid:
        if not opts.get('parent'):
            raise error.Abort(_('cannot backout a merge changeset'))
        p = repo.lookup(opts['parent'])
        if p not in (p1, p2):
            raise error.Abort(_('%s is not a parent of %s') %
                              (short(p), short(node)))
        parent = p
    else:
        if opts.get('parent'):
            raise error.Abort(_('cannot use --parent on non-merge changeset'))
        parent = p1

    # the backout should appear on the same branch
    branch = repo.dirstate.branch()
    bheads = repo.branchheads(branch)
    rctx = scmutil.revsingle(repo, hex(parent))
    if not opts.get('merge') and op1 != node:
        dsguard = dirstateguard.dirstateguard(repo, 'backout')
        try:
            ui.setconfig('ui', 'forcemerge', opts.get('tool', ''),
                         'backout')
            stats = mergemod.update(repo, parent, True, True, node, False)
            repo.setparents(op1, op2)
            dsguard.close()
            hg._showstats(repo, stats)
            if stats.unresolvedcount:
                repo.ui.status(_("use 'hg resolve' to retry unresolved "
                                 "file merges\n"))
                return 1
        finally:
            ui.setconfig('ui', 'forcemerge', '', '')
            lockmod.release(dsguard)
    else:
        hg.clean(repo, node, show_stats=False)
        repo.dirstate.setbranch(branch)
        cmdutil.revert(ui, repo, rctx, repo.dirstate.parents())

    if opts.get('no_commit'):
        msg = _("changeset %s backed out, "
                "don't forget to commit.\n")
        ui.status(msg % short(node))
        return 0

    def commitfunc(ui, repo, message, match, opts):
        editform = 'backout'
        e = cmdutil.getcommiteditor(editform=editform,
                                    **pycompat.strkwargs(opts))
        if not message:
            # we don't translate commit messages
            message = "Backed out changeset %s" % short(node)
            e = cmdutil.getcommiteditor(edit=True, editform=editform)
        return repo.commit(message, opts.get('user'), opts.get('date'),
                           match, editor=e)
    newnode = cmdutil.commit(ui, repo, commitfunc, [], opts)
    if not newnode:
        ui.status(_("nothing changed\n"))
        return 1
    cmdutil.commitstatus(repo, newnode, branch, bheads)

    def nice(node):
        return '%d:%s' % (repo.changelog.rev(node), short(node))
    ui.status(_('changeset %s backs out changeset %s\n') %
              (nice(repo.changelog.tip()), nice(node)))
    if opts.get('merge') and op1 != node:
        hg.clean(repo, op1, show_stats=False)
        ui.status(_('merging with changeset %s\n')
                  % nice(repo.changelog.tip()))
        try:
            ui.setconfig('ui', 'forcemerge', opts.get('tool', ''),
                         'backout')
            return hg.merge(repo, hex(repo.changelog.tip()))
        finally:
            ui.setconfig('ui', 'forcemerge', '', '')
    return 0

@command('bisect',
    [('r', 'reset', False, _('reset bisect state')),
    ('g', 'good', False, _('mark changeset good')),
    ('b', 'bad', False, _('mark changeset bad')),
    ('s', 'skip', False, _('skip testing changeset')),
    ('e', 'extend', False, _('extend the bisect range')),
    ('c', 'command', '', _('use command to check changeset state'), _('CMD')),
    ('U', 'noupdate', False, _('do not update to target'))],
    _("[-gbsr] [-U] [-c CMD] [REV]"))
def bisect(ui, repo, rev=None, extra=None, command=None,
           reset=None, good=None, bad=None, skip=None, extend=None,
           noupdate=None):
    """subdivision search of changesets

    This command helps to find changesets which introduce problems. To
    use, mark the earliest changeset you know exhibits the problem as
    bad, then mark the latest changeset which is free from the problem
    as good. Bisect will update your working directory to a revision
    for testing (unless the -U/--noupdate option is specified). Once
    you have performed tests, mark the working directory as good or
    bad, and bisect will either update to another candidate changeset
    or announce that it has found the bad revision.

    As a shortcut, you can also use the revision argument to mark a
    revision as good or bad without checking it out first.

    If you supply a command, it will be used for automatic bisection.
    The environment variable HG_NODE will contain the ID of the
    changeset being tested. The exit status of the command will be
    used to mark revisions as good or bad: status 0 means good, 125
    means to skip the revision, 127 (command not found) will abort the
    bisection, and any other non-zero exit status means the revision
    is bad.

    .. container:: verbose

      Some examples:

      - start a bisection with known bad revision 34, and good revision 12::

          hg bisect --bad 34
          hg bisect --good 12

      - advance the current bisection by marking current revision as good or
        bad::

          hg bisect --good
          hg bisect --bad

      - mark the current revision, or a known revision, to be skipped (e.g. if
        that revision is not usable because of another issue)::

          hg bisect --skip
          hg bisect --skip 23

      - skip all revisions that do not touch directories ``foo`` or ``bar``::

          hg bisect --skip "!( file('path:foo') & file('path:bar') )"

      - forget the current bisection::

          hg bisect --reset

      - use 'make && make tests' to automatically find the first broken
        revision::

          hg bisect --reset
          hg bisect --bad 34
          hg bisect --good 12
          hg bisect --command "make && make tests"

      - see all changesets whose states are already known in the current
        bisection::

          hg log -r "bisect(pruned)"

      - see the changeset currently being bisected (especially useful
        if running with -U/--noupdate)::

          hg log -r "bisect(current)"

      - see all changesets that took part in the current bisection::

          hg log -r "bisect(range)"

      - you can even get a nice graph::

          hg log --graph -r "bisect(range)"

    See :hg:`help revisions.bisect` for more about the `bisect()` predicate.

    Returns 0 on success.
    """
    # backward compatibility
    if rev in "good bad reset init".split():
        ui.warn(_("(use of 'hg bisect <cmd>' is deprecated)\n"))
        cmd, rev, extra = rev, extra, None
        if cmd == "good":
            good = True
        elif cmd == "bad":
            bad = True
        else:
            reset = True
    elif extra:
        raise error.Abort(_('incompatible arguments'))

    incompatibles = {
        '--bad': bad,
        '--command': bool(command),
        '--extend': extend,
        '--good': good,
        '--reset': reset,
        '--skip': skip,
    }

    enabled = [x for x in incompatibles if incompatibles[x]]

    if len(enabled) > 1:
        raise error.Abort(_('%s and %s are incompatible') %
                          tuple(sorted(enabled)[0:2]))

    if reset:
        hbisect.resetstate(repo)
        return

    state = hbisect.load_state(repo)

    # update state
    if good or bad or skip:
        if rev:
            nodes = [repo.lookup(i) for i in scmutil.revrange(repo, [rev])]
        else:
            nodes = [repo.lookup('.')]
        if good:
            state['good'] += nodes
        elif bad:
            state['bad'] += nodes
        elif skip:
            state['skip'] += nodes
        hbisect.save_state(repo, state)
        if not (state['good'] and state['bad']):
            return

    def mayupdate(repo, node, show_stats=True):
        """common used update sequence"""
        if noupdate:
            return
        cmdutil.checkunfinished(repo)
        cmdutil.bailifchanged(repo)
        return hg.clean(repo, node, show_stats=show_stats)

    displayer = logcmdutil.changesetdisplayer(ui, repo, {})

    if command:
        changesets = 1
        if noupdate:
            try:
                node = state['current'][0]
            except LookupError:
                raise error.Abort(_('current bisect revision is unknown - '
                                    'start a new bisect to fix'))
        else:
            node, p2 = repo.dirstate.parents()
            if p2 != nullid:
                raise error.Abort(_('current bisect revision is a merge'))
        if rev:
            node = repo[scmutil.revsingle(repo, rev, node)].node()
        try:
            while changesets:
                # update state
                state['current'] = [node]
                hbisect.save_state(repo, state)
                status = ui.system(command, environ={'HG_NODE': hex(node)},
                                   blockedtag='bisect_check')
                if status == 125:
                    transition = "skip"
                elif status == 0:
                    transition = "good"
                # status < 0 means process was killed
                elif status == 127:
                    raise error.Abort(_("failed to execute %s") % command)
                elif status < 0:
                    raise error.Abort(_("%s killed") % command)
                else:
                    transition = "bad"
                state[transition].append(node)
                ctx = repo[node]
                ui.status(_('changeset %d:%s: %s\n') % (ctx.rev(), ctx,
                                                        transition))
                hbisect.checkstate(state)
                # bisect
                nodes, changesets, bgood = hbisect.bisect(repo, state)
                # update to next check
                node = nodes[0]
                mayupdate(repo, node, show_stats=False)
        finally:
            state['current'] = [node]
            hbisect.save_state(repo, state)
        hbisect.printresult(ui, repo, state, displayer, nodes, bgood)
        return

    hbisect.checkstate(state)

    # actually bisect
    nodes, changesets, good = hbisect.bisect(repo, state)
    if extend:
        if not changesets:
            extendnode = hbisect.extendrange(repo, state, nodes, good)
            if extendnode is not None:
                ui.write(_("Extending search to changeset %d:%s\n")
                         % (extendnode.rev(), extendnode))
                state['current'] = [extendnode.node()]
                hbisect.save_state(repo, state)
                return mayupdate(repo, extendnode.node())
        raise error.Abort(_("nothing to extend"))

    if changesets == 0:
        hbisect.printresult(ui, repo, state, displayer, nodes, good)
    else:
        assert len(nodes) == 1 # only a single node can be tested next
        node = nodes[0]
        # compute the approximate number of remaining tests
        tests, size = 0, 2
        while size <= changesets:
            tests, size = tests + 1, size * 2
        rev = repo.changelog.rev(node)
        ui.write(_("Testing changeset %d:%s "
                   "(%d changesets remaining, ~%d tests)\n")
                 % (rev, short(node), changesets, tests))
        state['current'] = [node]
        hbisect.save_state(repo, state)
        return mayupdate(repo, node)

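# An aside for readers (hypothetical helper, not part of Mercurial's API):
# the `tests, size` loop in `bisect()` above computes floor(log2(changesets))
# using integer arithmetic only, i.e. the approximate number of bisection
# steps remaining. The same arithmetic, isolated for illustration:
def _approxbisectsteps(changesets):
    # doubling `size` until it exceeds `changesets` counts how many
    # times the candidate set can still be halved
    tests, size = 0, 2
    while size <= changesets:
        tests, size = tests + 1, size * 2
    return tests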
@command('bookmarks|bookmark',
    [('f', 'force', False, _('force')),
    ('r', 'rev', '', _('revision for bookmark action'), _('REV')),
    ('d', 'delete', False, _('delete a given bookmark')),
    ('m', 'rename', '', _('rename a given bookmark'), _('OLD')),
    ('i', 'inactive', False, _('mark a bookmark inactive')),
    ] + formatteropts,
    _('hg bookmarks [OPTIONS]... [NAME]...'))
def bookmark(ui, repo, *names, **opts):
    '''create a new bookmark or list existing bookmarks

    Bookmarks are labels on changesets to help track lines of development.
    Bookmarks are unversioned and can be moved, renamed and deleted.
    Deleting or moving a bookmark has no effect on the associated changesets.

    Creating or updating to a bookmark causes it to be marked as 'active'.
    The active bookmark is indicated with a '*'.
    When a commit is made, the active bookmark will advance to the new commit.
    A plain :hg:`update` will also advance an active bookmark, if possible.
    Updating away from a bookmark will cause it to be deactivated.

    Bookmarks can be pushed and pulled between repositories (see
    :hg:`help push` and :hg:`help pull`). If a shared bookmark has
    diverged, a new 'divergent bookmark' of the form 'name@path' will
    be created. Using :hg:`merge` will resolve the divergence.

    Specifying bookmark as '.' to -m or -d options is equivalent to specifying
    the active bookmark's name.

    A bookmark named '@' has the special property that :hg:`clone` will
    check it out by default if it exists.

    .. container:: verbose

      Examples:

      - create an active bookmark for a new line of development::

          hg book new-feature

      - create an inactive bookmark as a place marker::

          hg book -i reviewed

      - create an inactive bookmark on another changeset::

          hg book -r .^ tested

      - rename bookmark turkey to dinner::

          hg book -m turkey dinner

      - move the '@' bookmark from another branch::

          hg book -f @
    '''
    force = opts.get(r'force')
    rev = opts.get(r'rev')
    delete = opts.get(r'delete')
    rename = opts.get(r'rename')
    inactive = opts.get(r'inactive')

    if delete and rename:
        raise error.Abort(_("--delete and --rename are incompatible"))
    if delete and rev:
        raise error.Abort(_("--rev is incompatible with --delete"))
    if rename and rev:
        raise error.Abort(_("--rev is incompatible with --rename"))
    if not names and (delete or rev):
        raise error.Abort(_("bookmark name required"))

    if delete or rename or names or inactive:
        with repo.wlock(), repo.lock(), repo.transaction('bookmark') as tr:
            if delete:
                names = pycompat.maplist(repo._bookmarks.expandname, names)
                bookmarks.delete(repo, tr, names)
            elif rename:
                if not names:
                    raise error.Abort(_("new bookmark name required"))
                elif len(names) > 1:
                    raise error.Abort(_("only one new bookmark name allowed"))
                rename = repo._bookmarks.expandname(rename)
                bookmarks.rename(repo, tr, rename, names[0], force, inactive)
            elif names:
                bookmarks.addbookmarks(repo, tr, names, rev, force, inactive)
            elif inactive:
                if len(repo._bookmarks) == 0:
                    ui.status(_("no bookmarks set\n"))
                elif not repo._activebookmark:
                    ui.status(_("no active bookmark\n"))
                else:
                    bookmarks.deactivate(repo)
    else: # show bookmarks
        bookmarks.printbookmarks(ui, repo, **opts)

@command('branch',
    [('f', 'force', None,
      _('set branch name even if it shadows an existing branch')),
    ('C', 'clean', None, _('reset branch name to parent branch name')),
    ('r', 'rev', [], _('change branches of the given revs (EXPERIMENTAL)')),
    ],
    _('[-fC] [NAME]'))
def branch(ui, repo, label=None, **opts):
    """set or show the current branch name

    .. note::

       Branch names are permanent and global. Use :hg:`bookmark` to create a
       light-weight bookmark instead. See :hg:`help glossary` for more
       information about named branches and bookmarks.

    With no argument, show the current branch name. With one argument,
    set the working directory branch name (the branch will not exist
    in the repository until the next commit). Standard practice
    recommends that primary development take place on the 'default'
    branch.

    Unless -f/--force is specified, branch will not let you set a
    branch name that already exists.

    Use -C/--clean to reset the working directory branch to that of
    the parent of the working directory, negating a previous branch
    change.

    Use the command :hg:`update` to switch to an existing branch. Use
    :hg:`commit --close-branch` to mark this branch head as closed.
    When all heads of a branch are closed, the branch will be
    considered closed.

    Returns 0 on success.
    """
    opts = pycompat.byteskwargs(opts)
    revs = opts.get('rev')
    if label:
        label = label.strip()

    if not opts.get('clean') and not label:
        if revs:
            raise error.Abort(_("no branch name specified for the revisions"))
        ui.write("%s\n" % repo.dirstate.branch())
        return

    with repo.wlock():
        if opts.get('clean'):
            label = repo[None].p1().branch()
            repo.dirstate.setbranch(label)
            ui.status(_('reset working directory to branch %s\n') % label)
        elif label:

            scmutil.checknewlabel(repo, label, 'branch')
            if revs:
                return cmdutil.changebranch(ui, repo, revs, label)

            if not opts.get('force') and label in repo.branchmap():
                if label not in [p.branch() for p in repo[None].parents()]:
                    raise error.Abort(_('a branch of the same name already'
                                        ' exists'),
                                      # i18n: "it" refers to an existing branch
                                      hint=_("use 'hg update' to switch to it"))

            repo.dirstate.setbranch(label)
            ui.status(_('marked working directory as branch %s\n') % label)

            # find any open named branches aside from default
            others = [n for n, h, t, c in repo.branchmap().iterbranches()
                      if n != "default" and not c]
            if not others:
                ui.status(_('(branches are permanent and global, '
                            'did you want a bookmark?)\n'))

1085 @command('branches',
1085 @command('branches',
1086 [('a', 'active', False,
1086 [('a', 'active', False,
1087 _('show only branches that have unmerged heads (DEPRECATED)')),
1087 _('show only branches that have unmerged heads (DEPRECATED)')),
1088 ('c', 'closed', False, _('show normal and closed branches')),
1088 ('c', 'closed', False, _('show normal and closed branches')),
1089 ] + formatteropts,
1089 ] + formatteropts,
1090 _('[-c]'), cmdtype=readonly)
1090 _('[-c]'), cmdtype=readonly)
1091 def branches(ui, repo, active=False, closed=False, **opts):
1091 def branches(ui, repo, active=False, closed=False, **opts):
1092 """list repository named branches
1092 """list repository named branches
1093
1093
1094 List the repository's named branches, indicating which ones are
1094 List the repository's named branches, indicating which ones are
1095 inactive. If -c/--closed is specified, also list branches which have
1095 inactive. If -c/--closed is specified, also list branches which have
1096 been marked closed (see :hg:`commit --close-branch`).
    been marked closed (see :hg:`commit --close-branch`).

    Use the command :hg:`update` to switch to an existing branch.

    Returns 0.
    """

    opts = pycompat.byteskwargs(opts)
    ui.pager('branches')
    fm = ui.formatter('branches', opts)
    hexfunc = fm.hexfunc

    allheads = set(repo.heads())
    branches = []
    for tag, heads, tip, isclosed in repo.branchmap().iterbranches():
        isactive = False
        if not isclosed:
            openheads = set(repo.branchmap().iteropen(heads))
            isactive = bool(openheads & allheads)
        branches.append((tag, repo[tip], isactive, not isclosed))
    branches.sort(key=lambda i: (i[2], i[1].rev(), i[0], i[3]),
                  reverse=True)

    for tag, ctx, isactive, isopen in branches:
        if active and not isactive:
            continue
        if isactive:
            label = 'branches.active'
            notice = ''
        elif not isopen:
            if not closed:
                continue
            label = 'branches.closed'
            notice = _(' (closed)')
        else:
            label = 'branches.inactive'
            notice = _(' (inactive)')
        current = (tag == repo.dirstate.branch())
        if current:
            label = 'branches.current'

        fm.startitem()
        fm.write('branch', '%s', tag, label=label)
        rev = ctx.rev()
        padsize = max(31 - len("%d" % rev) - encoding.colwidth(tag), 0)
        fmt = ' ' * padsize + ' %d:%s'
        fm.condwrite(not ui.quiet, 'rev node', fmt, rev, hexfunc(ctx.node()),
                     label='log.changeset changeset.%s' % ctx.phasestr())
        fm.context(ctx=ctx)
        fm.data(active=isactive, closed=not isopen, current=current)
        if not ui.quiet:
            fm.plain(notice)
        fm.plain('\n')
    fm.end()

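The sort in `branches` above orders entries by the key tuple (active, tip revision, name, open) in descending order, so active branches with the highest tip revisions are listed first. A minimal sketch of that ordering, with plain tuples standing in for the real `(tag, changectx, isactive, isopen)` entries (the real key calls `i[1].rev()` on a context; here revisions are plain ints):

```python
# Toy model of the 'hg branches' ordering: tuples stand in for
# (tag, changectx, isactive, isopen); revisions are plain ints here.
branches = [
    ('old', 41, False, False),    # inactive/closed branch
    ('stable', 40, True, True),   # active, lower tip revision
    ('default', 42, True, True),  # active, highest tip revision
]
# Same key shape as commands.py, minus the .rev() call on a real context.
branches.sort(key=lambda i: (i[2], i[1], i[0], i[3]), reverse=True)
print([name for name, rev, isactive, isopen in branches])
```

Active branches sort ahead of inactive ones even when an inactive branch has a higher tip revision, because the boolean `isactive` is the leading key component.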
@command('bundle',
    [('f', 'force', None, _('run even when the destination is unrelated')),
    ('r', 'rev', [], _('a changeset intended to be added to the destination'),
     _('REV')),
    ('b', 'branch', [], _('a specific branch you would like to bundle'),
     _('BRANCH')),
    ('', 'base', [],
     _('a base changeset assumed to be available at the destination'),
     _('REV')),
    ('a', 'all', None, _('bundle all changesets in the repository')),
    ('t', 'type', 'bzip2', _('bundle compression type to use'), _('TYPE')),
    ] + remoteopts,
    _('[-f] [-t BUNDLESPEC] [-a] [-r REV]... [--base REV]... FILE [DEST]'))
def bundle(ui, repo, fname, dest=None, **opts):
    """create a bundle file

    Generate a bundle file containing data to be transferred to another
    repository.

    To create a bundle containing all changesets, use -a/--all
    (or --base null). Otherwise, hg assumes the destination will have
    all the nodes you specify with --base parameters. If neither is
    given, hg will assume the destination already has all the nodes of
    the repository at DEST, or at default-push/default if no
    destination is specified.

    You can change bundle format with the -t/--type option. See
    :hg:`help bundlespec` for documentation on this format. By default,
    the most appropriate format is used and compression defaults to
    bzip2.

    The bundle file can then be transferred using conventional means
    and applied to another repository with the unbundle or pull
    command. This is useful when direct push and pull are not
    available or when exporting an entire repository is undesirable.

    Applying bundles preserves all changeset contents including
    permissions, copy/rename information, and revision history.

    Returns 0 on success, 1 if no changes found.
    """
    opts = pycompat.byteskwargs(opts)
    revs = None
    if 'rev' in opts:
        revstrings = opts['rev']
        revs = scmutil.revrange(repo, revstrings)
        if revstrings and not revs:
            raise error.Abort(_('no commits to bundle'))

    bundletype = opts.get('type', 'bzip2').lower()
    try:
        bundlespec = exchange.parsebundlespec(repo, bundletype, strict=False)
    except error.UnsupportedBundleSpecification as e:
        raise error.Abort(pycompat.bytestr(e),
                          hint=_("see 'hg help bundlespec' for supported "
                                 "values for --type"))
    cgversion = bundlespec.contentopts["cg.version"]

    # Packed bundles are a pseudo bundle format for now.
    if cgversion == 's1':
        raise error.Abort(_('packed bundles cannot be produced by "hg bundle"'),
                          hint=_("use 'hg debugcreatestreamclonebundle'"))

    if opts.get('all'):
        if dest:
            raise error.Abort(_("--all is incompatible with specifying "
                                "a destination"))
        if opts.get('base'):
            ui.warn(_("ignoring --base because --all was specified\n"))
        base = ['null']
    else:
        base = scmutil.revrange(repo, opts.get('base'))
    if cgversion not in changegroup.supportedoutgoingversions(repo):
        raise error.Abort(_("repository does not support bundle version %s") %
                          cgversion)

    if base:
        if dest:
            raise error.Abort(_("--base is incompatible with specifying "
                                "a destination"))
        common = [repo.lookup(rev) for rev in base]
        heads = [repo.lookup(r) for r in revs] if revs else None
        outgoing = discovery.outgoing(repo, common, heads)
    else:
        dest = ui.expandpath(dest or 'default-push', dest or 'default')
        dest, branches = hg.parseurl(dest, opts.get('branch'))
        other = hg.peer(repo, opts, dest)
        revs, checkout = hg.addbranchrevs(repo, repo, branches, revs)
        heads = revs and map(repo.lookup, revs) or revs
        outgoing = discovery.findcommonoutgoing(repo, other,
                                                onlyheads=heads,
                                                force=opts.get('force'),
                                                portable=True)

    if not outgoing.missing:
        scmutil.nochangesfound(ui, repo, not base and outgoing.excluded)
        return 1

    bcompression = bundlespec.compression
    if cgversion == '01': #bundle1
        if bcompression is None:
            bcompression = 'UN'
        bversion = 'HG10' + bcompression
        bcompression = None
    elif cgversion in ('02', '03'):
        bversion = 'HG20'
    else:
        raise error.ProgrammingError(
            'bundle: unexpected changegroup version %s' % cgversion)

    # TODO compression options should be derived from bundlespec parsing.
    # This is a temporary hack to allow adjusting bundle compression
    # level without a) formalizing the bundlespec changes to declare it
    # b) introducing a command flag.
    compopts = {}
    complevel = ui.configint('experimental', 'bundlecomplevel')
    if complevel is not None:
        compopts['level'] = complevel

    # Allow overriding the bundling of obsmarkers and phases through
    # configuration while we don't have a bundle version that includes them
    if repo.ui.configbool('experimental', 'evolution.bundle-obsmarker'):
        bundlespec.contentopts['obsolescence'] = True
    if repo.ui.configbool('experimental', 'bundle-phases'):
        bundlespec.contentopts['phases'] = True

    bundle2.writenewbundle(ui, repo, 'bundle', fname, bversion, outgoing,
                           bundlespec.contentopts, compression=bcompression,
                           compopts=compopts)

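The key point of this changeset is that `exchange.parsebundlespec` now returns the content options (including `cg.version`) on the parsed spec, and `bundle` only layers config-driven overrides (obsolescence, phases) on top before handing `bundlespec.contentopts` to `bundle2.writenewbundle`. A standalone sketch of that flow, where `Spec` and `apply_overrides` are simplified stand-ins for illustration, not Mercurial's real API:

```python
# Simplified sketch of how 'hg bundle' assembles bundle2 content options
# after this change. 'Spec' and 'apply_overrides' are illustrative
# stand-ins for exchange.parsebundlespec's result, not Mercurial APIs.
from dataclasses import dataclass, field

@dataclass
class Spec:
    compression: str                      # e.g. 'BZ', 'GZ', or None
    contentopts: dict = field(default_factory=lambda: {
        'cg.version': '02',               # changegroup version from the spec
        'changegroup': True,
    })

def apply_overrides(spec, config):
    """Layer experimental config knobs onto the parsed spec's contentopts."""
    if config.get('evolution.bundle-obsmarker'):
        spec.contentopts['obsolescence'] = True
    if config.get('bundle-phases'):
        spec.contentopts['phases'] = True
    return spec.contentopts

opts = apply_overrides(Spec(compression='BZ'), {'bundle-phases': True})
assert opts == {'cg.version': '02', 'changegroup': True, 'phases': True}
```

Centralizing the defaults in the parsed spec means callers no longer rebuild the `{'cg.version': ..., 'changegroup': True}` dict by hand, which is exactly the duplication this commit removes from `bundle`.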
@command('cat',
    [('o', 'output', '',
     _('print output to file with formatted name'), _('FORMAT')),
    ('r', 'rev', '', _('print the given revision'), _('REV')),
    ('', 'decode', None, _('apply any matching decode filter')),
    ] + walkopts + formatteropts,
    _('[OPTION]... FILE...'),
    inferrepo=True, cmdtype=readonly)
def cat(ui, repo, file1, *pats, **opts):
    """output the current or given revision of files

    Print the specified files as they were at the given revision. If
    no revision is given, the parent of the working directory is used.

    Output may be to a file, in which case the name of the file is
    given using a template string. See :hg:`help templates`. In addition
    to the common template keywords, the following formatting rules are
    supported:

    :``%%``: literal "%" character
    :``%s``: basename of file being printed
    :``%d``: dirname of file being printed, or '.' if in repository root
    :``%p``: root-relative path name of file being printed
    :``%H``: changeset hash (40 hexadecimal digits)
    :``%R``: changeset revision number
    :``%h``: short-form changeset hash (12 hexadecimal digits)
    :``%r``: zero-padded changeset revision number
    :``%b``: basename of the exporting repository
    :``\\``: literal "\\" character

    Returns 0 on success.
    """
    opts = pycompat.byteskwargs(opts)
    rev = opts.get('rev')
    if rev:
        repo = scmutil.unhidehashlikerevs(repo, [rev], 'nowarn')
    ctx = scmutil.revsingle(repo, rev)
    m = scmutil.match(ctx, (file1,) + pats, opts)
    fntemplate = opts.pop('output', '')
    if cmdutil.isstdiofilename(fntemplate):
        fntemplate = ''

    if fntemplate:
        fm = formatter.nullformatter(ui, 'cat')
    else:
        ui.pager('cat')
        fm = ui.formatter('cat', opts)
    with fm:
        return cmdutil.cat(ui, repo, ctx, m, fm, fntemplate, '',
                           **pycompat.strkwargs(opts))

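The `%`-escape rules documented in the `cat` docstring above amount to a left-to-right scan that replaces two-character escapes. A toy expander covering a few of those rules (this is a hedged sketch of the documented behavior, not the `cmdutil` implementation that `hg cat` actually uses):

```python
import os

def expand_name(template, path, rev, node):
    """Toy expansion of a subset of the 'hg cat --output' % rules."""
    repls = {
        '%%': '%',                        # literal percent
        '%s': os.path.basename(path),     # basename of printed file
        '%d': os.path.dirname(path) or '.',  # dirname, '.' at repo root
        '%p': path,                       # root-relative path
        '%R': '%d' % rev,                 # revision number
        '%h': node[:12],                  # short hash
        '%H': node,                       # full hash
    }
    out, i = [], 0
    while i < len(template):
        two = template[i:i + 2]
        if two in repls:
            out.append(repls[two])
            i += 2
        else:
            out.append(template[i])
            i += 1
    return ''.join(out)

node = 'a' * 40
print(expand_name('%d/%s@%h.txt', 'src/module/file.py', 5, node))
```

Scanning two characters at a time is what makes `%%s` expand to the literal text `%s` rather than a basename.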
@command('^clone',
    [('U', 'noupdate', None, _('the clone will include an empty working '
                               'directory (only a repository)')),
    ('u', 'updaterev', '', _('revision, tag, or branch to check out'),
     _('REV')),
    ('r', 'rev', [], _('do not clone everything, but include this changeset'
                       ' and its ancestors'), _('REV')),
    ('b', 'branch', [], _('do not clone everything, but include this branch\'s'
                          ' changesets and their ancestors'), _('BRANCH')),
    ('', 'pull', None, _('use pull protocol to copy metadata')),
    ('', 'uncompressed', None,
     _('an alias to --stream (DEPRECATED)')),
    ('', 'stream', None,
     _('clone with minimal data processing')),
    ] + remoteopts,
    _('[OPTION]... SOURCE [DEST]'),
    norepo=True)
def clone(ui, source, dest=None, **opts):
    """make a copy of an existing repository

    Create a copy of an existing repository in a new directory.

    If no destination directory name is specified, it defaults to the
    basename of the source.

    The location of the source is added to the new repository's
    ``.hg/hgrc`` file, as the default to be used for future pulls.

    Only local paths and ``ssh://`` URLs are supported as
    destinations. For ``ssh://`` destinations, no working directory or
    ``.hg/hgrc`` will be created on the remote side.

    If the source repository has a bookmark called '@' set, that
    revision will be checked out in the new repository by default.

    To check out a particular version, use -u/--update, or
    -U/--noupdate to create a clone with no working directory.

    To pull only a subset of changesets, specify one or more revision
    identifiers with -r/--rev or branches with -b/--branch. The
    resulting clone will contain only the specified changesets and
    their ancestors. These options (or 'clone src#rev dest') imply
    --pull, even for local source repositories.

    In normal clone mode, the remote normalizes repository data into a common
    exchange format and the receiving end translates this data into its local
    storage format. --stream activates a different clone mode that essentially
    copies repository files from the remote with minimal data processing. This
    significantly reduces the CPU cost of a clone both remotely and locally.
    However, it often increases the transferred data size by 30-40%. This can
    result in substantially faster clones where I/O throughput is plentiful,
    especially for larger repositories. A side-effect of --stream clones is
    that storage settings and requirements on the remote are applied locally:
    a modern client may inherit legacy or inefficient storage used by the
    remote or a legacy Mercurial client may not be able to clone from a
    modern Mercurial remote.

    .. note::

       Specifying a tag will include the tagged changeset but not the
       changeset containing the tag.

    .. container:: verbose

      For efficiency, hardlinks are used for cloning whenever the
      source and destination are on the same filesystem (note this
      applies only to the repository data, not to the working
      directory). Some filesystems, such as AFS, implement hardlinking
      incorrectly, but do not report errors. In these cases, use the
      --pull option to avoid hardlinking.

      Mercurial will update the working directory to the first applicable
      revision from this list:

      a) null if -U or the source repository has no changesets
      b) if -u . and the source repository is local, the first parent of
         the source repository's working directory
      c) the changeset specified with -u (if a branch name, this means the
         latest head of that branch)
      d) the changeset specified with -r
      e) the tipmost head specified with -b
      f) the tipmost head specified with the url#branch source syntax
      g) the revision marked with the '@' bookmark, if present
      h) the tipmost head of the default branch
      i) tip

      When cloning from servers that support it, Mercurial may fetch
      pre-generated data from a server-advertised URL. When this is done,
      hooks operating on incoming changesets and changegroups may fire twice,
      once for the bundle fetched from the URL and another for any additional
      data not fetched from this URL. In addition, if an error occurs, the
      repository may be rolled back to a partial clone. This behavior may
      change in future releases. See :hg:`help -e clonebundles` for more.

      Examples:

      - clone a remote repository to a new directory named hg/::

          hg clone https://www.mercurial-scm.org/repo/hg/

      - create a lightweight local clone::

          hg clone project/ project-feature/

      - clone from an absolute path on an ssh server (note double-slash)::

          hg clone ssh://user@server//home/projects/alpha/

      - do a streaming clone while checking out a specified version::

          hg clone --stream http://server/repo -u 1.5

      - create a repository without changesets after a particular revision::

          hg clone -r 04e544 experimental/ good/

      - clone (and track) a particular named branch::

          hg clone https://www.mercurial-scm.org/repo/hg/#stable

    See :hg:`help urls` for details on specifying URLs.

    Returns 0 on success.
    """
    opts = pycompat.byteskwargs(opts)
    if opts.get('noupdate') and opts.get('updaterev'):
        raise error.Abort(_("cannot specify both --noupdate and --updaterev"))

    r = hg.clone(ui, opts, source, dest,
                 pull=opts.get('pull'),
                 stream=opts.get('stream') or opts.get('uncompressed'),
                 rev=opts.get('rev'),
                 update=opts.get('updaterev') or not opts.get('noupdate'),
                 branch=opts.get('branch'),
                 shareopts=opts.get('shareopts'))

    return r is None

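The checkout-selection rules a) through i) in the `clone` docstring describe a first-match precedence scan. A schematic sketch of that precedence, with simplified boolean/string inputs standing in for real repository state (the function name and parameters are illustrative only):

```python
def pick_checkout(noupdate, has_changesets, update_rev, rev_opt,
                  branch_opt, url_branch, bookmark_at, default_head):
    """First applicable rule wins, mirroring the a)-i) list for 'hg clone'."""
    if noupdate or not has_changesets:
        return 'null'                         # a) -U or empty source
    if update_rev:
        return update_rev                     # b)/c) -u/--updaterev
    if rev_opt:
        return rev_opt[-1]                    # d) -r/--rev
    if branch_opt:
        return 'tip(%s)' % branch_opt[-1]     # e) tipmost head of -b branch
    if url_branch:
        return 'tip(%s)' % url_branch         # f) url#branch syntax
    if bookmark_at:
        return '@'                            # g) the '@' bookmark
    if default_head:
        return default_head                   # h) tipmost head of default
    return 'tip'                              # i) fallback

print(pick_checkout(False, True, '1.5', [], [], None, False, None))
```

Each rule only fires when every earlier rule declined, which is why `-U` wins over an explicit `-u`, and the `@` bookmark is consulted only after all explicit revision/branch selectors.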
@command('^commit|ci',
    [('A', 'addremove', None,
      _('mark new/missing files as added/removed before committing')),
    ('', 'close-branch', None,
     _('mark a branch head as closed')),
    ('', 'amend', None, _('amend the parent of the working directory')),
    ('s', 'secret', None, _('use the secret phase for committing')),
    ('e', 'edit', None, _('invoke editor on commit messages')),
    ('i', 'interactive', None, _('use interactive mode')),
    ] + walkopts + commitopts + commitopts2 + subrepoopts,
    _('[OPTION]... [FILE]...'),
    inferrepo=True)
def commit(ui, repo, *pats, **opts):
    """commit the specified files or all outstanding changes

    Commit changes to the given files into the repository. Unlike a
    centralized SCM, this operation is a local operation. See
    :hg:`push` for a way to actively distribute your changes.

    If a list of files is omitted, all changes reported by :hg:`status`
    will be committed.

    If you are committing the result of a merge, do not provide any
    filenames or -I/-X filters.

    If no commit message is specified, Mercurial starts your
    configured editor where you can enter a message. In case your
    commit fails, you will find a backup of your message in
    ``.hg/last-message.txt``.

    The --close-branch flag can be used to mark the current branch
    head closed. When all heads of a branch are closed, the branch
    will be considered closed and no longer listed.

    The --amend flag can be used to amend the parent of the
    working directory with a new commit that contains the changes
    in the parent in addition to those currently reported by :hg:`status`,
    if there are any. The old commit is stored in a backup bundle in
    ``.hg/strip-backup`` (see :hg:`help bundle` and :hg:`help unbundle`
    on how to restore it).

    Message, user and date are taken from the amended commit unless
    specified. When a message isn't specified on the command line,
    the editor will open with the message of the amended commit.

    It is not possible to amend public changesets (see :hg:`help phases`)
    or changesets that have children.

    See :hg:`help dates` for a list of formats valid for -d/--date.

    Returns 0 on success, 1 if nothing changed.

    .. container:: verbose

      Examples:

      - commit all files ending in .py::

          hg commit --include "set:**.py"

      - commit all non-binary files::

          hg commit --exclude "set:binary()"

      - amend the current commit and set the date to now::

          hg commit --amend --date now
    """
    wlock = lock = None
    try:
        wlock = repo.wlock()
        lock = repo.lock()
        return _docommit(ui, repo, *pats, **opts)
    finally:
        release(lock, wlock)

def _docommit(ui, repo, *pats, **opts):
    if opts.get(r'interactive'):
        opts.pop(r'interactive')
        ret = cmdutil.dorecord(ui, repo, commit, None, False,
                               cmdutil.recordfilter, *pats,
                               **opts)
        # ret can be 0 (no changes to record) or the value returned by
1552 # commit(), 1 if nothing changed or None on success.
1553 # commit(), 1 if nothing changed or None on success.
1553 return 1 if ret == 0 else ret
1554 return 1 if ret == 0 else ret
1554
1555
1555 opts = pycompat.byteskwargs(opts)
1556 opts = pycompat.byteskwargs(opts)
1556 if opts.get('subrepos'):
1557 if opts.get('subrepos'):
1557 if opts.get('amend'):
1558 if opts.get('amend'):
1558 raise error.Abort(_('cannot amend with --subrepos'))
1559 raise error.Abort(_('cannot amend with --subrepos'))
1559 # Let --subrepos on the command line override config setting.
1560 # Let --subrepos on the command line override config setting.
1560 ui.setconfig('ui', 'commitsubrepos', True, 'commit')
1561 ui.setconfig('ui', 'commitsubrepos', True, 'commit')
1561
1562
1562 cmdutil.checkunfinished(repo, commit=True)
1563 cmdutil.checkunfinished(repo, commit=True)
1563
1564
1564 branch = repo[None].branch()
1565 branch = repo[None].branch()
1565 bheads = repo.branchheads(branch)
1566 bheads = repo.branchheads(branch)
1566
1567
1567 extra = {}
1568 extra = {}
1568 if opts.get('close_branch'):
1569 if opts.get('close_branch'):
1569 extra['close'] = '1'
1570 extra['close'] = '1'
1570
1571
1571 if not bheads:
1572 if not bheads:
1572 raise error.Abort(_('can only close branch heads'))
1573 raise error.Abort(_('can only close branch heads'))
1573 elif opts.get('amend'):
1574 elif opts.get('amend'):
1574 if repo[None].parents()[0].p1().branch() != branch and \
1575 if repo[None].parents()[0].p1().branch() != branch and \
1575 repo[None].parents()[0].p2().branch() != branch:
1576 repo[None].parents()[0].p2().branch() != branch:
1576 raise error.Abort(_('can only close branch heads'))
1577 raise error.Abort(_('can only close branch heads'))
1577
1578
1578 if opts.get('amend'):
1579 if opts.get('amend'):
1579 if ui.configbool('ui', 'commitsubrepos'):
1580 if ui.configbool('ui', 'commitsubrepos'):
1580 raise error.Abort(_('cannot amend with ui.commitsubrepos enabled'))
1581 raise error.Abort(_('cannot amend with ui.commitsubrepos enabled'))
1581
1582
1582 old = repo['.']
1583 old = repo['.']
1583 rewriteutil.precheck(repo, [old.rev()], 'amend')
1584 rewriteutil.precheck(repo, [old.rev()], 'amend')
1584
1585
1585 # Currently histedit gets confused if an amend happens while histedit
1586 # Currently histedit gets confused if an amend happens while histedit
1586 # is in progress. Since we have a checkunfinished command, we are
1587 # is in progress. Since we have a checkunfinished command, we are
1587 # temporarily honoring it.
1588 # temporarily honoring it.
1588 #
1589 #
1589 # Note: eventually this guard will be removed. Please do not expect
1590 # Note: eventually this guard will be removed. Please do not expect
1590 # this behavior to remain.
1591 # this behavior to remain.
1591 if not obsolete.isenabled(repo, obsolete.createmarkersopt):
1592 if not obsolete.isenabled(repo, obsolete.createmarkersopt):
1592 cmdutil.checkunfinished(repo)
1593 cmdutil.checkunfinished(repo)
1593
1594
1594 node = cmdutil.amend(ui, repo, old, extra, pats, opts)
1595 node = cmdutil.amend(ui, repo, old, extra, pats, opts)
1595 if node == old.node():
1596 if node == old.node():
1596 ui.status(_("nothing changed\n"))
1597 ui.status(_("nothing changed\n"))
1597 return 1
1598 return 1
1598 else:
1599 else:
1599 def commitfunc(ui, repo, message, match, opts):
1600 def commitfunc(ui, repo, message, match, opts):
1600 overrides = {}
1601 overrides = {}
1601 if opts.get('secret'):
1602 if opts.get('secret'):
1602 overrides[('phases', 'new-commit')] = 'secret'
1603 overrides[('phases', 'new-commit')] = 'secret'
1603
1604
1604 baseui = repo.baseui
1605 baseui = repo.baseui
1605 with baseui.configoverride(overrides, 'commit'):
1606 with baseui.configoverride(overrides, 'commit'):
1606 with ui.configoverride(overrides, 'commit'):
1607 with ui.configoverride(overrides, 'commit'):
1607 editform = cmdutil.mergeeditform(repo[None],
1608 editform = cmdutil.mergeeditform(repo[None],
1608 'commit.normal')
1609 'commit.normal')
1609 editor = cmdutil.getcommiteditor(
1610 editor = cmdutil.getcommiteditor(
1610 editform=editform, **pycompat.strkwargs(opts))
1611 editform=editform, **pycompat.strkwargs(opts))
1611 return repo.commit(message,
1612 return repo.commit(message,
1612 opts.get('user'),
1613 opts.get('user'),
1613 opts.get('date'),
1614 opts.get('date'),
1614 match,
1615 match,
1615 editor=editor,
1616 editor=editor,
1616 extra=extra)
1617 extra=extra)
1617
1618
1618 node = cmdutil.commit(ui, repo, commitfunc, pats, opts)
1619 node = cmdutil.commit(ui, repo, commitfunc, pats, opts)
1619
1620
1620 if not node:
1621 if not node:
1621 stat = cmdutil.postcommitstatus(repo, pats, opts)
1622 stat = cmdutil.postcommitstatus(repo, pats, opts)
1622 if stat[3]:
1623 if stat[3]:
1623 ui.status(_("nothing changed (%d missing files, see "
1624 ui.status(_("nothing changed (%d missing files, see "
1624 "'hg status')\n") % len(stat[3]))
1625 "'hg status')\n") % len(stat[3]))
1625 else:
1626 else:
1626 ui.status(_("nothing changed\n"))
1627 ui.status(_("nothing changed\n"))
1627 return 1
1628 return 1
1628
1629
1629 cmdutil.commitstatus(repo, node, branch, bheads, opts)
1630 cmdutil.commitstatus(repo, node, branch, bheads, opts)

@command('config|showconfig|debugconfig',
    [('u', 'untrusted', None, _('show untrusted configuration options')),
     ('e', 'edit', None, _('edit user config')),
     ('l', 'local', None, _('edit repository config')),
     ('g', 'global', None, _('edit global config'))] + formatteropts,
    _('[-u] [NAME]...'),
    optionalrepo=True, cmdtype=readonly)
def config(ui, repo, *values, **opts):
    """show combined config settings from all hgrc files

    With no arguments, print names and values of all config items.

    With one argument of the form section.name, print just the value
    of that config item.

    With multiple arguments, print names and values of all config
    items with matching section names or section.names.

    With --edit, start an editor on the user-level config file. With
    --global, edit the system-wide config file. With --local, edit the
    repository-level config file.

    With --debug, the source (filename and line number) is printed
    for each config item.

    See :hg:`help config` for more information about config files.

    Returns 0 on success, 1 if NAME does not exist.

    """

    opts = pycompat.byteskwargs(opts)
    if opts.get('edit') or opts.get('local') or opts.get('global'):
        if opts.get('local') and opts.get('global'):
            raise error.Abort(_("can't use --local and --global together"))

        if opts.get('local'):
            if not repo:
                raise error.Abort(_("can't use --local outside a repository"))
            paths = [repo.vfs.join('hgrc')]
        elif opts.get('global'):
            paths = rcutil.systemrcpath()
        else:
            paths = rcutil.userrcpath()

        for f in paths:
            if os.path.exists(f):
                break
        else:
            if opts.get('global'):
                samplehgrc = uimod.samplehgrcs['global']
            elif opts.get('local'):
                samplehgrc = uimod.samplehgrcs['local']
            else:
                samplehgrc = uimod.samplehgrcs['user']

            f = paths[0]
            fp = open(f, "wb")
            fp.write(util.tonativeeol(samplehgrc))
            fp.close()

        editor = ui.geteditor()
        ui.system("%s \"%s\"" % (editor, f),
                  onerr=error.Abort, errprefix=_("edit failed"),
                  blockedtag='config_edit')
        return
    ui.pager('config')
    fm = ui.formatter('config', opts)
    for t, f in rcutil.rccomponents():
        if t == 'path':
            ui.debug('read config from: %s\n' % f)
        elif t == 'items':
            for section, name, value, source in f:
                ui.debug('set config by: %s\n' % source)
        else:
            raise error.ProgrammingError('unknown rctype: %s' % t)
    untrusted = bool(opts.get('untrusted'))

    selsections = selentries = []
    if values:
        selsections = [v for v in values if '.' not in v]
        selentries = [v for v in values if '.' in v]
    uniquesel = (len(selentries) == 1 and not selsections)
    selsections = set(selsections)
    selentries = set(selentries)

    matched = False
    for section, name, value in ui.walkconfig(untrusted=untrusted):
        source = ui.configsource(section, name, untrusted)
        value = pycompat.bytestr(value)
        if fm.isplain():
            source = source or 'none'
            value = value.replace('\n', '\\n')
        entryname = section + '.' + name
        if values and not (section in selsections or entryname in selentries):
            continue
        fm.startitem()
        fm.condwrite(ui.debugflag, 'source', '%s: ', source)
        if uniquesel:
            fm.data(name=entryname)
            fm.write('value', '%s\n', value)
        else:
            fm.write('name value', '%s=%s\n', entryname, value)
        matched = True
    fm.end()
    if matched:
        return 0
    return 1

@command('copy|cp',
    [('A', 'after', None, _('record a copy that has already occurred')),
     ('f', 'force', None, _('forcibly copy over an existing managed file')),
    ] + walkopts + dryrunopts,
    _('[OPTION]... [SOURCE]... DEST'))
def copy(ui, repo, *pats, **opts):
    """mark files as copied for the next commit

    Mark dest as having copies of source files. If dest is a
    directory, copies are put in that directory. If dest is a file,
    the source must be a single file.

    By default, this command copies the contents of files as they
    exist in the working directory. If invoked with -A/--after, the
    operation is recorded, but no copying is performed.

    This command takes effect with the next commit. To undo a copy
    before that, see :hg:`revert`.

    Returns 0 on success, 1 if errors are encountered.
    """
    opts = pycompat.byteskwargs(opts)
    with repo.wlock(False):
        return cmdutil.copy(ui, repo, pats, opts)

@command('debugcommands', [], _('[COMMAND]'), norepo=True)
def debugcommands(ui, cmd='', *args):
    """list all available commands and options"""
    for cmd, vals in sorted(table.iteritems()):
        cmd = cmd.split('|')[0].strip('^')
        opts = ', '.join([i[1] for i in vals[1]])
        ui.write('%s: %s\n' % (cmd, opts))

@command('debugcomplete',
    [('o', 'options', None, _('show the command options'))],
    _('[-o] CMD'),
    norepo=True)
def debugcomplete(ui, cmd='', **opts):
    """returns the completion list associated with the given command"""

    if opts.get(r'options'):
        options = []
        otables = [globalopts]
        if cmd:
            aliases, entry = cmdutil.findcmd(cmd, table, False)
            otables.append(entry[1])
        for t in otables:
            for o in t:
                if "(DEPRECATED)" in o[3]:
                    continue
                if o[0]:
                    options.append('-%s' % o[0])
                options.append('--%s' % o[1])
        ui.write("%s\n" % "\n".join(options))
        return

    cmdlist, unused_allcmds = cmdutil.findpossible(cmd, table)
    if ui.verbose:
        cmdlist = [' '.join(c[0]) for c in cmdlist.values()]
    ui.write("%s\n" % "\n".join(sorted(cmdlist)))

@command('^diff',
    [('r', 'rev', [], _('revision'), _('REV')),
     ('c', 'change', '', _('change made by revision'), _('REV'))
    ] + diffopts + diffopts2 + walkopts + subrepoopts,
    _('[OPTION]... ([-c REV] | [-r REV1 [-r REV2]]) [FILE]...'),
    inferrepo=True, cmdtype=readonly)
def diff(ui, repo, *pats, **opts):
    """diff repository (or selected files)

    Show differences between revisions for the specified files.

    Differences between files are shown using the unified diff format.

    .. note::

       :hg:`diff` may generate unexpected results for merges, as it will
       default to comparing against the working directory's first
       parent changeset if no revisions are specified.

    When two revision arguments are given, then changes are shown
    between those revisions. If only one revision is specified then
    that revision is compared to the working directory, and, when no
    revisions are specified, the working directory files are compared
    to its first parent.

    Alternatively you can specify -c/--change with a revision to see
    the changes in that changeset relative to its first parent.

    Without the -a/--text option, diff will avoid generating diffs of
    files it detects as binary. With -a, diff will generate a diff
    anyway, probably with undesirable results.

    Use the -g/--git option to generate diffs in the git extended diff
    format. For more information, read :hg:`help diffs`.

    .. container:: verbose

      Examples:

      - compare a file in the current working directory to its parent::

          hg diff foo.c

      - compare two historical versions of a directory, with rename info::

          hg diff --git -r 1.0:1.2 lib/

      - get change stats relative to the last change on some date::

          hg diff --stat -r "date('may 2')"

      - diff all newly-added files that contain a keyword::

          hg diff "set:added() and grep(GNU)"

      - compare a revision and its parents::

          hg diff -c 9353         # compare against first parent
          hg diff -r 9353^:9353   # same using revset syntax
          hg diff -r 9353^2:9353  # compare against the second parent

    Returns 0 on success.
    """

    opts = pycompat.byteskwargs(opts)
    revs = opts.get('rev')
    change = opts.get('change')
    stat = opts.get('stat')
    reverse = opts.get('reverse')

    if revs and change:
        msg = _('cannot specify --rev and --change at the same time')
        raise error.Abort(msg)
    elif change:
        repo = scmutil.unhidehashlikerevs(repo, [change], 'nowarn')
        node2 = scmutil.revsingle(repo, change, None).node()
        node1 = repo[node2].p1().node()
    else:
        repo = scmutil.unhidehashlikerevs(repo, revs, 'nowarn')
        node1, node2 = scmutil.revpair(repo, revs)

    if reverse:
        node1, node2 = node2, node1

    diffopts = patch.diffallopts(ui, opts)
    m = scmutil.match(repo[node2], pats, opts)
    ui.pager('diff')
    logcmdutil.diffordiffstat(ui, repo, diffopts, node1, node2, m, stat=stat,
                              listsubrepos=opts.get('subrepos'),
                              root=opts.get('root'))

@command('^export',
    [('o', 'output', '',
      _('print output to file with formatted name'), _('FORMAT')),
     ('', 'switch-parent', None, _('diff against the second parent')),
     ('r', 'rev', [], _('revisions to export'), _('REV')),
    ] + diffopts,
    _('[OPTION]... [-o OUTFILESPEC] [-r] [REV]...'), cmdtype=readonly)
def export(ui, repo, *changesets, **opts):
    """dump the header and diffs for one or more changesets

    Print the changeset header and diffs for one or more revisions.
    If no revision is given, the parent of the working directory is used.

    The information shown in the changeset header is: author, date,
    branch name (if non-default), changeset hash, parent(s) and commit
    comment.

    .. note::

       :hg:`export` may generate unexpected diff output for merge
       changesets, as it will compare the merge changeset against its
       first parent only.

    Output may be to a file, in which case the name of the file is
    given using a template string. See :hg:`help templates`. In addition
    to the common template keywords, the following formatting rules are
    supported:

    :``%%``: literal "%" character
    :``%H``: changeset hash (40 hexadecimal digits)
    :``%N``: number of patches being generated
    :``%R``: changeset revision number
    :``%b``: basename of the exporting repository
    :``%h``: short-form changeset hash (12 hexadecimal digits)
    :``%m``: first line of the commit message (only alphanumeric characters)
    :``%n``: zero-padded sequence number, starting at 1
    :``%r``: zero-padded changeset revision number
    :``\\``: literal "\\" character

    Without the -a/--text option, export will avoid generating diffs
    of files it detects as binary. With -a, export will generate a
    diff anyway, probably with undesirable results.

    Use the -g/--git option to generate diffs in the git extended diff
    format. See :hg:`help diffs` for more information.

    With the --switch-parent option, the diff will be against the
    second parent. It can be useful to review a merge.

    .. container:: verbose

      Examples:

      - use export and import to transplant a bugfix to the current
        branch::

          hg export -r 9353 | hg import -

      - export all the changesets between two revisions to a file with
        rename information::

          hg export --git -r 123:150 > changes.txt

      - split outgoing changes into a series of patches with
        descriptive names::

          hg export -r "outgoing()" -o "%n-%m.patch"

    Returns 0 on success.
    """
    opts = pycompat.byteskwargs(opts)
    changesets += tuple(opts.get('rev', []))
    if not changesets:
        changesets = ['.']
    repo = scmutil.unhidehashlikerevs(repo, changesets, 'nowarn')
    revs = scmutil.revrange(repo, changesets)
    if not revs:
        raise error.Abort(_("export requires at least one changeset"))
    if len(revs) > 1:
        ui.note(_('exporting patches:\n'))
    else:
        ui.note(_('exporting patch:\n'))
    ui.pager('export')
    cmdutil.export(repo, revs, fntemplate=opts.get('output'),
                   switch_parent=opts.get('switch_parent'),
                   opts=patch.diffallopts(ui, opts))
1978
1979
@command('files',
    [('r', 'rev', '', _('search the repository as it is in REV'), _('REV')),
    ('0', 'print0', None, _('end filenames with NUL, for use with xargs')),
    ] + walkopts + formatteropts + subrepoopts,
    _('[OPTION]... [FILE]...'), cmdtype=readonly)
def files(ui, repo, *pats, **opts):
    """list tracked files

    Print files under Mercurial control in the working directory or
    specified revision for given files (excluding removed files).
    Files can be specified as filenames or filesets.

    If no files are given to match, this command prints the names
    of all files under Mercurial control.

    .. container:: verbose

      Examples:

      - list all files under the current directory::

          hg files .

      - shows sizes and flags for current revision::

          hg files -vr .

      - list all files named README::

          hg files -I "**/README"

      - list all binary files::

          hg files "set:binary()"

      - find files containing a regular expression::

          hg files "set:grep('bob')"

      - search tracked file contents with xargs and grep::

          hg files -0 | xargs -0 grep foo

    See :hg:`help patterns` and :hg:`help filesets` for more information
    on specifying file patterns.

    Returns 0 if a match is found, 1 otherwise.

    """

    opts = pycompat.byteskwargs(opts)
    rev = opts.get('rev')
    if rev:
        repo = scmutil.unhidehashlikerevs(repo, [rev], 'nowarn')
    ctx = scmutil.revsingle(repo, rev, None)

    end = '\n'
    if opts.get('print0'):
        end = '\0'
    fmt = '%s' + end

    m = scmutil.match(ctx, pats, opts)
    ui.pager('files')
    with ui.formatter('files', opts) as fm:
        return cmdutil.files(ui, ctx, m, fm, fmt, opts.get('subrepos'))

@command(
    '^forget',
    walkopts + dryrunopts,
    _('[OPTION]... FILE...'), inferrepo=True)
def forget(ui, repo, *pats, **opts):
    """forget the specified files on the next commit

    Mark the specified files so they will no longer be tracked
    after the next commit.

    This only removes files from the current branch, not from the
    entire project history, and it does not delete them from the
    working directory.

    To delete the file from the working directory, see :hg:`remove`.

    To undo a forget before the next commit, see :hg:`add`.

    .. container:: verbose

      Examples:

      - forget newly-added binary files::

          hg forget "set:added() and binary()"

      - forget files that would be excluded by .hgignore::

          hg forget "set:hgignore()"

    Returns 0 on success.
    """

    opts = pycompat.byteskwargs(opts)
    if not pats:
        raise error.Abort(_('no files specified'))

    m = scmutil.match(repo[None], pats, opts)
    dryrun = opts.get(r'dry_run')
    rejected = cmdutil.forget(ui, repo, m, prefix="",
                              explicitonly=False, dryrun=dryrun)[0]
    return rejected and 1 or 0

@command(
    'graft',
    [('r', 'rev', [], _('revisions to graft'), _('REV')),
     ('c', 'continue', False, _('resume interrupted graft')),
     ('e', 'edit', False, _('invoke editor on commit messages')),
     ('', 'log', None, _('append graft info to log message')),
     ('f', 'force', False, _('force graft')),
     ('D', 'currentdate', False,
      _('record the current date as commit date')),
     ('U', 'currentuser', False,
      _('record the current user as committer'))]
    + commitopts2 + mergetoolopts + dryrunopts,
    _('[OPTION]... [-r REV]... REV...'))
def graft(ui, repo, *revs, **opts):
    '''copy changes from other branches onto the current branch

    This command uses Mercurial's merge logic to copy individual
    changes from other branches without merging branches in the
    history graph. This is sometimes known as 'backporting' or
    'cherry-picking'. By default, graft will copy user, date, and
    description from the source changesets.

    Changesets that are ancestors of the current revision, that have
    already been grafted, or that are merges will be skipped.

    If --log is specified, log messages will have a comment appended
    of the form::

      (grafted from CHANGESETHASH)

    If --force is specified, revisions will be grafted even if they
    are already ancestors of, or have been grafted to, the destination.
    This is useful when the revisions have since been backed out.

    If a graft merge results in conflicts, the graft process is
    interrupted so that the current merge can be manually resolved.
    Once all conflicts are addressed, the graft process can be
    continued with the -c/--continue option.

    .. note::

       The -c/--continue option does not reapply earlier options, except
       for --force.

    .. container:: verbose

      Examples:

      - copy a single change to the stable branch and edit its description::

          hg update stable
          hg graft --edit 9393

      - graft a range of changesets with one exception, updating dates::

          hg graft -D "2085::2093 and not 2091"

      - continue a graft after resolving conflicts::

          hg graft -c

      - show the source of a grafted changeset::

          hg log --debug -r .

      - show revisions sorted by date::

          hg log -r "sort(all(), date)"

    See :hg:`help revisions` for more about specifying revisions.

    Returns 0 on successful completion.
    '''
    with repo.wlock():
        return _dograft(ui, repo, *revs, **opts)

def _dograft(ui, repo, *revs, **opts):
    opts = pycompat.byteskwargs(opts)
    if revs and opts.get('rev'):
        ui.warn(_('warning: inconsistent use of --rev might give unexpected '
                  'revision ordering!\n'))

    revs = list(revs)
    revs.extend(opts.get('rev'))

    if not opts.get('user') and opts.get('currentuser'):
        opts['user'] = ui.username()
    if not opts.get('date') and opts.get('currentdate'):
        opts['date'] = "%d %d" % dateutil.makedate()

    editor = cmdutil.getcommiteditor(editform='graft',
                                     **pycompat.strkwargs(opts))

    cont = False
    if opts.get('continue'):
        cont = True
        if revs:
            raise error.Abort(_("can't specify --continue and revisions"))
        # read in unfinished revisions
        try:
            nodes = repo.vfs.read('graftstate').splitlines()
            revs = [repo[node].rev() for node in nodes]
        except IOError as inst:
            if inst.errno != errno.ENOENT:
                raise
            cmdutil.wrongtooltocontinue(repo, _('graft'))
    else:
        if not revs:
            raise error.Abort(_('no revisions specified'))
        cmdutil.checkunfinished(repo)
        cmdutil.bailifchanged(repo)
        revs = scmutil.revrange(repo, revs)

    skipped = set()
    # check for merges
    for rev in repo.revs('%ld and merge()', revs):
        ui.warn(_('skipping ungraftable merge revision %d\n') % rev)
        skipped.add(rev)
    revs = [r for r in revs if r not in skipped]
    if not revs:
        return -1

    # Don't check in the --continue case, in effect retaining --force across
    # --continues. That's because without --force, any revisions we decided to
    # skip would have been filtered out here, so they wouldn't have made their
    # way to the graftstate. With --force, any revisions we would have otherwise
    # skipped would not have been filtered out, and if they hadn't been applied
    # already, they'd have been in the graftstate.
    if not (cont or opts.get('force')):
        # check for ancestors of dest branch
        crev = repo['.'].rev()
        ancestors = repo.changelog.ancestors([crev], inclusive=True)
        # XXX make this lazy in the future
        # don't mutate while iterating, create a copy
        for rev in list(revs):
            if rev in ancestors:
                ui.warn(_('skipping ancestor revision %d:%s\n') %
                        (rev, repo[rev]))
                # XXX remove on list is slow
                revs.remove(rev)
        if not revs:
            return -1

    # analyze revs for earlier grafts
    ids = {}
    for ctx in repo.set("%ld", revs):
        ids[ctx.hex()] = ctx.rev()
        n = ctx.extra().get('source')
        if n:
            ids[n] = ctx.rev()

    # check ancestors for earlier grafts
    ui.debug('scanning for duplicate grafts\n')

    # The only changesets we can be sure doesn't contain grafts of any
    # revs, are the ones that are common ancestors of *all* revs:
    for rev in repo.revs('only(%d,ancestor(%ld))', crev, revs):
        ctx = repo[rev]
        n = ctx.extra().get('source')
        if n in ids:
            try:
                r = repo[n].rev()
            except error.RepoLookupError:
                r = None
            if r in revs:
                ui.warn(_('skipping revision %d:%s '
                          '(already grafted to %d:%s)\n')
                        % (r, repo[r], rev, ctx))
                revs.remove(r)
            elif ids[n] in revs:
                if r is None:
                    ui.warn(_('skipping already grafted revision %d:%s '
                              '(%d:%s also has unknown origin %s)\n')
                            % (ids[n], repo[ids[n]], rev, ctx, n[:12]))
                else:
                    ui.warn(_('skipping already grafted revision %d:%s '
                              '(%d:%s also has origin %d:%s)\n')
                            % (ids[n], repo[ids[n]], rev, ctx, r, n[:12]))
                revs.remove(ids[n])
        elif ctx.hex() in ids:
            r = ids[ctx.hex()]
            ui.warn(_('skipping already grafted revision %d:%s '
                      '(was grafted from %d:%s)\n') %
                    (r, repo[r], rev, ctx))
            revs.remove(r)
    if not revs:
        return -1

    for pos, ctx in enumerate(repo.set("%ld", revs)):
        desc = '%d:%s "%s"' % (ctx.rev(), ctx,
                               ctx.description().split('\n', 1)[0])
        names = repo.nodetags(ctx.node()) + repo.nodebookmarks(ctx.node())
        if names:
            desc += ' (%s)' % ' '.join(names)
        ui.status(_('grafting %s\n') % desc)
        if opts.get('dry_run'):
            continue

        source = ctx.extra().get('source')
        extra = {}
        if source:
            extra['source'] = source
            extra['intermediate-source'] = ctx.hex()
        else:
            extra['source'] = ctx.hex()
        user = ctx.user()
        if opts.get('user'):
            user = opts['user']
        date = ctx.date()
        if opts.get('date'):
            date = opts['date']
        message = ctx.description()
        if opts.get('log'):
            message += '\n(grafted from %s)' % ctx.hex()

        # we don't merge the first commit when continuing
        if not cont:
            # perform the graft merge with p1(rev) as 'ancestor'
            try:
                # ui.forcemerge is an internal variable, do not document
                repo.ui.setconfig('ui', 'forcemerge', opts.get('tool', ''),
                                  'graft')
                stats = mergemod.graft(repo, ctx, ctx.p1(),
                                       ['local', 'graft'])
            finally:
                repo.ui.setconfig('ui', 'forcemerge', '', 'graft')
            # report any conflicts
            if stats.unresolvedcount > 0:
                # write out state for --continue
                nodelines = [repo[rev].hex() + "\n" for rev in revs[pos:]]
                repo.vfs.write('graftstate', ''.join(nodelines))
                extra = ''
                if opts.get('user'):
                    extra += ' --user %s' % procutil.shellquote(opts['user'])
                if opts.get('date'):
                    extra += ' --date %s' % procutil.shellquote(opts['date'])
                if opts.get('log'):
                    extra += ' --log'
                hint = _("use 'hg resolve' and 'hg graft --continue%s'") % extra
                raise error.Abort(
                    _("unresolved conflicts, can't continue"),
                    hint=hint)
        else:
            cont = False

        # commit
        node = repo.commit(text=message, user=user,
                           date=date, extra=extra, editor=editor)
        if node is None:
            ui.warn(
                _('note: graft of %d:%s created no changes to commit\n') %
                (ctx.rev(), ctx))

    # remove state when we complete successfully
    if not opts.get('dry_run'):
        repo.vfs.unlinkpath('graftstate', ignoremissing=True)

    return 0

@command('grep',
    [('0', 'print0', None, _('end fields with NUL')),
    ('', 'all', None, _('print all revisions that match')),
    ('a', 'text', None, _('treat all files as text')),
    ('f', 'follow', None,
     _('follow changeset history,'
       ' or file history across copies and renames')),
    ('i', 'ignore-case', None, _('ignore case when matching')),
    ('l', 'files-with-matches', None,
     _('print only filenames and revisions that match')),
    ('n', 'line-number', None, _('print matching line numbers')),
    ('r', 'rev', [],
     _('only search files changed within revision range'), _('REV')),
    ('u', 'user', None, _('list the author (long with -v)')),
    ('d', 'date', None, _('list the date (short with -q)')),
    ] + formatteropts + walkopts,
    _('[OPTION]... PATTERN [FILE]...'),
    inferrepo=True, cmdtype=readonly)
def grep(ui, repo, pattern, *pats, **opts):
    """search revision history for a pattern in specified files

    Search revision history for a regular expression in the specified
    files or the entire project.

    By default, grep prints the most recent revision number for each
    file in which it finds a match. To get it to print every revision
    that contains a change in match status ("-" for a match that becomes
    a non-match, or "+" for a non-match that becomes a match), use the
    --all flag.

    PATTERN can be any Python (roughly Perl-compatible) regular
    expression.

    If no FILEs are specified (and -f/--follow isn't set), all files in
    the repository are searched, including those that don't exist in the
    current branch or have been deleted in a prior changeset.

    Returns 0 if a match is found, 1 otherwise.
    """
    opts = pycompat.byteskwargs(opts)
    reflags = re.M
    if opts.get('ignore_case'):
        reflags |= re.I
    try:
        regexp = util.re.compile(pattern, reflags)
    except re.error as inst:
        ui.warn(_("grep: invalid match pattern: %s\n") % pycompat.bytestr(inst))
        return 1
    sep, eol = ':', '\n'
    if opts.get('print0'):
        sep = eol = '\0'

    getfile = util.lrucachefunc(repo.file)

    def matchlines(body):
        begin = 0
        linenum = 0
        while begin < len(body):
            match = regexp.search(body, begin)
            if not match:
                break
            mstart, mend = match.span()
            linenum += body.count('\n', begin, mstart) + 1
            # rfind/find return -1 on a miss, so "+ 1 or fallback"
            # turns a miss into 0 and selects the fallback value
            lstart = body.rfind('\n', begin, mstart) + 1 or begin
            begin = body.find('\n', mend) + 1 or len(body) + 1
            lend = begin - 1
            yield linenum, mstart - lstart, mend - lstart, body[lstart:lend]

    class linestate(object):
        def __init__(self, line, linenum, colstart, colend):
            self.line = line
            self.linenum = linenum
            self.colstart = colstart
            self.colend = colend

        def __hash__(self):
            return hash((self.linenum, self.line))

        def __eq__(self, other):
            return self.line == other.line

        def findpos(self):
            """Iterate all (start, end) indices of matches"""
            yield self.colstart, self.colend
            p = self.colend
            while p < len(self.line):
                m = regexp.search(self.line, p)
                if not m:
                    break
                yield m.span()
                p = m.end()

    matches = {}
    copies = {}
    def grepbody(fn, rev, body):
        # record a linestate for every matching line of fn at rev
        matches[rev].setdefault(fn, [])
        m = matches[rev][fn]
        for lnum, cstart, cend, line in matchlines(body):
            s = linestate(line, lnum, cstart, cend)
            m.append(s)

    def difflinestates(a, b):
        sm = difflib.SequenceMatcher(None, a, b)
        for tag, alo, ahi, blo, bhi in sm.get_opcodes():
            if tag == 'insert':
                for i in xrange(blo, bhi):
                    yield ('+', b[i])
            elif tag == 'delete':
                for i in xrange(alo, ahi):
                    yield ('-', a[i])
            elif tag == 'replace':
                for i in xrange(alo, ahi):
                    yield ('-', a[i])
                for i in xrange(blo, bhi):
                    yield ('+', b[i])

    def display(fm, fn, ctx, pstates, states):
        rev = ctx.rev()
        if fm.isplain():
            formatuser = ui.shortuser
        else:
            formatuser = str
        if ui.quiet:
            datefmt = '%Y-%m-%d'
        else:
            datefmt = '%a %b %d %H:%M:%S %Y %1%2'
        found = False
        @util.cachefunc
        def binary():
            flog = getfile(fn)
            return stringutil.binary(flog.read(ctx.filenode(fn)))

        fieldnamemap = {'filename': 'file', 'linenumber': 'line_number'}
        if opts.get('all'):
            iter = difflinestates(pstates, states)
        else:
            iter = [('', l) for l in states]
        for change, l in iter:
            fm.startitem()
            fm.data(node=fm.hexfunc(ctx.node()))
            cols = [
                ('filename', fn, True),
                ('rev', rev, True),
                ('linenumber', l.linenum, opts.get('line_number')),
            ]
            if opts.get('all'):
                cols.append(('change', change, True))
            cols.extend([
                ('user', formatuser(ctx.user()), opts.get('user')),
                ('date', fm.formatdate(ctx.date(), datefmt), opts.get('date')),
            ])
            lastcol = next(name for name, data, cond in reversed(cols) if cond)
            for name, data, cond in cols:
                field = fieldnamemap.get(name, name)
                fm.condwrite(cond, field, '%s', data, label='grep.%s' % name)
                if cond and name != lastcol:
2503 if cond and name != lastcol:
2503 fm.plain(sep, label='grep.sep')
2504 fm.plain(sep, label='grep.sep')
2504 if not opts.get('files_with_matches'):
2505 if not opts.get('files_with_matches'):
2505 fm.plain(sep, label='grep.sep')
2506 fm.plain(sep, label='grep.sep')
2506 if not opts.get('text') and binary():
2507 if not opts.get('text') and binary():
2507 fm.plain(_(" Binary file matches"))
2508 fm.plain(_(" Binary file matches"))
2508 else:
2509 else:
2509 displaymatches(fm.nested('texts'), l)
2510 displaymatches(fm.nested('texts'), l)
2510 fm.plain(eol)
2511 fm.plain(eol)
2511 found = True
2512 found = True
2512 if opts.get('files_with_matches'):
2513 if opts.get('files_with_matches'):
2513 break
2514 break
2514 return found
2515 return found
2515
2516
2516 def displaymatches(fm, l):
2517 def displaymatches(fm, l):
2517 p = 0
2518 p = 0
2518 for s, e in l.findpos():
2519 for s, e in l.findpos():
2519 if p < s:
2520 if p < s:
2520 fm.startitem()
2521 fm.startitem()
2521 fm.write('text', '%s', l.line[p:s])
2522 fm.write('text', '%s', l.line[p:s])
2522 fm.data(matched=False)
2523 fm.data(matched=False)
2523 fm.startitem()
2524 fm.startitem()
2524 fm.write('text', '%s', l.line[s:e], label='grep.match')
2525 fm.write('text', '%s', l.line[s:e], label='grep.match')
2525 fm.data(matched=True)
2526 fm.data(matched=True)
2526 p = e
2527 p = e
2527 if p < len(l.line):
2528 if p < len(l.line):
2528 fm.startitem()
2529 fm.startitem()
2529 fm.write('text', '%s', l.line[p:])
2530 fm.write('text', '%s', l.line[p:])
2530 fm.data(matched=False)
2531 fm.data(matched=False)
2531 fm.end()
2532 fm.end()
2532
2533
2533 skip = {}
2534 skip = {}
2534 revfiles = {}
2535 revfiles = {}
2535 match = scmutil.match(repo[None], pats, opts)
2536 match = scmutil.match(repo[None], pats, opts)
2536 found = False
2537 found = False
2537 follow = opts.get('follow')
2538 follow = opts.get('follow')
2538
2539
2539 def prep(ctx, fns):
2540 def prep(ctx, fns):
2540 rev = ctx.rev()
2541 rev = ctx.rev()
2541 pctx = ctx.p1()
2542 pctx = ctx.p1()
2542 parent = pctx.rev()
2543 parent = pctx.rev()
2543 matches.setdefault(rev, {})
2544 matches.setdefault(rev, {})
2544 matches.setdefault(parent, {})
2545 matches.setdefault(parent, {})
2545 files = revfiles.setdefault(rev, [])
2546 files = revfiles.setdefault(rev, [])
2546 for fn in fns:
2547 for fn in fns:
2547 flog = getfile(fn)
2548 flog = getfile(fn)
2548 try:
2549 try:
2549 fnode = ctx.filenode(fn)
2550 fnode = ctx.filenode(fn)
2550 except error.LookupError:
2551 except error.LookupError:
2551 continue
2552 continue
2552
2553
2553 copied = flog.renamed(fnode)
2554 copied = flog.renamed(fnode)
2554 copy = follow and copied and copied[0]
2555 copy = follow and copied and copied[0]
2555 if copy:
2556 if copy:
2556 copies.setdefault(rev, {})[fn] = copy
2557 copies.setdefault(rev, {})[fn] = copy
2557 if fn in skip:
2558 if fn in skip:
2558 if copy:
2559 if copy:
2559 skip[copy] = True
2560 skip[copy] = True
2560 continue
2561 continue
2561 files.append(fn)
2562 files.append(fn)
2562
2563
2563 if fn not in matches[rev]:
2564 if fn not in matches[rev]:
2564 grepbody(fn, rev, flog.read(fnode))
2565 grepbody(fn, rev, flog.read(fnode))
2565
2566
2566 pfn = copy or fn
2567 pfn = copy or fn
2567 if pfn not in matches[parent]:
2568 if pfn not in matches[parent]:
2568 try:
2569 try:
2569 fnode = pctx.filenode(pfn)
2570 fnode = pctx.filenode(pfn)
2570 grepbody(pfn, parent, flog.read(fnode))
2571 grepbody(pfn, parent, flog.read(fnode))
2571 except error.LookupError:
2572 except error.LookupError:
2572 pass
2573 pass
2573
2574
2574 ui.pager('grep')
2575 ui.pager('grep')
2575 fm = ui.formatter('grep', opts)
2576 fm = ui.formatter('grep', opts)
2576 for ctx in cmdutil.walkchangerevs(repo, match, opts, prep):
2577 for ctx in cmdutil.walkchangerevs(repo, match, opts, prep):
2577 rev = ctx.rev()
2578 rev = ctx.rev()
2578 parent = ctx.p1().rev()
2579 parent = ctx.p1().rev()
2579 for fn in sorted(revfiles.get(rev, [])):
2580 for fn in sorted(revfiles.get(rev, [])):
2580 states = matches[rev][fn]
2581 states = matches[rev][fn]
2581 copy = copies.get(rev, {}).get(fn)
2582 copy = copies.get(rev, {}).get(fn)
2582 if fn in skip:
2583 if fn in skip:
2583 if copy:
2584 if copy:
2584 skip[copy] = True
2585 skip[copy] = True
2585 continue
2586 continue
2586 pstates = matches.get(parent, {}).get(copy or fn, [])
2587 pstates = matches.get(parent, {}).get(copy or fn, [])
2587 if pstates or states:
2588 if pstates or states:
2588 r = display(fm, fn, ctx, pstates, states)
2589 r = display(fm, fn, ctx, pstates, states)
2589 found = found or r
2590 found = found or r
2590 if r and not opts.get('all'):
2591 if r and not opts.get('all'):
2591 skip[fn] = True
2592 skip[fn] = True
2592 if copy:
2593 if copy:
2593 skip[copy] = True
2594 skip[copy] = True
2594 del revfiles[rev]
2595 del revfiles[rev]
2595 # We will keep the matches dict for the duration of the window
2596 # We will keep the matches dict for the duration of the window
2596 # clear the matches dict once the window is over
2597 # clear the matches dict once the window is over
2597 if not revfiles:
2598 if not revfiles:
2598 matches.clear()
2599 matches.clear()
2599 fm.end()
2600 fm.end()
2600
2601
2601 return not found
2602 return not found
2602
2603
@command('heads',
    [('r', 'rev', '',
     _('show only heads which are descendants of STARTREV'), _('STARTREV')),
    ('t', 'topo', False, _('show topological heads only')),
    ('a', 'active', False, _('show active branchheads only (DEPRECATED)')),
    ('c', 'closed', False, _('show normal and closed branch heads')),
    ] + templateopts,
    _('[-ct] [-r STARTREV] [REV]...'), cmdtype=readonly)
def heads(ui, repo, *branchrevs, **opts):
    """show branch heads

    With no arguments, show all open branch heads in the repository.
    Branch heads are changesets that have no descendants on the
    same branch. They are where development generally takes place and
    are the usual targets for update and merge operations.

    If one or more REVs are given, only open branch heads on the
    branches associated with the specified changesets are shown. This
    means that you can use :hg:`heads .` to see the heads on the
    currently checked-out branch.

    If -c/--closed is specified, also show branch heads marked closed
    (see :hg:`commit --close-branch`).

    If STARTREV is specified, only those heads that are descendants of
    STARTREV will be displayed.

    If -t/--topo is specified, named branch mechanics will be ignored and only
    topological heads (changesets with no children) will be shown.

    Returns 0 if matching heads are found, 1 if not.
    """

    opts = pycompat.byteskwargs(opts)
    start = None
    rev = opts.get('rev')
    if rev:
        repo = scmutil.unhidehashlikerevs(repo, [rev], 'nowarn')
        start = scmutil.revsingle(repo, rev, None).node()

    if opts.get('topo'):
        heads = [repo[h] for h in repo.heads(start)]
    else:
        heads = []
        for branch in repo.branchmap():
            heads += repo.branchheads(branch, start, opts.get('closed'))
        heads = [repo[h] for h in heads]

    if branchrevs:
        branches = set(repo[br].branch() for br in branchrevs)
        heads = [h for h in heads if h.branch() in branches]

    if opts.get('active') and branchrevs:
        dagheads = repo.heads(start)
        heads = [h for h in heads if h.node() in dagheads]

    if branchrevs:
        haveheads = set(h.branch() for h in heads)
        if branches - haveheads:
            headless = ', '.join(b for b in branches - haveheads)
            msg = _('no open branch heads found on branches %s')
            if opts.get('rev'):
                msg += _(' (started at %s)') % opts['rev']
            ui.warn((msg + '\n') % headless)

    if not heads:
        return 1

    ui.pager('heads')
    heads = sorted(heads, key=lambda x: -x.rev())
    displayer = logcmdutil.changesetdisplayer(ui, repo, opts)
    for ctx in heads:
        displayer.show(ctx)
    displayer.close()

@command('help',
    [('e', 'extension', None, _('show only help for extensions')),
     ('c', 'command', None, _('show only help for commands')),
     ('k', 'keyword', None, _('show topics matching keyword')),
     ('s', 'system', [], _('show help for specific platform(s)')),
     ],
    _('[-ecks] [TOPIC]'),
    norepo=True, cmdtype=readonly)
def help_(ui, name=None, **opts):
    """show help for a given topic or a help overview

    With no arguments, print a list of commands with short help messages.

    Given a topic, extension, or command name, print help for that
    topic.

    Returns 0 if successful.
    """

    keep = opts.get(r'system') or []
    if len(keep) == 0:
        if pycompat.sysplatform.startswith('win'):
            keep.append('windows')
        elif pycompat.sysplatform == 'OpenVMS':
            keep.append('vms')
        elif pycompat.sysplatform == 'plan9':
            keep.append('plan9')
        else:
            keep.append('unix')
            keep.append(pycompat.sysplatform.lower())
    if ui.verbose:
        keep.append('verbose')

    commands = sys.modules[__name__]
    formatted = help.formattedhelp(ui, commands, name, keep=keep, **opts)
    ui.pager('help')
    ui.write(formatted)


@command('identify|id',
    [('r', 'rev', '',
     _('identify the specified revision'), _('REV')),
    ('n', 'num', None, _('show local revision number')),
    ('i', 'id', None, _('show global revision id')),
    ('b', 'branch', None, _('show branch')),
    ('t', 'tags', None, _('show tags')),
    ('B', 'bookmarks', None, _('show bookmarks')),
    ] + remoteopts + formatteropts,
    _('[-nibtB] [-r REV] [SOURCE]'),
    optionalrepo=True, cmdtype=readonly)
def identify(ui, repo, source=None, rev=None,
             num=None, id=None, branch=None, tags=None, bookmarks=None, **opts):
    """identify the working directory or specified revision

    Print a summary identifying the repository state at REV using one or
    two parent hash identifiers, followed by a "+" if the working
    directory has uncommitted changes, the branch name (if not default),
    a list of tags, and a list of bookmarks.

    When REV is not given, print a summary of the current state of the
    repository including the working directory. Specify -r. to get information
    of the working directory parent without scanning uncommitted changes.

    Specifying a path to a repository root or Mercurial bundle will
    cause lookup to operate on that repository/bundle.

    .. container:: verbose

      Examples:

      - generate a build identifier for the working directory::

          hg id --id > build-id.dat

      - find the revision corresponding to a tag::

          hg id -n -r 1.3

      - check the most recent revision of a remote repository::

          hg id -r tip https://www.mercurial-scm.org/repo/hg/

    See :hg:`log` for generating more information about specific revisions,
    including full hash identifiers.

    Returns 0 if successful.
    """

    opts = pycompat.byteskwargs(opts)
    if not repo and not source:
        raise error.Abort(_("there is no Mercurial repository here "
                            "(.hg not found)"))

    if ui.debugflag:
        hexfunc = hex
    else:
        hexfunc = short
    default = not (num or id or branch or tags or bookmarks)
    output = []
    revs = []

    if source:
        source, branches = hg.parseurl(ui.expandpath(source))
        peer = hg.peer(repo or ui, opts, source) # only pass ui when no repo
        repo = peer.local()
        revs, checkout = hg.addbranchrevs(repo, peer, branches, None)

    fm = ui.formatter('identify', opts)
    fm.startitem()

    if not repo:
        if num or branch or tags:
            raise error.Abort(
                _("can't query remote revision number, branch, or tags"))
        if not rev and revs:
            rev = revs[0]
        if not rev:
            rev = "tip"

        remoterev = peer.lookup(rev)
        hexrev = hexfunc(remoterev)
        if default or id:
            output = [hexrev]
        fm.data(id=hexrev)

        def getbms():
            bms = []

            if 'bookmarks' in peer.listkeys('namespaces'):
                hexremoterev = hex(remoterev)
                bms = [bm for bm, bmr in peer.listkeys('bookmarks').iteritems()
                       if bmr == hexremoterev]

            return sorted(bms)

        bms = getbms()
        if bookmarks:
            output.extend(bms)
        elif default and not ui.quiet:
            # multiple bookmarks for a single parent separated by '/'
            bm = '/'.join(bms)
            if bm:
                output.append(bm)

        fm.data(node=hex(remoterev))
        fm.data(bookmarks=fm.formatlist(bms, name='bookmark'))
    else:
        if rev:
            repo = scmutil.unhidehashlikerevs(repo, [rev], 'nowarn')
        ctx = scmutil.revsingle(repo, rev, None)

        if ctx.rev() is None:
            ctx = repo[None]
            parents = ctx.parents()
            taglist = []
            for p in parents:
                taglist.extend(p.tags())

            dirty = ""
            if ctx.dirty(missing=True, merge=False, branch=False):
                dirty = '+'
            fm.data(dirty=dirty)

            hexoutput = [hexfunc(p.node()) for p in parents]
            if default or id:
                output = ["%s%s" % ('+'.join(hexoutput), dirty)]
            fm.data(id="%s%s" % ('+'.join(hexoutput), dirty))

            if num:
                numoutput = ["%d" % p.rev() for p in parents]
                output.append("%s%s" % ('+'.join(numoutput), dirty))

            fn = fm.nested('parents')
            for p in parents:
                fn.startitem()
                fn.data(rev=p.rev())
                fn.data(node=p.hex())
                fn.context(ctx=p)
            fn.end()
        else:
            hexoutput = hexfunc(ctx.node())
            if default or id:
                output = [hexoutput]
            fm.data(id=hexoutput)

            if num:
                output.append(pycompat.bytestr(ctx.rev()))
            taglist = ctx.tags()

        if default and not ui.quiet:
            b = ctx.branch()
            if b != 'default':
                output.append("(%s)" % b)

            # multiple tags for a single parent separated by '/'
            t = '/'.join(taglist)
            if t:
                output.append(t)

            # multiple bookmarks for a single parent separated by '/'
            bm = '/'.join(ctx.bookmarks())
            if bm:
                output.append(bm)
        else:
            if branch:
                output.append(ctx.branch())

            if tags:
                output.extend(taglist)

            if bookmarks:
                output.extend(ctx.bookmarks())

        fm.data(node=ctx.hex())
        fm.data(branch=ctx.branch())
        fm.data(tags=fm.formatlist(taglist, name='tag', sep=':'))
        fm.data(bookmarks=fm.formatlist(ctx.bookmarks(), name='bookmark'))
        fm.context(ctx=ctx)

    fm.plain("%s\n" % ' '.join(output))
    fm.end()

@command('import|patch',
    [('p', 'strip', 1,
     _('directory strip option for patch. This has the same '
       'meaning as the corresponding patch option'), _('NUM')),
    ('b', 'base', '', _('base path (DEPRECATED)'), _('PATH')),
    ('e', 'edit', False, _('invoke editor on commit messages')),
    ('f', 'force', None,
     _('skip check for outstanding uncommitted changes (DEPRECATED)')),
    ('', 'no-commit', None,
     _("don't commit, just update the working directory")),
    ('', 'bypass', None,
     _("apply patch without touching the working directory")),
    ('', 'partial', None,
     _('commit even if some hunks fail')),
    ('', 'exact', None,
     _('abort if patch would apply lossily')),
    ('', 'prefix', '',
     _('apply patch to subdirectory'), _('DIR')),
    ('', 'import-branch', None,
     _('use any branch information in patch (implied by --exact)'))] +
    commitopts + commitopts2 + similarityopts,
    _('[OPTION]... PATCH...'))
def import_(ui, repo, patch1=None, *patches, **opts):
    """import an ordered set of patches

    Import a list of patches and commit them individually (unless
    --no-commit is specified).

    To read a patch from standard input (stdin), use "-" as the patch
    name. If a URL is specified, the patch will be downloaded from
    there.

    Import first applies changes to the working directory (unless
    --bypass is specified); import will abort if there are outstanding
    changes.

    Use --bypass to apply and commit patches directly to the
    repository, without affecting the working directory. Without
    --exact, patches will be applied on top of the working directory
    parent revision.

    You can import a patch straight from a mail message. Even patches
    as attachments work (to use the body part, it must have type
    text/plain or text/x-patch). From and Subject headers of email
    message are used as default committer and commit message. All
    text/plain body parts before first diff are added to the commit
    message.

    If the imported patch was generated by :hg:`export`, user and
    description from patch override values from message headers and
    body. Values given on command line with -m/--message and -u/--user
    override these.

    If --exact is specified, import will set the working directory to
    the parent of each patch before applying it, and will abort if the
    resulting changeset has a different ID than the one recorded in
    the patch. This will guard against various ways that portable
    patch formats and mail systems might fail to transfer Mercurial
    data or metadata. See :hg:`bundle` for lossless transmission.

    Use --partial to ensure a changeset will be created from the patch
    even if some hunks fail to apply. Hunks that fail to apply will be
    written to a <target-file>.rej file. Conflicts can then be resolved
    by hand before :hg:`commit --amend` is run to update the created
    changeset. This flag exists to let people import patches that
    partially apply without losing the associated metadata (author,
    date, description, ...).

    .. note::

       When no hunks apply cleanly, :hg:`import --partial` will create
       an empty changeset, importing only the patch metadata.

    With -s/--similarity, hg will attempt to discover renames and
    copies in the patch in the same way as :hg:`addremove`.

    It is possible to use external patch programs to perform the patch
    by setting the ``ui.patch`` configuration option. For the default
    internal tool, the fuzz can also be configured via ``patch.fuzz``.
    See :hg:`help config` for more information about configuration
    files and how to use these options.

    See :hg:`help dates` for a list of formats valid for -d/--date.

    .. container:: verbose

      Examples:

      - import a traditional patch from a website and detect renames::
2989 - import a traditional patch from a website and detect renames::
2989
2990
2990 hg import -s 80 http://example.com/bugfix.patch
2991 hg import -s 80 http://example.com/bugfix.patch
2991
2992
2992 - import a changeset from an hgweb server::
2993 - import a changeset from an hgweb server::
2993
2994
2994 hg import https://www.mercurial-scm.org/repo/hg/rev/5ca8c111e9aa
2995 hg import https://www.mercurial-scm.org/repo/hg/rev/5ca8c111e9aa
2995
2996
2996 - import all the patches in an Unix-style mbox::
2997 - import all the patches in an Unix-style mbox::
2997
2998
2998 hg import incoming-patches.mbox
2999 hg import incoming-patches.mbox
2999
3000
3000 - import patches from stdin::
3001 - import patches from stdin::
3001
3002
3002 hg import -
3003 hg import -
3003
3004
3004 - attempt to exactly restore an exported changeset (not always
3005 - attempt to exactly restore an exported changeset (not always
3005 possible)::
3006 possible)::
3006
3007
3007 hg import --exact proposed-fix.patch
3008 hg import --exact proposed-fix.patch
3008
3009
3009 - use an external tool to apply a patch which is too fuzzy for
3010 - use an external tool to apply a patch which is too fuzzy for
3010 the default internal tool.
3011 the default internal tool.
3011
3012
3012 hg import --config ui.patch="patch --merge" fuzzy.patch
3013 hg import --config ui.patch="patch --merge" fuzzy.patch
3013
3014
3014 - change the default fuzzing from 2 to a less strict 7
3015 - change the default fuzzing from 2 to a less strict 7
3015
3016
3016 hg import --config ui.fuzz=7 fuzz.patch
3017 hg import --config ui.fuzz=7 fuzz.patch
3017
3018
3018 Returns 0 on success, 1 on partial success (see --partial).
3019 Returns 0 on success, 1 on partial success (see --partial).
3019 """
3020 """
    opts = pycompat.byteskwargs(opts)
    if not patch1:
        raise error.Abort(_('need at least one patch to import'))

    patches = (patch1,) + patches

    date = opts.get('date')
    if date:
        opts['date'] = dateutil.parsedate(date)

    exact = opts.get('exact')
    update = not opts.get('bypass')
    if not update and opts.get('no_commit'):
        raise error.Abort(_('cannot use --no-commit with --bypass'))
    try:
        sim = float(opts.get('similarity') or 0)
    except ValueError:
        raise error.Abort(_('similarity must be a number'))
    if sim < 0 or sim > 100:
        raise error.Abort(_('similarity must be between 0 and 100'))
    if sim and not update:
        raise error.Abort(_('cannot use --similarity with --bypass'))
    if exact:
        if opts.get('edit'):
            raise error.Abort(_('cannot use --exact with --edit'))
        if opts.get('prefix'):
            raise error.Abort(_('cannot use --exact with --prefix'))

    base = opts["base"]
    wlock = dsguard = lock = tr = None
    msgs = []
    ret = 0

    try:
        wlock = repo.wlock()

        if update:
            cmdutil.checkunfinished(repo)
            if (exact or not opts.get('force')):
                cmdutil.bailifchanged(repo)

        if not opts.get('no_commit'):
            lock = repo.lock()
            tr = repo.transaction('import')
        else:
            dsguard = dirstateguard.dirstateguard(repo, 'import')
        parents = repo[None].parents()
        for patchurl in patches:
            if patchurl == '-':
                ui.status(_('applying patch from stdin\n'))
                patchfile = ui.fin
                patchurl = 'stdin'      # for error message
            else:
                patchurl = os.path.join(base, patchurl)
                ui.status(_('applying %s\n') % patchurl)
                patchfile = hg.openpath(ui, patchurl)

            haspatch = False
            for hunk in patch.split(patchfile):
                (msg, node, rej) = cmdutil.tryimportone(ui, repo, hunk,
                                                        parents, opts,
                                                        msgs, hg.clean)
                if msg:
                    haspatch = True
                    ui.note(msg + '\n')
                if update or exact:
                    parents = repo[None].parents()
                else:
                    parents = [repo[node]]
                if rej:
                    ui.write_err(_("patch applied partially\n"))
                    ui.write_err(_("(fix the .rej files and run "
                                   "`hg commit --amend`)\n"))
                    ret = 1
                    break

            if not haspatch:
                raise error.Abort(_('%s: no diffs found') % patchurl)

        if tr:
            tr.close()
        if msgs:
            repo.savecommitmessage('\n* * *\n'.join(msgs))
        if dsguard:
            dsguard.close()
        return ret
    finally:
        if tr:
            tr.release()
        release(lock, dsguard, wlock)

@command('incoming|in',
    [('f', 'force', None,
      _('run even if remote repository is unrelated')),
    ('n', 'newest-first', None, _('show newest record first')),
    ('', 'bundle', '',
     _('file to store the bundles into'), _('FILE')),
    ('r', 'rev', [], _('a remote changeset intended to be added'), _('REV')),
    ('B', 'bookmarks', False, _("compare bookmarks")),
    ('b', 'branch', [],
     _('a specific branch you would like to pull'), _('BRANCH')),
    ] + logopts + remoteopts + subrepoopts,
    _('[-p] [-n] [-M] [-f] [-r REV]... [--bundle FILENAME] [SOURCE]'))
def incoming(ui, repo, source="default", **opts):
    """show new changesets found in source

    Show new changesets found in the specified path/URL or the default
    pull location. These are the changesets that would have been pulled
    by :hg:`pull` at the time you issued this command.

    See pull for valid source format details.

    .. container:: verbose

      With -B/--bookmarks, the result of bookmark comparison between
      local and remote repositories is displayed. With -v/--verbose,
      status is also displayed for each bookmark like below::

        BM1               01234567890a added
        BM2               1234567890ab advanced
        BM3               234567890abc diverged
        BM4               34567890abcd changed

      The action taken locally when pulling depends on the
      status of each bookmark:

      :``added``: pull will create it
      :``advanced``: pull will update it
      :``diverged``: pull will create a divergent bookmark
      :``changed``: result depends on remote changesets

      From the point of view of pulling behavior, a bookmark
      existing only in the remote repository is treated as ``added``,
      even if it was in fact locally deleted.

    .. container:: verbose

      For a remote repository, using --bundle avoids downloading the
      changesets twice if the incoming command is followed by a pull.

      Examples:

      - show incoming changes with patches and full description::

          hg incoming -vp

      - show incoming changes excluding merges, store a bundle::

          hg in -vpM --bundle incoming.hg
          hg pull incoming.hg

      - briefly list changes inside a bundle::

          hg in changes.hg -T "{desc|firstline}\\n"

    Returns 0 if there are incoming changes, 1 otherwise.
    """
    opts = pycompat.byteskwargs(opts)
    if opts.get('graph'):
        logcmdutil.checkunsupportedgraphflags([], opts)
        def display(other, chlist, displayer):
            revdag = logcmdutil.graphrevs(other, chlist, opts)
            logcmdutil.displaygraph(ui, repo, revdag, displayer,
                                    graphmod.asciiedges)

        hg._incoming(display, lambda: 1, ui, repo, source, opts, buffered=True)
        return 0

    if opts.get('bundle') and opts.get('subrepos'):
        raise error.Abort(_('cannot combine --bundle and --subrepos'))

    if opts.get('bookmarks'):
        source, branches = hg.parseurl(ui.expandpath(source),
                                       opts.get('branch'))
        other = hg.peer(repo, opts, source)
        if 'bookmarks' not in other.listkeys('namespaces'):
            ui.warn(_("remote doesn't support bookmarks\n"))
            return 0
        ui.pager('incoming')
        ui.status(_('comparing with %s\n') % util.hidepassword(source))
        return bookmarks.incoming(ui, repo, other)

    repo._subtoppath = ui.expandpath(source)
    try:
        return hg.incoming(ui, repo, source, opts)
    finally:
        del repo._subtoppath


@command('^init', remoteopts, _('[-e CMD] [--remotecmd CMD] [DEST]'),
         norepo=True)
def init(ui, dest=".", **opts):
    """create a new repository in the given directory

    Initialize a new repository in the given directory. If the given
    directory does not exist, it will be created.

    If no directory is given, the current directory is used.

    It is possible to specify an ``ssh://`` URL as the destination.
    See :hg:`help urls` for more information.

    Returns 0 on success.
    """
    opts = pycompat.byteskwargs(opts)
    hg.peer(ui, opts, ui.expandpath(dest), create=True)

@command('locate',
    [('r', 'rev', '', _('search the repository as it is in REV'), _('REV')),
    ('0', 'print0', None, _('end filenames with NUL, for use with xargs')),
    ('f', 'fullpath', None, _('print complete paths from the filesystem root')),
    ] + walkopts,
    _('[OPTION]... [PATTERN]...'))
def locate(ui, repo, *pats, **opts):
    """locate files matching specific patterns (DEPRECATED)

    Print files under Mercurial control in the working directory whose
    names match the given patterns.

    By default, this command searches all directories in the working
    directory. To search just the current directory and its
    subdirectories, use "--include .".

    If no patterns are given to match, this command prints the names
    of all files under Mercurial control in the working directory.

    If you want to feed the output of this command into the "xargs"
    command, use the -0 option to both this command and "xargs". This
    will avoid the problem of "xargs" treating single filenames that
    contain whitespace as multiple filenames.

    See :hg:`help files` for a more versatile command.

    Returns 0 if a match is found, 1 otherwise.
    """
    opts = pycompat.byteskwargs(opts)
    if opts.get('print0'):
        end = '\0'
    else:
        end = '\n'
    ctx = scmutil.revsingle(repo, opts.get('rev'), None)

    ret = 1
    m = scmutil.match(ctx, pats, opts, default='relglob',
                      badfn=lambda x, y: False)

    ui.pager('locate')
    for abs in ctx.matches(m):
        if opts.get('fullpath'):
            ui.write(repo.wjoin(abs), end)
        else:
            ui.write(((pats and m.rel(abs)) or abs), end)
        ret = 0

    return ret

@command('^log|history',
    [('f', 'follow', None,
     _('follow changeset history, or file history across copies and renames')),
    ('', 'follow-first', None,
     _('only follow the first parent of merge changesets (DEPRECATED)')),
    ('d', 'date', '', _('show revisions matching date spec'), _('DATE')),
    ('C', 'copies', None, _('show copied files')),
    ('k', 'keyword', [],
     _('do case-insensitive search for a given text'), _('TEXT')),
    ('r', 'rev', [], _('show the specified revision or revset'), _('REV')),
    ('L', 'line-range', [],
     _('follow line range of specified file (EXPERIMENTAL)'),
     _('FILE,RANGE')),
    ('', 'removed', None, _('include revisions where files were removed')),
    ('m', 'only-merges', None, _('show only merges (DEPRECATED)')),
    ('u', 'user', [], _('revisions committed by user'), _('USER')),
    ('', 'only-branch', [],
     _('show only changesets within the given named branch (DEPRECATED)'),
     _('BRANCH')),
    ('b', 'branch', [],
     _('show changesets within the given named branch'), _('BRANCH')),
    ('P', 'prune', [],
     _('do not display revision or any of its ancestors'), _('REV')),
    ] + logopts + walkopts,
    _('[OPTION]... [FILE]'),
    inferrepo=True, cmdtype=readonly)
def log(ui, repo, *pats, **opts):
    """show revision history of entire repository or files

    Print the revision history of the specified files or the entire
    project.

    If no revision range is specified, the default is ``tip:0`` unless
    --follow is set, in which case the working directory parent is
    used as the starting revision.

    File history is shown without following rename or copy history of
    files. Use -f/--follow with a filename to follow history across
    renames and copies. --follow without a filename will only show
    ancestors of the starting revision.

    By default this command prints revision number and changeset id,
    tags, non-trivial parents, user, date and time, and a summary for
    each commit. When the -v/--verbose switch is used, the list of
    changed files and full commit message are shown.

    With --graph the revisions are shown as an ASCII art DAG with the most
    recent changeset at the top.
    'o' is a changeset, '@' is a working directory parent, '_' closes a branch,
    'x' is obsolete, '*' is unstable, and '+' represents a fork where the
    changeset from the lines below is a parent of the 'o' merge on the same
    line.
    Paths in the DAG are represented with '|', '/' and so forth. ':' in place
    of a '|' indicates one or more revisions in a path are omitted.

    .. container:: verbose

      Use -L/--line-range FILE,M:N options to follow the history of lines
      from M to N in FILE. With -p/--patch only diff hunks affecting
      specified line range will be shown. This option requires --follow;
      it can be specified multiple times. Currently, this option is not
      compatible with --graph. This option is experimental.

    .. note::

       :hg:`log --patch` may generate unexpected diff output for merge
       changesets, as it will only compare the merge changeset against
       its first parent. Also, only files different from BOTH parents
       will appear in files:.

    .. note::

       For performance reasons, :hg:`log FILE` may omit duplicate changes
       made on branches and will not show removals or mode changes. To
       see all such changes, use the --removed switch.

    .. container:: verbose

      .. note::

         The history resulting from -L/--line-range options depends on diff
         options; for instance if white-spaces are ignored, respective changes
         with only white-spaces in specified line range will not be listed.

    .. container:: verbose

      Some examples:

      - changesets with full descriptions and file lists::

          hg log -v

      - changesets ancestral to the working directory::

          hg log -f

      - last 10 commits on the current branch::

          hg log -l 10 -b .

      - changesets showing all modifications of a file, including removals::

          hg log --removed file.c

      - all changesets that touch a directory, with diffs, excluding merges::

          hg log -Mp lib/

      - all revision numbers that match a keyword::

          hg log -k bug --template "{rev}\\n"

      - the full hash identifier of the working directory parent::

          hg log -r . --template "{node}\\n"

      - list available log templates::

          hg log -T list

      - check if a given changeset is included in a tagged release::

          hg log -r "a21ccf and ancestor(1.9)"

      - find all changesets by some user in a date range::

          hg log -k alice -d "may 2008 to jul 2008"

      - summary of all changesets after the last tag::

          hg log -r "last(tagged())::" --template "{desc|firstline}\\n"

      - changesets touching lines 13 to 23 for file.c::

          hg log -L file.c,13:23

      - changesets touching lines 13 to 23 for file.c and lines 2 to 6 of
        main.c with patch::

          hg log -L file.c,13:23 -L main.c,2:6 -p

    See :hg:`help dates` for a list of formats valid for -d/--date.

    See :hg:`help revisions` for more about specifying and ordering
    revisions.

    See :hg:`help templates` for more about pre-packaged styles and
    specifying custom templates. The default template used by the log
    command can be customized via the ``ui.logtemplate`` configuration
    setting.

    Returns 0 on success.

    """
3432 opts = pycompat.byteskwargs(opts)
3433 opts = pycompat.byteskwargs(opts)
3433 linerange = opts.get('line_range')
3434 linerange = opts.get('line_range')
3434
3435
3435 if linerange and not opts.get('follow'):
3436 if linerange and not opts.get('follow'):
3436 raise error.Abort(_('--line-range requires --follow'))
3437 raise error.Abort(_('--line-range requires --follow'))
3437
3438
3438 if linerange and pats:
3439 if linerange and pats:
3439 # TODO: take pats as patterns with no line-range filter
3440 # TODO: take pats as patterns with no line-range filter
3440 raise error.Abort(
3441 raise error.Abort(
3441 _('FILE arguments are not compatible with --line-range option')
3442 _('FILE arguments are not compatible with --line-range option')
3442 )
3443 )
3443
3444
3444 repo = scmutil.unhidehashlikerevs(repo, opts.get('rev'), 'nowarn')
3445 repo = scmutil.unhidehashlikerevs(repo, opts.get('rev'), 'nowarn')
3445 revs, differ = logcmdutil.getrevs(repo, pats, opts)
3446 revs, differ = logcmdutil.getrevs(repo, pats, opts)
3446 if linerange:
3447 if linerange:
3447 # TODO: should follow file history from logcmdutil._initialrevs(),
3448 # TODO: should follow file history from logcmdutil._initialrevs(),
3448 # then filter the result by logcmdutil._makerevset() and --limit
3449 # then filter the result by logcmdutil._makerevset() and --limit
3449 revs, differ = logcmdutil.getlinerangerevs(repo, revs, opts)
3450 revs, differ = logcmdutil.getlinerangerevs(repo, revs, opts)
3450
3451
3451 getrenamed = None
3452 getrenamed = None
3452 if opts.get('copies'):
3453 if opts.get('copies'):
3453 endrev = None
3454 endrev = None
3454 if opts.get('rev'):
3455 if opts.get('rev'):
3455 endrev = scmutil.revrange(repo, opts.get('rev')).max() + 1
3456 endrev = scmutil.revrange(repo, opts.get('rev')).max() + 1
3456 getrenamed = templatekw.getrenamedfn(repo, endrev=endrev)
3457 getrenamed = templatekw.getrenamedfn(repo, endrev=endrev)
3457
3458
3458 ui.pager('log')
3459 ui.pager('log')
3459 displayer = logcmdutil.changesetdisplayer(ui, repo, opts, differ,
3460 displayer = logcmdutil.changesetdisplayer(ui, repo, opts, differ,
3460 buffered=True)
3461 buffered=True)
3461 if opts.get('graph'):
3462 if opts.get('graph'):
3462 displayfn = logcmdutil.displaygraphrevs
3463 displayfn = logcmdutil.displaygraphrevs
3463 else:
3464 else:
3464 displayfn = logcmdutil.displayrevs
3465 displayfn = logcmdutil.displayrevs
3465 displayfn(ui, repo, revs, displayer, getrenamed)
3466 displayfn(ui, repo, revs, displayer, getrenamed)
3466
3467
@command('manifest',
    [('r', 'rev', '', _('revision to display'), _('REV')),
     ('', 'all', False, _("list files from all revisions"))]
    + formatteropts,
    _('[-r REV]'), cmdtype=readonly)
def manifest(ui, repo, node=None, rev=None, **opts):
    """output the current or given revision of the project manifest

    Print a list of version controlled files for the given revision.
    If no revision is given, the first parent of the working directory
    is used, or the null revision if no revision is checked out.

    With -v, print file permissions, symlink and executable bits.
    With --debug, print file revision hashes.

    If option --all is specified, the list of all files from all revisions
    is printed. This includes deleted and renamed files.

    Returns 0 on success.
    """
    opts = pycompat.byteskwargs(opts)
    fm = ui.formatter('manifest', opts)

    if opts.get('all'):
        if rev or node:
            raise error.Abort(_("can't specify a revision with --all"))

        res = []
        prefix = "data/"
        suffix = ".i"
        plen = len(prefix)
        slen = len(suffix)
        with repo.lock():
            for fn, b, size in repo.store.datafiles():
                if size != 0 and fn[-slen:] == suffix and fn[:plen] == prefix:
                    res.append(fn[plen:-slen])
        ui.pager('manifest')
        for f in res:
            fm.startitem()
            fm.write("path", '%s\n', f)
        fm.end()
        return

    if rev and node:
        raise error.Abort(_("please specify just one revision"))

    if not node:
        node = rev

    char = {'l': '@', 'x': '*', '': '', 't': 'd'}
    mode = {'l': '644', 'x': '755', '': '644', 't': '755'}
    if node:
        repo = scmutil.unhidehashlikerevs(repo, [node], 'nowarn')
    ctx = scmutil.revsingle(repo, node)
    mf = ctx.manifest()
    ui.pager('manifest')
    for f in ctx:
        fm.startitem()
        fl = ctx[f].flags()
        fm.condwrite(ui.debugflag, 'hash', '%s ', hex(mf[f]))
        fm.condwrite(ui.verbose, 'mode type', '%s %1s ', mode[fl], char[fl])
        fm.write('path', '%s\n', f)
    fm.end()

@command('^merge',
    [('f', 'force', None,
      _('force a merge including outstanding changes (DEPRECATED)')),
     ('r', 'rev', '', _('revision to merge'), _('REV')),
     ('P', 'preview', None,
      _('review revisions to merge (no merge is performed)')),
     ('', 'abort', None, _('abort the ongoing merge')),
    ] + mergetoolopts,
    _('[-P] [[-r] REV]'))
def merge(ui, repo, node=None, **opts):
    """merge another revision into working directory

    The current working directory is updated with all changes made in
    the requested revision since the last common predecessor revision.

    Files that changed between either parent are marked as changed for
    the next commit and a commit must be performed before any further
    updates to the repository are allowed. The next commit will have
    two parents.

    ``--tool`` can be used to specify the merge tool used for file
    merges. It overrides the HGMERGE environment variable and your
    configuration files. See :hg:`help merge-tools` for options.

    If no revision is specified, the working directory's parent is a
    head revision, and the current branch contains exactly one other
    head, the other head is merged with by default. Otherwise, an
    explicit revision with which to merge must be provided.

    See :hg:`help resolve` for information on handling file conflicts.

    To undo an uncommitted merge, use :hg:`merge --abort` which
    will check out a clean copy of the original merge parent, losing
    all changes.

    Returns 0 on success, 1 if there are unresolved files.
    """

    opts = pycompat.byteskwargs(opts)
    abort = opts.get('abort')
    if abort and repo.dirstate.p2() == nullid:
        cmdutil.wrongtooltocontinue(repo, _('merge'))
    if abort:
        if node:
            raise error.Abort(_("cannot specify a node with --abort"))
        if opts.get('rev'):
            raise error.Abort(_("cannot specify both --rev and --abort"))
        if opts.get('preview'):
            raise error.Abort(_("cannot specify --preview with --abort"))
    if opts.get('rev') and node:
        raise error.Abort(_("please specify just one revision"))
    if not node:
        node = opts.get('rev')

    if node:
        node = scmutil.revsingle(repo, node).node()

    if not node and not abort:
        node = repo[destutil.destmerge(repo)].node()

    if opts.get('preview'):
        # find nodes that are ancestors of p2 but not of p1
        p1 = repo.lookup('.')
        p2 = repo.lookup(node)
        nodes = repo.changelog.findmissing(common=[p1], heads=[p2])

        displayer = logcmdutil.changesetdisplayer(ui, repo, opts)
        for node in nodes:
            displayer.show(repo[node])
        displayer.close()
        return 0

    try:
        # ui.forcemerge is an internal variable, do not document
        repo.ui.setconfig('ui', 'forcemerge', opts.get('tool', ''), 'merge')
        force = opts.get('force')
        labels = ['working copy', 'merge rev']
        return hg.merge(repo, node, force=force, mergeforce=force,
                        labels=labels, abort=abort)
    finally:
        ui.setconfig('ui', 'forcemerge', '', 'merge')

@command('outgoing|out',
    [('f', 'force', None, _('run even when the destination is unrelated')),
     ('r', 'rev', [],
      _('a changeset intended to be included in the destination'), _('REV')),
     ('n', 'newest-first', None, _('show newest record first')),
     ('B', 'bookmarks', False, _('compare bookmarks')),
     ('b', 'branch', [], _('a specific branch you would like to push'),
      _('BRANCH')),
    ] + logopts + remoteopts + subrepoopts,
    _('[-M] [-p] [-n] [-f] [-r REV]... [DEST]'))
def outgoing(ui, repo, dest=None, **opts):
    """show changesets not found in the destination

    Show changesets not found in the specified destination repository
    or the default push location. These are the changesets that would
    be pushed if a push was requested.

    See pull for details of valid destination formats.

    .. container:: verbose

      With -B/--bookmarks, the result of bookmark comparison between
      local and remote repositories is displayed. With -v/--verbose,
      status is also displayed for each bookmark like below::

        BM1               01234567890a added
        BM2                            deleted
        BM3               234567890abc advanced
        BM4               34567890abcd diverged
        BM5               4567890abcde changed

      The action taken when pushing depends on the
      status of each bookmark:

      :``added``: push with ``-B`` will create it
      :``deleted``: push with ``-B`` will delete it
      :``advanced``: push will update it
      :``diverged``: push with ``-B`` will update it
      :``changed``: push with ``-B`` will update it

      From the point of view of pushing behavior, bookmarks
      existing only in the remote repository are treated as
      ``deleted``, even if they were in fact added remotely.

    Returns 0 if there are outgoing changes, 1 otherwise.
    """
    opts = pycompat.byteskwargs(opts)
    if opts.get('graph'):
        logcmdutil.checkunsupportedgraphflags([], opts)
        o, other = hg._outgoing(ui, repo, dest, opts)
        if not o:
            cmdutil.outgoinghooks(ui, repo, other, opts, o)
            return

        revdag = logcmdutil.graphrevs(repo, o, opts)
        ui.pager('outgoing')
        displayer = logcmdutil.changesetdisplayer(ui, repo, opts, buffered=True)
        logcmdutil.displaygraph(ui, repo, revdag, displayer,
                                graphmod.asciiedges)
        cmdutil.outgoinghooks(ui, repo, other, opts, o)
        return 0

    if opts.get('bookmarks'):
        dest = ui.expandpath(dest or 'default-push', dest or 'default')
        dest, branches = hg.parseurl(dest, opts.get('branch'))
        other = hg.peer(repo, opts, dest)
        if 'bookmarks' not in other.listkeys('namespaces'):
            ui.warn(_("remote doesn't support bookmarks\n"))
            return 0
        ui.status(_('comparing with %s\n') % util.hidepassword(dest))
        ui.pager('outgoing')
        return bookmarks.outgoing(ui, repo, other)

    repo._subtoppath = ui.expandpath(dest or 'default-push', dest or 'default')
    try:
        return hg.outgoing(ui, repo, dest, opts)
    finally:
        del repo._subtoppath

@command('parents',
    [('r', 'rev', '', _('show parents of the specified revision'), _('REV')),
    ] + templateopts,
    _('[-r REV] [FILE]'),
    inferrepo=True)
def parents(ui, repo, file_=None, **opts):
    """show the parents of the working directory or revision (DEPRECATED)

    Print the working directory's parent revisions. If a revision is
    given via -r/--rev, the parent of that revision will be printed.
    If a file argument is given, the revision in which the file was
    last changed (before the working directory revision or the
    argument to --rev if given) is printed.

    This command is equivalent to::

        hg log -r "p1()+p2()" or
        hg log -r "p1(REV)+p2(REV)" or
        hg log -r "max(::p1() and file(FILE))+max(::p2() and file(FILE))" or
        hg log -r "max(::p1(REV) and file(FILE))+max(::p2(REV) and file(FILE))"

    See :hg:`summary` and :hg:`help revsets` for related information.

    Returns 0 on success.
    """

    opts = pycompat.byteskwargs(opts)
    rev = opts.get('rev')
    if rev:
        repo = scmutil.unhidehashlikerevs(repo, [rev], 'nowarn')
    ctx = scmutil.revsingle(repo, rev, None)

    if file_:
        m = scmutil.match(ctx, (file_,), opts)
        if m.anypats() or len(m.files()) != 1:
            raise error.Abort(_('can only specify an explicit filename'))
        file_ = m.files()[0]
        filenodes = []
        for cp in ctx.parents():
            if not cp:
                continue
            try:
                filenodes.append(cp.filenode(file_))
            except error.LookupError:
                pass
        if not filenodes:
            raise error.Abort(_("'%s' not found in manifest!") % file_)
        p = []
        for fn in filenodes:
            fctx = repo.filectx(file_, fileid=fn)
            p.append(fctx.node())
    else:
        p = [cp.node() for cp in ctx.parents()]

    displayer = logcmdutil.changesetdisplayer(ui, repo, opts)
    for n in p:
        if n != nullid:
            displayer.show(repo[n])
    displayer.close()

3753 cmdtype=readonly)
3754 cmdtype=readonly)
3754 def paths(ui, repo, search=None, **opts):
3755 def paths(ui, repo, search=None, **opts):
3755 """show aliases for remote repositories
3756 """show aliases for remote repositories
3756
3757
3757 Show definition of symbolic path name NAME. If no name is given,
3758 Show definition of symbolic path name NAME. If no name is given,
3758 show definition of all available names.
3759 show definition of all available names.
3759
3760
3760 Option -q/--quiet suppresses all output when searching for NAME
3761 Option -q/--quiet suppresses all output when searching for NAME
3761 and shows only the path names when listing all definitions.
3762 and shows only the path names when listing all definitions.
3762
3763
3763 Path names are defined in the [paths] section of your
3764 Path names are defined in the [paths] section of your
3764 configuration file and in ``/etc/mercurial/hgrc``. If run inside a
3765 configuration file and in ``/etc/mercurial/hgrc``. If run inside a
3765 repository, ``.hg/hgrc`` is used, too.
3766 repository, ``.hg/hgrc`` is used, too.
3766
3767
3767 The path names ``default`` and ``default-push`` have a special
3768 The path names ``default`` and ``default-push`` have a special
3768 meaning. When performing a push or pull operation, they are used
3769 meaning. When performing a push or pull operation, they are used
3769 as fallbacks if no location is specified on the command-line.
3770 as fallbacks if no location is specified on the command-line.
3770 When ``default-push`` is set, it will be used for push and
3771 When ``default-push`` is set, it will be used for push and
3771 ``default`` will be used for pull; otherwise ``default`` is used
3772 ``default`` will be used for pull; otherwise ``default`` is used
3772 as the fallback for both. When cloning a repository, the clone
3773 as the fallback for both. When cloning a repository, the clone
3773 source is written as ``default`` in ``.hg/hgrc``.
3774 source is written as ``default`` in ``.hg/hgrc``.
3774
3775
3775 .. note::
3776 .. note::
3776
3777
3777 ``default`` and ``default-push`` apply to all inbound (e.g.
3778 ``default`` and ``default-push`` apply to all inbound (e.g.
3778 :hg:`incoming`) and outbound (e.g. :hg:`outgoing`, :hg:`email`
3779 :hg:`incoming`) and outbound (e.g. :hg:`outgoing`, :hg:`email`
3779 and :hg:`bundle`) operations.
3780 and :hg:`bundle`) operations.
3780
3781
3781 See :hg:`help urls` for more information.
3782 See :hg:`help urls` for more information.
3782
3783
3783 Returns 0 on success.
3784 Returns 0 on success.
3784 """
3785 """
3785
3786
3786 opts = pycompat.byteskwargs(opts)
3787 opts = pycompat.byteskwargs(opts)
3787 ui.pager('paths')
3788 ui.pager('paths')
3788 if search:
3789 if search:
3789 pathitems = [(name, path) for name, path in ui.paths.iteritems()
3790 pathitems = [(name, path) for name, path in ui.paths.iteritems()
3790 if name == search]
3791 if name == search]
3791 else:
3792 else:
3792 pathitems = sorted(ui.paths.iteritems())
3793 pathitems = sorted(ui.paths.iteritems())
3793
3794
3794 fm = ui.formatter('paths', opts)
3795 fm = ui.formatter('paths', opts)
3795 if fm.isplain():
3796 if fm.isplain():
3796 hidepassword = util.hidepassword
3797 hidepassword = util.hidepassword
3797 else:
3798 else:
3798 hidepassword = bytes
3799 hidepassword = bytes
3799 if ui.quiet:
3800 if ui.quiet:
3800 namefmt = '%s\n'
3801 namefmt = '%s\n'
3801 else:
3802 else:
3802 namefmt = '%s = '
3803 namefmt = '%s = '
3803 showsubopts = not search and not ui.quiet
3804 showsubopts = not search and not ui.quiet
3804
3805
3805 for name, path in pathitems:
3806 for name, path in pathitems:
3806 fm.startitem()
3807 fm.startitem()
3807 fm.condwrite(not search, 'name', namefmt, name)
3808 fm.condwrite(not search, 'name', namefmt, name)
3808 fm.condwrite(not ui.quiet, 'url', '%s\n', hidepassword(path.rawloc))
3809 fm.condwrite(not ui.quiet, 'url', '%s\n', hidepassword(path.rawloc))
3809 for subopt, value in sorted(path.suboptions.items()):
3810 for subopt, value in sorted(path.suboptions.items()):
3810 assert subopt not in ('name', 'url')
3811 assert subopt not in ('name', 'url')
3811 if showsubopts:
3812 if showsubopts:
3812 fm.plain('%s:%s = ' % (name, subopt))
3813 fm.plain('%s:%s = ' % (name, subopt))
3813 fm.condwrite(showsubopts, subopt, '%s\n', value)
3814 fm.condwrite(showsubopts, subopt, '%s\n', value)
3814
3815
3815 fm.end()
3816 fm.end()
3816
3817
3817 if search and not pathitems:
3818 if search and not pathitems:
3818 if not ui.quiet:
3819 if not ui.quiet:
3819 ui.warn(_("not found!\n"))
3820 ui.warn(_("not found!\n"))
3820 return 1
3821 return 1
3821 else:
3822 else:
3822 return 0
3823 return 0
3823
3824
3824 @command('phase',
3825 @command('phase',
3825 [('p', 'public', False, _('set changeset phase to public')),
3826 [('p', 'public', False, _('set changeset phase to public')),
3826 ('d', 'draft', False, _('set changeset phase to draft')),
3827 ('d', 'draft', False, _('set changeset phase to draft')),
3827 ('s', 'secret', False, _('set changeset phase to secret')),
3828 ('s', 'secret', False, _('set changeset phase to secret')),
3828 ('f', 'force', False, _('allow to move boundary backward')),
3829 ('f', 'force', False, _('allow to move boundary backward')),
3829 ('r', 'rev', [], _('target revision'), _('REV')),
3830 ('r', 'rev', [], _('target revision'), _('REV')),
3830 ],
3831 ],
3831 _('[-p|-d|-s] [-f] [-r] [REV...]'))
3832 _('[-p|-d|-s] [-f] [-r] [REV...]'))
3832 def phase(ui, repo, *revs, **opts):
3833 def phase(ui, repo, *revs, **opts):
3833 """set or show the current phase name
3834 """set or show the current phase name
3834
3835
3835 With no argument, show the phase name of the current revision(s).
3836 With no argument, show the phase name of the current revision(s).
3836
3837
3837 With one of -p/--public, -d/--draft or -s/--secret, change the
3838 With one of -p/--public, -d/--draft or -s/--secret, change the
3838 phase value of the specified revisions.
3839 phase value of the specified revisions.
3839
3840
3840 Unless -f/--force is specified, :hg:`phase` won't move changesets from a
3841 Unless -f/--force is specified, :hg:`phase` won't move changesets from a
3841 lower phase to a higher phase. Phases are ordered as follows::
3842 lower phase to a higher phase. Phases are ordered as follows::
3842
3843
3843 public < draft < secret
3844 public < draft < secret
3844
3845
3845 Returns 0 on success, 1 if some phases could not be changed.
3846 Returns 0 on success, 1 if some phases could not be changed.
3846
3847
3847 (For more information about the phases concept, see :hg:`help phases`.)
    (For more information about the phases concept, see :hg:`help phases`.)
    """
    opts = pycompat.byteskwargs(opts)
    # search for a unique phase argument
    targetphase = None
    for idx, name in enumerate(phases.phasenames):
        if opts[name]:
            if targetphase is not None:
                raise error.Abort(_('only one phase can be specified'))
            targetphase = idx

    # look for specified revision
    revs = list(revs)
    revs.extend(opts['rev'])
    if not revs:
        # display both parents as the second parent phase can influence
        # the phase of a merge commit
        revs = [c.rev() for c in repo[None].parents()]

    revs = scmutil.revrange(repo, revs)

    ret = 0
    if targetphase is None:
        # display
        for r in revs:
            ctx = repo[r]
            ui.write('%i: %s\n' % (ctx.rev(), ctx.phasestr()))
    else:
        with repo.lock(), repo.transaction("phase") as tr:
            # set phase
            if not revs:
                raise error.Abort(_('empty revision set'))
            nodes = [repo[r].node() for r in revs]
            # moving revisions from public to draft may hide them
            # We have to check the result on an unfiltered repository
            unfi = repo.unfiltered()
            getphase = unfi._phasecache.phase
            olddata = [getphase(unfi, r) for r in unfi]
            phases.advanceboundary(repo, tr, targetphase, nodes)
            if opts['force']:
                phases.retractboundary(repo, tr, targetphase, nodes)
        getphase = unfi._phasecache.phase
        newdata = [getphase(unfi, r) for r in unfi]
        changes = sum(newdata[r] != olddata[r] for r in unfi)
        cl = unfi.changelog
        rejected = [n for n in nodes
                    if newdata[cl.rev(n)] < targetphase]
        if rejected:
            ui.warn(_('cannot move %i changesets to a higher '
                      'phase, use --force\n') % len(rejected))
            ret = 1
        if changes:
            msg = _('phase changed for %i changesets\n') % changes
            if ret:
                ui.status(msg)
            else:
                ui.note(msg)
        else:
            ui.warn(_('no phases changed\n'))
    return ret

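The bookkeeping at the end of `phase()` above — comparing per-revision phase snapshots to count how many revisions actually moved, and which requested nodes ended up below the target — can be sketched in isolation. This is an illustrative standalone helper, not part of Mercurial's API; the function name and sample data are mine.

```python
# Illustrative sketch of the olddata/newdata comparison in phase() above.
# Phase values follow Mercurial's ordering: 0 = public, 1 = draft, 2 = secret.
def summarize_phase_move(olddata, newdata, moved_revs, targetphase):
    """Return (changes, rejected) for a phase move.

    changes:  how many revisions ended up in a different phase
    rejected: revisions we tried to move whose final phase is still
              lower (more public) than the requested target
    """
    changes = sum(new != old for old, new in zip(olddata, newdata))
    rejected = [r for r in moved_revs if newdata[r] < targetphase]
    return changes, rejected

# Trying to move revs 0 and 2 to secret (2); rev 0 stayed public.
changes, rejected = summarize_phase_move(
    olddata=[0, 1, 1], newdata=[0, 1, 2], moved_revs=[0, 2], targetphase=2)
```

As in `phase()` itself, a non-empty `rejected` list is what triggers the "use --force" warning, while `changes` drives the status message.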
def postincoming(ui, repo, modheads, optupdate, checkout, brev):
    """Run after a changegroup has been added via pull/unbundle

    It takes the following arguments:

    :modheads: change of heads by pull/unbundle
    :optupdate: updating working directory is needed or not
    :checkout: update destination revision (or None to default destination)
    :brev: a name, which might be a bookmark to be activated after updating
    """
    if modheads == 0:
        return
    if optupdate:
        try:
            return hg.updatetotally(ui, repo, checkout, brev)
        except error.UpdateAbort as inst:
            msg = _("not updating: %s") % stringutil.forcebytestr(inst)
            hint = inst.hint
            raise error.UpdateAbort(msg, hint=hint)
    if modheads > 1:
        currentbranchheads = len(repo.branchheads())
        if currentbranchheads == modheads:
            ui.status(_("(run 'hg heads' to see heads, 'hg merge' to merge)\n"))
        elif currentbranchheads > 1:
            ui.status(_("(run 'hg heads .' to see heads, 'hg merge' to "
                        "merge)\n"))
        else:
            ui.status(_("(run 'hg heads' to see heads)\n"))
    elif not ui.configbool('commands', 'update.requiredest'):
        ui.status(_("(run 'hg update' to get a working copy)\n"))

@command('^pull',
    [('u', 'update', None,
     _('update to new branch head if new descendants were pulled')),
    ('f', 'force', None, _('run even when remote repository is unrelated')),
    ('r', 'rev', [], _('a remote changeset intended to be added'), _('REV')),
    ('B', 'bookmark', [], _("bookmark to pull"), _('BOOKMARK')),
    ('b', 'branch', [], _('a specific branch you would like to pull'),
     _('BRANCH')),
    ] + remoteopts,
    _('[-u] [-f] [-r REV]... [-e CMD] [--remotecmd CMD] [SOURCE]'))
def pull(ui, repo, source="default", **opts):
    """pull changes from the specified source

    Pull changes from a remote repository to a local one.

    This finds all changes from the repository at the specified path
    or URL and adds them to a local repository (the current one unless
    -R is specified). By default, this does not update the copy of the
    project in the working directory.

    Use :hg:`incoming` if you want to see what would have been added
    by a pull at the time you issued this command. If you then decide
    to add those changes to the repository, you should use :hg:`pull
    -r X` where ``X`` is the last changeset listed by :hg:`incoming`.

    If SOURCE is omitted, the 'default' path will be used.
    See :hg:`help urls` for more information.

    Specifying bookmark as ``.`` is equivalent to specifying the active
    bookmark's name.

    Returns 0 on success, 1 if an update had unresolved files.
    """

    opts = pycompat.byteskwargs(opts)
    if ui.configbool('commands', 'update.requiredest') and opts.get('update'):
        msg = _('update destination required by configuration')
        hint = _('use hg pull followed by hg update DEST')
        raise error.Abort(msg, hint=hint)

    source, branches = hg.parseurl(ui.expandpath(source), opts.get('branch'))
    ui.status(_('pulling from %s\n') % util.hidepassword(source))
    other = hg.peer(repo, opts, source)
    try:
        revs, checkout = hg.addbranchrevs(repo, other, branches,
                                          opts.get('rev'))

        pullopargs = {}
        if opts.get('bookmark'):
            if not revs:
                revs = []
            # The list of bookmarks used here is not the one used to actually
            # update the bookmark name. This can result in the revision pulled
            # not ending up with the name of the bookmark because of a race
            # condition on the server. (See issue 4689 for details)
            remotebookmarks = other.listkeys('bookmarks')
            remotebookmarks = bookmarks.unhexlifybookmarks(remotebookmarks)
            pullopargs['remotebookmarks'] = remotebookmarks
            for b in opts['bookmark']:
                b = repo._bookmarks.expandname(b)
                if b not in remotebookmarks:
                    raise error.Abort(_('remote bookmark %s not found!') % b)
                revs.append(hex(remotebookmarks[b]))

        if revs:
            try:
                # When 'rev' is a bookmark name, we cannot guarantee that it
                # will be updated with that name because of a race condition
                # server side. (See issue 4689 for details)
                oldrevs = revs
                revs = []  # actually, nodes
                for r in oldrevs:
                    node = other.lookup(r)
                    revs.append(node)
                    if r == checkout:
                        checkout = node
            except error.CapabilityError:
                err = _("other repository doesn't support revision lookup, "
                        "so a rev cannot be specified.")
                raise error.Abort(err)

        wlock = util.nullcontextmanager()
        if opts.get('update'):
            wlock = repo.wlock()
        with wlock:
            pullopargs.update(opts.get('opargs', {}))
            modheads = exchange.pull(repo, other, heads=revs,
                                     force=opts.get('force'),
                                     bookmarks=opts.get('bookmark', ()),
                                     opargs=pullopargs).cgresult

            # brev is a name, which might be a bookmark to be activated at
            # the end of the update. In other words, it is an explicit
            # destination of the update
            brev = None

            if checkout:
                checkout = "%d" % repo.changelog.rev(checkout)

            # order below depends on implementation of
            # hg.addbranchrevs(). opts['bookmark'] is ignored,
            # because 'checkout' is determined without it.
            if opts.get('rev'):
                brev = opts['rev'][0]
            elif opts.get('branch'):
                brev = opts['branch'][0]
            else:
                brev = branches[0]
        repo._subtoppath = source
        try:
            ret = postincoming(ui, repo, modheads, opts.get('update'),
                               checkout, brev)
        finally:
            del repo._subtoppath
    finally:
        other.close()
    return ret

@command('^push',
    [('f', 'force', None, _('force push')),
    ('r', 'rev', [],
     _('a changeset intended to be included in the destination'),
     _('REV')),
    ('B', 'bookmark', [], _("bookmark to push"), _('BOOKMARK')),
    ('b', 'branch', [],
     _('a specific branch you would like to push'), _('BRANCH')),
    ('', 'new-branch', False, _('allow pushing a new branch')),
    ('', 'pushvars', [], _('variables that can be sent to server (ADVANCED)')),
    ] + remoteopts,
    _('[-f] [-r REV]... [-e CMD] [--remotecmd CMD] [DEST]'))
def push(ui, repo, dest=None, **opts):
    """push changes to the specified destination

    Push changesets from the local repository to the specified
    destination.

    This operation is symmetrical to pull: it is identical to a pull
    in the destination repository from the current one.

    By default, push will not allow creation of new heads at the
    destination, since multiple heads would make it unclear which head
    to use. In this situation, it is recommended to pull and merge
    before pushing.

    Use --new-branch if you want to allow push to create a new named
    branch that is not present at the destination. This allows you to
    only create a new branch without forcing other changes.

    .. note::

       Extra care should be taken with the -f/--force option,
       which will push all new heads on all branches, an action which will
       almost always cause confusion for collaborators.

    If -r/--rev is used, the specified revision and all its ancestors
    will be pushed to the remote repository.

    If -B/--bookmark is used, the specified bookmarked revision, its
    ancestors, and the bookmark will be pushed to the remote
    repository. Specifying ``.`` is equivalent to specifying the active
    bookmark's name.

    Please see :hg:`help urls` for important details about ``ssh://``
    URLs. If DESTINATION is omitted, a default path will be used.

    .. container:: verbose

        The --pushvars option sends strings to the server that become
        environment variables prepended with ``HG_USERVAR_``. For example,
        ``--pushvars ENABLE_FEATURE=true`` provides the server-side hooks with
        ``HG_USERVAR_ENABLE_FEATURE=true`` as part of their environment.

        pushvars can provide for user-overridable hooks as well as set debug
        levels. One example is having a hook that blocks commits containing
        conflict markers, but enables the user to override the hook if the file
        is using conflict markers for testing purposes or the file format has
        strings that look like conflict markers.

        By default, servers will ignore `--pushvars`. To enable it, add the
        following to your configuration file::

            [push]
            pushvars.server = true

    Returns 0 if push was successful, 1 if nothing to push.
    """

    opts = pycompat.byteskwargs(opts)
    if opts.get('bookmark'):
        ui.setconfig('bookmarks', 'pushing', opts['bookmark'], 'push')
        for b in opts['bookmark']:
            # translate -B options to -r so changesets get pushed
            b = repo._bookmarks.expandname(b)
            if b in repo._bookmarks:
                opts.setdefault('rev', []).append(b)
            else:
                # if we try to push a deleted bookmark, translate it to null
                # this lets simultaneous -r, -b options continue working
                opts.setdefault('rev', []).append("null")

    path = ui.paths.getpath(dest, default=('default-push', 'default'))
    if not path:
        raise error.Abort(_('default repository not configured!'),
                          hint=_("see 'hg help config.paths'"))
    dest = path.pushloc or path.loc
    branches = (path.branch, opts.get('branch') or [])
    ui.status(_('pushing to %s\n') % util.hidepassword(dest))
    revs, checkout = hg.addbranchrevs(repo, repo, branches, opts.get('rev'))
    other = hg.peer(repo, opts, dest)

    if revs:
        revs = [repo.lookup(r) for r in scmutil.revrange(repo, revs)]
        if not revs:
            raise error.Abort(_("specified revisions evaluate to an empty set"),
                              hint=_("use different revision arguments"))
    elif path.pushrev:
        # It doesn't make any sense to specify ancestor revisions. So limit
        # to DAG heads to make discovery simpler.
        expr = revsetlang.formatspec('heads(%r)', path.pushrev)
        revs = scmutil.revrange(repo, [expr])
        revs = [repo[rev].node() for rev in revs]
        if not revs:
            raise error.Abort(_('default push revset for path evaluates to an '
                                'empty set'))

    repo._subtoppath = dest
    try:
        # push subrepos depth-first for coherent ordering
        c = repo['.']
        subs = c.substate  # only repos that are committed
        for s in sorted(subs):
            result = c.sub(s).push(opts)
            if result == 0:
                return not result
    finally:
        del repo._subtoppath

    opargs = dict(opts.get('opargs', {}))  # copy opargs since we may mutate it
    opargs.setdefault('pushvars', []).extend(opts.get('pushvars', []))

    pushop = exchange.push(repo, other, opts.get('force'), revs=revs,
                           newbranch=opts.get('new_branch'),
                           bookmarks=opts.get('bookmark', ()),
                           opargs=opargs)

    result = not pushop.cgresult

    if pushop.bkresult is not None:
        if pushop.bkresult == 2:
            result = 2
        elif not result and pushop.bkresult:
            result = 2

    return result

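The exit-code logic at the end of `push()` above combines the changegroup result with the bookmark result. A minimal standalone sketch of that combination (the helper name is mine, not Mercurial's; `cgresult` and `bkresult` mirror the attributes of the push operation object):

```python
# Illustrative sketch of push()'s return-value logic above.
# cgresult: truthy when changesets were pushed; bkresult: None when no
# bookmark operation ran, non-zero/2 when bookmark pushing failed.
def push_exit_code(cgresult, bkresult):
    result = not cgresult          # 0 on success, 1 if nothing to push
    if bkresult is not None:
        if bkresult == 2:
            result = 2             # explicit bookmark failure
        elif not result and bkresult:
            result = 2             # changesets pushed, bookmarks failed
    return result
```

So bookmark failures are only surfaced as exit code 2 when they would otherwise be masked by a successful changegroup push.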
@command('recover', [])
def recover(ui, repo):
    """roll back an interrupted transaction

    Recover from an interrupted commit or pull.

    This command tries to fix the repository status after an
    interrupted operation. It should only be necessary when Mercurial
    suggests it.

    Returns 0 if successful, 1 if nothing to recover or verify fails.
    """
    if repo.recover():
        return hg.verify(repo)
    return 1

@command('^remove|rm',
    [('A', 'after', None, _('record delete for missing files')),
    ('f', 'force', None,
     _('forget added files, delete modified files')),
    ] + subrepoopts + walkopts + dryrunopts,
    _('[OPTION]... FILE...'),
    inferrepo=True)
def remove(ui, repo, *pats, **opts):
    """remove the specified files on the next commit

    Schedule the indicated files for removal from the current branch.

    This command schedules the files to be removed at the next commit.
    To undo a remove before that, see :hg:`revert`. To undo added
    files, see :hg:`forget`.

    .. container:: verbose

      -A/--after can be used to remove only files that have already
      been deleted, -f/--force can be used to force deletion, and -Af
      can be used to remove files from the next revision without
      deleting them from the working directory.

      The following table details the behavior of remove for different
      file states (columns) and option combinations (rows). The file
      states are Added [A], Clean [C], Modified [M] and Missing [!]
      (as reported by :hg:`status`). The actions are Warn, Remove
      (from branch) and Delete (from disk):

      ========= == == == ==
      opt/state A  C  M  !
      ========= == == == ==
      none      W  RD W  R
      -f        R  RD RD R
      -A        W  W  W  R
      -Af       R  R  R  R
      ========= == == == ==

      .. note::

         :hg:`remove` never deletes files in Added [A] state from the
         working directory, not even if ``--force`` is specified.

    Returns 0 on success, 1 if any warnings encountered.
    """

    opts = pycompat.byteskwargs(opts)
    after, force = opts.get('after'), opts.get('force')
    dryrun = opts.get('dry_run')
    if not pats and not after:
        raise error.Abort(_('no files specified'))

    m = scmutil.match(repo[None], pats, opts)
    subrepos = opts.get('subrepos')
    return cmdutil.remove(ui, repo, m, "", after, force, subrepos,
                          dryrun=dryrun)

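The option/state table in the `remove` docstring above can be read as a simple two-level lookup. The sketch below is illustrative only — the dict encoding and function name are mine, not Mercurial's — but the entries transcribe the table verbatim:

```python
# The remove behavior table from the docstring above, as a lookup.
# Rows: option combination; columns: file state as shown by `hg status`
# (A=added, C=clean, M=modified, !=missing). Actions: W=warn,
# R=remove from branch, D=delete from disk.
REMOVE_ACTIONS = {
    'none': {'A': 'W', 'C': 'RD', 'M': 'W',  '!': 'R'},
    '-f':   {'A': 'R', 'C': 'RD', 'M': 'RD', '!': 'R'},
    '-A':   {'A': 'W', 'C': 'W',  'M': 'W',  '!': 'R'},
    '-Af':  {'A': 'R', 'C': 'R',  'M': 'R',  '!': 'R'},
}

def remove_action(opt, state):
    """Return the action string for an option row and file-state column."""
    return REMOVE_ACTIONS[opt][state]
```

For instance, `remove_action('-f', 'M')` yields `'RD'`: with --force, a modified file is both removed from the branch and deleted from disk.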
4270 @command('rename|move|mv',
4271 @command('rename|move|mv',
4271 [('A', 'after', None, _('record a rename that has already occurred')),
4272 [('A', 'after', None, _('record a rename that has already occurred')),
4272 ('f', 'force', None, _('forcibly copy over an existing managed file')),
4273 ('f', 'force', None, _('forcibly copy over an existing managed file')),
4273 ] + walkopts + dryrunopts,
4274 ] + walkopts + dryrunopts,
4274 _('[OPTION]... SOURCE... DEST'))
4275 _('[OPTION]... SOURCE... DEST'))
4275 def rename(ui, repo, *pats, **opts):
4276 def rename(ui, repo, *pats, **opts):
4276 """rename files; equivalent of copy + remove
4277 """rename files; equivalent of copy + remove
4277
4278
4278 Mark dest as copies of sources; mark sources for deletion. If dest
4279 Mark dest as copies of sources; mark sources for deletion. If dest
4279 is a directory, copies are put in that directory. If dest is a
4280 is a directory, copies are put in that directory. If dest is a
4280 file, there can only be one source.
4281 file, there can only be one source.
4281
4282
4282 By default, this command copies the contents of files as they
4283 By default, this command copies the contents of files as they
4283 exist in the working directory. If invoked with -A/--after, the
4284 exist in the working directory. If invoked with -A/--after, the
4284 operation is recorded, but no copying is performed.
4285 operation is recorded, but no copying is performed.
4285
4286
4286 This command takes effect at the next commit. To undo a rename
4287 This command takes effect at the next commit. To undo a rename
4287 before that, see :hg:`revert`.
4288 before that, see :hg:`revert`.
4288
4289
4289 Returns 0 on success, 1 if errors are encountered.
4290 Returns 0 on success, 1 if errors are encountered.
4290 """
4291 """
4291 opts = pycompat.byteskwargs(opts)
4292 opts = pycompat.byteskwargs(opts)
4292 with repo.wlock(False):
4293 with repo.wlock(False):
4293 return cmdutil.copy(ui, repo, pats, opts, rename=True)
4294 return cmdutil.copy(ui, repo, pats, opts, rename=True)
4294
4295
@command('resolve',
    [('a', 'all', None, _('select all unresolved files')),
    ('l', 'list', None, _('list state of files needing merge')),
    ('m', 'mark', None, _('mark files as resolved')),
    ('u', 'unmark', None, _('mark files as unresolved')),
    ('n', 'no-status', None, _('hide status prefix'))]
    + mergetoolopts + walkopts + formatteropts,
    _('[OPTION]... [FILE]...'),
    inferrepo=True)
def resolve(ui, repo, *pats, **opts):
    """redo merges or set/view the merge status of files

    Merges with unresolved conflicts are often the result of
    non-interactive merging using the ``internal:merge`` configuration
    setting, or a command-line merge tool like ``diff3``. The resolve
    command is used to manage the files involved in a merge, after
    :hg:`merge` has been run, and before :hg:`commit` is run (i.e. the
    working directory must have two parents). See :hg:`help
    merge-tools` for information on configuring merge tools.

    The resolve command can be used in the following ways:

    - :hg:`resolve [--tool TOOL] FILE...`: attempt to re-merge the specified
      files, discarding any previous merge attempts. Re-merging is not
      performed for files already marked as resolved. Use ``--all/-a``
      to select all unresolved files. ``--tool`` can be used to specify
      the merge tool used for the given files. It overrides the HGMERGE
      environment variable and your configuration files. Previous file
      contents are saved with a ``.orig`` suffix.

    - :hg:`resolve -m [FILE]`: mark a file as having been resolved
      (e.g. after having manually fixed-up the files). The default is
      to mark all unresolved files.

    - :hg:`resolve -u [FILE]...`: mark a file as unresolved. The
      default is to mark all resolved files.

    - :hg:`resolve -l`: list files which had or still have conflicts.
      In the printed list, ``U`` = unresolved and ``R`` = resolved.
      You can use ``set:unresolved()`` or ``set:resolved()`` to filter
      the list. See :hg:`help filesets` for details.

    .. note::

       Mercurial will not let you commit files with unresolved merge
       conflicts. You must use :hg:`resolve -m ...` before you can
       commit after a conflicting merge.

    Returns 0 on success, 1 if any files fail a resolve attempt.
    """

    opts = pycompat.byteskwargs(opts)
    flaglist = 'all mark unmark list no_status'.split()
    all, mark, unmark, show, nostatus = \
        [opts.get(o) for o in flaglist]

    if (show and (mark or unmark)) or (mark and unmark):
        raise error.Abort(_("too many options specified"))
    if pats and all:
        raise error.Abort(_("can't specify --all and patterns"))
    if not (all or pats or show or mark or unmark):
        raise error.Abort(_('no files or directories specified'),
                          hint=('use --all to re-merge all unresolved files'))

    if show:
        ui.pager('resolve')
        fm = ui.formatter('resolve', opts)
        ms = mergemod.mergestate.read(repo)
        m = scmutil.match(repo[None], pats, opts)

        # Labels and keys based on merge state. Unresolved path conflicts show
        # as 'P'. Resolved path conflicts show as 'R', the same as normal
        # resolved conflicts.
        mergestateinfo = {
            mergemod.MERGE_RECORD_UNRESOLVED: ('resolve.unresolved', 'U'),
            mergemod.MERGE_RECORD_RESOLVED: ('resolve.resolved', 'R'),
            mergemod.MERGE_RECORD_UNRESOLVED_PATH: ('resolve.unresolved', 'P'),
            mergemod.MERGE_RECORD_RESOLVED_PATH: ('resolve.resolved', 'R'),
            mergemod.MERGE_RECORD_DRIVER_RESOLVED: ('resolve.driverresolved',
                                                    'D'),
        }

        for f in ms:
            if not m(f):
                continue

            label, key = mergestateinfo[ms[f]]
            fm.startitem()
            fm.condwrite(not nostatus, 'status', '%s ', key, label=label)
            fm.write('path', '%s\n', f, label=label)
        fm.end()
        return 0

    with repo.wlock():
        ms = mergemod.mergestate.read(repo)

        if not (ms.active() or repo.dirstate.p2() != nullid):
            raise error.Abort(
                _('resolve command not applicable when not merging'))

        wctx = repo[None]

        if (ms.mergedriver
            and ms.mdstate() == mergemod.MERGE_DRIVER_STATE_UNMARKED):
            proceed = mergemod.driverpreprocess(repo, ms, wctx)
            ms.commit()
            # allow mark and unmark to go through
            if not mark and not unmark and not proceed:
                return 1

        m = scmutil.match(wctx, pats, opts)
        ret = 0
        didwork = False
        runconclude = False

        tocomplete = []
        for f in ms:
            if not m(f):
                continue

            didwork = True

            # don't let driver-resolved files be marked, and run the conclude
            # step if asked to resolve
            if ms[f] == mergemod.MERGE_RECORD_DRIVER_RESOLVED:
                exact = m.exact(f)
                if mark:
                    if exact:
                        ui.warn(_('not marking %s as it is driver-resolved\n')
                                % f)
                elif unmark:
                    if exact:
                        ui.warn(_('not unmarking %s as it is driver-resolved\n')
                                % f)
                else:
                    runconclude = True
                continue

            # path conflicts must be resolved manually
            if ms[f] in (mergemod.MERGE_RECORD_UNRESOLVED_PATH,
                         mergemod.MERGE_RECORD_RESOLVED_PATH):
                if mark:
                    ms.mark(f, mergemod.MERGE_RECORD_RESOLVED_PATH)
                elif unmark:
                    ms.mark(f, mergemod.MERGE_RECORD_UNRESOLVED_PATH)
                elif ms[f] == mergemod.MERGE_RECORD_UNRESOLVED_PATH:
                    ui.warn(_('%s: path conflict must be resolved manually\n')
                            % f)
                continue

            if mark:
                ms.mark(f, mergemod.MERGE_RECORD_RESOLVED)
            elif unmark:
                ms.mark(f, mergemod.MERGE_RECORD_UNRESOLVED)
            else:
                # backup pre-resolve (merge uses .orig for its own purposes)
                a = repo.wjoin(f)
                try:
                    util.copyfile(a, a + ".resolve")
                except (IOError, OSError) as inst:
                    if inst.errno != errno.ENOENT:
                        raise

                try:
                    # preresolve file
                    ui.setconfig('ui', 'forcemerge', opts.get('tool', ''),
                                 'resolve')
                    complete, r = ms.preresolve(f, wctx)
                    if not complete:
                        tocomplete.append(f)
                    elif r:
                        ret = 1
                finally:
                    ui.setconfig('ui', 'forcemerge', '', 'resolve')
                    ms.commit()

                # replace filemerge's .orig file with our resolve file, but only
                # for merges that are complete
                if complete:
                    try:
                        util.rename(a + ".resolve",
                                    scmutil.origpath(ui, repo, a))
                    except OSError as inst:
                        if inst.errno != errno.ENOENT:
                            raise

        for f in tocomplete:
            try:
                # resolve file
                ui.setconfig('ui', 'forcemerge', opts.get('tool', ''),
                             'resolve')
                r = ms.resolve(f, wctx)
                if r:
                    ret = 1
            finally:
                ui.setconfig('ui', 'forcemerge', '', 'resolve')
                ms.commit()

            # replace filemerge's .orig file with our resolve file
            a = repo.wjoin(f)
            try:
                util.rename(a + ".resolve", scmutil.origpath(ui, repo, a))
            except OSError as inst:
                if inst.errno != errno.ENOENT:
                    raise

        ms.commit()
        ms.recordactions()

        if not didwork and pats:
            hint = None
            if not any([p for p in pats if p.find(':') >= 0]):
                pats = ['path:%s' % p for p in pats]
                m = scmutil.match(wctx, pats, opts)
                for f in ms:
                    if not m(f):
                        continue
                    flags = ''.join(['-%s ' % o[0:1] for o in flaglist
                                     if opts.get(o)])
                    hint = _("(try: hg resolve %s%s)\n") % (
                        flags,
                        ' '.join(pats))
                    break
            ui.warn(_("arguments do not match paths that need resolving\n"))
            if hint:
                ui.warn(hint)
        elif ms.mergedriver and ms.mdstate() != 's':
            # run conclude step when either a driver-resolved file is requested
            # or there are no driver-resolved files
            # we can't use 'ret' to determine whether any files are unresolved
            # because we might not have tried to resolve some
            if ((runconclude or not list(ms.driverresolved()))
                and not list(ms.unresolved())):
                proceed = mergemod.driverconclude(repo, ms, wctx)
                ms.commit()
                if not proceed:
                    return 1

    # Nudge users into finishing an unfinished operation
    unresolvedf = list(ms.unresolved())
    driverresolvedf = list(ms.driverresolved())
    if not unresolvedf and not driverresolvedf:
        ui.status(_('(no more unresolved files)\n'))
        cmdutil.checkafterresolved(repo)
    elif not unresolvedf:
        ui.status(_('(no more unresolved files -- '
                    'run "hg resolve --all" to conclude)\n'))

    return ret

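# A hedged sketch (hypothetical, for illustration only; not part of the
# command above): the same merge-state machinery `resolve` uses can be
# driven directly. This assumes 'ui' and 'repo' refer to a loaded
# repository with an in-progress merge, exactly as inside the command:
#
#     ms = mergemod.mergestate.read(repo)   # read .hg/merge/state2
#     wctx = repo[None]                     # working context
#     for f in list(ms.unresolved()):
#         ms.resolve(f, wctx)               # re-run the merge for each file
#     ms.commit()                           # persist the updated merge state
#
# Files that still conflict afterwards remain listed by ms.unresolved().
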
@command('revert',
    [('a', 'all', None, _('revert all changes when no arguments given')),
    ('d', 'date', '', _('tipmost revision matching date'), _('DATE')),
    ('r', 'rev', '', _('revert to the specified revision'), _('REV')),
    ('C', 'no-backup', None, _('do not save backup copies of files')),
    ('i', 'interactive', None, _('interactively select the changes')),
    ] + walkopts + dryrunopts,
    _('[OPTION]... [-r REV] [NAME]...'))
def revert(ui, repo, *pats, **opts):
    """restore files to their checkout state

    .. note::

       To check out earlier revisions, you should use :hg:`update REV`.
       To cancel an uncommitted merge (and lose your changes),
       use :hg:`merge --abort`.

    With no revision specified, revert the specified files or directories
    to the contents they had in the parent of the working directory.
    This restores the contents of files to an unmodified
    state and unschedules adds, removes, copies, and renames. If the
    working directory has two parents, you must explicitly specify a
    revision.

    Using the -r/--rev or -d/--date options, revert the given files or
    directories to their states as of a specific revision. Because
    revert does not change the working directory parents, this will
    cause these files to appear modified. This can be helpful to "back
    out" some or all of an earlier change. See :hg:`backout` for a
    related method.

    Modified files are saved with a .orig suffix before reverting.
    To disable these backups, use --no-backup. It is possible to store
    the backup files in a custom directory relative to the root of the
    repository by setting the ``ui.origbackuppath`` configuration
    option.

    See :hg:`help dates` for a list of formats valid for -d/--date.

    See :hg:`help backout` for a way to reverse the effect of an
    earlier changeset.

    Returns 0 on success.
    """

    opts = pycompat.byteskwargs(opts)
    if opts.get("date"):
        if opts.get("rev"):
            raise error.Abort(_("you can't specify a revision and a date"))
        opts["rev"] = cmdutil.finddate(ui, repo, opts["date"])

    parent, p2 = repo.dirstate.parents()
    if not opts.get('rev') and p2 != nullid:
        # revert after merge is a trap for new users (issue2915)
        raise error.Abort(_('uncommitted merge with no revision specified'),
                          hint=_("use 'hg update' or see 'hg help revert'"))

    rev = opts.get('rev')
    if rev:
        repo = scmutil.unhidehashlikerevs(repo, [rev], 'nowarn')
    ctx = scmutil.revsingle(repo, rev)

    if (not (pats or opts.get('include') or opts.get('exclude') or
             opts.get('all') or opts.get('interactive'))):
        msg = _("no files or directories specified")
        if p2 != nullid:
            hint = _("uncommitted merge, use --all to discard all changes,"
                     " or 'hg update -C .' to abort the merge")
            raise error.Abort(msg, hint=hint)
        dirty = any(repo.status())
        node = ctx.node()
        if node != parent:
            if dirty:
                hint = _("uncommitted changes, use --all to discard all"
                         " changes, or 'hg update %s' to update") % ctx.rev()
            else:
                hint = _("use --all to revert all files,"
                         " or 'hg update %s' to update") % ctx.rev()
        elif dirty:
            hint = _("uncommitted changes, use --all to discard all changes")
        else:
            hint = _("use --all to revert all files")
        raise error.Abort(msg, hint=hint)

    return cmdutil.revert(ui, repo, ctx, (parent, p2), *pats,
                          **pycompat.strkwargs(opts))

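# A hedged sketch (hypothetical, for illustration only; not part of the
# command above): reverting paths programmatically with the same helpers
# the command uses. 'rev' and the 'path:README' pattern are placeholders,
# and 'opts' is assumed to be a byteskwargs dict as in the command body:
#
#     ctx = scmutil.revsingle(repo, rev)        # target changectx
#     parent, p2 = repo.dirstate.parents()      # working-dir parents
#     cmdutil.revert(ui, repo, ctx, (parent, p2), 'path:README',
#                    **pycompat.strkwargs(opts))
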
@command('rollback', dryrunopts +
         [('f', 'force', False, _('ignore safety measures'))])
def rollback(ui, repo, **opts):
    """roll back the last transaction (DANGEROUS) (DEPRECATED)

    Please use :hg:`commit --amend` instead of rollback to correct
    mistakes in the last commit.

    This command should be used with care. There is only one level of
    rollback, and there is no way to undo a rollback. It will also
    restore the dirstate at the time of the last transaction, losing
    any dirstate changes since that time. This command does not alter
    the working directory.

    Transactions are used to encapsulate the effects of all commands
    that create new changesets or propagate existing changesets into a
    repository.

    .. container:: verbose

      For example, the following commands are transactional, and their
      effects can be rolled back:

      - commit
      - import
      - pull
      - push (with this repository as the destination)
      - unbundle

      To avoid permanent data loss, rollback will refuse to roll back a
      commit transaction if it isn't checked out. Use --force to
      override this protection.

      The rollback command can be entirely disabled by setting the
      ``ui.rollback`` configuration setting to false. If you're here
      because you want to use rollback and it's disabled, you can
      re-enable the command by setting ``ui.rollback`` to true.

    This command is not intended for use on public repositories. Once
    changes are visible for pull by other users, rolling a transaction
    back locally is ineffective (someone else may already have pulled
    the changes). Furthermore, a race is possible with readers of the
    repository; for example an in-progress pull from the repository
    may fail if a rollback is performed.

    Returns 0 on success, 1 if no rollback data is available.
    """
    if not ui.configbool('ui', 'rollback'):
        raise error.Abort(_('rollback is disabled because it is unsafe'),
                          hint=('see `hg help -v rollback` for information'))
    return repo.rollback(dryrun=opts.get(r'dry_run'),
                         force=opts.get(r'force'))

@command('root', [], cmdtype=readonly)
def root(ui, repo):
    """print the root (top) of the current working directory

    Print the root directory of the current repository.

    Returns 0 on success.
    """
    ui.write(repo.root + "\n")

@command('^serve',
    [('A', 'accesslog', '', _('name of access log file to write to'),
      _('FILE')),
    ('d', 'daemon', None, _('run server in background')),
    ('', 'daemon-postexec', [], _('used internally by daemon mode')),
    ('E', 'errorlog', '', _('name of error log file to write to'), _('FILE')),
    # use string type, then we can check if something was passed
    ('p', 'port', '', _('port to listen on (default: 8000)'), _('PORT')),
    ('a', 'address', '', _('address to listen on (default: all interfaces)'),
      _('ADDR')),
    ('', 'prefix', '', _('prefix path to serve from (default: server root)'),
      _('PREFIX')),
    ('n', 'name', '',
      _('name to show in web pages (default: working directory)'), _('NAME')),
    ('', 'web-conf', '',
      _("name of the hgweb config file (see 'hg help hgweb')"), _('FILE')),
    ('', 'webdir-conf', '', _('name of the hgweb config file (DEPRECATED)'),
      _('FILE')),
    ('', 'pid-file', '', _('name of file to write process ID to'), _('FILE')),
    ('', 'stdio', None, _('for remote clients (ADVANCED)')),
    ('', 'cmdserver', '', _('for remote clients (ADVANCED)'), _('MODE')),
    ('t', 'templates', '', _('web templates to use'), _('TEMPLATE')),
    ('', 'style', '', _('template style to use'), _('STYLE')),
    ('6', 'ipv6', None, _('use IPv6 in addition to IPv4')),
    ('', 'certificate', '', _('SSL certificate file'), _('FILE'))]
    + subrepoopts,
    _('[OPTION]...'),
    optionalrepo=True)
def serve(ui, repo, **opts):
    """start stand-alone webserver

    Start a local HTTP repository browser and pull server. You can use
    this for ad-hoc sharing and browsing of repositories. It is
    recommended to use a real web server to serve a repository for
    longer periods of time.

    Please note that the server does not implement access control.
    This means that, by default, anybody can read from the server and
    nobody can write to it. Set the ``web.allow-push`` option to
    ``*`` to allow everybody to push to the server. You should use a
    real web server if you need to authenticate users.

    By default, the server logs accesses to stdout and errors to
    stderr. Use the -A/--accesslog and -E/--errorlog options to log to
    files.

    To have the server choose a free port number to listen on, specify
    a port number of 0; in this case, the server will print the port
    number it uses.

    Returns 0 on success.
    """

    opts = pycompat.byteskwargs(opts)
    if opts["stdio"] and opts["cmdserver"]:
        raise error.Abort(_("cannot use --stdio with --cmdserver"))

    if opts["stdio"]:
        if repo is None:
            raise error.RepoError(_("there is no Mercurial repository here"
4755 raise error.RepoError(_("there is no Mercurial repository here"
4755 " (.hg not found)"))
4756 " (.hg not found)"))
4756 s = wireprotoserver.sshserver(ui, repo)
4757 s = wireprotoserver.sshserver(ui, repo)
4757 s.serve_forever()
4758 s.serve_forever()
4758
4759
4759 service = server.createservice(ui, repo, opts)
4760 service = server.createservice(ui, repo, opts)
4760 return server.runservice(opts, initfn=service.init, runfn=service.run)
4761 return server.runservice(opts, initfn=service.init, runfn=service.run)
4761
4762
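# The serve docstring above notes that passing a port number of 0 makes the
# server pick a free port and print it. A minimal standalone sketch of the
# underlying OS behavior (plain socket, not Mercurial's server machinery;
# the helper name is hypothetical):

```python
import socket

def pick_free_port(host='127.0.0.1'):
    # Hypothetical helper: binding to port 0 asks the kernel for an unused
    # port, which is the mechanism "hg serve -p 0" relies on before
    # reporting the port it actually bound.
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.bind((host, 0))          # port 0 => kernel assigns a free port
    port = s.getsockname()[1]  # read back the port actually bound
    s.close()
    return port
```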
@command('^status|st',
    [('A', 'all', None, _('show status of all files')),
    ('m', 'modified', None, _('show only modified files')),
    ('a', 'added', None, _('show only added files')),
    ('r', 'removed', None, _('show only removed files')),
    ('d', 'deleted', None, _('show only deleted (but tracked) files')),
    ('c', 'clean', None, _('show only files without changes')),
    ('u', 'unknown', None, _('show only unknown (not tracked) files')),
    ('i', 'ignored', None, _('show only ignored files')),
    ('n', 'no-status', None, _('hide status prefix')),
    ('t', 'terse', '', _('show the terse output (EXPERIMENTAL)')),
    ('C', 'copies', None, _('show source of copied files')),
    ('0', 'print0', None, _('end filenames with NUL, for use with xargs')),
    ('', 'rev', [], _('show difference from revision'), _('REV')),
    ('', 'change', '', _('list the changed files of a revision'), _('REV')),
    ] + walkopts + subrepoopts + formatteropts,
    _('[OPTION]... [FILE]...'),
    inferrepo=True, cmdtype=readonly)
def status(ui, repo, *pats, **opts):
    """show changed files in the working directory

    Show status of files in the repository. If names are given, only
    files that match are shown. Files that are clean or ignored or
    the source of a copy/move operation are not listed unless
    -c/--clean, -i/--ignored, -C/--copies or -A/--all are given.
    Unless options described with "show only ..." are given, the
    options -mardu are used.

    Option -q/--quiet hides untracked (unknown and ignored) files
    unless explicitly requested with -u/--unknown or -i/--ignored.

    .. note::

       :hg:`status` may appear to disagree with diff if permissions have
       changed or a merge has occurred. The standard diff format does
       not report permission changes and diff only reports changes
       relative to one merge parent.

    If one revision is given, it is used as the base revision.
    If two revisions are given, the differences between them are
    shown. The --change option can also be used as a shortcut to list
    the changed files of a revision from its first parent.

    The codes used to show the status of files are::

      M = modified
      A = added
      R = removed
      C = clean
      ! = missing (deleted by non-hg command, but still tracked)
      ? = not tracked
      I = ignored
        = origin of the previous file (with --copies)

    .. container:: verbose

      The -t/--terse option abbreviates the output by showing only the
      directory name if all the files in it share the same status. The option
      takes an argument indicating the statuses to abbreviate: 'm' for
      'modified', 'a' for 'added', 'r' for 'removed', 'd' for 'deleted',
      'u' for 'unknown', 'i' for 'ignored' and 'c' for clean.

      It abbreviates only those statuses which are passed. Note that clean and
      ignored files are not displayed with '--terse ic' unless the -c/--clean
      and -i/--ignored options are also used.

      The -v/--verbose option shows information when the repository is in an
      unfinished merge, shelve, rebase state etc. You can have this behavior
      turned on by default by enabling the ``commands.status.verbose`` option.

      You can skip displaying some of these states by setting
      ``commands.status.skipstates`` to one or more of: 'bisect', 'graft',
      'histedit', 'merge', 'rebase', or 'unshelve'.

      Examples:

      - show changes in the working directory relative to a
        changeset::

          hg status --rev 9353

      - show changes in the working directory relative to the
        current directory (see :hg:`help patterns` for more information)::

          hg status re:

      - show all changes including copies in an existing changeset::

          hg status --copies --change 9353

      - get a NUL separated list of added files, suitable for xargs::

          hg status -an0

      - show more information about the repository status, abbreviating
        added, removed, modified, deleted, and untracked paths::

          hg status -v -t mardu

    Returns 0 on success.

    """

    opts = pycompat.byteskwargs(opts)
    revs = opts.get('rev')
    change = opts.get('change')
    terse = opts.get('terse')

    if revs and change:
        msg = _('cannot specify --rev and --change at the same time')
        raise error.Abort(msg)
    elif revs and terse:
        msg = _('cannot use --terse with --rev')
        raise error.Abort(msg)
    elif change:
        repo = scmutil.unhidehashlikerevs(repo, [change], 'nowarn')
        node2 = scmutil.revsingle(repo, change, None).node()
        node1 = repo[node2].p1().node()
    else:
        repo = scmutil.unhidehashlikerevs(repo, revs, 'nowarn')
        node1, node2 = scmutil.revpair(repo, revs)

    if pats or ui.configbool('commands', 'status.relative'):
        cwd = repo.getcwd()
    else:
        cwd = ''

    if opts.get('print0'):
        end = '\0'
    else:
        end = '\n'
    copy = {}
    states = 'modified added removed deleted unknown ignored clean'.split()
    show = [k for k in states if opts.get(k)]
    if opts.get('all'):
        show += ui.quiet and (states[:4] + ['clean']) or states

    if not show:
        if ui.quiet:
            show = states[:4]
        else:
            show = states[:5]

    m = scmutil.match(repo[node2], pats, opts)
    if terse:
        # we need to compute clean and unknown to terse
        stat = repo.status(node1, node2, m,
                           'ignored' in show or 'i' in terse,
                           True, True, opts.get('subrepos'))

        stat = cmdutil.tersedir(stat, terse)
    else:
        stat = repo.status(node1, node2, m,
                           'ignored' in show, 'clean' in show,
                           'unknown' in show, opts.get('subrepos'))

    changestates = zip(states, pycompat.iterbytestr('MAR!?IC'), stat)

    if (opts.get('all') or opts.get('copies')
        or ui.configbool('ui', 'statuscopies')) and not opts.get('no_status'):
        copy = copies.pathcopies(repo[node1], repo[node2], m)

    ui.pager('status')
    fm = ui.formatter('status', opts)
    fmt = '%s' + end
    showchar = not opts.get('no_status')

    for state, char, files in changestates:
        if state in show:
            label = 'status.' + state
            for f in files:
                fm.startitem()
                fm.condwrite(showchar, 'status', '%s ', char, label=label)
                fm.write('path', fmt, repo.pathto(f, cwd), label=label)
                if f in copy:
                    fm.write("copy", '  %s' + end, repo.pathto(copy[f], cwd),
                             label='status.copied')

    if ((ui.verbose or ui.configbool('commands', 'status.verbose'))
        and not ui.plain()):
        cmdutil.morestatus(repo, fm)
    fm.end()

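# status() above pairs each state name positionally with its one-letter
# display code via zip(states, 'MAR!?IC', stat). A self-contained sketch of
# that pairing, with a faked stat result standing in for repo.status()
# (the sample file names are hypothetical):

```python
# One display character per state, in the same order as the state names.
states = 'modified added removed deleted unknown ignored clean'.split()
codes = 'MAR!?IC'
# stat would be the per-category file lists from repo.status(); faked here.
stat = [['a.py'], [], ['old.txt'], [], [], [], []]
changestates = list(zip(states, codes, stat))
for state, char, files in changestates:
    for f in files:
        # Same shape as the formatter output: "M a.py", "R old.txt", ...
        print('%s %s' % (char, f))
```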
@command('^summary|sum',
    [('', 'remote', None, _('check for push and pull'))],
    '[--remote]', cmdtype=readonly)
def summary(ui, repo, **opts):
    """summarize working directory state

    This generates a brief summary of the working directory state,
    including parents, branch, commit status, phase and available updates.

    With the --remote option, this will check the default paths for
    incoming and outgoing changes. This can be time-consuming.

    Returns 0 on success.
    """

    opts = pycompat.byteskwargs(opts)
    ui.pager('summary')
    ctx = repo[None]
    parents = ctx.parents()
    pnode = parents[0].node()
    marks = []

    ms = None
    try:
        ms = mergemod.mergestate.read(repo)
    except error.UnsupportedMergeRecords as e:
        s = ' '.join(e.recordtypes)
        ui.warn(
            _('warning: merge state has unsupported record types: %s\n') % s)
        unresolved = []
    else:
        unresolved = list(ms.unresolved())

    for p in parents:
        # label with log.changeset (instead of log.parent) since this
        # shows a working directory parent *changeset*:
        # i18n: column positioning for "hg summary"
        ui.write(_('parent: %d:%s ') % (p.rev(), p),
                 label=logcmdutil.changesetlabels(p))
        ui.write(' '.join(p.tags()), label='log.tag')
        if p.bookmarks():
            marks.extend(p.bookmarks())
        if p.rev() == -1:
            if not len(repo):
                ui.write(_(' (empty repository)'))
            else:
                ui.write(_(' (no revision checked out)'))
        if p.obsolete():
            ui.write(_(' (obsolete)'))
        if p.isunstable():
            instabilities = (ui.label(instability, 'trouble.%s' % instability)
                             for instability in p.instabilities())
            ui.write(' ('
                     + ', '.join(instabilities)
                     + ')')
        ui.write('\n')
        if p.description():
            ui.status(' ' + p.description().splitlines()[0].strip() + '\n',
                      label='log.summary')

    branch = ctx.branch()
    bheads = repo.branchheads(branch)
    # i18n: column positioning for "hg summary"
    m = _('branch: %s\n') % branch
    if branch != 'default':
        ui.write(m, label='log.branch')
    else:
        ui.status(m, label='log.branch')

    if marks:
        active = repo._activebookmark
        # i18n: column positioning for "hg summary"
        ui.write(_('bookmarks:'), label='log.bookmark')
        if active is not None:
            if active in marks:
                ui.write(' *' + active, label=bookmarks.activebookmarklabel)
                marks.remove(active)
            else:
                ui.write(' [%s]' % active, label=bookmarks.activebookmarklabel)
        for m in marks:
            ui.write(' ' + m, label='log.bookmark')
        ui.write('\n', label='log.bookmark')

    status = repo.status(unknown=True)

    c = repo.dirstate.copies()
    copied, renamed = [], []
    for d, s in c.iteritems():
        if s in status.removed:
            status.removed.remove(s)
            renamed.append(d)
        else:
            copied.append(d)
        if d in status.added:
            status.added.remove(d)

    subs = [s for s in ctx.substate if ctx.sub(s).dirty()]

    labels = [(ui.label(_('%d modified'), 'status.modified'), status.modified),
              (ui.label(_('%d added'), 'status.added'), status.added),
              (ui.label(_('%d removed'), 'status.removed'), status.removed),
              (ui.label(_('%d renamed'), 'status.copied'), renamed),
              (ui.label(_('%d copied'), 'status.copied'), copied),
              (ui.label(_('%d deleted'), 'status.deleted'), status.deleted),
              (ui.label(_('%d unknown'), 'status.unknown'), status.unknown),
              (ui.label(_('%d unresolved'), 'resolve.unresolved'), unresolved),
              (ui.label(_('%d subrepos'), 'status.modified'), subs)]
    t = []
    for l, s in labels:
        if s:
            t.append(l % len(s))

    t = ', '.join(t)
    cleanworkdir = False

    if repo.vfs.exists('graftstate'):
        t += _(' (graft in progress)')
    if repo.vfs.exists('updatestate'):
        t += _(' (interrupted update)')
    elif len(parents) > 1:
        t += _(' (merge)')
    elif branch != parents[0].branch():
        t += _(' (new branch)')
    elif (parents[0].closesbranch() and
          pnode in repo.branchheads(branch, closed=True)):
        t += _(' (head closed)')
    elif not (status.modified or status.added or status.removed or renamed or
              copied or subs):
        t += _(' (clean)')
        cleanworkdir = True
    elif pnode not in bheads:
        t += _(' (new branch head)')

    if parents:
        pendingphase = max(p.phase() for p in parents)
    else:
        pendingphase = phases.public

    if pendingphase > phases.newcommitphase(ui):
        t += ' (%s)' % phases.phasenames[pendingphase]

    if cleanworkdir:
        # i18n: column positioning for "hg summary"
        ui.status(_('commit: %s\n') % t.strip())
    else:
        # i18n: column positioning for "hg summary"
        ui.write(_('commit: %s\n') % t.strip())

    # all ancestors of branch heads - all ancestors of parent = new csets
    new = len(repo.changelog.findmissing([pctx.node() for pctx in parents],
                                         bheads))

    if new == 0:
        # i18n: column positioning for "hg summary"
        ui.status(_('update: (current)\n'))
    elif pnode not in bheads:
        # i18n: column positioning for "hg summary"
        ui.write(_('update: %d new changesets (update)\n') % new)
    else:
        # i18n: column positioning for "hg summary"
        ui.write(_('update: %d new changesets, %d branch heads (merge)\n') %
                 (new, len(bheads)))

    t = []
    draft = len(repo.revs('draft()'))
    if draft:
        t.append(_('%d draft') % draft)
    secret = len(repo.revs('secret()'))
    if secret:
        t.append(_('%d secret') % secret)

    if draft or secret:
        ui.status(_('phases: %s\n') % ', '.join(t))

    if obsolete.isenabled(repo, obsolete.createmarkersopt):
        for trouble in ("orphan", "contentdivergent", "phasedivergent"):
            numtrouble = len(repo.revs(trouble + "()"))
            # We write all the possibilities to ease translation
            troublemsg = {
                "orphan": _("orphan: %d changesets"),
                "contentdivergent": _("content-divergent: %d changesets"),
                "phasedivergent": _("phase-divergent: %d changesets"),
            }
            if numtrouble > 0:
                ui.status(troublemsg[trouble] % numtrouble + "\n")

    cmdutil.summaryhooks(ui, repo)

    if opts.get('remote'):
        needsincoming, needsoutgoing = True, True
    else:
        needsincoming, needsoutgoing = False, False
        for i, o in cmdutil.summaryremotehooks(ui, repo, opts, None):
            if i:
                needsincoming = True
            if o:
                needsoutgoing = True
        if not needsincoming and not needsoutgoing:
            return

    def getincoming():
        source, branches = hg.parseurl(ui.expandpath('default'))
        sbranch = branches[0]
        try:
            other = hg.peer(repo, {}, source)
        except error.RepoError:
            if opts.get('remote'):
                raise
            return source, sbranch, None, None, None
        revs, checkout = hg.addbranchrevs(repo, other, branches, None)
        if revs:
            revs = [other.lookup(rev) for rev in revs]
        ui.debug('comparing with %s\n' % util.hidepassword(source))
        repo.ui.pushbuffer()
        commoninc = discovery.findcommonincoming(repo, other, heads=revs)
        repo.ui.popbuffer()
        return source, sbranch, other, commoninc, commoninc[1]

    if needsincoming:
        source, sbranch, sother, commoninc, incoming = getincoming()
    else:
        source = sbranch = sother = commoninc = incoming = None

    def getoutgoing():
        dest, branches = hg.parseurl(ui.expandpath('default-push', 'default'))
        dbranch = branches[0]
        revs, checkout = hg.addbranchrevs(repo, repo, branches, None)
        if source != dest:
            try:
                dother = hg.peer(repo, {}, dest)
            except error.RepoError:
                if opts.get('remote'):
                    raise
                return dest, dbranch, None, None
            ui.debug('comparing with %s\n' % util.hidepassword(dest))
        elif sother is None:
            # there is no explicit destination peer, but source one is invalid
            return dest, dbranch, None, None
        else:
            dother = sother
        if (source != dest or (sbranch is not None and sbranch != dbranch)):
            common = None
        else:
            common = commoninc
        if revs:
            revs = [repo.lookup(rev) for rev in revs]
        repo.ui.pushbuffer()
        outgoing = discovery.findcommonoutgoing(repo, dother, onlyheads=revs,
                                                commoninc=common)
        repo.ui.popbuffer()
        return dest, dbranch, dother, outgoing

    if needsoutgoing:
        dest, dbranch, dother, outgoing = getoutgoing()
    else:
        dest = dbranch = dother = outgoing = None

    if opts.get('remote'):
        t = []
        if incoming:
            t.append(_('1 or more incoming'))
        o = outgoing.missing
        if o:
            t.append(_('%d outgoing') % len(o))
        other = dother or sother
        if 'bookmarks' in other.listkeys('namespaces'):
            counts = bookmarks.summary(repo, other)
            if counts[0] > 0:
                t.append(_('%d incoming bookmarks') % counts[0])
5214 if counts[1] > 0:
5215 if counts[1] > 0:
5215 t.append(_('%d outgoing bookmarks') % counts[1])
5216 t.append(_('%d outgoing bookmarks') % counts[1])
5216
5217
5217 if t:
5218 if t:
5218 # i18n: column positioning for "hg summary"
5219 # i18n: column positioning for "hg summary"
5219 ui.write(_('remote: %s\n') % (', '.join(t)))
5220 ui.write(_('remote: %s\n') % (', '.join(t)))
5220 else:
5221 else:
5221 # i18n: column positioning for "hg summary"
5222 # i18n: column positioning for "hg summary"
5222 ui.status(_('remote: (synced)\n'))
5223 ui.status(_('remote: (synced)\n'))
5223
5224
5224 cmdutil.summaryremotehooks(ui, repo, opts,
5225 cmdutil.summaryremotehooks(ui, repo, opts,
5225 ((source, sbranch, sother, commoninc),
5226 ((source, sbranch, sother, commoninc),
5226 (dest, dbranch, dother, outgoing)))
5227 (dest, dbranch, dother, outgoing)))
5227
5228
@command('tag',
    [('f', 'force', None, _('force tag')),
    ('l', 'local', None, _('make the tag local')),
    ('r', 'rev', '', _('revision to tag'), _('REV')),
    ('', 'remove', None, _('remove a tag')),
    # -l/--local is already there, commitopts cannot be used
    ('e', 'edit', None, _('invoke editor on commit messages')),
    ('m', 'message', '', _('use text as commit message'), _('TEXT')),
    ] + commitopts2,
    _('[-f] [-l] [-m TEXT] [-d DATE] [-u USER] [-r REV] NAME...'))
def tag(ui, repo, name1, *names, **opts):
    """add one or more tags for the current or given revision

    Name a particular revision using <name>.

    Tags are used to name particular revisions of the repository and are
    very useful to compare different revisions, to go back to significant
    earlier versions or to mark branch points as releases, etc. Changing
    an existing tag is normally disallowed; use -f/--force to override.

    If no revision is given, the parent of the working directory is
    used.

    To facilitate version control, distribution, and merging of tags,
    they are stored as a file named ".hgtags" which is managed similarly
    to other project files and can be hand-edited if necessary. This
    also means that tagging creates a new commit. The file
    ".hg/localtags" is used for local tags (not shared among
    repositories).

    Tag commits are usually made at the head of a branch. If the parent
    of the working directory is not a branch head, :hg:`tag` aborts; use
    -f/--force to force the tag commit to be based on a non-head
    changeset.

    See :hg:`help dates` for a list of formats valid for -d/--date.

    Since tag names have priority over branch names during revision
    lookup, using an existing branch name as a tag name is discouraged.

    Returns 0 on success.
    """
    opts = pycompat.byteskwargs(opts)
    wlock = lock = None
    try:
        wlock = repo.wlock()
        lock = repo.lock()
        rev_ = "."
        names = [t.strip() for t in (name1,) + names]
        if len(names) != len(set(names)):
            raise error.Abort(_('tag names must be unique'))
        for n in names:
            scmutil.checknewlabel(repo, n, 'tag')
            if not n:
                raise error.Abort(_('tag names cannot consist entirely of '
                                    'whitespace'))
        if opts.get('rev') and opts.get('remove'):
            raise error.Abort(_("--rev and --remove are incompatible"))
        if opts.get('rev'):
            rev_ = opts['rev']
        message = opts.get('message')
        if opts.get('remove'):
            if opts.get('local'):
                expectedtype = 'local'
            else:
                expectedtype = 'global'

            for n in names:
                if not repo.tagtype(n):
                    raise error.Abort(_("tag '%s' does not exist") % n)
                if repo.tagtype(n) != expectedtype:
                    if expectedtype == 'global':
                        raise error.Abort(_("tag '%s' is not a global tag") % n)
                    else:
                        raise error.Abort(_("tag '%s' is not a local tag") % n)
            rev_ = 'null'
            if not message:
                # we don't translate commit messages
                message = 'Removed tag %s' % ', '.join(names)
        elif not opts.get('force'):
            for n in names:
                if n in repo.tags():
                    raise error.Abort(_("tag '%s' already exists "
                                        "(use -f to force)") % n)
        if not opts.get('local'):
            p1, p2 = repo.dirstate.parents()
            if p2 != nullid:
                raise error.Abort(_('uncommitted merge'))
            bheads = repo.branchheads()
            if not opts.get('force') and bheads and p1 not in bheads:
                raise error.Abort(_('working directory is not at a branch head '
                                    '(use -f to force)'))
        node = scmutil.revsingle(repo, rev_).node()

        if not message:
            # we don't translate commit messages
            message = ('Added tag %s for changeset %s' %
                       (', '.join(names), short(node)))

        date = opts.get('date')
        if date:
            date = dateutil.parsedate(date)

        if opts.get('remove'):
            editform = 'tag.remove'
        else:
            editform = 'tag.add'
        editor = cmdutil.getcommiteditor(editform=editform,
                                         **pycompat.strkwargs(opts))

        # don't allow tagging the null rev
        if (not opts.get('remove') and
            scmutil.revsingle(repo, rev_).rev() == nullrev):
            raise error.Abort(_("cannot tag null revision"))

        tagsmod.tag(repo, names, node, message, opts.get('local'),
                    opts.get('user'), date, editor=editor)
    finally:
        release(lock, wlock)

@command('tags', formatteropts, '', cmdtype=readonly)
def tags(ui, repo, **opts):
    """list repository tags

    This lists both regular and local tags. When the -v/--verbose
    switch is used, a third column "local" is printed for local tags.
    When the -q/--quiet switch is used, only the tag name is printed.

    Returns 0 on success.
    """

    opts = pycompat.byteskwargs(opts)
    ui.pager('tags')
    fm = ui.formatter('tags', opts)
    hexfunc = fm.hexfunc
    tagtype = ""

    for t, n in reversed(repo.tagslist()):
        hn = hexfunc(n)
        label = 'tags.normal'
        tagtype = ''
        if repo.tagtype(t) == 'local':
            label = 'tags.local'
            tagtype = 'local'

        fm.startitem()
        fm.write('tag', '%s', t, label=label)
        fmt = " " * (30 - encoding.colwidth(t)) + ' %5d:%s'
        fm.condwrite(not ui.quiet, 'rev node', fmt,
                     repo.changelog.rev(n), hn, label=label)
        fm.condwrite(ui.verbose and tagtype, 'type', ' %s',
                     tagtype, label=label)
        fm.plain('\n')
    fm.end()

@command('tip',
    [('p', 'patch', None, _('show patch')),
    ('g', 'git', None, _('use git extended diff format')),
    ] + templateopts,
    _('[-p] [-g]'))
def tip(ui, repo, **opts):
    """show the tip revision (DEPRECATED)

    The tip revision (usually just called the tip) is the changeset
    most recently added to the repository (and therefore the most
    recently changed head).

    If you have just made a commit, that commit will be the tip. If
    you have just pulled changes from another repository, the tip of
    that repository becomes the current tip. The "tip" tag is special
    and cannot be renamed or assigned to a different changeset.

    This command is deprecated, please use :hg:`heads` instead.

    Returns 0 on success.
    """
    opts = pycompat.byteskwargs(opts)
    displayer = logcmdutil.changesetdisplayer(ui, repo, opts)
    displayer.show(repo['tip'])
    displayer.close()

@command('unbundle',
    [('u', 'update', None,
     _('update to new branch head if changesets were unbundled'))],
    _('[-u] FILE...'))
def unbundle(ui, repo, fname1, *fnames, **opts):
    """apply one or more bundle files

    Apply one or more bundle files generated by :hg:`bundle`.

    Returns 0 on success, 1 if an update has unresolved files.
    """
    fnames = (fname1,) + fnames

    with repo.lock():
        for fname in fnames:
            f = hg.openpath(ui, fname)
            gen = exchange.readbundle(ui, f, fname)
            if isinstance(gen, streamclone.streamcloneapplier):
                raise error.Abort(
                        _('packed bundles cannot be applied with '
                          '"hg unbundle"'),
                        hint=_('use "hg debugapplystreamclonebundle"'))
            url = 'bundle:' + fname
            try:
                txnname = 'unbundle'
                if not isinstance(gen, bundle2.unbundle20):
                    txnname = 'unbundle\n%s' % util.hidepassword(url)
                with repo.transaction(txnname) as tr:
                    op = bundle2.applybundle(repo, gen, tr, source='unbundle',
                                             url=url)
            except error.BundleUnknownFeatureError as exc:
                raise error.Abort(
                    _('%s: unknown bundle feature, %s') % (fname, exc),
                    hint=_("see https://mercurial-scm.org/"
                           "wiki/BundleFeature for more "
                           "information"))
            modheads = bundle2.combinechangegroupresults(op)

    return postincoming(ui, repo, modheads, opts.get(r'update'), None, None)

@command('^update|up|checkout|co',
    [('C', 'clean', None, _('discard uncommitted changes (no backup)')),
    ('c', 'check', None, _('require clean working directory')),
    ('m', 'merge', None, _('merge uncommitted changes')),
    ('d', 'date', '', _('tipmost revision matching date'), _('DATE')),
    ('r', 'rev', '', _('revision'), _('REV'))
    ] + mergetoolopts,
    _('[-C|-c|-m] [-d DATE] [[-r] REV]'))
def update(ui, repo, node=None, **opts):
    """update working directory (or switch revisions)

    Update the repository's working directory to the specified
    changeset. If no changeset is specified, update to the tip of the
    current named branch and move the active bookmark (see :hg:`help
    bookmarks`).

    Update sets the working directory's parent revision to the specified
    changeset (see :hg:`help parents`).

    If the changeset is not a descendant or ancestor of the working
    directory's parent and there are uncommitted changes, the update is
    aborted. With the -c/--check option, the working directory is checked
    for uncommitted changes; if none are found, the working directory is
    updated to the specified changeset.

    .. container:: verbose

      The -C/--clean, -c/--check, and -m/--merge options control what
      happens if the working directory contains uncommitted changes.
      At most one of them can be specified.

      1. If no option is specified, and if
         the requested changeset is an ancestor or descendant of
         the working directory's parent, the uncommitted changes
         are merged into the requested changeset and the merged
         result is left uncommitted. If the requested changeset is
         not an ancestor or descendant (that is, it is on another
         branch), the update is aborted and the uncommitted changes
         are preserved.

      2. With the -m/--merge option, the update is allowed even if the
         requested changeset is not an ancestor or descendant of
         the working directory's parent.

      3. With the -c/--check option, the update is aborted and the
         uncommitted changes are preserved.

      4. With the -C/--clean option, uncommitted changes are discarded and
         the working directory is updated to the requested changeset.

    To cancel an uncommitted merge (and lose your changes), use
    :hg:`merge --abort`.

    Use null as the changeset to remove the working directory (like
    :hg:`clone -U`).

    If you want to revert just one file to an older revision, use
    :hg:`revert [-r REV] NAME`.

    See :hg:`help dates` for a list of formats valid for -d/--date.

    Returns 0 on success, 1 if there are unresolved files.
    """
    rev = opts.get(r'rev')
    date = opts.get(r'date')
    clean = opts.get(r'clean')
    check = opts.get(r'check')
    merge = opts.get(r'merge')
    if rev and node:
        raise error.Abort(_("please specify just one revision"))

    if ui.configbool('commands', 'update.requiredest'):
        if not node and not rev and not date:
            raise error.Abort(_('you must specify a destination'),
                              hint=_('for example: hg update ".::"'))

    if rev is None or rev == '':
        rev = node

    if date and rev is not None:
        raise error.Abort(_("you can't specify a revision and a date"))

    if len([x for x in (clean, check, merge) if x]) > 1:
        raise error.Abort(_("can only specify one of -C/--clean, -c/--check, "
                            "or -m/--merge"))

    updatecheck = None
    if check:
        updatecheck = 'abort'
    elif merge:
        updatecheck = 'none'

    with repo.wlock():
        cmdutil.clearunfinished(repo)

        if date:
            rev = cmdutil.finddate(ui, repo, date)

        # if we defined a bookmark, we have to remember the original name
        brev = rev
        if rev:
            repo = scmutil.unhidehashlikerevs(repo, [rev], 'nowarn')
        ctx = scmutil.revsingle(repo, rev, rev)
        rev = ctx.rev()
        if ctx.hidden():
            ctxstr = ctx.hex()[:12]
            ui.warn(_("updating to a hidden changeset %s\n") % ctxstr)

            if ctx.obsolete():
                obsfatemsg = obsutil._getfilteredreason(repo, ctxstr, ctx)
                ui.warn("(%s)\n" % obsfatemsg)

        repo.ui.setconfig('ui', 'forcemerge', opts.get(r'tool'), 'update')

        return hg.updatetotally(ui, repo, rev, brev, clean=clean,
                                updatecheck=updatecheck)

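The flag handling at the top of `update` reduces to two small rules: the -C/--clean, -c/--check, and -m/--merge options are mutually exclusive, and `check`/`merge` map onto the `'abort'`/`'none'` updatecheck modes. A minimal standalone sketch of just that logic (`pickupdatecheck` is a hypothetical helper name, not part of Mercurial's API, and a plain `ValueError` stands in for `error.Abort`):

```python
def pickupdatecheck(clean, check, merge):
    """Validate the mutually exclusive -C/-c/-m flags and derive the
    updatecheck mode, mirroring the checks in the update command."""
    if len([x for x in (clean, check, merge) if x]) > 1:
        raise ValueError("can only specify one of -C/--clean, "
                         "-c/--check, or -m/--merge")
    if check:
        return 'abort'      # refuse to update over uncommitted changes
    if merge:
        return 'none'       # allow crossing branches, merging changes
    return None             # fall through to the configured default
```

The truthy-count idiom keeps the exclusivity check symmetric: adding a fourth exclusive flag would only mean extending the tuple.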
@command('verify', [])
def verify(ui, repo):
    """verify the integrity of the repository

    Verify the integrity of the current repository.

    This will perform an extensive check of the repository's
    integrity, validating the hashes and checksums of each entry in
    the changelog, manifest, and tracked files, as well as the
    integrity of their crosslinks and indices.

    Please see https://mercurial-scm.org/wiki/RepositoryCorruption
    for more information about recovery from corruption of the
    repository.

    Returns 0 on success, 1 if errors are encountered.
    """
    return hg.verify(repo)

@command('version', [] + formatteropts, norepo=True, cmdtype=readonly)
def version_(ui, **opts):
    """output version and copyright information"""
    opts = pycompat.byteskwargs(opts)
    if ui.verbose:
        ui.pager('version')
    fm = ui.formatter("version", opts)
    fm.startitem()
    fm.write("ver", _("Mercurial Distributed SCM (version %s)\n"),
             util.version())
    license = _(
        "(see https://mercurial-scm.org for more information)\n"
        "\nCopyright (C) 2005-2018 Matt Mackall and others\n"
        "This is free software; see the source for copying conditions. "
        "There is NO\nwarranty; "
        "not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.\n"
    )
    if not ui.quiet:
        fm.plain(license)

    if ui.verbose:
        fm.plain(_("\nEnabled extensions:\n\n"))
        # format names and versions into columns
        names = []
        vers = []
        isinternals = []
        for name, module in extensions.extensions():
            names.append(name)
            vers.append(extensions.moduleversion(module) or None)
            isinternals.append(extensions.ismoduleinternal(module))
        fn = fm.nested("extensions")
        if names:
            namefmt = " %%-%ds " % max(len(n) for n in names)
            places = [_("external"), _("internal")]
            for n, v, p in zip(names, vers, isinternals):
                fn.startitem()
                fn.condwrite(ui.verbose, "name", namefmt, n)
                if ui.verbose:
                    fn.plain("%s " % places[p])
                fn.data(bundled=p)
                fn.condwrite(ui.verbose and v, "ver", "%s", v)
                if ui.verbose:
                    fn.plain("\n")
        fn.end()
    fm.end()

5631 def loadcmdtable(ui, name, cmdtable):
5632 def loadcmdtable(ui, name, cmdtable):
5632 """Load command functions from specified cmdtable
5633 """Load command functions from specified cmdtable
5633 """
5634 """
5634 overrides = [cmd for cmd in cmdtable if cmd in table]
5635 overrides = [cmd for cmd in cmdtable if cmd in table]
5635 if overrides:
5636 if overrides:
5636 ui.warn(_("extension '%s' overrides commands: %s\n")
5637 ui.warn(_("extension '%s' overrides commands: %s\n")
5637 % (name, " ".join(overrides)))
5638 % (name, " ".join(overrides)))
5638 table.update(cmdtable)
5639 table.update(cmdtable)
@@ -1,2310 +1,2338 b''
1 # exchange.py - utility to exchange data between repos.
1 # exchange.py - utility to exchange data between repos.
2 #
2 #
3 # Copyright 2005-2007 Matt Mackall <mpm@selenic.com>
3 # Copyright 2005-2007 Matt Mackall <mpm@selenic.com>
4 #
4 #
5 # This software may be used and distributed according to the terms of the
5 # This software may be used and distributed according to the terms of the
6 # GNU General Public License version 2 or any later version.
6 # GNU General Public License version 2 or any later version.
7
7
8 from __future__ import absolute_import
8 from __future__ import absolute_import
9
9
10 import collections
10 import collections
11 import errno
11 import errno
12 import hashlib
12 import hashlib
13
13
14 from .i18n import _
14 from .i18n import _
15 from .node import (
15 from .node import (
16 bin,
16 bin,
17 hex,
17 hex,
18 nullid,
18 nullid,
19 )
19 )
20 from .thirdparty import (
20 from .thirdparty import (
21 attr,
21 attr,
22 )
22 )
23 from . import (
23 from . import (
24 bookmarks as bookmod,
24 bookmarks as bookmod,
25 bundle2,
25 bundle2,
26 changegroup,
26 changegroup,
27 discovery,
27 discovery,
28 error,
28 error,
29 lock as lockmod,
29 lock as lockmod,
30 logexchange,
30 logexchange,
31 obsolete,
31 obsolete,
32 phases,
32 phases,
33 pushkey,
33 pushkey,
34 pycompat,
34 pycompat,
35 scmutil,
35 scmutil,
36 sslutil,
36 sslutil,
37 streamclone,
37 streamclone,
38 url as urlmod,
38 url as urlmod,
39 util,
39 util,
40 )
40 )
41 from .utils import (
41 from .utils import (
42 stringutil,
42 stringutil,
43 )
43 )
44
44
45 urlerr = util.urlerr
45 urlerr = util.urlerr
46 urlreq = util.urlreq
46 urlreq = util.urlreq
47
47
48 # Maps bundle version human names to changegroup versions.
48 # Maps bundle version human names to changegroup versions.
49 _bundlespeccgversions = {'v1': '01',
49 _bundlespeccgversions = {'v1': '01',
50 'v2': '02',
50 'v2': '02',
51 'packed1': 's1',
51 'packed1': 's1',
52 'bundle2': '02', #legacy
52 'bundle2': '02', #legacy
53 }
53 }
54
54
55 # Maps bundle version with content opts to choose which part to bundle
56 _bundlespeccontentopts = {
57 'v1': {
58 'changegroup': True,
59 'cg.version': '01',
60 'obsolescence': False,
61 'phases': False,
62 'tagsfnodescache': False,
63 'revbranchcache': False
64 },
65 'v2': {
66 'changegroup': True,
67 'cg.version': '02',
68 'obsolescence': False,
69 'phases': False,
70 'tagsfnodescache': True,
71 'revbranchcache': True
72 },
73 'packed1' : {
74 'cg.version': 's1'
75 }
76 }
77 _bundlespeccontentopts['bundle2'] = _bundlespeccontentopts['v2']
78
55 # Compression engines allowed in version 1. THIS SHOULD NEVER CHANGE.
79 # Compression engines allowed in version 1. THIS SHOULD NEVER CHANGE.
56 _bundlespecv1compengines = {'gzip', 'bzip2', 'none'}
80 _bundlespecv1compengines = {'gzip', 'bzip2', 'none'}
57
81
58 @attr.s
82 @attr.s
59 class bundlespec(object):
83 class bundlespec(object):
60 compression = attr.ib()
84 compression = attr.ib()
61 version = attr.ib()
85 version = attr.ib()
62 params = attr.ib()
86 params = attr.ib()
87 contentopts = attr.ib()
63
88
64 def parsebundlespec(repo, spec, strict=True, externalnames=False):
89 def parsebundlespec(repo, spec, strict=True, externalnames=False):
65 """Parse a bundle string specification into parts.
90 """Parse a bundle string specification into parts.
66
91
67 Bundle specifications denote a well-defined bundle/exchange format.
92 Bundle specifications denote a well-defined bundle/exchange format.
68 The content of a given specification should not change over time in
93 The content of a given specification should not change over time in
69 order to ensure that bundles produced by a newer version of Mercurial are
94 order to ensure that bundles produced by a newer version of Mercurial are
70 readable from an older version.
95 readable from an older version.
71
96
72 The string currently has the form:
97 The string currently has the form:
73
98
74 <compression>-<type>[;<parameter0>[;<parameter1>]]
99 <compression>-<type>[;<parameter0>[;<parameter1>]]
75
100
76 Where <compression> is one of the supported compression formats
101 Where <compression> is one of the supported compression formats
77 and <type> is (currently) a version string. A ";" can follow the type and
102 and <type> is (currently) a version string. A ";" can follow the type and
78 all text afterwards is interpreted as URI encoded, ";" delimited key=value
103 all text afterwards is interpreted as URI encoded, ";" delimited key=value
79 pairs.
104 pairs.
80
105
81 If ``strict`` is True (the default) <compression> is required. Otherwise,
106 If ``strict`` is True (the default) <compression> is required. Otherwise,
82 it is optional.
107 it is optional.
83
108
84 If ``externalnames`` is False (the default), the human-centric names will
109 If ``externalnames`` is False (the default), the human-centric names will
85 be converted to their internal representation.
110 be converted to their internal representation.
86
111
87 Returns a bundlespec object of (compression, version, parameters).
112 Returns a bundlespec object of (compression, version, parameters, contentopts).
88 Compression will be ``None`` if not in strict mode and a compression isn't
113 Compression will be ``None`` if not in strict mode and a compression isn't
89 defined.
114 defined.
90
115
91 An ``InvalidBundleSpecification`` is raised when the specification is
116 An ``InvalidBundleSpecification`` is raised when the specification is
92 not syntactically well formed.
117 not syntactically well formed.
93
118
94 An ``UnsupportedBundleSpecification`` is raised when the compression or
119 An ``UnsupportedBundleSpecification`` is raised when the compression or
95 bundle type/version is not recognized.
120 bundle type/version is not recognized.
96
121
97 Note: this function will likely eventually return a more complex data
122 Note: this function will likely eventually return a more complex data
98 structure, including bundle2 part information.
123 structure, including bundle2 part information.
99 """
124 """
100 def parseparams(s):
125 def parseparams(s):
101 if ';' not in s:
126 if ';' not in s:
102 return s, {}
127 return s, {}
103
128
104 params = {}
129 params = {}
105 version, paramstr = s.split(';', 1)
130 version, paramstr = s.split(';', 1)
106
131
107 for p in paramstr.split(';'):
132 for p in paramstr.split(';'):
108 if '=' not in p:
133 if '=' not in p:
109 raise error.InvalidBundleSpecification(
134 raise error.InvalidBundleSpecification(
110 _('invalid bundle specification: '
135 _('invalid bundle specification: '
111 'missing "=" in parameter: %s') % p)
136 'missing "=" in parameter: %s') % p)
112
137
113 key, value = p.split('=', 1)
138 key, value = p.split('=', 1)
114 key = urlreq.unquote(key)
139 key = urlreq.unquote(key)
115 value = urlreq.unquote(value)
140 value = urlreq.unquote(value)
116 params[key] = value
141 params[key] = value
117
142
118 return version, params
143 return version, params
119
144
120
145
121 if strict and '-' not in spec:
146 if strict and '-' not in spec:
122 raise error.InvalidBundleSpecification(
147 raise error.InvalidBundleSpecification(
123 _('invalid bundle specification; '
148 _('invalid bundle specification; '
124 'must be prefixed with compression: %s') % spec)
149 'must be prefixed with compression: %s') % spec)
125
150
126 if '-' in spec:
151 if '-' in spec:
127 compression, version = spec.split('-', 1)
152 compression, version = spec.split('-', 1)
128
153
129 if compression not in util.compengines.supportedbundlenames:
154 if compression not in util.compengines.supportedbundlenames:
130 raise error.UnsupportedBundleSpecification(
155 raise error.UnsupportedBundleSpecification(
131 _('%s compression is not supported') % compression)
156 _('%s compression is not supported') % compression)
132
157
133 version, params = parseparams(version)
158 version, params = parseparams(version)
134
159
135 if version not in _bundlespeccgversions:
160 if version not in _bundlespeccgversions:
136 raise error.UnsupportedBundleSpecification(
161 raise error.UnsupportedBundleSpecification(
137 _('%s is not a recognized bundle version') % version)
162 _('%s is not a recognized bundle version') % version)
138 else:
163 else:
139 # Value could be just the compression or just the version, in which
164 # Value could be just the compression or just the version, in which
140 # case some defaults are assumed (but only when not in strict mode).
165 # case some defaults are assumed (but only when not in strict mode).
141 assert not strict
166 assert not strict
142
167
143 spec, params = parseparams(spec)
168 spec, params = parseparams(spec)
144
169
145 if spec in util.compengines.supportedbundlenames:
170 if spec in util.compengines.supportedbundlenames:
146 compression = spec
171 compression = spec
147 version = 'v1'
172 version = 'v1'
148 # Generaldelta repos require v2.
173 # Generaldelta repos require v2.
149 if 'generaldelta' in repo.requirements:
174 if 'generaldelta' in repo.requirements:
150 version = 'v2'
175 version = 'v2'
151 # Modern compression engines require v2.
176 # Modern compression engines require v2.
152 if compression not in _bundlespecv1compengines:
177 if compression not in _bundlespecv1compengines:
153 version = 'v2'
178 version = 'v2'
154 elif spec in _bundlespeccgversions:
179 elif spec in _bundlespeccgversions:
155 if spec == 'packed1':
180 if spec == 'packed1':
156 compression = 'none'
181 compression = 'none'
157 else:
182 else:
158 compression = 'bzip2'
183 compression = 'bzip2'
159 version = spec
184 version = spec
160 else:
185 else:
161 raise error.UnsupportedBundleSpecification(
186 raise error.UnsupportedBundleSpecification(
162 _('%s is not a recognized bundle specification') % spec)
187 _('%s is not a recognized bundle specification') % spec)
163
188
164 # Bundle version 1 only supports a known set of compression engines.
189 # Bundle version 1 only supports a known set of compression engines.
165 if version == 'v1' and compression not in _bundlespecv1compengines:
190 if version == 'v1' and compression not in _bundlespecv1compengines:
166 raise error.UnsupportedBundleSpecification(
191 raise error.UnsupportedBundleSpecification(
167 _('compression engine %s is not supported on v1 bundles') %
192 _('compression engine %s is not supported on v1 bundles') %
168 compression)
193 compression)
169
194
170 # The specification for packed1 can optionally declare the data formats
195 # The specification for packed1 can optionally declare the data formats
171 # required to apply it. If we see this metadata, compare against what the
196 # required to apply it. If we see this metadata, compare against what the
172 # repo supports and error if the bundle isn't compatible.
197 # repo supports and error if the bundle isn't compatible.
173 if version == 'packed1' and 'requirements' in params:
198 if version == 'packed1' and 'requirements' in params:
174 requirements = set(params['requirements'].split(','))
199 requirements = set(params['requirements'].split(','))
175 missingreqs = requirements - repo.supportedformats
200 missingreqs = requirements - repo.supportedformats
176 if missingreqs:
201 if missingreqs:
177 raise error.UnsupportedBundleSpecification(
202 raise error.UnsupportedBundleSpecification(
178 _('missing support for repository features: %s') %
203 _('missing support for repository features: %s') %
179 ', '.join(sorted(missingreqs)))
204 ', '.join(sorted(missingreqs)))
180
205
206 # Compute contentopts based on the version
207 contentopts = _bundlespeccontentopts.get(version, {}).copy()
208
181 if not externalnames:
209 if not externalnames:
182 engine = util.compengines.forbundlename(compression)
210 engine = util.compengines.forbundlename(compression)
183 compression = engine.bundletype()[1]
211 compression = engine.bundletype()[1]
184 version = _bundlespeccgversions[version]
212 version = _bundlespeccgversions[version]
185
213
186 return bundlespec(compression, version, params)
214 return bundlespec(compression, version, params, contentopts)
187
215
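The `<compression>-<type>[;<parameter0>[;<parameter1>]]` grammar described in the docstring can be sketched on its own. This is an illustrative reimplementation of the strict-mode split (using the stdlib `unquote` in place of Mercurial's `urlreq`), not the function above:

```python
from urllib.parse import unquote

def parse_spec(spec):
    # Split '<compression>-<version>;key=value;...' into its three parts,
    # URI-decoding each key and value as parsebundlespec does.
    compression, rest = spec.split('-', 1)
    if ';' in rest:
        version, paramstr = rest.split(';', 1)
        params = dict((unquote(k), unquote(v))
                      for k, v in (p.split('=', 1)
                                   for p in paramstr.split(';')))
    else:
        version, params = rest, {}
    return compression, version, params
```

For example, `'none-packed1;requirements=generaldelta%2Crevlogv1'` yields version `'packed1'` with the comma restored inside the decoded parameter value.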
188 def readbundle(ui, fh, fname, vfs=None):
216 def readbundle(ui, fh, fname, vfs=None):
189 header = changegroup.readexactly(fh, 4)
217 header = changegroup.readexactly(fh, 4)
190
218
191 alg = None
219 alg = None
192 if not fname:
220 if not fname:
193 fname = "stream"
221 fname = "stream"
194 if not header.startswith('HG') and header.startswith('\0'):
222 if not header.startswith('HG') and header.startswith('\0'):
195 fh = changegroup.headerlessfixup(fh, header)
223 fh = changegroup.headerlessfixup(fh, header)
196 header = "HG10"
224 header = "HG10"
197 alg = 'UN'
225 alg = 'UN'
198 elif vfs:
226 elif vfs:
199 fname = vfs.join(fname)
227 fname = vfs.join(fname)
200
228
201 magic, version = header[0:2], header[2:4]
229 magic, version = header[0:2], header[2:4]
202
230
203 if magic != 'HG':
231 if magic != 'HG':
204 raise error.Abort(_('%s: not a Mercurial bundle') % fname)
232 raise error.Abort(_('%s: not a Mercurial bundle') % fname)
205 if version == '10':
233 if version == '10':
206 if alg is None:
234 if alg is None:
207 alg = changegroup.readexactly(fh, 2)
235 alg = changegroup.readexactly(fh, 2)
208 return changegroup.cg1unpacker(fh, alg)
236 return changegroup.cg1unpacker(fh, alg)
209 elif version.startswith('2'):
237 elif version.startswith('2'):
210 return bundle2.getunbundler(ui, fh, magicstring=magic + version)
238 return bundle2.getunbundler(ui, fh, magicstring=magic + version)
211 elif version == 'S1':
239 elif version == 'S1':
212 return streamclone.streamcloneapplier(fh)
240 return streamclone.streamcloneapplier(fh)
213 else:
241 else:
214 raise error.Abort(_('%s: unknown bundle version %s') % (fname, version))
242 raise error.Abort(_('%s: unknown bundle version %s') % (fname, version))
215
243
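`readbundle` dispatches on the first four header bytes: `HG10` (bundle1), `HG2x` (bundle2), or `HGS1` (stream clone). A hedged sketch of just that classification step, with a hypothetical helper name:

```python
def classify_bundle_header(header):
    # Classify a 4-byte magic header the way readbundle does.
    magic, version = header[0:2], header[2:4]
    if magic != b'HG':
        raise ValueError('not a Mercurial bundle')
    if version == b'10':
        return 'bundle1'
    if version.startswith(b'2'):
        return 'bundle2'
    if version == b'S1':
        return 'streamclone'
    raise ValueError('unknown bundle version %r' % version)
```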
216 def _formatrequirementsspec(requirements):
244 def _formatrequirementsspec(requirements):
217 return urlreq.quote(','.join(sorted(requirements)))
245 return urlreq.quote(','.join(sorted(requirements)))
218
246
219 def _formatrequirementsparams(requirements):
247 def _formatrequirementsparams(requirements):
220 requirements = _formatrequirementsspec(requirements)
248 requirements = _formatrequirementsspec(requirements)
221 params = "%s%s" % (urlreq.quote("requirements="), requirements)
249 params = "%s%s" % (urlreq.quote("requirements="), requirements)
222 return params
250 return params
223
251
224 def getbundlespec(ui, fh):
252 def getbundlespec(ui, fh):
225 """Infer the bundlespec from a bundle file handle.
253 """Infer the bundlespec from a bundle file handle.
226
254
227 The input file handle is seeked and the original seek position is not
255 The input file handle is seeked and the original seek position is not
228 restored.
256 restored.
229 """
257 """
230 def speccompression(alg):
258 def speccompression(alg):
231 try:
259 try:
232 return util.compengines.forbundletype(alg).bundletype()[0]
260 return util.compengines.forbundletype(alg).bundletype()[0]
233 except KeyError:
261 except KeyError:
234 return None
262 return None
235
263
236 b = readbundle(ui, fh, None)
264 b = readbundle(ui, fh, None)
237 if isinstance(b, changegroup.cg1unpacker):
265 if isinstance(b, changegroup.cg1unpacker):
238 alg = b._type
266 alg = b._type
239 if alg == '_truncatedBZ':
267 if alg == '_truncatedBZ':
240 alg = 'BZ'
268 alg = 'BZ'
241 comp = speccompression(alg)
269 comp = speccompression(alg)
242 if not comp:
270 if not comp:
243 raise error.Abort(_('unknown compression algorithm: %s') % alg)
271 raise error.Abort(_('unknown compression algorithm: %s') % alg)
244 return '%s-v1' % comp
272 return '%s-v1' % comp
245 elif isinstance(b, bundle2.unbundle20):
273 elif isinstance(b, bundle2.unbundle20):
246 if 'Compression' in b.params:
274 if 'Compression' in b.params:
247 comp = speccompression(b.params['Compression'])
275 comp = speccompression(b.params['Compression'])
248 if not comp:
276 if not comp:
249 raise error.Abort(_('unknown compression algorithm: %s') % comp)
277 raise error.Abort(_('unknown compression algorithm: %s') % comp)
250 else:
278 else:
251 comp = 'none'
279 comp = 'none'
252
280
253 version = None
281 version = None
254 for part in b.iterparts():
282 for part in b.iterparts():
255 if part.type == 'changegroup':
283 if part.type == 'changegroup':
256 version = part.params['version']
284 version = part.params['version']
257 if version in ('01', '02'):
285 if version in ('01', '02'):
258 version = 'v2'
286 version = 'v2'
259 else:
287 else:
260 raise error.Abort(_('changegroup version %s does not have '
288 raise error.Abort(_('changegroup version %s does not have '
261 'a known bundlespec') % version,
289 'a known bundlespec') % version,
262 hint=_('try upgrading your Mercurial '
290 hint=_('try upgrading your Mercurial '
263 'client'))
291 'client'))
264
292
265 if not version:
293 if not version:
266 raise error.Abort(_('could not identify changegroup version in '
294 raise error.Abort(_('could not identify changegroup version in '
267 'bundle'))
295 'bundle'))
268
296
269 return '%s-%s' % (comp, version)
297 return '%s-%s' % (comp, version)
270 elif isinstance(b, streamclone.streamcloneapplier):
298 elif isinstance(b, streamclone.streamcloneapplier):
271 requirements = streamclone.readbundle1header(fh)[2]
299 requirements = streamclone.readbundle1header(fh)[2]
272 return 'none-packed1;%s' % _formatrequirementsparams(requirements)
300 return 'none-packed1;%s' % _formatrequirementsparams(requirements)
273 else:
301 else:
274 raise error.Abort(_('unknown bundle type: %s') % b)
302 raise error.Abort(_('unknown bundle type: %s') % b)
275
303
276 def _computeoutgoing(repo, heads, common):
304 def _computeoutgoing(repo, heads, common):
277 """Computes which revs are outgoing given a set of common
305 """Computes which revs are outgoing given a set of common
278 and a set of heads.
306 and a set of heads.
279
307
280 This is a separate function so extensions can have access to
308 This is a separate function so extensions can have access to
281 the logic.
309 the logic.
282
310
283 Returns a discovery.outgoing object.
311 Returns a discovery.outgoing object.
284 """
312 """
285 cl = repo.changelog
313 cl = repo.changelog
286 if common:
314 if common:
287 hasnode = cl.hasnode
315 hasnode = cl.hasnode
288 common = [n for n in common if hasnode(n)]
316 common = [n for n in common if hasnode(n)]
289 else:
317 else:
290 common = [nullid]
318 common = [nullid]
291 if not heads:
319 if not heads:
292 heads = cl.heads()
320 heads = cl.heads()
293 return discovery.outgoing(repo, common, heads)
321 return discovery.outgoing(repo, common, heads)
294
322
295 def _forcebundle1(op):
323 def _forcebundle1(op):
296 """return true if a pull/push must use bundle1
324 """return true if a pull/push must use bundle1
297
325
298 This function is used to allow testing of the older bundle version"""
326 This function is used to allow testing of the older bundle version"""
299 ui = op.repo.ui
327 ui = op.repo.ui
300 # The goal of this config is to allow developers to choose the bundle
328 # The goal of this config is to allow developers to choose the bundle
301 # version used during exchange. This is especially handy during tests.
329 # version used during exchange. This is especially handy during tests.
302 # Value is a list of bundle versions to be picked from, highest version
330 # Value is a list of bundle versions to be picked from, highest version
303 # should be used.
331 # should be used.
304 #
332 #
305 # developer config: devel.legacy.exchange
333 # developer config: devel.legacy.exchange
306 exchange = ui.configlist('devel', 'legacy.exchange')
334 exchange = ui.configlist('devel', 'legacy.exchange')
307 forcebundle1 = 'bundle2' not in exchange and 'bundle1' in exchange
335 forcebundle1 = 'bundle2' not in exchange and 'bundle1' in exchange
308 return forcebundle1 or not op.remote.capable('bundle2')
336 return forcebundle1 or not op.remote.capable('bundle2')
309
337
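`_forcebundle1` falls back to bundle1 either when the `devel.legacy.exchange` config asks for it or when the remote lacks bundle2 support. The predicate, isolated for illustration (`force_bundle1` and its parameters are hypothetical names, not the real signature):

```python
def force_bundle1(exchange_cfg, remote_has_bundle2):
    # exchange_cfg stands in for the devel.legacy.exchange config list,
    # e.g. ['bundle1'] or ['bundle2', 'bundle1']; an explicit 'bundle2'
    # entry always wins over 'bundle1'.
    forced = 'bundle2' not in exchange_cfg and 'bundle1' in exchange_cfg
    return forced or not remote_has_bundle2
```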
310 class pushoperation(object):
338 class pushoperation(object):
311 """A object that represent a single push operation
339 """A object that represent a single push operation
312
340
313 Its purpose is to carry push related state and very common operations.
341 Its purpose is to carry push related state and very common operations.
314
342
315 A new pushoperation should be created at the beginning of each push and
343 A new pushoperation should be created at the beginning of each push and
316 discarded afterward.
344 discarded afterward.
317 """
345 """
318
346
319 def __init__(self, repo, remote, force=False, revs=None, newbranch=False,
347 def __init__(self, repo, remote, force=False, revs=None, newbranch=False,
320 bookmarks=(), pushvars=None):
348 bookmarks=(), pushvars=None):
321 # repo we push from
349 # repo we push from
322 self.repo = repo
350 self.repo = repo
323 self.ui = repo.ui
351 self.ui = repo.ui
324 # repo we push to
352 # repo we push to
325 self.remote = remote
353 self.remote = remote
326 # force option provided
354 # force option provided
327 self.force = force
355 self.force = force
328 # revs to be pushed (None is "all")
356 # revs to be pushed (None is "all")
329 self.revs = revs
357 self.revs = revs
330 # bookmark explicitly pushed
358 # bookmark explicitly pushed
331 self.bookmarks = bookmarks
359 self.bookmarks = bookmarks
332 # allow push of new branch
360 # allow push of new branch
333 self.newbranch = newbranch
361 self.newbranch = newbranch
334 # step already performed
362 # step already performed
335 # (used to check what steps have been already performed through bundle2)
363 # (used to check what steps have been already performed through bundle2)
336 self.stepsdone = set()
364 self.stepsdone = set()
337 # Integer version of the changegroup push result
365 # Integer version of the changegroup push result
338 # - None means nothing to push
366 # - None means nothing to push
339 # - 0 means HTTP error
367 # - 0 means HTTP error
340 # - 1 means we pushed and remote head count is unchanged *or*
368 # - 1 means we pushed and remote head count is unchanged *or*
341 # we have outgoing changesets but refused to push
369 # we have outgoing changesets but refused to push
342 # - other values as described by addchangegroup()
370 # - other values as described by addchangegroup()
343 self.cgresult = None
371 self.cgresult = None
344 # Boolean value for the bookmark push
372 # Boolean value for the bookmark push
345 self.bkresult = None
373 self.bkresult = None
346 # discovery.outgoing object (contains common and outgoing data)
374 # discovery.outgoing object (contains common and outgoing data)
347 self.outgoing = None
375 self.outgoing = None
348 # all remote topological heads before the push
376 # all remote topological heads before the push
349 self.remoteheads = None
377 self.remoteheads = None
350 # Details of the remote branch pre and post push
378 # Details of the remote branch pre and post push
351 #
379 #
352 # mapping: {'branch': ([remoteheads],
380 # mapping: {'branch': ([remoteheads],
353 # [newheads],
381 # [newheads],
354 # [unsyncedheads],
382 # [unsyncedheads],
355 # [discardedheads])}
383 # [discardedheads])}
356 # - branch: the branch name
384 # - branch: the branch name
357 # - remoteheads: the list of remote heads known locally
385 # - remoteheads: the list of remote heads known locally
358 # None if the branch is new
386 # None if the branch is new
359 # - newheads: the new remote heads (known locally) with outgoing pushed
387 # - newheads: the new remote heads (known locally) with outgoing pushed
360 # - unsyncedheads: the list of remote heads unknown locally.
388 # - unsyncedheads: the list of remote heads unknown locally.
361 # - discardedheads: the list of remote heads made obsolete by the push
389 # - discardedheads: the list of remote heads made obsolete by the push
362 self.pushbranchmap = None
390 self.pushbranchmap = None
363 # testable as a boolean indicating if any nodes are missing locally.
391 # testable as a boolean indicating if any nodes are missing locally.
364 self.incoming = None
392 self.incoming = None
365 # summary of the remote phase situation
393 # summary of the remote phase situation
366 self.remotephases = None
394 self.remotephases = None
367 # phase changes that must be pushed alongside the changesets
395 # phase changes that must be pushed alongside the changesets
368 self.outdatedphases = None
396 self.outdatedphases = None
369 # phase changes that must be pushed if the changeset push fails
397 # phase changes that must be pushed if the changeset push fails
370 self.fallbackoutdatedphases = None
398 self.fallbackoutdatedphases = None
371 # outgoing obsmarkers
399 # outgoing obsmarkers
372 self.outobsmarkers = set()
400 self.outobsmarkers = set()
373 # outgoing bookmarks
401 # outgoing bookmarks
374 self.outbookmarks = []
402 self.outbookmarks = []
375 # transaction manager
403 # transaction manager
376 self.trmanager = None
404 self.trmanager = None
377 # map { pushkey partid -> callback handling failure}
405 # map { pushkey partid -> callback handling failure}
378 # used to handle exception from mandatory pushkey part failure
406 # used to handle exception from mandatory pushkey part failure
379 self.pkfailcb = {}
407 self.pkfailcb = {}
380 # an iterable of pushvars or None
408 # an iterable of pushvars or None
381 self.pushvars = pushvars
409 self.pushvars = pushvars
382
410
383 @util.propertycache
411 @util.propertycache
384 def futureheads(self):
412 def futureheads(self):
385 """future remote heads if the changeset push succeeds"""
413 """future remote heads if the changeset push succeeds"""
386 return self.outgoing.missingheads
414 return self.outgoing.missingheads
387
415
388 @util.propertycache
416 @util.propertycache
389 def fallbackheads(self):
417 def fallbackheads(self):
390 """future remote heads if the changeset push fails"""
418 """future remote heads if the changeset push fails"""
391 if self.revs is None:
419 if self.revs is None:
392 # no target to push, all common heads are relevant
420 # no target to push, all common heads are relevant
393 return self.outgoing.commonheads
421 return self.outgoing.commonheads
394 unfi = self.repo.unfiltered()
422 unfi = self.repo.unfiltered()
395 # I want cheads = heads(::missingheads and ::commonheads)
423 # I want cheads = heads(::missingheads and ::commonheads)
396 # (missingheads is revs with secret changeset filtered out)
424 # (missingheads is revs with secret changeset filtered out)
397 #
425 #
398 # This can be expressed as:
426 # This can be expressed as:
399 # cheads = ( (missingheads and ::commonheads)
427 # cheads = ( (missingheads and ::commonheads)
400 # + (commonheads and ::missingheads))"
428 # + (commonheads and ::missingheads))"
401 # )
429 # )
402 #
430 #
403 # while trying to push we already computed the following:
431 # while trying to push we already computed the following:
404 # common = (::commonheads)
432 # common = (::commonheads)
405 # missing = ((commonheads::missingheads) - commonheads)
433 # missing = ((commonheads::missingheads) - commonheads)
406 #
434 #
407 # We can pick:
435 # We can pick:
408 # * missingheads part of common (::commonheads)
436 # * missingheads part of common (::commonheads)
409 common = self.outgoing.common
437 common = self.outgoing.common
410 nm = self.repo.changelog.nodemap
438 nm = self.repo.changelog.nodemap
411 cheads = [node for node in self.revs if nm[node] in common]
439 cheads = [node for node in self.revs if nm[node] in common]
412 # and
440 # and
413 # * commonheads parents on missing
441 # * commonheads parents on missing
414 revset = unfi.set('%ln and parents(roots(%ln))',
442 revset = unfi.set('%ln and parents(roots(%ln))',
415 self.outgoing.commonheads,
443 self.outgoing.commonheads,
416 self.outgoing.missing)
444 self.outgoing.missing)
417 cheads.extend(c.node() for c in revset)
445 cheads.extend(c.node() for c in revset)
418 return cheads
446 return cheads
419
447
420 @property
448 @property
421 def commonheads(self):
449 def commonheads(self):
422 """set of all common heads after changeset bundle push"""
450 """set of all common heads after changeset bundle push"""
423 if self.cgresult:
451 if self.cgresult:
424 return self.futureheads
452 return self.futureheads
425 else:
453 else:
426 return self.fallbackheads
454 return self.fallbackheads
427
455
428 # mapping of message used when pushing bookmark
456 # mapping of message used when pushing bookmark
429 bookmsgmap = {'update': (_("updating bookmark %s\n"),
457 bookmsgmap = {'update': (_("updating bookmark %s\n"),
430 _('updating bookmark %s failed!\n')),
458 _('updating bookmark %s failed!\n')),
431 'export': (_("exporting bookmark %s\n"),
459 'export': (_("exporting bookmark %s\n"),
432 _('exporting bookmark %s failed!\n')),
460 _('exporting bookmark %s failed!\n')),
433 'delete': (_("deleting remote bookmark %s\n"),
461 'delete': (_("deleting remote bookmark %s\n"),
434 _('deleting remote bookmark %s failed!\n')),
462 _('deleting remote bookmark %s failed!\n')),
435 }
463 }
436
464
437
465
438 def push(repo, remote, force=False, revs=None, newbranch=False, bookmarks=(),
466 def push(repo, remote, force=False, revs=None, newbranch=False, bookmarks=(),
439 opargs=None):
467 opargs=None):
    '''Push outgoing changesets (limited by revs) from a local
    repository to remote. Return an integer:
      - None means nothing to push
      - 0 means HTTP error
      - 1 means we pushed and remote head count is unchanged *or*
        we have outgoing changesets but refused to push
      - other values as described by addchangegroup()
    '''
    if opargs is None:
        opargs = {}
    pushop = pushoperation(repo, remote, force, revs, newbranch, bookmarks,
                           **pycompat.strkwargs(opargs))
    if pushop.remote.local():
        missing = (set(pushop.repo.requirements)
                   - pushop.remote.local().supported)
        if missing:
            msg = _("required features are not"
                    " supported in the destination:"
                    " %s") % (', '.join(sorted(missing)))
            raise error.Abort(msg)

    if not pushop.remote.canpush():
        raise error.Abort(_("destination does not support push"))

    if not pushop.remote.capable('unbundle'):
        raise error.Abort(_('cannot push: destination does not support the '
                            'unbundle wire protocol command'))

    # get lock as we might write phase data
    wlock = lock = None
    try:
        # bundle2 push may receive a reply bundle touching bookmarks or other
        # things requiring the wlock. Take it now to ensure proper ordering.
        maypushback = pushop.ui.configbool('experimental', 'bundle2.pushback')
        if (not _forcebundle1(pushop)) and maypushback:
            wlock = pushop.repo.wlock()
        lock = pushop.repo.lock()
        pushop.trmanager = transactionmanager(pushop.repo,
                                              'push-response',
                                              pushop.remote.url())
    except IOError as err:
        if err.errno != errno.EACCES:
            raise
        # source repo cannot be locked.
        # We do not abort the push, but just disable the local phase
        # synchronisation.
        msg = 'cannot lock source repository: %s\n' % err
        pushop.ui.debug(msg)

    with wlock or util.nullcontextmanager(), \
            lock or util.nullcontextmanager(), \
            pushop.trmanager or util.nullcontextmanager():
        pushop.repo.checkpush(pushop)
        _pushdiscovery(pushop)
        if not _forcebundle1(pushop):
            _pushbundle2(pushop)
        _pushchangeset(pushop)
        _pushsyncphase(pushop)
        _pushobsolete(pushop)
        _pushbookmark(pushop)

    return pushop

# list of steps to perform discovery before push
pushdiscoveryorder = []

# Mapping between step name and function
#
# This exists to help extensions wrap steps if necessary
pushdiscoverymapping = {}

def pushdiscovery(stepname):
    """decorator for a function performing discovery before push

    The function is added to the step -> function mapping and appended to the
    list of steps. Beware that decorated functions will be added in order
    (this may matter).

    You can only use this decorator for a new step; if you want to wrap a step
    from an extension, change the pushdiscovery dictionary directly."""
    def dec(func):
        assert stepname not in pushdiscoverymapping
        pushdiscoverymapping[stepname] = func
        pushdiscoveryorder.append(stepname)
        return func
    return dec

def _pushdiscovery(pushop):
    """Run all discovery steps"""
    for stepname in pushdiscoveryorder:
        step = pushdiscoverymapping[stepname]
        step(pushop)

@pushdiscovery('changeset')
def _pushdiscoverychangeset(pushop):
    """discover the changesets that need to be pushed"""
    fci = discovery.findcommonincoming
    if pushop.revs:
        commoninc = fci(pushop.repo, pushop.remote, force=pushop.force,
                        ancestorsof=pushop.revs)
    else:
        commoninc = fci(pushop.repo, pushop.remote, force=pushop.force)
    common, inc, remoteheads = commoninc
    fco = discovery.findcommonoutgoing
    outgoing = fco(pushop.repo, pushop.remote, onlyheads=pushop.revs,
                   commoninc=commoninc, force=pushop.force)
    pushop.outgoing = outgoing
    pushop.remoteheads = remoteheads
    pushop.incoming = inc

@pushdiscovery('phase')
def _pushdiscoveryphase(pushop):
    """discover the phases that need to be pushed

    (computed for both the success and failure case of the changesets push)"""
    outgoing = pushop.outgoing
    unfi = pushop.repo.unfiltered()
    remotephases = pushop.remote.listkeys('phases')
    if (pushop.ui.configbool('ui', '_usedassubrepo')
        and remotephases    # server supports phases
        and not pushop.outgoing.missing # no changesets to be pushed
        and remotephases.get('publishing', False)):
        # When:
        # - this is a subrepo push
        # - and the remote supports phases
        # - and no changesets are to be pushed
        # - and the remote is publishing
        # We may be in issue 3781 case!
        # We drop the possible phase synchronisation done by
        # courtesy to publish changesets possibly locally draft
        # on the remote.
        pushop.outdatedphases = []
        pushop.fallbackoutdatedphases = []
        return

    pushop.remotephases = phases.remotephasessummary(pushop.repo,
                                                     pushop.fallbackheads,
                                                     remotephases)
    droots = pushop.remotephases.draftroots

    extracond = ''
    if not pushop.remotephases.publishing:
        extracond = ' and public()'
    revset = 'heads((%%ln::%%ln) %s)' % extracond
    # Get the list of all revs draft on remote but public here.
    # XXX Beware that the revset breaks if droots is not strictly
    # XXX roots; we may want to ensure it is, but that is costly.
    fallback = list(unfi.set(revset, droots, pushop.fallbackheads))
    if not outgoing.missing:
        future = fallback
    else:
        # add changesets we are going to push as draft
        #
        # should not be necessary for publishing server, but because of an
        # issue fixed in xxxxx we have to do it anyway.
        fdroots = list(unfi.set('roots(%ln + %ln::)',
                                outgoing.missing, droots))
        fdroots = [f.node() for f in fdroots]
        future = list(unfi.set(revset, fdroots, pushop.futureheads))
    pushop.outdatedphases = future
    pushop.fallbackoutdatedphases = fallback

@pushdiscovery('obsmarker')
def _pushdiscoveryobsmarkers(pushop):
    if (obsolete.isenabled(pushop.repo, obsolete.exchangeopt)
        and pushop.repo.obsstore
        and 'obsolete' in pushop.remote.listkeys('namespaces')):
        repo = pushop.repo
        # very naive computation, which can be quite expensive on big repos.
        # However: evolution is currently slow on them anyway.
        nodes = (c.node() for c in repo.set('::%ln', pushop.futureheads))
        pushop.outobsmarkers = pushop.repo.obsstore.relevantmarkers(nodes)

@pushdiscovery('bookmarks')
def _pushdiscoverybookmarks(pushop):
    ui = pushop.ui
    repo = pushop.repo.unfiltered()
    remote = pushop.remote
    ui.debug("checking for updated bookmarks\n")
    ancestors = ()
    if pushop.revs:
        revnums = map(repo.changelog.rev, pushop.revs)
        ancestors = repo.changelog.ancestors(revnums, inclusive=True)
    remotebookmark = remote.listkeys('bookmarks')

    explicit = set([repo._bookmarks.expandname(bookmark)
                    for bookmark in pushop.bookmarks])

    remotebookmark = bookmod.unhexlifybookmarks(remotebookmark)
    comp = bookmod.comparebookmarks(repo, repo._bookmarks, remotebookmark)

    def safehex(x):
        if x is None:
            return x
        return hex(x)

    def hexifycompbookmarks(bookmarks):
        return [(b, safehex(scid), safehex(dcid))
                for (b, scid, dcid) in bookmarks]

    comp = [hexifycompbookmarks(marks) for marks in comp]
    return _processcompared(pushop, ancestors, explicit, remotebookmark, comp)

def _processcompared(pushop, pushed, explicit, remotebms, comp):
    """decide which bookmarks to push to the remote from the comparison

    Exists to help extensions that want to alter this behavior.
    """
    addsrc, adddst, advsrc, advdst, diverge, differ, invalid, same = comp

    repo = pushop.repo

    for b, scid, dcid in advsrc:
        if b in explicit:
            explicit.remove(b)
        if not pushed or repo[scid].rev() in pushed:
            pushop.outbookmarks.append((b, dcid, scid))
    # search for added bookmarks
    for b, scid, dcid in addsrc:
        if b in explicit:
            explicit.remove(b)
            pushop.outbookmarks.append((b, '', scid))
    # search for overwritten bookmarks
    for b, scid, dcid in list(advdst) + list(diverge) + list(differ):
        if b in explicit:
            explicit.remove(b)
            pushop.outbookmarks.append((b, dcid, scid))
    # search for bookmarks to delete
    for b, scid, dcid in adddst:
        if b in explicit:
            explicit.remove(b)
            # treat as "deleted locally"
            pushop.outbookmarks.append((b, dcid, ''))
    # identical bookmarks shouldn't get reported
    for b, scid, dcid in same:
        if b in explicit:
            explicit.remove(b)

    if explicit:
        explicit = sorted(explicit)
        # we should probably list all of them
        pushop.ui.warn(_('bookmark %s does not exist on the local '
                         'or remote repository!\n') % explicit[0])
        pushop.bkresult = 2

    pushop.outbookmarks.sort()

def _pushcheckoutgoing(pushop):
    outgoing = pushop.outgoing
    unfi = pushop.repo.unfiltered()
    if not outgoing.missing:
        # nothing to push
        scmutil.nochangesfound(unfi.ui, unfi, outgoing.excluded)
        return False
    # something to push
    if not pushop.force:
        # if repo.obsstore == False --> no obsolete
        # then, save the iteration
        if unfi.obsstore:
            # these messages are here for the 80 char limit reason
            mso = _("push includes obsolete changeset: %s!")
            mspd = _("push includes phase-divergent changeset: %s!")
            mscd = _("push includes content-divergent changeset: %s!")
            mst = {"orphan": _("push includes orphan changeset: %s!"),
                   "phase-divergent": mspd,
                   "content-divergent": mscd}
            # If there is at least one obsolete or unstable
            # changeset in missing, at least one of the
            # missingheads will be obsolete or unstable.
            # So checking heads only is ok.
            for node in outgoing.missingheads:
                ctx = unfi[node]
                if ctx.obsolete():
                    raise error.Abort(mso % ctx)
                elif ctx.isunstable():
                    # TODO print more than one instability in the abort
                    # message
                    raise error.Abort(mst[ctx.instabilities()[0]] % ctx)

    discovery.checkheads(pushop)
    return True

# List of names of steps to perform for an outgoing bundle2, order matters.
b2partsgenorder = []

# Mapping between step name and function
#
# This exists to help extensions wrap steps if necessary
b2partsgenmapping = {}

def b2partsgenerator(stepname, idx=None):
    """decorator for a function generating a bundle2 part

    The function is added to the step -> function mapping and appended to the
    list of steps. Beware that decorated functions will be added in order
    (this may matter).

    You can only use this decorator for new steps; if you want to wrap a step
    from an extension, change the b2partsgenmapping dictionary directly."""
    def dec(func):
        assert stepname not in b2partsgenmapping
        b2partsgenmapping[stepname] = func
        if idx is None:
            b2partsgenorder.append(stepname)
        else:
            b2partsgenorder.insert(idx, stepname)
        return func
    return dec

def _pushb2ctxcheckheads(pushop, bundler):
    """Generate race condition checking parts

    Exists as an independent function to aid extensions
    """
    # * 'force' does not check for push races,
    # * if we don't push anything, there is nothing to check.
    if not pushop.force and pushop.outgoing.missingheads:
        allowunrelated = 'related' in bundler.capabilities.get('checkheads', ())
        emptyremote = pushop.pushbranchmap is None
        if not allowunrelated or emptyremote:
            bundler.newpart('check:heads', data=iter(pushop.remoteheads))
        else:
            affected = set()
            for branch, heads in pushop.pushbranchmap.iteritems():
                remoteheads, newheads, unsyncedheads, discardedheads = heads
                if remoteheads is not None:
                    remote = set(remoteheads)
                    affected |= set(discardedheads) & remote
                    affected |= remote - set(newheads)
            if affected:
                data = iter(sorted(affected))
                bundler.newpart('check:updated-heads', data=data)

def _pushing(pushop):
    """return True if we are pushing anything"""
    return bool(pushop.outgoing.missing
                or pushop.outdatedphases
                or pushop.outobsmarkers
                or pushop.outbookmarks)

@b2partsgenerator('check-bookmarks')
def _pushb2checkbookmarks(pushop, bundler):
    """insert bookmark move checking"""
    if not _pushing(pushop) or pushop.force:
        return
    b2caps = bundle2.bundle2caps(pushop.remote)
    hasbookmarkcheck = 'bookmarks' in b2caps
    if not (pushop.outbookmarks and hasbookmarkcheck):
        return
    data = []
    for book, old, new in pushop.outbookmarks:
        old = bin(old)
        data.append((book, old))
    checkdata = bookmod.binaryencode(data)
    bundler.newpart('check:bookmarks', data=checkdata)

@b2partsgenerator('check-phases')
def _pushb2checkphases(pushop, bundler):
    """insert phase move checking"""
    if not _pushing(pushop) or pushop.force:
        return
    b2caps = bundle2.bundle2caps(pushop.remote)
    hasphaseheads = 'heads' in b2caps.get('phases', ())
    if pushop.remotephases is not None and hasphaseheads:
        # check that the remote phase has not changed
        checks = [[] for p in phases.allphases]
        checks[phases.public].extend(pushop.remotephases.publicheads)
        checks[phases.draft].extend(pushop.remotephases.draftroots)
        if any(checks):
            for nodes in checks:
                nodes.sort()
            checkdata = phases.binaryencode(checks)
            bundler.newpart('check:phases', data=checkdata)

@b2partsgenerator('changeset')
def _pushb2ctx(pushop, bundler):
    """handle changegroup push through bundle2

    addchangegroup result is stored in the ``pushop.cgresult`` attribute.
    """
    if 'changesets' in pushop.stepsdone:
        return
    pushop.stepsdone.add('changesets')
    # Send known heads to the server for race detection.
    if not _pushcheckoutgoing(pushop):
        return
    pushop.repo.prepushoutgoinghooks(pushop)

    _pushb2ctxcheckheads(pushop, bundler)

    b2caps = bundle2.bundle2caps(pushop.remote)
    version = '01'
    cgversions = b2caps.get('changegroup')
    if cgversions:  # 3.1 and 3.2 ship with an empty value
        cgversions = [v for v in cgversions
                      if v in changegroup.supportedoutgoingversions(
                          pushop.repo)]
        if not cgversions:
            raise ValueError(_('no common changegroup version'))
        version = max(cgversions)
    cgstream = changegroup.makestream(pushop.repo, pushop.outgoing, version,
                                      'push')
    cgpart = bundler.newpart('changegroup', data=cgstream)
    if cgversions:
        cgpart.addparam('version', version)
    if 'treemanifest' in pushop.repo.requirements:
        cgpart.addparam('treemanifest', '1')
    def handlereply(op):
        """extract addchangegroup returns from server reply"""
        cgreplies = op.records.getreplies(cgpart.id)
        assert len(cgreplies['changegroup']) == 1
        pushop.cgresult = cgreplies['changegroup'][0]['return']
    return handlereply

@b2partsgenerator('phase')
def _pushb2phases(pushop, bundler):
    """handle phase push through bundle2"""
    if 'phases' in pushop.stepsdone:
        return
    b2caps = bundle2.bundle2caps(pushop.remote)
    ui = pushop.repo.ui

    legacyphase = 'phases' in ui.configlist('devel', 'legacy.exchange')
    haspushkey = 'pushkey' in b2caps
    hasphaseheads = 'heads' in b2caps.get('phases', ())

    if hasphaseheads and not legacyphase:
        return _pushb2phaseheads(pushop, bundler)
    elif haspushkey:
        return _pushb2phasespushkey(pushop, bundler)

def _pushb2phaseheads(pushop, bundler):
    """push phase information through a bundle2 - binary part"""
    pushop.stepsdone.add('phases')
    if pushop.outdatedphases:
        updates = [[] for p in phases.allphases]
        updates[0].extend(h.node() for h in pushop.outdatedphases)
        phasedata = phases.binaryencode(updates)
        bundler.newpart('phase-heads', data=phasedata)

def _pushb2phasespushkey(pushop, bundler):
    """push phase information through a bundle2 - pushkey part"""
    pushop.stepsdone.add('phases')
    part2node = []

    def handlefailure(pushop, exc):
        targetid = int(exc.partid)
        for partid, node in part2node:
            if partid == targetid:
                raise error.Abort(_('updating %s to public failed') % node)

    enc = pushkey.encode
    for newremotehead in pushop.outdatedphases:
        part = bundler.newpart('pushkey')
        part.addparam('namespace', enc('phases'))
        part.addparam('key', enc(newremotehead.hex()))
        part.addparam('old', enc('%d' % phases.draft))
        part.addparam('new', enc('%d' % phases.public))
        part2node.append((part.id, newremotehead))
        pushop.pkfailcb[part.id] = handlefailure

    def handlereply(op):
        for partid, node in part2node:
            partrep = op.records.getreplies(partid)
            results = partrep['pushkey']
            assert len(results) <= 1
            msg = None
            if not results:
                msg = _('server ignored update of %s to public!\n') % node
            elif not int(results[0]['return']):
                msg = _('updating %s to public failed!\n') % node
            if msg is not None:
                pushop.ui.warn(msg)
    return handlereply

@b2partsgenerator('obsmarkers')
def _pushb2obsmarkers(pushop, bundler):
    if 'obsmarkers' in pushop.stepsdone:
        return
    remoteversions = bundle2.obsmarkersversion(bundler.capabilities)
    if obsolete.commonversion(remoteversions) is None:
        return
    pushop.stepsdone.add('obsmarkers')
    if pushop.outobsmarkers:
        markers = sorted(pushop.outobsmarkers)
        bundle2.buildobsmarkerspart(bundler, markers)

@b2partsgenerator('bookmarks')
def _pushb2bookmarks(pushop, bundler):
    """handle bookmark push through bundle2"""
    if 'bookmarks' in pushop.stepsdone:
        return
    b2caps = bundle2.bundle2caps(pushop.remote)

    legacy = pushop.repo.ui.configlist('devel', 'legacy.exchange')
    legacybooks = 'bookmarks' in legacy

    if not legacybooks and 'bookmarks' in b2caps:
        return _pushb2bookmarkspart(pushop, bundler)
    elif 'pushkey' in b2caps:
        return _pushb2bookmarkspushkey(pushop, bundler)

def _bmaction(old, new):
    """small utility for bookmark pushing"""
    if not old:
        return 'export'
    elif not new:
        return 'delete'
    return 'update'

def _pushb2bookmarkspart(pushop, bundler):
    pushop.stepsdone.add('bookmarks')
    if not pushop.outbookmarks:
        return

    allactions = []
    data = []
    for book, old, new in pushop.outbookmarks:
        new = bin(new)
        data.append((book, new))
        allactions.append((book, _bmaction(old, new)))
    checkdata = bookmod.binaryencode(data)
    bundler.newpart('bookmarks', data=checkdata)

    def handlereply(op):
        ui = pushop.ui
        # if success
        for book, action in allactions:
            ui.status(bookmsgmap[action][0] % book)

    return handlereply

def _pushb2bookmarkspushkey(pushop, bundler):
    pushop.stepsdone.add('bookmarks')
    part2book = []
    enc = pushkey.encode

    def handlefailure(pushop, exc):
        targetid = int(exc.partid)
        for partid, book, action in part2book:
            if partid == targetid:
                raise error.Abort(bookmsgmap[action][1].rstrip() % book)
        # we should not be called for parts we did not generate
        assert False

    for book, old, new in pushop.outbookmarks:
        part = bundler.newpart('pushkey')
        part.addparam('namespace', enc('bookmarks'))
        part.addparam('key', enc(book))
        part.addparam('old', enc(old))
        part.addparam('new', enc(new))
        action = 'update'
        if not old:
            action = 'export'
        elif not new:
            action = 'delete'
        part2book.append((part.id, book, action))
        pushop.pkfailcb[part.id] = handlefailure

    def handlereply(op):
        ui = pushop.ui
        for partid, book, action in part2book:
            partrep = op.records.getreplies(partid)
            results = partrep['pushkey']
            assert len(results) <= 1
            if not results:
                pushop.ui.warn(_('server ignored bookmark %s update\n') % book)
            else:
                ret = int(results[0]['return'])
                if ret:
                    ui.status(bookmsgmap[action][0] % book)
                else:
                    ui.warn(bookmsgmap[action][1] % book)
        if pushop.bkresult is not None:
            pushop.bkresult = 1
    return handlereply

@b2partsgenerator('pushvars', idx=0)
def _getbundlesendvars(pushop, bundler):
    '''send shellvars via bundle2'''
    pushvars = pushop.pushvars
    if pushvars:
        shellvars = {}
        for raw in pushvars:
            if '=' not in raw:
                msg = ("unable to parse variable '%s', should follow "
                       "'KEY=VALUE' or 'KEY=' format")
                raise error.Abort(msg % raw)
            k, v = raw.split('=', 1)
            shellvars[k] = v

        part = bundler.newpart('pushvars')

        for key, value in shellvars.iteritems():
            part.addparam(key, value, mandatory=False)

def _pushbundle2(pushop):
    """push data to the remote using bundle2

    The only currently supported type of data is changegroup but this will
    evolve in the future."""
    bundler = bundle2.bundle20(pushop.ui, bundle2.bundle2caps(pushop.remote))
    pushback = (pushop.trmanager
                and pushop.ui.configbool('experimental', 'bundle2.pushback'))

    # create reply capability
    capsblob = bundle2.encodecaps(bundle2.getrepocaps(pushop.repo,
                                                      allowpushback=pushback,
                                                      role='client'))
    bundler.newpart('replycaps', data=capsblob)
    replyhandlers = []
    for partgenname in b2partsgenorder:
        partgen = b2partsgenmapping[partgenname]
        ret = partgen(pushop, bundler)
        if callable(ret):
            replyhandlers.append(ret)
    # do not push if nothing to push
    if bundler.nbparts <= 1:
        return
    stream = util.chunkbuffer(bundler.getchunks())
    try:
        try:
            reply = pushop.remote.unbundle(
                stream, ['force'], pushop.remote.url())
        except error.BundleValueError as exc:
            raise error.Abort(_('missing support for %s') % exc)
        try:
            trgetter = None
            if pushback:
                trgetter = pushop.trmanager.transaction
            op = bundle2.processbundle(pushop.repo, reply, trgetter)
        except error.BundleValueError as exc:
            raise error.Abort(_('missing support for %s') % exc)
        except bundle2.AbortFromPart as exc:
            pushop.ui.status(_('remote: %s\n') % exc)
            if exc.hint is not None:
                pushop.ui.status(_('remote: %s\n') % ('(%s)' % exc.hint))
            raise error.Abort(_('push failed on remote'))
    except error.PushkeyFailed as exc:
        partid = int(exc.partid)
        if partid not in pushop.pkfailcb:
            raise
        pushop.pkfailcb[partid](pushop, exc)
    for rephand in replyhandlers:
        rephand(op)

def _pushchangeset(pushop):
    """Make the actual push of changeset bundle to remote repo"""
    if 'changesets' in pushop.stepsdone:
        return
    pushop.stepsdone.add('changesets')
    if not _pushcheckoutgoing(pushop):
        return

    # Should have verified this in push().
    assert pushop.remote.capable('unbundle')

    pushop.repo.prepushoutgoinghooks(pushop)
    outgoing = pushop.outgoing
    # TODO: get bundlecaps from remote
    bundlecaps = None
    # create a changegroup from local
    if pushop.revs is None and not (outgoing.excluded
                                    or pushop.repo.changelog.filteredrevs):
        # push everything,
        # use the fast path, no race possible on push
        cg = changegroup.makechangegroup(pushop.repo, outgoing, '01', 'push',
                                         fastpath=True, bundlecaps=bundlecaps)
    else:
        cg = changegroup.makechangegroup(pushop.repo, outgoing, '01',
                                         'push', bundlecaps=bundlecaps)

    # apply changegroup to remote
    # local repo finds heads on server, finds out what
    # revs it must push. once revs transferred, if server
    # finds it has different heads (someone else won
    # commit/push race), server aborts.
    if pushop.force:
        remoteheads = ['force']
    else:
        remoteheads = pushop.remoteheads
    # ssh: return remote's addchangegroup()
    # http: return remote's addchangegroup() or 0 for error
    pushop.cgresult = pushop.remote.unbundle(cg, remoteheads,
                                             pushop.repo.url())

def _pushsyncphase(pushop):
    """synchronise phase information locally and remotely"""
    cheads = pushop.commonheads
    # even when we don't push, exchanging phase data is useful
    remotephases = pushop.remote.listkeys('phases')
    if (pushop.ui.configbool('ui', '_usedassubrepo')
        and remotephases    # server supports phases
        and pushop.cgresult is None # nothing was pushed
        and remotephases.get('publishing', False)):
        # When:
        # - this is a subrepo push
        # - and remote supports phases
        # - and no changeset was pushed
        # - and remote is publishing
        # We may be in issue 3871 case!
        # We drop the possible phase synchronisation done by
        # courtesy to publish changesets possibly locally draft
        # on the remote.
        remotephases = {'publishing': 'True'}
    if not remotephases: # old server or public only reply from non-publishing
        _localphasemove(pushop, cheads)
        # don't push any phase data as there is nothing to push
    else:
        ana = phases.analyzeremotephases(pushop.repo, cheads,
                                         remotephases)
        pheads, droots = ana
        ### Apply remote phase on local
        if remotephases.get('publishing', False):
            _localphasemove(pushop, cheads)
        else: # publish = False
            _localphasemove(pushop, pheads)
            _localphasemove(pushop, cheads, phases.draft)
        ### Apply local phase on remote

        if pushop.cgresult:
            if 'phases' in pushop.stepsdone:
                # phases already pushed through bundle2
                return
            outdated = pushop.outdatedphases
        else:
            outdated = pushop.fallbackoutdatedphases

        pushop.stepsdone.add('phases')

        # filter heads already turned public by the push
        outdated = [c for c in outdated if c.node() not in pheads]
        # fallback to independent pushkey command
        for newremotehead in outdated:
            r = pushop.remote.pushkey('phases',
                                      newremotehead.hex(),
                                      ('%d' % phases.draft),
                                      ('%d' % phases.public))
            if not r:
                pushop.ui.warn(_('updating %s to public failed!\n')
                               % newremotehead)

def _localphasemove(pushop, nodes, phase=phases.public):
    """move <nodes> to <phase> in the local source repo"""
    if pushop.trmanager:
        phases.advanceboundary(pushop.repo,
                               pushop.trmanager.transaction(),
                               phase,
                               nodes)
    else:
        # repo is not locked, do not change any phases!
        # Informs the user that phases should have been moved when
        # applicable.
        actualmoves = [n for n in nodes if phase < pushop.repo[n].phase()]
        phasestr = phases.phasenames[phase]
        if actualmoves:
            pushop.ui.status(_('cannot lock source repo, skipping '
                               'local %s phase update\n') % phasestr)

def _pushobsolete(pushop):
    """utility function to push obsolete markers to a remote"""
    if 'obsmarkers' in pushop.stepsdone:
        return
    repo = pushop.repo
    remote = pushop.remote
    pushop.stepsdone.add('obsmarkers')
    if pushop.outobsmarkers:
        pushop.ui.debug('try to push obsolete markers to remote\n')
        rslts = []
        remotedata = obsolete._pushkeyescape(sorted(pushop.outobsmarkers))
        for key in sorted(remotedata, reverse=True):
            # reverse sort to ensure we end with dump0
            data = remotedata[key]
            rslts.append(remote.pushkey('obsolete', key, '', data))
        if [r for r in rslts if not r]:
            msg = _('failed to push some obsolete markers!\n')
            repo.ui.warn(msg)

def _pushbookmark(pushop):
    """Update bookmark position on remote"""
    if pushop.cgresult == 0 or 'bookmarks' in pushop.stepsdone:
        return
    pushop.stepsdone.add('bookmarks')
    ui = pushop.ui
    remote = pushop.remote

    for b, old, new in pushop.outbookmarks:
        action = 'update'
        if not old:
            action = 'export'
        elif not new:
            action = 'delete'
        if remote.pushkey('bookmarks', b, old, new):
            ui.status(bookmsgmap[action][0] % b)
        else:
            ui.warn(bookmsgmap[action][1] % b)
    # discovery can have set the value from an invalid entry
    if pushop.bkresult is not None:
        pushop.bkresult = 1

class pulloperation(object):
    """An object that represents a single pull operation.

    Its purpose is to carry pull-related state and very common operations.

    A new one should be created at the beginning of each pull and discarded
    afterward.
    """

    def __init__(self, repo, remote, heads=None, force=False, bookmarks=(),
                 remotebookmarks=None, streamclonerequested=None):
        # repo we pull into
        self.repo = repo
        # repo we pull from
        self.remote = remote
        # revisions we try to pull (None is "all")
        self.heads = heads
        # bookmarks pulled explicitly
        self.explicitbookmarks = [repo._bookmarks.expandname(bookmark)
                                  for bookmark in bookmarks]
        # do we force pull?
        self.force = force
        # whether a streaming clone was requested
        self.streamclonerequested = streamclonerequested
        # transaction manager
        self.trmanager = None
        # set of changesets common to local and remote before pull
        self.common = None
        # set of pulled heads
        self.rheads = None
        # list of missing changesets to fetch remotely
        self.fetch = None
        # remote bookmarks data
        self.remotebookmarks = remotebookmarks
        # result of changegroup pulling (used as return code by pull)
        self.cgresult = None
        # list of steps already done
        self.stepsdone = set()
        # Whether we attempted a clone from pre-generated bundles.
        self.clonebundleattempted = False

    @util.propertycache
    def pulledsubset(self):
        """heads of the set of changesets targeted by the pull"""
        # compute target subset
        if self.heads is None:
            # We pulled everything possible
            # sync on everything common
            c = set(self.common)
            ret = list(self.common)
            for n in self.rheads:
                if n not in c:
                    ret.append(n)
            return ret
        else:
            # We pulled a specific subset
            # sync on this subset
            return self.heads

    @util.propertycache
    def canusebundle2(self):
        return not _forcebundle1(self)

    @util.propertycache
    def remotebundle2caps(self):
        return bundle2.bundle2caps(self.remote)

    def gettransaction(self):
        # deprecated; talk to trmanager directly
        return self.trmanager.transaction()

class transactionmanager(util.transactional):
    """An object to manage the life cycle of a transaction

    It creates the transaction on demand and calls the appropriate hooks when
    closing the transaction."""
    def __init__(self, repo, source, url):
        self.repo = repo
        self.source = source
        self.url = url
        self._tr = None

    def transaction(self):
        """Return an open transaction object, constructing if necessary"""
        if not self._tr:
            trname = '%s\n%s' % (self.source, util.hidepassword(self.url))
            self._tr = self.repo.transaction(trname)
            self._tr.hookargs['source'] = self.source
            self._tr.hookargs['url'] = self.url
        return self._tr

    def close(self):
        """close transaction if created"""
        if self._tr is not None:
            self._tr.close()

    def release(self):
        """release transaction if created"""
        if self._tr is not None:
            self._tr.release()

1341 def pull(repo, remote, heads=None, force=False, bookmarks=(), opargs=None,
1369 def pull(repo, remote, heads=None, force=False, bookmarks=(), opargs=None,
1342 streamclonerequested=None):
1370 streamclonerequested=None):
1343 """Fetch repository data from a remote.
1371 """Fetch repository data from a remote.
1344
1372
1345 This is the main function used to retrieve data from a remote repository.
1373 This is the main function used to retrieve data from a remote repository.
1346
1374
1347 ``repo`` is the local repository to clone into.
1375 ``repo`` is the local repository to clone into.
1348 ``remote`` is a peer instance.
1376 ``remote`` is a peer instance.
1349 ``heads`` is an iterable of revisions we want to pull. ``None`` (the
1377 ``heads`` is an iterable of revisions we want to pull. ``None`` (the
1350 default) means to pull everything from the remote.
1378 default) means to pull everything from the remote.
1351 ``bookmarks`` is an iterable of bookmarks requesting to be pulled. By
1379 ``bookmarks`` is an iterable of bookmarks requesting to be pulled. By
1352 default, all remote bookmarks are pulled.
1380 default, all remote bookmarks are pulled.
    ``opargs`` are additional keyword arguments to pass to ``pulloperation``
    initialization.
    ``streamclonerequested`` is a boolean indicating whether a "streaming
    clone" is requested. A "streaming clone" is essentially a raw file copy
    of revlogs from the server. This only works when the local repository is
    empty. The default value of ``None`` means to respect the server
    configuration for preferring stream clones.

    Returns the ``pulloperation`` created for this pull.
    """
    if opargs is None:
        opargs = {}
    pullop = pulloperation(repo, remote, heads, force, bookmarks=bookmarks,
                           streamclonerequested=streamclonerequested,
                           **pycompat.strkwargs(opargs))

    peerlocal = pullop.remote.local()
    if peerlocal:
        missing = set(peerlocal.requirements) - pullop.repo.supported
        if missing:
            msg = _("required features are not"
                    " supported in the destination:"
                    " %s") % (', '.join(sorted(missing)))
            raise error.Abort(msg)

    pullop.trmanager = transactionmanager(repo, 'pull', remote.url())
    with repo.wlock(), repo.lock(), pullop.trmanager:
        # This should ideally be in _pullbundle2(). However, it needs to run
        # before discovery to avoid extra work.
        _maybeapplyclonebundle(pullop)
        streamclone.maybeperformlegacystreamclone(pullop)
        _pulldiscovery(pullop)
        if pullop.canusebundle2:
            _pullbundle2(pullop)
        _pullchangeset(pullop)
        _pullphase(pullop)
        _pullbookmarks(pullop)
        _pullobsolete(pullop)

    # storing remotenames
    if repo.ui.configbool('experimental', 'remotenames'):
        logexchange.pullremotenames(repo, remote)

    return pullop

# list of steps to perform discovery before pull
pulldiscoveryorder = []

# Mapping between step name and function
#
# This exists to help extensions wrap steps if necessary
pulldiscoverymapping = {}

def pulldiscovery(stepname):
    """decorator for function performing discovery before pull

    The function is added to the step -> function mapping and appended to the
    list of steps. Beware that decorated functions will be added in order
    (this may matter).

    You can only use this decorator for a new step; if you want to wrap a step
    from an extension, change the pulldiscoverymapping dictionary directly."""
    def dec(func):
        assert stepname not in pulldiscoverymapping
        pulldiscoverymapping[stepname] = func
        pulldiscoveryorder.append(stepname)
        return func
    return dec

def _pulldiscovery(pullop):
    """Run all discovery steps"""
    for stepname in pulldiscoveryorder:
        step = pulldiscoverymapping[stepname]
        step(pullop)

@pulldiscovery('b1:bookmarks')
def _pullbookmarkbundle1(pullop):
    """fetch bookmark data in bundle1 case

    If not using bundle2, we have to fetch bookmarks before changeset
    discovery to reduce the chance and impact of race conditions."""
    if pullop.remotebookmarks is not None:
        return
    if pullop.canusebundle2 and 'listkeys' in pullop.remotebundle2caps:
        # all known bundle2 servers now support listkeys, but let's be nice
        # with new implementations.
        return
    books = pullop.remote.listkeys('bookmarks')
    pullop.remotebookmarks = bookmod.unhexlifybookmarks(books)


@pulldiscovery('changegroup')
def _pulldiscoverychangegroup(pullop):
    """discovery phase for the pull

    Currently handles changeset discovery only; will change to handle all
    discovery at some point."""
    tmp = discovery.findcommonincoming(pullop.repo,
                                       pullop.remote,
                                       heads=pullop.heads,
                                       force=pullop.force)
    common, fetch, rheads = tmp
    nm = pullop.repo.unfiltered().changelog.nodemap
    if fetch and rheads:
        # If a remote head is filtered locally, put it back in common.
        #
        # This is a hackish solution to catch most "common but locally
        # hidden" situations. We do not perform discovery on the unfiltered
        # repository because it ends up doing a pathological amount of round
        # trips for a huge amount of changesets we do not care about.
        #
        # If a set of such "common but filtered" changesets exists on the
        # server but does not include a remote head, we'll not be able to
        # detect it.
        scommon = set(common)
        for n in rheads:
            if n in nm:
                if n not in scommon:
                    common.append(n)
        if set(rheads).issubset(set(common)):
            fetch = []
    pullop.common = common
    pullop.fetch = fetch
    pullop.rheads = rheads

def _pullbundle2(pullop):
    """pull data using bundle2

    For now, the only supported data are changegroup."""
    kwargs = {'bundlecaps': caps20to10(pullop.repo, role='client')}

    # make ui easier to access
    ui = pullop.repo.ui

    # At the moment we don't do stream clones over bundle2. If that is
    # implemented then here's where the check for that will go.
    streaming = streamclone.canperformstreamclone(pullop, bundle2=True)[0]

    # declare pull perimeters
    kwargs['common'] = pullop.common
    kwargs['heads'] = pullop.heads or pullop.rheads

    if streaming:
        kwargs['cg'] = False
        kwargs['stream'] = True
        pullop.stepsdone.add('changegroup')
        pullop.stepsdone.add('phases')

    else:
        # pulling changegroup
        pullop.stepsdone.add('changegroup')

        kwargs['cg'] = pullop.fetch

        legacyphase = 'phases' in ui.configlist('devel', 'legacy.exchange')
        hasbinaryphase = 'heads' in pullop.remotebundle2caps.get('phases', ())
        if (not legacyphase and hasbinaryphase):
            kwargs['phases'] = True
            pullop.stepsdone.add('phases')

        if 'listkeys' in pullop.remotebundle2caps:
            if 'phases' not in pullop.stepsdone:
                kwargs['listkeys'] = ['phases']

    bookmarksrequested = False
    legacybookmark = 'bookmarks' in ui.configlist('devel', 'legacy.exchange')
    hasbinarybook = 'bookmarks' in pullop.remotebundle2caps

    if pullop.remotebookmarks is not None:
        pullop.stepsdone.add('request-bookmarks')

    if ('request-bookmarks' not in pullop.stepsdone
        and pullop.remotebookmarks is None
        and not legacybookmark and hasbinarybook):
        kwargs['bookmarks'] = True
        bookmarksrequested = True

    if 'listkeys' in pullop.remotebundle2caps:
        if 'request-bookmarks' not in pullop.stepsdone:
            # make sure to always include bookmark data when migrating
            # `hg incoming --bundle` to using this function.
            pullop.stepsdone.add('request-bookmarks')
            kwargs.setdefault('listkeys', []).append('bookmarks')

    # If this is a full pull / clone and the server supports the clone bundles
    # feature, tell the server whether we attempted a clone bundle. The
    # presence of this flag indicates the client supports clone bundles. This
    # will enable the server to treat clients that support clone bundles
    # differently from those that don't.
    if (pullop.remote.capable('clonebundles')
        and pullop.heads is None and list(pullop.common) == [nullid]):
        kwargs['cbattempted'] = pullop.clonebundleattempted

    if streaming:
        pullop.repo.ui.status(_('streaming all changes\n'))
    elif not pullop.fetch:
        pullop.repo.ui.status(_("no changes found\n"))
        pullop.cgresult = 0
    else:
        if pullop.heads is None and list(pullop.common) == [nullid]:
            pullop.repo.ui.status(_("requesting all changes\n"))
    if obsolete.isenabled(pullop.repo, obsolete.exchangeopt):
        remoteversions = bundle2.obsmarkersversion(pullop.remotebundle2caps)
        if obsolete.commonversion(remoteversions) is not None:
            kwargs['obsmarkers'] = True
            pullop.stepsdone.add('obsmarkers')
    _pullbundle2extraprepare(pullop, kwargs)
    bundle = pullop.remote.getbundle('pull', **pycompat.strkwargs(kwargs))
    try:
        op = bundle2.bundleoperation(pullop.repo, pullop.gettransaction)
        op.modes['bookmarks'] = 'records'
        bundle2.processbundle(pullop.repo, bundle, op=op)
    except bundle2.AbortFromPart as exc:
        pullop.repo.ui.status(_('remote: abort: %s\n') % exc)
        raise error.Abort(_('pull failed on remote'), hint=exc.hint)
    except error.BundleValueError as exc:
        raise error.Abort(_('missing support for %s') % exc)

    if pullop.fetch:
        pullop.cgresult = bundle2.combinechangegroupresults(op)

    # processing phases change
    for namespace, value in op.records['listkeys']:
        if namespace == 'phases':
            _pullapplyphases(pullop, value)

    # processing bookmark update
    if bookmarksrequested:
        books = {}
        for record in op.records['bookmarks']:
            books[record['bookmark']] = record["node"]
        pullop.remotebookmarks = books
    else:
        for namespace, value in op.records['listkeys']:
            if namespace == 'bookmarks':
                pullop.remotebookmarks = bookmod.unhexlifybookmarks(value)

    # bookmark data were either already there or pulled in the bundle
    if pullop.remotebookmarks is not None:
        _pullbookmarks(pullop)

def _pullbundle2extraprepare(pullop, kwargs):
    """hook function so that extensions can extend the getbundle call"""

def _pullchangeset(pullop):
    """pull changeset from unbundle into the local repo"""
    # We delay the open of the transaction as late as possible so we
    # don't open a transaction for nothing, or break a future useful
    # rollback call
    if 'changegroup' in pullop.stepsdone:
        return
    pullop.stepsdone.add('changegroup')
    if not pullop.fetch:
        pullop.repo.ui.status(_("no changes found\n"))
        pullop.cgresult = 0
        return
    tr = pullop.gettransaction()
    if pullop.heads is None and list(pullop.common) == [nullid]:
        pullop.repo.ui.status(_("requesting all changes\n"))
    elif pullop.heads is None and pullop.remote.capable('changegroupsubset'):
        # issue1320, avoid a race if remote changed after discovery
        pullop.heads = pullop.rheads

    if pullop.remote.capable('getbundle'):
        # TODO: get bundlecaps from remote
        cg = pullop.remote.getbundle('pull', common=pullop.common,
                                     heads=pullop.heads or pullop.rheads)
    elif pullop.heads is None:
        cg = pullop.remote.changegroup(pullop.fetch, 'pull')
    elif not pullop.remote.capable('changegroupsubset'):
        raise error.Abort(_("partial pull cannot be done because "
                            "other repository doesn't support "
                            "changegroupsubset."))
    else:
        cg = pullop.remote.changegroupsubset(pullop.fetch, pullop.heads, 'pull')
    bundleop = bundle2.applybundle(pullop.repo, cg, tr, 'pull',
                                   pullop.remote.url())
    pullop.cgresult = bundle2.combinechangegroupresults(bundleop)

def _pullphase(pullop):
    # Get remote phases data from remote
    if 'phases' in pullop.stepsdone:
        return
    remotephases = pullop.remote.listkeys('phases')
    _pullapplyphases(pullop, remotephases)

def _pullapplyphases(pullop, remotephases):
    """apply phase movement from observed remote state"""
    if 'phases' in pullop.stepsdone:
        return
    pullop.stepsdone.add('phases')
    publishing = bool(remotephases.get('publishing', False))
    if remotephases and not publishing:
        # remote is new and non-publishing
        pheads, _dr = phases.analyzeremotephases(pullop.repo,
                                                 pullop.pulledsubset,
                                                 remotephases)
        dheads = pullop.pulledsubset
    else:
        # Remote is old or publishing all common changesets
        # should be seen as public
        pheads = pullop.pulledsubset
        dheads = []
    unfi = pullop.repo.unfiltered()
    phase = unfi._phasecache.phase
    rev = unfi.changelog.nodemap.get
    public = phases.public
    draft = phases.draft

    # exclude changesets already public locally and update the others
    pheads = [pn for pn in pheads if phase(unfi, rev(pn)) > public]
    if pheads:
        tr = pullop.gettransaction()
        phases.advanceboundary(pullop.repo, tr, public, pheads)

    # exclude changesets already draft locally and update the others
    dheads = [pn for pn in dheads if phase(unfi, rev(pn)) > draft]
    if dheads:
        tr = pullop.gettransaction()
        phases.advanceboundary(pullop.repo, tr, draft, dheads)

def _pullbookmarks(pullop):
    """process the remote bookmark information to update the local one"""
    if 'bookmarks' in pullop.stepsdone:
        return
    pullop.stepsdone.add('bookmarks')
    repo = pullop.repo
    remotebookmarks = pullop.remotebookmarks
    bookmod.updatefromremote(repo.ui, repo, remotebookmarks,
                             pullop.remote.url(),
                             pullop.gettransaction,
                             explicit=pullop.explicitbookmarks)

def _pullobsolete(pullop):
    """utility function to pull obsolete markers from a remote

    The `gettransaction` is a function that returns the pull transaction,
    creating one if necessary. We return the transaction to inform the calling
    code that a new transaction has been created (when applicable).

    Exists mostly to allow overriding for experimentation purposes"""
    if 'obsmarkers' in pullop.stepsdone:
        return
    pullop.stepsdone.add('obsmarkers')
    tr = None
    if obsolete.isenabled(pullop.repo, obsolete.exchangeopt):
        pullop.repo.ui.debug('fetching remote obsolete markers\n')
        remoteobs = pullop.remote.listkeys('obsolete')
        if 'dump0' in remoteobs:
            tr = pullop.gettransaction()
            markers = []
            for key in sorted(remoteobs, reverse=True):
                if key.startswith('dump'):
                    data = util.b85decode(remoteobs[key])
                    version, newmarks = obsolete._readmarkers(data)
                    markers += newmarks
            if markers:
                pullop.repo.obsstore.add(tr, markers)
            pullop.repo.invalidatevolatilesets()
    return tr

def caps20to10(repo, role):
    """return a set with appropriate options to use bundle20 during getbundle"""
    caps = {'HG20'}
    capsblob = bundle2.encodecaps(bundle2.getrepocaps(repo, role=role))
    caps.add('bundle2=' + urlreq.quote(capsblob))
    return caps

# List of names of steps to perform for a bundle2 for getbundle, order matters.
getbundle2partsorder = []

# Mapping between step name and function
#
# This exists to help extensions wrap steps if necessary
getbundle2partsmapping = {}

def getbundle2partsgenerator(stepname, idx=None):
    """decorator for function generating bundle2 part for getbundle

    The function is added to the step -> function mapping and appended to the
    list of steps. Beware that decorated functions will be added in order
    (this may matter).

    You can only use this decorator for new steps; if you want to wrap a step
    from an extension, modify the getbundle2partsmapping dictionary directly."""
    def dec(func):
        assert stepname not in getbundle2partsmapping
        getbundle2partsmapping[stepname] = func
        if idx is None:
            getbundle2partsorder.append(stepname)
        else:
            getbundle2partsorder.insert(idx, stepname)
        return func
    return dec

def bundle2requested(bundlecaps):
    if bundlecaps is not None:
        return any(cap.startswith('HG2') for cap in bundlecaps)
    return False

def getbundlechunks(repo, source, heads=None, common=None, bundlecaps=None,
                    **kwargs):
    """Return chunks constituting a bundle's raw data.

    Could be a bundle HG10 or a bundle HG20 depending on bundlecaps
    passed.

    Returns a 2-tuple of a dict with metadata about the generated bundle
    and an iterator over raw chunks (of varying sizes).
    """
    kwargs = pycompat.byteskwargs(kwargs)
    info = {}
    usebundle2 = bundle2requested(bundlecaps)
    # bundle10 case
    if not usebundle2:
        if bundlecaps and not kwargs.get('cg', True):
            raise ValueError(_('request for bundle10 must include changegroup'))

        if kwargs:
            raise ValueError(_('unsupported getbundle arguments: %s')
                             % ', '.join(sorted(kwargs.keys())))
        outgoing = _computeoutgoing(repo, heads, common)
        info['bundleversion'] = 1
        return info, changegroup.makestream(repo, outgoing, '01', source,
                                            bundlecaps=bundlecaps)

    # bundle20 case
    info['bundleversion'] = 2
    b2caps = {}
    for bcaps in bundlecaps:
        if bcaps.startswith('bundle2='):
            blob = urlreq.unquote(bcaps[len('bundle2='):])
            b2caps.update(bundle2.decodecaps(blob))
    bundler = bundle2.bundle20(repo.ui, b2caps)

    kwargs['heads'] = heads
    kwargs['common'] = common

    for name in getbundle2partsorder:
        func = getbundle2partsmapping[name]
        func(bundler, repo, source, bundlecaps=bundlecaps, b2caps=b2caps,
             **pycompat.strkwargs(kwargs))

    info['prefercompressed'] = bundler.prefercompressed

    return info, bundler.getchunks()
1798
1826
@getbundle2partsgenerator('stream2')
def _getbundlestream2(bundler, repo, source, bundlecaps=None,
                      b2caps=None, heads=None, common=None, **kwargs):
    if not kwargs.get('stream', False):
        return

    if not streamclone.allowservergeneration(repo):
        raise error.Abort(_('stream data requested but server does not allow '
                            'this feature'),
                          hint=_('well-behaved clients should not be '
                                 'requesting stream data from servers not '
                                 'advertising it; the client may be buggy'))

    # Stream clones don't compress well. And compression undermines a
    # goal of stream clones, which is to be fast. Communicate the desire
    # to avoid compression to consumers of the bundle.
    bundler.prefercompressed = False

    filecount, bytecount, it = streamclone.generatev2(repo)
    requirements = _formatrequirementsspec(repo.requirements)
    part = bundler.newpart('stream2', data=it)
    part.addparam('bytecount', '%d' % bytecount, mandatory=True)
    part.addparam('filecount', '%d' % filecount, mandatory=True)
    part.addparam('requirements', requirements, mandatory=True)

@getbundle2partsgenerator('changegroup')
def _getbundlechangegrouppart(bundler, repo, source, bundlecaps=None,
                              b2caps=None, heads=None, common=None, **kwargs):
    """add a changegroup part to the requested bundle"""
    cgstream = None
    if kwargs.get(r'cg', True):
        # build changegroup bundle here.
        version = '01'
        cgversions = b2caps.get('changegroup')
        if cgversions:  # 3.1 and 3.2 ship with an empty value
            cgversions = [v for v in cgversions
                          if v in changegroup.supportedoutgoingversions(repo)]
            if not cgversions:
                raise ValueError(_('no common changegroup version'))
            version = max(cgversions)
        outgoing = _computeoutgoing(repo, heads, common)
        if outgoing.missing:
            cgstream = changegroup.makestream(repo, outgoing, version, source,
                                              bundlecaps=bundlecaps)

    if cgstream:
        part = bundler.newpart('changegroup', data=cgstream)
        if cgversions:
            part.addparam('version', version)
        part.addparam('nbchanges', '%d' % len(outgoing.missing),
                      mandatory=False)
        if 'treemanifest' in repo.requirements:
            part.addparam('treemanifest', '1')

@getbundle2partsgenerator('bookmarks')
def _getbundlebookmarkpart(bundler, repo, source, bundlecaps=None,
                           b2caps=None, **kwargs):
    """add a bookmark part to the requested bundle"""
    if not kwargs.get(r'bookmarks', False):
        return
    if 'bookmarks' not in b2caps:
        raise ValueError(_('no common bookmarks exchange method'))
    books = bookmod.listbinbookmarks(repo)
    data = bookmod.binaryencode(books)
    if data:
        bundler.newpart('bookmarks', data=data)

@getbundle2partsgenerator('listkeys')
def _getbundlelistkeysparts(bundler, repo, source, bundlecaps=None,
                            b2caps=None, **kwargs):
    """add parts containing listkeys namespaces to the requested bundle"""
    listkeys = kwargs.get(r'listkeys', ())
    for namespace in listkeys:
        part = bundler.newpart('listkeys')
        part.addparam('namespace', namespace)
        keys = repo.listkeys(namespace).items()
        part.data = pushkey.encodekeys(keys)

@getbundle2partsgenerator('obsmarkers')
def _getbundleobsmarkerpart(bundler, repo, source, bundlecaps=None,
                            b2caps=None, heads=None, **kwargs):
    """add an obsolescence markers part to the requested bundle"""
    if kwargs.get(r'obsmarkers', False):
        if heads is None:
            heads = repo.heads()
        subset = [c.node() for c in repo.set('::%ln', heads)]
        markers = repo.obsstore.relevantmarkers(subset)
        markers = sorted(markers)
        bundle2.buildobsmarkerspart(bundler, markers)

@getbundle2partsgenerator('phases')
def _getbundlephasespart(bundler, repo, source, bundlecaps=None,
                         b2caps=None, heads=None, **kwargs):
    """add phase heads part to the requested bundle"""
    if kwargs.get(r'phases', False):
        if 'heads' not in b2caps.get('phases'):
            raise ValueError(_('no common phases exchange method'))
        if heads is None:
            heads = repo.heads()

        headsbyphase = collections.defaultdict(set)
        if repo.publishing():
            headsbyphase[phases.public] = heads
        else:
            # find the appropriate heads to move

            phase = repo._phasecache.phase
            node = repo.changelog.node
            rev = repo.changelog.rev
            for h in heads:
                headsbyphase[phase(repo, rev(h))].add(h)
            seenphases = list(headsbyphase.keys())

            # We do not handle anything but public and draft phases for now.
            if seenphases:
                assert max(seenphases) <= phases.draft

            # if client is pulling non-public changesets, we need to find
            # intermediate public heads.
            draftheads = headsbyphase.get(phases.draft, set())
            if draftheads:
                publicheads = headsbyphase.get(phases.public, set())

                revset = 'heads(only(%ln, %ln) and public())'
                extraheads = repo.revs(revset, draftheads, publicheads)
                for r in extraheads:
                    headsbyphase[phases.public].add(node(r))

        # transform data in a format used by the encoding function
        phasemapping = []
        for phase in phases.allphases:
            phasemapping.append(sorted(headsbyphase[phase]))

        # generate the actual part
        phasedata = phases.binaryencode(phasemapping)
        bundler.newpart('phase-heads', data=phasedata)

@getbundle2partsgenerator('hgtagsfnodes')
def _getbundletagsfnodes(bundler, repo, source, bundlecaps=None,
                         b2caps=None, heads=None, common=None,
                         **kwargs):
    """Transfer the .hgtags filenodes mapping.

    Only values for heads in this bundle will be transferred.

    The part data consists of pairs of 20 byte changeset node and .hgtags
    filenodes raw values.
    """
    # Don't send unless:
    # - changesets are being exchanged,
    # - the client supports it.
    if not (kwargs.get(r'cg', True) and 'hgtagsfnodes' in b2caps):
        return

    outgoing = _computeoutgoing(repo, heads, common)
    bundle2.addparttagsfnodescache(repo, bundler, outgoing)

@getbundle2partsgenerator('cache:rev-branch-cache')
def _getbundlerevbranchcache(bundler, repo, source, bundlecaps=None,
                             b2caps=None, heads=None, common=None,
                             **kwargs):
    """Transfer the rev-branch-cache mapping

    The payload is a series of data related to each branch

    1) branch name length
    2) number of open heads
    3) number of closed heads
    4) open heads nodes
    5) closed heads nodes
    """
    # Don't send unless:
    # - changesets are being exchanged,
    # - the client supports it.
    if not (kwargs.get(r'cg', True)) or 'rev-branch-cache' not in b2caps:
        return
    outgoing = _computeoutgoing(repo, heads, common)
    bundle2.addpartrevbranchcache(repo, bundler, outgoing)

def check_heads(repo, their_heads, context):
    """check if the heads of a repo have been modified

    Used by peer for unbundling.
    """
    heads = repo.heads()
    heads_hash = hashlib.sha1(''.join(sorted(heads))).digest()
    if not (their_heads == ['force'] or their_heads == heads or
            their_heads == ['hashed', heads_hash]):
        # someone else committed/pushed/unbundled while we
        # were transferring data
        raise error.PushRaced('repository changed while %s - '
                              'please try again' % context)

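The race check above compares a SHA-1 over the sorted, concatenated head nodes. A minimal standalone sketch of that recipe, using placeholder 20-byte node values rather than a real repository:

```python
import hashlib

# Hypothetical 20-byte binary node IDs standing in for repo.heads().
heads = [b'\x11' * 20, b'\x22' * 20]

# Same recipe as check_heads: SHA-1 over the sorted, concatenated heads.
heads_hash = hashlib.sha1(b''.join(sorted(heads))).digest()

# A client that sent ['hashed', heads_hash] would pass the race check
# only if the server's heads have not changed in the meantime.
assert len(heads_hash) == 20
```

Because the heads are sorted before hashing, the digest is independent of the order in which heads are enumerated.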
def unbundle(repo, cg, heads, source, url):
    """Apply a bundle to a repo.

    This function makes sure the repo is locked during the application and has
    a mechanism to check that no push race occurred between the creation of
    the bundle and its application.

    If the push was raced, a PushRaced exception is raised."""
    r = 0
    # need a transaction when processing a bundle2 stream
    # [wlock, lock, tr] - needs to be an array so nested functions can modify it
    lockandtr = [None, None, None]
    recordout = None
    # quick fix for output mismatch with bundle2 in 3.4
    captureoutput = repo.ui.configbool('experimental', 'bundle2-output-capture')
    if url.startswith('remote:http:') or url.startswith('remote:https:'):
        captureoutput = True
    try:
        # note: outside bundle1, 'heads' is expected to be empty and this
        # 'check_heads' call will be a no-op
        check_heads(repo, heads, 'uploading changes')
        # push can proceed
        if not isinstance(cg, bundle2.unbundle20):
            # legacy case: bundle1 (changegroup 01)
            txnname = "\n".join([source, util.hidepassword(url)])
            with repo.lock(), repo.transaction(txnname) as tr:
                op = bundle2.applybundle(repo, cg, tr, source, url)
                r = bundle2.combinechangegroupresults(op)
        else:
            r = None
            try:
                def gettransaction():
                    if not lockandtr[2]:
                        lockandtr[0] = repo.wlock()
                        lockandtr[1] = repo.lock()
                        lockandtr[2] = repo.transaction(source)
                        lockandtr[2].hookargs['source'] = source
                        lockandtr[2].hookargs['url'] = url
                        lockandtr[2].hookargs['bundle2'] = '1'
                    return lockandtr[2]

                # Do greedy locking by default until we're satisfied with lazy
                # locking.
                if not repo.ui.configbool('experimental', 'bundle2lazylocking'):
                    gettransaction()

                op = bundle2.bundleoperation(repo, gettransaction,
                                             captureoutput=captureoutput)
                try:
                    op = bundle2.processbundle(repo, cg, op=op)
                finally:
                    r = op.reply
                    if captureoutput and r is not None:
                        repo.ui.pushbuffer(error=True, subproc=True)
                        def recordout(output):
                            r.newpart('output', data=output, mandatory=False)
                if lockandtr[2] is not None:
                    lockandtr[2].close()
            except BaseException as exc:
                exc.duringunbundle2 = True
                if captureoutput and r is not None:
                    parts = exc._bundle2salvagedoutput = r.salvageoutput()
                    def recordout(output):
                        part = bundle2.bundlepart('output', data=output,
                                                  mandatory=False)
                        parts.append(part)
                raise
    finally:
        lockmod.release(lockandtr[2], lockandtr[1], lockandtr[0])
        if recordout is not None:
            recordout(repo.ui.popbuffer())
    return r

2066 """Apply a clone bundle from a remote, if possible."""
2094 """Apply a clone bundle from a remote, if possible."""
2067
2095
2068 repo = pullop.repo
2096 repo = pullop.repo
2069 remote = pullop.remote
2097 remote = pullop.remote
2070
2098
2071 if not repo.ui.configbool('ui', 'clonebundles'):
2099 if not repo.ui.configbool('ui', 'clonebundles'):
2072 return
2100 return
2073
2101
2074 # Only run if local repo is empty.
2102 # Only run if local repo is empty.
2075 if len(repo):
2103 if len(repo):
2076 return
2104 return
2077
2105
2078 if pullop.heads:
2106 if pullop.heads:
2079 return
2107 return
2080
2108
2081 if not remote.capable('clonebundles'):
2109 if not remote.capable('clonebundles'):
2082 return
2110 return
2083
2111
2084 res = remote._call('clonebundles')
2112 res = remote._call('clonebundles')
2085
2113
2086 # If we call the wire protocol command, that's good enough to record the
2114 # If we call the wire protocol command, that's good enough to record the
2087 # attempt.
2115 # attempt.
2088 pullop.clonebundleattempted = True
2116 pullop.clonebundleattempted = True
2089
2117
2090 entries = parseclonebundlesmanifest(repo, res)
2118 entries = parseclonebundlesmanifest(repo, res)
2091 if not entries:
2119 if not entries:
2092 repo.ui.note(_('no clone bundles available on remote; '
2120 repo.ui.note(_('no clone bundles available on remote; '
2093 'falling back to regular clone\n'))
2121 'falling back to regular clone\n'))
2094 return
2122 return
2095
2123
2096 entries = filterclonebundleentries(
2124 entries = filterclonebundleentries(
2097 repo, entries, streamclonerequested=pullop.streamclonerequested)
2125 repo, entries, streamclonerequested=pullop.streamclonerequested)
2098
2126
2099 if not entries:
2127 if not entries:
2100 # There is a thundering herd concern here. However, if a server
2128 # There is a thundering herd concern here. However, if a server
2101 # operator doesn't advertise bundles appropriate for its clients,
2129 # operator doesn't advertise bundles appropriate for its clients,
2102 # they deserve what's coming. Furthermore, from a client's
2130 # they deserve what's coming. Furthermore, from a client's
2103 # perspective, no automatic fallback would mean not being able to
2131 # perspective, no automatic fallback would mean not being able to
2104 # clone!
2132 # clone!
2105 repo.ui.warn(_('no compatible clone bundles available on server; '
2133 repo.ui.warn(_('no compatible clone bundles available on server; '
2106 'falling back to regular clone\n'))
2134 'falling back to regular clone\n'))
2107 repo.ui.warn(_('(you may want to report this to the server '
2135 repo.ui.warn(_('(you may want to report this to the server '
2108 'operator)\n'))
2136 'operator)\n'))
2109 return
2137 return
2110
2138
2111 entries = sortclonebundleentries(repo.ui, entries)
2139 entries = sortclonebundleentries(repo.ui, entries)
2112
2140
2113 url = entries[0]['URL']
2141 url = entries[0]['URL']
2114 repo.ui.status(_('applying clone bundle from %s\n') % url)
2142 repo.ui.status(_('applying clone bundle from %s\n') % url)
2115 if trypullbundlefromurl(repo.ui, repo, url):
2143 if trypullbundlefromurl(repo.ui, repo, url):
2116 repo.ui.status(_('finished applying clone bundle\n'))
2144 repo.ui.status(_('finished applying clone bundle\n'))
2117 # Bundle failed.
2145 # Bundle failed.
2118 #
2146 #
2119 # We abort by default to avoid the thundering herd of
2147 # We abort by default to avoid the thundering herd of
2120 # clients flooding a server that was expecting expensive
2148 # clients flooding a server that was expecting expensive
2121 # clone load to be offloaded.
2149 # clone load to be offloaded.
2122 elif repo.ui.configbool('ui', 'clonebundlefallback'):
2150 elif repo.ui.configbool('ui', 'clonebundlefallback'):
2123 repo.ui.warn(_('falling back to normal clone\n'))
2151 repo.ui.warn(_('falling back to normal clone\n'))
2124 else:
2152 else:
2125 raise error.Abort(_('error applying bundle'),
2153 raise error.Abort(_('error applying bundle'),
2126 hint=_('if this error persists, consider contacting '
2154 hint=_('if this error persists, consider contacting '
2127 'the server operator or disable clone '
2155 'the server operator or disable clone '
2128 'bundles via '
2156 'bundles via '
2129 '"--config ui.clonebundles=false"'))
2157 '"--config ui.clonebundles=false"'))
2130
2158
def parseclonebundlesmanifest(repo, s):
    """Parses the raw text of a clone bundles manifest.

    Returns a list of dicts. The dicts have a ``URL`` key corresponding
    to the URL and other keys are the attributes for the entry.
    """
    m = []
    for line in s.splitlines():
        fields = line.split()
        if not fields:
            continue
        attrs = {'URL': fields[0]}
        for rawattr in fields[1:]:
            key, value = rawattr.split('=', 1)
            key = urlreq.unquote(key)
            value = urlreq.unquote(value)
            attrs[key] = value

            # Parse BUNDLESPEC into components. This makes client-side
            # preferences easier to specify since you can prefer a single
            # component of the BUNDLESPEC.
            if key == 'BUNDLESPEC':
                try:
                    bundlespec = parsebundlespec(repo, value,
                                                 externalnames=True)
                    attrs['COMPRESSION'] = bundlespec.compression
                    attrs['VERSION'] = bundlespec.version
                except error.InvalidBundleSpecification:
                    pass
                except error.UnsupportedBundleSpecification:
                    pass

        m.append(attrs)

    return m

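The manifest format handled above is one entry per line: a URL followed by whitespace-separated, percent-encoded ``key=value`` attributes. A simplified, self-contained sketch of that parsing (without the repo-dependent BUNDLESPEC post-processing; the manifest text below is a made-up example):

```python
from urllib.parse import unquote


def parse_manifest(text):
    """Parse clone-bundles manifest lines: a URL followed by key=value
    attributes, both percent-encoded (simplified sketch of
    parseclonebundlesmanifest, no BUNDLESPEC decomposition)."""
    entries = []
    for line in text.splitlines():
        fields = line.split()
        if not fields:  # skip blank lines, as the real parser does
            continue
        attrs = {'URL': fields[0]}
        for rawattr in fields[1:]:
            key, value = rawattr.split('=', 1)
            attrs[unquote(key)] = unquote(value)
        entries.append(attrs)
    return entries


manifest = 'https://example.com/full.hg BUNDLESPEC=gzip-v2 REQUIRESNI=true\n'
entries = parse_manifest(manifest)
```

Splitting each attribute on the first ``=`` only means values may themselves contain ``=`` once decoded.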
def filterclonebundleentries(repo, entries, streamclonerequested=False):
    """Remove incompatible clone bundle manifest entries.

    Accepts a list of entries parsed with ``parseclonebundlesmanifest``
    and returns a new list consisting of only the entries that this client
    should be able to apply.

    There is no guarantee we'll be able to apply all returned entries because
    the metadata we use to filter on may be missing or wrong.
    """
    newentries = []
    for entry in entries:
        spec = entry.get('BUNDLESPEC')
        if spec:
            try:
                bundlespec = parsebundlespec(repo, spec, strict=True)

                # If a stream clone was requested, filter out non-streamclone
                # entries.
                comp = bundlespec.compression
                version = bundlespec.version
                if streamclonerequested and (comp != 'UN' or version != 's1'):
                    repo.ui.debug('filtering %s because not a stream clone\n' %
                                  entry['URL'])
                    continue

            except error.InvalidBundleSpecification as e:
                repo.ui.debug(str(e) + '\n')
                continue
            except error.UnsupportedBundleSpecification as e:
                repo.ui.debug('filtering %s because unsupported bundle '
                              'spec: %s\n' % (
                                  entry['URL'], stringutil.forcebytestr(e)))
                continue
        # If we don't have a spec and requested a stream clone, we don't know
        # what the entry is so don't attempt to apply it.
        elif streamclonerequested:
            repo.ui.debug('filtering %s because cannot determine if a stream '
                          'clone bundle\n' % entry['URL'])
            continue

        if 'REQUIRESNI' in entry and not sslutil.hassni:
            repo.ui.debug('filtering %s because SNI not supported\n' %
                          entry['URL'])
            continue

        newentries.append(entry)

    return newentries

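The stream-clone filter above keeps an entry only when its spec is uncompressed (``'UN'``) and uses the stream bundle version ``'s1'``. That predicate, isolated as a minimal sketch:

```python
def is_stream_clone_spec(compression, version):
    """A stream clone bundle must be uncompressed ('UN') and use bundle
    version 's1' (mirrors the check in filterclonebundleentries)."""
    return compression == 'UN' and version == 's1'


# Anything compressed, or any non-stream version, is filtered out when a
# stream clone was requested.
assert is_stream_clone_spec('UN', 's1')
assert not is_stream_clone_spec('gzip', 'v2')
```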
class clonebundleentry(object):
    """Represents an item in a clone bundles manifest.

    This rich class is needed to support sorting since sorted() in Python 3
    doesn't support ``cmp`` and our comparison is complex enough that ``key=``
    won't work.
    """

    def __init__(self, value, prefers):
        self.value = value
        self.prefers = prefers

    def _cmp(self, other):
        for prefkey, prefvalue in self.prefers:
            avalue = self.value.get(prefkey)
            bvalue = other.value.get(prefkey)

            # Special case for b missing attribute and a matches exactly.
            if avalue is not None and bvalue is None and avalue == prefvalue:
                return -1

            # Special case for a missing attribute and b matches exactly.
            if bvalue is not None and avalue is None and bvalue == prefvalue:
                return 1

            # We can't compare unless attribute present on both.
            if avalue is None or bvalue is None:
                continue

            # Same values should fall back to next attribute.
            if avalue == bvalue:
                continue

            # Exact matches come first.
            if avalue == prefvalue:
                return -1
            if bvalue == prefvalue:
                return 1

            # Fall back to next attribute.
            continue

        # If we got here we couldn't sort by attributes and prefers. Fall
        # back to index order.
        return 0

    def __lt__(self, other):
        return self._cmp(other) < 0

    def __gt__(self, other):
        return self._cmp(other) > 0

    def __eq__(self, other):
        return self._cmp(other) == 0

    def __le__(self, other):
        return self._cmp(other) <= 0

    def __ge__(self, other):
        return self._cmp(other) >= 0

    def __ne__(self, other):
        return self._cmp(other) != 0
2280
2308
2281 def sortclonebundleentries(ui, entries):
2309 def sortclonebundleentries(ui, entries):
2282 prefers = ui.configlist('ui', 'clonebundleprefers')
2310 prefers = ui.configlist('ui', 'clonebundleprefers')
2283 if not prefers:
2311 if not prefers:
2284 return list(entries)
2312 return list(entries)
2285
2313
2286 prefers = [p.split('=', 1) for p in prefers]
2314 prefers = [p.split('=', 1) for p in prefers]
2287
2315
2288 items = sorted(clonebundleentry(v, prefers) for v in entries)
2316 items = sorted(clonebundleentry(v, prefers) for v in entries)
2289 return [i.value for i in items]
2317 return [i.value for i in items]
2290
2318
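The preference comparison above can be sketched outside Mercurial. This is a standalone replica of the `_cmp` logic for illustration only: manifest entries are plain dicts of attributes, and `prefers` is a list of `(key, value)` pairs as produced from `ui.clonebundleprefers`; the sample entry contents are invented.

```python
from functools import cmp_to_key

def comparevalue(a, b, prefers):
    # Mirrors clonebundleentry._cmp: walk preferences in order; an entry
    # whose attribute matches the preferred value sorts first.
    for prefkey, prefvalue in prefers:
        avalue = a.get(prefkey)
        bvalue = b.get(prefkey)
        # Entry with a matching attribute beats one missing the key.
        if avalue is not None and bvalue is None and avalue == prefvalue:
            return -1
        if bvalue is not None and avalue is None and bvalue == prefvalue:
            return 1
        # Cannot compare unless both entries carry the attribute.
        if avalue is None or bvalue is None:
            continue
        # Equal values: fall through to the next preference.
        if avalue == bvalue:
            continue
        if avalue == prefvalue:
            return -1
        if bvalue == prefvalue:
            return 1
    # No preference decided the order: keep manifest (index) order.
    return 0

entries = [
    {'BUNDLESPEC': 'gzip-v1'},
    {'BUNDLESPEC': 'zstd-v2'},
]
prefers = [('BUNDLESPEC', 'zstd-v2')]
ordered = sorted(entries, key=cmp_to_key(
    lambda a, b: comparevalue(a, b, prefers)))
```

Because `sorted` is stable, entries left equal by every preference retain their manifest order, which is exactly the `return 0` fallback.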
def trypullbundlefromurl(ui, repo, url):
    """Attempt to apply a bundle from a URL."""
    with repo.lock(), repo.transaction('bundleurl') as tr:
        try:
            fh = urlmod.open(ui, url)
            cg = readbundle(ui, fh, 'stream')

            if isinstance(cg, streamclone.streamcloneapplier):
                cg.apply(repo)
            else:
                bundle2.applybundle(repo, cg, tr, 'clonebundles', url)
            return True
        except urlerr.httperror as e:
            ui.warn(_('HTTP error fetching bundle: %s\n') %
                    stringutil.forcebytestr(e))
        except urlerr.urlerror as e:
            ui.warn(_('error fetching bundle: %s\n') %
                    stringutil.forcebytestr(e.reason))

    return False
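The contract here is worth noting: the function returns True only when the bundle was fetched and applied, and swallows network errors (warning instead of raising) so the caller can fall back to a regular pull. A minimal standalone sketch of the same fetch-or-fall-back pattern, using the standard library rather than Mercurial's url module (`trypullfromurl` and `apply` are hypothetical names, not Mercurial APIs):

```python
import urllib.error
import urllib.request

def trypullfromurl(url, apply):
    # Return True only if the payload was fetched and handed to apply().
    try:
        with urllib.request.urlopen(url) as fh:
            apply(fh.read())
        return True
    # HTTPError is a subclass of URLError, so it must be caught first.
    except urllib.error.HTTPError as e:
        print('HTTP error fetching bundle: %s' % e)
    except urllib.error.URLError as e:
        print('error fetching bundle: %s' % e.reason)
    # Network failure: signal the caller to fall back to a normal pull.
    return False
```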
@@ -1,142 +1,142 b''
# coding=UTF-8

from __future__ import absolute_import

import base64
import zlib

from mercurial import (
    changegroup,
    exchange,
    extensions,
    filelog,
    revlog,
    util,
)

# Test only: These flags are defined here only in the context of testing the
# behavior of the flag processor. The canonical way to add flags is to get in
# touch with the community and make them known in revlog.
REVIDX_NOOP = (1 << 3)
REVIDX_BASE64 = (1 << 2)
REVIDX_GZIP = (1 << 1)
REVIDX_FAIL = 1

def validatehash(self, text):
    return True

def bypass(self, text):
    return False

def noopdonothing(self, text):
    return (text, True)

def b64encode(self, text):
    return (base64.b64encode(text), False)

def b64decode(self, text):
    return (base64.b64decode(text), True)

def gzipcompress(self, text):
    return (zlib.compress(text), False)

def gzipdecompress(self, text):
    return (zlib.decompress(text), True)

def supportedoutgoingversions(orig, repo):
    versions = orig(repo)
    versions.discard(b'01')
    versions.discard(b'02')
    versions.add(b'03')
    return versions

def allsupportedversions(orig, ui):
    versions = orig(ui)
    versions.add(b'03')
    return versions

def noopaddrevision(orig, self, text, transaction, link, p1, p2,
                    cachedelta=None, node=None,
                    flags=revlog.REVIDX_DEFAULT_FLAGS):
    if b'[NOOP]' in text:
        flags |= REVIDX_NOOP
    return orig(self, text, transaction, link, p1, p2, cachedelta=cachedelta,
                node=node, flags=flags)

def b64addrevision(orig, self, text, transaction, link, p1, p2,
                   cachedelta=None, node=None,
                   flags=revlog.REVIDX_DEFAULT_FLAGS):
    if b'[BASE64]' in text:
        flags |= REVIDX_BASE64
    return orig(self, text, transaction, link, p1, p2, cachedelta=cachedelta,
                node=node, flags=flags)

def gzipaddrevision(orig, self, text, transaction, link, p1, p2,
                    cachedelta=None, node=None,
                    flags=revlog.REVIDX_DEFAULT_FLAGS):
    if b'[GZIP]' in text:
        flags |= REVIDX_GZIP
    return orig(self, text, transaction, link, p1, p2, cachedelta=cachedelta,
                node=node, flags=flags)

def failaddrevision(orig, self, text, transaction, link, p1, p2,
                    cachedelta=None, node=None,
                    flags=revlog.REVIDX_DEFAULT_FLAGS):
    # This addrevision wrapper is meant to add a flag we will not have
    # transforms registered for, ensuring we handle this error case.
    if b'[FAIL]' in text:
        flags |= REVIDX_FAIL
    return orig(self, text, transaction, link, p1, p2, cachedelta=cachedelta,
                node=node, flags=flags)

def extsetup(ui):
    # Enable changegroup3 for flags to be sent over the wire
    wrapfunction = extensions.wrapfunction
    wrapfunction(changegroup,
                 'supportedoutgoingversions',
                 supportedoutgoingversions)
    wrapfunction(changegroup,
                 'allsupportedversions',
                 allsupportedversions)

    # Teach revlog about our test flags
    flags = [REVIDX_NOOP, REVIDX_BASE64, REVIDX_GZIP, REVIDX_FAIL]
    revlog.REVIDX_KNOWN_FLAGS |= util.bitsfrom(flags)
    revlog.REVIDX_FLAGS_ORDER.extend(flags)

    # Teach exchange to use changegroup 3
-    for k in exchange._bundlespeccgversions.keys():
-        exchange._bundlespeccgversions[k] = b'03'
+    for k in exchange._bundlespeccontentopts.keys():
+        exchange._bundlespeccontentopts[k]["cg.version"] = "03"

    # Add wrappers for addrevision, responsible to set flags depending on the
    # revision data contents.
    wrapfunction(filelog.filelog, 'addrevision', noopaddrevision)
    wrapfunction(filelog.filelog, 'addrevision', b64addrevision)
    wrapfunction(filelog.filelog, 'addrevision', gzipaddrevision)
    wrapfunction(filelog.filelog, 'addrevision', failaddrevision)

    # Register flag processors for each extension
    revlog.addflagprocessor(
        REVIDX_NOOP,
        (
            noopdonothing,
            noopdonothing,
            validatehash,
        )
    )
    revlog.addflagprocessor(
        REVIDX_BASE64,
        (
            b64decode,
            b64encode,
            bypass,
        ),
    )
    revlog.addflagprocessor(
        REVIDX_GZIP,
        (
            gzipdecompress,
            gzipcompress,
            bypass
        )
    )
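Each `addflagprocessor` call above registers a `(read, write, validatehash)` triple for one flag bit: `write` transforms text on the way into storage, `read` reverses it on the way out, and each returns `(text, vhash)` where the bool says whether the result may be checked against the node hash. A standalone round-trip through the base64 pair from this file illustrates the contract (the transforms are copied here so the sketch runs without Mercurial; the real ones take `self` as the revlog instance):

```python
import base64

def b64encode(text):
    # write transform: encoded text no longer matches the node hash,
    # so the second tuple element is False.
    return (base64.b64encode(text), False)

def b64decode(text):
    # read transform: restores the original text, which may be
    # validated against the hash again.
    return (base64.b64decode(text), True)

stored, _ = b64encode(b'[BASE64] payload')
restored, vhash = b64decode(stored)
```

The `REVIDX_FAIL` flag deliberately has no such triple registered, which is how the test exercises the "unknown flag" error path.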