bundlespec: move computing the bundle contentopts in parsebundlespec...
Boris Feld
r37182:6c7a6b04 default
@@ -1,393 +1,391 b''
1 1 # lfs - hash-preserving large file support using Git-LFS protocol
2 2 #
3 3 # Copyright 2017 Facebook, Inc.
4 4 #
5 5 # This software may be used and distributed according to the terms of the
6 6 # GNU General Public License version 2 or any later version.
7 7
8 8 """lfs - large file support (EXPERIMENTAL)
9 9
10 10 This extension allows large files to be tracked outside of the normal
11 11 repository storage and stored on a centralized server, similar to the
12 12 ``largefiles`` extension. The ``git-lfs`` protocol is used when
13 13 communicating with the server, so existing git infrastructure can be
14 14 harnessed. Even though the files are stored outside of the repository,
15 15 they are still integrity checked in the same manner as normal files.
16 16
17 17 The files stored outside of the repository are downloaded on demand,
18 18 which reduces the time to clone, and possibly the local disk usage.
19 19 This changes fundamental workflows in a DVCS, so careful thought
20 20 should be given before deploying it. :hg:`convert` can be used to
21 21 convert LFS repositories to normal repositories that no longer
22 22 require this extension, and do so without changing the commit hashes.
23 23 This allows the extension to be disabled if the centralized workflow
24 24 becomes burdensome. However, the pre and post convert clones will
25 25 not be able to communicate with each other unless the extension is
26 26 enabled on both.
27 27
28 28 To start a new repository, or to add LFS files to an existing one, just
29 29 create an ``.hglfs`` file as described below in the root directory of
30 30 the repository. Typically, this file should be put under version
31 31 control, so that the settings will propagate to other repositories with
32 32 push and pull. During any commit, Mercurial will consult this file to
33 33 determine if an added or modified file should be stored externally. The
34 34 type of storage depends on the characteristics of the file at each
35 35 commit. A file that is near a size threshold may switch back and forth
36 36 between LFS and normal storage, as needed.
37 37
38 38 Alternatively, both normal repositories and largefile controlled
39 39 repositories can be converted to LFS by using :hg:`convert` and the
40 40 ``lfs.track`` config option described below. The ``.hglfs`` file
41 41 should then be created and added, to control subsequent LFS selection.
42 42 The hashes are also unchanged in this case. The LFS and non-LFS
43 43 repositories can be distinguished because the LFS repository will
44 44 abort any command if this extension is disabled.
45 45
46 46 Committed LFS files are held locally, until the repository is pushed.
47 47 Prior to pushing the normal repository data, the LFS files that are
48 48 tracked by the outgoing commits are automatically uploaded to the
49 49 configured central server. No LFS files are transferred on
50 50 :hg:`pull` or :hg:`clone`. Instead, the files are downloaded on
51 51 demand as they need to be read, if a cached copy cannot be found
52 52 locally. Both committing and downloading an LFS file will link the
53 53 file to a usercache, to speed up future access. See the `usercache`
54 54 config setting described below.
55 55
56 56 .hglfs::
57 57
58 58 The extension reads its configuration from a versioned ``.hglfs``
59 59 configuration file found in the root of the working directory. The
60 60 ``.hglfs`` file uses the same syntax as all other Mercurial
61 61 configuration files. It uses a single section, ``[track]``.
62 62
63 63 The ``[track]`` section specifies which files are stored as LFS (or
64 64 not). Each line is keyed by a file pattern, with a predicate value.
65 65 The first file pattern match is used, so put more specific patterns
66 66 first. The available predicates are ``all()``, ``none()``, and
67 67 ``size()``. See "hg help filesets.size" for the latter.
68 68
69 69 Example versioned ``.hglfs`` file::
70 70
71 71 [track]
72 72 # No Makefile or python file, anywhere, will be LFS
73 73 **Makefile = none()
74 74 **.py = none()
75 75
76 76 **.zip = all()
77 77 **.exe = size(">1MB")
78 78
79 79 # Catchall for everything not matched above
80 80 ** = size(">10MB")
81 81
82 82 Configs::
83 83
84 84 [lfs]
85 85 # Remote endpoint. Multiple protocols are supported:
86 86 # - http(s)://user:pass@example.com/path
87 87 # git-lfs endpoint
88 88 # - file:///tmp/path
89 89 # local filesystem, usually for testing
90 90 # If unset, lfs will prompt to set this when it must use this value.
91 91 # (default: unset)
92 92 url = https://example.com/repo.git/info/lfs
93 93
94 94 # Which files to track in LFS. Path tests are "**.extname" for file
95 95 # extensions, and "path:under/some/directory" for path prefix. Both
96 96 # are relative to the repository root.
97 97 # File size can be tested with the "size()" fileset, and tests can be
98 98 # joined with fileset operators. (See "hg help filesets.operators".)
99 99 #
100 100 # Some examples:
101 101 # - all() # everything
102 102 # - none() # nothing
103 103 # - size(">20MB") # larger than 20MB
104 104 # - !**.txt # anything not a *.txt file
105 105 # - **.zip | **.tar.gz | **.7z # some types of compressed files
106 106 # - path:bin # files under "bin" in the project root
107 107 # - (**.php & size(">2MB")) | (**.js & size(">5MB")) | **.tar.gz
108 108 # | (path:bin & !path:/bin/README) | size(">1GB")
109 109 # (default: none())
110 110 #
111 111 # This is ignored if there is a tracked '.hglfs' file, and this setting
112 112 # will eventually be deprecated and removed.
113 113 track = size(">10M")
114 114
115 115 # how many times to retry before giving up on transferring an object
116 116 retry = 5
117 117
118 118 # the local directory to store lfs files for sharing across local clones.
119 119 # If not set, the cache is located in an OS specific cache location.
120 120 usercache = /path/to/global/cache
121 121 """
122 122
123 123 from __future__ import absolute_import
124 124
125 125 from mercurial.i18n import _
126 126
127 127 from mercurial import (
128 128 bundle2,
129 129 changegroup,
130 130 cmdutil,
131 131 config,
132 132 context,
133 133 error,
134 134 exchange,
135 135 extensions,
136 136 filelog,
137 137 fileset,
138 138 hg,
139 139 localrepo,
140 140 minifileset,
141 141 node,
142 142 pycompat,
143 143 registrar,
144 144 revlog,
145 145 scmutil,
146 146 templateutil,
147 147 upgrade,
148 148 util,
149 149 vfs as vfsmod,
150 150 wireproto,
151 151 wireprotoserver,
152 152 )
153 153
154 154 from . import (
155 155 blobstore,
156 156 wireprotolfsserver,
157 157 wrapper,
158 158 )
159 159
160 160 # Note for extension authors: ONLY specify testedwith = 'ships-with-hg-core' for
161 161 # extensions which SHIP WITH MERCURIAL. Non-mainline extensions should
162 162 # be specifying the version(s) of Mercurial they are tested with, or
163 163 # leave the attribute unspecified.
164 164 testedwith = 'ships-with-hg-core'
165 165
166 166 configtable = {}
167 167 configitem = registrar.configitem(configtable)
168 168
169 169 configitem('experimental', 'lfs.user-agent',
170 170 default=None,
171 171 )
172 172 configitem('experimental', 'lfs.worker-enable',
173 173 default=False,
174 174 )
175 175
176 176 configitem('lfs', 'url',
177 177 default=None,
178 178 )
179 179 configitem('lfs', 'usercache',
180 180 default=None,
181 181 )
182 182 # Deprecated
183 183 configitem('lfs', 'threshold',
184 184 default=None,
185 185 )
186 186 configitem('lfs', 'track',
187 187 default='none()',
188 188 )
189 189 configitem('lfs', 'retry',
190 190 default=5,
191 191 )
192 192
193 193 cmdtable = {}
194 194 command = registrar.command(cmdtable)
195 195
196 196 templatekeyword = registrar.templatekeyword()
197 197 filesetpredicate = registrar.filesetpredicate()
198 198
199 199 def featuresetup(ui, supported):
200 200 # don't die on seeing a repo with the lfs requirement
201 201 supported |= {'lfs'}
202 202
203 203 def uisetup(ui):
204 204 localrepo.featuresetupfuncs.add(featuresetup)
205 205
206 206 def reposetup(ui, repo):
207 207 # Nothing to do with a remote repo
208 208 if not repo.local():
209 209 return
210 210
211 211 repo.svfs.lfslocalblobstore = blobstore.local(repo)
212 212 repo.svfs.lfsremoteblobstore = blobstore.remote(repo)
213 213
214 214 class lfsrepo(repo.__class__):
215 215 @localrepo.unfilteredmethod
216 216 def commitctx(self, ctx, error=False):
217 217 repo.svfs.options['lfstrack'] = _trackedmatcher(self)
218 218 return super(lfsrepo, self).commitctx(ctx, error)
219 219
220 220 repo.__class__ = lfsrepo
221 221
222 222 if 'lfs' not in repo.requirements:
223 223 def checkrequireslfs(ui, repo, **kwargs):
224 224 if 'lfs' not in repo.requirements:
225 225 last = kwargs.get(r'node_last')
226 226 _bin = node.bin
227 227 if last:
228 228 s = repo.set('%n:%n', _bin(kwargs[r'node']), _bin(last))
229 229 else:
230 230 s = repo.set('%n', _bin(kwargs[r'node']))
231 231 match = repo.narrowmatch()
232 232 for ctx in s:
233 233 # TODO: is there a way to just walk the files in the commit?
234 234 if any(ctx[f].islfs() for f in ctx.files()
235 235 if f in ctx and match(f)):
236 236 repo.requirements.add('lfs')
237 237 repo._writerequirements()
238 238 repo.prepushoutgoinghooks.add('lfs', wrapper.prepush)
239 239 break
240 240
241 241 ui.setconfig('hooks', 'commit.lfs', checkrequireslfs, 'lfs')
242 242 ui.setconfig('hooks', 'pretxnchangegroup.lfs', checkrequireslfs, 'lfs')
243 243 else:
244 244 repo.prepushoutgoinghooks.add('lfs', wrapper.prepush)
245 245
246 246 def _trackedmatcher(repo):
247 247 """Return a function (path, size) -> bool indicating whether or not to
248 248 track a given file with lfs."""
249 249 if not repo.wvfs.exists('.hglfs'):
250 250 # No '.hglfs' in wdir. Fallback to config for now.
251 251 trackspec = repo.ui.config('lfs', 'track')
252 252
253 253 # deprecated config: lfs.threshold
254 254 threshold = repo.ui.configbytes('lfs', 'threshold')
255 255 if threshold:
256 256 fileset.parse(trackspec) # make sure syntax errors are confined
257 257 trackspec = "(%s) | size('>%d')" % (trackspec, threshold)
258 258
259 259 return minifileset.compile(trackspec)
260 260
261 261 data = repo.wvfs.tryread('.hglfs')
262 262 if not data:
263 263 return lambda p, s: False
264 264
265 265 # Parse errors here will abort with a message that points to the .hglfs file
266 266 # and line number.
267 267 cfg = config.config()
268 268 cfg.parse('.hglfs', data)
269 269
270 270 try:
271 271 rules = [(minifileset.compile(pattern), minifileset.compile(rule))
272 272 for pattern, rule in cfg.items('track')]
273 273 except error.ParseError as e:
274 274 # The original exception gives no indicator that the error is in the
275 275 # .hglfs file, so add that.
276 276
277 277 # TODO: See if the line number of the file can be made available.
278 278 raise error.Abort(_('parse error in .hglfs: %s') % e)
279 279
280 280 def _match(path, size):
281 281 for pat, rule in rules:
282 282 if pat(path, size):
283 283 return rule(path, size)
284 284
285 285 return False
286 286
287 287 return _match
288 288
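# For illustration (a hypothetical sketch, not part of this change): with a
# .hglfs [track] section of "**.zip = all()" followed by "** = size('>10MB')",
# the first matching pattern wins, so the matcher returned above behaves like:
#
#   match = _trackedmatcher(repo)
#   match('a.zip', 100)              # -> True  (**.zip -> all())
#   match('a.txt', 5 * 1024 * 1024)  # -> False (** -> size('>10MB'))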
289 289 def wrapfilelog(filelog):
290 290 wrapfunction = extensions.wrapfunction
291 291
292 292 wrapfunction(filelog, 'addrevision', wrapper.filelogaddrevision)
293 293 wrapfunction(filelog, 'renamed', wrapper.filelogrenamed)
294 294 wrapfunction(filelog, 'size', wrapper.filelogsize)
295 295
296 296 def extsetup(ui):
297 297 wrapfilelog(filelog.filelog)
298 298
299 299 wrapfunction = extensions.wrapfunction
300 300
301 301 wrapfunction(cmdutil, '_updatecatformatter', wrapper._updatecatformatter)
302 302 wrapfunction(scmutil, 'wrapconvertsink', wrapper.convertsink)
303 303
304 304 wrapfunction(upgrade, '_finishdatamigration',
305 305 wrapper.upgradefinishdatamigration)
306 306
307 307 wrapfunction(upgrade, 'preservedrequirements',
308 308 wrapper.upgraderequirements)
309 309
310 310 wrapfunction(upgrade, 'supporteddestrequirements',
311 311 wrapper.upgraderequirements)
312 312
313 313 wrapfunction(changegroup,
314 314 'allsupportedversions',
315 315 wrapper.allsupportedversions)
316 316
317 317 wrapfunction(exchange, 'push', wrapper.push)
318 318 wrapfunction(wireproto, '_capabilities', wrapper._capabilities)
319 319 wrapfunction(wireprotoserver, 'handlewsgirequest',
320 320 wireprotolfsserver.handlewsgirequest)
321 321
322 322 wrapfunction(context.basefilectx, 'cmp', wrapper.filectxcmp)
323 323 wrapfunction(context.basefilectx, 'isbinary', wrapper.filectxisbinary)
324 324 context.basefilectx.islfs = wrapper.filectxislfs
325 325
326 326 revlog.addflagprocessor(
327 327 revlog.REVIDX_EXTSTORED,
328 328 (
329 329 wrapper.readfromstore,
330 330 wrapper.writetostore,
331 331 wrapper.bypasscheckhash,
332 332 ),
333 333 )
334 334
335 335 wrapfunction(hg, 'clone', wrapper.hgclone)
336 336 wrapfunction(hg, 'postshare', wrapper.hgpostshare)
337 337
338 338 scmutil.fileprefetchhooks.add('lfs', wrapper._prefetchfiles)
339 339
340 340 # Make bundle choose changegroup3 instead of changegroup2. This affects
341 341 # "hg bundle" command. Note: it does not cover all bundle formats like
342 342 # "packed1". Using "packed1" with lfs will likely cause trouble.
343 names = [k for k, v in exchange._bundlespeccgversions.items() if v == '02']
344 for k in names:
345 exchange._bundlespeccgversions[k] = '03'
343 exchange._bundlespeccontentopts["v2"]["cg.version"] = "03"
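# A sketch of what the single-key override above effects, assuming the
# _bundlespeccontentopts layout this commit introduces:
#
#   exchange._bundlespeccontentopts["v2"]["cg.version"]  # "02" -> "03"
#
# changegroup3 can carry the revlog flags (such as REVIDX_EXTSTORED,
# registered above) that lfs depends on; changegroup2 cannot.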
346 344
347 345 # bundlerepo uses "vfsmod.readonlyvfs(othervfs)", we need to make sure lfs
348 346 # options and blob stores are passed from othervfs to the new readonlyvfs.
349 347 wrapfunction(vfsmod.readonlyvfs, '__init__', wrapper.vfsinit)
350 348
351 349 # when writing a bundle via "hg bundle" command, upload related LFS blobs
352 350 wrapfunction(bundle2, 'writenewbundle', wrapper.writenewbundle)
353 351
354 352 @filesetpredicate('lfs()', callstatus=True)
355 353 def lfsfileset(mctx, x):
356 354 """File that uses LFS storage."""
357 355 # i18n: "lfs" is a keyword
358 356 fileset.getargs(x, 0, 0, _("lfs takes no arguments"))
359 357 return [f for f in mctx.subset
360 358 if wrapper.pointerfromctx(mctx.ctx, f, removed=True) is not None]
361 359
362 360 @templatekeyword('lfs_files', requires={'ctx'})
363 361 def lfsfiles(context, mapping):
364 362 """List of strings. All files modified, added, or removed by this
365 363 changeset."""
366 364 ctx = context.resource(mapping, 'ctx')
367 365
368 366 pointers = wrapper.pointersfromctx(ctx, removed=True) # {path: pointer}
369 367 files = sorted(pointers.keys())
370 368
371 369 def pointer(v):
372 370 # In the file spec, version is first and the other keys are sorted.
373 371 sortkeyfunc = lambda x: (x[0] != 'version', x)
374 372 items = sorted(pointers[v].iteritems(), key=sortkeyfunc)
375 373 return util.sortdict(items)
376 374
377 375 makemap = lambda v: {
378 376 'file': v,
379 377 'lfsoid': pointers[v].oid() if pointers[v] else None,
380 378 'lfspointer': templateutil.hybriddict(pointer(v)),
381 379 }
382 380
383 381 # TODO: make the separator ', '?
384 382 f = templateutil._showcompatlist(context, mapping, 'lfs_file', files)
385 383 return templateutil.hybrid(f, files, makemap, pycompat.identity)
386 384
387 385 @command('debuglfsupload',
388 386 [('r', 'rev', [], _('upload large files introduced by REV'))])
389 387 def debuglfsupload(ui, repo, **opts):
390 388 """upload lfs blobs added by the working copy parent or given revisions"""
391 389 revs = opts.get(r'rev', [])
392 390 pointers = wrapper.extractpointers(repo, scmutil.revrange(repo, revs))
393 391 wrapper.uploadblobs(repo, pointers)
@@ -1,2225 +1,2228 b''
1 1 # bundle2.py - generic container format to transmit arbitrary data.
2 2 #
3 3 # Copyright 2013 Facebook, Inc.
4 4 #
5 5 # This software may be used and distributed according to the terms of the
6 6 # GNU General Public License version 2 or any later version.
7 7 """Handling of the new bundle2 format
8 8
9 9 The goal of bundle2 is to act as an atomic container to transmit a set of
10 10 payloads in an application agnostic way. It consists of a sequence of "parts"
11 11 that will be handed to and processed by the application layer.
12 12
13 13
14 14 General format architecture
15 15 ===========================
16 16
17 17 The format is structured as follows
18 18
19 19 - magic string
20 20 - stream level parameters
21 21 - payload parts (any number)
22 22 - end of stream marker.
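
For instance, under this layout the smallest valid stream is the magic
string, an int32 zero for the stream parameters size, and an int32 zero
acting as the end of stream marker (a sketch based on the sizes defined
below)::

    import struct
    emptybundle = b'HG20' + struct.pack('>i', 0) + struct.pack('>i', 0)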
23 23
24 24 The binary format
25 25 ============================
26 26
27 27 All numbers are unsigned and big-endian.
28 28
29 29 stream level parameters
30 30 ------------------------
31 31
32 32 The binary format is as follows
33 33
34 34 :params size: int32
35 35
36 36 The total number of Bytes used by the parameters
37 37
38 38 :params value: arbitrary number of Bytes
39 39
40 40 A blob of `params size` containing the serialized version of all stream level
41 41 parameters.
42 42
43 43 The blob contains a space separated list of parameters. Parameters with a value
44 44 are stored in the form `<name>=<value>`. Both name and value are urlquoted.
45 45
46 46 Empty names are forbidden.
47 47
48 48 Names MUST start with a letter. If this first letter is lower case, the
49 49 parameter is advisory and can be safely ignored. However, when the first
50 50 letter is capital, the parameter is mandatory and the process reading the
51 51 bundle MUST stop if it is not able to handle it.
52 52
53 53 Stream parameters use a simple textual format for two main reasons:
54 54
55 55 - Stream level parameters should remain simple and we want to discourage any
56 56 crazy usage.
57 57 - Textual data allow easy human inspection of a bundle2 header in case of
58 58 troubles.
59 59
60 60 Any application level options MUST go into a bundle2 part instead.
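
As an illustrative sketch (the real helpers are ``bundle20._paramchunk`` and
``unbundle20._processallparams`` below; stdlib quoting stands in for
``util.urlreq``)::

    import urllib.parse

    def encodeparams(params):
        # space separated list; names and values are urlquoted
        chunks = []
        for name, value in params:
            name = urllib.parse.quote(name)
            if value is not None:
                name = '%s=%s' % (name, urllib.parse.quote(value))
            chunks.append(name)
        return ' '.join(chunks)

    def decodeparams(blob):
        for p in blob.split(' '):
            name, sep, value = p.partition('=')
            yield (urllib.parse.unquote(name),
                   urllib.parse.unquote(value) if sep else None)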
61 61
62 62 Payload part
63 63 ------------------------
64 64
65 65 The binary format is as follows
66 66
67 67 :header size: int32
68 68
69 69 The total number of Bytes used by the part header. When the header is empty
70 70 (size = 0) this is interpreted as the end of stream marker.
71 71
72 72 :header:
73 73
74 74 The header defines how to interpret the part. It contains two pieces of
75 75 data: the part type, and the part parameters.
76 76
77 77 The part type is used to route to an application level handler that can
78 78 interpret the payload.
79 79
80 80 Part parameters are passed to the application level handler. They are
81 81 meant to convey information that will help the application level object to
82 82 interpret the part payload.
83 83
84 84 The binary format of the header is as follows
85 85
86 86 :typesize: (one byte)
87 87
88 88 :parttype: alphanumerical part name (restricted to [a-zA-Z0-9_:-]*)
89 89
90 90 :partid: A 32-bit integer (unique in the bundle) that can be used to refer
91 91 to this part.
92 92
93 93 :parameters:
94 94
95 95 A part's parameters may have arbitrary content; the binary structure is::
96 96
97 97 <mandatory-count><advisory-count><param-sizes><param-data>
98 98
99 99 :mandatory-count: 1 byte, number of mandatory parameters
100 100
101 101 :advisory-count: 1 byte, number of advisory parameters
102 102
103 103 :param-sizes:
104 104
105 105 N couples of bytes, where N is the total number of parameters. Each
106 106 couple contains (<size-of-key>, <size-of-value>) for one parameter.
107 107
108 108 :param-data:
109 109
110 110 A blob of bytes from which each parameter key and value can be
111 111 retrieved using the list of size couples stored in the previous
112 112 field.
113 113
114 114 Mandatory parameters come first, then the advisory ones.
115 115
116 116 Each parameter's key MUST be unique within the part.
117 117
118 118 :payload:
119 119
120 120 payload is a series of `<chunksize><chunkdata>`.
121 121
122 122 `chunksize` is an int32, `chunkdata` are plain bytes (as much as
123 123 `chunksize` says). The payload part is concluded by a zero size chunk.
124 124
125 125 The current implementation always produces either zero or one chunk.
126 126 This is an implementation limitation that will ultimately be lifted.
127 127
128 128 `chunksize` can be negative to trigger special case processing. No such
129 129 processing is in place yet.
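
A sketch of a conforming reader for this framing (``decodepayloadchunks``
later in this file is the real implementation)::

    import struct

    def iterchunks(fh):
        while True:
            size = struct.unpack('>i', fh.read(4))[0]
            if size == 0:        # zero size chunk concludes the payload
                return
            if size < 0:         # reserved for special case processing
                raise ValueError('special chunk sizes not handled here')
            yield fh.read(size)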
130 130
131 131 Bundle processing
132 132 ============================
133 133
134 134 Each part is processed in order using a "part handler". Handlers are
135 135 registered for a certain part type.
136 136
137 137 The matching of a part to its handler is case insensitive. The case of the
138 138 part type is used to know if a part is mandatory or advisory. If the part type
139 139 contains any uppercase char it is considered mandatory. When no handler is
140 140 known for a Mandatory part, the process is aborted and an exception is raised.
141 141 If the part is advisory and no handler is known, the part is ignored. When the
142 142 process is aborted, the full bundle is still read from the stream to keep the
143 143 channel usable. But none of the parts read after an abort are processed. In the
144 144 future, dropping the stream may become an option for channels we do not care to
145 145 preserve.
146 146 """
147 147
148 148 from __future__ import absolute_import, division
149 149
150 150 import collections
151 151 import errno
152 152 import os
153 153 import re
154 154 import string
155 155 import struct
156 156 import sys
157 157
158 158 from .i18n import _
159 159 from . import (
160 160 bookmarks,
161 161 changegroup,
162 162 encoding,
163 163 error,
164 164 node as nodemod,
165 165 obsolete,
166 166 phases,
167 167 pushkey,
168 168 pycompat,
169 169 streamclone,
170 170 tags,
171 171 url,
172 172 util,
173 173 )
174 174 from .utils import (
175 175 stringutil,
176 176 )
177 177
178 178 urlerr = util.urlerr
179 179 urlreq = util.urlreq
180 180
181 181 _pack = struct.pack
182 182 _unpack = struct.unpack
183 183
184 184 _fstreamparamsize = '>i'
185 185 _fpartheadersize = '>i'
186 186 _fparttypesize = '>B'
187 187 _fpartid = '>I'
188 188 _fpayloadsize = '>i'
189 189 _fpartparamcount = '>BB'
190 190
191 191 preferedchunksize = 32768
192 192
193 193 _parttypeforbidden = re.compile('[^a-zA-Z0-9_:-]')
194 194
195 195 def outdebug(ui, message):
196 196 """debug regarding output stream (bundling)"""
197 197 if ui.configbool('devel', 'bundle2.debug'):
198 198 ui.debug('bundle2-output: %s\n' % message)
199 199
200 200 def indebug(ui, message):
201 201 """debug on input stream (unbundling)"""
202 202 if ui.configbool('devel', 'bundle2.debug'):
203 203 ui.debug('bundle2-input: %s\n' % message)
204 204
205 205 def validateparttype(parttype):
206 206 """raise ValueError if a parttype contains an invalid character"""
207 207 if _parttypeforbidden.search(parttype):
208 208 raise ValueError(parttype)
209 209
210 210 def _makefpartparamsizes(nbparams):
211 211 """return a struct format to read part parameter sizes
212 212
213 213 The number of parameters is variable so we need to build that format
214 214 dynamically.
215 215 """
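    # e.g. _makefpartparamsizes(2) -> '>BBBB': one (key size, value size)
    # byte pair per parameter, matching <param-sizes> in the module docstring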
216 216 return '>'+('BB'*nbparams)
217 217
218 218 parthandlermapping = {}
219 219
220 220 def parthandler(parttype, params=()):
221 221 """decorator that registers a function as a bundle2 part handler
222 222
223 223 eg::
224 224
225 225 @parthandler('myparttype', ('mandatory', 'param', 'handled'))
226 226 def myparttypehandler(...):
227 227 '''process a part of type "my part".'''
228 228 ...
229 229 """
230 230 validateparttype(parttype)
231 231 def _decorator(func):
232 232 lparttype = parttype.lower() # enforce lower case matching.
233 233 assert lparttype not in parthandlermapping
234 234 parthandlermapping[lparttype] = func
235 235 func.params = frozenset(params)
236 236 return func
237 237 return _decorator
238 238
239 239 class unbundlerecords(object):
240 240 """keep record of what happens during an unbundle
241 241
242 242 New records are added using `records.add('cat', obj)`, where 'cat' is a
243 243 category of record and obj is an arbitrary object.
244 244
245 245 `records['cat']` will return all entries of this category 'cat'.
246 246
247 247 Iterating on the object itself will yield `('category', obj)` tuples
248 248 for all entries.
249 249
250 250 All iterations happen in chronological order.
251 251 """
252 252
253 253 def __init__(self):
254 254 self._categories = {}
255 255 self._sequences = []
256 256 self._replies = {}
257 257
258 258 def add(self, category, entry, inreplyto=None):
259 259 """add a new record of a given category.
260 260
261 261 The entry can then be retrieved in the list returned by
262 262 self['category']."""
263 263 self._categories.setdefault(category, []).append(entry)
264 264 self._sequences.append((category, entry))
265 265 if inreplyto is not None:
266 266 self.getreplies(inreplyto).add(category, entry)
267 267
268 268 def getreplies(self, partid):
269 269 """get the records that are replies to a specific part"""
270 270 return self._replies.setdefault(partid, unbundlerecords())
271 271
272 272 def __getitem__(self, cat):
273 273 return tuple(self._categories.get(cat, ()))
274 274
275 275 def __iter__(self):
276 276 return iter(self._sequences)
277 277
278 278 def __len__(self):
279 279 return len(self._sequences)
280 280
281 281 def __nonzero__(self):
282 282 return bool(self._sequences)
283 283
284 284 __bool__ = __nonzero__
285 285
286 286 class bundleoperation(object):
287 287 """an object that represents a single bundling process
288 288
289 289 Its purpose is to carry unbundle-related objects and states.
290 290
291 291 A new object should be created at the beginning of each bundle processing.
292 292 The object is to be returned by the processing function.
293 293
294 294 The object has very little content now; it will ultimately contain:
295 295 * an access to the repo the bundle is applied to,
296 296 * a ui object,
297 297 * a way to retrieve a transaction to add changes to the repo,
298 298 * a way to record the result of processing each part,
299 299 * a way to construct a bundle response when applicable.
300 300 """
301 301
302 302 def __init__(self, repo, transactiongetter, captureoutput=True):
303 303 self.repo = repo
304 304 self.ui = repo.ui
305 305 self.records = unbundlerecords()
306 306 self.reply = None
307 307 self.captureoutput = captureoutput
308 308 self.hookargs = {}
309 309 self._gettransaction = transactiongetter
310 310 # carries value that can modify part behavior
311 311 self.modes = {}
312 312
313 313 def gettransaction(self):
314 314 transaction = self._gettransaction()
315 315
316 316 if self.hookargs:
317 317 # the ones added to the transaction supersede those added
318 318 # to the operation.
319 319 self.hookargs.update(transaction.hookargs)
320 320 transaction.hookargs = self.hookargs
321 321
322 322 # mark the hookargs as flushed. further attempts to add to
323 323 # hookargs will result in an abort.
324 324 self.hookargs = None
325 325
326 326 return transaction
327 327
328 328 def addhookargs(self, hookargs):
329 329 if self.hookargs is None:
330 330 raise error.ProgrammingError('attempted to add hookargs to '
331 331 'operation after transaction started')
332 332 self.hookargs.update(hookargs)
333 333
334 334 class TransactionUnavailable(RuntimeError):
335 335 pass
336 336
337 337 def _notransaction():
338 338 """default method to get a transaction while processing a bundle
339 339
340 340 Raise an exception to highlight the fact that no transaction was expected
341 341 to be created"""
342 342 raise TransactionUnavailable()
343 343
344 344 def applybundle(repo, unbundler, tr, source=None, url=None, **kwargs):
345 345 # transform me into unbundler.apply() as soon as the freeze is lifted
346 346 if isinstance(unbundler, unbundle20):
347 347 tr.hookargs['bundle2'] = '1'
348 348 if source is not None and 'source' not in tr.hookargs:
349 349 tr.hookargs['source'] = source
350 350 if url is not None and 'url' not in tr.hookargs:
351 351 tr.hookargs['url'] = url
352 352 return processbundle(repo, unbundler, lambda: tr)
353 353 else:
354 354 # the transactiongetter won't be used, but we might as well set it
355 355 op = bundleoperation(repo, lambda: tr)
356 356 _processchangegroup(op, unbundler, tr, source, url, **kwargs)
357 357 return op
358 358
359 359 class partiterator(object):
360 360 def __init__(self, repo, op, unbundler):
361 361 self.repo = repo
362 362 self.op = op
363 363 self.unbundler = unbundler
364 364 self.iterator = None
365 365 self.count = 0
366 366 self.current = None
367 367
368 368 def __enter__(self):
369 369 def func():
370 370 itr = enumerate(self.unbundler.iterparts())
371 371 for count, p in itr:
372 372 self.count = count
373 373 self.current = p
374 374 yield p
375 375 p.consume()
376 376 self.current = None
377 377 self.iterator = func()
378 378 return self.iterator
379 379
380 380 def __exit__(self, type, exc, tb):
381 381 if not self.iterator:
382 382 return
383 383
384 384 # Only gracefully abort in a normal exception situation. User aborts
385 385 # like Ctrl+C throw a KeyboardInterrupt, which derives from BaseException
386 386 # rather than Exception, and should not be gracefully cleaned up.
387 387 if isinstance(exc, Exception):
388 388 # Any exceptions seeking to the end of the bundle at this point are
389 389 # almost certainly related to the underlying stream being bad.
390 390 # And, chances are that the exception we're handling is related to
391 391 # getting in that bad state. So, we swallow the seeking error and
392 392 # re-raise the original error.
393 393 seekerror = False
394 394 try:
395 395 if self.current:
396 396 # consume the part content to not corrupt the stream.
397 397 self.current.consume()
398 398
399 399 for part in self.iterator:
400 400 # consume the bundle content
401 401 part.consume()
402 402 except Exception:
403 403 seekerror = True
404 404
405 405 # Small hack to let caller code distinguish exceptions from bundle2
406 406 # processing from processing the old format. This is mostly needed
407 407 # to handle different return codes to unbundle according to the type
408 408 # of bundle. We should probably clean up or drop this return code
409 409 # craziness in a future version.
410 410 exc.duringunbundle2 = True
411 411 salvaged = []
412 412 replycaps = None
413 413 if self.op.reply is not None:
414 414 salvaged = self.op.reply.salvageoutput()
415 415 replycaps = self.op.reply.capabilities
416 416 exc._replycaps = replycaps
417 417 exc._bundle2salvagedoutput = salvaged
418 418
419 419 # Re-raising from a variable loses the original stack. So only use
420 420 # that form if we need to.
421 421 if seekerror:
422 422 raise exc
423 423
424 424 self.repo.ui.debug('bundle2-input-bundle: %i parts total\n' %
425 425 self.count)
426 426
427 427 def processbundle(repo, unbundler, transactiongetter=None, op=None):
428 428 """This function processes a bundle, applying its effects to/from a repo
429 429
430 430 It iterates over each part then searches for and uses the proper handling
431 431 code to process the part. Parts are processed in order.
432 432
433 433 An unknown Mandatory part will abort the process.
434 434
435 435 It is temporarily possible to provide a prebuilt bundleoperation to the
436 436 function. This is used to ensure output is properly propagated in case of
437 437 an error during the unbundling. This output capturing part will likely be
438 438 reworked and this ability will probably go away in the process.
439 439 """
440 440 if op is None:
441 441 if transactiongetter is None:
442 442 transactiongetter = _notransaction
443 443 op = bundleoperation(repo, transactiongetter)
444 444 # todo:
445 445 # - replace this with an init function soon.
446 446 # - exception catching
447 447 unbundler.params
448 448 if repo.ui.debugflag:
449 449 msg = ['bundle2-input-bundle:']
450 450 if unbundler.params:
451 451 msg.append(' %i params' % len(unbundler.params))
452 452 if op._gettransaction is None or op._gettransaction is _notransaction:
453 453 msg.append(' no-transaction')
454 454 else:
455 455 msg.append(' with-transaction')
456 456 msg.append('\n')
457 457 repo.ui.debug(''.join(msg))
458 458
459 459 processparts(repo, op, unbundler)
460 460
461 461 return op
462 462
463 463 def processparts(repo, op, unbundler):
464 464 with partiterator(repo, op, unbundler) as parts:
465 465 for part in parts:
466 466 _processpart(op, part)
467 467
468 468 def _processchangegroup(op, cg, tr, source, url, **kwargs):
469 469 ret = cg.apply(op.repo, tr, source, url, **kwargs)
470 470 op.records.add('changegroup', {
471 471 'return': ret,
472 472 })
473 473 return ret
474 474
475 475 def _gethandler(op, part):
476 476 status = 'unknown' # used by debug output
477 477 try:
478 478 handler = parthandlermapping.get(part.type)
479 479 if handler is None:
480 480 status = 'unsupported-type'
481 481 raise error.BundleUnknownFeatureError(parttype=part.type)
482 482 indebug(op.ui, 'found a handler for part %s' % part.type)
483 483 unknownparams = part.mandatorykeys - handler.params
484 484 if unknownparams:
485 485 unknownparams = list(unknownparams)
486 486 unknownparams.sort()
487 487 status = 'unsupported-params (%s)' % ', '.join(unknownparams)
488 488 raise error.BundleUnknownFeatureError(parttype=part.type,
489 489 params=unknownparams)
490 490 status = 'supported'
491 491 except error.BundleUnknownFeatureError as exc:
492 492 if part.mandatory: # mandatory parts
493 493 raise
494 494 indebug(op.ui, 'ignoring unsupported advisory part %s' % exc)
495 495 return # skip to part processing
496 496 finally:
497 497 if op.ui.debugflag:
498 498 msg = ['bundle2-input-part: "%s"' % part.type]
499 499 if not part.mandatory:
500 500 msg.append(' (advisory)')
501 501 nbmp = len(part.mandatorykeys)
502 502 nbap = len(part.params) - nbmp
503 503 if nbmp or nbap:
504 504 msg.append(' (params:')
505 505 if nbmp:
506 506 msg.append(' %i mandatory' % nbmp)
507 507 if nbap:
508 508 msg.append(' %i advisory' % nbap)
509 509 msg.append(')')
510 510 msg.append(' %s\n' % status)
511 511 op.ui.debug(''.join(msg))
512 512
513 513 return handler
514 514
515 515 def _processpart(op, part):
516 516 """process a single part from a bundle
517 517
518 518 The part is guaranteed to have been fully consumed when the function exits
519 519 (even if an exception is raised)."""
520 520 handler = _gethandler(op, part)
521 521 if handler is None:
522 522 return
523 523
524 524 # handler is called outside the above try block so that we don't
525 525 # risk catching KeyErrors from anything other than the
526 526 # parthandlermapping lookup (any KeyError raised by handler()
527 527 # itself represents a defect of a different variety).
528 528 output = None
529 529 if op.captureoutput and op.reply is not None:
530 530 op.ui.pushbuffer(error=True, subproc=True)
531 531 output = ''
532 532 try:
533 533 handler(op, part)
534 534 finally:
535 535 if output is not None:
536 536 output = op.ui.popbuffer()
537 537 if output:
538 538 outpart = op.reply.newpart('output', data=output,
539 539 mandatory=False)
540 540 outpart.addparam(
541 541 'in-reply-to', pycompat.bytestr(part.id), mandatory=False)
542 542
543 543 def decodecaps(blob):
544 544 """decode a bundle2 caps bytes blob into a dictionary
545 545
546 546 The blob is a list of capabilities (one per line)
547 547 Capabilities may have values using a line of the form::
548 548
549 549 capability=value1,value2,value3
550 550
551 551 The values are always a list."""
552 552 caps = {}
553 553 for line in blob.splitlines():
554 554 if not line:
555 555 continue
556 556 if '=' not in line:
557 557 key, vals = line, ()
558 558 else:
559 559 key, vals = line.split('=', 1)
560 560 vals = vals.split(',')
561 561 key = urlreq.unquote(key)
562 562 vals = [urlreq.unquote(v) for v in vals]
563 563 caps[key] = vals
564 564 return caps
565 565
566 566 def encodecaps(caps):
567 567 """encode a bundle2 caps dictionary into a bytes blob"""
568 568 chunks = []
569 569 for ca in sorted(caps):
570 570 vals = caps[ca]
571 571 ca = urlreq.quote(ca)
572 572 vals = [urlreq.quote(v) for v in vals]
573 573 if vals:
574 574 ca = "%s=%s" % (ca, ','.join(vals))
575 575 chunks.append(ca)
576 576 return '\n'.join(chunks)
577 577
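# A round-trip sketch of the two helpers above (values always decode to a
# list, per the decodecaps docstring):
#
#   blob = encodecaps({'HG20': [], 'changegroup': ['01', '02']})
#   assert decodecaps(blob) == {'HG20': [], 'changegroup': ['01', '02']}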
578 578 bundletypes = {
579 579 "": ("", 'UN'), # only when using unbundle on ssh and old http servers
580 580 # since the unification ssh accepts a header but there
581 581 # is no capability signaling it.
582 582 "HG20": (), # special-cased below
583 583 "HG10UN": ("HG10UN", 'UN'),
584 584 "HG10BZ": ("HG10", 'BZ'),
585 585 "HG10GZ": ("HG10GZ", 'GZ'),
586 586 }
587 587
588 588 # hgweb uses this list to communicate its preferred type
589 589 bundlepriority = ['HG10GZ', 'HG10BZ', 'HG10UN']
590 590
591 591 class bundle20(object):
592 592 """represent an outgoing bundle2 container
593 593
594 594 Use the `addparam` method to add stream level parameters and `newpart` to
595 595 populate it. Then call `getchunks` to retrieve all the binary chunks of
596 596 data that compose the bundle2 container."""
597 597
598 598 _magicstring = 'HG20'
599 599
600 600 def __init__(self, ui, capabilities=()):
601 601 self.ui = ui
602 602 self._params = []
603 603 self._parts = []
604 604 self.capabilities = dict(capabilities)
605 605 self._compengine = util.compengines.forbundletype('UN')
606 606 self._compopts = None
607 607 # If compression is being handled by a consumer of the raw
608 608 # data (e.g. the wire protocol), unsetting this flag tells
609 609 # consumers that the bundle is best left uncompressed.
610 610 self.prefercompressed = True
611 611
612 612 def setcompression(self, alg, compopts=None):
613 613 """setup core part compression to <alg>"""
614 614 if alg in (None, 'UN'):
615 615 return
616 616 assert not any(n.lower() == 'compression' for n, v in self._params)
617 617 self.addparam('Compression', alg)
618 618 self._compengine = util.compengines.forbundletype(alg)
619 619 self._compopts = compopts
620 620
621 621 @property
622 622 def nbparts(self):
623 623 """total number of parts added to the bundler"""
624 624 return len(self._parts)
625 625
626 626 # methods used to define the bundle2 content
627 627 def addparam(self, name, value=None):
628 628 """add a stream level parameter"""
629 629 if not name:
630 630 raise ValueError(r'empty parameter name')
631 631 if name[0:1] not in pycompat.bytestr(string.ascii_letters):
632 632 raise ValueError(r'non letter first character: %s' % name)
633 633 self._params.append((name, value))
634 634
635 635 def addpart(self, part):
636 636 """add a new part to the bundle2 container
637 637
638 638 Parts contain the actual applicative payload.
639 639 assert part.id is None
640 640 part.id = len(self._parts) # very cheap counter
641 641 self._parts.append(part)
642 642
643 643 def newpart(self, typeid, *args, **kwargs):
644 644 """create a new part and add it to the container
645 645
646 646 The part is directly added to the container. For now, this means
647 647 that any failure to properly initialize the part after calling
648 648 ``newpart`` should result in a failure of the whole bundling process.
649 649
650 650 You can still fall back to manually create and add if you need better
651 651 control."""
652 652 part = bundlepart(typeid, *args, **kwargs)
653 653 self.addpart(part)
654 654 return part
655 655
656 656 # methods used to generate the bundle2 stream
657 657 def getchunks(self):
658 658 if self.ui.debugflag:
659 659 msg = ['bundle2-output-bundle: "%s",' % self._magicstring]
660 660 if self._params:
661 661 msg.append(' (%i params)' % len(self._params))
662 662 msg.append(' %i parts total\n' % len(self._parts))
663 663 self.ui.debug(''.join(msg))
664 664 outdebug(self.ui, 'start emission of %s stream' % self._magicstring)
665 665 yield self._magicstring
666 666 param = self._paramchunk()
667 667 outdebug(self.ui, 'bundle parameter: %s' % param)
668 668 yield _pack(_fstreamparamsize, len(param))
669 669 if param:
670 670 yield param
671 671 for chunk in self._compengine.compressstream(self._getcorechunk(),
672 672 self._compopts):
673 673 yield chunk
674 674
675 675 def _paramchunk(self):
676 676 """return an encoded version of all stream parameters"""
677 677 blocks = []
678 678 for par, value in self._params:
679 679 par = urlreq.quote(par)
680 680 if value is not None:
681 681 value = urlreq.quote(value)
682 682 par = '%s=%s' % (par, value)
683 683 blocks.append(par)
684 684 return ' '.join(blocks)
685 685
686 686 def _getcorechunk(self):
687 687 """yield chunk for the core part of the bundle
688 688
689 689 (all but headers and parameters)"""
690 690 outdebug(self.ui, 'start of parts')
691 691 for part in self._parts:
692 692 outdebug(self.ui, 'bundle part: "%s"' % part.type)
693 693 for chunk in part.getchunks(ui=self.ui):
694 694 yield chunk
695 695 outdebug(self.ui, 'end of bundle')
696 696 yield _pack(_fpartheadersize, 0)
697 697
698 698
699 699 def salvageoutput(self):
700 700 """return a list with a copy of all output parts in the bundle
701 701
702 702 This is meant to be used during error handling to make sure we preserve
703 703 server output"""
704 704 salvaged = []
705 705 for part in self._parts:
706 706 if part.type.startswith('output'):
707 707 salvaged.append(part.copy())
708 708 return salvaged
709 709
710 710
711 711 class unpackermixin(object):
712 712 """A mixin to extract bytes and struct data from a stream"""
713 713
714 714 def __init__(self, fp):
715 715 self._fp = fp
716 716
717 717 def _unpack(self, format):
718 718 """unpack this struct format from the stream
719 719
720 720 This method is meant for internal usage by the bundle2 protocol only.
721 721 They directly manipulate the low level stream including bundle2 level
722 722 instruction.
723 723
724 724 Do not use it to implement higher-level logic or methods."""
725 725 data = self._readexact(struct.calcsize(format))
726 726 return _unpack(format, data)
727 727
728 728 def _readexact(self, size):
729 729 """read exactly <size> bytes from the stream
730 730
731 731 This method is meant for internal usage by the bundle2 protocol only.
732 732 It directly manipulates the low level stream, including bundle2 level
733 733 instructions.
734 734
735 735 Do not use it to implement higher-level logic or methods."""
736 736 return changegroup.readexactly(self._fp, size)
737 737
738 738 def getunbundler(ui, fp, magicstring=None):
739 739 """return a valid unbundler object for a given magicstring"""
740 740 if magicstring is None:
741 741 magicstring = changegroup.readexactly(fp, 4)
742 742 magic, version = magicstring[0:2], magicstring[2:4]
743 743 if magic != 'HG':
744 744 ui.debug(
745 745 "error: invalid magic: %r (version %r), should be 'HG'\n"
746 746 % (magic, version))
747 747 raise error.Abort(_('not a Mercurial bundle'))
748 748 unbundlerclass = formatmap.get(version)
749 749 if unbundlerclass is None:
750 750 raise error.Abort(_('unknown bundle version %s') % version)
751 751 unbundler = unbundlerclass(ui, fp)
752 752 indebug(ui, 'start processing of %s stream' % magicstring)
753 753 return unbundler
754 754
755 755 class unbundle20(unpackermixin):
756 756 """interpret a bundle2 stream
757 757
758 758 This class is fed with a binary stream and yields parts through its
759 759 `iterparts` method."""
760 760
761 761 _magicstring = 'HG20'
762 762
763 763 def __init__(self, ui, fp):
764 764 """If header is specified, we do not read it out of the stream."""
765 765 self.ui = ui
766 766 self._compengine = util.compengines.forbundletype('UN')
767 767 self._compressed = None
768 768 super(unbundle20, self).__init__(fp)
769 769
770 770 @util.propertycache
771 771 def params(self):
772 772 """dictionary of stream level parameters"""
773 773 indebug(self.ui, 'reading bundle2 stream parameters')
774 774 params = {}
775 775 paramssize = self._unpack(_fstreamparamsize)[0]
776 776 if paramssize < 0:
777 777 raise error.BundleValueError('negative bundle param size: %i'
778 778 % paramssize)
779 779 if paramssize:
780 780 params = self._readexact(paramssize)
781 781 params = self._processallparams(params)
782 782 return params
783 783
784 784 def _processallparams(self, paramsblock):
785 785 """parse a raw parameter block and apply each parameter"""
786 786 params = util.sortdict()
787 787 for p in paramsblock.split(' '):
788 788 p = p.split('=', 1)
789 789 p = [urlreq.unquote(i) for i in p]
790 790 if len(p) < 2:
791 791 p.append(None)
792 792 self._processparam(*p)
793 793 params[p[0]] = p[1]
794 794 return params
795 795
796 796
797 797 def _processparam(self, name, value):
798 798 """process a parameter, applying its effect if needed
799 799
800 800 Parameters starting with a lower case letter are advisory and will be
801 801 ignored when unknown. Those starting with an upper case letter are
802 802 mandatory, and this function will raise a KeyError when they are unknown.
803 803
804 804 Note: no options are currently supported. Any input will either be
805 805 ignored or fail.
806 806 """
807 807 if not name:
808 808 raise ValueError(r'empty parameter name')
809 809 if name[0:1] not in pycompat.bytestr(string.ascii_letters):
810 810 raise ValueError(r'non letter first character: %s' % name)
811 811 try:
812 812 handler = b2streamparamsmap[name.lower()]
813 813 except KeyError:
814 814 if name[0:1].islower():
815 815 indebug(self.ui, "ignoring unknown parameter %s" % name)
816 816 else:
817 817 raise error.BundleUnknownFeatureError(params=(name,))
818 818 else:
819 819 handler(self, name, value)
820 820
821 821 def _forwardchunks(self):
822 822 """utility to transfer a bundle2 as binary
823 823
824 824 This is made necessary by the fact that the 'getbundle' command over 'ssh'
825 825 has no way to know when the reply ends, relying on the bundle to be
826 826 interpreted to know its end. This is terrible and we are sorry, but we
827 827 needed to move forward to get general delta enabled.
828 828 """
829 829 yield self._magicstring
830 830 assert 'params' not in vars(self)
831 831 paramssize = self._unpack(_fstreamparamsize)[0]
832 832 if paramssize < 0:
833 833 raise error.BundleValueError('negative bundle param size: %i'
834 834 % paramssize)
835 835 yield _pack(_fstreamparamsize, paramssize)
836 836 if paramssize:
837 837 params = self._readexact(paramssize)
838 838 self._processallparams(params)
839 839 yield params
840 840 assert self._compengine.bundletype == 'UN'
841 841 # From there, payload might need to be decompressed
842 842 self._fp = self._compengine.decompressorreader(self._fp)
843 843 emptycount = 0
844 844 while emptycount < 2:
845 845 # so we can brainlessly loop
846 846 assert _fpartheadersize == _fpayloadsize
847 847 size = self._unpack(_fpartheadersize)[0]
848 848 yield _pack(_fpartheadersize, size)
849 849 if size:
850 850 emptycount = 0
851 851 else:
852 852 emptycount += 1
853 853 continue
854 854 if size == flaginterrupt:
855 855 continue
856 856 elif size < 0:
857 857 raise error.BundleValueError('negative chunk size: %i' % size)
858 858 yield self._readexact(size)
859 859
860 860
861 861 def iterparts(self, seekable=False):
862 862 """yield all parts contained in the stream"""
863 863 cls = seekableunbundlepart if seekable else unbundlepart
864 864 # make sure params have been loaded
865 865 self.params
866 866 # From there, payload might need to be decompressed
867 867 self._fp = self._compengine.decompressorreader(self._fp)
868 868 indebug(self.ui, 'start extraction of bundle2 parts')
869 869 headerblock = self._readpartheader()
870 870 while headerblock is not None:
871 871 part = cls(self.ui, headerblock, self._fp)
872 872 yield part
873 873 # Ensure part is fully consumed so we can start reading the next
874 874 # part.
875 875 part.consume()
876 876
877 877 headerblock = self._readpartheader()
878 878 indebug(self.ui, 'end of bundle2 stream')
879 879
880 880 def _readpartheader(self):
881 881 """read a part header size and return the bytes blob
882 882
883 883 returns None if empty"""
884 884 headersize = self._unpack(_fpartheadersize)[0]
885 885 if headersize < 0:
886 886 raise error.BundleValueError('negative part header size: %i'
887 887 % headersize)
888 888 indebug(self.ui, 'part header size: %i' % headersize)
889 889 if headersize:
890 890 return self._readexact(headersize)
891 891 return None
892 892
893 893 def compressed(self):
894 894 self.params # load params
895 895 return self._compressed
896 896
897 897 def close(self):
898 898 """close underlying file"""
899 899 if util.safehasattr(self._fp, 'close'):
900 900 return self._fp.close()
901 901
902 902 formatmap = {'20': unbundle20}
903 903
904 904 b2streamparamsmap = {}
905 905
906 906 def b2streamparamhandler(name):
907 907 """register a handler for a stream level parameter"""
908 908 def decorator(func):
909 909 assert name not in b2streamparamsmap
910 910 b2streamparamsmap[name] = func
911 911 return func
912 912 return decorator
913 913
914 914 @b2streamparamhandler('compression')
915 915 def processcompression(unbundler, param, value):
916 916 """read compression parameter and install payload decompression"""
917 917 if value not in util.compengines.supportedbundletypes:
918 918 raise error.BundleUnknownFeatureError(params=(param,),
919 919 values=(value,))
920 920 unbundler._compengine = util.compengines.forbundletype(value)
921 921 if value is not None:
922 922 unbundler._compressed = True
923 923
924 924 class bundlepart(object):
925 925 """A bundle2 part contains application level payload
926 926
927 927 The part `type` is used to route the part to the application level
928 928 handler.
929 929
930 930 The part payload is contained in ``part.data``. It could be raw bytes or a
931 931 generator of byte chunks.
932 932
933 933 You can add parameters to the part using the ``addparam`` method.
934 934 Parameters can be either mandatory (default) or advisory. Remote side
935 935 should be able to safely ignore the advisory ones.
936 936
937 937 Neither data nor parameters can be modified after generation has begun.
938 938 """
939 939
940 940 def __init__(self, parttype, mandatoryparams=(), advisoryparams=(),
941 941 data='', mandatory=True):
942 942 validateparttype(parttype)
943 943 self.id = None
944 944 self.type = parttype
945 945 self._data = data
946 946 self._mandatoryparams = list(mandatoryparams)
947 947 self._advisoryparams = list(advisoryparams)
948 948 # checking for duplicated entries
949 949 self._seenparams = set()
950 950 for pname, __ in self._mandatoryparams + self._advisoryparams:
951 951 if pname in self._seenparams:
952 952 raise error.ProgrammingError('duplicated params: %s' % pname)
953 953 self._seenparams.add(pname)
954 954 # status of the part's generation:
955 955 # - None: not started,
956 956 # - False: currently being generated,
957 957 # - True: generation done.
958 958 self._generated = None
959 959 self.mandatory = mandatory
960 960
961 961 def __repr__(self):
962 962 cls = "%s.%s" % (self.__class__.__module__, self.__class__.__name__)
963 963 return ('<%s object at %x; id: %s; type: %s; mandatory: %s>'
964 964 % (cls, id(self), self.id, self.type, self.mandatory))
965 965
966 966 def copy(self):
967 967 """return a copy of the part
968 968
969 969 The new part has the very same content but no partid assigned yet.
970 970 Parts with generated data cannot be copied."""
971 971 assert not util.safehasattr(self.data, 'next')
972 972 return self.__class__(self.type, self._mandatoryparams,
973 973 self._advisoryparams, self._data, self.mandatory)
974 974
975 975 # methods used to define the part content
976 976 @property
977 977 def data(self):
978 978 return self._data
979 979
980 980 @data.setter
981 981 def data(self, data):
982 982 if self._generated is not None:
983 983 raise error.ReadOnlyPartError('part is being generated')
984 984 self._data = data
985 985
986 986 @property
987 987 def mandatoryparams(self):
988 988 # make it an immutable tuple to force people through ``addparam``
989 989 return tuple(self._mandatoryparams)
990 990
991 991 @property
992 992 def advisoryparams(self):
993 993 # make it an immutable tuple to force people through ``addparam``
994 994 return tuple(self._advisoryparams)
995 995
996 996 def addparam(self, name, value='', mandatory=True):
997 997 """add a parameter to the part
998 998
999 999 If 'mandatory' is set to True, the remote handler must claim support
1000 1000 for this parameter or the unbundling will be aborted.
1001 1001
1002 1002 The 'name' and 'value' cannot exceed 255 bytes each.
1003 1003 """
1004 1004 if self._generated is not None:
1005 1005 raise error.ReadOnlyPartError('part is being generated')
1006 1006 if name in self._seenparams:
1007 1007 raise ValueError('duplicated params: %s' % name)
1008 1008 self._seenparams.add(name)
1009 1009 params = self._advisoryparams
1010 1010 if mandatory:
1011 1011 params = self._mandatoryparams
1012 1012 params.append((name, value))
1013 1013
1014 1014 # methods used to generate the bundle2 stream
1015 1015 def getchunks(self, ui):
1016 1016 if self._generated is not None:
1017 1017 raise error.ProgrammingError('part can only be consumed once')
1018 1018 self._generated = False
1019 1019
1020 1020 if ui.debugflag:
1021 1021 msg = ['bundle2-output-part: "%s"' % self.type]
1022 1022 if not self.mandatory:
1023 1023 msg.append(' (advisory)')
1024 1024 nbmp = len(self.mandatoryparams)
1025 1025 nbap = len(self.advisoryparams)
1026 1026 if nbmp or nbap:
1027 1027 msg.append(' (params:')
1028 1028 if nbmp:
1029 1029 msg.append(' %i mandatory' % nbmp)
1030 1030 if nbap:
1031 1031 msg.append(' %i advisory' % nbap)
1032 1032 msg.append(')')
1033 1033 if not self.data:
1034 1034 msg.append(' empty payload')
1035 1035 elif (util.safehasattr(self.data, 'next')
1036 1036 or util.safehasattr(self.data, '__next__')):
1037 1037 msg.append(' streamed payload')
1038 1038 else:
1039 1039 msg.append(' %i bytes payload' % len(self.data))
1040 1040 msg.append('\n')
1041 1041 ui.debug(''.join(msg))
1042 1042
1043 1043 #### header
1044 1044 if self.mandatory:
1045 1045 parttype = self.type.upper()
1046 1046 else:
1047 1047 parttype = self.type.lower()
1048 1048 outdebug(ui, 'part %s: "%s"' % (pycompat.bytestr(self.id), parttype))
1049 1049 ## parttype
1050 1050 header = [_pack(_fparttypesize, len(parttype)),
1051 1051 parttype, _pack(_fpartid, self.id),
1052 1052 ]
1053 1053 ## parameters
1054 1054 # count
1055 1055 manpar = self.mandatoryparams
1056 1056 advpar = self.advisoryparams
1057 1057 header.append(_pack(_fpartparamcount, len(manpar), len(advpar)))
1058 1058 # size
1059 1059 parsizes = []
1060 1060 for key, value in manpar:
1061 1061 parsizes.append(len(key))
1062 1062 parsizes.append(len(value))
1063 1063 for key, value in advpar:
1064 1064 parsizes.append(len(key))
1065 1065 parsizes.append(len(value))
1066 1066 paramsizes = _pack(_makefpartparamsizes(len(parsizes) // 2), *parsizes)
1067 1067 header.append(paramsizes)
1068 1068 # key, value
1069 1069 for key, value in manpar:
1070 1070 header.append(key)
1071 1071 header.append(value)
1072 1072 for key, value in advpar:
1073 1073 header.append(key)
1074 1074 header.append(value)
1075 1075 ## finalize header
1076 1076 try:
1077 1077 headerchunk = ''.join(header)
1078 1078 except TypeError:
1079 1079 raise TypeError(r'Found a non-bytes trying to '
1080 1080 r'build bundle part header: %r' % header)
1081 1081 outdebug(ui, 'header chunk size: %i' % len(headerchunk))
1082 1082 yield _pack(_fpartheadersize, len(headerchunk))
1083 1083 yield headerchunk
1084 1084 ## payload
1085 1085 try:
1086 1086 for chunk in self._payloadchunks():
1087 1087 outdebug(ui, 'payload chunk size: %i' % len(chunk))
1088 1088 yield _pack(_fpayloadsize, len(chunk))
1089 1089 yield chunk
1090 1090 except GeneratorExit:
1091 1091 # GeneratorExit means that nobody is listening for our
1092 1092 # results anyway, so just bail quickly rather than trying
1093 1093 # to produce an error part.
1094 1094 ui.debug('bundle2-generatorexit\n')
1095 1095 raise
1096 1096 except BaseException as exc:
1097 1097 bexc = stringutil.forcebytestr(exc)
1098 1098 # backup exception data for later
1099 1099 ui.debug('bundle2-input-stream-interrupt: encoding exception %s'
1100 1100 % bexc)
1101 1101 tb = sys.exc_info()[2]
1102 1102 msg = 'unexpected error: %s' % bexc
1103 1103 interpart = bundlepart('error:abort', [('message', msg)],
1104 1104 mandatory=False)
1105 1105 interpart.id = 0
1106 1106 yield _pack(_fpayloadsize, -1)
1107 1107 for chunk in interpart.getchunks(ui=ui):
1108 1108 yield chunk
1109 1109 outdebug(ui, 'closing payload chunk')
1110 1110 # abort current part payload
1111 1111 yield _pack(_fpayloadsize, 0)
1112 1112 pycompat.raisewithtb(exc, tb)
1113 1113 # end of payload
1114 1114 outdebug(ui, 'closing payload chunk')
1115 1115 yield _pack(_fpayloadsize, 0)
1116 1116 self._generated = True
1117 1117
1118 1118 def _payloadchunks(self):
1119 1119 """yield chunks of the part payload
1120 1120
1121 1121 Exists to handle the different methods to provide data to a part."""
1122 1122 # we only support fixed size data now.
1123 1123 # This will be improved in the future.
1124 1124 if (util.safehasattr(self.data, 'next')
1125 1125 or util.safehasattr(self.data, '__next__')):
1126 1126 buff = util.chunkbuffer(self.data)
1127 1127 chunk = buff.read(preferedchunksize)
1128 1128 while chunk:
1129 1129 yield chunk
1130 1130 chunk = buff.read(preferedchunksize)
1131 1131 elif len(self.data):
1132 1132 yield self.data
1133 1133
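# Illustrative sketch (not part of Mercurial) of the payload framing produced
# by getchunks() above: each chunk is preceded by a signed big-endian 32-bit
# length, a zero length closes the payload, and negative lengths are reserved
# for flags such as the interrupt marker defined below. The '>i' format is an
# assumption standing in for _fpayloadsize.
import struct

def _encodeframes(chunks):
    for chunk in chunks:
        yield struct.pack('>i', len(chunk))
        yield chunk
    yield struct.pack('>i', 0)  # end-of-payload marker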
1134 1134
1135 1135 flaginterrupt = -1
1136 1136
1137 1137 class interrupthandler(unpackermixin):
1138 1138 """read one part and process it with restricted capability
1139 1139
1140 1140 This allows transmitting exceptions raised on the producer side during
1141 1141 part iteration while the consumer is reading a part.
1142 1142
1143 1143 Parts processed in this manner only have access to a ui object."""
1144 1144
1145 1145 def __init__(self, ui, fp):
1146 1146 super(interrupthandler, self).__init__(fp)
1147 1147 self.ui = ui
1148 1148
1149 1149 def _readpartheader(self):
1150 1150 """read the part header size and return the bytes blob
1151 1151
1152 1152 returns None if empty"""
1153 1153 headersize = self._unpack(_fpartheadersize)[0]
1154 1154 if headersize < 0:
1155 1155 raise error.BundleValueError('negative part header size: %i'
1156 1156 % headersize)
1157 1157 indebug(self.ui, 'part header size: %i\n' % headersize)
1158 1158 if headersize:
1159 1159 return self._readexact(headersize)
1160 1160 return None
1161 1161
1162 1162 def __call__(self):
1163 1163
1164 1164 self.ui.debug('bundle2-input-stream-interrupt:'
1165 1165 ' opening out of band context\n')
1166 1166 indebug(self.ui, 'bundle2 stream interruption, looking for a part.')
1167 1167 headerblock = self._readpartheader()
1168 1168 if headerblock is None:
1169 1169 indebug(self.ui, 'no part found during interruption.')
1170 1170 return
1171 1171 part = unbundlepart(self.ui, headerblock, self._fp)
1172 1172 op = interruptoperation(self.ui)
1173 1173 hardabort = False
1174 1174 try:
1175 1175 _processpart(op, part)
1176 1176 except (SystemExit, KeyboardInterrupt):
1177 1177 hardabort = True
1178 1178 raise
1179 1179 finally:
1180 1180 if not hardabort:
1181 1181 part.consume()
1182 1182 self.ui.debug('bundle2-input-stream-interrupt:'
1183 1183 ' closing out of band context\n')
1184 1184
1185 1185 class interruptoperation(object):
1186 1186 """A limited operation to be used by part handlers during interruption
1187 1187
1188 1188 It only has access to a ui object.
1189 1189 """
1190 1190
1191 1191 def __init__(self, ui):
1192 1192 self.ui = ui
1193 1193 self.reply = None
1194 1194 self.captureoutput = False
1195 1195
1196 1196 @property
1197 1197 def repo(self):
1198 1198 raise error.ProgrammingError('no repo access from stream interruption')
1199 1199
1200 1200 def gettransaction(self):
1201 1201 raise TransactionUnavailable('no repo access from stream interruption')
1202 1202
1203 1203 def decodepayloadchunks(ui, fh):
1204 1204 """Reads bundle2 part payload data into chunks.
1205 1205
1206 1206 Part payload data consists of framed chunks. This function takes
1207 1207 a file handle and emits those chunks.
1208 1208 """
1209 1209 dolog = ui.configbool('devel', 'bundle2.debug')
1210 1210 debug = ui.debug
1211 1211
1212 1212 headerstruct = struct.Struct(_fpayloadsize)
1213 1213 headersize = headerstruct.size
1214 1214 unpack = headerstruct.unpack
1215 1215
1216 1216 readexactly = changegroup.readexactly
1217 1217 read = fh.read
1218 1218
1219 1219 chunksize = unpack(readexactly(fh, headersize))[0]
1220 1220 indebug(ui, 'payload chunk size: %i' % chunksize)
1221 1221
1222 1222 # changegroup.readexactly() is inlined below for performance.
1223 1223 while chunksize:
1224 1224 if chunksize >= 0:
1225 1225 s = read(chunksize)
1226 1226 if len(s) < chunksize:
1227 1227 raise error.Abort(_('stream ended unexpectedly '
1228 1228 '(got %d bytes, expected %d)') %
1229 1229 (len(s), chunksize))
1230 1230
1231 1231 yield s
1232 1232 elif chunksize == flaginterrupt:
1233 1233 # Interrupt "signal" detected. The regular stream is interrupted
1234 1234 # and a bundle2 part follows. Consume it.
1235 1235 interrupthandler(ui, fh)()
1236 1236 else:
1237 1237 raise error.BundleValueError(
1238 1238 'negative payload chunk size: %s' % chunksize)
1239 1239
1240 1240 s = read(headersize)
1241 1241 if len(s) < headersize:
1242 1242 raise error.Abort(_('stream ended unexpectedly '
1243 1243 '(got %d bytes, expected %d)') %
1244 1244 (len(s), headersize))
1245 1245
1246 1246 chunksize = unpack(s)[0]
1247 1247
1248 1248 # indebug() inlined for performance.
1249 1249 if dolog:
1250 1250 debug('bundle2-input: payload chunk size: %i\n' % chunksize)
1251 1251
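# Hedged usage example for decodepayloadchunks() above, reusing the framing
# sketched earlier: a single 5-byte chunk followed by the zero terminator
# decodes back to one chunk (a real ui object is assumed to be available).
#
#     import io, struct
#     fh = io.BytesIO(struct.pack('>i', 5) + b'hello' + struct.pack('>i', 0))
#     assert list(decodepayloadchunks(ui, fh)) == [b'hello']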
1252 1252 class unbundlepart(unpackermixin):
1253 1253 """a bundle part read from a bundle"""
1254 1254
1255 1255 def __init__(self, ui, header, fp):
1256 1256 super(unbundlepart, self).__init__(fp)
1257 1257 self._seekable = (util.safehasattr(fp, 'seek') and
1258 1258 util.safehasattr(fp, 'tell'))
1259 1259 self.ui = ui
1260 1260 # unbundle state attr
1261 1261 self._headerdata = header
1262 1262 self._headeroffset = 0
1263 1263 self._initialized = False
1264 1264 self.consumed = False
1265 1265 # part data
1266 1266 self.id = None
1267 1267 self.type = None
1268 1268 self.mandatoryparams = None
1269 1269 self.advisoryparams = None
1270 1270 self.params = None
1271 1271 self.mandatorykeys = ()
1272 1272 self._readheader()
1273 1273 self._mandatory = None
1274 1274 self._pos = 0
1275 1275
1276 1276 def _fromheader(self, size):
1277 1277 """return the next <size> byte from the header"""
1278 1278 offset = self._headeroffset
1279 1279 data = self._headerdata[offset:(offset + size)]
1280 1280 self._headeroffset = offset + size
1281 1281 return data
1282 1282
1283 1283 def _unpackheader(self, format):
1284 1284 """read given format from header
1285 1285
1286 1286 This automatically computes the size of the format to read."""
1287 1287 data = self._fromheader(struct.calcsize(format))
1288 1288 return _unpack(format, data)
1289 1289
1290 1290 def _initparams(self, mandatoryparams, advisoryparams):
1291 1291 """internal function to set up all logic-related parameters"""
1292 1292 # make it read only to prevent people touching it by mistake.
1293 1293 self.mandatoryparams = tuple(mandatoryparams)
1294 1294 self.advisoryparams = tuple(advisoryparams)
1295 1295 # user friendly UI
1296 1296 self.params = util.sortdict(self.mandatoryparams)
1297 1297 self.params.update(self.advisoryparams)
1298 1298 self.mandatorykeys = frozenset(p[0] for p in mandatoryparams)
1299 1299
1300 1300 def _readheader(self):
1301 1301 """read the header and setup the object"""
1302 1302 typesize = self._unpackheader(_fparttypesize)[0]
1303 1303 self.type = self._fromheader(typesize)
1304 1304 indebug(self.ui, 'part type: "%s"' % self.type)
1305 1305 self.id = self._unpackheader(_fpartid)[0]
1306 1306 indebug(self.ui, 'part id: "%s"' % pycompat.bytestr(self.id))
1307 1307 # extract mandatory bit from type
1308 1308 self.mandatory = (self.type != self.type.lower())
1309 1309 self.type = self.type.lower()
1310 1310 ## reading parameters
1311 1311 # param count
1312 1312 mancount, advcount = self._unpackheader(_fpartparamcount)
1313 1313 indebug(self.ui, 'part parameters: %i' % (mancount + advcount))
1314 1314 # param size
1315 1315 fparamsizes = _makefpartparamsizes(mancount + advcount)
1316 1316 paramsizes = self._unpackheader(fparamsizes)
1317 1317 # make it a list of pairs again
1318 1318 paramsizes = list(zip(paramsizes[::2], paramsizes[1::2]))
1319 1319 # split mandatory from advisory
1320 1320 mansizes = paramsizes[:mancount]
1321 1321 advsizes = paramsizes[mancount:]
1322 1322 # retrieve param value
1323 1323 manparams = []
1324 1324 for key, value in mansizes:
1325 1325 manparams.append((self._fromheader(key), self._fromheader(value)))
1326 1326 advparams = []
1327 1327 for key, value in advsizes:
1328 1328 advparams.append((self._fromheader(key), self._fromheader(value)))
1329 1329 self._initparams(manparams, advparams)
1330 1330 ## part payload
1331 1331 self._payloadstream = util.chunkbuffer(self._payloadchunks())
1332 1332 # we have read the header data; record that the part is initialized
1333 1333 self._initialized = True
1334 1334
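# A minimal sketch of the part-header layout consumed by _readheader() above,
# assuming the usual bundle2 formats (_fparttypesize == '>B', _fpartid == '>I',
# _fpartparamcount == '>BB', and one packed 'BB' pair per parameter size);
# this helper is hypothetical and only mirrors the parsing logic:
import struct

def _buildpartheader(parttype, partid, manpar=(), advpar=()):
    header = [struct.pack('>B', len(parttype)), parttype,
              struct.pack('>I', partid),
              struct.pack('>BB', len(manpar), len(advpar))]
    params = list(manpar) + list(advpar)
    for key, value in params:
        header.append(struct.pack('>BB', len(key), len(value)))
    for key, value in params:
        header.append(key)
        header.append(value)
    return b''.join(header)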
1335 1335 def _payloadchunks(self):
1336 1336 """Generator of decoded chunks in the payload."""
1337 1337 return decodepayloadchunks(self.ui, self._fp)
1338 1338
1339 1339 def consume(self):
1340 1340 """Read the part payload until completion.
1341 1341
1342 1342 By consuming the part data, the underlying stream read offset will
1343 1343 be advanced to the next part (or end of stream).
1344 1344 """
1345 1345 if self.consumed:
1346 1346 return
1347 1347
1348 1348 chunk = self.read(32768)
1349 1349 while chunk:
1350 1350 self._pos += len(chunk)
1351 1351 chunk = self.read(32768)
1352 1352
1353 1353 def read(self, size=None):
1354 1354 """read payload data"""
1355 1355 if not self._initialized:
1356 1356 self._readheader()
1357 1357 if size is None:
1358 1358 data = self._payloadstream.read()
1359 1359 else:
1360 1360 data = self._payloadstream.read(size)
1361 1361 self._pos += len(data)
1362 1362 if size is None or len(data) < size:
1363 1363 if not self.consumed and self._pos:
1364 1364 self.ui.debug('bundle2-input-part: total payload size %i\n'
1365 1365 % self._pos)
1366 1366 self.consumed = True
1367 1367 return data
1368 1368
1369 1369 class seekableunbundlepart(unbundlepart):
1370 1370 """A bundle2 part in a bundle that is seekable.
1371 1371
1372 1372 Regular ``unbundlepart`` instances can only be read once. This class
1373 1373 extends ``unbundlepart`` to enable bi-directional seeking within the
1374 1374 part.
1375 1375
1376 1376 Bundle2 part data consists of framed chunks. Offsets when seeking
1377 1377 refer to the decoded data, not the offsets in the underlying bundle2
1378 1378 stream.
1379 1379
1380 1380 To facilitate quickly seeking within the decoded data, instances of this
1381 1381 class maintain a mapping between offsets in the underlying stream and
1382 1382 the decoded payload. This mapping will consume memory in proportion
1383 1383 to the number of chunks within the payload (which almost certainly
1384 1384 increases in proportion with the size of the part).
1385 1385 """
1386 1386 def __init__(self, ui, header, fp):
1387 1387 # (payload, file) offsets for chunk starts.
1388 1388 self._chunkindex = []
1389 1389
1390 1390 super(seekableunbundlepart, self).__init__(ui, header, fp)
1391 1391
1392 1392 def _payloadchunks(self, chunknum=0):
1393 1393 '''seek to specified chunk and start yielding data'''
1394 1394 if len(self._chunkindex) == 0:
1395 1395 assert chunknum == 0, 'Must start with chunk 0'
1396 1396 self._chunkindex.append((0, self._tellfp()))
1397 1397 else:
1398 1398 assert chunknum < len(self._chunkindex), \
1399 1399 'Unknown chunk %d' % chunknum
1400 1400 self._seekfp(self._chunkindex[chunknum][1])
1401 1401
1402 1402 pos = self._chunkindex[chunknum][0]
1403 1403
1404 1404 for chunk in decodepayloadchunks(self.ui, self._fp):
1405 1405 chunknum += 1
1406 1406 pos += len(chunk)
1407 1407 if chunknum == len(self._chunkindex):
1408 1408 self._chunkindex.append((pos, self._tellfp()))
1409 1409
1410 1410 yield chunk
1411 1411
1412 1412 def _findchunk(self, pos):
1413 1413 '''for a given payload position, return a chunk number and offset'''
1414 1414 for chunk, (ppos, fpos) in enumerate(self._chunkindex):
1415 1415 if ppos == pos:
1416 1416 return chunk, 0
1417 1417 elif ppos > pos:
1418 1418 return chunk - 1, pos - self._chunkindex[chunk - 1][0]
1419 1419 raise ValueError('Unknown chunk')
1420 1420
1421 1421 def tell(self):
1422 1422 return self._pos
1423 1423
1424 1424 def seek(self, offset, whence=os.SEEK_SET):
1425 1425 if whence == os.SEEK_SET:
1426 1426 newpos = offset
1427 1427 elif whence == os.SEEK_CUR:
1428 1428 newpos = self._pos + offset
1429 1429 elif whence == os.SEEK_END:
1430 1430 if not self.consumed:
1431 1431 # Can't use self.consume() here because it advances self._pos.
1432 1432 chunk = self.read(32768)
1433 1433 while chunk:
1434 1434 chunk = self.read(32768)
1435 1435 newpos = self._chunkindex[-1][0] - offset
1436 1436 else:
1437 1437 raise ValueError('Unknown whence value: %r' % (whence,))
1438 1438
1439 1439 if newpos > self._chunkindex[-1][0] and not self.consumed:
1440 1440 # Can't use self.consume() here because it advances self._pos.
1441 1441 chunk = self.read(32768)
1442 1442 while chunk:
1443 1443 chunk = self.read(32768)
1444 1444
1445 1445 if not 0 <= newpos <= self._chunkindex[-1][0]:
1446 1446 raise ValueError('Offset out of range')
1447 1447
1448 1448 if self._pos != newpos:
1449 1449 chunk, internaloffset = self._findchunk(newpos)
1450 1450 self._payloadstream = util.chunkbuffer(self._payloadchunks(chunk))
1451 1451 adjust = self.read(internaloffset)
1452 1452 if len(adjust) != internaloffset:
1453 1453 raise error.Abort(_('Seek failed\n'))
1454 1454 self._pos = newpos
1455 1455
1456 1456 def _seekfp(self, offset, whence=0):
1457 1457 """move the underlying file pointer
1458 1458
1459 1459 This method is meant for internal usage by the bundle2 protocol only.
1460 1460 It directly manipulates the low-level stream, including bundle2-level
1461 1461 instructions.
1462 1462
1463 1463 Do not use it to implement higher-level logic or methods."""
1464 1464 if self._seekable:
1465 1465 return self._fp.seek(offset, whence)
1466 1466 else:
1467 1467 raise NotImplementedError(_('File pointer is not seekable'))
1468 1468
1469 1469 def _tellfp(self):
1470 1470 """return the file offset, or None if file is not seekable
1471 1471
1472 1472 This method is meant for internal usage by the bundle2 protocol only.
1473 1473 It directly manipulates the low-level stream, including bundle2-level
1474 1474 instructions.
1475 1475
1476 1476 Do not use it to implement higher-level logic or methods."""
1477 1477 if self._seekable:
1478 1478 try:
1479 1479 return self._fp.tell()
1480 1480 except IOError as e:
1481 1481 if e.errno == errno.ESPIPE:
1482 1482 self._seekable = False
1483 1483 else:
1484 1484 raise
1485 1485 return None
1486 1486
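# Hedged usage sketch for the seekable part above: one seek to the end forces
# every chunk into _chunkindex, after which positions within the decoded
# payload can be revisited without re-reading the whole underlying stream.
#
#     part = seekableunbundlepart(ui, headerblock, fp)
#     part.seek(0, os.SEEK_END)   # index all chunks, land on the last byte
#     size = part.tell()
#     part.seek(0)                # jump back to the first decoded byte
#     prefix = part.read(20)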
1487 1487 # These are only the static capabilities.
1488 1488 # Check the 'getrepocaps' function for the rest.
1489 1489 capabilities = {'HG20': (),
1490 1490 'bookmarks': (),
1491 1491 'error': ('abort', 'unsupportedcontent', 'pushraced',
1492 1492 'pushkey'),
1493 1493 'listkeys': (),
1494 1494 'pushkey': (),
1495 1495 'digests': tuple(sorted(util.DIGESTS.keys())),
1496 1496 'remote-changegroup': ('http', 'https'),
1497 1497 'hgtagsfnodes': (),
1498 1498 'rev-branch-cache': (),
1499 1499 'phases': ('heads',),
1500 1500 'stream': ('v2',),
1501 1501 }
1502 1502
1503 1503 def getrepocaps(repo, allowpushback=False, role=None):
1504 1504 """return the bundle2 capabilities for a given repo
1505 1505
1506 1506 Exists to allow extensions (like evolution) to mutate the capabilities.
1507 1507
1508 1508 The returned value is used for servers advertising their capabilities as
1509 1509 well as clients advertising their capabilities to servers as part of
1510 1510 bundle2 requests. The ``role`` argument specifies which is which.
1511 1511 """
1512 1512 if role not in ('client', 'server'):
1513 1513 raise error.ProgrammingError('role argument must be client or server')
1514 1514
1515 1515 caps = capabilities.copy()
1516 1516 caps['changegroup'] = tuple(sorted(
1517 1517 changegroup.supportedincomingversions(repo)))
1518 1518 if obsolete.isenabled(repo, obsolete.exchangeopt):
1519 1519 supportedformat = tuple('V%i' % v for v in obsolete.formats)
1520 1520 caps['obsmarkers'] = supportedformat
1521 1521 if allowpushback:
1522 1522 caps['pushback'] = ()
1523 1523 cpmode = repo.ui.config('server', 'concurrent-push-mode')
1524 1524 if cpmode == 'check-related':
1525 1525 caps['checkheads'] = ('related',)
1526 1526 if 'phases' in repo.ui.configlist('devel', 'legacy.exchange'):
1527 1527 caps.pop('phases')
1528 1528
1529 1529 # Don't advertise stream clone support in server mode if not configured.
1530 1530 if role == 'server':
1531 1531 streamsupported = repo.ui.configbool('server', 'uncompressed',
1532 1532 untrusted=True)
1533 1533 featuresupported = repo.ui.configbool('experimental', 'bundle2.stream')
1534 1534
1535 1535 if not streamsupported or not featuresupported:
1536 1536 caps.pop('stream')
1537 1537 # Else always advertise support on client, because payload support
1538 1538 # should always be advertised.
1539 1539
1540 1540 return caps
1541 1541
1542 1542 def bundle2caps(remote):
1543 1543 """return the bundle capabilities of a peer as dict"""
1544 1544 raw = remote.capable('bundle2')
1545 1545 if not raw and raw != '':
1546 1546 return {}
1547 1547 capsblob = urlreq.unquote(remote.capable('bundle2'))
1548 1548 return decodecaps(capsblob)
1549 1549
1550 1550 def obsmarkersversion(caps):
1551 1551 """extract the list of supported obsmarkers versions from a bundle2caps dict
1552 1552 """
1553 1553 obscaps = caps.get('obsmarkers', ())
1554 1554 return [int(c[1:]) for c in obscaps if c.startswith('V')]
1555 1555
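# For example (illustrative values), a peer advertising obsmarkers versions
# 'V0' and 'V1' yields the integer list [0, 1]:
#
#     assert obsmarkersversion({'obsmarkers': ('V0', 'V1')}) == [0, 1]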
1556 1556 def writenewbundle(ui, repo, source, filename, bundletype, outgoing, opts,
1557 1557 vfs=None, compression=None, compopts=None):
1558 1558 if bundletype.startswith('HG10'):
1559 1559 cg = changegroup.makechangegroup(repo, outgoing, '01', source)
1560 1560 return writebundle(ui, cg, filename, bundletype, vfs=vfs,
1561 1561 compression=compression, compopts=compopts)
1562 1562 elif not bundletype.startswith('HG20'):
1563 1563 raise error.ProgrammingError('unknown bundle type: %s' % bundletype)
1564 1564
1565 1565 caps = {}
1566 1566 if 'obsolescence' in opts:
1567 1567 caps['obsmarkers'] = ('V1',)
1568 1568 bundle = bundle20(ui, caps)
1569 1569 bundle.setcompression(compression, compopts)
1570 1570 _addpartsfromopts(ui, repo, bundle, source, outgoing, opts)
1571 1571 chunkiter = bundle.getchunks()
1572 1572
1573 1573 return changegroup.writechunks(ui, chunkiter, filename, vfs=vfs)
1574 1574
1575 1575 def _addpartsfromopts(ui, repo, bundler, source, outgoing, opts):
1576 1576 # We should eventually reconcile this logic with the one behind
1577 1577 # 'exchange.getbundle2partsgenerator'.
1578 1578 #
1579 1579 # The type of input from 'getbundle' and 'writenewbundle' are a bit
1580 1580 # different right now. So we keep them separated for now for the sake of
1581 1581 # simplicity.
1582 1582
1583 1583 # we might not always want a changegroup in such bundle, for example in
1584 1584 # stream bundles
1585 1585 if opts.get('changegroup', True):
1586 1586 cgversion = opts.get('cg.version')
1587 1587 if cgversion is None:
1588 1588 cgversion = changegroup.safeversion(repo)
1589 1589 cg = changegroup.makechangegroup(repo, outgoing, cgversion, source)
1590 1590 part = bundler.newpart('changegroup', data=cg.getchunks())
1591 1591 part.addparam('version', cg.version)
1592 1592 if 'clcount' in cg.extras:
1593 1593 part.addparam('nbchanges', '%d' % cg.extras['clcount'],
1594 1594 mandatory=False)
1595 1595 if opts.get('phases') and repo.revs('%ln and secret()',
1596 1596 outgoing.missingheads):
1597 1597 part.addparam('targetphase', '%d' % phases.secret, mandatory=False)
1598 1598
1599 addparttagsfnodescache(repo, bundler, outgoing)
1600 addpartrevbranchcache(repo, bundler, outgoing)
1599 if opts.get('tagsfnodescache', True):
1600 addparttagsfnodescache(repo, bundler, outgoing)
1601
1602 if opts.get('revbranchcache', True):
1603 addpartrevbranchcache(repo, bundler, outgoing)
1601 1604
1602 1605 if opts.get('obsolescence', False):
1603 1606 obsmarkers = repo.obsstore.relevantmarkers(outgoing.missing)
1604 1607 buildobsmarkerspart(bundler, obsmarkers)
1605 1608
1606 1609 if opts.get('phases', False):
1607 1610 headsbyphase = phases.subsetphaseheads(repo, outgoing.missing)
1608 1611 phasedata = phases.binaryencode(headsbyphase)
1609 1612 bundler.newpart('phase-heads', data=phasedata)
1610 1613
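# Hedged usage sketch for writenewbundle()/_addpartsfromopts() above, showing
# how the opts dictionary toggles the optional parts (filename and values are
# illustrative only):
#
#     opts = {'changegroup': True,
#             'tagsfnodescache': False,  # skip the hgtagsfnodes part
#             'revbranchcache': False,   # skip the cache:rev-branch-cache part
#             'phases': True}
#     writenewbundle(ui, repo, 'bundle', 'out.hg', 'HG20', outgoing, opts)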
1611 1614 def addparttagsfnodescache(repo, bundler, outgoing):
1612 1615 # we include the tags fnode cache for the bundle changeset
1613 1616 # (as an optional part)
1614 1617 cache = tags.hgtagsfnodescache(repo.unfiltered())
1615 1618 chunks = []
1616 1619
1617 1620 # .hgtags fnodes are only relevant for head changesets. While we could
1618 1621 # transfer values for all known nodes, there will likely be little to
1619 1622 # no benefit.
1620 1623 #
1621 1624 # We don't bother using a generator to produce output data because
1622 1625 # a) we only have 40 bytes per head and even esoteric numbers of heads
1623 1626 # consume little memory (1M heads is 40MB) b) we don't want to send the
1624 1627 # part if we don't have entries and knowing if we have entries requires
1625 1628 # cache lookups.
1626 1629 for node in outgoing.missingheads:
1627 1630 # Don't compute missing, as this may slow down serving.
1628 1631 fnode = cache.getfnode(node, computemissing=False)
1629 1632 if fnode is not None:
1630 1633 chunks.extend([node, fnode])
1631 1634
1632 1635 if chunks:
1633 1636 bundler.newpart('hgtagsfnodes', data=''.join(chunks))
1634 1637
1635 1638 def addpartrevbranchcache(repo, bundler, outgoing):
1636 1639 # we include the rev branch cache for the bundle changeset
1637 1640 # (as an optional part)
1638 1641 cache = repo.revbranchcache()
1639 1642 cl = repo.unfiltered().changelog
1640 1643 branchesdata = collections.defaultdict(lambda: (set(), set()))
1641 1644 for node in outgoing.missing:
1642 1645 branch, close = cache.branchinfo(cl.rev(node))
1643 1646 branchesdata[branch][close].add(node)
1644 1647
1645 1648 def generate():
1646 1649 for branch, (nodes, closed) in sorted(branchesdata.items()):
1647 1650 utf8branch = encoding.fromlocal(branch)
1648 1651 yield rbcstruct.pack(len(utf8branch), len(nodes), len(closed))
1649 1652 yield utf8branch
1650 1653 for n in sorted(nodes):
1651 1654 yield n
1652 1655 for n in sorted(closed):
1653 1656 yield n
1654 1657
1655 1658 bundler.newpart('cache:rev-branch-cache', data=generate())
1656 1659
1657 1660 def buildobsmarkerspart(bundler, markers):
1658 1661 """add an obsmarker part to the bundler with <markers>
1659 1662
1660 1663 No part is created if markers is empty.
1661 1664 Raises ValueError if the bundler doesn't support any known obsmarker format.
1662 1665 """
1663 1666 if not markers:
1664 1667 return None
1665 1668
1666 1669 remoteversions = obsmarkersversion(bundler.capabilities)
1667 1670 version = obsolete.commonversion(remoteversions)
1668 1671 if version is None:
1669 1672 raise ValueError('bundler does not support common obsmarker format')
1670 1673 stream = obsolete.encodemarkers(markers, True, version=version)
1671 1674 return bundler.newpart('obsmarkers', data=stream)
1672 1675
1673 1676 def writebundle(ui, cg, filename, bundletype, vfs=None, compression=None,
1674 1677 compopts=None):
1675 1678 """Write a bundle file and return its filename.
1676 1679
1677 1680 Existing files will not be overwritten.
1678 1681 If no filename is specified, a temporary file is created.
1679 1682 bz2 compression can be turned off.
1680 1683 The bundle file will be deleted in case of errors.
1681 1684 """
1682 1685
1683 1686 if bundletype == "HG20":
1684 1687 bundle = bundle20(ui)
1685 1688 bundle.setcompression(compression, compopts)
1686 1689 part = bundle.newpart('changegroup', data=cg.getchunks())
1687 1690 part.addparam('version', cg.version)
1688 1691 if 'clcount' in cg.extras:
1689 1692 part.addparam('nbchanges', '%d' % cg.extras['clcount'],
1690 1693 mandatory=False)
1691 1694 chunkiter = bundle.getchunks()
1692 1695 else:
1693 1696 # compression argument is only for the bundle2 case
1694 1697 assert compression is None
1695 1698 if cg.version != '01':
1696 1699 raise error.Abort(_('old bundle types only support v1 '
1697 1700 'changegroups'))
1698 1701 header, comp = bundletypes[bundletype]
1699 1702 if comp not in util.compengines.supportedbundletypes:
1700 1703 raise error.Abort(_('unknown stream compression type: %s')
1701 1704 % comp)
1702 1705 compengine = util.compengines.forbundletype(comp)
1703 1706 def chunkiter():
1704 1707 yield header
1705 1708 for chunk in compengine.compressstream(cg.getchunks(), compopts):
1706 1709 yield chunk
1707 1710 chunkiter = chunkiter()
1708 1711
1709 1712 # parse the changegroup data, otherwise we will block
1710 1713 # in case of sshrepo because we don't know the end of the stream
1711 1714 return changegroup.writechunks(ui, chunkiter, filename, vfs=vfs)
1712 1715
1713 1716 def combinechangegroupresults(op):
1714 1717 """logic to combine 0 or more addchangegroup results into one"""
1715 1718 results = [r.get('return', 0)
1716 1719 for r in op.records['changegroup']]
1717 1720 changedheads = 0
1718 1721 result = 1
1719 1722 for ret in results:
1720 1723 # If any changegroup result is 0, return 0
1721 1724 if ret == 0:
1722 1725 result = 0
1723 1726 break
1724 1727 if ret < -1:
1725 1728 changedheads += ret + 1
1726 1729 elif ret > 1:
1727 1730 changedheads += ret - 1
1728 1731 if changedheads > 0:
1729 1732 result = 1 + changedheads
1730 1733 elif changedheads < 0:
1731 1734 result = -1 + changedheads
1732 1735 return result
1733 1736
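# Worked example of the combination rule above, where each addchangegroup
# result is 0 on failure, 1 for "no head change", 1+n for n added heads and
# -1-n for n removed heads: results of 3 (two new heads) and -2 (one removed
# head) give changedheads = (3 - 1) + (-2 + 1) = 1, hence a combined return
# value of 1 + 1 = 2.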
1734 1737 @parthandler('changegroup', ('version', 'nbchanges', 'treemanifest',
1735 1738 'targetphase'))
1736 1739 def handlechangegroup(op, inpart):
1737 1740 """apply a changegroup part on the repo
1738 1741
1739 1742 This is a very early implementation that will be massively reworked before
1740 1743 being inflicted on any end-user.
1741 1744 """
1742 1745 tr = op.gettransaction()
1743 1746 unpackerversion = inpart.params.get('version', '01')
1744 1747 # We should raise an appropriate exception here
1745 1748 cg = changegroup.getunbundler(unpackerversion, inpart, None)
1746 1749 # the source and url passed here are overwritten by the ones contained in
1747 1750 # the transaction.hookargs argument. So 'bundle2' is a placeholder
1748 1751 nbchangesets = None
1749 1752 if 'nbchanges' in inpart.params:
1750 1753 nbchangesets = int(inpart.params.get('nbchanges'))
1751 1754 if ('treemanifest' in inpart.params and
1752 1755 'treemanifest' not in op.repo.requirements):
1753 1756 if len(op.repo.changelog) != 0:
1754 1757 raise error.Abort(_(
1755 1758 "bundle contains tree manifests, but local repo is "
1756 1759 "non-empty and does not use tree manifests"))
1757 1760 op.repo.requirements.add('treemanifest')
1758 1761 op.repo._applyopenerreqs()
1759 1762 op.repo._writerequirements()
1760 1763 extrakwargs = {}
1761 1764 targetphase = inpart.params.get('targetphase')
1762 1765 if targetphase is not None:
1763 1766 extrakwargs[r'targetphase'] = int(targetphase)
1764 1767 ret = _processchangegroup(op, cg, tr, 'bundle2', 'bundle2',
1765 1768 expectedtotal=nbchangesets, **extrakwargs)
1766 1769 if op.reply is not None:
1767 1770 # This is definitely not the final form of this
1768 1771 # return. But one needs to start somewhere.
1769 1772 part = op.reply.newpart('reply:changegroup', mandatory=False)
1770 1773 part.addparam(
1771 1774 'in-reply-to', pycompat.bytestr(inpart.id), mandatory=False)
1772 1775 part.addparam('return', '%i' % ret, mandatory=False)
1773 1776 assert not inpart.read()
1774 1777
1775 1778 _remotechangegroupparams = tuple(['url', 'size', 'digests'] +
1776 1779 ['digest:%s' % k for k in util.DIGESTS.keys()])
1777 1780 @parthandler('remote-changegroup', _remotechangegroupparams)
1778 1781 def handleremotechangegroup(op, inpart):
1779 1782 """apply a bundle10 on the repo, given an url and validation information
1780 1783
1781 1784 All the information about the remote bundle to import is given as
1782 1785 parameters. The parameters include:
1783 1786 - url: the url to the bundle10.
1784 1787 - size: the bundle10 file size. It is used to validate what was
1785 1788 retrieved by the client matches the server knowledge about the bundle.
1786 1789 - digests: a space separated list of the digest types provided as
1787 1790 parameters.
1788 1791 - digest:<digest-type>: the hexadecimal representation of the digest with
1789 1792 that name. Like the size, it is used to validate what was retrieved by
1790 1793 the client matches what the server knows about the bundle.
1791 1794
1792 1795 When multiple digest types are given, all of them are checked.
1793 1796 """
1794 1797 try:
1795 1798 raw_url = inpart.params['url']
1796 1799 except KeyError:
1797 1800 raise error.Abort(_('remote-changegroup: missing "%s" param') % 'url')
1798 1801 parsed_url = util.url(raw_url)
1799 1802 if parsed_url.scheme not in capabilities['remote-changegroup']:
1800 1803 raise error.Abort(_('remote-changegroup does not support %s urls') %
1801 1804 parsed_url.scheme)
1802 1805
1803 1806 try:
1804 1807 size = int(inpart.params['size'])
1805 1808 except ValueError:
1806 1809 raise error.Abort(_('remote-changegroup: invalid value for param "%s"')
1807 1810 % 'size')
1808 1811 except KeyError:
1809 1812 raise error.Abort(_('remote-changegroup: missing "%s" param') % 'size')
1810 1813
1811 1814 digests = {}
1812 1815 for typ in inpart.params.get('digests', '').split():
1813 1816 param = 'digest:%s' % typ
1814 1817 try:
1815 1818 value = inpart.params[param]
1816 1819 except KeyError:
1817 1820 raise error.Abort(_('remote-changegroup: missing "%s" param') %
1818 1821 param)
1819 1822 digests[typ] = value
1820 1823
1821 1824 real_part = util.digestchecker(url.open(op.ui, raw_url), size, digests)
1822 1825
1823 1826 tr = op.gettransaction()
1824 1827 from . import exchange
1825 1828 cg = exchange.readbundle(op.repo.ui, real_part, raw_url)
1826 1829 if not isinstance(cg, changegroup.cg1unpacker):
1827 1830 raise error.Abort(_('%s: not a bundle version 1.0') %
1828 1831 util.hidepassword(raw_url))
1829 1832 ret = _processchangegroup(op, cg, tr, 'bundle2', 'bundle2')
1830 1833 if op.reply is not None:
1831 1834 # This is definitely not the final form of this
1832 1835 # return. But one needs to start somewhere.
1833 1836 part = op.reply.newpart('reply:changegroup')
1834 1837 part.addparam(
1835 1838 'in-reply-to', pycompat.bytestr(inpart.id), mandatory=False)
1836 1839 part.addparam('return', '%i' % ret, mandatory=False)
1837 1840 try:
1838 1841 real_part.validate()
1839 1842 except error.Abort as e:
1840 1843 raise error.Abort(_('bundle at %s is corrupted:\n%s') %
1841 1844 (util.hidepassword(raw_url), str(e)))
1842 1845 assert not inpart.read()
1843 1846
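# Illustrative parameter set for a 'remote-changegroup' part (all values made
# up for the example):
#
#     {'url': 'https://example.com/pull.hg',
#      'size': '1048576',
#      'digests': 'sha1',
#      'digest:sha1': '2fd4e1c67a2d28fced849ee1bb76e7391b93eb12'}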
1844 1847 @parthandler('reply:changegroup', ('return', 'in-reply-to'))
1845 1848 def handlereplychangegroup(op, inpart):
1846 1849 ret = int(inpart.params['return'])
1847 1850 replyto = int(inpart.params['in-reply-to'])
1848 1851 op.records.add('changegroup', {'return': ret}, replyto)
1849 1852
1850 1853 @parthandler('check:bookmarks')
1851 1854 def handlecheckbookmarks(op, inpart):
1852 1855 """check location of bookmarks
1853 1856
1854 1857 This part is to be used to detect push race regarding bookmark, it
1855 1858 This part is to be used to detect push races regarding bookmarks. It
1856 1859 contains binary encoded (bookmark, node) tuples. If the local state does
1857 1860 not match the one in the part, a PushRaced exception is raised
1858 1861 bookdata = bookmarks.binarydecode(inpart)
1859 1862
1860 1863 msgstandard = ('repository changed while pushing - please try again '
1861 1864 '(bookmark "%s" move from %s to %s)')
1862 1865 msgmissing = ('repository changed while pushing - please try again '
1863 1866 '(bookmark "%s" is missing, expected %s)')
1864 1867 msgexist = ('repository changed while pushing - please try again '
1865 1868 '(bookmark "%s" set on %s, expected missing)')
1866 1869 for book, node in bookdata:
1867 1870 currentnode = op.repo._bookmarks.get(book)
1868 1871 if currentnode != node:
1869 1872 if node is None:
1870 1873 finalmsg = msgexist % (book, nodemod.short(currentnode))
1871 1874 elif currentnode is None:
1872 1875 finalmsg = msgmissing % (book, nodemod.short(node))
1873 1876 else:
1874 1877 finalmsg = msgstandard % (book, nodemod.short(node),
1875 1878 nodemod.short(currentnode))
1876 1879 raise error.PushRaced(finalmsg)
1877 1880
1878 1881 @parthandler('check:heads')
1879 1882 def handlecheckheads(op, inpart):
1880 1883 """check that the heads of the repo did not change
1881 1884
1882 1885 This is used to detect a push race when using unbundle.
1883 1886 This replaces the "heads" argument of unbundle."""
1884 1887 h = inpart.read(20)
1885 1888 heads = []
1886 1889 while len(h) == 20:
1887 1890 heads.append(h)
1888 1891 h = inpart.read(20)
1889 1892 assert not h
1890 1893 # Trigger a transaction so that we are guaranteed to have the lock now.
1891 1894 if op.ui.configbool('experimental', 'bundle2lazylocking'):
1892 1895 op.gettransaction()
1893 1896 if sorted(heads) != sorted(op.repo.heads()):
1894 1897 raise error.PushRaced('repository changed while pushing - '
1895 1898 'please try again')
1896 1899
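# Sketch of the emitting side of 'check:heads' (hedged; the real producer
# lives in exchange.py): the payload is just the concatenation of the 20-byte
# head nodes the client observed when the push started, e.g.
#
#     bundler.newpart('check:heads', data=iter(remoteheads))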
1897 1900 @parthandler('check:updated-heads')
1898 1901 def handlecheckupdatedheads(op, inpart):
1899 1902 """check for race on the heads touched by a push
1900 1903
1901 1904 This is similar to 'check:heads' but focuses on the heads actually updated
1902 1905 during the push. If other activity happens on unrelated heads, it is
1903 1906 ignored.
1904 1907
1905 1908 This allows servers with high traffic to avoid push contention as long as
1906 1909 only unrelated parts of the graph are involved."""
1907 1910 h = inpart.read(20)
1908 1911 heads = []
1909 1912 while len(h) == 20:
1910 1913 heads.append(h)
1911 1914 h = inpart.read(20)
1912 1915 assert not h
1913 1916 # trigger a transaction so that we are guaranteed to have the lock now.
1914 1917 if op.ui.configbool('experimental', 'bundle2lazylocking'):
1915 1918 op.gettransaction()
1916 1919
1917 1920 currentheads = set()
1918 1921 for ls in op.repo.branchmap().itervalues():
1919 1922 currentheads.update(ls)
1920 1923
1921 1924 for h in heads:
1922 1925 if h not in currentheads:
1923 1926 raise error.PushRaced('repository changed while pushing - '
1924 1927 'please try again')
1925 1928
1926 1929 @parthandler('check:phases')
1927 1930 def handlecheckphases(op, inpart):
1928 1931 """check that phase boundaries of the repository did not change
1929 1932
1930 1933 This is used to detect a push race.
1931 1934 """
1932 1935 phasetonodes = phases.binarydecode(inpart)
1933 1936 unfi = op.repo.unfiltered()
1934 1937 cl = unfi.changelog
1935 1938 phasecache = unfi._phasecache
1936 1939 msg = ('repository changed while pushing - please try again '
1937 1940 '(%s is %s expected %s)')
1938 1941 for expectedphase, nodes in enumerate(phasetonodes):
1939 1942 for n in nodes:
1940 1943 actualphase = phasecache.phase(unfi, cl.rev(n))
1941 1944 if actualphase != expectedphase:
1942 1945 finalmsg = msg % (nodemod.short(n),
1943 1946 phases.phasenames[actualphase],
1944 1947 phases.phasenames[expectedphase])
1945 1948 raise error.PushRaced(finalmsg)
1946 1949
1947 1950 @parthandler('output')
1948 1951 def handleoutput(op, inpart):
1949 1952 """forward output captured on the server to the client"""
1950 1953 for line in inpart.read().splitlines():
1951 1954 op.ui.status(_('remote: %s\n') % line)
1952 1955
1953 1956 @parthandler('replycaps')
1954 1957 def handlereplycaps(op, inpart):
1955 1958 """Notify that a reply bundle should be created
1956 1959
1957 1960 The payload contains the capabilities information for the reply"""
1958 1961 caps = decodecaps(inpart.read())
1959 1962 if op.reply is None:
1960 1963 op.reply = bundle20(op.ui, caps)
1961 1964
1962 1965 class AbortFromPart(error.Abort):
1963 1966 """Sub-class of Abort that denotes an error from a bundle2 part."""
1964 1967
1965 1968 @parthandler('error:abort', ('message', 'hint'))
1966 1969 def handleerrorabort(op, inpart):
1967 1970 """Used to transmit abort error over the wire"""
1968 1971 raise AbortFromPart(inpart.params['message'],
1969 1972 hint=inpart.params.get('hint'))
1970 1973
1971 1974 @parthandler('error:pushkey', ('namespace', 'key', 'new', 'old', 'ret',
1972 1975 'in-reply-to'))
1973 1976 def handleerrorpushkey(op, inpart):
1974 1977 """Used to transmit failure of a mandatory pushkey over the wire"""
1975 1978 kwargs = {}
1976 1979 for name in ('namespace', 'key', 'new', 'old', 'ret'):
1977 1980 value = inpart.params.get(name)
1978 1981 if value is not None:
1979 1982 kwargs[name] = value
1980 1983 raise error.PushkeyFailed(inpart.params['in-reply-to'],
1981 1984 **pycompat.strkwargs(kwargs))
1982 1985
1983 1986 @parthandler('error:unsupportedcontent', ('parttype', 'params'))
1984 1987 def handleerrorunsupportedcontent(op, inpart):
1985 1988 """Used to transmit unknown content error over the wire"""
1986 1989 kwargs = {}
1987 1990 parttype = inpart.params.get('parttype')
1988 1991 if parttype is not None:
1989 1992 kwargs['parttype'] = parttype
1990 1993 params = inpart.params.get('params')
1991 1994 if params is not None:
1992 1995 kwargs['params'] = params.split('\0')
1993 1996
1994 1997 raise error.BundleUnknownFeatureError(**pycompat.strkwargs(kwargs))
1995 1998
1996 1999 @parthandler('error:pushraced', ('message',))
1997 2000 def handleerrorpushraced(op, inpart):
1998 2001 """Used to transmit push race error over the wire"""
1999 2002 raise error.ResponseError(_('push failed:'), inpart.params['message'])
2000 2003
2001 2004 @parthandler('listkeys', ('namespace',))
2002 2005 def handlelistkeys(op, inpart):
2003 2006 """retrieve pushkey namespace content stored in a bundle2"""
2004 2007 namespace = inpart.params['namespace']
2005 2008 r = pushkey.decodekeys(inpart.read())
2006 2009 op.records.add('listkeys', (namespace, r))
2007 2010
2008 2011 @parthandler('pushkey', ('namespace', 'key', 'old', 'new'))
2009 2012 def handlepushkey(op, inpart):
2010 2013 """process a pushkey request"""
2011 2014 dec = pushkey.decode
2012 2015 namespace = dec(inpart.params['namespace'])
2013 2016 key = dec(inpart.params['key'])
2014 2017 old = dec(inpart.params['old'])
2015 2018 new = dec(inpart.params['new'])
2016 2019 # Grab the transaction to ensure that we have the lock before performing the
2017 2020 # pushkey.
2018 2021 if op.ui.configbool('experimental', 'bundle2lazylocking'):
2019 2022 op.gettransaction()
2020 2023 ret = op.repo.pushkey(namespace, key, old, new)
2021 2024 record = {'namespace': namespace,
2022 2025 'key': key,
2023 2026 'old': old,
2024 2027 'new': new}
2025 2028 op.records.add('pushkey', record)
2026 2029 if op.reply is not None:
2027 2030 rpart = op.reply.newpart('reply:pushkey')
2028 2031 rpart.addparam(
2029 2032 'in-reply-to', pycompat.bytestr(inpart.id), mandatory=False)
2030 2033 rpart.addparam('return', '%i' % ret, mandatory=False)
2031 2034 if inpart.mandatory and not ret:
2032 2035 kwargs = {}
2033 2036 for key in ('namespace', 'key', 'new', 'old', 'ret'):
2034 2037 if key in inpart.params:
2035 2038 kwargs[key] = inpart.params[key]
2036 2039 raise error.PushkeyFailed(partid='%d' % inpart.id,
2037 2040 **pycompat.strkwargs(kwargs))
2038 2041
2039 2042 @parthandler('bookmarks')
2040 2043 def handlebookmark(op, inpart):
2041 2044 """transmit bookmark information
2042 2045
2043 2046 The part contains binary encoded bookmark information.
2044 2047
2045 2048 The exact behavior of this part can be controlled by the 'bookmarks' mode
2046 2049 on the bundle operation.
2047 2050
2048 2051 When mode is 'apply' (the default) the bookmark information is applied as
2049 2052 is to the unbundling repository. Make sure a 'check:bookmarks' part is
2050 2053 issued earlier to check for push races in such an update. This behavior is
2051 2054 suitable for pushing.
2052 2055
2053 2056 When mode is 'records', the information is recorded into the 'bookmarks'
2054 2057 records of the bundle operation. This behavior is suitable for pulling.
2055 2058 """
2056 2059 changes = bookmarks.binarydecode(inpart)
2057 2060
2058 2061 pushkeycompat = op.repo.ui.configbool('server', 'bookmarks-pushkey-compat')
2059 2062 bookmarksmode = op.modes.get('bookmarks', 'apply')
2060 2063
2061 2064 if bookmarksmode == 'apply':
2062 2065 tr = op.gettransaction()
2063 2066 bookstore = op.repo._bookmarks
2064 2067 if pushkeycompat:
2065 2068 allhooks = []
2066 2069 for book, node in changes:
2067 2070 hookargs = tr.hookargs.copy()
2068 2071 hookargs['pushkeycompat'] = '1'
2069 2072 hookargs['namespace'] = 'bookmarks'
2070 2073 hookargs['key'] = book
2071 2074 hookargs['old'] = nodemod.hex(bookstore.get(book, ''))
2072 2075 hookargs['new'] = nodemod.hex(node if node is not None else '')
2073 2076 allhooks.append(hookargs)
2074 2077
2075 2078 for hookargs in allhooks:
2076 2079 op.repo.hook('prepushkey', throw=True,
2077 2080 **pycompat.strkwargs(hookargs))
2078 2081
2079 2082 bookstore.applychanges(op.repo, op.gettransaction(), changes)
2080 2083
2081 2084 if pushkeycompat:
2082 2085 def runhook():
2083 2086 for hookargs in allhooks:
2084 2087 op.repo.hook('pushkey', **pycompat.strkwargs(hookargs))
2085 2088 op.repo._afterlock(runhook)
2086 2089
2087 2090 elif bookmarksmode == 'records':
2088 2091 for book, node in changes:
2089 2092 record = {'bookmark': book, 'node': node}
2090 2093 op.records.add('bookmarks', record)
2091 2094 else:
2092 2095 raise error.ProgrammingError('unknown bookmark mode: %s' % bookmarksmode)
2093 2096
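# For example (hedged), a pull runs this handler with
# op.modes['bookmarks'] == 'records' and afterwards reads entries shaped like
#
#     {'bookmark': 'stable', 'node': <20-byte binary node or None>}
#
# from op.records['bookmarks'], instead of touching the local bookmark store.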
2094 2097 @parthandler('phase-heads')
2095 2098 def handlephases(op, inpart):
2096 2099 """apply phases from bundle part to repo"""
2097 2100 headsbyphase = phases.binarydecode(inpart)
2098 2101 phases.updatephases(op.repo.unfiltered(), op.gettransaction, headsbyphase)
2099 2102
2100 2103 @parthandler('reply:pushkey', ('return', 'in-reply-to'))
2101 2104 def handlepushkeyreply(op, inpart):
2102 2105 """retrieve the result of a pushkey request"""
2103 2106 ret = int(inpart.params['return'])
2104 2107 partid = int(inpart.params['in-reply-to'])
2105 2108 op.records.add('pushkey', {'return': ret}, partid)
2106 2109
2107 2110 @parthandler('obsmarkers')
2108 2111 def handleobsmarker(op, inpart):
2109 2112 """add a stream of obsmarkers to the repo"""
2110 2113 tr = op.gettransaction()
2111 2114 markerdata = inpart.read()
2112 2115 if op.ui.config('experimental', 'obsmarkers-exchange-debug'):
2113 2116 op.ui.write(('obsmarker-exchange: %i bytes received\n')
2114 2117 % len(markerdata))
2115 2118 # The mergemarkers call will crash if marker creation is not enabled.
2116 2119 # we want to avoid this if the part is advisory.
2117 2120 if not inpart.mandatory and op.repo.obsstore.readonly:
2118 2121 op.repo.ui.debug('ignoring obsolescence markers, feature not enabled\n')
2119 2122 return
2120 2123 new = op.repo.obsstore.mergemarkers(tr, markerdata)
2121 2124 op.repo.invalidatevolatilesets()
2122 2125 if new:
2123 2126 op.repo.ui.status(_('%i new obsolescence markers\n') % new)
2124 2127 op.records.add('obsmarkers', {'new': new})
2125 2128 if op.reply is not None:
2126 2129 rpart = op.reply.newpart('reply:obsmarkers')
2127 2130 rpart.addparam(
2128 2131 'in-reply-to', pycompat.bytestr(inpart.id), mandatory=False)
2129 2132 rpart.addparam('new', '%i' % new, mandatory=False)
2130 2133
2131 2134
2132 2135 @parthandler('reply:obsmarkers', ('new', 'in-reply-to'))
2133 2136 def handleobsmarkerreply(op, inpart):
2134 2137 """retrieve the result of an obsmarkers part"""
2135 2138 ret = int(inpart.params['new'])
2136 2139 partid = int(inpart.params['in-reply-to'])
2137 2140 op.records.add('obsmarkers', {'new': ret}, partid)
2138 2141
2139 2142 @parthandler('hgtagsfnodes')
2140 2143 def handlehgtagsfnodes(op, inpart):
2141 2144 """Applies .hgtags fnodes cache entries to the local repo.
2142 2145
2143 2146 Payload is pairs of 20 byte changeset nodes and filenodes.
2144 2147 """
2145 2148 # Grab the transaction so we ensure that we have the lock at this point.
2146 2149 if op.ui.configbool('experimental', 'bundle2lazylocking'):
2147 2150 op.gettransaction()
2148 2151 cache = tags.hgtagsfnodescache(op.repo.unfiltered())
2149 2152
2150 2153 count = 0
2151 2154 while True:
2152 2155 node = inpart.read(20)
2153 2156 fnode = inpart.read(20)
2154 2157 if len(node) < 20 or len(fnode) < 20:
2155 2158 op.ui.debug('ignoring incomplete received .hgtags fnodes data\n')
2156 2159 break
2157 2160 cache.setfnode(node, fnode)
2158 2161 count += 1
2159 2162
2160 2163 cache.write()
2161 2164 op.ui.debug('applied %i hgtags fnodes cache entries\n' % count)
2162 2165
2163 2166 rbcstruct = struct.Struct('>III')
2164 2167
2165 2168 @parthandler('cache:rev-branch-cache')
2166 2169 def handlerbc(op, inpart):
2167 2170 """receive a rev-branch-cache payload and update the local cache
2168 2171
2169 2172 The payload is a series of records, one per branch, each containing:
2170 2173
2171 2174 1) branch name length
2172 2175 2) number of open heads
2173 2176 3) number of closed heads
2174 2177 4) open heads nodes
2175 2178 5) closed heads nodes
2176 2179 """
2177 2180 total = 0
2178 2181 rawheader = inpart.read(rbcstruct.size)
2179 2182 cache = op.repo.revbranchcache()
2180 2183 cl = op.repo.unfiltered().changelog
2181 2184 while rawheader:
2182 2185 header = rbcstruct.unpack(rawheader)
2183 2186 total += header[1] + header[2]
2184 2187 utf8branch = inpart.read(header[0])
2185 2188 branch = encoding.tolocal(utf8branch)
2186 2189 for x in xrange(header[1]):
2187 2190 node = inpart.read(20)
2188 2191 rev = cl.rev(node)
2189 2192 cache.setdata(branch, rev, node, False)
2190 2193 for x in xrange(header[2]):
2191 2194 node = inpart.read(20)
2192 2195 rev = cl.rev(node)
2193 2196 cache.setdata(branch, rev, node, True)
2194 2197 rawheader = inpart.read(rbcstruct.size)
2195 2198 cache.write()
2196 2199
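# Worked example (illustrative) of a single record in the payload framed by
# rbcstruct above: branch 'default' with one open head and no closed heads is
# encoded as
#
#     struct.pack('>III', 7, 1, 0) + b'default' + node   # node is 20 bytes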
2197 2200 @parthandler('pushvars')
2198 2201 def bundle2getvars(op, part):
2199 2202 '''unbundle a bundle2 containing shellvars on the server'''
2200 2203 # An option to disable unbundling on server-side for security reasons
2201 2204 if op.ui.configbool('push', 'pushvars.server'):
2202 2205 hookargs = {}
2203 2206 for key, value in part.advisoryparams:
2204 2207 key = key.upper()
2205 2208 # We want pushed variables to have USERVAR_ prepended so we know
2206 2209 # they came from the --pushvar flag.
2207 2210 key = "USERVAR_" + key
2208 2211 hookargs[key] = value
2209 2212 op.addhookargs(hookargs)
2210 2213
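# For example, `hg push --pushvar DEBUG=1` arrives here as the advisory
# parameter ('DEBUG', '1') and, when 'push.pushvars.server' is enabled, is
# exposed to server-side hooks as the environment variable HG_USERVAR_DEBUG.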
2211 2214 @parthandler('stream2', ('requirements', 'filecount', 'bytecount'))
2212 2215 def handlestreamv2bundle(op, part):
2213 2216
2214 2217 requirements = urlreq.unquote(part.params['requirements']).split(',')
2215 2218 filecount = int(part.params['filecount'])
2216 2219 bytecount = int(part.params['bytecount'])
2217 2220
2218 2221 repo = op.repo
2219 2222 if len(repo):
2220 2223 msg = _('cannot apply stream clone to non empty repository')
2221 2224 raise error.Abort(msg)
2222 2225
2223 2226 repo.ui.debug('applying stream bundle\n')
2224 2227 streamclone.applybundlev2(repo, part, filecount, bytecount,
2225 2228 requirements)
@@ -1,5638 +1,5639 b''
1 1 # commands.py - command processing for mercurial
2 2 #
3 3 # Copyright 2005-2007 Matt Mackall <mpm@selenic.com>
4 4 #
5 5 # This software may be used and distributed according to the terms of the
6 6 # GNU General Public License version 2 or any later version.
7 7
8 8 from __future__ import absolute_import
9 9
10 10 import difflib
11 11 import errno
12 12 import os
13 13 import re
14 14 import sys
15 15
16 16 from .i18n import _
17 17 from .node import (
18 18 hex,
19 19 nullid,
20 20 nullrev,
21 21 short,
22 22 )
23 23 from . import (
24 24 archival,
25 25 bookmarks,
26 26 bundle2,
27 27 changegroup,
28 28 cmdutil,
29 29 copies,
30 30 debugcommands as debugcommandsmod,
31 31 destutil,
32 32 dirstateguard,
33 33 discovery,
34 34 encoding,
35 35 error,
36 36 exchange,
37 37 extensions,
38 38 formatter,
39 39 graphmod,
40 40 hbisect,
41 41 help,
42 42 hg,
43 43 lock as lockmod,
44 44 logcmdutil,
45 45 merge as mergemod,
46 46 obsolete,
47 47 obsutil,
48 48 patch,
49 49 phases,
50 50 pycompat,
51 51 rcutil,
52 52 registrar,
53 53 revsetlang,
54 54 rewriteutil,
55 55 scmutil,
56 56 server,
57 57 streamclone,
58 58 tags as tagsmod,
59 59 templatekw,
60 60 ui as uimod,
61 61 util,
62 62 wireprotoserver,
63 63 )
64 64 from .utils import (
65 65 dateutil,
66 66 procutil,
67 67 stringutil,
68 68 )
69 69
70 70 release = lockmod.release
71 71
72 72 table = {}
73 73 table.update(debugcommandsmod.command._table)
74 74
75 75 command = registrar.command(table)
76 76 readonly = registrar.command.readonly
77 77
78 78 # common command options
79 79
80 80 globalopts = [
81 81 ('R', 'repository', '',
82 82 _('repository root directory or name of overlay bundle file'),
83 83 _('REPO')),
84 84 ('', 'cwd', '',
85 85 _('change working directory'), _('DIR')),
86 86 ('y', 'noninteractive', None,
87 87 _('do not prompt, automatically pick the first choice for all prompts')),
88 88 ('q', 'quiet', None, _('suppress output')),
89 89 ('v', 'verbose', None, _('enable additional output')),
90 90 ('', 'color', '',
91 91 # i18n: 'always', 'auto', 'never', and 'debug' are keywords
92 92 # and should not be translated
93 93 _("when to colorize (boolean, always, auto, never, or debug)"),
94 94 _('TYPE')),
95 95 ('', 'config', [],
96 96 _('set/override config option (use \'section.name=value\')'),
97 97 _('CONFIG')),
98 98 ('', 'debug', None, _('enable debugging output')),
99 99 ('', 'debugger', None, _('start debugger')),
100 100 ('', 'encoding', encoding.encoding, _('set the charset encoding'),
101 101 _('ENCODE')),
102 102 ('', 'encodingmode', encoding.encodingmode,
103 103 _('set the charset encoding mode'), _('MODE')),
104 104 ('', 'traceback', None, _('always print a traceback on exception')),
105 105 ('', 'time', None, _('time how long the command takes')),
106 106 ('', 'profile', None, _('print command execution profile')),
107 107 ('', 'version', None, _('output version information and exit')),
108 108 ('h', 'help', None, _('display help and exit')),
109 109 ('', 'hidden', False, _('consider hidden changesets')),
110 110 ('', 'pager', 'auto',
111 111 _("when to paginate (boolean, always, auto, or never)"), _('TYPE')),
112 112 ]
113 113
114 114 dryrunopts = cmdutil.dryrunopts
115 115 remoteopts = cmdutil.remoteopts
116 116 walkopts = cmdutil.walkopts
117 117 commitopts = cmdutil.commitopts
118 118 commitopts2 = cmdutil.commitopts2
119 119 formatteropts = cmdutil.formatteropts
120 120 templateopts = cmdutil.templateopts
121 121 logopts = cmdutil.logopts
122 122 diffopts = cmdutil.diffopts
123 123 diffwsopts = cmdutil.diffwsopts
124 124 diffopts2 = cmdutil.diffopts2
125 125 mergetoolopts = cmdutil.mergetoolopts
126 126 similarityopts = cmdutil.similarityopts
127 127 subrepoopts = cmdutil.subrepoopts
128 128 debugrevlogopts = cmdutil.debugrevlogopts
129 129
130 130 # Commands start here, listed alphabetically
131 131
132 132 @command('^add',
133 133 walkopts + subrepoopts + dryrunopts,
134 134 _('[OPTION]... [FILE]...'),
135 135 inferrepo=True)
136 136 def add(ui, repo, *pats, **opts):
137 137 """add the specified files on the next commit
138 138
139 139 Schedule files to be version controlled and added to the
140 140 repository.
141 141
142 142 The files will be added to the repository at the next commit. To
143 143 undo an add before that, see :hg:`forget`.
144 144
145 145 If no names are given, add all files to the repository (except
146 146 files matching ``.hgignore``).
147 147
148 148 .. container:: verbose
149 149
150 150 Examples:
151 151
152 152 - New (unknown) files are added
153 153 automatically by :hg:`add`::
154 154
155 155 $ ls
156 156 foo.c
157 157 $ hg status
158 158 ? foo.c
159 159 $ hg add
160 160 adding foo.c
161 161 $ hg status
162 162 A foo.c
163 163
164 164 - Specific files to be added can be specified::
165 165
166 166 $ ls
167 167 bar.c foo.c
168 168 $ hg status
169 169 ? bar.c
170 170 ? foo.c
171 171 $ hg add bar.c
172 172 $ hg status
173 173 A bar.c
174 174 ? foo.c
175 175
176 176 Returns 0 if all files are successfully added.
177 177 """
178 178
179 179 m = scmutil.match(repo[None], pats, pycompat.byteskwargs(opts))
180 180 rejected = cmdutil.add(ui, repo, m, "", False, **opts)
181 181 return rejected and 1 or 0
182 182
183 183 @command('addremove',
184 184 similarityopts + subrepoopts + walkopts + dryrunopts,
185 185 _('[OPTION]... [FILE]...'),
186 186 inferrepo=True)
187 187 def addremove(ui, repo, *pats, **opts):
188 188 """add all new files, delete all missing files
189 189
190 190 Add all new files and remove all missing files from the
191 191 repository.
192 192
193 193 Unless names are given, new files are ignored if they match any of
194 194 the patterns in ``.hgignore``. As with add, these changes take
195 195 effect at the next commit.
196 196
197 197 Use the -s/--similarity option to detect renamed files. This
198 198 option takes a percentage between 0 (disabled) and 100 (files must
199 199 be identical) as its parameter. With a parameter greater than 0,
200 200 this compares every removed file with every added file and records
201 201 those similar enough as renames. Detecting renamed files this way
202 202 can be expensive. After using this option, :hg:`status -C` can be
203 203 used to check which files were identified as moved or renamed. If
204 204 not specified, -s/--similarity defaults to 100 and only renames of
205 205 identical files are detected.
206 206
207 207 .. container:: verbose
208 208
209 209 Examples:
210 210
211 211 - A number of files (bar.c and foo.c) are new,
212 212 while foobar.c has been removed (without using :hg:`remove`)
213 213 from the repository::
214 214
215 215 $ ls
216 216 bar.c foo.c
217 217 $ hg status
218 218 ! foobar.c
219 219 ? bar.c
220 220 ? foo.c
221 221 $ hg addremove
222 222 adding bar.c
223 223 adding foo.c
224 224 removing foobar.c
225 225 $ hg status
226 226 A bar.c
227 227 A foo.c
228 228 R foobar.c
229 229
230 230 - A file foobar.c was moved to foo.c without using :hg:`rename`.
231 231 Afterwards, it was edited slightly::
232 232
233 233 $ ls
234 234 foo.c
235 235 $ hg status
236 236 ! foobar.c
237 237 ? foo.c
238 238 $ hg addremove --similarity 90
239 239 removing foobar.c
240 240 adding foo.c
241 241 recording removal of foobar.c as rename to foo.c (94% similar)
242 242 $ hg status -C
243 243 A foo.c
244 244 foobar.c
245 245 R foobar.c
246 246
247 247 Returns 0 if all files are successfully added.
248 248 """
249 249 opts = pycompat.byteskwargs(opts)
250 250 try:
251 251 sim = float(opts.get('similarity') or 100)
252 252 except ValueError:
253 253 raise error.Abort(_('similarity must be a number'))
254 254 if sim < 0 or sim > 100:
255 255 raise error.Abort(_('similarity must be between 0 and 100'))
256 256 matcher = scmutil.match(repo[None], pats, opts)
257 257 return scmutil.addremove(repo, matcher, "", opts, similarity=sim / 100.0)
258 258
259 259 @command('^annotate|blame',
260 260 [('r', 'rev', '', _('annotate the specified revision'), _('REV')),
261 261 ('', 'follow', None,
262 262 _('follow copies/renames and list the filename (DEPRECATED)')),
263 263 ('', 'no-follow', None, _("don't follow copies and renames")),
264 264 ('a', 'text', None, _('treat all files as text')),
265 265 ('u', 'user', None, _('list the author (long with -v)')),
266 266 ('f', 'file', None, _('list the filename')),
267 267 ('d', 'date', None, _('list the date (short with -q)')),
268 268 ('n', 'number', None, _('list the revision number (default)')),
269 269 ('c', 'changeset', None, _('list the changeset')),
270 270 ('l', 'line-number', None, _('show line number at the first appearance')),
271 271 ('', 'skip', [], _('revision to not display (EXPERIMENTAL)'), _('REV')),
272 272 ] + diffwsopts + walkopts + formatteropts,
273 273 _('[-r REV] [-f] [-a] [-u] [-d] [-n] [-c] [-l] FILE...'),
274 274 inferrepo=True)
275 275 def annotate(ui, repo, *pats, **opts):
276 276 """show changeset information by line for each file
277 277
278 278 List changes in files, showing the revision id responsible for
279 279 each line.
280 280
281 281 This command is useful for discovering when a change was made and
282 282 by whom.
283 283
284 284 If you include --file, --user, or --date, the revision number is
285 285 suppressed unless you also include --number.
286 286
287 287 Without the -a/--text option, annotate will avoid processing files
288 288 it detects as binary. With -a, annotate will annotate the file
289 289 anyway, although the results will probably be neither useful
290 290 nor desirable.
291 291
292 292 Returns 0 on success.
293 293 """
294 294 opts = pycompat.byteskwargs(opts)
295 295 if not pats:
296 296 raise error.Abort(_('at least one filename or pattern is required'))
297 297
298 298 if opts.get('follow'):
299 299 # --follow is deprecated and now just an alias for -f/--file
300 300 # to mimic the behavior of Mercurial before version 1.5
301 301 opts['file'] = True
302 302
303 303 rev = opts.get('rev')
304 304 if rev:
305 305 repo = scmutil.unhidehashlikerevs(repo, [rev], 'nowarn')
306 306 ctx = scmutil.revsingle(repo, rev)
307 307
308 308 rootfm = ui.formatter('annotate', opts)
309 309 if ui.quiet:
310 310 datefunc = dateutil.shortdate
311 311 else:
312 312 datefunc = dateutil.datestr
313 313 if ctx.rev() is None:
314 314 def hexfn(node):
315 315 if node is None:
316 316 return None
317 317 else:
318 318 return rootfm.hexfunc(node)
319 319 if opts.get('changeset'):
320 320 # omit "+" suffix which is appended to node hex
321 321 def formatrev(rev):
322 322 if rev is None:
323 323 return '%d' % ctx.p1().rev()
324 324 else:
325 325 return '%d' % rev
326 326 else:
327 327 def formatrev(rev):
328 328 if rev is None:
329 329 return '%d+' % ctx.p1().rev()
330 330 else:
331 331 return '%d ' % rev
332 332 def formathex(hex):
333 333 if hex is None:
334 334 return '%s+' % rootfm.hexfunc(ctx.p1().node())
335 335 else:
336 336 return '%s ' % hex
337 337 else:
338 338 hexfn = rootfm.hexfunc
339 339 formatrev = formathex = pycompat.bytestr
340 340
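# each opmap entry is (option name, column separator, value getter,
# formatter applied when the output is plain)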
341 341 opmap = [('user', ' ', lambda x: x.fctx.user(), ui.shortuser),
342 342 ('number', ' ', lambda x: x.fctx.rev(), formatrev),
343 343 ('changeset', ' ', lambda x: hexfn(x.fctx.node()), formathex),
344 344 ('date', ' ', lambda x: x.fctx.date(), util.cachefunc(datefunc)),
345 345 ('file', ' ', lambda x: x.fctx.path(), pycompat.bytestr),
346 346 ('line_number', ':', lambda x: x.lineno, pycompat.bytestr),
347 347 ]
348 348 fieldnamemap = {'number': 'rev', 'changeset': 'node'}
349 349
350 350 if (not opts.get('user') and not opts.get('changeset')
351 351 and not opts.get('date') and not opts.get('file')):
352 352 opts['number'] = True
353 353
354 354 linenumber = opts.get('line_number') is not None
355 355 if linenumber and (not opts.get('changeset')) and (not opts.get('number')):
356 356 raise error.Abort(_('at least one of -n/-c is required for -l'))
357 357
358 358 ui.pager('annotate')
359 359
360 360 if rootfm.isplain():
361 361 def makefunc(get, fmt):
362 362 return lambda x: fmt(get(x))
363 363 else:
364 364 def makefunc(get, fmt):
365 365 return get
366 366 funcmap = [(makefunc(get, fmt), sep) for op, sep, get, fmt in opmap
367 367 if opts.get(op)]
368 368 funcmap[0] = (funcmap[0][0], '') # no separator in front of first column
369 369 fields = ' '.join(fieldnamemap.get(op, op) for op, sep, get, fmt in opmap
370 370 if opts.get(op))
371 371
372 372 def bad(x, y):
373 373 raise error.Abort("%s: %s" % (x, y))
374 374
375 375 m = scmutil.match(ctx, pats, opts, badfn=bad)
376 376
377 377 follow = not opts.get('no_follow')
378 378 diffopts = patch.difffeatureopts(ui, opts, section='annotate',
379 379 whitespace=True)
380 380 skiprevs = opts.get('skip')
381 381 if skiprevs:
382 382 skiprevs = scmutil.revrange(repo, skiprevs)
383 383
384 384 for abs in ctx.walk(m):
385 385 fctx = ctx[abs]
386 386 rootfm.startitem()
387 387 rootfm.data(abspath=abs, path=m.rel(abs))
388 388 if not opts.get('text') and fctx.isbinary():
389 389 rootfm.plain(_("%s: binary file\n")
390 390 % ((pats and m.rel(abs)) or abs))
391 391 continue
392 392
393 393 fm = rootfm.nested('lines')
394 394 lines = fctx.annotate(follow=follow, skiprevs=skiprevs,
395 395 diffopts=diffopts)
396 396 if not lines:
397 397 fm.end()
398 398 continue
399 399 formats = []
400 400 pieces = []
401 401
402 402 for f, sep in funcmap:
403 403 l = [f(n) for n in lines]
404 404 if fm.isplain():
405 405 sizes = [encoding.colwidth(x) for x in l]
406 406 ml = max(sizes)
407 407 formats.append([sep + ' ' * (ml - w) + '%s' for w in sizes])
408 408 else:
409 409 formats.append(['%s' for x in l])
410 410 pieces.append(l)
411 411
412 412 for f, p, n in zip(zip(*formats), zip(*pieces), lines):
413 413 fm.startitem()
414 414 fm.context(fctx=n.fctx)
415 415 fm.write(fields, "".join(f), *p)
416 416 if n.skip:
417 417 fmt = "* %s"
418 418 else:
419 419 fmt = ": %s"
420 420 fm.write('line', fmt, n.text)
421 421
422 422 if not lines[-1].text.endswith('\n'):
423 423 fm.plain('\n')
424 424 fm.end()
425 425
426 426 rootfm.end()
427 427
428 428 @command('archive',
429 429 [('', 'no-decode', None, _('do not pass files through decoders')),
430 430 ('p', 'prefix', '', _('directory prefix for files in archive'),
431 431 _('PREFIX')),
432 432 ('r', 'rev', '', _('revision to distribute'), _('REV')),
433 433 ('t', 'type', '', _('type of distribution to create'), _('TYPE')),
434 434 ] + subrepoopts + walkopts,
435 435 _('[OPTION]... DEST'))
436 436 def archive(ui, repo, dest, **opts):
437 437 '''create an unversioned archive of a repository revision
438 438
439 439 By default, the revision used is the parent of the working
440 440 directory; use -r/--rev to specify a different revision.
441 441
442 442 The archive type is automatically detected based on file
443 443 extension (to override, use -t/--type).
444 444
445 445 .. container:: verbose
446 446
447 447 Examples:
448 448
449 449 - create a zip file containing the 1.0 release::
450 450
451 451 hg archive -r 1.0 project-1.0.zip
452 452
453 453 - create a tarball excluding .hg files::
454 454
455 455 hg archive project.tar.gz -X ".hg*"
456 456
457 457 Valid types are:
458 458
459 459 :``files``: a directory full of files (default)
460 460 :``tar``: tar archive, uncompressed
461 461 :``tbz2``: tar archive, compressed using bzip2
462 462 :``tgz``: tar archive, compressed using gzip
463 463 :``uzip``: zip archive, uncompressed
464 464 :``zip``: zip archive, compressed using deflate
465 465
466 466 The exact name of the destination archive or directory is given
467 467 using a format string; see :hg:`help export` for details.
468 468
469 469 Each member added to an archive file has a directory prefix
470 470 prepended. Use -p/--prefix to specify a format string for the
471 471 prefix. The default is the basename of the archive, with suffixes
472 472 removed.
473 473
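.. container:: verbose

For example, the prefix may itself contain format codes; this names each
archive member with a prefix carrying the short changeset hash (file
names are illustrative)::

hg archive -r 1.0 -p 'project-%h' project.tar.gz
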
474 474 Returns 0 on success.
475 475 '''
476 476
477 477 opts = pycompat.byteskwargs(opts)
478 478 rev = opts.get('rev')
479 479 if rev:
480 480 repo = scmutil.unhidehashlikerevs(repo, [rev], 'nowarn')
481 481 ctx = scmutil.revsingle(repo, rev)
482 482 if not ctx:
483 483 raise error.Abort(_('no working directory: please specify a revision'))
484 484 node = ctx.node()
485 485 dest = cmdutil.makefilename(ctx, dest)
486 486 if os.path.realpath(dest) == repo.root:
487 487 raise error.Abort(_('repository root cannot be destination'))
488 488
489 489 kind = opts.get('type') or archival.guesskind(dest) or 'files'
490 490 prefix = opts.get('prefix')
491 491
492 492 if dest == '-':
493 493 if kind == 'files':
494 494 raise error.Abort(_('cannot archive plain files to stdout'))
495 495 dest = cmdutil.makefileobj(ctx, dest)
496 496 if not prefix:
497 497 prefix = os.path.basename(repo.root) + '-%h'
498 498
499 499 prefix = cmdutil.makefilename(ctx, prefix)
500 500 match = scmutil.match(ctx, [], opts)
501 501 archival.archive(repo, dest, node, kind, not opts.get('no_decode'),
502 502 match, prefix, subrepos=opts.get('subrepos'))
503 503
504 504 @command('backout',
505 505 [('', 'merge', None, _('merge with old dirstate parent after backout')),
506 506 ('', 'commit', None,
507 507 _('commit if no conflicts were encountered (DEPRECATED)')),
508 508 ('', 'no-commit', None, _('do not commit')),
509 509 ('', 'parent', '',
510 510 _('parent to choose when backing out merge (DEPRECATED)'), _('REV')),
511 511 ('r', 'rev', '', _('revision to backout'), _('REV')),
512 512 ('e', 'edit', False, _('invoke editor on commit messages')),
513 513 ] + mergetoolopts + walkopts + commitopts + commitopts2,
514 514 _('[OPTION]... [-r] REV'))
515 515 def backout(ui, repo, node=None, rev=None, **opts):
516 516 '''reverse effect of earlier changeset
517 517
518 518 Prepare a new changeset with the effect of REV undone in the
519 519 current working directory. If no conflicts were encountered,
520 520 it will be committed immediately.
521 521
522 522 If REV is the parent of the working directory, then this new changeset
523 523 is committed automatically (unless --no-commit is specified).
524 524
525 525 .. note::
526 526
527 527 :hg:`backout` cannot be used to fix either an unwanted or
528 528 incorrect merge.
529 529
530 530 .. container:: verbose
531 531
532 532 Examples:
533 533
534 534 - Reverse the effect of the parent of the working directory.
535 535 This backout will be committed immediately::
536 536
537 537 hg backout -r .
538 538
539 539 - Reverse the effect of previous bad revision 23::
540 540
541 541 hg backout -r 23
542 542
543 543 - Reverse the effect of previous bad revision 23 and
544 544 leave changes uncommitted::
545 545
546 546 hg backout -r 23 --no-commit
547 547 hg commit -m "Backout revision 23"
548 548
549 549 By default, the pending changeset will have one parent,
550 550 maintaining a linear history. With --merge, the pending
551 551 changeset will instead have two parents: the old parent of the
552 552 working directory and a new child of REV that simply undoes REV.
553 553
554 554 Before version 1.7, the behavior without --merge was equivalent
555 555 to specifying --merge followed by :hg:`update --clean .` to
556 556 cancel the merge and leave the child of REV as a head to be
557 557 merged separately.
558 558
559 559 See :hg:`help dates` for a list of formats valid for -d/--date.
560 560
561 561 See :hg:`help revert` for a way to restore files to the state
562 562 of another revision.
563 563
564 564 Returns 0 on success, 1 if nothing to backout or there are unresolved
565 565 files.
566 566 '''
567 567 wlock = lock = None
568 568 try:
569 569 wlock = repo.wlock()
570 570 lock = repo.lock()
571 571 return _dobackout(ui, repo, node, rev, **opts)
572 572 finally:
573 573 release(lock, wlock)
574 574
575 575 def _dobackout(ui, repo, node=None, rev=None, **opts):
576 576 opts = pycompat.byteskwargs(opts)
577 577 if opts.get('commit') and opts.get('no_commit'):
578 578 raise error.Abort(_("cannot use --commit with --no-commit"))
579 579 if opts.get('merge') and opts.get('no_commit'):
580 580 raise error.Abort(_("cannot use --merge with --no-commit"))
581 581
582 582 if rev and node:
583 583 raise error.Abort(_("please specify just one revision"))
584 584
585 585 if not rev:
586 586 rev = node
587 587
588 588 if not rev:
589 589 raise error.Abort(_("please specify a revision to backout"))
590 590
591 591 date = opts.get('date')
592 592 if date:
593 593 opts['date'] = dateutil.parsedate(date)
594 594
595 595 cmdutil.checkunfinished(repo)
596 596 cmdutil.bailifchanged(repo)
597 597 node = scmutil.revsingle(repo, rev).node()
598 598
599 599 op1, op2 = repo.dirstate.parents()
600 600 if not repo.changelog.isancestor(node, op1):
601 601 raise error.Abort(_('cannot backout change that is not an ancestor'))
602 602
603 603 p1, p2 = repo.changelog.parents(node)
604 604 if p1 == nullid:
605 605 raise error.Abort(_('cannot backout a change with no parents'))
606 606 if p2 != nullid:
607 607 if not opts.get('parent'):
608 608 raise error.Abort(_('cannot backout a merge changeset'))
609 609 p = repo.lookup(opts['parent'])
610 610 if p not in (p1, p2):
611 611 raise error.Abort(_('%s is not a parent of %s') %
612 612 (short(p), short(node)))
613 613 parent = p
614 614 else:
615 615 if opts.get('parent'):
616 616 raise error.Abort(_('cannot use --parent on non-merge changeset'))
617 617 parent = p1
618 618
619 619 # the backout should appear on the same branch
620 620 branch = repo.dirstate.branch()
621 621 bheads = repo.branchheads(branch)
622 622 rctx = scmutil.revsingle(repo, hex(parent))
623 623 if not opts.get('merge') and op1 != node:
624 624 dsguard = dirstateguard.dirstateguard(repo, 'backout')
625 625 try:
626 626 ui.setconfig('ui', 'forcemerge', opts.get('tool', ''),
627 627 'backout')
628 628 stats = mergemod.update(repo, parent, True, True, node, False)
629 629 repo.setparents(op1, op2)
630 630 dsguard.close()
631 631 hg._showstats(repo, stats)
632 632 if stats.unresolvedcount:
633 633 repo.ui.status(_("use 'hg resolve' to retry unresolved "
634 634 "file merges\n"))
635 635 return 1
636 636 finally:
637 637 ui.setconfig('ui', 'forcemerge', '', '')
638 638 lockmod.release(dsguard)
639 639 else:
640 640 hg.clean(repo, node, show_stats=False)
641 641 repo.dirstate.setbranch(branch)
642 642 cmdutil.revert(ui, repo, rctx, repo.dirstate.parents())
643 643
644 644 if opts.get('no_commit'):
645 645 msg = _("changeset %s backed out, "
646 646 "don't forget to commit.\n")
647 647 ui.status(msg % short(node))
648 648 return 0
649 649
650 650 def commitfunc(ui, repo, message, match, opts):
651 651 editform = 'backout'
652 652 e = cmdutil.getcommiteditor(editform=editform,
653 653 **pycompat.strkwargs(opts))
654 654 if not message:
655 655 # we don't translate commit messages
656 656 message = "Backed out changeset %s" % short(node)
657 657 e = cmdutil.getcommiteditor(edit=True, editform=editform)
658 658 return repo.commit(message, opts.get('user'), opts.get('date'),
659 659 match, editor=e)
660 660 newnode = cmdutil.commit(ui, repo, commitfunc, [], opts)
661 661 if not newnode:
662 662 ui.status(_("nothing changed\n"))
663 663 return 1
664 664 cmdutil.commitstatus(repo, newnode, branch, bheads)
665 665
666 666 def nice(node):
667 667 return '%d:%s' % (repo.changelog.rev(node), short(node))
668 668 ui.status(_('changeset %s backs out changeset %s\n') %
669 669 (nice(repo.changelog.tip()), nice(node)))
670 670 if opts.get('merge') and op1 != node:
671 671 hg.clean(repo, op1, show_stats=False)
672 672 ui.status(_('merging with changeset %s\n')
673 673 % nice(repo.changelog.tip()))
674 674 try:
675 675 ui.setconfig('ui', 'forcemerge', opts.get('tool', ''),
676 676 'backout')
677 677 return hg.merge(repo, hex(repo.changelog.tip()))
678 678 finally:
679 679 ui.setconfig('ui', 'forcemerge', '', '')
680 680 return 0
681 681
682 682 @command('bisect',
683 683 [('r', 'reset', False, _('reset bisect state')),
684 684 ('g', 'good', False, _('mark changeset good')),
685 685 ('b', 'bad', False, _('mark changeset bad')),
686 686 ('s', 'skip', False, _('skip testing changeset')),
687 687 ('e', 'extend', False, _('extend the bisect range')),
688 688 ('c', 'command', '', _('use command to check changeset state'), _('CMD')),
689 689 ('U', 'noupdate', False, _('do not update to target'))],
690 690 _("[-gbsr] [-U] [-c CMD] [REV]"))
691 691 def bisect(ui, repo, rev=None, extra=None, command=None,
692 692 reset=None, good=None, bad=None, skip=None, extend=None,
693 693 noupdate=None):
694 694 """subdivision search of changesets
695 695
696 696 This command helps to find changesets which introduce problems. To
697 697 use, mark the earliest changeset you know exhibits the problem as
698 698 bad, then mark the latest changeset which is free from the problem
699 699 as good. Bisect will update your working directory to a revision
700 700 for testing (unless the -U/--noupdate option is specified). Once
701 701 you have performed tests, mark the working directory as good or
702 702 bad, and bisect will either update to another candidate changeset
703 703 or announce that it has found the bad revision.
704 704
705 705 As a shortcut, you can also use the revision argument to mark a
706 706 revision as good or bad without checking it out first.
707 707
708 708 If you supply a command, it will be used for automatic bisection.
709 709 The environment variable HG_NODE will contain the ID of the
710 710 changeset being tested. The exit status of the command will be
711 711 used to mark revisions as good or bad: status 0 means good, 125
712 712 means to skip the revision, 127 (command not found) will abort the
713 713 bisection, and any other non-zero exit status means the revision
714 714 is bad.
715 715
716 716 .. container:: verbose
717 717
718 718 Some examples:
719 719
720 720 - start a bisection with known bad revision 34, and good revision 12::
721 721
722 722 hg bisect --bad 34
723 723 hg bisect --good 12
724 724
725 725 - advance the current bisection by marking current revision as good or
726 726 bad::
727 727
728 728 hg bisect --good
729 729 hg bisect --bad
730 730
731 731 - mark the current revision, or a known revision, to be skipped (e.g. if
732 732 that revision is not usable because of another issue)::
733 733
734 734 hg bisect --skip
735 735 hg bisect --skip 23
736 736
737 737 - skip all revisions that do not touch directories ``foo`` or ``bar``::
738 738
739 739 hg bisect --skip "!( file('path:foo') & file('path:bar') )"
740 740
741 741 - forget the current bisection::
742 742
743 743 hg bisect --reset
744 744
745 745 - use 'make && make tests' to automatically find the first broken
746 746 revision::
747 747
748 748 hg bisect --reset
749 749 hg bisect --bad 34
750 750 hg bisect --good 12
751 751 hg bisect --command "make && make tests"
752 752
753 753 - see all changesets whose states are already known in the current
754 754 bisection::
755 755
756 756 hg log -r "bisect(pruned)"
757 757
758 758 - see the changeset currently being bisected (especially useful
759 759 if running with -U/--noupdate)::
760 760
761 761 hg log -r "bisect(current)"
762 762
763 763 - see all changesets that took part in the current bisection::
764 764
765 765 hg log -r "bisect(range)"
766 766
767 767 - you can even get a nice graph::
768 768
769 769 hg log --graph -r "bisect(range)"
770 770
771 771 See :hg:`help revisions.bisect` for more about the `bisect()` predicate.
772 772
773 773 Returns 0 on success.
774 774 """
775 775 # backward compatibility
776 776 if rev in "good bad reset init".split():
777 777 ui.warn(_("(use of 'hg bisect <cmd>' is deprecated)\n"))
778 778 cmd, rev, extra = rev, extra, None
779 779 if cmd == "good":
780 780 good = True
781 781 elif cmd == "bad":
782 782 bad = True
783 783 else:
784 784 reset = True
785 785 elif extra:
786 786 raise error.Abort(_('incompatible arguments'))
787 787
788 788 incompatibles = {
789 789 '--bad': bad,
790 790 '--command': bool(command),
791 791 '--extend': extend,
792 792 '--good': good,
793 793 '--reset': reset,
794 794 '--skip': skip,
795 795 }
796 796
797 797 enabled = [x for x in incompatibles if incompatibles[x]]
798 798
799 799 if len(enabled) > 1:
800 800 raise error.Abort(_('%s and %s are incompatible') %
801 801 tuple(sorted(enabled)[0:2]))
802 802
803 803 if reset:
804 804 hbisect.resetstate(repo)
805 805 return
806 806
807 807 state = hbisect.load_state(repo)
808 808
809 809 # update state
810 810 if good or bad or skip:
811 811 if rev:
812 812 nodes = [repo.lookup(i) for i in scmutil.revrange(repo, [rev])]
813 813 else:
814 814 nodes = [repo.lookup('.')]
815 815 if good:
816 816 state['good'] += nodes
817 817 elif bad:
818 818 state['bad'] += nodes
819 819 elif skip:
820 820 state['skip'] += nodes
821 821 hbisect.save_state(repo, state)
822 822 if not (state['good'] and state['bad']):
823 823 return
824 824
825 825 def mayupdate(repo, node, show_stats=True):
826 826 """common used update sequence"""
827 827 if noupdate:
828 828 return
829 829 cmdutil.checkunfinished(repo)
830 830 cmdutil.bailifchanged(repo)
831 831 return hg.clean(repo, node, show_stats=show_stats)
832 832
833 833 displayer = logcmdutil.changesetdisplayer(ui, repo, {})
834 834
835 835 if command:
836 836 changesets = 1
837 837 if noupdate:
838 838 try:
839 839 node = state['current'][0]
840 840 except LookupError:
841 841 raise error.Abort(_('current bisect revision is unknown - '
842 842 'start a new bisect to fix'))
843 843 else:
844 844 node, p2 = repo.dirstate.parents()
845 845 if p2 != nullid:
846 846 raise error.Abort(_('current bisect revision is a merge'))
847 847 if rev:
848 848 node = repo[scmutil.revsingle(repo, rev, node)].node()
849 849 try:
850 850 while changesets:
851 851 # update state
852 852 state['current'] = [node]
853 853 hbisect.save_state(repo, state)
854 854 status = ui.system(command, environ={'HG_NODE': hex(node)},
855 855 blockedtag='bisect_check')
856 856 if status == 125:
857 857 transition = "skip"
858 858 elif status == 0:
859 859 transition = "good"
860 860 elif status == 127:
861 861 raise error.Abort(_("failed to execute %s") % command)
862 862 # status < 0 means process was killed
863 863 elif status < 0:
864 864 raise error.Abort(_("%s killed") % command)
865 865 else:
866 866 transition = "bad"
867 867 state[transition].append(node)
868 868 ctx = repo[node]
869 869 ui.status(_('changeset %d:%s: %s\n') % (ctx.rev(), ctx,
870 870 transition))
871 871 hbisect.checkstate(state)
872 872 # bisect
873 873 nodes, changesets, bgood = hbisect.bisect(repo, state)
874 874 # update to next check
875 875 node = nodes[0]
876 876 mayupdate(repo, node, show_stats=False)
877 877 finally:
878 878 state['current'] = [node]
879 879 hbisect.save_state(repo, state)
880 880 hbisect.printresult(ui, repo, state, displayer, nodes, bgood)
881 881 return
882 882
883 883 hbisect.checkstate(state)
884 884
885 885 # actually bisect
886 886 nodes, changesets, good = hbisect.bisect(repo, state)
887 887 if extend:
888 888 if not changesets:
889 889 extendnode = hbisect.extendrange(repo, state, nodes, good)
890 890 if extendnode is not None:
891 891 ui.write(_("Extending search to changeset %d:%s\n")
892 892 % (extendnode.rev(), extendnode))
893 893 state['current'] = [extendnode.node()]
894 894 hbisect.save_state(repo, state)
895 895 return mayupdate(repo, extendnode.node())
896 896 raise error.Abort(_("nothing to extend"))
897 897
898 898 if changesets == 0:
899 899 hbisect.printresult(ui, repo, state, displayer, nodes, good)
900 900 else:
901 901 assert len(nodes) == 1 # only a single node can be tested next
902 902 node = nodes[0]
903 903 # compute the approximate number of remaining tests
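# (the loop below computes floor(log2(changesets)))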
904 904 tests, size = 0, 2
905 905 while size <= changesets:
906 906 tests, size = tests + 1, size * 2
907 907 rev = repo.changelog.rev(node)
908 908 ui.write(_("Testing changeset %d:%s "
909 909 "(%d changesets remaining, ~%d tests)\n")
910 910 % (rev, short(node), changesets, tests))
911 911 state['current'] = [node]
912 912 hbisect.save_state(repo, state)
913 913 return mayupdate(repo, node)
914 914
915 915 @command('bookmarks|bookmark',
916 916 [('f', 'force', False, _('force')),
917 917 ('r', 'rev', '', _('revision for bookmark action'), _('REV')),
918 918 ('d', 'delete', False, _('delete a given bookmark')),
919 919 ('m', 'rename', '', _('rename a given bookmark'), _('OLD')),
920 920 ('i', 'inactive', False, _('mark a bookmark inactive')),
921 921 ] + formatteropts,
922 922 _('hg bookmarks [OPTIONS]... [NAME]...'))
923 923 def bookmark(ui, repo, *names, **opts):
924 924 '''create a new bookmark or list existing bookmarks
925 925
926 926 Bookmarks are labels on changesets to help track lines of development.
927 927 Bookmarks are unversioned and can be moved, renamed and deleted.
928 928 Deleting or moving a bookmark has no effect on the associated changesets.
929 929
930 930 Creating or updating to a bookmark causes it to be marked as 'active'.
931 931 The active bookmark is indicated with a '*'.
932 932 When a commit is made, the active bookmark will advance to the new commit.
933 933 A plain :hg:`update` will also advance an active bookmark, if possible.
934 934 Updating away from a bookmark will cause it to be deactivated.
935 935
936 936 Bookmarks can be pushed and pulled between repositories (see
937 937 :hg:`help push` and :hg:`help pull`). If a shared bookmark has
938 938 diverged, a new 'divergent bookmark' of the form 'name@path' will
939 939 be created. Using :hg:`merge` will resolve the divergence.
940 940
941 941 Specifying '.' as the bookmark name to the -m or -d options is equivalent
942 942 to specifying the active bookmark's name.
943 943
944 944 A bookmark named '@' has the special property that :hg:`clone` will
945 945 check it out by default if it exists.
946 946
947 947 .. container:: verbose
948 948
949 949 Examples:
950 950
951 951 - create an active bookmark for a new line of development::
952 952
953 953 hg book new-feature
954 954
955 955 - create an inactive bookmark as a place marker::
956 956
957 957 hg book -i reviewed
958 958
959 959 - create an inactive bookmark on another changeset::
960 960
961 961 hg book -r .^ tested
962 962
963 963 - rename bookmark turkey to dinner::
964 964
965 965 hg book -m turkey dinner
966 966
967 967 - move the '@' bookmark from another branch::
968 968
969 969 hg book -f @
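
- delete a bookmark that is no longer needed::

hg book -d reviewed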
970 970 '''
971 971 force = opts.get(r'force')
972 972 rev = opts.get(r'rev')
973 973 delete = opts.get(r'delete')
974 974 rename = opts.get(r'rename')
975 975 inactive = opts.get(r'inactive')
976 976
977 977 if delete and rename:
978 978 raise error.Abort(_("--delete and --rename are incompatible"))
979 979 if delete and rev:
980 980 raise error.Abort(_("--rev is incompatible with --delete"))
981 981 if rename and rev:
982 982 raise error.Abort(_("--rev is incompatible with --rename"))
983 983 if not names and (delete or rev):
984 984 raise error.Abort(_("bookmark name required"))
985 985
986 986 if delete or rename or names or inactive:
987 987 with repo.wlock(), repo.lock(), repo.transaction('bookmark') as tr:
988 988 if delete:
989 989 names = pycompat.maplist(repo._bookmarks.expandname, names)
990 990 bookmarks.delete(repo, tr, names)
991 991 elif rename:
992 992 if not names:
993 993 raise error.Abort(_("new bookmark name required"))
994 994 elif len(names) > 1:
995 995 raise error.Abort(_("only one new bookmark name allowed"))
996 996 rename = repo._bookmarks.expandname(rename)
997 997 bookmarks.rename(repo, tr, rename, names[0], force, inactive)
998 998 elif names:
999 999 bookmarks.addbookmarks(repo, tr, names, rev, force, inactive)
1000 1000 elif inactive:
1001 1001 if len(repo._bookmarks) == 0:
1002 1002 ui.status(_("no bookmarks set\n"))
1003 1003 elif not repo._activebookmark:
1004 1004 ui.status(_("no active bookmark\n"))
1005 1005 else:
1006 1006 bookmarks.deactivate(repo)
1007 1007 else: # show bookmarks
1008 1008 bookmarks.printbookmarks(ui, repo, **opts)
1009 1009
1010 1010 @command('branch',
1011 1011 [('f', 'force', None,
1012 1012 _('set branch name even if it shadows an existing branch')),
1013 1013 ('C', 'clean', None, _('reset branch name to parent branch name')),
1014 1014 ('r', 'rev', [], _('change branches of the given revs (EXPERIMENTAL)')),
1015 1015 ],
1016 1016 _('[-fC] [NAME]'))
1017 1017 def branch(ui, repo, label=None, **opts):
1018 1018 """set or show the current branch name
1019 1019
1020 1020 .. note::
1021 1021
1022 1022 Branch names are permanent and global. Use :hg:`bookmark` to create a
1023 1023 light-weight bookmark instead. See :hg:`help glossary` for more
1024 1024 information about named branches and bookmarks.
1025 1025
1026 1026 With no argument, show the current branch name. With one argument,
1027 1027 set the working directory branch name (the branch will not exist
1028 1028 in the repository until the next commit). Standard practice
1029 1029 recommends that primary development take place on the 'default'
1030 1030 branch.
1031 1031
1032 1032 Unless -f/--force is specified, branch will not let you set a
1033 1033 branch name that already exists.
1034 1034
1035 1035 Use -C/--clean to reset the working directory branch to that of
1036 1036 the parent of the working directory, negating a previous branch
1037 1037 change.
1038 1038
1039 1039 Use the command :hg:`update` to switch to an existing branch. Use
1040 1040 :hg:`commit --close-branch` to mark this branch head as closed.
1041 1041 When all heads of a branch are closed, the branch will be
1042 1042 considered closed.
1043 1043
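.. container:: verbose

Examples (branch names are illustrative):

- start a new named branch for the next commit::

hg branch stable-fixes

- discard that choice and return to the parent's branch name::

hg branch --clean
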
1044 1044 Returns 0 on success.
1045 1045 """
1046 1046 opts = pycompat.byteskwargs(opts)
1047 1047 revs = opts.get('rev')
1048 1048 if label:
1049 1049 label = label.strip()
1050 1050
1051 1051 if not opts.get('clean') and not label:
1052 1052 if revs:
1053 1053 raise error.Abort(_("no branch name specified for the revisions"))
1054 1054 ui.write("%s\n" % repo.dirstate.branch())
1055 1055 return
1056 1056
1057 1057 with repo.wlock():
1058 1058 if opts.get('clean'):
1059 1059 label = repo[None].p1().branch()
1060 1060 repo.dirstate.setbranch(label)
1061 1061 ui.status(_('reset working directory to branch %s\n') % label)
1062 1062 elif label:
1063 1063
1064 1064 scmutil.checknewlabel(repo, label, 'branch')
1065 1065 if revs:
1066 1066 return cmdutil.changebranch(ui, repo, revs, label)
1067 1067
1068 1068 if not opts.get('force') and label in repo.branchmap():
1069 1069 if label not in [p.branch() for p in repo[None].parents()]:
1070 1070 raise error.Abort(_('a branch of the same name already'
1071 1071 ' exists'),
1072 1072 # i18n: "it" refers to an existing branch
1073 1073 hint=_("use 'hg update' to switch to it"))
1074 1074
1075 1075 repo.dirstate.setbranch(label)
1076 1076 ui.status(_('marked working directory as branch %s\n') % label)
1077 1077
1078 1078 # find any open named branches aside from default
1079 1079 others = [n for n, h, t, c in repo.branchmap().iterbranches()
1080 1080 if n != "default" and not c]
1081 1081 if not others:
1082 1082 ui.status(_('(branches are permanent and global, '
1083 1083 'did you want a bookmark?)\n'))
1084 1084
1085 1085 @command('branches',
1086 1086 [('a', 'active', False,
1087 1087 _('show only branches that have unmerged heads (DEPRECATED)')),
1088 1088 ('c', 'closed', False, _('show normal and closed branches')),
1089 1089 ] + formatteropts,
1090 1090 _('[-c]'), cmdtype=readonly)
1091 1091 def branches(ui, repo, active=False, closed=False, **opts):
1092 1092 """list repository named branches
1093 1093
1094 1094 List the repository's named branches, indicating which ones are
1095 1095 inactive. If -c/--closed is specified, also list branches which have
1096 1096 been marked closed (see :hg:`commit --close-branch`).
1097 1097
1098 1098 Use the command :hg:`update` to switch to an existing branch.
1099 1099
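.. container:: verbose

Example:

- list all branches, including those that have been closed::

hg branches --closed
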
1100 1100 Returns 0.
1101 1101 """
1102 1102
1103 1103 opts = pycompat.byteskwargs(opts)
1104 1104 ui.pager('branches')
1105 1105 fm = ui.formatter('branches', opts)
1106 1106 hexfunc = fm.hexfunc
1107 1107
1108 1108 allheads = set(repo.heads())
1109 1109 branches = []
1110 1110 for tag, heads, tip, isclosed in repo.branchmap().iterbranches():
1111 1111 isactive = False
1112 1112 if not isclosed:
1113 1113 openheads = set(repo.branchmap().iteropen(heads))
1114 1114 isactive = bool(openheads & allheads)
1115 1115 branches.append((tag, repo[tip], isactive, not isclosed))
1116 1116 branches.sort(key=lambda i: (i[2], i[1].rev(), i[0], i[3]),
1117 1117 reverse=True)
1118 1118
1119 1119 for tag, ctx, isactive, isopen in branches:
1120 1120 if active and not isactive:
1121 1121 continue
1122 1122 if isactive:
1123 1123 label = 'branches.active'
1124 1124 notice = ''
1125 1125 elif not isopen:
1126 1126 if not closed:
1127 1127 continue
1128 1128 label = 'branches.closed'
1129 1129 notice = _(' (closed)')
1130 1130 else:
1131 1131 label = 'branches.inactive'
1132 1132 notice = _(' (inactive)')
1133 1133 current = (tag == repo.dirstate.branch())
1134 1134 if current:
1135 1135 label = 'branches.current'
1136 1136
1137 1137 fm.startitem()
1138 1138 fm.write('branch', '%s', tag, label=label)
1139 1139 rev = ctx.rev()
1140 1140 padsize = max(31 - len("%d" % rev) - encoding.colwidth(tag), 0)
1141 1141 fmt = ' ' * padsize + ' %d:%s'
1142 1142 fm.condwrite(not ui.quiet, 'rev node', fmt, rev, hexfunc(ctx.node()),
1143 1143 label='log.changeset changeset.%s' % ctx.phasestr())
1144 1144 fm.context(ctx=ctx)
1145 1145 fm.data(active=isactive, closed=not isopen, current=current)
1146 1146 if not ui.quiet:
1147 1147 fm.plain(notice)
1148 1148 fm.plain('\n')
1149 1149 fm.end()
1150 1150
1151 1151 @command('bundle',
1152 1152 [('f', 'force', None, _('run even when the destination is unrelated')),
1153 1153 ('r', 'rev', [], _('a changeset intended to be added to the destination'),
1154 1154 _('REV')),
1155 1155 ('b', 'branch', [], _('a specific branch you would like to bundle'),
1156 1156 _('BRANCH')),
1157 1157 ('', 'base', [],
1158 1158 _('a base changeset assumed to be available at the destination'),
1159 1159 _('REV')),
1160 1160 ('a', 'all', None, _('bundle all changesets in the repository')),
1161 1161 ('t', 'type', 'bzip2', _('bundle compression type to use'), _('TYPE')),
1162 1162 ] + remoteopts,
1163 1163 _('[-f] [-t BUNDLESPEC] [-a] [-r REV]... [--base REV]... FILE [DEST]'))
1164 1164 def bundle(ui, repo, fname, dest=None, **opts):
1165 1165 """create a bundle file
1166 1166
1167 1167 Generate a bundle file containing data to be transferred to another
1168 1168 repository.
1169 1169
1170 1170 To create a bundle containing all changesets, use -a/--all
1171 1171 (or --base null). Otherwise, hg assumes the destination will have
1172 1172 all the nodes you specify with --base parameters. If neither is
1173 1173 given, hg compares against the repository named by the DEST
1174 1174 argument (or default-push/default if no destination is specified)
1175 1175 and bundles the changesets missing from it.
1176 1176
1177 1177 You can change bundle format with the -t/--type option. See
1178 1178 :hg:`help bundlespec` for documentation on this format. By default,
1179 1179 the most appropriate format is used and compression defaults to
1180 1180 bzip2.
1181 1181
1182 1182 The bundle file can then be transferred using conventional means
1183 1183 and applied to another repository with the unbundle or pull
1184 1184 command. This is useful when direct push and pull are not
1185 1185 available or when exporting an entire repository is undesirable.
1186 1186
1187 1187 Applying bundles preserves all changeset contents including
1188 1188 permissions, copy/rename information, and revision history.
1189 1189
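.. container:: verbose

Examples (file and repository names are illustrative):

- bundle the changesets missing from a related local clone::

hg bundle changes.hg ../other-clone

- bundle the entire repository with gzip compression::

hg bundle -a -t gzip-v2 all.hg
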
1190 1190 Returns 0 on success, 1 if no changes found.
1191 1191 """
1192 1192 opts = pycompat.byteskwargs(opts)
1193 1193 revs = None
1194 1194 if 'rev' in opts:
1195 1195 revstrings = opts['rev']
1196 1196 revs = scmutil.revrange(repo, revstrings)
1197 1197 if revstrings and not revs:
1198 1198 raise error.Abort(_('no commits to bundle'))
1199 1199
1200 1200 bundletype = opts.get('type', 'bzip2').lower()
1201 1201 try:
1202 1202 bundlespec = exchange.parsebundlespec(repo, bundletype, strict=False)
1203 1203 except error.UnsupportedBundleSpecification as e:
1204 1204 raise error.Abort(pycompat.bytestr(e),
1205 1205 hint=_("see 'hg help bundlespec' for supported "
1206 1206 "values for --type"))
1207 cgversion = bundlespec.contentopts["cg.version"]
1208 1208
1209 1209 # Packed bundles are a pseudo bundle format for now.
1210 1210 if cgversion == 's1':
1211 1211 raise error.Abort(_('packed bundles cannot be produced by "hg bundle"'),
1212 1212 hint=_("use 'hg debugcreatestreamclonebundle'"))
1213 1213
1214 1214 if opts.get('all'):
1215 1215 if dest:
1216 1216 raise error.Abort(_("--all is incompatible with specifying "
1217 1217 "a destination"))
1218 1218 if opts.get('base'):
1219 1219 ui.warn(_("ignoring --base because --all was specified\n"))
1220 1220 base = ['null']
1221 1221 else:
1222 1222 base = scmutil.revrange(repo, opts.get('base'))
1223 1223 if cgversion not in changegroup.supportedoutgoingversions(repo):
1224 1224 raise error.Abort(_("repository does not support bundle version %s") %
1225 1225 cgversion)
1226 1226
1227 1227 if base:
1228 1228 if dest:
1229 1229 raise error.Abort(_("--base is incompatible with specifying "
1230 1230 "a destination"))
1231 1231 common = [repo.lookup(rev) for rev in base]
1232 1232 heads = [repo.lookup(r) for r in revs] if revs else None
1233 1233 outgoing = discovery.outgoing(repo, common, heads)
1234 1234 else:
1235 1235 dest = ui.expandpath(dest or 'default-push', dest or 'default')
1236 1236 dest, branches = hg.parseurl(dest, opts.get('branch'))
1237 1237 other = hg.peer(repo, opts, dest)
1238 1238 revs, checkout = hg.addbranchrevs(repo, repo, branches, revs)
1239 1239 heads = pycompat.maplist(repo.lookup, revs) if revs else revs
1240 1240 outgoing = discovery.findcommonoutgoing(repo, other,
1241 1241 onlyheads=heads,
1242 1242 force=opts.get('force'),
1243 1243 portable=True)
1244 1244
1245 1245 if not outgoing.missing:
1246 1246 scmutil.nochangesfound(ui, repo, not base and outgoing.excluded)
1247 1247 return 1
1248 1248
1249 1249 bcompression = bundlespec.compression
1250 1250 if cgversion == '01': #bundle1
1251 1251 if bcompression is None:
1252 1252 bcompression = 'UN'
1253 1253 bversion = 'HG10' + bcompression
1254 1254 bcompression = None
1255 1255 elif cgversion in ('02', '03'):
1256 1256 bversion = 'HG20'
1257 1257 else:
1258 1258 raise error.ProgrammingError(
1259 1259 'bundle: unexpected changegroup version %s' % cgversion)
1260 1260
1261 1261 # TODO compression options should be derived from bundlespec parsing.
1262 1262 # This is a temporary hack to allow adjusting bundle compression
1263 1263 # level without a) formalizing the bundlespec changes to declare it
1264 1264 # b) introducing a command flag.
1265 1265 compopts = {}
1266 1266 complevel = ui.configint('experimental', 'bundlecomplevel')
1267 1267 if complevel is not None:
1268 1268 compopts['level'] = complevel
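# e.g. a user could opt in from their hgrc (illustrative value):
#
#   [experimental]
#   bundlecomplevel = 9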
1269 1269
1270 # Allow overriding the bundling of obsmarkers and phases through
1271 # configuration while we don't have a bundle version that includes them
1272 1272 if repo.ui.configbool('experimental', 'evolution.bundle-obsmarker'):
1273 bundlespec.contentopts['obsolescence'] = True
1274 1274 if repo.ui.configbool('experimental', 'bundle-phases'):
1275 bundlespec.contentopts['phases'] = True
1276
1276 1277 bundle2.writenewbundle(ui, repo, 'bundle', fname, bversion, outgoing,
1278 bundlespec.contentopts, compression=bcompression,
1278 1279 compopts=compopts)
1279 1280
1280 1281 @command('cat',
1281 1282 [('o', 'output', '',
1282 1283 _('print output to file with formatted name'), _('FORMAT')),
1283 1284 ('r', 'rev', '', _('print the given revision'), _('REV')),
1284 1285 ('', 'decode', None, _('apply any matching decode filter')),
1285 1286 ] + walkopts + formatteropts,
1286 1287 _('[OPTION]... FILE...'),
1287 1288 inferrepo=True, cmdtype=readonly)
1288 1289 def cat(ui, repo, file1, *pats, **opts):
1289 1290 """output the current or given revision of files
1290 1291
1291 1292 Print the specified files as they were at the given revision. If
1292 1293 no revision is given, the parent of the working directory is used.
1293 1294
1294 1295 Output may be to a file, in which case the name of the file is
1295 1296 given using a template string. See :hg:`help templates`. In addition
1296 1297 to the common template keywords, the following formatting rules are
1297 1298 supported:
1298 1299
1299 1300 :``%%``: literal "%" character
1300 1301 :``%s``: basename of file being printed
1301 1302 :``%d``: dirname of file being printed, or '.' if in repository root
1302 1303 :``%p``: root-relative path name of file being printed
1303 1304 :``%H``: changeset hash (40 hexadecimal digits)
1304 1305 :``%R``: changeset revision number
1305 1306 :``%h``: short-form changeset hash (12 hexadecimal digits)
1306 1307 :``%r``: zero-padded changeset revision number
1307 1308 :``%b``: basename of the exporting repository
1308 1309 :``\\``: literal "\\" character
1309 1310
1310 1311 Returns 0 on success.
1311 1312 """
1312 1313 opts = pycompat.byteskwargs(opts)
1313 1314 rev = opts.get('rev')
1314 1315 if rev:
1315 1316 repo = scmutil.unhidehashlikerevs(repo, [rev], 'nowarn')
1316 1317 ctx = scmutil.revsingle(repo, rev)
1317 1318 m = scmutil.match(ctx, (file1,) + pats, opts)
1318 1319 fntemplate = opts.pop('output', '')
1319 1320 if cmdutil.isstdiofilename(fntemplate):
1320 1321 fntemplate = ''
1321 1322
1322 1323 if fntemplate:
1323 1324 fm = formatter.nullformatter(ui, 'cat')
1324 1325 else:
1325 1326 ui.pager('cat')
1326 1327 fm = ui.formatter('cat', opts)
1327 1328 with fm:
1328 1329 return cmdutil.cat(ui, repo, ctx, m, fm, fntemplate, '',
1329 1330 **pycompat.strkwargs(opts))
1330 1331
1331 1332 @command('^clone',
1332 1333 [('U', 'noupdate', None, _('the clone will include an empty working '
1333 1334 'directory (only a repository)')),
1334 1335 ('u', 'updaterev', '', _('revision, tag, or branch to check out'),
1335 1336 _('REV')),
1336 1337 ('r', 'rev', [], _('do not clone everything, but include this changeset'
1337 1338 ' and its ancestors'), _('REV')),
1338 1339 ('b', 'branch', [], _('do not clone everything, but include this branch\'s'
1339 1340 ' changesets and their ancestors'), _('BRANCH')),
1340 1341 ('', 'pull', None, _('use pull protocol to copy metadata')),
1341 1342 ('', 'uncompressed', None,
1342 1343 _('an alias to --stream (DEPRECATED)')),
1343 1344 ('', 'stream', None,
1344 1345 _('clone with minimal data processing')),
1345 1346 ] + remoteopts,
1346 1347 _('[OPTION]... SOURCE [DEST]'),
1347 1348 norepo=True)
1348 1349 def clone(ui, source, dest=None, **opts):
1349 1350 """make a copy of an existing repository
1350 1351
1351 1352 Create a copy of an existing repository in a new directory.
1352 1353
1353 1354 If no destination directory name is specified, it defaults to the
1354 1355 basename of the source.
1355 1356
1356 1357 The location of the source is added to the new repository's
1357 1358 ``.hg/hgrc`` file, as the default to be used for future pulls.
1358 1359
1359 1360 Only local paths and ``ssh://`` URLs are supported as
1360 1361 destinations. For ``ssh://`` destinations, no working directory or
1361 1362 ``.hg/hgrc`` will be created on the remote side.
1362 1363
1363 1364 If the source repository has a bookmark called '@' set, that
1364 1365 revision will be checked out in the new repository by default.
1365 1366
1366 1367 To check out a particular version, use -u/--update, or
1367 1368 -U/--noupdate to create a clone with no working directory.
1368 1369
1369 1370 To pull only a subset of changesets, specify one or more revisions
1370 1371 identifiers with -r/--rev or branches with -b/--branch. The
1371 1372 resulting clone will contain only the specified changesets and
1372 1373 their ancestors. These options (or 'clone src#rev dest') imply
1373 1374 --pull, even for local source repositories.
1374 1375
1375 1376 In normal clone mode, the remote normalizes repository data into a common
1376 1377 exchange format and the receiving end translates this data into its local
1377 1378 storage format. --stream activates a different clone mode that essentially
1378 1379 copies repository files from the remote with minimal data processing. This
1379 1380 significantly reduces the CPU cost of a clone both remotely and locally.
1380 1381 However, it often increases the transferred data size by 30-40%. This can
1381 1382 result in substantially faster clones where I/O throughput is plentiful,
1382 1383 especially for larger repositories. A side-effect of --stream clones is
1383 1384 that storage settings and requirements on the remote are applied locally:
1384 1385 a modern client may inherit legacy or inefficient storage used by the
1385 1386 remote or a legacy Mercurial client may not be able to clone from a
1386 1387 modern Mercurial remote.
1387 1388
1388 1389 .. note::
1389 1390
1390 1391 Specifying a tag will include the tagged changeset but not the
1391 1392 changeset containing the tag.
1392 1393
1393 1394 .. container:: verbose
1394 1395
1395 1396 For efficiency, hardlinks are used for cloning whenever the
1396 1397 source and destination are on the same filesystem (note this
1397 1398 applies only to the repository data, not to the working
1398 1399 directory). Some filesystems, such as AFS, implement hardlinking
1399 1400 incorrectly, but do not report errors. In these cases, use the
1400 1401 --pull option to avoid hardlinking.
1401 1402
1402 1403 Mercurial will update the working directory to the first applicable
1403 1404 revision from this list:
1404 1405
1405 1406 a) null if -U or the source repository has no changesets
1406 1407 b) if -u . and the source repository is local, the first parent of
1407 1408 the source repository's working directory
1408 1409 c) the changeset specified with -u (if a branch name, this means the
1409 1410 latest head of that branch)
1410 1411 d) the changeset specified with -r
1411 1412 e) the tipmost head specified with -b
1412 1413 f) the tipmost head specified with the url#branch source syntax
1413 1414 g) the revision marked with the '@' bookmark, if present
1414 1415 h) the tipmost head of the default branch
1415 1416 i) tip
1416 1417
1417 1418 When cloning from servers that support it, Mercurial may fetch
1418 1419 pre-generated data from a server-advertised URL. When this is done,
1419 1420 hooks operating on incoming changesets and changegroups may fire twice,
1420 1421 once for the bundle fetched from the URL and another for any additional
1421 1422 data not fetched from this URL. In addition, if an error occurs, the
1422 1423 repository may be rolled back to a partial clone. This behavior may
1423 1424 change in future releases. See :hg:`help -e clonebundles` for more.
1424 1425
1425 1426 Examples:
1426 1427
1427 1428 - clone a remote repository to a new directory named hg/::
1428 1429
1429 1430 hg clone https://www.mercurial-scm.org/repo/hg/
1430 1431
1431 1432 - create a lightweight local clone::
1432 1433
1433 1434 hg clone project/ project-feature/
1434 1435
1435 1436 - clone from an absolute path on an ssh server (note double-slash)::
1436 1437
1437 1438 hg clone ssh://user@server//home/projects/alpha/
1438 1439
1439 1440 - do a streaming clone while checking out a specified version::
1440 1441
1441 1442 hg clone --stream http://server/repo -u 1.5
1442 1443
1443 1444 - create a repository without changesets after a particular revision::
1444 1445
1445 1446 hg clone -r 04e544 experimental/ good/
1446 1447
1447 1448 - clone (and track) a particular named branch::
1448 1449
1449 1450 hg clone https://www.mercurial-scm.org/repo/hg/#stable
1450 1451
1451 1452 See :hg:`help urls` for details on specifying URLs.
1452 1453
1453 1454 Returns 0 on success.
1454 1455 """
1455 1456 opts = pycompat.byteskwargs(opts)
1456 1457 if opts.get('noupdate') and opts.get('updaterev'):
1457 1458 raise error.Abort(_("cannot specify both --noupdate and --updaterev"))
1458 1459
1459 1460 r = hg.clone(ui, opts, source, dest,
1460 1461 pull=opts.get('pull'),
1461 1462 stream=opts.get('stream') or opts.get('uncompressed'),
1462 1463 rev=opts.get('rev'),
1463 1464 update=opts.get('updaterev') or not opts.get('noupdate'),
1464 1465 branch=opts.get('branch'),
1465 1466 shareopts=opts.get('shareopts'))
1466 1467
1467 1468 return r is None
1468 1469
1469 1470 @command('^commit|ci',
1470 1471 [('A', 'addremove', None,
1471 1472 _('mark new/missing files as added/removed before committing')),
1472 1473 ('', 'close-branch', None,
1473 1474 _('mark a branch head as closed')),
1474 1475 ('', 'amend', None, _('amend the parent of the working directory')),
1475 1476 ('s', 'secret', None, _('use the secret phase for committing')),
1476 1477 ('e', 'edit', None, _('invoke editor on commit messages')),
1477 1478 ('i', 'interactive', None, _('use interactive mode')),
1478 1479 ] + walkopts + commitopts + commitopts2 + subrepoopts,
1479 1480 _('[OPTION]... [FILE]...'),
1480 1481 inferrepo=True)
1481 1482 def commit(ui, repo, *pats, **opts):
1482 1483 """commit the specified files or all outstanding changes
1483 1484
1484 1485 Commit changes to the given files into the repository. Unlike a
1485 1486 centralized SCM, this operation is a local operation. See
1486 1487 :hg:`push` for a way to actively distribute your changes.
1487 1488
1488 1489 If a list of files is omitted, all changes reported by :hg:`status`
1489 1490 will be committed.
1490 1491
1491 1492 If you are committing the result of a merge, do not provide any
1492 1493 filenames or -I/-X filters.
1493 1494
1494 1495 If no commit message is specified, Mercurial starts your
1495 1496 configured editor where you can enter a message. In case your
1496 1497 commit fails, you will find a backup of your message in
1497 1498 ``.hg/last-message.txt``.
1498 1499
1499 1500 The --close-branch flag can be used to mark the current branch
1500 1501 head closed. When all heads of a branch are closed, the branch
1501 1502 will be considered closed and no longer listed.
1502 1503
1503 1504 The --amend flag can be used to amend the parent of the
1504 1505 working directory with a new commit that contains the changes
1505 1506 in the parent in addition to those currently reported by :hg:`status`,
1506 1507 if there are any. The old commit is stored in a backup bundle in
1507 1508 ``.hg/strip-backup`` (see :hg:`help bundle` and :hg:`help unbundle`
1508 1509 on how to restore it).
1509 1510
1510 1511 Message, user and date are taken from the amended commit unless
1511 1512 specified. When a message isn't specified on the command line,
1512 1513 the editor will open with the message of the amended commit.
1513 1514
1514 1515 It is not possible to amend public changesets (see :hg:`help phases`)
1515 1516 or changesets that have children.
1516 1517
1517 1518 See :hg:`help dates` for a list of formats valid for -d/--date.
1518 1519
1519 1520 Returns 0 on success, 1 if nothing changed.
1520 1521
1521 1522 .. container:: verbose
1522 1523
1523 1524 Examples:
1524 1525
1525 1526 - commit all files ending in .py::
1526 1527
1527 1528 hg commit --include "set:**.py"
1528 1529
1529 1530 - commit all non-binary files::
1530 1531
1531 1532 hg commit --exclude "set:binary()"
1532 1533
1533 1534 - amend the current commit and set the date to now::
1534 1535
1535 1536 hg commit --amend --date now
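
- mark the current branch head as closed::

hg commit --close-branch -m "close this branch"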
1536 1537 """
1537 1538 wlock = lock = None
1538 1539 try:
1539 1540 wlock = repo.wlock()
1540 1541 lock = repo.lock()
1541 1542 return _docommit(ui, repo, *pats, **opts)
1542 1543 finally:
1543 1544 release(lock, wlock)
1544 1545
1545 1546 def _docommit(ui, repo, *pats, **opts):
1546 1547 if opts.get(r'interactive'):
1547 1548 opts.pop(r'interactive')
1548 1549 ret = cmdutil.dorecord(ui, repo, commit, None, False,
1549 1550 cmdutil.recordfilter, *pats,
1550 1551 **opts)
1551 1552 # ret can be 0 (no changes to record) or the value returned by
1552 1553 # commit(), 1 if nothing changed or None on success.
1553 1554 return 1 if ret == 0 else ret
1554 1555
1555 1556 opts = pycompat.byteskwargs(opts)
1556 1557 if opts.get('subrepos'):
1557 1558 if opts.get('amend'):
1558 1559 raise error.Abort(_('cannot amend with --subrepos'))
1559 1560 # Let --subrepos on the command line override config setting.
1560 1561 ui.setconfig('ui', 'commitsubrepos', True, 'commit')
1561 1562
1562 1563 cmdutil.checkunfinished(repo, commit=True)
1563 1564
1564 1565 branch = repo[None].branch()
1565 1566 bheads = repo.branchheads(branch)
1566 1567
1567 1568 extra = {}
1568 1569 if opts.get('close_branch'):
1569 1570 extra['close'] = '1'
1570 1571
1571 1572 if not bheads:
1572 1573 raise error.Abort(_('can only close branch heads'))
1573 1574 elif opts.get('amend'):
1574 1575 if repo[None].parents()[0].p1().branch() != branch and \
1575 1576 repo[None].parents()[0].p2().branch() != branch:
1576 1577 raise error.Abort(_('can only close branch heads'))
1577 1578
1578 1579 if opts.get('amend'):
1579 1580 if ui.configbool('ui', 'commitsubrepos'):
1580 1581 raise error.Abort(_('cannot amend with ui.commitsubrepos enabled'))
1581 1582
1582 1583 old = repo['.']
1583 1584 rewriteutil.precheck(repo, [old.rev()], 'amend')
1584 1585
1585 1586 # Currently histedit gets confused if an amend happens while histedit
1586 1587 # is in progress. Since we have a checkunfinished command, we are
1587 1588 # temporarily honoring it.
1588 1589 #
1589 1590 # Note: eventually this guard will be removed. Please do not expect
1590 1591 # this behavior to remain.
1591 1592 if not obsolete.isenabled(repo, obsolete.createmarkersopt):
1592 1593 cmdutil.checkunfinished(repo)
1593 1594
1594 1595 node = cmdutil.amend(ui, repo, old, extra, pats, opts)
1595 1596 if node == old.node():
1596 1597 ui.status(_("nothing changed\n"))
1597 1598 return 1
1598 1599 else:
1599 1600 def commitfunc(ui, repo, message, match, opts):
1600 1601 overrides = {}
1601 1602 if opts.get('secret'):
1602 1603 overrides[('phases', 'new-commit')] = 'secret'
1603 1604
1604 1605 baseui = repo.baseui
1605 1606 with baseui.configoverride(overrides, 'commit'):
1606 1607 with ui.configoverride(overrides, 'commit'):
1607 1608 editform = cmdutil.mergeeditform(repo[None],
1608 1609 'commit.normal')
1609 1610 editor = cmdutil.getcommiteditor(
1610 1611 editform=editform, **pycompat.strkwargs(opts))
1611 1612 return repo.commit(message,
1612 1613 opts.get('user'),
1613 1614 opts.get('date'),
1614 1615 match,
1615 1616 editor=editor,
1616 1617 extra=extra)
1617 1618
1618 1619 node = cmdutil.commit(ui, repo, commitfunc, pats, opts)
1619 1620
1620 1621 if not node:
1621 1622 stat = cmdutil.postcommitstatus(repo, pats, opts)
1622 1623 if stat[3]:
1623 1624 ui.status(_("nothing changed (%d missing files, see "
1624 1625 "'hg status')\n") % len(stat[3]))
1625 1626 else:
1626 1627 ui.status(_("nothing changed\n"))
1627 1628 return 1
1628 1629
1629 1630 cmdutil.commitstatus(repo, node, branch, bheads, opts)
1630 1631
1631 1632 @command('config|showconfig|debugconfig',
1632 1633 [('u', 'untrusted', None, _('show untrusted configuration options')),
1633 1634 ('e', 'edit', None, _('edit user config')),
1634 1635 ('l', 'local', None, _('edit repository config')),
1635 1636 ('g', 'global', None, _('edit global config'))] + formatteropts,
1636 1637 _('[-u] [NAME]...'),
1637 1638 optionalrepo=True, cmdtype=readonly)
1638 1639 def config(ui, repo, *values, **opts):
1639 1640 """show combined config settings from all hgrc files
1640 1641
1641 1642 With no arguments, print names and values of all config items.
1642 1643
1643 1644 With one argument of the form section.name, print just the value
1644 1645 of that config item.
1645 1646
1646 1647 With multiple arguments, print names and values of all config
1647 1648 items with matching section names or section.names.
1648 1649
1649 1650 With --edit, start an editor on the user-level config file. With
1650 1651 --global, edit the system-wide config file. With --local, edit the
1651 1652 repository-level config file.
1652 1653
1653 1654 With --debug, the source (filename and line number) is printed
1654 1655 for each config item.
1655 1656
1656 1657 See :hg:`help config` for more information about config files.
1657 1658
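.. container:: verbose

Examples:

- show the value of a single item::

hg config ui.username

- show every item in the ``extensions`` section::

hg config extensions
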
1658 1659 Returns 0 on success, 1 if NAME does not exist.
1659 1660
1660 1661 """
1661 1662
1662 1663 opts = pycompat.byteskwargs(opts)
1663 1664 if opts.get('edit') or opts.get('local') or opts.get('global'):
1664 1665 if opts.get('local') and opts.get('global'):
1665 1666 raise error.Abort(_("can't use --local and --global together"))
1666 1667
1667 1668 if opts.get('local'):
1668 1669 if not repo:
1669 1670 raise error.Abort(_("can't use --local outside a repository"))
1670 1671 paths = [repo.vfs.join('hgrc')]
1671 1672 elif opts.get('global'):
1672 1673 paths = rcutil.systemrcpath()
1673 1674 else:
1674 1675 paths = rcutil.userrcpath()
1675 1676
1676 1677 for f in paths:
1677 1678 if os.path.exists(f):
1678 1679 break
1679 1680 else:
1680 1681 if opts.get('global'):
1681 1682 samplehgrc = uimod.samplehgrcs['global']
1682 1683 elif opts.get('local'):
1683 1684 samplehgrc = uimod.samplehgrcs['local']
1684 1685 else:
1685 1686 samplehgrc = uimod.samplehgrcs['user']
1686 1687
1687 1688 f = paths[0]
1688 1689 fp = open(f, "wb")
1689 1690 fp.write(util.tonativeeol(samplehgrc))
1690 1691 fp.close()
1691 1692
1692 1693 editor = ui.geteditor()
1693 1694 ui.system("%s \"%s\"" % (editor, f),
1694 1695 onerr=error.Abort, errprefix=_("edit failed"),
1695 1696 blockedtag='config_edit')
1696 1697 return
1697 1698 ui.pager('config')
1698 1699 fm = ui.formatter('config', opts)
1699 1700 for t, f in rcutil.rccomponents():
1700 1701 if t == 'path':
1701 1702 ui.debug('read config from: %s\n' % f)
1702 1703 elif t == 'items':
1703 1704 for section, name, value, source in f:
1704 1705 ui.debug('set config by: %s\n' % source)
1705 1706 else:
1706 1707 raise error.ProgrammingError('unknown rctype: %s' % t)
1707 1708 untrusted = bool(opts.get('untrusted'))
1708 1709
1709 1710 selsections = selentries = []
1710 1711 if values:
1711 1712 selsections = [v for v in values if '.' not in v]
1712 1713 selentries = [v for v in values if '.' in v]
1713 1714 uniquesel = (len(selentries) == 1 and not selsections)
1714 1715 selsections = set(selsections)
1715 1716 selentries = set(selentries)
1716 1717
1717 1718 matched = False
1718 1719 for section, name, value in ui.walkconfig(untrusted=untrusted):
1719 1720 source = ui.configsource(section, name, untrusted)
1720 1721 value = pycompat.bytestr(value)
1721 1722 if fm.isplain():
1722 1723 source = source or 'none'
1723 1724 value = value.replace('\n', '\\n')
1724 1725 entryname = section + '.' + name
1725 1726 if values and not (section in selsections or entryname in selentries):
1726 1727 continue
1727 1728 fm.startitem()
1728 1729 fm.condwrite(ui.debugflag, 'source', '%s: ', source)
1729 1730 if uniquesel:
1730 1731 fm.data(name=entryname)
1731 1732 fm.write('value', '%s\n', value)
1732 1733 else:
1733 1734 fm.write('name value', '%s=%s\n', entryname, value)
1734 1735 matched = True
1735 1736 fm.end()
1736 1737 if matched:
1737 1738 return 0
1738 1739 return 1
1739 1740
1740 1741 @command('copy|cp',
1741 1742 [('A', 'after', None, _('record a copy that has already occurred')),
1742 1743 ('f', 'force', None, _('forcibly copy over an existing managed file')),
1743 1744 ] + walkopts + dryrunopts,
1744 1745 _('[OPTION]... [SOURCE]... DEST'))
1745 1746 def copy(ui, repo, *pats, **opts):
1746 1747 """mark files as copied for the next commit
1747 1748
1748 1749 Mark dest as having copies of source files. If dest is a
1749 1750 directory, copies are put in that directory. If dest is a file,
1750 1751 the source must be a single file.
1751 1752
1752 1753 By default, this command copies the contents of files as they
1753 1754 exist in the working directory. If invoked with -A/--after, the
1754 1755 operation is recorded, but no copying is performed.
1755 1756
1756 1757 This command takes effect with the next commit. To undo a copy
1757 1758 before that, see :hg:`revert`.
1758 1759
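.. container:: verbose

Examples (the file names here are only illustrative):

- copy a file into a directory::

hg copy foo.c lib/

- record a copy that has already been made outside Mercurial::

hg copy --after foo.c bar.c
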
1759 1760 Returns 0 on success, 1 if errors are encountered.
1760 1761 """
1761 1762 opts = pycompat.byteskwargs(opts)
1762 1763 with repo.wlock(False):
1763 1764 return cmdutil.copy(ui, repo, pats, opts)
1764 1765
1765 1766 @command('debugcommands', [], _('[COMMAND]'), norepo=True)
1766 1767 def debugcommands(ui, cmd='', *args):
1767 1768 """list all available commands and options"""
1768 1769 for cmd, vals in sorted(table.iteritems()):
1769 1770 cmd = cmd.split('|')[0].strip('^')
1770 1771 opts = ', '.join([i[1] for i in vals[1]])
1771 1772 ui.write('%s: %s\n' % (cmd, opts))
1772 1773
1773 1774 @command('debugcomplete',
1774 1775 [('o', 'options', None, _('show the command options'))],
1775 1776 _('[-o] CMD'),
1776 1777 norepo=True)
1777 1778 def debugcomplete(ui, cmd='', **opts):
1778 1779 """returns the completion list associated with the given command"""
1779 1780
1780 1781 if opts.get(r'options'):
1781 1782 options = []
1782 1783 otables = [globalopts]
1783 1784 if cmd:
1784 1785 aliases, entry = cmdutil.findcmd(cmd, table, False)
1785 1786 otables.append(entry[1])
1786 1787 for t in otables:
1787 1788 for o in t:
1788 1789 if "(DEPRECATED)" in o[3]:
1789 1790 continue
1790 1791 if o[0]:
1791 1792 options.append('-%s' % o[0])
1792 1793 options.append('--%s' % o[1])
1793 1794 ui.write("%s\n" % "\n".join(options))
1794 1795 return
1795 1796
1796 1797 cmdlist, unused_allcmds = cmdutil.findpossible(cmd, table)
1797 1798 if ui.verbose:
1798 1799 cmdlist = [' '.join(c[0]) for c in cmdlist.values()]
1799 1800 ui.write("%s\n" % "\n".join(sorted(cmdlist)))
1800 1801
1801 1802 @command('^diff',
1802 1803 [('r', 'rev', [], _('revision'), _('REV')),
1803 1804 ('c', 'change', '', _('change made by revision'), _('REV'))
1804 1805 ] + diffopts + diffopts2 + walkopts + subrepoopts,
1805 1806 _('[OPTION]... ([-c REV] | [-r REV1 [-r REV2]]) [FILE]...'),
1806 1807 inferrepo=True, cmdtype=readonly)
1807 1808 def diff(ui, repo, *pats, **opts):
1808 1809 """diff repository (or selected files)
1809 1810
1810 1811 Show differences between revisions for the specified files.
1811 1812
1812 1813 Differences between files are shown using the unified diff format.
1813 1814
1814 1815 .. note::
1815 1816
1816 1817 :hg:`diff` may generate unexpected results for merges, as it will
1817 1818 default to comparing against the working directory's first
1818 1819 parent changeset if no revisions are specified.
1819 1820
1820 1821 When two revision arguments are given, changes are shown
1821 1822 between those revisions. If only one revision is specified, that
1822 1823 revision is compared to the working directory, and, when no
1823 1824 revisions are specified, the working directory files are compared
1824 1825 to its first parent.
1825 1826
1826 1827 Alternatively you can specify -c/--change with a revision to see
1827 1828 the changes in that changeset relative to its first parent.
1828 1829
1829 1830 Without the -a/--text option, diff will avoid generating diffs of
1830 1831 files it detects as binary. With -a, diff will generate a diff
1831 1832 anyway, probably with undesirable results.
1832 1833
1833 1834 Use the -g/--git option to generate diffs in the git extended diff
1834 1835 format. For more information, read :hg:`help diffs`.
1835 1836
1836 1837 .. container:: verbose
1837 1838
1838 1839 Examples:
1839 1840
1840 1841 - compare a file in the current working directory to its parent::
1841 1842
1842 1843 hg diff foo.c
1843 1844
1844 1845 - compare two historical versions of a directory, with rename info::
1845 1846
1846 1847 hg diff --git -r 1.0:1.2 lib/
1847 1848
1848 1849 - get change stats relative to the last change on some date::
1849 1850
1850 1851 hg diff --stat -r "date('may 2')"
1851 1852
1852 1853 - diff all newly-added files that contain a keyword::
1853 1854
1854 1855 hg diff "set:added() and grep(GNU)"
1855 1856
1856 1857 - compare a revision and its parents::
1857 1858
1858 1859 hg diff -c 9353 # compare against first parent
1859 1860 hg diff -r 9353^:9353 # same using revset syntax
1860 1861 hg diff -r 9353^2:9353 # compare against the second parent
1861 1862
1862 1863 Returns 0 on success.
1863 1864 """
1864 1865
1865 1866 opts = pycompat.byteskwargs(opts)
1866 1867 revs = opts.get('rev')
1867 1868 change = opts.get('change')
1868 1869 stat = opts.get('stat')
1869 1870 reverse = opts.get('reverse')
1870 1871
1871 1872 if revs and change:
1872 1873 msg = _('cannot specify --rev and --change at the same time')
1873 1874 raise error.Abort(msg)
1874 1875 elif change:
1875 1876 repo = scmutil.unhidehashlikerevs(repo, [change], 'nowarn')
1876 1877 node2 = scmutil.revsingle(repo, change, None).node()
1877 1878 node1 = repo[node2].p1().node()
1878 1879 else:
1879 1880 repo = scmutil.unhidehashlikerevs(repo, revs, 'nowarn')
1880 1881 node1, node2 = scmutil.revpair(repo, revs)
1881 1882
1882 1883 if reverse:
1883 1884 node1, node2 = node2, node1
1884 1885
1885 1886 diffopts = patch.diffallopts(ui, opts)
1886 1887 m = scmutil.match(repo[node2], pats, opts)
1887 1888 ui.pager('diff')
1888 1889 logcmdutil.diffordiffstat(ui, repo, diffopts, node1, node2, m, stat=stat,
1889 1890 listsubrepos=opts.get('subrepos'),
1890 1891 root=opts.get('root'))
1891 1892
1892 1893 @command('^export',
1893 1894 [('o', 'output', '',
1894 1895 _('print output to file with formatted name'), _('FORMAT')),
1895 1896 ('', 'switch-parent', None, _('diff against the second parent')),
1896 1897 ('r', 'rev', [], _('revisions to export'), _('REV')),
1897 1898 ] + diffopts,
1898 1899 _('[OPTION]... [-o OUTFILESPEC] [-r] [REV]...'), cmdtype=readonly)
1899 1900 def export(ui, repo, *changesets, **opts):
1900 1901 """dump the header and diffs for one or more changesets
1901 1902
1902 1903 Print the changeset header and diffs for one or more revisions.
1903 1904 If no revision is given, the parent of the working directory is used.
1904 1905
1905 1906 The information shown in the changeset header is: author, date,
1906 1907 branch name (if non-default), changeset hash, parent(s) and commit
1907 1908 comment.
1908 1909
1909 1910 .. note::
1910 1911
1911 1912 :hg:`export` may generate unexpected diff output for merge
1912 1913 changesets, as it will compare the merge changeset against its
1913 1914 first parent only.
1914 1915
1915 1916 Output may be to a file, in which case the name of the file is
1916 1917 given using a template string. See :hg:`help templates`. In addition
1917 1918 to the common template keywords, the following formatting rules are
1918 1919 supported:
1919 1920
1920 1921 :``%%``: literal "%" character
1921 1922 :``%H``: changeset hash (40 hexadecimal digits)
1922 1923 :``%N``: number of patches being generated
1923 1924 :``%R``: changeset revision number
1924 1925 :``%b``: basename of the exporting repository
1925 1926 :``%h``: short-form changeset hash (12 hexadecimal digits)
1926 1927 :``%m``: first line of the commit message (only alphanumeric characters)
1927 1928 :``%n``: zero-padded sequence number, starting at 1
1928 1929 :``%r``: zero-padded changeset revision number
1929 1930 :``\\``: literal "\\" character
1930 1931
1931 1932 Without the -a/--text option, export will avoid generating diffs
1932 1933 of files it detects as binary. With -a, export will generate a
1933 1934 diff anyway, probably with undesirable results.
1934 1935
1935 1936 Use the -g/--git option to generate diffs in the git extended diff
1936 1937 format. See :hg:`help diffs` for more information.
1937 1938
1938 1939 With the --switch-parent option, the diff will be against the
1939 1940 second parent. It can be useful to review a merge.
1940 1941
1941 1942 .. container:: verbose
1942 1943
1943 1944 Examples:
1944 1945
1945 1946 - use export and import to transplant a bugfix to the current
1946 1947 branch::
1947 1948
1948 1949 hg export -r 9353 | hg import -
1949 1950
1950 1951 - export all the changesets between two revisions to a file with
1951 1952 rename information::
1952 1953
1953 1954 hg export --git -r 123:150 > changes.txt
1954 1955
1955 1956 - split outgoing changes into a series of patches with
1956 1957 descriptive names::
1957 1958
1958 1959 hg export -r "outgoing()" -o "%n-%m.patch"
1959 1960
1960 1961 Returns 0 on success.
1961 1962 """
1962 1963 opts = pycompat.byteskwargs(opts)
1963 1964 changesets += tuple(opts.get('rev', []))
1964 1965 if not changesets:
1965 1966 changesets = ['.']
1966 1967 repo = scmutil.unhidehashlikerevs(repo, changesets, 'nowarn')
1967 1968 revs = scmutil.revrange(repo, changesets)
1968 1969 if not revs:
1969 1970 raise error.Abort(_("export requires at least one changeset"))
1970 1971 if len(revs) > 1:
1971 1972 ui.note(_('exporting patches:\n'))
1972 1973 else:
1973 1974 ui.note(_('exporting patch:\n'))
1974 1975 ui.pager('export')
1975 1976 cmdutil.export(repo, revs, fntemplate=opts.get('output'),
1976 1977 switch_parent=opts.get('switch_parent'),
1977 1978 opts=patch.diffallopts(ui, opts))
1978 1979
1979 1980 @command('files',
1980 1981 [('r', 'rev', '', _('search the repository as it is in REV'), _('REV')),
1981 1982 ('0', 'print0', None, _('end filenames with NUL, for use with xargs')),
1982 1983 ] + walkopts + formatteropts + subrepoopts,
1983 1984 _('[OPTION]... [FILE]...'), cmdtype=readonly)
1984 1985 def files(ui, repo, *pats, **opts):
1985 1986 """list tracked files
1986 1987
1987 1988 Print files under Mercurial control in the working directory or
1988 1989 specified revision for given files (excluding removed files).
1989 1990 Files can be specified as filenames or filesets.
1990 1991
1991 1992 If no files are given to match, this command prints the names
1992 1993 of all files under Mercurial control.
1993 1994
1994 1995 .. container:: verbose
1995 1996
1996 1997 Examples:
1997 1998
1998 1999 - list all files under the current directory::
1999 2000
2000 2001 hg files .
2001 2002
2002 2003 - show sizes and flags for the current revision::
2003 2004
2004 2005 hg files -vr .
2005 2006
2006 2007 - list all files named README::
2007 2008
2008 2009 hg files -I "**/README"
2009 2010
2010 2011 - list all binary files::
2011 2012
2012 2013 hg files "set:binary()"
2013 2014
2014 2015 - find files containing a regular expression::
2015 2016
2016 2017 hg files "set:grep('bob')"
2017 2018
2018 2019 - search tracked file contents with xargs and grep::
2019 2020
2020 2021 hg files -0 | xargs -0 grep foo
2021 2022
2022 2023 See :hg:`help patterns` and :hg:`help filesets` for more information
2023 2024 on specifying file patterns.
2024 2025
2025 2026 Returns 0 if a match is found, 1 otherwise.
2026 2027
2027 2028 """
2028 2029
2029 2030 opts = pycompat.byteskwargs(opts)
2030 2031 rev = opts.get('rev')
2031 2032 if rev:
2032 2033 repo = scmutil.unhidehashlikerevs(repo, [rev], 'nowarn')
2033 2034 ctx = scmutil.revsingle(repo, rev, None)
2034 2035
2035 2036 end = '\n'
2036 2037 if opts.get('print0'):
2037 2038 end = '\0'
2038 2039 fmt = '%s' + end
2039 2040
2040 2041 m = scmutil.match(ctx, pats, opts)
2041 2042 ui.pager('files')
2042 2043 with ui.formatter('files', opts) as fm:
2043 2044 return cmdutil.files(ui, ctx, m, fm, fmt, opts.get('subrepos'))
2044 2045
2045 2046 @command(
2046 2047 '^forget',
2047 2048 walkopts + dryrunopts,
2048 2049 _('[OPTION]... FILE...'), inferrepo=True)
2049 2050 def forget(ui, repo, *pats, **opts):
2050 2051 """forget the specified files on the next commit
2051 2052
2052 2053 Mark the specified files so they will no longer be tracked
2053 2054 after the next commit.
2054 2055
2055 2056 This only removes files from the current branch, not from the
2056 2057 entire project history, and it does not delete them from the
2057 2058 working directory.
2058 2059
2059 2060 To delete the file from the working directory, see :hg:`remove`.
2060 2061
2061 2062 To undo a forget before the next commit, see :hg:`add`.
2062 2063
2063 2064 .. container:: verbose
2064 2065
2065 2066 Examples:
2066 2067
2067 2068 - forget newly-added binary files::
2068 2069
2069 2070 hg forget "set:added() and binary()"
2070 2071
2071 2072 - forget files that would be excluded by .hgignore::
2072 2073
2073 2074 hg forget "set:hgignore()"
2074 2075
2075 2076 Returns 0 on success.
2076 2077 """
2077 2078
2078 2079 opts = pycompat.byteskwargs(opts)
2079 2080 if not pats:
2080 2081 raise error.Abort(_('no files specified'))
2081 2082
2082 2083 m = scmutil.match(repo[None], pats, opts)
2083 2084 dryrun = opts.get(r'dry_run')
2084 2085 rejected = cmdutil.forget(ui, repo, m, prefix="",
2085 2086 explicitonly=False, dryrun=dryrun)[0]
2086 2087 return rejected and 1 or 0
2087 2088
2088 2089 @command(
2089 2090 'graft',
2090 2091 [('r', 'rev', [], _('revisions to graft'), _('REV')),
2091 2092 ('c', 'continue', False, _('resume interrupted graft')),
2092 2093 ('e', 'edit', False, _('invoke editor on commit messages')),
2093 2094 ('', 'log', None, _('append graft info to log message')),
2094 2095 ('f', 'force', False, _('force graft')),
2095 2096 ('D', 'currentdate', False,
2096 2097 _('record the current date as commit date')),
2097 2098 ('U', 'currentuser', False,
2098 2099 _('record the current user as committer'))]
2099 2100 + commitopts2 + mergetoolopts + dryrunopts,
2100 2101 _('[OPTION]... [-r REV]... REV...'))
2101 2102 def graft(ui, repo, *revs, **opts):
2102 2103 '''copy changes from other branches onto the current branch
2103 2104
2104 2105 This command uses Mercurial's merge logic to copy individual
2105 2106 changes from other branches without merging branches in the
2106 2107 history graph. This is sometimes known as 'backporting' or
2107 2108 'cherry-picking'. By default, graft will copy user, date, and
2108 2109 description from the source changesets.
2109 2110
2110 2111 Changesets that are ancestors of the current revision, that have
2111 2112 already been grafted, or that are merges will be skipped.
2112 2113
2113 2114 If --log is specified, log messages will have a comment appended
2114 2115 of the form::
2115 2116
2116 2117 (grafted from CHANGESETHASH)
2117 2118
2118 2119 If --force is specified, revisions will be grafted even if they
2119 2120 are already ancestors of, or have been grafted to, the destination.
2120 2121 This is useful when the revisions have since been backed out.
2121 2122
2122 2123 If a graft merge results in conflicts, the graft process is
2123 2124 interrupted so that the current merge can be manually resolved.
2124 2125 Once all conflicts are addressed, the graft process can be
2125 2126 continued with the -c/--continue option.
2126 2127
2127 2128 .. note::
2128 2129
2129 2130 The -c/--continue option does not reapply earlier options, except
2130 2131 for --force.
2131 2132
2132 2133 .. container:: verbose
2133 2134
2134 2135 Examples:
2135 2136
2136 2137 - copy a single change to the stable branch and edit its description::
2137 2138
2138 2139 hg update stable
2139 2140 hg graft --edit 9393
2140 2141
2141 2142 - graft a range of changesets with one exception, updating dates::
2142 2143
2143 2144 hg graft -D "2085::2093 and not 2091"
2144 2145
2145 2146 - continue a graft after resolving conflicts::
2146 2147
2147 2148 hg graft -c
2148 2149
2149 2150 - show the source of a grafted changeset::
2150 2151
2151 2152 hg log --debug -r .
2152 2153
2153 2154 - show revisions sorted by date::
2154 2155
2155 2156 hg log -r "sort(all(), date)"
2156 2157
2157 2158 See :hg:`help revisions` for more about specifying revisions.
2158 2159
2159 2160 Returns 0 on successful completion.
2160 2161 '''
2161 2162 with repo.wlock():
2162 2163 return _dograft(ui, repo, *revs, **opts)
2163 2164
2164 2165 def _dograft(ui, repo, *revs, **opts):
2165 2166 opts = pycompat.byteskwargs(opts)
2166 2167 if revs and opts.get('rev'):
2167 2168 ui.warn(_('warning: inconsistent use of --rev might give unexpected '
2168 2169 'revision ordering!\n'))
2169 2170
2170 2171 revs = list(revs)
2171 2172 revs.extend(opts.get('rev'))
2172 2173
2173 2174 if not opts.get('user') and opts.get('currentuser'):
2174 2175 opts['user'] = ui.username()
2175 2176 if not opts.get('date') and opts.get('currentdate'):
2176 2177 opts['date'] = "%d %d" % dateutil.makedate()
2177 2178
2178 2179 editor = cmdutil.getcommiteditor(editform='graft',
2179 2180 **pycompat.strkwargs(opts))
2180 2181
2181 2182 cont = False
2182 2183 if opts.get('continue'):
2183 2184 cont = True
2184 2185 if revs:
2185 2186 raise error.Abort(_("can't specify --continue and revisions"))
2186 2187 # read in unfinished revisions
2187 2188 try:
2188 2189 nodes = repo.vfs.read('graftstate').splitlines()
2189 2190 revs = [repo[node].rev() for node in nodes]
2190 2191 except IOError as inst:
2191 2192 if inst.errno != errno.ENOENT:
2192 2193 raise
2193 2194 cmdutil.wrongtooltocontinue(repo, _('graft'))
2194 2195 else:
2195 2196 if not revs:
2196 2197 raise error.Abort(_('no revisions specified'))
2197 2198 cmdutil.checkunfinished(repo)
2198 2199 cmdutil.bailifchanged(repo)
2199 2200 revs = scmutil.revrange(repo, revs)
2200 2201
2201 2202 skipped = set()
2202 2203 # check for merges
2203 2204 for rev in repo.revs('%ld and merge()', revs):
2204 2205 ui.warn(_('skipping ungraftable merge revision %d\n') % rev)
2205 2206 skipped.add(rev)
2206 2207 revs = [r for r in revs if r not in skipped]
2207 2208 if not revs:
2208 2209 return -1
2209 2210
2210 2211 # Don't check in the --continue case, in effect retaining --force across
2211 2212 # --continues. That's because without --force, any revisions we decided to
2212 2213 # skip would have been filtered out here, so they wouldn't have made their
2213 2214 # way to the graftstate. With --force, any revisions we would have otherwise
2214 2215 # skipped would not have been filtered out, and if they hadn't been applied
2215 2216 # already, they'd have been in the graftstate.
2216 2217 if not (cont or opts.get('force')):
2217 2218 # check for ancestors of dest branch
2218 2219 crev = repo['.'].rev()
2219 2220 ancestors = repo.changelog.ancestors([crev], inclusive=True)
2220 2221 # XXX make this lazy in the future
2221 2222 # don't mutate while iterating, create a copy
2222 2223 for rev in list(revs):
2223 2224 if rev in ancestors:
2224 2225 ui.warn(_('skipping ancestor revision %d:%s\n') %
2225 2226 (rev, repo[rev]))
2226 2227 # XXX remove on list is slow
2227 2228 revs.remove(rev)
2228 2229 if not revs:
2229 2230 return -1
2230 2231
2231 2232 # analyze revs for earlier grafts
2232 2233 ids = {}
2233 2234 for ctx in repo.set("%ld", revs):
2234 2235 ids[ctx.hex()] = ctx.rev()
2235 2236 n = ctx.extra().get('source')
2236 2237 if n:
2237 2238 ids[n] = ctx.rev()
2238 2239
2239 2240 # check ancestors for earlier grafts
2240 2241 ui.debug('scanning for duplicate grafts\n')
2241 2242
2242 2243 # The only changesets we can be sure don't contain grafts of any
2243 2244 # of the revs are the ones that are common ancestors of *all* revs:
2244 2245 for rev in repo.revs('only(%d,ancestor(%ld))', crev, revs):
2245 2246 ctx = repo[rev]
2246 2247 n = ctx.extra().get('source')
2247 2248 if n in ids:
2248 2249 try:
2249 2250 r = repo[n].rev()
2250 2251 except error.RepoLookupError:
2251 2252 r = None
2252 2253 if r in revs:
2253 2254 ui.warn(_('skipping revision %d:%s '
2254 2255 '(already grafted to %d:%s)\n')
2255 2256 % (r, repo[r], rev, ctx))
2256 2257 revs.remove(r)
2257 2258 elif ids[n] in revs:
2258 2259 if r is None:
2259 2260 ui.warn(_('skipping already grafted revision %d:%s '
2260 2261 '(%d:%s also has unknown origin %s)\n')
2261 2262 % (ids[n], repo[ids[n]], rev, ctx, n[:12]))
2262 2263 else:
2263 2264 ui.warn(_('skipping already grafted revision %d:%s '
2264 2265 '(%d:%s also has origin %d:%s)\n')
2265 2266 % (ids[n], repo[ids[n]], rev, ctx, r, n[:12]))
2266 2267 revs.remove(ids[n])
2267 2268 elif ctx.hex() in ids:
2268 2269 r = ids[ctx.hex()]
2269 2270 ui.warn(_('skipping already grafted revision %d:%s '
2270 2271 '(was grafted from %d:%s)\n') %
2271 2272 (r, repo[r], rev, ctx))
2272 2273 revs.remove(r)
2273 2274 if not revs:
2274 2275 return -1
2275 2276
2276 2277 for pos, ctx in enumerate(repo.set("%ld", revs)):
2277 2278 desc = '%d:%s "%s"' % (ctx.rev(), ctx,
2278 2279 ctx.description().split('\n', 1)[0])
2279 2280 names = repo.nodetags(ctx.node()) + repo.nodebookmarks(ctx.node())
2280 2281 if names:
2281 2282 desc += ' (%s)' % ' '.join(names)
2282 2283 ui.status(_('grafting %s\n') % desc)
2283 2284 if opts.get('dry_run'):
2284 2285 continue
2285 2286
2286 2287 source = ctx.extra().get('source')
2287 2288 extra = {}
2288 2289 if source:
2289 2290 extra['source'] = source
2290 2291 extra['intermediate-source'] = ctx.hex()
2291 2292 else:
2292 2293 extra['source'] = ctx.hex()
2293 2294 user = ctx.user()
2294 2295 if opts.get('user'):
2295 2296 user = opts['user']
2296 2297 date = ctx.date()
2297 2298 if opts.get('date'):
2298 2299 date = opts['date']
2299 2300 message = ctx.description()
2300 2301 if opts.get('log'):
2301 2302 message += '\n(grafted from %s)' % ctx.hex()
2302 2303
2303 2304 # we don't merge the first commit when continuing
2304 2305 if not cont:
2305 2306 # perform the graft merge with p1(rev) as 'ancestor'
2306 2307 try:
2307 2308 # ui.forcemerge is an internal variable, do not document
2308 2309 repo.ui.setconfig('ui', 'forcemerge', opts.get('tool', ''),
2309 2310 'graft')
2310 2311 stats = mergemod.graft(repo, ctx, ctx.p1(),
2311 2312 ['local', 'graft'])
2312 2313 finally:
2313 2314 repo.ui.setconfig('ui', 'forcemerge', '', 'graft')
2314 2315 # report any conflicts
2315 2316 if stats.unresolvedcount > 0:
2316 2317 # write out state for --continue
2317 2318 nodelines = [repo[rev].hex() + "\n" for rev in revs[pos:]]
2318 2319 repo.vfs.write('graftstate', ''.join(nodelines))
2319 2320 extra = ''
2320 2321 if opts.get('user'):
2321 2322 extra += ' --user %s' % procutil.shellquote(opts['user'])
2322 2323 if opts.get('date'):
2323 2324 extra += ' --date %s' % procutil.shellquote(opts['date'])
2324 2325 if opts.get('log'):
2325 2326 extra += ' --log'
2326 2327 hint = _("use 'hg resolve' and 'hg graft --continue%s'") % extra
2327 2328 raise error.Abort(
2328 2329 _("unresolved conflicts, can't continue"),
2329 2330 hint=hint)
2330 2331 else:
2331 2332 cont = False
2332 2333
2333 2334 # commit
2334 2335 node = repo.commit(text=message, user=user,
2335 2336 date=date, extra=extra, editor=editor)
2336 2337 if node is None:
2337 2338 ui.warn(
2338 2339 _('note: graft of %d:%s created no changes to commit\n') %
2339 2340 (ctx.rev(), ctx))
2340 2341
2341 2342 # remove state when we complete successfully
2342 2343 if not opts.get('dry_run'):
2343 2344 repo.vfs.unlinkpath('graftstate', ignoremissing=True)
2344 2345
2345 2346 return 0
2346 2347
2347 2348 @command('grep',
2348 2349 [('0', 'print0', None, _('end fields with NUL')),
2349 2350 ('', 'all', None, _('print all revisions that match')),
2350 2351 ('a', 'text', None, _('treat all files as text')),
2351 2352 ('f', 'follow', None,
2352 2353 _('follow changeset history,'
2353 2354 ' or file history across copies and renames')),
2354 2355 ('i', 'ignore-case', None, _('ignore case when matching')),
2355 2356 ('l', 'files-with-matches', None,
2356 2357 _('print only filenames and revisions that match')),
2357 2358 ('n', 'line-number', None, _('print matching line numbers')),
2358 2359 ('r', 'rev', [],
2359 2360 _('only search files changed within revision range'), _('REV')),
2360 2361 ('u', 'user', None, _('list the author (long with -v)')),
2361 2362 ('d', 'date', None, _('list the date (short with -q)')),
2362 2363 ] + formatteropts + walkopts,
2363 2364 _('[OPTION]... PATTERN [FILE]...'),
2364 2365 inferrepo=True, cmdtype=readonly)
2365 2366 def grep(ui, repo, pattern, *pats, **opts):
2366 2367 """search revision history for a pattern in specified files
2367 2368
2368 2369 Search revision history for a regular expression in the specified
2369 2370 files or the entire project.
2370 2371
2371 2372 By default, grep prints the most recent revision number for each
2372 2373 file in which it finds a match. To get it to print every revision
2373 2374 that contains a change in match status ("-" for a match that becomes
2374 2375 a non-match, or "+" for a non-match that becomes a match), use the
2375 2376 --all flag.
2376 2377
2377 2378 PATTERN can be any Python (roughly Perl-compatible) regular
2378 2379 expression.
2379 2380
2380 2381 If no FILEs are specified (and -f/--follow isn't set), all files in
2381 2382 the repository are searched, including those that don't exist in the
2382 2383 current branch or have been deleted in a prior changeset.
2383 2384
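.. container:: verbose

Examples (the pattern is only illustrative):

- search the repository files for a pattern::

hg grep "TODO"

- print every revision where a line's match status changed, with
matching line numbers::

hg grep --all -n "TODO"
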
2384 2385 Returns 0 if a match is found, 1 otherwise.
2385 2386 """
2386 2387 opts = pycompat.byteskwargs(opts)
2387 2388 reflags = re.M
2388 2389 if opts.get('ignore_case'):
2389 2390 reflags |= re.I
2390 2391 try:
2391 2392 regexp = util.re.compile(pattern, reflags)
2392 2393 except re.error as inst:
2393 2394 ui.warn(_("grep: invalid match pattern: %s\n") % pycompat.bytestr(inst))
2394 2395 return 1
2395 2396 sep, eol = ':', '\n'
2396 2397 if opts.get('print0'):
2397 2398 sep = eol = '\0'
2398 2399
2399 2400 getfile = util.lrucachefunc(repo.file)
2400 2401
2401 2402 def matchlines(body):
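"""Yield (linenum, colstart, colend, line) for each match in body"""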
2402 2403 begin = 0
2403 2404 linenum = 0
2404 2405 while begin < len(body):
2405 2406 match = regexp.search(body, begin)
2406 2407 if not match:
2407 2408 break
2408 2409 mstart, mend = match.span()
2409 2410 linenum += body.count('\n', begin, mstart) + 1
2410 2411 lstart = body.rfind('\n', begin, mstart) + 1 or begin
2411 2412 begin = body.find('\n', mend) + 1 or len(body) + 1
2412 2413 lend = begin - 1
2413 2414 yield linenum, mstart - lstart, mend - lstart, body[lstart:lend]
2414 2415
2415 2416 class linestate(object):
2416 2417 def __init__(self, line, linenum, colstart, colend):
2417 2418 self.line = line
2418 2419 self.linenum = linenum
2419 2420 self.colstart = colstart
2420 2421 self.colend = colend
2421 2422
2422 2423 def __hash__(self):
2423 2424 return hash((self.linenum, self.line))
2424 2425
2425 2426 def __eq__(self, other):
2426 2427 return self.line == other.line
2427 2428
2428 2429 def findpos(self):
2429 2430 """Iterate all (start, end) indices of matches"""
2430 2431 yield self.colstart, self.colend
2431 2432 p = self.colend
2432 2433 while p < len(self.line):
2433 2434 m = regexp.search(self.line, p)
2434 2435 if not m:
2435 2436 break
2436 2437 yield m.span()
2437 2438 p = m.end()
2438 2439
2439 2440 matches = {}
2440 2441 copies = {}
2441 2442 def grepbody(fn, rev, body):
2442 2443 matches[rev].setdefault(fn, [])
2443 2444 m = matches[rev][fn]
2444 2445 for lnum, cstart, cend, line in matchlines(body):
2445 2446 s = linestate(line, lnum, cstart, cend)
2446 2447 m.append(s)
2447 2448
2448 2449 def difflinestates(a, b):
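"""Yield ('-', state) or ('+', state) for each line whose match
status changes between parent states a and child states b"""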
2449 2450 sm = difflib.SequenceMatcher(None, a, b)
2450 2451 for tag, alo, ahi, blo, bhi in sm.get_opcodes():
2451 2452 if tag == 'insert':
2452 2453 for i in xrange(blo, bhi):
2453 2454 yield ('+', b[i])
2454 2455 elif tag == 'delete':
2455 2456 for i in xrange(alo, ahi):
2456 2457 yield ('-', a[i])
2457 2458 elif tag == 'replace':
2458 2459 for i in xrange(alo, ahi):
2459 2460 yield ('-', a[i])
2460 2461 for i in xrange(blo, bhi):
2461 2462 yield ('+', b[i])
2462 2463
2463 2464 def display(fm, fn, ctx, pstates, states):
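"""Write the matches for one file in one revision through the
formatter, comparing parent and child states when --all is set;
return True if anything was displayed"""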
2464 2465 rev = ctx.rev()
2465 2466 if fm.isplain():
2466 2467 formatuser = ui.shortuser
2467 2468 else:
2468 2469 formatuser = str
2469 2470 if ui.quiet:
2470 2471 datefmt = '%Y-%m-%d'
2471 2472 else:
2472 2473 datefmt = '%a %b %d %H:%M:%S %Y %1%2'
2473 2474 found = False
2474 2475 @util.cachefunc
2475 2476 def binary():
2476 2477 flog = getfile(fn)
2477 2478 return stringutil.binary(flog.read(ctx.filenode(fn)))
2478 2479
2479 2480 fieldnamemap = {'filename': 'file', 'linenumber': 'line_number'}
2480 2481 if opts.get('all'):
2481 2482 iter = difflinestates(pstates, states)
2482 2483 else:
2483 2484 iter = [('', l) for l in states]
2484 2485 for change, l in iter:
2485 2486 fm.startitem()
2486 2487 fm.data(node=fm.hexfunc(ctx.node()))
2487 2488 cols = [
2488 2489 ('filename', fn, True),
2489 2490 ('rev', rev, True),
2490 2491 ('linenumber', l.linenum, opts.get('line_number')),
2491 2492 ]
2492 2493 if opts.get('all'):
2493 2494 cols.append(('change', change, True))
2494 2495 cols.extend([
2495 2496 ('user', formatuser(ctx.user()), opts.get('user')),
2496 2497 ('date', fm.formatdate(ctx.date(), datefmt), opts.get('date')),
2497 2498 ])
2498 2499 lastcol = next(name for name, data, cond in reversed(cols) if cond)
2499 2500 for name, data, cond in cols:
2500 2501 field = fieldnamemap.get(name, name)
2501 2502 fm.condwrite(cond, field, '%s', data, label='grep.%s' % name)
2502 2503 if cond and name != lastcol:
2503 2504 fm.plain(sep, label='grep.sep')
2504 2505 if not opts.get('files_with_matches'):
2505 2506 fm.plain(sep, label='grep.sep')
2506 2507 if not opts.get('text') and binary():
2507 2508 fm.plain(_(" Binary file matches"))
2508 2509 else:
2509 2510 displaymatches(fm.nested('texts'), l)
2510 2511 fm.plain(eol)
2511 2512 found = True
2512 2513 if opts.get('files_with_matches'):
2513 2514 break
2514 2515 return found
2515 2516
2516 2517 def displaymatches(fm, l):
2517 2518 p = 0
2518 2519 for s, e in l.findpos():
2519 2520 if p < s:
2520 2521 fm.startitem()
2521 2522 fm.write('text', '%s', l.line[p:s])
2522 2523 fm.data(matched=False)
2523 2524 fm.startitem()
2524 2525 fm.write('text', '%s', l.line[s:e], label='grep.match')
2525 2526 fm.data(matched=True)
2526 2527 p = e
2527 2528 if p < len(l.line):
2528 2529 fm.startitem()
2529 2530 fm.write('text', '%s', l.line[p:])
2530 2531 fm.data(matched=False)
2531 2532 fm.end()
2532 2533
2533 2534 skip = {}
2534 2535 revfiles = {}
2535 2536 match = scmutil.match(repo[None], pats, opts)
2536 2537 found = False
2537 2538 follow = opts.get('follow')
2538 2539
2539 2540 def prep(ctx, fns):
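"""walkchangerevs prep hook: record match states for each file
touched by ctx, and for its copy source in the first parent"""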
2540 2541 rev = ctx.rev()
2541 2542 pctx = ctx.p1()
2542 2543 parent = pctx.rev()
2543 2544 matches.setdefault(rev, {})
2544 2545 matches.setdefault(parent, {})
2545 2546 files = revfiles.setdefault(rev, [])
2546 2547 for fn in fns:
2547 2548 flog = getfile(fn)
2548 2549 try:
2549 2550 fnode = ctx.filenode(fn)
2550 2551 except error.LookupError:
2551 2552 continue
2552 2553
2553 2554 copied = flog.renamed(fnode)
2554 2555 copy = follow and copied and copied[0]
2555 2556 if copy:
2556 2557 copies.setdefault(rev, {})[fn] = copy
2557 2558 if fn in skip:
2558 2559 if copy:
2559 2560 skip[copy] = True
2560 2561 continue
2561 2562 files.append(fn)
2562 2563
2563 2564 if fn not in matches[rev]:
2564 2565 grepbody(fn, rev, flog.read(fnode))
2565 2566
2566 2567 pfn = copy or fn
2567 2568 if pfn not in matches[parent]:
2568 2569 try:
2569 2570 fnode = pctx.filenode(pfn)
2570 2571 grepbody(pfn, parent, flog.read(fnode))
2571 2572 except error.LookupError:
2572 2573 pass
2573 2574
2574 2575 ui.pager('grep')
2575 2576 fm = ui.formatter('grep', opts)
2576 2577 for ctx in cmdutil.walkchangerevs(repo, match, opts, prep):
2577 2578 rev = ctx.rev()
2578 2579 parent = ctx.p1().rev()
2579 2580 for fn in sorted(revfiles.get(rev, [])):
2580 2581 states = matches[rev][fn]
2581 2582 copy = copies.get(rev, {}).get(fn)
2582 2583 if fn in skip:
2583 2584 if copy:
2584 2585 skip[copy] = True
2585 2586 continue
2586 2587 pstates = matches.get(parent, {}).get(copy or fn, [])
2587 2588 if pstates or states:
2588 2589 r = display(fm, fn, ctx, pstates, states)
2589 2590 found = found or r
2590 2591 if r and not opts.get('all'):
2591 2592 skip[fn] = True
2592 2593 if copy:
2593 2594 skip[copy] = True
2594 2595 del revfiles[rev]
2595 2596 # We will keep the matches dict for the duration of the window;
2596 2597 # clear the matches dict once the window is over.
2597 2598 if not revfiles:
2598 2599 matches.clear()
2599 2600 fm.end()
2600 2601
2601 2602 return not found
2602 2603
2603 2604 @command('heads',
2604 2605 [('r', 'rev', '',
2605 2606 _('show only heads which are descendants of STARTREV'), _('STARTREV')),
2606 2607 ('t', 'topo', False, _('show topological heads only')),
2607 2608 ('a', 'active', False, _('show active branchheads only (DEPRECATED)')),
2608 2609 ('c', 'closed', False, _('show normal and closed branch heads')),
2609 2610 ] + templateopts,
2610 2611 _('[-ct] [-r STARTREV] [REV]...'), cmdtype=readonly)
2611 2612 def heads(ui, repo, *branchrevs, **opts):
2612 2613 """show branch heads
2613 2614
2614 2615 With no arguments, show all open branch heads in the repository.
2615 2616 Branch heads are changesets that have no descendants on the
2616 2617 same branch. They are where development generally takes place and
2617 2618 are the usual targets for update and merge operations.
2618 2619
2619 2620 If one or more REVs are given, only open branch heads on the
2620 2621 branches associated with the specified changesets are shown. This
2621 2622 means that you can use :hg:`heads .` to see the heads on the
2622 2623 currently checked-out branch.
2623 2624
2624 2625 If -c/--closed is specified, also show branch heads marked closed
2625 2626 (see :hg:`commit --close-branch`).
2626 2627
2627 2628 If STARTREV is specified, only those heads that are descendants of
2628 2629 STARTREV will be displayed.
2629 2630
2630 2631 If -t/--topo is specified, named branch mechanics will be ignored and only
2631 2632 topological heads (changesets with no children) will be shown.
2632 2633
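.. container:: verbose

Examples:

- show the open heads of the branch currently checked out::

hg heads .

- also show closed branch heads::

hg heads -c

- show topological heads only, ignoring named branches::

hg heads -t
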
2633 2634 Returns 0 if matching heads are found, 1 if not.
2634 2635 """
2635 2636
2636 2637 opts = pycompat.byteskwargs(opts)
2637 2638 start = None
2638 2639 rev = opts.get('rev')
2639 2640 if rev:
2640 2641 repo = scmutil.unhidehashlikerevs(repo, [rev], 'nowarn')
2641 2642 start = scmutil.revsingle(repo, rev, None).node()
2642 2643
2643 2644 if opts.get('topo'):
2644 2645 heads = [repo[h] for h in repo.heads(start)]
2645 2646 else:
2646 2647 heads = []
2647 2648 for branch in repo.branchmap():
2648 2649 heads += repo.branchheads(branch, start, opts.get('closed'))
2649 2650 heads = [repo[h] for h in heads]
2650 2651
2651 2652 if branchrevs:
2652 2653 branches = set(repo[br].branch() for br in branchrevs)
2653 2654 heads = [h for h in heads if h.branch() in branches]
2654 2655
2655 2656 if opts.get('active') and branchrevs:
2656 2657 dagheads = repo.heads(start)
2657 2658 heads = [h for h in heads if h.node() in dagheads]
2658 2659
2659 2660 if branchrevs:
2660 2661 haveheads = set(h.branch() for h in heads)
2661 2662 if branches - haveheads:
2662 2663 headless = ', '.join(b for b in branches - haveheads)
2663 2664 msg = _('no open branch heads found on branches %s')
2664 2665 if opts.get('rev'):
2665 2666 msg += _(' (started at %s)') % opts['rev']
2666 2667 ui.warn((msg + '\n') % headless)
2667 2668
2668 2669 if not heads:
2669 2670 return 1
2670 2671
2671 2672 ui.pager('heads')
2672 2673 heads = sorted(heads, key=lambda x: -x.rev())
2673 2674 displayer = logcmdutil.changesetdisplayer(ui, repo, opts)
2674 2675 for ctx in heads:
2675 2676 displayer.show(ctx)
2676 2677 displayer.close()
2677 2678
2678 2679 @command('help',
2679 2680 [('e', 'extension', None, _('show only help for extensions')),
2680 2681 ('c', 'command', None, _('show only help for commands')),
2681 2682 ('k', 'keyword', None, _('show topics matching keyword')),
2682 2683 ('s', 'system', [], _('show help for specific platform(s)')),
2683 2684 ],
2684 2685 _('[-ecks] [TOPIC]'),
2685 2686 norepo=True, cmdtype=readonly)
2686 2687 def help_(ui, name=None, **opts):
2687 2688 """show help for a given topic or a help overview
2688 2689
2689 2690 With no arguments, print a list of commands with short help messages.
2690 2691
2691 2692 Given a topic, extension, or command name, print help for that
2692 2693 topic.
2693 2694
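.. container:: verbose

Examples:

- show help for a command::

hg help diff

- show help for the revisions topic::

hg help revisions

- list help topics matching a keyword::

hg help -k bookmark
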
2694 2695 Returns 0 if successful.
2695 2696 """
2696 2697
2697 2698 keep = opts.get(r'system') or []
2698 2699 if len(keep) == 0:
2699 2700 if pycompat.sysplatform.startswith('win'):
2700 2701 keep.append('windows')
2701 2702 elif pycompat.sysplatform == 'OpenVMS':
2702 2703 keep.append('vms')
2703 2704 elif pycompat.sysplatform == 'plan9':
2704 2705 keep.append('plan9')
2705 2706 else:
2706 2707 keep.append('unix')
2707 2708 keep.append(pycompat.sysplatform.lower())
2708 2709 if ui.verbose:
2709 2710 keep.append('verbose')
2710 2711
2711 2712 commands = sys.modules[__name__]
2712 2713 formatted = help.formattedhelp(ui, commands, name, keep=keep, **opts)
2713 2714 ui.pager('help')
2714 2715 ui.write(formatted)
2715 2716
2716 2717
2717 2718 @command('identify|id',
2718 2719 [('r', 'rev', '',
2719 2720 _('identify the specified revision'), _('REV')),
2720 2721 ('n', 'num', None, _('show local revision number')),
2721 2722 ('i', 'id', None, _('show global revision id')),
2722 2723 ('b', 'branch', None, _('show branch')),
2723 2724 ('t', 'tags', None, _('show tags')),
2724 2725 ('B', 'bookmarks', None, _('show bookmarks')),
2725 2726 ] + remoteopts + formatteropts,
2726 2727 _('[-nibtB] [-r REV] [SOURCE]'),
2727 2728 optionalrepo=True, cmdtype=readonly)
2728 2729 def identify(ui, repo, source=None, rev=None,
2729 2730 num=None, id=None, branch=None, tags=None, bookmarks=None, **opts):
2730 2731 """identify the working directory or specified revision
2731 2732
2732 2733 Print a summary identifying the repository state at REV using one or
2733 2734 two parent hash identifiers, followed by a "+" if the working
2734 2735 directory has uncommitted changes, the branch name (if not default),
2735 2736 a list of tags, and a list of bookmarks.
2736 2737
2737 2738 When REV is not given, print a summary of the current state of the
2738 2739 repository, including the working directory. Specify -r. to get information
2739 2740 about the working directory parent without scanning uncommitted changes.
2740 2741
2741 2742 Specifying a path to a repository root or Mercurial bundle will
2742 2743 cause lookup to operate on that repository/bundle.
2743 2744
2744 2745 .. container:: verbose
2745 2746
2746 2747 Examples:
2747 2748
2748 2749 - generate a build identifier for the working directory::
2749 2750
2750 2751 hg id --id > build-id.dat
2751 2752
2752 2753 - find the revision corresponding to a tag::
2753 2754
2754 2755 hg id -n -r 1.3
2755 2756
2756 2757 - check the most recent revision of a remote repository::
2757 2758
2758 2759 hg id -r tip https://www.mercurial-scm.org/repo/hg/
2759 2760
2760 2761 See :hg:`log` for generating more information about specific revisions,
2761 2762 including full hash identifiers.
2762 2763
2763 2764 Returns 0 if successful.
2764 2765 """
2765 2766
2766 2767 opts = pycompat.byteskwargs(opts)
2767 2768 if not repo and not source:
2768 2769 raise error.Abort(_("there is no Mercurial repository here "
2769 2770 "(.hg not found)"))
2770 2771
2771 2772 if ui.debugflag:
2772 2773 hexfunc = hex
2773 2774 else:
2774 2775 hexfunc = short
2775 2776 default = not (num or id or branch or tags or bookmarks)
2776 2777 output = []
2777 2778 revs = []
2778 2779
2779 2780 if source:
2780 2781 source, branches = hg.parseurl(ui.expandpath(source))
2781 2782 peer = hg.peer(repo or ui, opts, source) # only pass ui when no repo
2782 2783 repo = peer.local()
2783 2784 revs, checkout = hg.addbranchrevs(repo, peer, branches, None)
2784 2785
2785 2786 fm = ui.formatter('identify', opts)
2786 2787 fm.startitem()
2787 2788
2788 2789 if not repo:
2789 2790 if num or branch or tags:
2790 2791 raise error.Abort(
2791 2792 _("can't query remote revision number, branch, or tags"))
2792 2793 if not rev and revs:
2793 2794 rev = revs[0]
2794 2795 if not rev:
2795 2796 rev = "tip"
2796 2797
2797 2798 remoterev = peer.lookup(rev)
2798 2799 hexrev = hexfunc(remoterev)
2799 2800 if default or id:
2800 2801 output = [hexrev]
2801 2802 fm.data(id=hexrev)
2802 2803
2803 2804 def getbms():
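"""Return sorted remote bookmark names pointing at remoterev"""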
2804 2805 bms = []
2805 2806
2806 2807 if 'bookmarks' in peer.listkeys('namespaces'):
2807 2808 hexremoterev = hex(remoterev)
2808 2809 bms = [bm for bm, bmr in peer.listkeys('bookmarks').iteritems()
2809 2810 if bmr == hexremoterev]
2810 2811
2811 2812 return sorted(bms)
2812 2813
2813 2814 bms = getbms()
2814 2815 if bookmarks:
2815 2816 output.extend(bms)
2816 2817 elif default and not ui.quiet:
2817 2818 # multiple bookmarks for a single parent separated by '/'
2818 2819 bm = '/'.join(bms)
2819 2820 if bm:
2820 2821 output.append(bm)
2821 2822
2822 2823 fm.data(node=hex(remoterev))
2823 2824 fm.data(bookmarks=fm.formatlist(bms, name='bookmark'))
2824 2825 else:
2825 2826 if rev:
2826 2827 repo = scmutil.unhidehashlikerevs(repo, [rev], 'nowarn')
2827 2828 ctx = scmutil.revsingle(repo, rev, None)
2828 2829
2829 2830 if ctx.rev() is None:
2830 2831 ctx = repo[None]
2831 2832 parents = ctx.parents()
2832 2833 taglist = []
2833 2834 for p in parents:
2834 2835 taglist.extend(p.tags())
2835 2836
2836 2837 dirty = ""
2837 2838 if ctx.dirty(missing=True, merge=False, branch=False):
2838 2839 dirty = '+'
2839 2840 fm.data(dirty=dirty)
2840 2841
2841 2842 hexoutput = [hexfunc(p.node()) for p in parents]
2842 2843 if default or id:
2843 2844 output = ["%s%s" % ('+'.join(hexoutput), dirty)]
2844 2845 fm.data(id="%s%s" % ('+'.join(hexoutput), dirty))
2845 2846
2846 2847 if num:
2847 2848 numoutput = ["%d" % p.rev() for p in parents]
2848 2849 output.append("%s%s" % ('+'.join(numoutput), dirty))
2849 2850
2850 2851 fn = fm.nested('parents')
2851 2852 for p in parents:
2852 2853 fn.startitem()
2853 2854 fn.data(rev=p.rev())
2854 2855 fn.data(node=p.hex())
2855 2856 fn.context(ctx=p)
2856 2857 fn.end()
2857 2858 else:
2858 2859 hexoutput = hexfunc(ctx.node())
2859 2860 if default or id:
2860 2861 output = [hexoutput]
2861 2862 fm.data(id=hexoutput)
2862 2863
2863 2864 if num:
2864 2865 output.append(pycompat.bytestr(ctx.rev()))
2865 2866 taglist = ctx.tags()
2866 2867
2867 2868 if default and not ui.quiet:
2868 2869 b = ctx.branch()
2869 2870 if b != 'default':
2870 2871 output.append("(%s)" % b)
2871 2872
2872 2873 # multiple tags for a single parent separated by '/'
2873 2874 t = '/'.join(taglist)
2874 2875 if t:
2875 2876 output.append(t)
2876 2877
2877 2878 # multiple bookmarks for a single parent separated by '/'
2878 2879 bm = '/'.join(ctx.bookmarks())
2879 2880 if bm:
2880 2881 output.append(bm)
2881 2882 else:
2882 2883 if branch:
2883 2884 output.append(ctx.branch())
2884 2885
2885 2886 if tags:
2886 2887 output.extend(taglist)
2887 2888
2888 2889 if bookmarks:
2889 2890 output.extend(ctx.bookmarks())
2890 2891
2891 2892 fm.data(node=ctx.hex())
2892 2893 fm.data(branch=ctx.branch())
2893 2894 fm.data(tags=fm.formatlist(taglist, name='tag', sep=':'))
2894 2895 fm.data(bookmarks=fm.formatlist(ctx.bookmarks(), name='bookmark'))
2895 2896 fm.context(ctx=ctx)
2896 2897
2897 2898 fm.plain("%s\n" % ' '.join(output))
2898 2899 fm.end()
2899 2900
2900 2901 @command('import|patch',
2901 2902 [('p', 'strip', 1,
2902 2903 _('directory strip option for patch. This has the same '
2903 2904 'meaning as the corresponding patch option'), _('NUM')),
2904 2905 ('b', 'base', '', _('base path (DEPRECATED)'), _('PATH')),
2905 2906 ('e', 'edit', False, _('invoke editor on commit messages')),
2906 2907 ('f', 'force', None,
2907 2908 _('skip check for outstanding uncommitted changes (DEPRECATED)')),
2908 2909 ('', 'no-commit', None,
2909 2910 _("don't commit, just update the working directory")),
2910 2911 ('', 'bypass', None,
2911 2912 _("apply patch without touching the working directory")),
2912 2913 ('', 'partial', None,
2913 2914 _('commit even if some hunks fail')),
2914 2915 ('', 'exact', None,
2915 2916 _('abort if patch would apply lossily')),
2916 2917 ('', 'prefix', '',
2917 2918 _('apply patch to subdirectory'), _('DIR')),
2918 2919 ('', 'import-branch', None,
2919 2920 _('use any branch information in patch (implied by --exact)'))] +
2920 2921 commitopts + commitopts2 + similarityopts,
2921 2922 _('[OPTION]... PATCH...'))
2922 2923 def import_(ui, repo, patch1=None, *patches, **opts):
2923 2924 """import an ordered set of patches
2924 2925
2925 2926 Import a list of patches and commit them individually (unless
2926 2927 --no-commit is specified).
2927 2928
2928 2929 To read a patch from standard input (stdin), use "-" as the patch
2929 2930 name. If a URL is specified, the patch will be downloaded from
2930 2931 there.
2931 2932
2932 2933 Import first applies changes to the working directory (unless
2933 2934 --bypass is specified) and will abort if there are outstanding
2934 2935 changes.
2935 2936
2936 2937 Use --bypass to apply and commit patches directly to the
2937 2938 repository, without affecting the working directory. Without
2938 2939 --exact, patches will be applied on top of the working directory
2939 2940 parent revision.
2940 2941
2941 2942 You can import a patch straight from a mail message. Even patches
2942 2943 as attachments work (to use the body part, it must have type
2943 2944 text/plain or text/x-patch). The From and Subject headers of the
2944 2945 email message are used as the default committer and commit message.
2945 2946 All text/plain body parts before the first diff are added to the
2946 2947 commit message.
2947 2948
2948 2949 If the imported patch was generated by :hg:`export`, user and
2949 2950 description from patch override values from message headers and
2950 2951 body. Values given on command line with -m/--message and -u/--user
2951 2952 override these.
2952 2953
2953 2954 If --exact is specified, import will set the working directory to
2954 2955 the parent of each patch before applying it, and will abort if the
2955 2956 resulting changeset has a different ID than the one recorded in
2956 2957 the patch. This will guard against various ways that portable
2957 2958 patch formats and mail systems might fail to transfer Mercurial
2958 2959 data or metadata. See :hg:`bundle` for lossless transmission.
2959 2960
2960 2961 Use --partial to ensure a changeset will be created from the patch
2961 2962 even if some hunks fail to apply. Hunks that fail to apply will be
2962 2963 written to a <target-file>.rej file. Conflicts can then be resolved
2963 2964 by hand before :hg:`commit --amend` is run to update the created
2964 2965 changeset. This flag exists to let people import patches that
2965 2966 partially apply without losing the associated metadata (author,
2966 2967 date, description, ...).
2967 2968
2968 2969 .. note::
2969 2970
2970 2971 When no hunks apply cleanly, :hg:`import --partial` will create
2971 2972 an empty changeset, importing only the patch metadata.
2972 2973
2973 2974 With -s/--similarity, hg will attempt to discover renames and
2974 2975 copies in the patch in the same way as :hg:`addremove`.
2975 2976
2976 2977 It is possible to use external patch programs to perform the patch
2977 2978 by setting the ``ui.patch`` configuration option. For the default
2978 2979 internal tool, the fuzz can also be configured via ``patch.fuzz``.
2979 2980 See :hg:`help config` for more information about configuration
2980 2981 files and how to use these options.
2981 2982
2982 2983 See :hg:`help dates` for a list of formats valid for -d/--date.
2983 2984
2984 2985 .. container:: verbose
2985 2986
2986 2987 Examples:
2987 2988
2988 2989 - import a traditional patch from a website and detect renames::
2989 2990
2990 2991 hg import -s 80 http://example.com/bugfix.patch
2991 2992
2992 2993 - import a changeset from an hgweb server::
2993 2994
2994 2995 hg import https://www.mercurial-scm.org/repo/hg/rev/5ca8c111e9aa
2995 2996
2996 2997 - import all the patches in a Unix-style mbox::
2997 2998
2998 2999 hg import incoming-patches.mbox
2999 3000
3000 3001 - import patches from stdin::
3001 3002
3002 3003 hg import -
3003 3004
3004 3005 - attempt to exactly restore an exported changeset (not always
3005 3006 possible)::
3006 3007
3007 3008 hg import --exact proposed-fix.patch
3008 3009
3009 3010 - use an external tool to apply a patch which is too fuzzy for
3010 3011 the default internal tool::
3011 3012
3012 3013 hg import --config ui.patch="patch --merge" fuzzy.patch
3013 3014
3014 3015 - change the default fuzzing from 2 to a less strict 7::
3015 3016
3016 3017 hg import --config patch.fuzz=7 fuzz.patch
3017 3018
3018 3019 Returns 0 on success, 1 on partial success (see --partial).
3019 3020 """
3020 3021
3021 3022 opts = pycompat.byteskwargs(opts)
3022 3023 if not patch1:
3023 3024 raise error.Abort(_('need at least one patch to import'))
3024 3025
3025 3026 patches = (patch1,) + patches
3026 3027
3027 3028 date = opts.get('date')
3028 3029 if date:
3029 3030 opts['date'] = dateutil.parsedate(date)
3030 3031
3031 3032 exact = opts.get('exact')
3032 3033 update = not opts.get('bypass')
3033 3034 if not update and opts.get('no_commit'):
3034 3035 raise error.Abort(_('cannot use --no-commit with --bypass'))
3035 3036 try:
3036 3037 sim = float(opts.get('similarity') or 0)
3037 3038 except ValueError:
3038 3039 raise error.Abort(_('similarity must be a number'))
3039 3040 if sim < 0 or sim > 100:
3040 3041 raise error.Abort(_('similarity must be between 0 and 100'))
3041 3042 if sim and not update:
3042 3043 raise error.Abort(_('cannot use --similarity with --bypass'))
3043 3044 if exact:
3044 3045 if opts.get('edit'):
3045 3046 raise error.Abort(_('cannot use --exact with --edit'))
3046 3047 if opts.get('prefix'):
3047 3048 raise error.Abort(_('cannot use --exact with --prefix'))
3048 3049
3049 3050 base = opts["base"]
3050 3051 wlock = dsguard = lock = tr = None
3051 3052 msgs = []
3052 3053 ret = 0
3053 3054
3054 3055
3055 3056 try:
3056 3057 wlock = repo.wlock()
3057 3058
3058 3059 if update:
3059 3060 cmdutil.checkunfinished(repo)
3060 3061 if (exact or not opts.get('force')):
3061 3062 cmdutil.bailifchanged(repo)
3062 3063
3063 3064 if not opts.get('no_commit'):
3064 3065 lock = repo.lock()
3065 3066 tr = repo.transaction('import')
3066 3067 else:
3067 3068 dsguard = dirstateguard.dirstateguard(repo, 'import')
3068 3069 parents = repo[None].parents()
3069 3070 for patchurl in patches:
3070 3071 if patchurl == '-':
3071 3072 ui.status(_('applying patch from stdin\n'))
3072 3073 patchfile = ui.fin
3073 3074 patchurl = 'stdin' # for error message
3074 3075 else:
3075 3076 patchurl = os.path.join(base, patchurl)
3076 3077 ui.status(_('applying %s\n') % patchurl)
3077 3078 patchfile = hg.openpath(ui, patchurl)
3078 3079
3079 3080 haspatch = False
3080 3081 for hunk in patch.split(patchfile):
3081 3082 (msg, node, rej) = cmdutil.tryimportone(ui, repo, hunk,
3082 3083 parents, opts,
3083 3084 msgs, hg.clean)
3084 3085 if msg:
3085 3086 haspatch = True
3086 3087 ui.note(msg + '\n')
3087 3088 if update or exact:
3088 3089 parents = repo[None].parents()
3089 3090 else:
3090 3091 parents = [repo[node]]
3091 3092 if rej:
3092 3093 ui.write_err(_("patch applied partially\n"))
3093 3094 ui.write_err(_("(fix the .rej files and run "
3094 3095 "`hg commit --amend`)\n"))
3095 3096 ret = 1
3096 3097 break
3097 3098
3098 3099 if not haspatch:
3099 3100 raise error.Abort(_('%s: no diffs found') % patchurl)
3100 3101
3101 3102 if tr:
3102 3103 tr.close()
3103 3104 if msgs:
3104 3105 repo.savecommitmessage('\n* * *\n'.join(msgs))
3105 3106 if dsguard:
3106 3107 dsguard.close()
3107 3108 return ret
3108 3109 finally:
3109 3110 if tr:
3110 3111 tr.release()
3111 3112 release(lock, dsguard, wlock)
3112 3113
3113 3114 @command('incoming|in',
3114 3115 [('f', 'force', None,
3115 3116 _('run even if remote repository is unrelated')),
3116 3117 ('n', 'newest-first', None, _('show newest record first')),
3117 3118 ('', 'bundle', '',
3118 3119 _('file to store the bundles into'), _('FILE')),
3119 3120 ('r', 'rev', [], _('a remote changeset intended to be added'), _('REV')),
3120 3121 ('B', 'bookmarks', False, _("compare bookmarks")),
3121 3122 ('b', 'branch', [],
3122 3123 _('a specific branch you would like to pull'), _('BRANCH')),
3123 3124 ] + logopts + remoteopts + subrepoopts,
3124 3125 _('[-p] [-n] [-M] [-f] [-r REV]... [--bundle FILENAME] [SOURCE]'))
3125 3126 def incoming(ui, repo, source="default", **opts):
3126 3127 """show new changesets found in source
3127 3128
3128 3129 Show new changesets found in the specified path/URL or the default
3129 3130 pull location. These are the changesets that would have been pulled
3130 3131 by :hg:`pull` at the time you issued this command.
3131 3132
3132 3133 See pull for valid source format details.
3133 3134
3134 3135 .. container:: verbose
3135 3136
3136 3137 With -B/--bookmarks, the result of bookmark comparison between
3137 3138 local and remote repositories is displayed. With -v/--verbose,
3138 3139 status is also displayed for each bookmark, as shown below::
3139 3140
3140 3141 BM1 01234567890a added
3141 3142 BM2 1234567890ab advanced
3142 3143 BM3 234567890abc diverged
3143 3144 BM4 34567890abcd changed
3144 3145
3145 3146 The action taken locally when pulling depends on the
3146 3147 status of each bookmark:
3147 3148
3148 3149 :``added``: pull will create it
3149 3150 :``advanced``: pull will update it
3150 3151 :``diverged``: pull will create a divergent bookmark
3151 3152 :``changed``: result depends on remote changesets
3152 3153
3153 3154 From the point of view of pulling behavior, bookmarks
3154 3155 existing only in the remote repository are treated as ``added``,
3155 3156 even if they have in fact been locally deleted.
3156 3157
3157 3158 .. container:: verbose
3158 3159
3159 3160 For a remote repository, using --bundle avoids downloading the
3160 3161 changesets twice if the incoming command is followed by a pull.
3161 3162
3162 3163 Examples:
3163 3164
3164 3165 - show incoming changes with patches and full description::
3165 3166
3166 3167 hg incoming -vp
3167 3168
3168 3169 - show incoming changes excluding merges, store a bundle::
3169 3170
3170 3171 hg in -vpM --bundle incoming.hg
3171 3172 hg pull incoming.hg
3172 3173
3173 3174 - briefly list changes inside a bundle::
3174 3175
3175 3176 hg in changes.hg -T "{desc|firstline}\\n"
3176 3177
3177 3178 Returns 0 if there are incoming changes, 1 otherwise.
3178 3179 """
3179 3180 opts = pycompat.byteskwargs(opts)
3180 3181 if opts.get('graph'):
3181 3182 logcmdutil.checkunsupportedgraphflags([], opts)
3182 3183 def display(other, chlist, displayer):
3183 3184 revdag = logcmdutil.graphrevs(other, chlist, opts)
3184 3185 logcmdutil.displaygraph(ui, repo, revdag, displayer,
3185 3186 graphmod.asciiedges)
3186 3187
3187 3188 hg._incoming(display, lambda: 1, ui, repo, source, opts, buffered=True)
3188 3189 return 0
3189 3190
3190 3191 if opts.get('bundle') and opts.get('subrepos'):
3191 3192 raise error.Abort(_('cannot combine --bundle and --subrepos'))
3192 3193
3193 3194 if opts.get('bookmarks'):
3194 3195 source, branches = hg.parseurl(ui.expandpath(source),
3195 3196 opts.get('branch'))
3196 3197 other = hg.peer(repo, opts, source)
3197 3198 if 'bookmarks' not in other.listkeys('namespaces'):
3198 3199 ui.warn(_("remote doesn't support bookmarks\n"))
3199 3200 return 0
3200 3201 ui.pager('incoming')
3201 3202 ui.status(_('comparing with %s\n') % util.hidepassword(source))
3202 3203 return bookmarks.incoming(ui, repo, other)
3203 3204
3204 3205 repo._subtoppath = ui.expandpath(source)
3205 3206 try:
3206 3207 return hg.incoming(ui, repo, source, opts)
3207 3208 finally:
3208 3209 del repo._subtoppath
3209 3210
3210 3211
3211 3212 @command('^init', remoteopts, _('[-e CMD] [--remotecmd CMD] [DEST]'),
3212 3213 norepo=True)
3213 3214 def init(ui, dest=".", **opts):
3214 3215 """create a new repository in the given directory
3215 3216
3216 3217 Initialize a new repository in the given directory. If the given
3217 3218 directory does not exist, it will be created.
3218 3219
3219 3220 If no directory is given, the current directory is used.
3220 3221
3221 3222 It is possible to specify an ``ssh://`` URL as the destination.
3222 3223 See :hg:`help urls` for more information.
3223 3224
3224 3225 Returns 0 on success.
3225 3226 """
3226 3227 opts = pycompat.byteskwargs(opts)
3227 3228 hg.peer(ui, opts, ui.expandpath(dest), create=True)
3228 3229
3229 3230 @command('locate',
3230 3231 [('r', 'rev', '', _('search the repository as it is in REV'), _('REV')),
3231 3232 ('0', 'print0', None, _('end filenames with NUL, for use with xargs')),
3232 3233 ('f', 'fullpath', None, _('print complete paths from the filesystem root')),
3233 3234 ] + walkopts,
3234 3235 _('[OPTION]... [PATTERN]...'))
3235 3236 def locate(ui, repo, *pats, **opts):
3236 3237 """locate files matching specific patterns (DEPRECATED)
3237 3238
3238 3239 Print files under Mercurial control in the working directory whose
3239 3240 names match the given patterns.
3240 3241
3241 3242 By default, this command searches all directories in the working
3242 3243 directory. To search just the current directory and its
3243 3244 subdirectories, use "--include .".
3244 3245
3245 3246 If no patterns are given to match, this command prints the names
3246 3247 of all files under Mercurial control in the working directory.
3247 3248
3248 3249 If you want to feed the output of this command into the "xargs"
3249 3250 command, use the -0 option to both this command and "xargs". This
3250 3251 will avoid the problem of "xargs" treating single filenames that
3251 3252 contain whitespace as multiple filenames.
3252 3253
3253 3254 See :hg:`help files` for a more versatile command.
3254 3255
3255 3256 Returns 0 if a match is found, 1 otherwise.
3256 3257 """
3257 3258 opts = pycompat.byteskwargs(opts)
3258 3259 if opts.get('print0'):
3259 3260 end = '\0'
3260 3261 else:
3261 3262 end = '\n'
3262 3263 ctx = scmutil.revsingle(repo, opts.get('rev'), None)
3263 3264
3264 3265 ret = 1
3265 3266 m = scmutil.match(ctx, pats, opts, default='relglob',
3266 3267 badfn=lambda x, y: False)
3267 3268
3268 3269 ui.pager('locate')
3269 3270 for abs in ctx.matches(m):
3270 3271 if opts.get('fullpath'):
3271 3272 ui.write(repo.wjoin(abs), end)
3272 3273 else:
3273 3274 ui.write(((pats and m.rel(abs)) or abs), end)
3274 3275 ret = 0
3275 3276
3276 3277 return ret
3277 3278
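# --- illustration (editor's sketch, not part of commands.py) ---------------
# Why -0/--print0 pairs with "xargs -0": splitting on NUL is safe even when
# file names contain whitespace or newlines.  The byte string below stands
# in for output produced with end = '\0' as in the code above.
raw = b'path with space.txt\0other.txt\0'
names = [n for n in raw.split(b'\0') if n]
assert names == [b'path with space.txt', b'other.txt']
# ---------------------------------------------------------------------------
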
3278 3279 @command('^log|history',
3279 3280 [('f', 'follow', None,
3280 3281 _('follow changeset history, or file history across copies and renames')),
3281 3282 ('', 'follow-first', None,
3282 3283 _('only follow the first parent of merge changesets (DEPRECATED)')),
3283 3284 ('d', 'date', '', _('show revisions matching date spec'), _('DATE')),
3284 3285 ('C', 'copies', None, _('show copied files')),
3285 3286 ('k', 'keyword', [],
3286 3287 _('do case-insensitive search for a given text'), _('TEXT')),
3287 3288 ('r', 'rev', [], _('show the specified revision or revset'), _('REV')),
3288 3289 ('L', 'line-range', [],
3289 3290 _('follow line range of specified file (EXPERIMENTAL)'),
3290 3291 _('FILE,RANGE')),
3291 3292 ('', 'removed', None, _('include revisions where files were removed')),
3292 3293 ('m', 'only-merges', None, _('show only merges (DEPRECATED)')),
3293 3294 ('u', 'user', [], _('revisions committed by user'), _('USER')),
3294 3295 ('', 'only-branch', [],
3295 3296 _('show only changesets within the given named branch (DEPRECATED)'),
3296 3297 _('BRANCH')),
3297 3298 ('b', 'branch', [],
3298 3299 _('show changesets within the given named branch'), _('BRANCH')),
3299 3300 ('P', 'prune', [],
3300 3301 _('do not display revision or any of its ancestors'), _('REV')),
3301 3302 ] + logopts + walkopts,
3302 3303 _('[OPTION]... [FILE]'),
3303 3304 inferrepo=True, cmdtype=readonly)
3304 3305 def log(ui, repo, *pats, **opts):
3305 3306 """show revision history of entire repository or files
3306 3307
3307 3308 Print the revision history of the specified files or the entire
3308 3309 project.
3309 3310
3310 3311 If no revision range is specified, the default is ``tip:0`` unless
3311 3312 --follow is set, in which case the working directory parent is
3312 3313 used as the starting revision.
3313 3314
3314 3315 File history is shown without following rename or copy history of
3315 3316 files. Use -f/--follow with a filename to follow history across
3316 3317 renames and copies. --follow without a filename will only show
3317 3318 ancestors of the starting revision.
3318 3319
3319 3320 By default this command prints revision number and changeset id,
3320 3321 tags, non-trivial parents, user, date and time, and a summary for
3321 3322 each commit. When the -v/--verbose switch is used, the list of
3322 3323 changed files and full commit message are shown.
3323 3324
3324 3325 With --graph the revisions are shown as an ASCII art DAG with the most
3325 3326 recent changeset at the top.
3326 3327 'o' is a changeset, '@' is a working directory parent, '_' closes a branch,
3327 3328 'x' is obsolete, '*' is unstable, and '+' represents a fork where the
3328 3329 changeset from the lines below is a parent of the 'o' merge on the same
3329 3330 line.
3330 3331 Paths in the DAG are represented with '|', '/' and so forth. ':' in place
3331 3332 of a '|' indicates one or more revisions in a path are omitted.
3332 3333
3333 3334 .. container:: verbose
3334 3335
3335 3336 Use -L/--line-range FILE,M:N options to follow the history of lines
3336 3337 from M to N in FILE. With -p/--patch only diff hunks affecting
3337 3338 specified line range will be shown. This option requires --follow;
3338 3339 it can be specified multiple times. Currently, this option is not
3339 3340 compatible with --graph. This option is experimental.
3340 3341
3341 3342 .. note::
3342 3343
3343 3344 :hg:`log --patch` may generate unexpected diff output for merge
3344 3345 changesets, as it will only compare the merge changeset against
3345 3346 its first parent. Also, only files different from BOTH parents
3346 3347 will appear in the files: list.
3347 3348
3348 3349 .. note::
3349 3350
3350 3351 For performance reasons, :hg:`log FILE` may omit duplicate changes
3351 3352 made on branches and will not show removals or mode changes. To
3352 3353 see all such changes, use the --removed switch.
3353 3354
3354 3355 .. container:: verbose
3355 3356
3356 3357 .. note::
3357 3358
3358 3359 The history resulting from -L/--line-range options depends on diff
3359 3360 options; for instance, if whitespace changes are ignored, changes
3360 3361 involving only whitespace in the specified line range will not be listed.
3361 3362
3362 3363 .. container:: verbose
3363 3364
3364 3365 Some examples:
3365 3366
3366 3367 - changesets with full descriptions and file lists::
3367 3368
3368 3369 hg log -v
3369 3370
3370 3371 - changesets ancestral to the working directory::
3371 3372
3372 3373 hg log -f
3373 3374
3374 3375 - last 10 commits on the current branch::
3375 3376
3376 3377 hg log -l 10 -b .
3377 3378
3378 3379 - changesets showing all modifications of a file, including removals::
3379 3380
3380 3381 hg log --removed file.c
3381 3382
3382 3383 - all changesets that touch a directory, with diffs, excluding merges::
3383 3384
3384 3385 hg log -Mp lib/
3385 3386
3386 3387 - all revision numbers that match a keyword::
3387 3388
3388 3389 hg log -k bug --template "{rev}\\n"
3389 3390
3390 3391 - the full hash identifier of the working directory parent::
3391 3392
3392 3393 hg log -r . --template "{node}\\n"
3393 3394
3394 3395 - list available log templates::
3395 3396
3396 3397 hg log -T list
3397 3398
3398 3399 - check if a given changeset is included in a tagged release::
3399 3400
3400 3401 hg log -r "a21ccf and ancestor(1.9)"
3401 3402
3402 3403 - find all changesets by some user in a date range::
3403 3404
3404 3405 hg log -k alice -d "may 2008 to jul 2008"
3405 3406
3406 3407 - summary of all changesets after the last tag::
3407 3408
3408 3409 hg log -r "last(tagged())::" --template "{desc|firstline}\\n"
3409 3410
3410 3411 - changesets touching lines 13 to 23 for file.c::
3411 3412
3412 3413 hg log -L file.c,13:23
3413 3414
3414 3415 - changesets touching lines 13 to 23 for file.c and lines 2 to 6 of
3415 3416 main.c with patch::
3416 3417
3417 3418 hg log -L file.c,13:23 -L main.c,2:6 -p
3418 3419
3419 3420 See :hg:`help dates` for a list of formats valid for -d/--date.
3420 3421
3421 3422 See :hg:`help revisions` for more about specifying and ordering
3422 3423 revisions.
3423 3424
3424 3425 See :hg:`help templates` for more about pre-packaged styles and
3425 3426 specifying custom templates. The default template used by the log
3426 3427 command can be customized via the ``ui.logtemplate`` configuration
3427 3428 setting.
3428 3429
3429 3430 Returns 0 on success.
3430 3431
3431 3432 """
3432 3433 opts = pycompat.byteskwargs(opts)
3433 3434 linerange = opts.get('line_range')
3434 3435
3435 3436 if linerange and not opts.get('follow'):
3436 3437 raise error.Abort(_('--line-range requires --follow'))
3437 3438
3438 3439 if linerange and pats:
3439 3440 # TODO: take pats as patterns with no line-range filter
3440 3441 raise error.Abort(
3441 3442 _('FILE arguments are not compatible with --line-range option')
3442 3443 )
3443 3444
3444 3445 repo = scmutil.unhidehashlikerevs(repo, opts.get('rev'), 'nowarn')
3445 3446 revs, differ = logcmdutil.getrevs(repo, pats, opts)
3446 3447 if linerange:
3447 3448 # TODO: should follow file history from logcmdutil._initialrevs(),
3448 3449 # then filter the result by logcmdutil._makerevset() and --limit
3449 3450 revs, differ = logcmdutil.getlinerangerevs(repo, revs, opts)
3450 3451
3451 3452 getrenamed = None
3452 3453 if opts.get('copies'):
3453 3454 endrev = None
3454 3455 if opts.get('rev'):
3455 3456 endrev = scmutil.revrange(repo, opts.get('rev')).max() + 1
3456 3457 getrenamed = templatekw.getrenamedfn(repo, endrev=endrev)
3457 3458
3458 3459 ui.pager('log')
3459 3460 displayer = logcmdutil.changesetdisplayer(ui, repo, opts, differ,
3460 3461 buffered=True)
3461 3462 if opts.get('graph'):
3462 3463 displayfn = logcmdutil.displaygraphrevs
3463 3464 else:
3464 3465 displayfn = logcmdutil.displayrevs
3465 3466 displayfn(ui, repo, revs, displayer, getrenamed)
3466 3467
3467 3468 @command('manifest',
3468 3469 [('r', 'rev', '', _('revision to display'), _('REV')),
3469 3470 ('', 'all', False, _("list files from all revisions"))]
3470 3471 + formatteropts,
3471 3472 _('[-r REV]'), cmdtype=readonly)
3472 3473 def manifest(ui, repo, node=None, rev=None, **opts):
3473 3474 """output the current or given revision of the project manifest
3474 3475
3475 3476 Print a list of version controlled files for the given revision.
3476 3477 If no revision is given, the first parent of the working directory
3477 3478 is used, or the null revision if no revision is checked out.
3478 3479
3479 3480 With -v, print file permissions, symlink and executable bits.
3480 3481 With --debug, print file revision hashes.
3481 3482
3482 3483 If option --all is specified, the list of all files from all revisions
3483 3484 is printed. This includes deleted and renamed files.
3484 3485
3485 3486 Returns 0 on success.
3486 3487 """
3487 3488 opts = pycompat.byteskwargs(opts)
3488 3489 fm = ui.formatter('manifest', opts)
3489 3490
3490 3491 if opts.get('all'):
3491 3492 if rev or node:
3492 3493 raise error.Abort(_("can't specify a revision with --all"))
3493 3494
3494 3495 res = []
3495 3496 prefix = "data/"
3496 3497 suffix = ".i"
3497 3498 plen = len(prefix)
3498 3499 slen = len(suffix)
3499 3500 with repo.lock():
3500 3501 for fn, b, size in repo.store.datafiles():
3501 3502 if size != 0 and fn[-slen:] == suffix and fn[:plen] == prefix:
3502 3503 res.append(fn[plen:-slen])
3503 3504 ui.pager('manifest')
3504 3505 for f in res:
3505 3506 fm.startitem()
3506 3507 fm.write("path", '%s\n', f)
3507 3508 fm.end()
3508 3509 return
3509 3510
3510 3511 if rev and node:
3511 3512 raise error.Abort(_("please specify just one revision"))
3512 3513
3513 3514 if not node:
3514 3515 node = rev
3515 3516
3516 3517 char = {'l': '@', 'x': '*', '': '', 't': 'd'}
3517 3518 mode = {'l': '644', 'x': '755', '': '644', 't': '755'}
3518 3519 if node:
3519 3520 repo = scmutil.unhidehashlikerevs(repo, [node], 'nowarn')
3520 3521 ctx = scmutil.revsingle(repo, node)
3521 3522 mf = ctx.manifest()
3522 3523 ui.pager('manifest')
3523 3524 for f in ctx:
3524 3525 fm.startitem()
3525 3526 fl = ctx[f].flags()
3526 3527 fm.condwrite(ui.debugflag, 'hash', '%s ', hex(mf[f]))
3527 3528 fm.condwrite(ui.verbose, 'mode type', '%s %1s ', mode[fl], char[fl])
3528 3529 fm.write('path', '%s\n', f)
3529 3530 fm.end()
3530 3531
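# --- illustration (editor's sketch, not part of commands.py) ---------------
# The --all branch above recovers tracked paths from store file names by
# stripping the 'data/' prefix and the '.i' suffix, exactly as the slicing
# fn[plen:-slen] does; a stand-alone restatement of that step.
fn = 'data/foo/bar.c.i'
prefix, suffix = 'data/', '.i'
if fn.startswith(prefix) and fn.endswith(suffix):
    assert fn[len(prefix):-len(suffix)] == 'foo/bar.c'
# ---------------------------------------------------------------------------
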
3531 3532 @command('^merge',
3532 3533 [('f', 'force', None,
3533 3534 _('force a merge including outstanding changes (DEPRECATED)')),
3534 3535 ('r', 'rev', '', _('revision to merge'), _('REV')),
3535 3536 ('P', 'preview', None,
3536 3537 _('review revisions to merge (no merge is performed)')),
3537 3538 ('', 'abort', None, _('abort the ongoing merge')),
3538 3539 ] + mergetoolopts,
3539 3540 _('[-P] [[-r] REV]'))
3540 3541 def merge(ui, repo, node=None, **opts):
3541 3542 """merge another revision into working directory
3542 3543
3543 3544 The current working directory is updated with all changes made in
3544 3545 the requested revision since the last common predecessor revision.
3545 3546
3546 3547 Files that changed between either parent are marked as changed for
3547 3548 the next commit and a commit must be performed before any further
3548 3549 updates to the repository are allowed. The next commit will have
3549 3550 two parents.
3550 3551
3551 3552 ``--tool`` can be used to specify the merge tool used for file
3552 3553 merges. It overrides the HGMERGE environment variable and your
3553 3554 configuration files. See :hg:`help merge-tools` for options.
3554 3555
3555 3556 If no revision is specified, the working directory's parent is a
3556 3557 head revision, and the current branch contains exactly one other
3557 3558 head, the other head is merged with by default. Otherwise, an
3558 3559 explicit revision with which to merge must be provided.
3559 3560
3560 3561 See :hg:`help resolve` for information on handling file conflicts.
3561 3562
3562 3563 To undo an uncommitted merge, use :hg:`merge --abort` which
3563 3564 will check out a clean copy of the original merge parent, losing
3564 3565 all changes.
3565 3566
3566 3567 Returns 0 on success, 1 if there are unresolved files.
3567 3568 """
3568 3569
3569 3570 opts = pycompat.byteskwargs(opts)
3570 3571 abort = opts.get('abort')
3571 3572 if abort and repo.dirstate.p2() == nullid:
3572 3573 cmdutil.wrongtooltocontinue(repo, _('merge'))
3573 3574 if abort:
3574 3575 if node:
3575 3576 raise error.Abort(_("cannot specify a node with --abort"))
3576 3577 if opts.get('rev'):
3577 3578 raise error.Abort(_("cannot specify both --rev and --abort"))
3578 3579 if opts.get('preview'):
3579 3580 raise error.Abort(_("cannot specify --preview with --abort"))
3580 3581 if opts.get('rev') and node:
3581 3582 raise error.Abort(_("please specify just one revision"))
3582 3583 if not node:
3583 3584 node = opts.get('rev')
3584 3585
3585 3586 if node:
3586 3587 node = scmutil.revsingle(repo, node).node()
3587 3588
3588 3589 if not node and not abort:
3589 3590 node = repo[destutil.destmerge(repo)].node()
3590 3591
3591 3592 if opts.get('preview'):
3592 3593 # find nodes that are ancestors of p2 but not of p1
3593 3594 p1 = repo.lookup('.')
3594 3595 p2 = repo.lookup(node)
3595 3596 nodes = repo.changelog.findmissing(common=[p1], heads=[p2])
3596 3597
3597 3598 displayer = logcmdutil.changesetdisplayer(ui, repo, opts)
3598 3599 for node in nodes:
3599 3600 displayer.show(repo[node])
3600 3601 displayer.close()
3601 3602 return 0
3602 3603
3603 3604 try:
3604 3605 # ui.forcemerge is an internal variable, do not document
3605 3606 repo.ui.setconfig('ui', 'forcemerge', opts.get('tool', ''), 'merge')
3606 3607 force = opts.get('force')
3607 3608 labels = ['working copy', 'merge rev']
3608 3609 return hg.merge(repo, node, force=force, mergeforce=force,
3609 3610 labels=labels, abort=abort)
3610 3611 finally:
3611 3612 ui.setconfig('ui', 'forcemerge', '', 'merge')
3612 3613
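# --- illustration (editor's sketch, not part of commands.py) ---------------
# The default-destination rule from the docstring above: with no revision,
# merge succeeds only when the working directory parent is a head and the
# current branch has exactly one other head.  A sketch over plain values;
# the real logic lives in destutil.destmerge().
def pick_merge_target(branchheads, wdparent):
    others = [h for h in branchheads if h != wdparent]
    if wdparent in branchheads and len(others) == 1:
        return others[0]
    raise ValueError('an explicit revision to merge with is required')

assert pick_merge_target(['a', 'b'], 'a') == 'b'
# ---------------------------------------------------------------------------
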
3613 3614 @command('outgoing|out',
3614 3615 [('f', 'force', None, _('run even when the destination is unrelated')),
3615 3616 ('r', 'rev', [],
3616 3617 _('a changeset intended to be included in the destination'), _('REV')),
3617 3618 ('n', 'newest-first', None, _('show newest record first')),
3618 3619 ('B', 'bookmarks', False, _('compare bookmarks')),
3619 3620 ('b', 'branch', [], _('a specific branch you would like to push'),
3620 3621 _('BRANCH')),
3621 3622 ] + logopts + remoteopts + subrepoopts,
3622 3623 _('[-M] [-p] [-n] [-f] [-r REV]... [DEST]'))
3623 3624 def outgoing(ui, repo, dest=None, **opts):
3624 3625 """show changesets not found in the destination
3625 3626
3626 3627 Show changesets not found in the specified destination repository
3627 3628 or the default push location. These are the changesets that would
3628 3629 be pushed if a push was requested.
3629 3630
3630 3631 See pull for details of valid destination formats.
3631 3632
3632 3633 .. container:: verbose
3633 3634
3634 3635 With -B/--bookmarks, the result of bookmark comparison between
3635 3636 local and remote repositories is displayed. With -v/--verbose,
3636 3637 status is also displayed for each bookmark, as shown below::
3637 3638
3638 3639 BM1 01234567890a added
3639 3640 BM2 deleted
3640 3641 BM3 234567890abc advanced
3641 3642 BM4 34567890abcd diverged
3642 3643 BM5 4567890abcde changed
3643 3644
3644 3645 The action taken when pushing depends on the
3645 3646 status of each bookmark:
3646 3647
3647 3648 :``added``: push with ``-B`` will create it
3648 3649 :``deleted``: push with ``-B`` will delete it
3649 3650 :``advanced``: push will update it
3650 3651 :``diverged``: push with ``-B`` will update it
3651 3652 :``changed``: push with ``-B`` will update it
3652 3653
3653 3654 From the point of view of pushing behavior, bookmarks
3654 3655 existing only in the remote repository are treated as
3655 3656 ``deleted``, even if they are in fact added remotely.
3656 3657
3657 3658 Returns 0 if there are outgoing changes, 1 otherwise.
3658 3659 """
3659 3660 opts = pycompat.byteskwargs(opts)
3660 3661 if opts.get('graph'):
3661 3662 logcmdutil.checkunsupportedgraphflags([], opts)
3662 3663 o, other = hg._outgoing(ui, repo, dest, opts)
3663 3664 if not o:
3664 3665 cmdutil.outgoinghooks(ui, repo, other, opts, o)
3665 3666 return
3666 3667
3667 3668 revdag = logcmdutil.graphrevs(repo, o, opts)
3668 3669 ui.pager('outgoing')
3669 3670 displayer = logcmdutil.changesetdisplayer(ui, repo, opts, buffered=True)
3670 3671 logcmdutil.displaygraph(ui, repo, revdag, displayer,
3671 3672 graphmod.asciiedges)
3672 3673 cmdutil.outgoinghooks(ui, repo, other, opts, o)
3673 3674 return 0
3674 3675
3675 3676 if opts.get('bookmarks'):
3676 3677 dest = ui.expandpath(dest or 'default-push', dest or 'default')
3677 3678 dest, branches = hg.parseurl(dest, opts.get('branch'))
3678 3679 other = hg.peer(repo, opts, dest)
3679 3680 if 'bookmarks' not in other.listkeys('namespaces'):
3680 3681 ui.warn(_("remote doesn't support bookmarks\n"))
3681 3682 return 0
3682 3683 ui.status(_('comparing with %s\n') % util.hidepassword(dest))
3683 3684 ui.pager('outgoing')
3684 3685 return bookmarks.outgoing(ui, repo, other)
3685 3686
3686 3687 repo._subtoppath = ui.expandpath(dest or 'default-push', dest or 'default')
3687 3688 try:
3688 3689 return hg.outgoing(ui, repo, dest, opts)
3689 3690 finally:
3690 3691 del repo._subtoppath
3691 3692
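# --- illustration (editor's sketch, not part of commands.py) ---------------
# The bookmark actions listed in the docstring above, as a simple lookup
# from comparison status to what push does; entries marked '-B' require
# push -B.
push_action = {
    'added':    'create (-B)',
    'deleted':  'delete (-B)',
    'advanced': 'update',
    'diverged': 'update (-B)',
    'changed':  'update (-B)',
}
# ---------------------------------------------------------------------------
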
3692 3693 @command('parents',
3693 3694 [('r', 'rev', '', _('show parents of the specified revision'), _('REV')),
3694 3695 ] + templateopts,
3695 3696 _('[-r REV] [FILE]'),
3696 3697 inferrepo=True)
3697 3698 def parents(ui, repo, file_=None, **opts):
3698 3699 """show the parents of the working directory or revision (DEPRECATED)
3699 3700
3700 3701 Print the working directory's parent revisions. If a revision is
3701 3702 given via -r/--rev, the parent of that revision will be printed.
3702 3703 If a file argument is given, the revision in which the file was
3703 3704 last changed (before the working directory revision or the
3704 3705 argument to --rev if given) is printed.
3705 3706
3706 3707 This command is equivalent to::
3707 3708
3708 3709 hg log -r "p1()+p2()" or
3709 3710 hg log -r "p1(REV)+p2(REV)" or
3710 3711 hg log -r "max(::p1() and file(FILE))+max(::p2() and file(FILE))" or
3711 3712 hg log -r "max(::p1(REV) and file(FILE))+max(::p2(REV) and file(FILE))"
3712 3713
3713 3714 See :hg:`summary` and :hg:`help revsets` for related information.
3714 3715
3715 3716 Returns 0 on success.
3716 3717 """
3717 3718
3718 3719 opts = pycompat.byteskwargs(opts)
3719 3720 rev = opts.get('rev')
3720 3721 if rev:
3721 3722 repo = scmutil.unhidehashlikerevs(repo, [rev], 'nowarn')
3722 3723 ctx = scmutil.revsingle(repo, rev, None)
3723 3724
3724 3725 if file_:
3725 3726 m = scmutil.match(ctx, (file_,), opts)
3726 3727 if m.anypats() or len(m.files()) != 1:
3727 3728 raise error.Abort(_('can only specify an explicit filename'))
3728 3729 file_ = m.files()[0]
3729 3730 filenodes = []
3730 3731 for cp in ctx.parents():
3731 3732 if not cp:
3732 3733 continue
3733 3734 try:
3734 3735 filenodes.append(cp.filenode(file_))
3735 3736 except error.LookupError:
3736 3737 pass
3737 3738 if not filenodes:
3738 3739 raise error.Abort(_("'%s' not found in manifest!") % file_)
3739 3740 p = []
3740 3741 for fn in filenodes:
3741 3742 fctx = repo.filectx(file_, fileid=fn)
3742 3743 p.append(fctx.node())
3743 3744 else:
3744 3745 p = [cp.node() for cp in ctx.parents()]
3745 3746
3746 3747 displayer = logcmdutil.changesetdisplayer(ui, repo, opts)
3747 3748 for n in p:
3748 3749 if n != nullid:
3749 3750 displayer.show(repo[n])
3750 3751 displayer.close()
3751 3752
3752 3753 @command('paths', formatteropts, _('[NAME]'), optionalrepo=True,
3753 3754 cmdtype=readonly)
3754 3755 def paths(ui, repo, search=None, **opts):
3755 3756 """show aliases for remote repositories
3756 3757
3757 3758 Show definition of symbolic path name NAME. If no name is given,
3758 3759 show definition of all available names.
3759 3760
3760 3761 Option -q/--quiet suppresses all output when searching for NAME
3761 3762 and shows only the path names when listing all definitions.
3762 3763
3763 3764 Path names are defined in the [paths] section of your
3764 3765 configuration file and in ``/etc/mercurial/hgrc``. If run inside a
3765 3766 repository, ``.hg/hgrc`` is used, too.
3766 3767
3767 3768 The path names ``default`` and ``default-push`` have a special
3768 3769 meaning. When performing a push or pull operation, they are used
3769 3770 as fallbacks if no location is specified on the command-line.
3770 3771 When ``default-push`` is set, it will be used for push and
3771 3772 ``default`` will be used for pull; otherwise ``default`` is used
3772 3773 as the fallback for both. When cloning a repository, the clone
3773 3774 source is written as ``default`` in ``.hg/hgrc``.
3774 3775
3775 3776 .. note::
3776 3777
3777 3778 ``default`` and ``default-push`` apply to all inbound (e.g.
3778 3779 :hg:`incoming`) and outbound (e.g. :hg:`outgoing`, :hg:`email`
3779 3780 and :hg:`bundle`) operations.
3780 3781
3781 3782 See :hg:`help urls` for more information.
3782 3783
3783 3784 Returns 0 on success.
3784 3785 """
3785 3786
3786 3787 opts = pycompat.byteskwargs(opts)
3787 3788 ui.pager('paths')
3788 3789 if search:
3789 3790 pathitems = [(name, path) for name, path in ui.paths.iteritems()
3790 3791 if name == search]
3791 3792 else:
3792 3793 pathitems = sorted(ui.paths.iteritems())
3793 3794
3794 3795 fm = ui.formatter('paths', opts)
3795 3796 if fm.isplain():
3796 3797 hidepassword = util.hidepassword
3797 3798 else:
3798 3799 hidepassword = bytes
3799 3800 if ui.quiet:
3800 3801 namefmt = '%s\n'
3801 3802 else:
3802 3803 namefmt = '%s = '
3803 3804 showsubopts = not search and not ui.quiet
3804 3805
3805 3806 for name, path in pathitems:
3806 3807 fm.startitem()
3807 3808 fm.condwrite(not search, 'name', namefmt, name)
3808 3809 fm.condwrite(not ui.quiet, 'url', '%s\n', hidepassword(path.rawloc))
3809 3810 for subopt, value in sorted(path.suboptions.items()):
3810 3811 assert subopt not in ('name', 'url')
3811 3812 if showsubopts:
3812 3813 fm.plain('%s:%s = ' % (name, subopt))
3813 3814 fm.condwrite(showsubopts, subopt, '%s\n', value)
3814 3815
3815 3816 fm.end()
3816 3817
3817 3818 if search and not pathitems:
3818 3819 if not ui.quiet:
3819 3820 ui.warn(_("not found!\n"))
3820 3821 return 1
3821 3822 else:
3822 3823 return 0
3823 3824
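# --- illustration (editor's sketch, not part of commands.py) ---------------
# The fallback rule from the docstring above: push prefers 'default-push',
# pull uses 'default', and 'default' serves both when 'default-push' is
# absent.  A sketch over a plain dict standing in for the [paths] section.
def lookup_path(paths, pushing=False):
    if pushing and 'default-push' in paths:
        return paths['default-push']
    return paths.get('default')

assert lookup_path({'default': 'https://host/repo'}, pushing=True) == 'https://host/repo'
# ---------------------------------------------------------------------------
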
3824 3825 @command('phase',
3825 3826 [('p', 'public', False, _('set changeset phase to public')),
3826 3827 ('d', 'draft', False, _('set changeset phase to draft')),
3827 3828 ('s', 'secret', False, _('set changeset phase to secret')),
3828 3829 ('f', 'force', False, _('allow to move boundary backward')),
3829 3830 ('r', 'rev', [], _('target revision'), _('REV')),
3830 3831 ],
3831 3832 _('[-p|-d|-s] [-f] [-r] [REV...]'))
3832 3833 def phase(ui, repo, *revs, **opts):
3833 3834 """set or show the current phase name
3834 3835
3835 3836 With no argument, show the phase name of the current revision(s).
3836 3837
3837 3838 With one of -p/--public, -d/--draft or -s/--secret, change the
3838 3839 phase value of the specified revisions.
3839 3840
3840 3841 Unless -f/--force is specified, :hg:`phase` won't move changesets from a
3841 3842 lower phase to a higher phase. Phases are ordered as follows::
3842 3843
3843 3844 public < draft < secret
3844 3845
3845 3846 Returns 0 on success, 1 if some phases could not be changed.
3846 3847
3847 3848 (For more information about the phases concept, see :hg:`help phases`.)
3848 3849 """
3849 3850 opts = pycompat.byteskwargs(opts)
3850 3851 # search for a unique phase argument
3851 3852 targetphase = None
3852 3853 for idx, name in enumerate(phases.phasenames):
3853 3854 if opts[name]:
3854 3855 if targetphase is not None:
3855 3856 raise error.Abort(_('only one phase can be specified'))
3856 3857 targetphase = idx
3857 3858
3858 3859 # look for specified revision
3859 3860 revs = list(revs)
3860 3861 revs.extend(opts['rev'])
3861 3862 if not revs:
3862 3863 # display both parents as the second parent phase can influence
3863 3864 # the phase of a merge commit
3864 3865 revs = [c.rev() for c in repo[None].parents()]
3865 3866
3866 3867 revs = scmutil.revrange(repo, revs)
3867 3868
3868 3869 ret = 0
3869 3870 if targetphase is None:
3870 3871 # display
3871 3872 for r in revs:
3872 3873 ctx = repo[r]
3873 3874 ui.write('%i: %s\n' % (ctx.rev(), ctx.phasestr()))
3874 3875 else:
3875 3876 with repo.lock(), repo.transaction("phase") as tr:
3876 3877 # set phase
3877 3878 if not revs:
3878 3879 raise error.Abort(_('empty revision set'))
3879 3880 nodes = [repo[r].node() for r in revs]
3880 3881 # moving revisions from public to draft may hide them;
3881 3882 # we have to check the result on an unfiltered repository
3882 3883 unfi = repo.unfiltered()
3883 3884 getphase = unfi._phasecache.phase
3884 3885 olddata = [getphase(unfi, r) for r in unfi]
3885 3886 phases.advanceboundary(repo, tr, targetphase, nodes)
3886 3887 if opts['force']:
3887 3888 phases.retractboundary(repo, tr, targetphase, nodes)
3888 3889 getphase = unfi._phasecache.phase
3889 3890 newdata = [getphase(unfi, r) for r in unfi]
3890 3891 changes = sum(newdata[r] != olddata[r] for r in unfi)
3891 3892 cl = unfi.changelog
3892 3893 rejected = [n for n in nodes
3893 3894 if newdata[cl.rev(n)] < targetphase]
3894 3895 if rejected:
3895 3896 ui.warn(_('cannot move %i changesets to a higher '
3896 3897 'phase, use --force\n') % len(rejected))
3897 3898 ret = 1
3898 3899 if changes:
3899 3900 msg = _('phase changed for %i changesets\n') % changes
3900 3901 if ret:
3901 3902 ui.status(msg)
3902 3903 else:
3903 3904 ui.note(msg)
3904 3905 else:
3905 3906 ui.warn(_('no phases changed\n'))
3906 3907 return ret
3907 3908
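# --- illustration (editor's sketch, not part of commands.py) ---------------
# The ordering enforced above, restated: phases are indices into
# phases.phasenames (public=0 < draft=1 < secret=2) and, without --force,
# a changeset may only move to a lower index (towards public).
order = ['public', 'draft', 'secret']

def move_allowed(old, new, force=False):
    return force or order.index(new) <= order.index(old)

assert move_allowed('draft', 'public')
assert not move_allowed('draft', 'secret')
# ---------------------------------------------------------------------------
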
3908 3909 def postincoming(ui, repo, modheads, optupdate, checkout, brev):
3909 3910 """Run after a changegroup has been added via pull/unbundle
3910 3911
3911 3912 This takes the arguments below:
3912 3913
3913 3914 :modheads: change of heads by pull/unbundle
3914 3915 :optupdate: whether the working directory should be updated
3915 3916 :checkout: update destination revision (or None to default destination)
3916 3917 :brev: a name, which might be a bookmark to be activated after updating
3917 3918 """
3918 3919 if modheads == 0:
3919 3920 return
3920 3921 if optupdate:
3921 3922 try:
3922 3923 return hg.updatetotally(ui, repo, checkout, brev)
3923 3924 except error.UpdateAbort as inst:
3924 3925 msg = _("not updating: %s") % stringutil.forcebytestr(inst)
3925 3926 hint = inst.hint
3926 3927 raise error.UpdateAbort(msg, hint=hint)
3927 3928 if modheads > 1:
3928 3929 currentbranchheads = len(repo.branchheads())
3929 3930 if currentbranchheads == modheads:
3930 3931 ui.status(_("(run 'hg heads' to see heads, 'hg merge' to merge)\n"))
3931 3932 elif currentbranchheads > 1:
3932 3933 ui.status(_("(run 'hg heads .' to see heads, 'hg merge' to "
3933 3934 "merge)\n"))
3934 3935 else:
3935 3936 ui.status(_("(run 'hg heads' to see heads)\n"))
3936 3937 elif not ui.configbool('commands', 'update.requiredest'):
3937 3938 ui.status(_("(run 'hg update' to get a working copy)\n"))
3938 3939
3939 3940 @command('^pull',
3940 3941 [('u', 'update', None,
3941 3942 _('update to new branch head if new descendants were pulled')),
3942 3943 ('f', 'force', None, _('run even when remote repository is unrelated')),
3943 3944 ('r', 'rev', [], _('a remote changeset intended to be added'), _('REV')),
3944 3945 ('B', 'bookmark', [], _("bookmark to pull"), _('BOOKMARK')),
3945 3946 ('b', 'branch', [], _('a specific branch you would like to pull'),
3946 3947 _('BRANCH')),
3947 3948 ] + remoteopts,
3948 3949 _('[-u] [-f] [-r REV]... [-e CMD] [--remotecmd CMD] [SOURCE]'))
3949 3950 def pull(ui, repo, source="default", **opts):
3950 3951 """pull changes from the specified source
3951 3952
3952 3953 Pull changes from a remote repository to a local one.
3953 3954
3954 3955 This finds all changes from the repository at the specified path
3955 3956 or URL and adds them to a local repository (the current one unless
3956 3957 -R is specified). By default, this does not update the copy of the
3957 3958 project in the working directory.
3958 3959
3959 3960 Use :hg:`incoming` if you want to see what would have been added
3960 3961 by a pull at the time you issued this command. If you then decide
3961 3962 to add those changes to the repository, you should use :hg:`pull
3962 3963 -r X` where ``X`` is the last changeset listed by :hg:`incoming`.
3963 3964
3964 3965 If SOURCE is omitted, the 'default' path will be used.
3965 3966 See :hg:`help urls` for more information.
3966 3967
3967 3968 Specifying a bookmark as ``.`` is equivalent to specifying the active
3968 3969 bookmark's name.
3969 3970
3970 3971 Returns 0 on success, 1 if an update had unresolved files.
3971 3972 """
3972 3973
3973 3974 opts = pycompat.byteskwargs(opts)
3974 3975 if ui.configbool('commands', 'update.requiredest') and opts.get('update'):
3975 3976 msg = _('update destination required by configuration')
3976 3977 hint = _('use hg pull followed by hg update DEST')
3977 3978 raise error.Abort(msg, hint=hint)
3978 3979
3979 3980 source, branches = hg.parseurl(ui.expandpath(source), opts.get('branch'))
3980 3981 ui.status(_('pulling from %s\n') % util.hidepassword(source))
3981 3982 other = hg.peer(repo, opts, source)
3982 3983 try:
3983 3984 revs, checkout = hg.addbranchrevs(repo, other, branches,
3984 3985 opts.get('rev'))
3985 3986
3986 3987
3987 3988 pullopargs = {}
3988 3989 if opts.get('bookmark'):
3989 3990 if not revs:
3990 3991 revs = []
3991 3992 # The list of bookmarks used here is not the one used to actually
3992 3993 # update the bookmark name. This can result in the revision pulled
3993 3994 # not ending up with the name of the bookmark because of a race
3994 3995 # condition on the server. (See issue 4689 for details)
3995 3996 remotebookmarks = other.listkeys('bookmarks')
3996 3997 remotebookmarks = bookmarks.unhexlifybookmarks(remotebookmarks)
3997 3998 pullopargs['remotebookmarks'] = remotebookmarks
3998 3999 for b in opts['bookmark']:
3999 4000 b = repo._bookmarks.expandname(b)
4000 4001 if b not in remotebookmarks:
4001 4002 raise error.Abort(_('remote bookmark %s not found!') % b)
4002 4003 revs.append(hex(remotebookmarks[b]))
4003 4004
4004 4005 if revs:
4005 4006 try:
4006 4007 # When 'rev' is a bookmark name, we cannot guarantee that it
4007 4008 # will be updated with that name because of a race condition
4008 4009 # server side. (See issue 4689 for details)
4009 4010 oldrevs = revs
4010 4011 revs = [] # actually, nodes
4011 4012 for r in oldrevs:
4012 4013 node = other.lookup(r)
4013 4014 revs.append(node)
4014 4015 if r == checkout:
4015 4016 checkout = node
4016 4017 except error.CapabilityError:
4017 4018 err = _("other repository doesn't support revision lookup, "
4018 4019 "so a rev cannot be specified.")
4019 4020 raise error.Abort(err)
4020 4021
4021 4022 wlock = util.nullcontextmanager()
4022 4023 if opts.get('update'):
4023 4024 wlock = repo.wlock()
4024 4025 with wlock:
4025 4026 pullopargs.update(opts.get('opargs', {}))
4026 4027 modheads = exchange.pull(repo, other, heads=revs,
4027 4028 force=opts.get('force'),
4028 4029 bookmarks=opts.get('bookmark', ()),
4029 4030 opargs=pullopargs).cgresult
4030 4031
4031 4032 # brev is a name, which might be a bookmark to be activated at
4032 4033 # the end of the update. In other words, it is an explicit
4033 4034 # destination of the update
4034 4035 brev = None
4035 4036
4036 4037 if checkout:
4037 4038 checkout = "%d" % repo.changelog.rev(checkout)
4038 4039
4039 4040 # order below depends on implementation of
4040 4041 # hg.addbranchrevs(). opts['bookmark'] is ignored,
4041 4042 # because 'checkout' is determined without it.
4042 4043 if opts.get('rev'):
4043 4044 brev = opts['rev'][0]
4044 4045 elif opts.get('branch'):
4045 4046 brev = opts['branch'][0]
4046 4047 else:
4047 4048 brev = branches[0]
4048 4049 repo._subtoppath = source
4049 4050 try:
4050 4051 ret = postincoming(ui, repo, modheads, opts.get('update'),
4051 4052 checkout, brev)
4052 4053
4053 4054 finally:
4054 4055 del repo._subtoppath
4055 4056
4056 4057 finally:
4057 4058 other.close()
4058 4059 return ret
4059 4060
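# --- illustration (editor's sketch, not part of commands.py) ---------------
# The precedence used above when choosing the update destination name
# (brev): an explicit --rev wins, then --branch, then a branch parsed from
# the source URL.  A stand-alone restatement of that chain; argument names
# are illustrative only.
def pick_brev(revopts, branchopts, urlbranches):
    if revopts:
        return revopts[0]
    if branchopts:
        return branchopts[0]
    return urlbranches[0] if urlbranches else None

assert pick_brev(['tip'], ['stable'], []) == 'tip'
# ---------------------------------------------------------------------------
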
4060 4061 @command('^push',
4061 4062 [('f', 'force', None, _('force push')),
4062 4063 ('r', 'rev', [],
4063 4064 _('a changeset intended to be included in the destination'),
4064 4065 _('REV')),
4065 4066 ('B', 'bookmark', [], _("bookmark to push"), _('BOOKMARK')),
4066 4067 ('b', 'branch', [],
4067 4068 _('a specific branch you would like to push'), _('BRANCH')),
4068 4069 ('', 'new-branch', False, _('allow pushing a new branch')),
4069 4070 ('', 'pushvars', [], _('variables that can be sent to server (ADVANCED)')),
4070 4071 ] + remoteopts,
4071 4072 _('[-f] [-r REV]... [-e CMD] [--remotecmd CMD] [DEST]'))
4072 4073 def push(ui, repo, dest=None, **opts):
4073 4074 """push changes to the specified destination
4074 4075
4075 4076 Push changesets from the local repository to the specified
4076 4077 destination.
4077 4078
4078 4079 This operation is symmetrical to pull: it is identical to a pull
4079 4080 in the destination repository from the current one.
4080 4081
4081 4082 By default, push will not allow creation of new heads at the
4082 4083 destination, since multiple heads would make it unclear which head
4083 4084 to use. In this situation, it is recommended to pull and merge
4084 4085 before pushing.
4085 4086
4086 4087 Use --new-branch if you want to allow push to create a new named
4087 4088 branch that is not present at the destination. This allows you to
4088 4089 only create a new branch without forcing other changes.
4089 4090
4090 4091 .. note::
4091 4092
4092 4093 Extra care should be taken with the -f/--force option,
4093 4094 which will push all new heads on all branches, an action which will
4094 4095 almost always cause confusion for collaborators.
4095 4096
4096 4097 If -r/--rev is used, the specified revision and all its ancestors
4097 4098 will be pushed to the remote repository.
4098 4099
4099 4100 If -B/--bookmark is used, the specified bookmarked revision, its
4100 4101 ancestors, and the bookmark will be pushed to the remote
4101 4102 repository. Specifying ``.`` is equivalent to specifying the active
4102 4103 bookmark's name.
4103 4104
4104 4105 Please see :hg:`help urls` for important details about ``ssh://``
4105 4106 URLs. If DESTINATION is omitted, a default path will be used.
4106 4107
4107 4108 .. container:: verbose
4108 4109
4109 4110 The --pushvars option sends strings to the server that become
4110 4111 environment variables prepended with ``HG_USERVAR_``. For example,
4111 4112 ``--pushvars ENABLE_FEATURE=true`` provides the server-side hooks with
4112 4113 ``HG_USERVAR_ENABLE_FEATURE=true`` as part of their environment.
4113 4114
4114 4115 pushvars can be used to implement user-overridable hooks as well as to
4115 4116 set debug levels. One example is a hook that blocks commits containing
4116 4117 conflict markers but lets the user override it when a file uses
4117 4118 conflict markers for testing purposes or its format legitimately
4118 4119 contains strings that look like conflict markers.
4119 4120
4120 4121 By default, servers will ignore `--pushvars`. To enable it add the
4121 4122 following to your configuration file::
4122 4123
4123 4124 [push]
4124 4125 pushvars.server = true
4125 4126
4126 4127 Returns 0 if push was successful, 1 if nothing to push.
4127 4128 """
4128 4129
4129 4130 opts = pycompat.byteskwargs(opts)
4130 4131 if opts.get('bookmark'):
4131 4132 ui.setconfig('bookmarks', 'pushing', opts['bookmark'], 'push')
4132 4133 for b in opts['bookmark']:
4133 4134 # translate -B options to -r so changesets get pushed
4134 4135 b = repo._bookmarks.expandname(b)
4135 4136 if b in repo._bookmarks:
4136 4137 opts.setdefault('rev', []).append(b)
4137 4138 else:
4138 4139 # if we try to push a deleted bookmark, translate it to null
4139 4140 # this lets simultaneous -r, -b options continue working
4140 4141 opts.setdefault('rev', []).append("null")
4141 4142
4142 4143 path = ui.paths.getpath(dest, default=('default-push', 'default'))
4143 4144 if not path:
4144 4145 raise error.Abort(_('default repository not configured!'),
4145 4146 hint=_("see 'hg help config.paths'"))
4146 4147 dest = path.pushloc or path.loc
4147 4148 branches = (path.branch, opts.get('branch') or [])
4148 4149 ui.status(_('pushing to %s\n') % util.hidepassword(dest))
4149 4150 revs, checkout = hg.addbranchrevs(repo, repo, branches, opts.get('rev'))
4150 4151 other = hg.peer(repo, opts, dest)
4151 4152
4152 4153 if revs:
4153 4154 revs = [repo.lookup(r) for r in scmutil.revrange(repo, revs)]
4154 4155 if not revs:
4155 4156 raise error.Abort(_("specified revisions evaluate to an empty set"),
4156 4157 hint=_("use different revision arguments"))
4157 4158 elif path.pushrev:
4158 4159 # It doesn't make any sense to specify ancestor revisions. So limit
4159 4160 # to DAG heads to make discovery simpler.
4160 4161 expr = revsetlang.formatspec('heads(%r)', path.pushrev)
4161 4162 revs = scmutil.revrange(repo, [expr])
4162 4163 revs = [repo[rev].node() for rev in revs]
4163 4164 if not revs:
4164 4165 raise error.Abort(_('default push revset for path evaluates to an '
4165 4166 'empty set'))
4166 4167
4167 4168 repo._subtoppath = dest
4168 4169 try:
4169 4170 # push subrepos depth-first for coherent ordering
4170 4171 c = repo['.']
4171 4172 subs = c.substate # only repos that are committed
4172 4173 for s in sorted(subs):
4173 4174 result = c.sub(s).push(opts)
4174 4175 if result == 0:
4175 4176 return not result
4176 4177 finally:
4177 4178 del repo._subtoppath
4178 4179
4179 4180 opargs = dict(opts.get('opargs', {})) # copy opargs since we may mutate it
4180 4181 opargs.setdefault('pushvars', []).extend(opts.get('pushvars', []))
4181 4182
4182 4183 pushop = exchange.push(repo, other, opts.get('force'), revs=revs,
4183 4184 newbranch=opts.get('new_branch'),
4184 4185 bookmarks=opts.get('bookmark', ()),
4185 4186 opargs=opargs)
4186 4187
4187 4188 result = not pushop.cgresult
4188 4189
4189 4190 if pushop.bkresult is not None:
4190 4191 if pushop.bkresult == 2:
4191 4192 result = 2
4192 4193 elif not result and pushop.bkresult:
4193 4194 result = 2
4194 4195
4195 4196 return result
4196 4197
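# --- illustration (editor's sketch, not part of commands.py) ---------------
# How a server-side shell hook sees --pushvars once push.pushvars.server is
# enabled: each VAR=VALUE arrives in the hook's environment with the
# HG_USERVAR_ prefix.  A hypothetical stand-alone hook script; the variable
# name ENABLE_FEATURE is illustrative only.
import os, sys

def main():
    if os.environ.get('HG_USERVAR_ENABLE_FEATURE') == 'true':
        sys.exit(0)  # user explicitly asked to skip the check
    # ... reject the push here if it contains conflict markers ...
    sys.exit(0)

if __name__ == '__main__':
    main()
# ---------------------------------------------------------------------------
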
4197 4198 @command('recover', [])
4198 4199 def recover(ui, repo):
4199 4200 """roll back an interrupted transaction
4200 4201
4201 4202 Recover from an interrupted commit or pull.
4202 4203
4203 4204 This command tries to fix the repository status after an
4204 4205 interrupted operation. It should only be necessary when Mercurial
4205 4206 suggests it.
4206 4207
4207 4208 Returns 0 if successful, 1 if nothing to recover or verify fails.
4208 4209 """
4209 4210 if repo.recover():
4210 4211 return hg.verify(repo)
4211 4212 return 1
4212 4213
4213 4214 @command('^remove|rm',
4214 4215 [('A', 'after', None, _('record delete for missing files')),
4215 4216 ('f', 'force', None,
4216 4217 _('forget added files, delete modified files')),
4217 4218 ] + subrepoopts + walkopts + dryrunopts,
4218 4219 _('[OPTION]... FILE...'),
4219 4220 inferrepo=True)
4220 4221 def remove(ui, repo, *pats, **opts):
4221 4222 """remove the specified files on the next commit
4222 4223
4223 4224 Schedule the indicated files for removal from the current branch.
4224 4225
4225 4226 This command schedules the files to be removed at the next commit.
4226 4227 To undo a remove before that, see :hg:`revert`. To undo added
4227 4228 files, see :hg:`forget`.
4228 4229
4229 4230 .. container:: verbose
4230 4231
4231 4232 -A/--after can be used to remove only files that have already
4232 4233 been deleted, -f/--force can be used to force deletion, and -Af
4233 4234 can be used to remove files from the next revision without
4234 4235 deleting them from the working directory.
4235 4236
4236 4237 The following table details the behavior of remove for different
4237 4238 file states (columns) and option combinations (rows). The file
4238 4239 states are Added [A], Clean [C], Modified [M] and Missing [!]
4239 4240 (as reported by :hg:`status`). The actions are Warn, Remove
4240 4241 (from branch) and Delete (from disk):
4241 4242
4242 4243 ========= == == == ==
4243 4244 opt/state A C M !
4244 4245 ========= == == == ==
4245 4246 none W RD W R
4246 4247 -f R RD RD R
4247 4248 -A W W W R
4248 4249 -Af R R R R
4249 4250 ========= == == == ==
4250 4251
4251 4252 .. note::
4252 4253
4253 4254 :hg:`remove` never deletes files in Added [A] state from the
4254 4255 working directory, not even if ``--force`` is specified.
4255 4256
4256 4257 Returns 0 on success, 1 if any warnings encountered.
4257 4258 """
4258 4259
4259 4260 opts = pycompat.byteskwargs(opts)
4260 4261 after, force = opts.get('after'), opts.get('force')
4261 4262 dryrun = opts.get('dry_run')
4262 4263 if not pats and not after:
4263 4264 raise error.Abort(_('no files specified'))
4264 4265
4265 4266 m = scmutil.match(repo[None], pats, opts)
4266 4267 subrepos = opts.get('subrepos')
4267 4268 return cmdutil.remove(ui, repo, m, "", after, force, subrepos,
4268 4269 dryrun=dryrun)
4269 4270
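# --- illustration (editor's sketch, not part of commands.py) ---------------
# The behavior table from the docstring above as a lookup: option rows map
# file states (A, C, M, !) to the documented actions, where W = warn,
# R = remove from branch, D = delete from disk.
actions = {
    'none': {'A': 'W', 'C': 'RD', 'M': 'W',  '!': 'R'},
    '-f':   {'A': 'R', 'C': 'RD', 'M': 'RD', '!': 'R'},
    '-A':   {'A': 'W', 'C': 'W',  'M': 'W',  '!': 'R'},
    '-Af':  {'A': 'R', 'C': 'R',  'M': 'R',  '!': 'R'},
}
assert actions['-Af']['C'] == 'R'   # -Af never deletes from disk
# ---------------------------------------------------------------------------
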
4270 4271 @command('rename|move|mv',
4271 4272 [('A', 'after', None, _('record a rename that has already occurred')),
4272 4273 ('f', 'force', None, _('forcibly copy over an existing managed file')),
4273 4274 ] + walkopts + dryrunopts,
4274 4275 _('[OPTION]... SOURCE... DEST'))
4275 4276 def rename(ui, repo, *pats, **opts):
4276 4277 """rename files; equivalent of copy + remove
4277 4278
4278 4279 Mark dest as copies of sources; mark sources for deletion. If dest
4279 4280 is a directory, copies are put in that directory. If dest is a
4280 4281 file, there can only be one source.
4281 4282
4282 4283 By default, this command copies the contents of files as they
4283 4284 exist in the working directory. If invoked with -A/--after, the
4284 4285 operation is recorded, but no copying is performed.
4285 4286
4286 4287 This command takes effect at the next commit. To undo a rename
4287 4288 before that, see :hg:`revert`.
4288 4289
4289 4290 Returns 0 on success, 1 if errors are encountered.
4290 4291 """
4291 4292 opts = pycompat.byteskwargs(opts)
4292 4293 with repo.wlock(False):
4293 4294 return cmdutil.copy(ui, repo, pats, opts, rename=True)
4294 4295
4295 4296 @command('resolve',
4296 4297 [('a', 'all', None, _('select all unresolved files')),
4297 4298 ('l', 'list', None, _('list state of files needing merge')),
4298 4299 ('m', 'mark', None, _('mark files as resolved')),
4299 4300 ('u', 'unmark', None, _('mark files as unresolved')),
4300 4301 ('n', 'no-status', None, _('hide status prefix'))]
4301 4302 + mergetoolopts + walkopts + formatteropts,
4302 4303 _('[OPTION]... [FILE]...'),
4303 4304 inferrepo=True)
4304 4305 def resolve(ui, repo, *pats, **opts):
4305 4306 """redo merges or set/view the merge status of files
4306 4307
4307 4308 Merges with unresolved conflicts are often the result of
4308 4309 non-interactive merging using the ``internal:merge`` configuration
4309 4310 setting, or a command-line merge tool like ``diff3``. The resolve
4310 4311 command is used to manage the files involved in a merge, after
4311 4312 :hg:`merge` has been run, and before :hg:`commit` is run (i.e. the
4312 4313 working directory must have two parents). See :hg:`help
4313 4314 merge-tools` for information on configuring merge tools.
4314 4315
4315 4316 The resolve command can be used in the following ways:
4316 4317
4317 4318 - :hg:`resolve [--tool TOOL] FILE...`: attempt to re-merge the specified
4318 4319 files, discarding any previous merge attempts. Re-merging is not
4319 4320 performed for files already marked as resolved. Use ``--all/-a``
4320 4321 to select all unresolved files. ``--tool`` can be used to specify
4321 4322 the merge tool used for the given files. It overrides the HGMERGE
4322 4323 environment variable and your configuration files. Previous file
4323 4324 contents are saved with a ``.orig`` suffix.
4324 4325
4325 4326 - :hg:`resolve -m [FILE]`: mark a file as having been resolved
4326 4327 (e.g. after having manually fixed up the files). The default is
4327 4328 to mark all unresolved files.
4328 4329
4329 4330 - :hg:`resolve -u [FILE]...`: mark a file as unresolved. The
4330 4331 default is to mark all resolved files.
4331 4332
4332 4333 - :hg:`resolve -l`: list files which had or still have conflicts.
4333 4334 In the printed list, ``U`` = unresolved and ``R`` = resolved.
4334 4335 You can use ``set:unresolved()`` or ``set:resolved()`` to filter
4335 4336 the list. See :hg:`help filesets` for details.
4336 4337
4337 4338 .. note::
4338 4339
4339 4340 Mercurial will not let you commit files with unresolved merge
4340 4341 conflicts. You must use :hg:`resolve -m ...` before you can
4341 4342 commit after a conflicting merge.
4342 4343
4343 4344 Returns 0 on success, 1 if any files fail a resolve attempt.
4344 4345 """
4345 4346
4346 4347 opts = pycompat.byteskwargs(opts)
4347 4348 flaglist = 'all mark unmark list no_status'.split()
4348 4349 all, mark, unmark, show, nostatus = \
4349 4350 [opts.get(o) for o in flaglist]
4350 4351
4351 4352 if (show and (mark or unmark)) or (mark and unmark):
4352 4353 raise error.Abort(_("too many options specified"))
4353 4354 if pats and all:
4354 4355 raise error.Abort(_("can't specify --all and patterns"))
4355 4356 if not (all or pats or show or mark or unmark):
4356 4357 raise error.Abort(_('no files or directories specified'),
4357 4358 hint=('use --all to re-merge all unresolved files'))
4358 4359
4359 4360 if show:
4360 4361 ui.pager('resolve')
4361 4362 fm = ui.formatter('resolve', opts)
4362 4363 ms = mergemod.mergestate.read(repo)
4363 4364 m = scmutil.match(repo[None], pats, opts)
4364 4365
4365 4366 # Labels and keys based on merge state. Unresolved path conflicts show
4366 4367 # as 'P'. Resolved path conflicts show as 'R', the same as normal
4367 4368 # resolved conflicts.
4368 4369 mergestateinfo = {
4369 4370 mergemod.MERGE_RECORD_UNRESOLVED: ('resolve.unresolved', 'U'),
4370 4371 mergemod.MERGE_RECORD_RESOLVED: ('resolve.resolved', 'R'),
4371 4372 mergemod.MERGE_RECORD_UNRESOLVED_PATH: ('resolve.unresolved', 'P'),
4372 4373 mergemod.MERGE_RECORD_RESOLVED_PATH: ('resolve.resolved', 'R'),
4373 4374 mergemod.MERGE_RECORD_DRIVER_RESOLVED: ('resolve.driverresolved',
4374 4375 'D'),
4375 4376 }
4376 4377
4377 4378 for f in ms:
4378 4379 if not m(f):
4379 4380 continue
4380 4381
4381 4382 label, key = mergestateinfo[ms[f]]
4382 4383 fm.startitem()
4383 4384 fm.condwrite(not nostatus, 'status', '%s ', key, label=label)
4384 4385 fm.write('path', '%s\n', f, label=label)
4385 4386 fm.end()
4386 4387 return 0
4387 4388
4388 4389 with repo.wlock():
4389 4390 ms = mergemod.mergestate.read(repo)
4390 4391
4391 4392 if not (ms.active() or repo.dirstate.p2() != nullid):
4392 4393 raise error.Abort(
4393 4394 _('resolve command not applicable when not merging'))
4394 4395
4395 4396 wctx = repo[None]
4396 4397
4397 4398 if (ms.mergedriver
4398 4399 and ms.mdstate() == mergemod.MERGE_DRIVER_STATE_UNMARKED):
4399 4400 proceed = mergemod.driverpreprocess(repo, ms, wctx)
4400 4401 ms.commit()
4401 4402 # allow mark and unmark to go through
4402 4403 if not mark and not unmark and not proceed:
4403 4404 return 1
4404 4405
4405 4406 m = scmutil.match(wctx, pats, opts)
4406 4407 ret = 0
4407 4408 didwork = False
4408 4409 runconclude = False
4409 4410
4410 4411 tocomplete = []
4411 4412 for f in ms:
4412 4413 if not m(f):
4413 4414 continue
4414 4415
4415 4416 didwork = True
4416 4417
4417 4418 # don't let driver-resolved files be marked, and run the conclude
4418 4419 # step if asked to resolve
4419 4420 if ms[f] == mergemod.MERGE_RECORD_DRIVER_RESOLVED:
4420 4421 exact = m.exact(f)
4421 4422 if mark:
4422 4423 if exact:
4423 4424 ui.warn(_('not marking %s as it is driver-resolved\n')
4424 4425 % f)
4425 4426 elif unmark:
4426 4427 if exact:
4427 4428 ui.warn(_('not unmarking %s as it is driver-resolved\n')
4428 4429 % f)
4429 4430 else:
4430 4431 runconclude = True
4431 4432 continue
4432 4433
4433 4434 # path conflicts must be resolved manually
4434 4435 if ms[f] in (mergemod.MERGE_RECORD_UNRESOLVED_PATH,
4435 4436 mergemod.MERGE_RECORD_RESOLVED_PATH):
4436 4437 if mark:
4437 4438 ms.mark(f, mergemod.MERGE_RECORD_RESOLVED_PATH)
4438 4439 elif unmark:
4439 4440 ms.mark(f, mergemod.MERGE_RECORD_UNRESOLVED_PATH)
4440 4441 elif ms[f] == mergemod.MERGE_RECORD_UNRESOLVED_PATH:
4441 4442 ui.warn(_('%s: path conflict must be resolved manually\n')
4442 4443 % f)
4443 4444 continue
4444 4445
4445 4446 if mark:
4446 4447 ms.mark(f, mergemod.MERGE_RECORD_RESOLVED)
4447 4448 elif unmark:
4448 4449 ms.mark(f, mergemod.MERGE_RECORD_UNRESOLVED)
4449 4450 else:
4450 4451 # backup pre-resolve (merge uses .orig for its own purposes)
4451 4452 a = repo.wjoin(f)
4452 4453 try:
4453 4454 util.copyfile(a, a + ".resolve")
4454 4455 except (IOError, OSError) as inst:
4455 4456 if inst.errno != errno.ENOENT:
4456 4457 raise
4457 4458
4458 4459 try:
4459 4460 # preresolve file
4460 4461 ui.setconfig('ui', 'forcemerge', opts.get('tool', ''),
4461 4462 'resolve')
4462 4463 complete, r = ms.preresolve(f, wctx)
4463 4464 if not complete:
4464 4465 tocomplete.append(f)
4465 4466 elif r:
4466 4467 ret = 1
4467 4468 finally:
4468 4469 ui.setconfig('ui', 'forcemerge', '', 'resolve')
4469 4470 ms.commit()
4470 4471
4471 4472 # replace filemerge's .orig file with our resolve file, but only
4472 4473 # for merges that are complete
4473 4474 if complete:
4474 4475 try:
4475 4476 util.rename(a + ".resolve",
4476 4477 scmutil.origpath(ui, repo, a))
4477 4478 except OSError as inst:
4478 4479 if inst.errno != errno.ENOENT:
4479 4480 raise
4480 4481
4481 4482 for f in tocomplete:
4482 4483 try:
4483 4484 # resolve file
4484 4485 ui.setconfig('ui', 'forcemerge', opts.get('tool', ''),
4485 4486 'resolve')
4486 4487 r = ms.resolve(f, wctx)
4487 4488 if r:
4488 4489 ret = 1
4489 4490 finally:
4490 4491 ui.setconfig('ui', 'forcemerge', '', 'resolve')
4491 4492 ms.commit()
4492 4493
4493 4494 # replace filemerge's .orig file with our resolve file
4494 4495 a = repo.wjoin(f)
4495 4496 try:
4496 4497 util.rename(a + ".resolve", scmutil.origpath(ui, repo, a))
4497 4498 except OSError as inst:
4498 4499 if inst.errno != errno.ENOENT:
4499 4500 raise
4500 4501
4501 4502 ms.commit()
4502 4503 ms.recordactions()
4503 4504
4504 4505 if not didwork and pats:
4505 4506 hint = None
4506 4507 if not any([p for p in pats if p.find(':') >= 0]):
4507 4508 pats = ['path:%s' % p for p in pats]
4508 4509 m = scmutil.match(wctx, pats, opts)
4509 4510 for f in ms:
4510 4511 if not m(f):
4511 4512 continue
4512 4513 flags = ''.join(['-%s ' % o[0:1] for o in flaglist
4513 4514 if opts.get(o)])
4514 4515 hint = _("(try: hg resolve %s%s)\n") % (
4515 4516 flags,
4516 4517 ' '.join(pats))
4517 4518 break
4518 4519 ui.warn(_("arguments do not match paths that need resolving\n"))
4519 4520 if hint:
4520 4521 ui.warn(hint)
4521 4522 elif ms.mergedriver and ms.mdstate() != 's':
4522 4523 # run conclude step when either a driver-resolved file is requested
4523 4524 # or there are no driver-resolved files
4524 4525 # we can't use 'ret' to determine whether any files are unresolved
4525 4526 # because we might not have tried to resolve some
4526 4527 if ((runconclude or not list(ms.driverresolved()))
4527 4528 and not list(ms.unresolved())):
4528 4529 proceed = mergemod.driverconclude(repo, ms, wctx)
4529 4530 ms.commit()
4530 4531 if not proceed:
4531 4532 return 1
4532 4533
4533 4534 # Nudge users into finishing an unfinished operation
4534 4535 unresolvedf = list(ms.unresolved())
4535 4536 driverresolvedf = list(ms.driverresolved())
4536 4537 if not unresolvedf and not driverresolvedf:
4537 4538 ui.status(_('(no more unresolved files)\n'))
4538 4539 cmdutil.checkafterresolved(repo)
4539 4540 elif not unresolvedf:
4540 4541 ui.status(_('(no more unresolved files -- '
4541 4542 'run "hg resolve --all" to conclude)\n'))
4542 4543
4543 4544 return ret
4544 4545
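# --- illustration (editor's sketch, not part of commands.py) ---------------
# The backup pattern used above, restated with plain stdlib calls: the
# pre-merge contents are saved as FILE.resolve before re-merging and, once
# the merge completes, renamed over the .orig backup that filemerge wrote.
# (The real code uses scmutil.origpath, which honors ui.origbackuppath.)
import shutil

def backup_then_promote(path, merge_completed):
    shutil.copyfile(path, path + '.resolve')      # before re-merging
    if merge_completed:
        shutil.move(path + '.resolve', path + '.orig')
# ---------------------------------------------------------------------------
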
4545 4546 @command('revert',
4546 4547 [('a', 'all', None, _('revert all changes when no arguments given')),
4547 4548 ('d', 'date', '', _('tipmost revision matching date'), _('DATE')),
4548 4549 ('r', 'rev', '', _('revert to the specified revision'), _('REV')),
4549 4550 ('C', 'no-backup', None, _('do not save backup copies of files')),
4550 4551 ('i', 'interactive', None, _('interactively select the changes')),
4551 4552 ] + walkopts + dryrunopts,
4552 4553 _('[OPTION]... [-r REV] [NAME]...'))
4553 4554 def revert(ui, repo, *pats, **opts):
4554 4555 """restore files to their checkout state
4555 4556
4556 4557 .. note::
4557 4558
4558 4559 To check out earlier revisions, you should use :hg:`update REV`.
4559 4560 To cancel an uncommitted merge (and lose your changes),
4560 4561 use :hg:`merge --abort`.
4561 4562
4562 4563 With no revision specified, revert the specified files or directories
4563 4564 to the contents they had in the parent of the working directory.
4564 4565 This restores the contents of files to an unmodified
4565 4566 state and unschedules adds, removes, copies, and renames. If the
4566 4567 working directory has two parents, you must explicitly specify a
4567 4568 revision.
4568 4569
4569 4570 Using the -r/--rev or -d/--date options, revert the given files or
4570 4571 directories to their states as of a specific revision. Because
4571 4572 revert does not change the working directory parents, this will
4572 4573 cause these files to appear modified. This can be helpful to "back
4573 4574 out" some or all of an earlier change. See :hg:`backout` for a
4574 4575 related method.
4575 4576
4576 4577 Modified files are saved with a .orig suffix before reverting.
4577 4578 To disable these backups, use --no-backup. It is possible to store
4578 4579 the backup files in a custom directory relative to the root of the
4579 4580 repository by setting the ``ui.origbackuppath`` configuration
4580 4581 option.
4581 4582
4582 4583 See :hg:`help dates` for a list of formats valid for -d/--date.
4583 4584
4584 4585 See :hg:`help backout` for a way to reverse the effect of an
4585 4586 earlier changeset.
4586 4587
4587 4588 Returns 0 on success.
4588 4589 """
4589 4590
4590 4591 opts = pycompat.byteskwargs(opts)
4591 4592 if opts.get("date"):
4592 4593 if opts.get("rev"):
4593 4594 raise error.Abort(_("you can't specify a revision and a date"))
4594 4595 opts["rev"] = cmdutil.finddate(ui, repo, opts["date"])
4595 4596
4596 4597 parent, p2 = repo.dirstate.parents()
4597 4598 if not opts.get('rev') and p2 != nullid:
4598 4599 # revert after merge is a trap for new users (issue2915)
4599 4600 raise error.Abort(_('uncommitted merge with no revision specified'),
4600 4601 hint=_("use 'hg update' or see 'hg help revert'"))
4601 4602
4602 4603 rev = opts.get('rev')
4603 4604 if rev:
4604 4605 repo = scmutil.unhidehashlikerevs(repo, [rev], 'nowarn')
4605 4606 ctx = scmutil.revsingle(repo, rev)
4606 4607
4607 4608 if (not (pats or opts.get('include') or opts.get('exclude') or
4608 4609 opts.get('all') or opts.get('interactive'))):
4609 4610 msg = _("no files or directories specified")
4610 4611 if p2 != nullid:
4611 4612 hint = _("uncommitted merge, use --all to discard all changes,"
4612 4613 " or 'hg update -C .' to abort the merge")
4613 4614 raise error.Abort(msg, hint=hint)
4614 4615 dirty = any(repo.status())
4615 4616 node = ctx.node()
4616 4617 if node != parent:
4617 4618 if dirty:
4618 4619 hint = _("uncommitted changes, use --all to discard all"
4619 4620 " changes, or 'hg update %s' to update") % ctx.rev()
4620 4621 else:
4621 4622 hint = _("use --all to revert all files,"
4622 4623 " or 'hg update %s' to update") % ctx.rev()
4623 4624 elif dirty:
4624 4625 hint = _("uncommitted changes, use --all to discard all changes")
4625 4626 else:
4626 4627 hint = _("use --all to revert all files")
4627 4628 raise error.Abort(msg, hint=hint)
4628 4629
4629 4630 return cmdutil.revert(ui, repo, ctx, (parent, p2), *pats,
4630 4631 **pycompat.strkwargs(opts))
4631 4632
4632 4633 @command('rollback', dryrunopts +
4633 4634 [('f', 'force', False, _('ignore safety measures'))])
4634 4635 def rollback(ui, repo, **opts):
4635 4636 """roll back the last transaction (DANGEROUS) (DEPRECATED)
4636 4637
4637 4638 Please use :hg:`commit --amend` instead of rollback to correct
4638 4639 mistakes in the last commit.
4639 4640
4640 4641 This command should be used with care. There is only one level of
4641 4642 rollback, and there is no way to undo a rollback. It will also
4642 4643 restore the dirstate at the time of the last transaction, losing
4643 4644 any dirstate changes since that time. This command does not alter
4644 4645 the working directory.
4645 4646
4646 4647 Transactions are used to encapsulate the effects of all commands
4647 4648 that create new changesets or propagate existing changesets into a
4648 4649 repository.
4649 4650
4650 4651 .. container:: verbose
4651 4652
4652 4653 For example, the following commands are transactional, and their
4653 4654 effects can be rolled back:
4654 4655
4655 4656 - commit
4656 4657 - import
4657 4658 - pull
4658 4659 - push (with this repository as the destination)
4659 4660 - unbundle
4660 4661
4661 4662 To avoid permanent data loss, rollback will refuse to roll back a
4662 4663 commit transaction if it isn't checked out. Use --force to
4663 4664 override this protection.
4664 4665
4665 4666 The rollback command can be entirely disabled by setting the
4666 4667 ``ui.rollback`` configuration setting to false. If you're here
4667 4668 because you want to use rollback and it's disabled, you can
4668 4669 re-enable the command by setting ``ui.rollback`` to true.
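
For example, a site-wide hgrc that disables rollback::

  [ui]
  rollback = false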
4669 4670
4670 4671 This command is not intended for use on public repositories. Once
4671 4672 changes are visible for pull by other users, rolling a transaction
4672 4673 back locally is ineffective (someone else may already have pulled
4673 4674 the changes). Furthermore, a race is possible with readers of the
4674 4675 repository; for example an in-progress pull from the repository
4675 4676 may fail if a rollback is performed.
4676 4677
4677 4678 Returns 0 on success, 1 if no rollback data is available.
4678 4679 """
4679 4680 if not ui.configbool('ui', 'rollback'):
4680 4681 raise error.Abort(_('rollback is disabled because it is unsafe'),
4681 4682 hint=('see `hg help -v rollback` for information'))
4682 4683 return repo.rollback(dryrun=opts.get(r'dry_run'),
4683 4684 force=opts.get(r'force'))
4684 4685
4685 4686 @command('root', [], cmdtype=readonly)
4686 4687 def root(ui, repo):
4687 4688 """print the root (top) of the current working directory
4688 4689
4689 4690 Print the root directory of the current repository.
4690 4691
4691 4692 Returns 0 on success.
4692 4693 """
4693 4694 ui.write(repo.root + "\n")
4694 4695
4695 4696 @command('^serve',
4696 4697 [('A', 'accesslog', '', _('name of access log file to write to'),
4697 4698 _('FILE')),
4698 4699 ('d', 'daemon', None, _('run server in background')),
4699 4700 ('', 'daemon-postexec', [], _('used internally by daemon mode')),
4700 4701 ('E', 'errorlog', '', _('name of error log file to write to'), _('FILE')),
4701 4702 # use string type, so we can check if something was passed
4702 4703 ('p', 'port', '', _('port to listen on (default: 8000)'), _('PORT')),
4703 4704 ('a', 'address', '', _('address to listen on (default: all interfaces)'),
4704 4705 _('ADDR')),
4705 4706 ('', 'prefix', '', _('prefix path to serve from (default: server root)'),
4706 4707 _('PREFIX')),
4707 4708 ('n', 'name', '',
4708 4709 _('name to show in web pages (default: working directory)'), _('NAME')),
4709 4710 ('', 'web-conf', '',
4710 4711 _("name of the hgweb config file (see 'hg help hgweb')"), _('FILE')),
4711 4712 ('', 'webdir-conf', '', _('name of the hgweb config file (DEPRECATED)'),
4712 4713 _('FILE')),
4713 4714 ('', 'pid-file', '', _('name of file to write process ID to'), _('FILE')),
4714 4715 ('', 'stdio', None, _('for remote clients (ADVANCED)')),
4715 4716 ('', 'cmdserver', '', _('for remote clients (ADVANCED)'), _('MODE')),
4716 4717 ('t', 'templates', '', _('web templates to use'), _('TEMPLATE')),
4717 4718 ('', 'style', '', _('template style to use'), _('STYLE')),
4718 4719 ('6', 'ipv6', None, _('use IPv6 in addition to IPv4')),
4719 4720 ('', 'certificate', '', _('SSL certificate file'), _('FILE'))]
4720 4721 + subrepoopts,
4721 4722 _('[OPTION]...'),
4722 4723 optionalrepo=True)
4723 4724 def serve(ui, repo, **opts):
4724 4725 """start stand-alone webserver
4725 4726
4726 4727 Start a local HTTP repository browser and pull server. You can use
4727 4728 this for ad-hoc sharing and browsing of repositories. It is
4728 4729 recommended to use a real web server to serve a repository for
4729 4730 longer periods of time.
4730 4731
4731 4732 Please note that the server does not implement access control.
4732 4733 This means that, by default, anybody can read from the server and
4733 4734 nobody can write to it. Set the ``web.allow-push``
4734 4735 option to ``*`` to allow everybody to push to the server. You
4735 4736 should use a real web server if you need to authenticate users.
4736 4737
4737 4738 By default, the server logs accesses to stdout and errors to
4738 4739 stderr. Use the -A/--accesslog and -E/--errorlog options to log to
4739 4740 files.
4740 4741
4741 4742 To have the server choose a free port number to listen on, specify
4742 4743 a port number of 0; in this case, the server will print the port
4743 4744 number it uses.
4744 4745
4745 4746 Returns 0 on success.
4746 4747 """
4747 4748
4748 4749 opts = pycompat.byteskwargs(opts)
4749 4750 if opts["stdio"] and opts["cmdserver"]:
4750 4751 raise error.Abort(_("cannot use --stdio with --cmdserver"))
4751 4752
4752 4753 if opts["stdio"]:
4753 4754 if repo is None:
4754 4755 raise error.RepoError(_("there is no Mercurial repository here"
4755 4756 " (.hg not found)"))
4756 4757 s = wireprotoserver.sshserver(ui, repo)
4757 4758 s.serve_forever()
4758 4759
4759 4760 service = server.createservice(ui, repo, opts)
4760 4761 return server.runservice(opts, initfn=service.init, runfn=service.run)
4761 4762
4762 4763 @command('^status|st',
4763 4764 [('A', 'all', None, _('show status of all files')),
4764 4765 ('m', 'modified', None, _('show only modified files')),
4765 4766 ('a', 'added', None, _('show only added files')),
4766 4767 ('r', 'removed', None, _('show only removed files')),
4767 4768 ('d', 'deleted', None, _('show only deleted (but tracked) files')),
4768 4769 ('c', 'clean', None, _('show only files without changes')),
4769 4770 ('u', 'unknown', None, _('show only unknown (not tracked) files')),
4770 4771 ('i', 'ignored', None, _('show only ignored files')),
4771 4772 ('n', 'no-status', None, _('hide status prefix')),
4772 4773 ('t', 'terse', '', _('show the terse output (EXPERIMENTAL)')),
4773 4774 ('C', 'copies', None, _('show source of copied files')),
4774 4775 ('0', 'print0', None, _('end filenames with NUL, for use with xargs')),
4775 4776 ('', 'rev', [], _('show difference from revision'), _('REV')),
4776 4777 ('', 'change', '', _('list the changed files of a revision'), _('REV')),
4777 4778 ] + walkopts + subrepoopts + formatteropts,
4778 4779 _('[OPTION]... [FILE]...'),
4779 4780 inferrepo=True, cmdtype=readonly)
4780 4781 def status(ui, repo, *pats, **opts):
4781 4782 """show changed files in the working directory
4782 4783
4783 4784 Show status of files in the repository. If names are given, only
4784 4785 files that match are shown. Files that are clean or ignored or
4785 4786 the source of a copy/move operation are not listed unless
4786 4787 -c/--clean, -i/--ignored, -C/--copies or -A/--all are given.
4787 4788 Unless options described with "show only ..." are given, the
4788 4789 options -mardu are used.
4789 4790
4790 4791 Option -q/--quiet hides untracked (unknown and ignored) files
4791 4792 unless explicitly requested with -u/--unknown or -i/--ignored.
4792 4793
4793 4794 .. note::
4794 4795
4795 4796 :hg:`status` may appear to disagree with diff if permissions have
4796 4797 changed or a merge has occurred. The standard diff format does
4797 4798 not report permission changes and diff only reports changes
4798 4799 relative to one merge parent.
4799 4800
4800 4801 If one revision is given, it is used as the base revision.
4801 4802 If two revisions are given, the differences between them are
4802 4803 shown. The --change option can also be used as a shortcut to list
4803 4804 the changed files of a revision from its first parent.
4804 4805
4805 4806 The codes used to show the status of files are::
4806 4807
4807 4808 M = modified
4808 4809 A = added
4809 4810 R = removed
4810 4811 C = clean
4811 4812 ! = missing (deleted by non-hg command, but still tracked)
4812 4813 ? = not tracked
4813 4814 I = ignored
4814 4815 = origin of the previous file (with --copies)
4815 4816
4816 4817 .. container:: verbose
4817 4818
4818 4819 The -t/--terse option abbreviates the output by showing only the directory
4819 4820 name if all the files in it share the same status. The option takes an
4820 4821 argument indicating the statuses to abbreviate: 'm' for 'modified', 'a'
4821 4822 for 'added', 'r' for 'removed', 'd' for 'deleted', 'u' for 'unknown', 'i'
4822 4823 for 'ignored', and 'c' for 'clean'.
4823 4824
4824 4825 It abbreviates only those statuses which are passed. Note that clean and
4825 4826 ignored files are not displayed with '--terse ic' unless the -c/--clean
4826 4827 and -i/--ignored options are also used.
4827 4828
4828 4829 The -v/--verbose option shows information when the repository is in an
4829 4830 unfinished merge, shelve, rebase state etc. You can have this behavior
4830 4831 turned on by default by enabling the ``commands.status.verbose`` option.
4831 4832
4832 4833 You can skip displaying some of these states by setting
4833 4834 ``commands.status.skipstates`` to one or more of: 'bisect', 'graft',
4834 4835 'histedit', 'merge', 'rebase', or 'unshelve'.
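
For instance, to hide the bisect state from this output::

  [commands]
  status.skipstates = bisect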
4835 4836
4836 4837 Examples:
4837 4838
4838 4839 - show changes in the working directory relative to a
4839 4840 changeset::
4840 4841
4841 4842 hg status --rev 9353
4842 4843
4843 4844 - show changes in the working directory relative to the
4844 4845 current directory (see :hg:`help patterns` for more information)::
4845 4846
4846 4847 hg status re:
4847 4848
4848 4849 - show all changes including copies in an existing changeset::
4849 4850
4850 4851 hg status --copies --change 9353
4851 4852
4852 4853 - get a NUL separated list of added files, suitable for xargs::
4853 4854
4854 4855 hg status -an0
4855 4856
4856 4857 - show more information about the repository status, abbreviating
4857 4858 added, removed, modified, deleted, and untracked paths::
4858 4859
4859 4860 hg status -v -t mardu
4860 4861
4861 4862 Returns 0 on success.
4862 4863
4863 4864 """
4864 4865
4865 4866 opts = pycompat.byteskwargs(opts)
4866 4867 revs = opts.get('rev')
4867 4868 change = opts.get('change')
4868 4869 terse = opts.get('terse')
4869 4870
4870 4871 if revs and change:
4871 4872 msg = _('cannot specify --rev and --change at the same time')
4872 4873 raise error.Abort(msg)
4873 4874 elif revs and terse:
4874 4875 msg = _('cannot use --terse with --rev')
4875 4876 raise error.Abort(msg)
4876 4877 elif change:
4877 4878 repo = scmutil.unhidehashlikerevs(repo, [change], 'nowarn')
4878 4879 node2 = scmutil.revsingle(repo, change, None).node()
4879 4880 node1 = repo[node2].p1().node()
4880 4881 else:
4881 4882 repo = scmutil.unhidehashlikerevs(repo, revs, 'nowarn')
4882 4883 node1, node2 = scmutil.revpair(repo, revs)
4883 4884
4884 4885 if pats or ui.configbool('commands', 'status.relative'):
4885 4886 cwd = repo.getcwd()
4886 4887 else:
4887 4888 cwd = ''
4888 4889
4889 4890 if opts.get('print0'):
4890 4891 end = '\0'
4891 4892 else:
4892 4893 end = '\n'
4893 4894 copy = {}
4894 4895 states = 'modified added removed deleted unknown ignored clean'.split()
4895 4896 show = [k for k in states if opts.get(k)]
4896 4897 if opts.get('all'):
4897 4898 show += ui.quiet and (states[:4] + ['clean']) or states
4898 4899
4899 4900 if not show:
4900 4901 if ui.quiet:
4901 4902 show = states[:4]
4902 4903 else:
4903 4904 show = states[:5]
4904 4905
4905 4906 m = scmutil.match(repo[node2], pats, opts)
4906 4907 if terse:
4907 4908 # we need to compute clean and unknown states to terse the output
4908 4909 stat = repo.status(node1, node2, m,
4909 4910 'ignored' in show or 'i' in terse,
4910 4911 True, True, opts.get('subrepos'))
4911 4912
4912 4913 stat = cmdutil.tersedir(stat, terse)
4913 4914 else:
4914 4915 stat = repo.status(node1, node2, m,
4915 4916 'ignored' in show, 'clean' in show,
4916 4917 'unknown' in show, opts.get('subrepos'))
4917 4918
4918 4919 changestates = zip(states, pycompat.iterbytestr('MAR!?IC'), stat)
4919 4920
4920 4921 if (opts.get('all') or opts.get('copies')
4921 4922 or ui.configbool('ui', 'statuscopies')) and not opts.get('no_status'):
4922 4923 copy = copies.pathcopies(repo[node1], repo[node2], m)
4923 4924
4924 4925 ui.pager('status')
4925 4926 fm = ui.formatter('status', opts)
4926 4927 fmt = '%s' + end
4927 4928 showchar = not opts.get('no_status')
4928 4929
4929 4930 for state, char, files in changestates:
4930 4931 if state in show:
4931 4932 label = 'status.' + state
4932 4933 for f in files:
4933 4934 fm.startitem()
4934 4935 fm.condwrite(showchar, 'status', '%s ', char, label=label)
4935 4936 fm.write('path', fmt, repo.pathto(f, cwd), label=label)
4936 4937 if f in copy:
4937 4938 fm.write("copy", ' %s' + end, repo.pathto(copy[f], cwd),
4938 4939 label='status.copied')
4939 4940
4940 4941 if ((ui.verbose or ui.configbool('commands', 'status.verbose'))
4941 4942 and not ui.plain()):
4942 4943 cmdutil.morestatus(repo, fm)
4943 4944 fm.end()
4944 4945
4945 4946 @command('^summary|sum',
4946 4947 [('', 'remote', None, _('check for push and pull'))],
4947 4948 '[--remote]', cmdtype=readonly)
4948 4949 def summary(ui, repo, **opts):
4949 4950 """summarize working directory state
4950 4951
4951 4952 This generates a brief summary of the working directory state,
4952 4953 including parents, branch, commit status, phase and available updates.
4953 4954
4954 4955 With the --remote option, this will check the default paths for
4955 4956 incoming and outgoing changes. This can be time-consuming.
4956 4957
4957 4958 Returns 0 on success.
4958 4959 """
4959 4960
4960 4961 opts = pycompat.byteskwargs(opts)
4961 4962 ui.pager('summary')
4962 4963 ctx = repo[None]
4963 4964 parents = ctx.parents()
4964 4965 pnode = parents[0].node()
4965 4966 marks = []
4966 4967
4967 4968 ms = None
4968 4969 try:
4969 4970 ms = mergemod.mergestate.read(repo)
4970 4971 except error.UnsupportedMergeRecords as e:
4971 4972 s = ' '.join(e.recordtypes)
4972 4973 ui.warn(
4973 4974 _('warning: merge state has unsupported record types: %s\n') % s)
4974 4975 unresolved = []
4975 4976 else:
4976 4977 unresolved = list(ms.unresolved())
4977 4978
4978 4979 for p in parents:
4979 4980 # label with log.changeset (instead of log.parent) since this
4980 4981 # shows a working directory parent *changeset*:
4981 4982 # i18n: column positioning for "hg summary"
4982 4983 ui.write(_('parent: %d:%s ') % (p.rev(), p),
4983 4984 label=logcmdutil.changesetlabels(p))
4984 4985 ui.write(' '.join(p.tags()), label='log.tag')
4985 4986 if p.bookmarks():
4986 4987 marks.extend(p.bookmarks())
4987 4988 if p.rev() == -1:
4988 4989 if not len(repo):
4989 4990 ui.write(_(' (empty repository)'))
4990 4991 else:
4991 4992 ui.write(_(' (no revision checked out)'))
4992 4993 if p.obsolete():
4993 4994 ui.write(_(' (obsolete)'))
4994 4995 if p.isunstable():
4995 4996 instabilities = (ui.label(instability, 'trouble.%s' % instability)
4996 4997 for instability in p.instabilities())
4997 4998 ui.write(' ('
4998 4999 + ', '.join(instabilities)
4999 5000 + ')')
5000 5001 ui.write('\n')
5001 5002 if p.description():
5002 5003 ui.status(' ' + p.description().splitlines()[0].strip() + '\n',
5003 5004 label='log.summary')
5004 5005
5005 5006 branch = ctx.branch()
5006 5007 bheads = repo.branchheads(branch)
5007 5008 # i18n: column positioning for "hg summary"
5008 5009 m = _('branch: %s\n') % branch
5009 5010 if branch != 'default':
5010 5011 ui.write(m, label='log.branch')
5011 5012 else:
5012 5013 ui.status(m, label='log.branch')
5013 5014
5014 5015 if marks:
5015 5016 active = repo._activebookmark
5016 5017 # i18n: column positioning for "hg summary"
5017 5018 ui.write(_('bookmarks:'), label='log.bookmark')
5018 5019 if active is not None:
5019 5020 if active in marks:
5020 5021 ui.write(' *' + active, label=bookmarks.activebookmarklabel)
5021 5022 marks.remove(active)
5022 5023 else:
5023 5024 ui.write(' [%s]' % active, label=bookmarks.activebookmarklabel)
5024 5025 for m in marks:
5025 5026 ui.write(' ' + m, label='log.bookmark')
5026 5027 ui.write('\n', label='log.bookmark')
5027 5028
5028 5029 status = repo.status(unknown=True)
5029 5030
5030 5031 c = repo.dirstate.copies()
5031 5032 copied, renamed = [], []
5032 5033 for d, s in c.iteritems():
5033 5034 if s in status.removed:
5034 5035 status.removed.remove(s)
5035 5036 renamed.append(d)
5036 5037 else:
5037 5038 copied.append(d)
5038 5039 if d in status.added:
5039 5040 status.added.remove(d)
5040 5041
5041 5042 subs = [s for s in ctx.substate if ctx.sub(s).dirty()]
5042 5043
5043 5044 labels = [(ui.label(_('%d modified'), 'status.modified'), status.modified),
5044 5045 (ui.label(_('%d added'), 'status.added'), status.added),
5045 5046 (ui.label(_('%d removed'), 'status.removed'), status.removed),
5046 5047 (ui.label(_('%d renamed'), 'status.copied'), renamed),
5047 5048 (ui.label(_('%d copied'), 'status.copied'), copied),
5048 5049 (ui.label(_('%d deleted'), 'status.deleted'), status.deleted),
5049 5050 (ui.label(_('%d unknown'), 'status.unknown'), status.unknown),
5050 5051 (ui.label(_('%d unresolved'), 'resolve.unresolved'), unresolved),
5051 5052 (ui.label(_('%d subrepos'), 'status.modified'), subs)]
5052 5053 t = []
5053 5054 for l, s in labels:
5054 5055 if s:
5055 5056 t.append(l % len(s))
5056 5057
5057 5058 t = ', '.join(t)
5058 5059 cleanworkdir = False
5059 5060
5060 5061 if repo.vfs.exists('graftstate'):
5061 5062 t += _(' (graft in progress)')
5062 5063 if repo.vfs.exists('updatestate'):
5063 5064 t += _(' (interrupted update)')
5064 5065 elif len(parents) > 1:
5065 5066 t += _(' (merge)')
5066 5067 elif branch != parents[0].branch():
5067 5068 t += _(' (new branch)')
5068 5069 elif (parents[0].closesbranch() and
5069 5070 pnode in repo.branchheads(branch, closed=True)):
5070 5071 t += _(' (head closed)')
5071 5072 elif not (status.modified or status.added or status.removed or renamed or
5072 5073 copied or subs):
5073 5074 t += _(' (clean)')
5074 5075 cleanworkdir = True
5075 5076 elif pnode not in bheads:
5076 5077 t += _(' (new branch head)')
5077 5078
5078 5079 if parents:
5079 5080 pendingphase = max(p.phase() for p in parents)
5080 5081 else:
5081 5082 pendingphase = phases.public
5082 5083
5083 5084 if pendingphase > phases.newcommitphase(ui):
5084 5085 t += ' (%s)' % phases.phasenames[pendingphase]
5085 5086
5086 5087 if cleanworkdir:
5087 5088 # i18n: column positioning for "hg summary"
5088 5089 ui.status(_('commit: %s\n') % t.strip())
5089 5090 else:
5090 5091 # i18n: column positioning for "hg summary"
5091 5092 ui.write(_('commit: %s\n') % t.strip())
5092 5093
5093 5094 # all ancestors of branch heads - all ancestors of parent = new csets
5094 5095 new = len(repo.changelog.findmissing([pctx.node() for pctx in parents],
5095 5096 bheads))
5096 5097
5097 5098 if new == 0:
5098 5099 # i18n: column positioning for "hg summary"
5099 5100 ui.status(_('update: (current)\n'))
5100 5101 elif pnode not in bheads:
5101 5102 # i18n: column positioning for "hg summary"
5102 5103 ui.write(_('update: %d new changesets (update)\n') % new)
5103 5104 else:
5104 5105 # i18n: column positioning for "hg summary"
5105 5106 ui.write(_('update: %d new changesets, %d branch heads (merge)\n') %
5106 5107 (new, len(bheads)))
5107 5108
5108 5109 t = []
5109 5110 draft = len(repo.revs('draft()'))
5110 5111 if draft:
5111 5112 t.append(_('%d draft') % draft)
5112 5113 secret = len(repo.revs('secret()'))
5113 5114 if secret:
5114 5115 t.append(_('%d secret') % secret)
5115 5116
5116 5117 if draft or secret:
5117 5118 ui.status(_('phases: %s\n') % ', '.join(t))
5118 5119
5119 5120 if obsolete.isenabled(repo, obsolete.createmarkersopt):
5120 5121 for trouble in ("orphan", "contentdivergent", "phasedivergent"):
5121 5122 numtrouble = len(repo.revs(trouble + "()"))
5122 5123 # We write all the possibilities to ease translation
5123 5124 troublemsg = {
5124 5125 "orphan": _("orphan: %d changesets"),
5125 5126 "contentdivergent": _("content-divergent: %d changesets"),
5126 5127 "phasedivergent": _("phase-divergent: %d changesets"),
5127 5128 }
5128 5129 if numtrouble > 0:
5129 5130 ui.status(troublemsg[trouble] % numtrouble + "\n")
5130 5131
5131 5132 cmdutil.summaryhooks(ui, repo)
5132 5133
5133 5134 if opts.get('remote'):
5134 5135 needsincoming, needsoutgoing = True, True
5135 5136 else:
5136 5137 needsincoming, needsoutgoing = False, False
5137 5138 for i, o in cmdutil.summaryremotehooks(ui, repo, opts, None):
5138 5139 if i:
5139 5140 needsincoming = True
5140 5141 if o:
5141 5142 needsoutgoing = True
5142 5143 if not needsincoming and not needsoutgoing:
5143 5144 return
5144 5145
5145 5146 def getincoming():
5146 5147 source, branches = hg.parseurl(ui.expandpath('default'))
5147 5148 sbranch = branches[0]
5148 5149 try:
5149 5150 other = hg.peer(repo, {}, source)
5150 5151 except error.RepoError:
5151 5152 if opts.get('remote'):
5152 5153 raise
5153 5154 return source, sbranch, None, None, None
5154 5155 revs, checkout = hg.addbranchrevs(repo, other, branches, None)
5155 5156 if revs:
5156 5157 revs = [other.lookup(rev) for rev in revs]
5157 5158 ui.debug('comparing with %s\n' % util.hidepassword(source))
5158 5159 repo.ui.pushbuffer()
5159 5160 commoninc = discovery.findcommonincoming(repo, other, heads=revs)
5160 5161 repo.ui.popbuffer()
5161 5162 return source, sbranch, other, commoninc, commoninc[1]
5162 5163
5163 5164 if needsincoming:
5164 5165 source, sbranch, sother, commoninc, incoming = getincoming()
5165 5166 else:
5166 5167 source = sbranch = sother = commoninc = incoming = None
5167 5168
5168 5169 def getoutgoing():
5169 5170 dest, branches = hg.parseurl(ui.expandpath('default-push', 'default'))
5170 5171 dbranch = branches[0]
5171 5172 revs, checkout = hg.addbranchrevs(repo, repo, branches, None)
5172 5173 if source != dest:
5173 5174 try:
5174 5175 dother = hg.peer(repo, {}, dest)
5175 5176 except error.RepoError:
5176 5177 if opts.get('remote'):
5177 5178 raise
5178 5179 return dest, dbranch, None, None
5179 5180 ui.debug('comparing with %s\n' % util.hidepassword(dest))
5180 5181 elif sother is None:
5181 5182 # there is no explicit destination peer, but source one is invalid
5182 5183 return dest, dbranch, None, None
5183 5184 else:
5184 5185 dother = sother
5185 5186 if (source != dest or (sbranch is not None and sbranch != dbranch)):
5186 5187 common = None
5187 5188 else:
5188 5189 common = commoninc
5189 5190 if revs:
5190 5191 revs = [repo.lookup(rev) for rev in revs]
5191 5192 repo.ui.pushbuffer()
5192 5193 outgoing = discovery.findcommonoutgoing(repo, dother, onlyheads=revs,
5193 5194 commoninc=common)
5194 5195 repo.ui.popbuffer()
5195 5196 return dest, dbranch, dother, outgoing
5196 5197
5197 5198 if needsoutgoing:
5198 5199 dest, dbranch, dother, outgoing = getoutgoing()
5199 5200 else:
5200 5201 dest = dbranch = dother = outgoing = None
5201 5202
5202 5203 if opts.get('remote'):
5203 5204 t = []
5204 5205 if incoming:
5205 5206 t.append(_('1 or more incoming'))
5206 5207 o = outgoing.missing
5207 5208 if o:
5208 5209 t.append(_('%d outgoing') % len(o))
5209 5210 other = dother or sother
5210 5211 if 'bookmarks' in other.listkeys('namespaces'):
5211 5212 counts = bookmarks.summary(repo, other)
5212 5213 if counts[0] > 0:
5213 5214 t.append(_('%d incoming bookmarks') % counts[0])
5214 5215 if counts[1] > 0:
5215 5216 t.append(_('%d outgoing bookmarks') % counts[1])
5216 5217
5217 5218 if t:
5218 5219 # i18n: column positioning for "hg summary"
5219 5220 ui.write(_('remote: %s\n') % (', '.join(t)))
5220 5221 else:
5221 5222 # i18n: column positioning for "hg summary"
5222 5223 ui.status(_('remote: (synced)\n'))
5223 5224
5224 5225 cmdutil.summaryremotehooks(ui, repo, opts,
5225 5226 ((source, sbranch, sother, commoninc),
5226 5227 (dest, dbranch, dother, outgoing)))
5227 5228
5228 5229 @command('tag',
5229 5230 [('f', 'force', None, _('force tag')),
5230 5231 ('l', 'local', None, _('make the tag local')),
5231 5232 ('r', 'rev', '', _('revision to tag'), _('REV')),
5232 5233 ('', 'remove', None, _('remove a tag')),
5233 5234 # -l/--local is already there, commitopts cannot be used
5234 5235 ('e', 'edit', None, _('invoke editor on commit messages')),
5235 5236 ('m', 'message', '', _('use text as commit message'), _('TEXT')),
5236 5237 ] + commitopts2,
5237 5238 _('[-f] [-l] [-m TEXT] [-d DATE] [-u USER] [-r REV] NAME...'))
5238 5239 def tag(ui, repo, name1, *names, **opts):
5239 5240 """add one or more tags for the current or given revision
5240 5241
5241 5242 Name a particular revision using <name>.
5242 5243
5243 5244 Tags are used to name particular revisions of the repository and are
5244 5245 very useful to compare different revisions, to go back to significant
5245 5246 earlier versions or to mark branch points as releases, etc. Changing
5246 5247 an existing tag is normally disallowed; use -f/--force to override.
5247 5248
5248 5249 If no revision is given, the parent of the working directory is
5249 5250 used.
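
For example, to tag an earlier changeset (revision number illustrative)::

  hg tag -r 9353 v1.0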
5250 5251
5251 5252 To facilitate version control, distribution, and merging of tags,
5252 5253 they are stored as a file named ".hgtags" which is managed similarly
5253 5254 to other project files and can be hand-edited if necessary. This
5254 5255 also means that tagging creates a new commit. The file
5255 5256 ".hg/localtags" is used for local tags (not shared among
5256 5257 repositories).
5257 5258
5258 5259 Tag commits are usually made at the head of a branch. If the parent
5259 5260 of the working directory is not a branch head, :hg:`tag` aborts; use
5260 5261 -f/--force to force the tag commit to be based on a non-head
5261 5262 changeset.
5262 5263
5263 5264 See :hg:`help dates` for a list of formats valid for -d/--date.
5264 5265
5265 5266 Since tag names have priority over branch names during revision
5266 5267 lookup, using an existing branch name as a tag name is discouraged.
5267 5268
5268 5269 Returns 0 on success.
5269 5270 """
5270 5271 opts = pycompat.byteskwargs(opts)
5271 5272 wlock = lock = None
5272 5273 try:
5273 5274 wlock = repo.wlock()
5274 5275 lock = repo.lock()
5275 5276 rev_ = "."
5276 5277 names = [t.strip() for t in (name1,) + names]
5277 5278 if len(names) != len(set(names)):
5278 5279 raise error.Abort(_('tag names must be unique'))
5279 5280 for n in names:
5280 5281 scmutil.checknewlabel(repo, n, 'tag')
5281 5282 if not n:
5282 5283 raise error.Abort(_('tag names cannot consist entirely of '
5283 5284 'whitespace'))
5284 5285 if opts.get('rev') and opts.get('remove'):
5285 5286 raise error.Abort(_("--rev and --remove are incompatible"))
5286 5287 if opts.get('rev'):
5287 5288 rev_ = opts['rev']
5288 5289 message = opts.get('message')
5289 5290 if opts.get('remove'):
5290 5291 if opts.get('local'):
5291 5292 expectedtype = 'local'
5292 5293 else:
5293 5294 expectedtype = 'global'
5294 5295
5295 5296 for n in names:
5296 5297 if not repo.tagtype(n):
5297 5298 raise error.Abort(_("tag '%s' does not exist") % n)
5298 5299 if repo.tagtype(n) != expectedtype:
5299 5300 if expectedtype == 'global':
5300 5301 raise error.Abort(_("tag '%s' is not a global tag") % n)
5301 5302 else:
5302 5303 raise error.Abort(_("tag '%s' is not a local tag") % n)
5303 5304 rev_ = 'null'
5304 5305 if not message:
5305 5306 # we don't translate commit messages
5306 5307 message = 'Removed tag %s' % ', '.join(names)
5307 5308 elif not opts.get('force'):
5308 5309 for n in names:
5309 5310 if n in repo.tags():
5310 5311 raise error.Abort(_("tag '%s' already exists "
5311 5312 "(use -f to force)") % n)
5312 5313 if not opts.get('local'):
5313 5314 p1, p2 = repo.dirstate.parents()
5314 5315 if p2 != nullid:
5315 5316 raise error.Abort(_('uncommitted merge'))
5316 5317 bheads = repo.branchheads()
5317 5318 if not opts.get('force') and bheads and p1 not in bheads:
5318 5319 raise error.Abort(_('working directory is not at a branch head '
5319 5320 '(use -f to force)'))
5320 5321 node = scmutil.revsingle(repo, rev_).node()
5321 5322
5322 5323 if not message:
5323 5324 # we don't translate commit messages
5324 5325 message = ('Added tag %s for changeset %s' %
5325 5326 (', '.join(names), short(node)))
5326 5327
5327 5328 date = opts.get('date')
5328 5329 if date:
5329 5330 date = dateutil.parsedate(date)
5330 5331
5331 5332 if opts.get('remove'):
5332 5333 editform = 'tag.remove'
5333 5334 else:
5334 5335 editform = 'tag.add'
5335 5336 editor = cmdutil.getcommiteditor(editform=editform,
5336 5337 **pycompat.strkwargs(opts))
5337 5338
5338 5339 # don't allow tagging the null rev
5339 5340 if (not opts.get('remove') and
5340 5341 scmutil.revsingle(repo, rev_).rev() == nullrev):
5341 5342 raise error.Abort(_("cannot tag null revision"))
5342 5343
5343 5344 tagsmod.tag(repo, names, node, message, opts.get('local'),
5344 5345 opts.get('user'), date, editor=editor)
5345 5346 finally:
5346 5347 release(lock, wlock)
5347 5348
5348 5349 @command('tags', formatteropts, '', cmdtype=readonly)
5349 5350 def tags(ui, repo, **opts):
5350 5351 """list repository tags
5351 5352
5352 5353 This lists both regular and local tags. When the -v/--verbose
5353 5354 switch is used, a third column "local" is printed for local tags.
5354 5355 When the -q/--quiet switch is used, only the tag name is printed.
5355 5356
5356 5357 Returns 0 on success.
5357 5358 """
5358 5359
5359 5360 opts = pycompat.byteskwargs(opts)
5360 5361 ui.pager('tags')
5361 5362 fm = ui.formatter('tags', opts)
5362 5363 hexfunc = fm.hexfunc
5363 5364 tagtype = ""
5364 5365
5365 5366 for t, n in reversed(repo.tagslist()):
5366 5367 hn = hexfunc(n)
5367 5368 label = 'tags.normal'
5368 5369 tagtype = ''
5369 5370 if repo.tagtype(t) == 'local':
5370 5371 label = 'tags.local'
5371 5372 tagtype = 'local'
5372 5373
5373 5374 fm.startitem()
5374 5375 fm.write('tag', '%s', t, label=label)
5375 5376 fmt = " " * (30 - encoding.colwidth(t)) + ' %5d:%s'
5376 5377 fm.condwrite(not ui.quiet, 'rev node', fmt,
5377 5378 repo.changelog.rev(n), hn, label=label)
5378 5379 fm.condwrite(ui.verbose and tagtype, 'type', ' %s',
5379 5380 tagtype, label=label)
5380 5381 fm.plain('\n')
5381 5382 fm.end()
5382 5383
5383 5384 @command('tip',
5384 5385 [('p', 'patch', None, _('show patch')),
5385 5386 ('g', 'git', None, _('use git extended diff format')),
5386 5387 ] + templateopts,
5387 5388 _('[-p] [-g]'))
5388 5389 def tip(ui, repo, **opts):
5389 5390 """show the tip revision (DEPRECATED)
5390 5391
5391 5392 The tip revision (usually just called the tip) is the changeset
5392 5393 most recently added to the repository (and therefore the most
5393 5394 recently changed head).
5394 5395
5395 5396 If you have just made a commit, that commit will be the tip. If
5396 5397 you have just pulled changes from another repository, the tip of
5397 5398 that repository becomes the current tip. The "tip" tag is special
5398 5399 and cannot be renamed or assigned to a different changeset.
5399 5400
5400 5401 This command is deprecated, please use :hg:`heads` instead.
5401 5402
5402 5403 Returns 0 on success.
5403 5404 """
5404 5405 opts = pycompat.byteskwargs(opts)
5405 5406 displayer = logcmdutil.changesetdisplayer(ui, repo, opts)
5406 5407 displayer.show(repo['tip'])
5407 5408 displayer.close()
5408 5409
5409 5410 @command('unbundle',
5410 5411 [('u', 'update', None,
5411 5412 _('update to new branch head if changesets were unbundled'))],
5412 5413 _('[-u] FILE...'))
5413 5414 def unbundle(ui, repo, fname1, *fnames, **opts):
5414 5415 """apply one or more bundle files
5415 5416
5416 5417 Apply one or more bundle files generated by :hg:`bundle`.
5417 5418
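For example (with ``changes.hg`` produced by :hg:`bundle`)::

  hg unbundle changes.hg
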
5418 5419 Returns 0 on success, 1 if an update has unresolved files.
5419 5420 """
5420 5421 fnames = (fname1,) + fnames
5421 5422
5422 5423 with repo.lock():
5423 5424 for fname in fnames:
5424 5425 f = hg.openpath(ui, fname)
5425 5426 gen = exchange.readbundle(ui, f, fname)
5426 5427 if isinstance(gen, streamclone.streamcloneapplier):
5427 5428 raise error.Abort(
5428 5429 _('packed bundles cannot be applied with '
5429 5430 '"hg unbundle"'),
5430 5431 hint=_('use "hg debugapplystreamclonebundle"'))
5431 5432 url = 'bundle:' + fname
5432 5433 try:
5433 5434 txnname = 'unbundle'
5434 5435 if not isinstance(gen, bundle2.unbundle20):
5435 5436 txnname = 'unbundle\n%s' % util.hidepassword(url)
5436 5437 with repo.transaction(txnname) as tr:
5437 5438 op = bundle2.applybundle(repo, gen, tr, source='unbundle',
5438 5439 url=url)
5439 5440 except error.BundleUnknownFeatureError as exc:
5440 5441 raise error.Abort(
5441 5442 _('%s: unknown bundle feature, %s') % (fname, exc),
5442 5443 hint=_("see https://mercurial-scm.org/"
5443 5444 "wiki/BundleFeature for more "
5444 5445 "information"))
5445 5446 modheads = bundle2.combinechangegroupresults(op)
5446 5447
5447 5448 return postincoming(ui, repo, modheads, opts.get(r'update'), None, None)
5448 5449
5449 5450 @command('^update|up|checkout|co',
5450 5451 [('C', 'clean', None, _('discard uncommitted changes (no backup)')),
5451 5452 ('c', 'check', None, _('require clean working directory')),
5452 5453 ('m', 'merge', None, _('merge uncommitted changes')),
5453 5454 ('d', 'date', '', _('tipmost revision matching date'), _('DATE')),
5454 5455 ('r', 'rev', '', _('revision'), _('REV'))
5455 5456 ] + mergetoolopts,
5456 5457 _('[-C|-c|-m] [-d DATE] [[-r] REV]'))
5457 5458 def update(ui, repo, node=None, **opts):
5458 5459 """update working directory (or switch revisions)
5459 5460
5460 5461 Update the repository's working directory to the specified
5461 5462 changeset. If no changeset is specified, update to the tip of the
5462 5463 current named branch and move the active bookmark (see :hg:`help
5463 5464 bookmarks`).
5464 5465
5465 5466 Update sets the working directory's parent revision to the specified
5466 5467 changeset (see :hg:`help parents`).
5467 5468
5468 5469 If the changeset is not a descendant or ancestor of the working
5469 5470 directory's parent and there are uncommitted changes, the update is
5470 5471 aborted. With the -c/--check option, the working directory is checked
5471 5472 for uncommitted changes; if none are found, the working directory is
5472 5473 updated to the specified changeset.
5473 5474
5474 5475 .. container:: verbose
5475 5476
5476 5477 The -C/--clean, -c/--check, and -m/--merge options control what
5477 5478 happens if the working directory contains uncommitted changes.
5478 5479 At most one of them can be specified.
5479 5480
5480 5481 1. If no option is specified, and if
5481 5482 the requested changeset is an ancestor or descendant of
5482 5483 the working directory's parent, the uncommitted changes
5483 5484 are merged into the requested changeset and the merged
5484 5485 result is left uncommitted. If the requested changeset is
5485 5486 not an ancestor or descendant (that is, it is on another
5486 5487 branch), the update is aborted and the uncommitted changes
5487 5488 are preserved.
5488 5489
5489 5490 2. With the -m/--merge option, the update is allowed even if the
5490 5491 requested changeset is not an ancestor or descendant of
5491 5492 the working directory's parent.
5492 5493
5493 5494 3. With the -c/--check option, the update is aborted and the
5494 5495 uncommitted changes are preserved.
5495 5496
5496 5497 4. With the -C/--clean option, uncommitted changes are discarded and
5497 5498 the working directory is updated to the requested changeset.
5498 5499
5499 5500 To cancel an uncommitted merge (and lose your changes), use
5500 5501 :hg:`merge --abort`.
5501 5502
5502 5503 Use null as the changeset to remove the working directory (like
5503 5504 :hg:`clone -U`).
5504 5505
5505 5506 If you want to revert just one file to an older revision, use
5506 5507 :hg:`revert [-r REV] NAME`.
5507 5508
5508 5509 See :hg:`help dates` for a list of formats valid for -d/--date.
5509 5510
5510 5511 Returns 0 on success, 1 if there are unresolved files.
5511 5512 """
5512 5513 rev = opts.get(r'rev')
5513 5514 date = opts.get(r'date')
5514 5515 clean = opts.get(r'clean')
5515 5516 check = opts.get(r'check')
5516 5517 merge = opts.get(r'merge')
5517 5518 if rev and node:
5518 5519 raise error.Abort(_("please specify just one revision"))
5519 5520
5520 5521 if ui.configbool('commands', 'update.requiredest'):
5521 5522 if not node and not rev and not date:
5522 5523 raise error.Abort(_('you must specify a destination'),
5523 5524 hint=_('for example: hg update ".::"'))
5524 5525
5525 5526 if rev is None or rev == '':
5526 5527 rev = node
5527 5528
5528 5529 if date and rev is not None:
5529 5530 raise error.Abort(_("you can't specify a revision and a date"))
5530 5531
5531 5532 if len([x for x in (clean, check, merge) if x]) > 1:
5532 5533 raise error.Abort(_("can only specify one of -C/--clean, -c/--check, "
5533 5534 "or -m/--merge"))
5534 5535
5535 5536 updatecheck = None
5536 5537 if check:
5537 5538 updatecheck = 'abort'
5538 5539 elif merge:
5539 5540 updatecheck = 'none'
5540 5541
5541 5542 with repo.wlock():
5542 5543 cmdutil.clearunfinished(repo)
5543 5544
5544 5545 if date:
5545 5546 rev = cmdutil.finddate(ui, repo, date)
5546 5547
5547 5548 # if we defined a bookmark, we have to remember the original name
5548 5549 brev = rev
5549 5550 if rev:
5550 5551 repo = scmutil.unhidehashlikerevs(repo, [rev], 'nowarn')
5551 5552 ctx = scmutil.revsingle(repo, rev, rev)
5552 5553 rev = ctx.rev()
5553 5554 if ctx.hidden():
5554 5555 ctxstr = ctx.hex()[:12]
5555 5556 ui.warn(_("updating to a hidden changeset %s\n") % ctxstr)
5556 5557
5557 5558 if ctx.obsolete():
5558 5559 obsfatemsg = obsutil._getfilteredreason(repo, ctxstr, ctx)
5559 5560 ui.warn("(%s)\n" % obsfatemsg)
5560 5561
5561 5562 repo.ui.setconfig('ui', 'forcemerge', opts.get(r'tool'), 'update')
5562 5563
5563 5564 return hg.updatetotally(ui, repo, rev, brev, clean=clean,
5564 5565 updatecheck=updatecheck)
5565 5566
5566 5567 @command('verify', [])
5567 5568 def verify(ui, repo):
5568 5569 """verify the integrity of the repository
5569 5570
5570 5571 Verify the integrity of the current repository.
5571 5572
5572 5573 This will perform an extensive check of the repository's
5573 5574 integrity, validating the hashes and checksums of each entry in
5574 5575 the changelog, manifest, and tracked files, as well as the
5575 5576 integrity of their crosslinks and indices.
5576 5577
5577 5578 Please see https://mercurial-scm.org/wiki/RepositoryCorruption
5578 5579 for more information about recovery from corruption of the
5579 5580 repository.
5580 5581
5581 5582 Returns 0 on success, 1 if errors are encountered.
5582 5583 """
5583 5584 return hg.verify(repo)
5584 5585
5585 5586 @command('version', [] + formatteropts, norepo=True, cmdtype=readonly)
5586 5587 def version_(ui, **opts):
5587 5588 """output version and copyright information"""
5588 5589 opts = pycompat.byteskwargs(opts)
5589 5590 if ui.verbose:
5590 5591 ui.pager('version')
5591 5592 fm = ui.formatter("version", opts)
5592 5593 fm.startitem()
5593 5594 fm.write("ver", _("Mercurial Distributed SCM (version %s)\n"),
5594 5595 util.version())
5595 5596 license = _(
5596 5597 "(see https://mercurial-scm.org for more information)\n"
5597 5598 "\nCopyright (C) 2005-2018 Matt Mackall and others\n"
5598 5599 "This is free software; see the source for copying conditions. "
5599 5600 "There is NO\nwarranty; "
5600 5601 "not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.\n"
5601 5602 )
5602 5603 if not ui.quiet:
5603 5604 fm.plain(license)
5604 5605
5605 5606 if ui.verbose:
5606 5607 fm.plain(_("\nEnabled extensions:\n\n"))
5607 5608 # format names and versions into columns
5608 5609 names = []
5609 5610 vers = []
5610 5611 isinternals = []
5611 5612 for name, module in extensions.extensions():
5612 5613 names.append(name)
5613 5614 vers.append(extensions.moduleversion(module) or None)
5614 5615 isinternals.append(extensions.ismoduleinternal(module))
5615 5616 fn = fm.nested("extensions")
5616 5617 if names:
5617 5618 namefmt = " %%-%ds " % max(len(n) for n in names)
5618 5619 places = [_("external"), _("internal")]
5619 5620 for n, v, p in zip(names, vers, isinternals):
5620 5621 fn.startitem()
5621 5622 fn.condwrite(ui.verbose, "name", namefmt, n)
5622 5623 if ui.verbose:
5623 5624 fn.plain("%s " % places[p])
5624 5625 fn.data(bundled=p)
5625 5626 fn.condwrite(ui.verbose and v, "ver", "%s", v)
5626 5627 if ui.verbose:
5627 5628 fn.plain("\n")
5628 5629 fn.end()
5629 5630 fm.end()
5630 5631
5631 5632 def loadcmdtable(ui, name, cmdtable):
5632 5633 """Load command functions from specified cmdtable
5633 5634 """
5634 5635 overrides = [cmd for cmd in cmdtable if cmd in table]
5635 5636 if overrides:
5636 5637 ui.warn(_("extension '%s' overrides commands: %s\n")
5637 5638 % (name, " ".join(overrides)))
5638 5639 table.update(cmdtable)
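
As a rough sketch of the cmdtable contract this function consumes (the
extension and command names here are hypothetical, not part of this
change)::

  from mercurial import registrar

  cmdtable = {}
  command = registrar.command(cmdtable)

  @command('hello', [], 'hg hello')
  def hello(ui, repo, **opts):
      """print a greeting (illustrative only)"""
      ui.write('hello from %s\n' % repo.root)

Mercurial then calls loadcmdtable(ui, 'hello', cmdtable) when the
extension is loaded, warning if any command name overrides a built-in.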
@@ -1,2310 +1,2338 b''
1 1 # exchange.py - utility to exchange data between repos.
2 2 #
3 3 # Copyright 2005-2007 Matt Mackall <mpm@selenic.com>
4 4 #
5 5 # This software may be used and distributed according to the terms of the
6 6 # GNU General Public License version 2 or any later version.
7 7
8 8 from __future__ import absolute_import
9 9
10 10 import collections
11 11 import errno
12 12 import hashlib
13 13
14 14 from .i18n import _
15 15 from .node import (
16 16 bin,
17 17 hex,
18 18 nullid,
19 19 )
20 20 from .thirdparty import (
21 21 attr,
22 22 )
23 23 from . import (
24 24 bookmarks as bookmod,
25 25 bundle2,
26 26 changegroup,
27 27 discovery,
28 28 error,
29 29 lock as lockmod,
30 30 logexchange,
31 31 obsolete,
32 32 phases,
33 33 pushkey,
34 34 pycompat,
35 35 scmutil,
36 36 sslutil,
37 37 streamclone,
38 38 url as urlmod,
39 39 util,
40 40 )
41 41 from .utils import (
42 42 stringutil,
43 43 )
44 44
45 45 urlerr = util.urlerr
46 46 urlreq = util.urlreq
47 47
48 48 # Maps bundle version human names to changegroup versions.
49 49 _bundlespeccgversions = {'v1': '01',
50 50 'v2': '02',
51 51 'packed1': 's1',
52 52 'bundle2': '02', #legacy
53 53 }
54 54
55 # Maps bundle version to content options, used to choose which parts to bundle
56 _bundlespeccontentopts = {
57 'v1': {
58 'changegroup': True,
59 'cg.version': '01',
60 'obsolescence': False,
61 'phases': False,
62 'tagsfnodescache': False,
63 'revbranchcache': False
64 },
65 'v2': {
66 'changegroup': True,
67 'cg.version': '02',
68 'obsolescence': False,
69 'phases': False,
70 'tagsfnodescache': True,
71 'revbranchcache': True
72 },
73 'packed1' : {
74 'cg.version': 's1'
75 }
76 }
77 _bundlespeccontentopts['bundle2'] = _bundlespeccontentopts['v2']
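# Illustration (not part of this change): a caller copies the entry for
# its version before tweaking it, e.g.
#   contentopts = _bundlespeccontentopts.get('v2', {}).copy()
#   assert contentopts['cg.version'] == '02'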
78
55 79 # Compression engines allowed in version 1. THIS SHOULD NEVER CHANGE.
56 80 _bundlespecv1compengines = {'gzip', 'bzip2', 'none'}
57 81
58 82 @attr.s
59 83 class bundlespec(object):
60 84 compression = attr.ib()
61 85 version = attr.ib()
62 86 params = attr.ib()
87 contentopts = attr.ib()
63 88
64 89 def parsebundlespec(repo, spec, strict=True, externalnames=False):
65 90 """Parse a bundle string specification into parts.
66 91
67 92 Bundle specifications denote a well-defined bundle/exchange format.
68 93 The content of a given specification should not change over time in
69 94 order to ensure that bundles produced by a newer version of Mercurial are
70 95 readable from an older version.
71 96
72 97 The string currently has the form:
73 98
74 99 <compression>-<type>[;<parameter0>[;<parameter1>]]
75 100
76 101 Where <compression> is one of the supported compression formats
77 102 and <type> is (currently) a version string. A ";" can follow the type and
78 103 all text afterwards is interpreted as URI encoded, ";" delimited key=value
79 104 pairs.
80 105
81 106 If ``strict`` is True (the default) <compression> is required. Otherwise,
82 107 it is optional.
83 108
84 109 If ``externalnames`` is False (the default), the human-centric names will
85 110 be converted to their internal representation.
86 111
87 112 Returns a bundlespec object of (compression, version, parameters, contentopts).
88 113 Compression will be ``None`` if not in strict mode and a compression isn't
89 114 defined.
90 115
91 116 An ``InvalidBundleSpecification`` is raised when the specification is
92 117 not syntactically well formed.
93 118
94 119 An ``UnsupportedBundleSpecification`` is raised when the compression or
95 120 bundle type/version is not recognized.
96 121
97 122 Note: this function will likely eventually return a more complex data
98 123 structure, including bundle2 part information.
99 124 """
100 125 def parseparams(s):
101 126 if ';' not in s:
102 127 return s, {}
103 128
104 129 params = {}
105 130 version, paramstr = s.split(';', 1)
106 131
107 132 for p in paramstr.split(';'):
108 133 if '=' not in p:
109 134 raise error.InvalidBundleSpecification(
110 135 _('invalid bundle specification: '
111 136 'missing "=" in parameter: %s') % p)
112 137
113 138 key, value = p.split('=', 1)
114 139 key = urlreq.unquote(key)
115 140 value = urlreq.unquote(value)
116 141 params[key] = value
117 142
118 143 return version, params
119 144
120 145
121 146 if strict and '-' not in spec:
122 147 raise error.InvalidBundleSpecification(
123 148 _('invalid bundle specification; '
124 149 'must be prefixed with compression: %s') % spec)
125 150
126 151 if '-' in spec:
127 152 compression, version = spec.split('-', 1)
128 153
129 154 if compression not in util.compengines.supportedbundlenames:
130 155 raise error.UnsupportedBundleSpecification(
131 156 _('%s compression is not supported') % compression)
132 157
133 158 version, params = parseparams(version)
134 159
135 160 if version not in _bundlespeccgversions:
136 161 raise error.UnsupportedBundleSpecification(
137 162 _('%s is not a recognized bundle version') % version)
138 163 else:
139 164 # Value could be just the compression or just the version, in which
140 165 # case some defaults are assumed (but only when not in strict mode).
141 166 assert not strict
142 167
143 168 spec, params = parseparams(spec)
144 169
145 170 if spec in util.compengines.supportedbundlenames:
146 171 compression = spec
147 172 version = 'v1'
148 173 # Generaldelta repos require v2.
149 174 if 'generaldelta' in repo.requirements:
150 175 version = 'v2'
151 176 # Modern compression engines require v2.
152 177 if compression not in _bundlespecv1compengines:
153 178 version = 'v2'
154 179 elif spec in _bundlespeccgversions:
155 180 if spec == 'packed1':
156 181 compression = 'none'
157 182 else:
158 183 compression = 'bzip2'
159 184 version = spec
160 185 else:
161 186 raise error.UnsupportedBundleSpecification(
162 187 _('%s is not a recognized bundle specification') % spec)
163 188
164 189 # Bundle version 1 only supports a known set of compression engines.
165 190 if version == 'v1' and compression not in _bundlespecv1compengines:
166 191 raise error.UnsupportedBundleSpecification(
167 192 _('compression engine %s is not supported on v1 bundles') %
168 193 compression)
169 194
170 195 # The specification for packed1 can optionally declare the data formats
171 196 # required to apply it. If we see this metadata, compare against what the
172 197 # repo supports and error if the bundle isn't compatible.
173 198 if version == 'packed1' and 'requirements' in params:
174 199 requirements = set(params['requirements'].split(','))
175 200 missingreqs = requirements - repo.supportedformats
176 201 if missingreqs:
177 202 raise error.UnsupportedBundleSpecification(
178 203 _('missing support for repository features: %s') %
179 204 ', '.join(sorted(missingreqs)))
180 205
206 # Compute contentopts based on the version
207 contentopts = _bundlespeccontentopts.get(version, {}).copy()
208
181 209 if not externalnames:
182 210 engine = util.compengines.forbundlename(compression)
183 211 compression = engine.bundletype()[1]
184 212 version = _bundlespeccgversions[version]
185 213
186 return bundlespec(compression, version, params)
214 return bundlespec(compression, version, params, contentopts)
187 215
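For illustration, parsing a common spec now also yields the content
options (values shown after conversion to internal names)::

  spec = parsebundlespec(repo, 'gzip-v2')
  # spec.compression == 'GZ', spec.version == '02'
  # spec.contentopts['tagsfnodescache'] is True
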
188 216 def readbundle(ui, fh, fname, vfs=None):
189 217 header = changegroup.readexactly(fh, 4)
190 218
191 219 alg = None
192 220 if not fname:
193 221 fname = "stream"
194 222 if not header.startswith('HG') and header.startswith('\0'):
195 223 fh = changegroup.headerlessfixup(fh, header)
196 224 header = "HG10"
197 225 alg = 'UN'
198 226 elif vfs:
199 227 fname = vfs.join(fname)
200 228
201 229 magic, version = header[0:2], header[2:4]
202 230
203 231 if magic != 'HG':
204 232 raise error.Abort(_('%s: not a Mercurial bundle') % fname)
205 233 if version == '10':
206 234 if alg is None:
207 235 alg = changegroup.readexactly(fh, 2)
208 236 return changegroup.cg1unpacker(fh, alg)
209 237 elif version.startswith('2'):
210 238 return bundle2.getunbundler(ui, fh, magicstring=magic + version)
211 239 elif version == 'S1':
212 240 return streamclone.streamcloneapplier(fh)
213 241 else:
214 242 raise error.Abort(_('%s: unknown bundle version %s') % (fname, version))
215 243
216 244 def _formatrequirementsspec(requirements):
217 245 return urlreq.quote(','.join(sorted(requirements)))
218 246
219 247 def _formatrequirementsparams(requirements):
220 248 requirements = _formatrequirementsspec(requirements)
221 249 params = "%s%s" % (urlreq.quote("requirements="), requirements)
222 250 return params
223 251
224 252 def getbundlespec(ui, fh):
225 253 """Infer the bundlespec from a bundle file handle.
226 254
227 255 The input file handle is seeked and the original seek position is not
228 256 restored.
229 257 """
230 258 def speccompression(alg):
231 259 try:
232 260 return util.compengines.forbundletype(alg).bundletype()[0]
233 261 except KeyError:
234 262 return None
235 263
236 264 b = readbundle(ui, fh, None)
237 265 if isinstance(b, changegroup.cg1unpacker):
238 266 alg = b._type
239 267 if alg == '_truncatedBZ':
240 268 alg = 'BZ'
241 269 comp = speccompression(alg)
242 270 if not comp:
243 271 raise error.Abort(_('unknown compression algorithm: %s') % alg)
244 272 return '%s-v1' % comp
245 273 elif isinstance(b, bundle2.unbundle20):
246 274 if 'Compression' in b.params:
247 275 comp = speccompression(b.params['Compression'])
248 276 if not comp:
249 277 raise error.Abort(_('unknown compression algorithm: %s') % comp)
250 278 else:
251 279 comp = 'none'
252 280
253 281 version = None
254 282 for part in b.iterparts():
255 283 if part.type == 'changegroup':
256 284 version = part.params['version']
257 285 if version in ('01', '02'):
258 286 version = 'v2'
259 287 else:
260 288 raise error.Abort(_('changegroup version %s does not have '
261 289 'a known bundlespec') % version,
262 290 hint=_('try upgrading your Mercurial '
263 291 'client'))
264 292
265 293 if not version:
266 294 raise error.Abort(_('could not identify changegroup version in '
267 295 'bundle'))
268 296
269 297 return '%s-%s' % (comp, version)
270 298 elif isinstance(b, streamclone.streamcloneapplier):
271 299 requirements = streamclone.readbundle1header(fh)[2]
272 300 return 'none-packed1;%s' % _formatrequirementsparams(requirements)
273 301 else:
274 302 raise error.Abort(_('unknown bundle type: %s') % b)
275 303
276 304 def _computeoutgoing(repo, heads, common):
277 305 """Computes which revs are outgoing given a set of common
278 306 and a set of heads.
279 307
280 308 This is a separate function so extensions can have access to
281 309 the logic.
282 310
283 311 Returns a discovery.outgoing object.
284 312 """
285 313 cl = repo.changelog
286 314 if common:
287 315 hasnode = cl.hasnode
288 316 common = [n for n in common if hasnode(n)]
289 317 else:
290 318 common = [nullid]
291 319 if not heads:
292 320 heads = cl.heads()
293 321 return discovery.outgoing(repo, common, heads)
294 322
295 323 def _forcebundle1(op):
296 324 """return true if a pull/push must use bundle1
297 325
298 326 This function is used to allow testing of the older bundle version"""
299 327 ui = op.repo.ui
300 328 # The goal of this config is to allow developers to choose the bundle
301 329 # version used during exchange. This is especially handy during tests.
302 330 # Value is a list of bundle versions to pick from; the highest version
303 331 # should be used.
304 332 #
305 333 # developer config: devel.legacy.exchange
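# e.g. in a test hgrc (illustrative):
#   [devel]
#   legacy.exchange = bundle1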
306 334 exchange = ui.configlist('devel', 'legacy.exchange')
307 335 forcebundle1 = 'bundle2' not in exchange and 'bundle1' in exchange
308 336 return forcebundle1 or not op.remote.capable('bundle2')
309 337
310 338 class pushoperation(object):
311 339 An object that represents a single push operation
312 340
313 341 Its purpose is to carry push related state and very common operations.
314 342
315 343 A new pushoperation should be created at the beginning of each push and
316 344 discarded afterward.
317 345 """
318 346
319 347 def __init__(self, repo, remote, force=False, revs=None, newbranch=False,
320 348 bookmarks=(), pushvars=None):
321 349 # repo we push from
322 350 self.repo = repo
323 351 self.ui = repo.ui
324 352 # repo we push to
325 353 self.remote = remote
326 354 # force option provided
327 355 self.force = force
328 356 # revs to be pushed (None is "all")
329 357 self.revs = revs
331 359 # bookmarks explicitly pushed
331 359 self.bookmarks = bookmarks
332 360 # allow push of new branch
333 361 self.newbranch = newbranch
334 362 # steps already performed
335 363 # (used to check which steps have already been performed through bundle2)
336 364 self.stepsdone = set()
337 365 # Integer version of the changegroup push result
338 366 # - None means nothing to push
339 367 # - 0 means HTTP error
340 368 # - 1 means we pushed and remote head count is unchanged *or*
341 369 # we have outgoing changesets but refused to push
342 370 # - other values as described by addchangegroup()
343 371 self.cgresult = None
344 372 # Boolean value for the bookmark push
345 373 self.bkresult = None
346 374 # discover.outgoing object (contains common and outgoing data)
347 375 self.outgoing = None
348 376 # all remote topological heads before the push
349 377 self.remoteheads = None
350 378 # Details of the remote branch pre and post push
351 379 #
352 380 # mapping: {'branch': ([remoteheads],
353 381 # [newheads],
354 382 # [unsyncedheads],
355 383 # [discardedheads])}
356 384 # - branch: the branch name
357 385 # - remoteheads: the list of remote heads known locally
358 386 # None if the branch is new
359 387 # - newheads: the new remote heads (known locally) with outgoing pushed
360 388 # - unsyncedheads: the list of remote heads unknown locally.
361 389 # - discardedheads: the list of remote heads made obsolete by the push
362 390 self.pushbranchmap = None
363 391 # testable as a boolean indicating if any nodes are missing locally.
364 392 self.incoming = None
365 393 # summary of the remote phase situation
366 394 self.remotephases = None
367 395 # phase changes that must be pushed alongside the changesets
368 396 self.outdatedphases = None
369 397 # phase changes that must be pushed if the changeset push fails
370 398 self.fallbackoutdatedphases = None
371 399 # outgoing obsmarkers
372 400 self.outobsmarkers = set()
373 401 # outgoing bookmarks
374 402 self.outbookmarks = []
375 403 # transaction manager
376 404 self.trmanager = None
377 405 # map { pushkey partid -> callback handling failure}
378 406 # used to handle exception from mandatory pushkey part failure
379 407 self.pkfailcb = {}
380 408 # an iterable of pushvars or None
381 409 self.pushvars = pushvars
382 410
383 411 @util.propertycache
384 412 def futureheads(self):
385 413 """future remote heads if the changeset push succeeds"""
386 414 return self.outgoing.missingheads
387 415
388 416 @util.propertycache
389 417 def fallbackheads(self):
390 418 """future remote heads if the changeset push fails"""
391 419 if self.revs is None:
392 420 # no target to push; all common heads are relevant
393 421 return self.outgoing.commonheads
394 422 unfi = self.repo.unfiltered()
395 423 # I want cheads = heads(::missingheads and ::commonheads)
396 424 # (missingheads is revs with secret changesets filtered out)
397 425 #
398 426 # This can be expressed as:
399 427 # cheads = ( (missingheads and ::commonheads)
400 428 #            + (commonheads and ::missingheads)
401 429 #          )
402 430 #
403 431 # while trying to push we already computed the following:
404 432 # common = (::commonheads)
405 433 # missing = ((commonheads::missingheads) - commonheads)
406 434 #
407 435 # We can pick:
408 436 # * missingheads part of common (::commonheads)
409 437 common = self.outgoing.common
410 438 nm = self.repo.changelog.nodemap
411 439 cheads = [node for node in self.revs if nm[node] in common]
412 440 # and
413 441 # * commonheads that are parents of the missing set
414 442 revset = unfi.set('%ln and parents(roots(%ln))',
415 443 self.outgoing.commonheads,
416 444 self.outgoing.missing)
417 445 cheads.extend(c.node() for c in revset)
418 446 return cheads
419 447
420 448 @property
421 449 def commonheads(self):
422 450 """set of all common heads after changeset bundle push"""
423 451 if self.cgresult:
424 452 return self.futureheads
425 453 else:
426 454 return self.fallbackheads
427 455
428 456 # mapping of messages used when pushing bookmarks
429 457 bookmsgmap = {'update': (_("updating bookmark %s\n"),
430 458 _('updating bookmark %s failed!\n')),
431 459 'export': (_("exporting bookmark %s\n"),
432 460 _('exporting bookmark %s failed!\n')),
433 461 'delete': (_("deleting remote bookmark %s\n"),
434 462 _('deleting remote bookmark %s failed!\n')),
435 463 }
436 464
437 465
438 466 def push(repo, remote, force=False, revs=None, newbranch=False, bookmarks=(),
439 467 opargs=None):
440 468 '''Push outgoing changesets (limited by revs) from a local
441 469 repository to remote. Returns the ``pushoperation``; its ``cgresult`` is:
442 470 - None means nothing to push
443 471 - 0 means HTTP error
444 472 - 1 means we pushed and remote head count is unchanged *or*
445 473 we have outgoing changesets but refused to push
446 474 - other values as described by addchangegroup()
447 475 '''
448 476 if opargs is None:
449 477 opargs = {}
450 478 pushop = pushoperation(repo, remote, force, revs, newbranch, bookmarks,
451 479 **pycompat.strkwargs(opargs))
452 480 if pushop.remote.local():
453 481 missing = (set(pushop.repo.requirements)
454 482 - pushop.remote.local().supported)
455 483 if missing:
456 484 msg = _("required features are not"
457 485 " supported in the destination:"
458 486 " %s") % (', '.join(sorted(missing)))
459 487 raise error.Abort(msg)
460 488
461 489 if not pushop.remote.canpush():
462 490 raise error.Abort(_("destination does not support push"))
463 491
464 492 if not pushop.remote.capable('unbundle'):
465 493 raise error.Abort(_('cannot push: destination does not support the '
466 494 'unbundle wire protocol command'))
467 495
468 496 # get lock as we might write phase data
469 497 wlock = lock = None
470 498 try:
471 499 # bundle2 push may receive a reply bundle touching bookmarks or other
472 500 # things requiring the wlock. Take it now to ensure proper ordering.
473 501 maypushback = pushop.ui.configbool('experimental', 'bundle2.pushback')
474 502 if (not _forcebundle1(pushop)) and maypushback:
475 503 wlock = pushop.repo.wlock()
476 504 lock = pushop.repo.lock()
477 505 pushop.trmanager = transactionmanager(pushop.repo,
478 506 'push-response',
479 507 pushop.remote.url())
480 508 except IOError as err:
481 509 if err.errno != errno.EACCES:
482 510 raise
483 511 # source repo cannot be locked.
484 512 # We do not abort the push, but just disable the local phase
485 513 # synchronisation.
486 514 msg = 'cannot lock source repository: %s\n' % err
487 515 pushop.ui.debug(msg)
488 516
489 517 with wlock or util.nullcontextmanager(), \
490 518 lock or util.nullcontextmanager(), \
491 519 pushop.trmanager or util.nullcontextmanager():
492 520 pushop.repo.checkpush(pushop)
493 521 _pushdiscovery(pushop)
494 522 if not _forcebundle1(pushop):
495 523 _pushbundle2(pushop)
496 524 _pushchangeset(pushop)
497 525 _pushsyncphase(pushop)
498 526 _pushobsolete(pushop)
499 527 _pushbookmark(pushop)
500 528
501 529 return pushop
502 530
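# Illustrative usage sketch (``repo`` and ``remote`` as documented in
# push() above; the helper name is hypothetical): push a single head and
# inspect the changegroup result stored on the returned operation.
def _pushonehead(repo, remote, node):
    pushop = push(repo, remote, revs=[node])
    if pushop.cgresult == 0:
        repo.ui.warn('push failed (HTTP error)\n')
    return pushop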
503 531 # list of steps to perform discovery before push
504 532 pushdiscoveryorder = []
505 533
506 534 # Mapping between step name and function
507 535 #
508 536 # This exists to help extensions wrap steps if necessary
509 537 pushdiscoverymapping = {}
510 538
511 539 def pushdiscovery(stepname):
512 540 """decorator for function performing discovery before push
513 541
514 542 The function is added to the step -> function mapping and appended to the
515 543 list of steps. Beware that decorated functions will be added in order (this
516 544 may matter).
517 545 
518 546 You can only use this decorator for a new step; if you want to wrap a step
519 547 from an extension, change the pushdiscoverymapping dictionary directly."""
520 548 def dec(func):
521 549 assert stepname not in pushdiscoverymapping
522 550 pushdiscoverymapping[stepname] = func
523 551 pushdiscoveryorder.append(stepname)
524 552 return func
525 553 return dec
526 554
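# Illustrative sketch: registering a brand new discovery step with the
# decorator above. The step name and its logic are hypothetical; a real
# extension would register its step at module load time, exactly like
# the core steps below.
@pushdiscovery('example-step')
def _pushdiscoveryexample(pushop):
    pushop.ui.debug('example discovery step ran\n')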
527 555 def _pushdiscovery(pushop):
528 556 """Run all discovery steps"""
529 557 for stepname in pushdiscoveryorder:
530 558 step = pushdiscoverymapping[stepname]
531 559 step(pushop)
532 560
533 561 @pushdiscovery('changeset')
534 562 def _pushdiscoverychangeset(pushop):
535 563 """discover the changeset that need to be pushed"""
536 564 fci = discovery.findcommonincoming
537 565 if pushop.revs:
538 566 commoninc = fci(pushop.repo, pushop.remote, force=pushop.force,
539 567 ancestorsof=pushop.revs)
540 568 else:
541 569 commoninc = fci(pushop.repo, pushop.remote, force=pushop.force)
542 570 common, inc, remoteheads = commoninc
543 571 fco = discovery.findcommonoutgoing
544 572 outgoing = fco(pushop.repo, pushop.remote, onlyheads=pushop.revs,
545 573 commoninc=commoninc, force=pushop.force)
546 574 pushop.outgoing = outgoing
547 575 pushop.remoteheads = remoteheads
548 576 pushop.incoming = inc
549 577
550 578 @pushdiscovery('phase')
551 579 def _pushdiscoveryphase(pushop):
552 580 """discover the phase that needs to be pushed
553 581
554 582 (computed for both success and failure case for changesets push)"""
555 583 outgoing = pushop.outgoing
556 584 unfi = pushop.repo.unfiltered()
557 585 remotephases = pushop.remote.listkeys('phases')
558 586 if (pushop.ui.configbool('ui', '_usedassubrepo')
559 587 and remotephases # server supports phases
560 588 and not pushop.outgoing.missing # no changesets to be pushed
561 589 and remotephases.get('publishing', False)):
562 590 # When:
563 591 # - this is a subrepo push
564 592 # - and the remote supports phases
565 593 # - and no changesets are to be pushed
566 594 # - and the remote is publishing
567 595 # we may be in the issue 3871 case!
568 596 # We drop the courtesy phase synchronisation that would
569 597 # otherwise publish changesets possibly still draft
570 598 # on the remote.
571 599 pushop.outdatedphases = []
572 600 pushop.fallbackoutdatedphases = []
573 601 return
574 602
575 603 pushop.remotephases = phases.remotephasessummary(pushop.repo,
576 604 pushop.fallbackheads,
577 605 remotephases)
578 606 droots = pushop.remotephases.draftroots
579 607
580 608 extracond = ''
581 609 if not pushop.remotephases.publishing:
582 610 extracond = ' and public()'
583 611 revset = 'heads((%%ln::%%ln) %s)' % extracond
584 612 # Get the list of all revs draft on remote but public here.
585 613 # XXX Beware that the revset breaks if droots is not strictly
586 614 # XXX roots; we may want to ensure it is, but that is costly
587 615 fallback = list(unfi.set(revset, droots, pushop.fallbackheads))
588 616 if not outgoing.missing:
589 617 future = fallback
590 618 else:
591 619 # add the changesets we are going to push as draft
592 620 #
593 621 # This should not be necessary for a publishing server, but because
594 622 # of an issue fixed in xxxxx we have to do it anyway.
595 623 fdroots = list(unfi.set('roots(%ln + %ln::)',
596 624 outgoing.missing, droots))
597 625 fdroots = [f.node() for f in fdroots]
598 626 future = list(unfi.set(revset, fdroots, pushop.futureheads))
599 627 pushop.outdatedphases = future
600 628 pushop.fallbackoutdatedphases = fallback
601 629
602 630 @pushdiscovery('obsmarker')
603 631 def _pushdiscoveryobsmarkers(pushop):
604 632 if (obsolete.isenabled(pushop.repo, obsolete.exchangeopt)
605 633 and pushop.repo.obsstore
606 634 and 'obsolete' in pushop.remote.listkeys('namespaces')):
607 635 repo = pushop.repo
608 636 # very naive computation, which can be quite expensive on big repos.
609 637 # However, evolution is currently slow on them anyway.
610 638 nodes = (c.node() for c in repo.set('::%ln', pushop.futureheads))
611 639 pushop.outobsmarkers = pushop.repo.obsstore.relevantmarkers(nodes)
612 640
613 641 @pushdiscovery('bookmarks')
614 642 def _pushdiscoverybookmarks(pushop):
615 643 ui = pushop.ui
616 644 repo = pushop.repo.unfiltered()
617 645 remote = pushop.remote
618 646 ui.debug("checking for updated bookmarks\n")
619 647 ancestors = ()
620 648 if pushop.revs:
621 649 revnums = map(repo.changelog.rev, pushop.revs)
622 650 ancestors = repo.changelog.ancestors(revnums, inclusive=True)
623 651 remotebookmark = remote.listkeys('bookmarks')
624 652
625 653 explicit = set([repo._bookmarks.expandname(bookmark)
626 654 for bookmark in pushop.bookmarks])
627 655
628 656 remotebookmark = bookmod.unhexlifybookmarks(remotebookmark)
629 657 comp = bookmod.comparebookmarks(repo, repo._bookmarks, remotebookmark)
630 658
631 659 def safehex(x):
632 660 if x is None:
633 661 return x
634 662 return hex(x)
635 663
636 664 def hexifycompbookmarks(bookmarks):
637 665 return [(b, safehex(scid), safehex(dcid))
638 666 for (b, scid, dcid) in bookmarks]
639 667
640 668 comp = [hexifycompbookmarks(marks) for marks in comp]
641 669 return _processcompared(pushop, ancestors, explicit, remotebookmark, comp)
642 670
643 671 def _processcompared(pushop, pushed, explicit, remotebms, comp):
644 672 """take decision on bookmark to pull from the remote bookmark
645 673
646 674 Exist to help extensions who want to alter this behavior.
647 675 """
648 676 addsrc, adddst, advsrc, advdst, diverge, differ, invalid, same = comp
649 677
650 678 repo = pushop.repo
651 679
652 680 for b, scid, dcid in advsrc:
653 681 if b in explicit:
654 682 explicit.remove(b)
655 683 if not pushed or repo[scid].rev() in pushed:
656 684 pushop.outbookmarks.append((b, dcid, scid))
657 685 # search for added bookmarks
658 686 for b, scid, dcid in addsrc:
659 687 if b in explicit:
660 688 explicit.remove(b)
661 689 pushop.outbookmarks.append((b, '', scid))
662 690 # search for overwritten bookmarks
663 691 for b, scid, dcid in list(advdst) + list(diverge) + list(differ):
664 692 if b in explicit:
665 693 explicit.remove(b)
666 694 pushop.outbookmarks.append((b, dcid, scid))
667 695 # search for bookmarks to delete
668 696 for b, scid, dcid in adddst:
669 697 if b in explicit:
670 698 explicit.remove(b)
671 699 # treat as "deleted locally"
672 700 pushop.outbookmarks.append((b, dcid, ''))
673 701 # identical bookmarks shouldn't get reported
674 702 for b, scid, dcid in same:
675 703 if b in explicit:
676 704 explicit.remove(b)
677 705
678 706 if explicit:
679 707 explicit = sorted(explicit)
680 708 # we should probably list all of them
681 709 pushop.ui.warn(_('bookmark %s does not exist on the local '
682 710 'or remote repository!\n') % explicit[0])
683 711 pushop.bkresult = 2
684 712
685 713 pushop.outbookmarks.sort()
686 714
687 715 def _pushcheckoutgoing(pushop):
688 716 outgoing = pushop.outgoing
689 717 unfi = pushop.repo.unfiltered()
690 718 if not outgoing.missing:
691 719 # nothing to push
692 720 scmutil.nochangesfound(unfi.ui, unfi, outgoing.excluded)
693 721 return False
694 722 # something to push
695 723 if not pushop.force:
696 724 # if repo.obsstore is empty --> no obsolete markers,
697 725 # so we can skip the iteration
698 726 if unfi.obsstore:
699 727 # these messages are here for 80-char limit reasons
700 728 mso = _("push includes obsolete changeset: %s!")
701 729 mspd = _("push includes phase-divergent changeset: %s!")
702 730 mscd = _("push includes content-divergent changeset: %s!")
703 731 mst = {"orphan": _("push includes orphan changeset: %s!"),
704 732 "phase-divergent": mspd,
705 733 "content-divergent": mscd}
706 734 # If there is at least one obsolete or unstable
707 735 # changeset in missing, at least one of the missing
708 736 # heads will be obsolete or unstable. So checking
709 737 # heads only is ok.
710 738 for node in outgoing.missingheads:
711 739 ctx = unfi[node]
712 740 if ctx.obsolete():
713 741 raise error.Abort(mso % ctx)
714 742 elif ctx.isunstable():
715 743 # TODO print more than one instability in the abort
716 744 # message
717 745 raise error.Abort(mst[ctx.instabilities()[0]] % ctx)
718 746
719 747 discovery.checkheads(pushop)
720 748 return True
721 749
722 750 # List of names of steps to perform for an outgoing bundle2, order matters.
723 751 b2partsgenorder = []
724 752
725 753 # Mapping between step name and function
726 754 #
727 755 # This exists to help extensions wrap steps if necessary
728 756 b2partsgenmapping = {}
729 757
730 758 def b2partsgenerator(stepname, idx=None):
731 759 """decorator for function generating bundle2 part
732 760
733 761 The function is added to the step -> function mapping and appended to the
734 762 list of steps. Beware that decorated functions will be added in order
735 763 (this may matter).
736 764
737 765 You can only use this decorator for new steps; if you want to wrap a step
738 766 from an extension, change the b2partsgenmapping dictionary directly."""
739 767 def dec(func):
740 768 assert stepname not in b2partsgenmapping
741 769 b2partsgenmapping[stepname] = func
742 770 if idx is None:
743 771 b2partsgenorder.append(stepname)
744 772 else:
745 773 b2partsgenorder.insert(idx, stepname)
746 774 return func
747 775 return dec
748 776
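# Illustrative sketch: wrapping an existing part generator from an
# extension by changing b2partsgenmapping directly, as the docstring
# above suggests. The helper name is hypothetical.
def _wrapb2partsgen(stepname, wrapper):
    orig = b2partsgenmapping[stepname]
    def wrapped(pushop, bundler):
        # ``wrapper`` receives the original generator and may call it
        return wrapper(orig, pushop, bundler)
    b2partsgenmapping[stepname] = wrapped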
749 777 def _pushb2ctxcheckheads(pushop, bundler):
750 778 """Generate race condition checking parts
751 779
752 780 Exists as an independent function to aid extensions
753 781 """
754 782 # * 'force' does not check for push races,
755 783 # * if we don't push anything, there is nothing to check.
756 784 if not pushop.force and pushop.outgoing.missingheads:
757 785 allowunrelated = 'related' in bundler.capabilities.get('checkheads', ())
758 786 emptyremote = pushop.pushbranchmap is None
759 787 if not allowunrelated or emptyremote:
760 788 bundler.newpart('check:heads', data=iter(pushop.remoteheads))
761 789 else:
762 790 affected = set()
763 791 for branch, heads in pushop.pushbranchmap.iteritems():
764 792 remoteheads, newheads, unsyncedheads, discardedheads = heads
765 793 if remoteheads is not None:
766 794 remote = set(remoteheads)
767 795 affected |= set(discardedheads) & remote
768 796 affected |= remote - set(newheads)
769 797 if affected:
770 798 data = iter(sorted(affected))
771 799 bundler.newpart('check:updated-heads', data=data)
772 800
773 801 def _pushing(pushop):
774 802 """return True if we are pushing anything"""
775 803 return bool(pushop.outgoing.missing
776 804 or pushop.outdatedphases
777 805 or pushop.outobsmarkers
778 806 or pushop.outbookmarks)
779 807
780 808 @b2partsgenerator('check-bookmarks')
781 809 def _pushb2checkbookmarks(pushop, bundler):
782 810 """insert bookmark move checking"""
783 811 if not _pushing(pushop) or pushop.force:
784 812 return
785 813 b2caps = bundle2.bundle2caps(pushop.remote)
786 814 hasbookmarkcheck = 'bookmarks' in b2caps
787 815 if not (pushop.outbookmarks and hasbookmarkcheck):
788 816 return
789 817 data = []
790 818 for book, old, new in pushop.outbookmarks:
791 819 old = bin(old)
792 820 data.append((book, old))
793 821 checkdata = bookmod.binaryencode(data)
794 822 bundler.newpart('check:bookmarks', data=checkdata)
795 823
796 824 @b2partsgenerator('check-phases')
797 825 def _pushb2checkphases(pushop, bundler):
798 826 """insert phase move checking"""
799 827 if not _pushing(pushop) or pushop.force:
800 828 return
801 829 b2caps = bundle2.bundle2caps(pushop.remote)
802 830 hasphaseheads = 'heads' in b2caps.get('phases', ())
803 831 if pushop.remotephases is not None and hasphaseheads:
804 832 # check that the remote phase has not changed
805 833 checks = [[] for p in phases.allphases]
806 834 checks[phases.public].extend(pushop.remotephases.publicheads)
807 835 checks[phases.draft].extend(pushop.remotephases.draftroots)
808 836 if any(checks):
809 837 for nodes in checks:
810 838 nodes.sort()
811 839 checkdata = phases.binaryencode(checks)
812 840 bundler.newpart('check:phases', data=checkdata)
813 841
814 842 @b2partsgenerator('changeset')
815 843 def _pushb2ctx(pushop, bundler):
816 844 """handle changegroup push through bundle2
817 845
818 846 addchangegroup result is stored in the ``pushop.cgresult`` attribute.
819 847 """
820 848 if 'changesets' in pushop.stepsdone:
821 849 return
822 850 pushop.stepsdone.add('changesets')
823 851 # Send known heads to the server for race detection.
824 852 if not _pushcheckoutgoing(pushop):
825 853 return
826 854 pushop.repo.prepushoutgoinghooks(pushop)
827 855
828 856 _pushb2ctxcheckheads(pushop, bundler)
829 857
830 858 b2caps = bundle2.bundle2caps(pushop.remote)
831 859 version = '01'
832 860 cgversions = b2caps.get('changegroup')
833 861 if cgversions: # 3.1 and 3.2 ship with an empty value
834 862 cgversions = [v for v in cgversions
835 863 if v in changegroup.supportedoutgoingversions(
836 864 pushop.repo)]
837 865 if not cgversions:
838 866 raise ValueError(_('no common changegroup version'))
839 867 version = max(cgversions)
840 868 cgstream = changegroup.makestream(pushop.repo, pushop.outgoing, version,
841 869 'push')
842 870 cgpart = bundler.newpart('changegroup', data=cgstream)
843 871 if cgversions:
844 872 cgpart.addparam('version', version)
845 873 if 'treemanifest' in pushop.repo.requirements:
846 874 cgpart.addparam('treemanifest', '1')
847 875 def handlereply(op):
848 876 """extract addchangegroup returns from server reply"""
849 877 cgreplies = op.records.getreplies(cgpart.id)
850 878 assert len(cgreplies['changegroup']) == 1
851 879 pushop.cgresult = cgreplies['changegroup'][0]['return']
852 880 return handlereply
853 881
854 882 @b2partsgenerator('phase')
855 883 def _pushb2phases(pushop, bundler):
856 884 """handle phase push through bundle2"""
857 885 if 'phases' in pushop.stepsdone:
858 886 return
859 887 b2caps = bundle2.bundle2caps(pushop.remote)
860 888 ui = pushop.repo.ui
861 889
862 890 legacyphase = 'phases' in ui.configlist('devel', 'legacy.exchange')
863 891 haspushkey = 'pushkey' in b2caps
864 892 hasphaseheads = 'heads' in b2caps.get('phases', ())
865 893
866 894 if hasphaseheads and not legacyphase:
867 895 return _pushb2phaseheads(pushop, bundler)
868 896 elif haspushkey:
869 897 return _pushb2phasespushkey(pushop, bundler)
870 898
871 899 def _pushb2phaseheads(pushop, bundler):
872 900 """push phase information through a bundle2 - binary part"""
873 901 pushop.stepsdone.add('phases')
874 902 if pushop.outdatedphases:
875 903 updates = [[] for p in phases.allphases]
876 904 updates[0].extend(h.node() for h in pushop.outdatedphases)
877 905 phasedata = phases.binaryencode(updates)
878 906 bundler.newpart('phase-heads', data=phasedata)
879 907
880 908 def _pushb2phasespushkey(pushop, bundler):
881 909 """push phase information through a bundle2 - pushkey part"""
882 910 pushop.stepsdone.add('phases')
883 911 part2node = []
884 912
885 913 def handlefailure(pushop, exc):
886 914 targetid = int(exc.partid)
887 915 for partid, node in part2node:
888 916 if partid == targetid:
889 917 raise error.Abort(_('updating %s to public failed') % node)
890 918
891 919 enc = pushkey.encode
892 920 for newremotehead in pushop.outdatedphases:
893 921 part = bundler.newpart('pushkey')
894 922 part.addparam('namespace', enc('phases'))
895 923 part.addparam('key', enc(newremotehead.hex()))
896 924 part.addparam('old', enc('%d' % phases.draft))
897 925 part.addparam('new', enc('%d' % phases.public))
898 926 part2node.append((part.id, newremotehead))
899 927 pushop.pkfailcb[part.id] = handlefailure
900 928
901 929 def handlereply(op):
902 930 for partid, node in part2node:
903 931 partrep = op.records.getreplies(partid)
904 932 results = partrep['pushkey']
905 933 assert len(results) <= 1
906 934 msg = None
907 935 if not results:
908 936 msg = _('server ignored update of %s to public!\n') % node
909 937 elif not int(results[0]['return']):
910 938 msg = _('updating %s to public failed!\n') % node
911 939 if msg is not None:
912 940 pushop.ui.warn(msg)
913 941 return handlereply
914 942
915 943 @b2partsgenerator('obsmarkers')
916 944 def _pushb2obsmarkers(pushop, bundler):
917 945 if 'obsmarkers' in pushop.stepsdone:
918 946 return
919 947 remoteversions = bundle2.obsmarkersversion(bundler.capabilities)
920 948 if obsolete.commonversion(remoteversions) is None:
921 949 return
922 950 pushop.stepsdone.add('obsmarkers')
923 951 if pushop.outobsmarkers:
924 952 markers = sorted(pushop.outobsmarkers)
925 953 bundle2.buildobsmarkerspart(bundler, markers)
926 954
927 955 @b2partsgenerator('bookmarks')
928 956 def _pushb2bookmarks(pushop, bundler):
929 957 """handle bookmark push through bundle2"""
930 958 if 'bookmarks' in pushop.stepsdone:
931 959 return
932 960 b2caps = bundle2.bundle2caps(pushop.remote)
933 961
934 962 legacy = pushop.repo.ui.configlist('devel', 'legacy.exchange')
935 963 legacybooks = 'bookmarks' in legacy
936 964
937 965 if not legacybooks and 'bookmarks' in b2caps:
938 966 return _pushb2bookmarkspart(pushop, bundler)
939 967 elif 'pushkey' in b2caps:
940 968 return _pushb2bookmarkspushkey(pushop, bundler)
941 969
942 970 def _bmaction(old, new):
943 971 """small utility for bookmark pushing"""
944 972 if not old:
945 973 return 'export'
946 974 elif not new:
947 975 return 'delete'
948 976 return 'update'
949 977
950 978 def _pushb2bookmarkspart(pushop, bundler):
951 979 pushop.stepsdone.add('bookmarks')
952 980 if not pushop.outbookmarks:
953 981 return
954 982
955 983 allactions = []
956 984 data = []
957 985 for book, old, new in pushop.outbookmarks:
958 986 new = bin(new)
959 987 data.append((book, new))
960 988 allactions.append((book, _bmaction(old, new)))
961 989 checkdata = bookmod.binaryencode(data)
962 990 bundler.newpart('bookmarks', data=checkdata)
963 991
964 992 def handlereply(op):
965 993 ui = pushop.ui
966 994 # if success
967 995 for book, action in allactions:
968 996 ui.status(bookmsgmap[action][0] % book)
969 997
970 998 return handlereply
971 999
972 1000 def _pushb2bookmarkspushkey(pushop, bundler):
973 1001 pushop.stepsdone.add('bookmarks')
974 1002 part2book = []
975 1003 enc = pushkey.encode
976 1004
977 1005 def handlefailure(pushop, exc):
978 1006 targetid = int(exc.partid)
979 1007 for partid, book, action in part2book:
980 1008 if partid == targetid:
981 1009 raise error.Abort(bookmsgmap[action][1].rstrip() % book)
982 1010 # we should not be called for parts we did not generate
983 1011 assert False
984 1012
985 1013 for book, old, new in pushop.outbookmarks:
986 1014 part = bundler.newpart('pushkey')
987 1015 part.addparam('namespace', enc('bookmarks'))
988 1016 part.addparam('key', enc(book))
989 1017 part.addparam('old', enc(old))
990 1018 part.addparam('new', enc(new))
991 1019 action = 'update'
992 1020 if not old:
993 1021 action = 'export'
994 1022 elif not new:
995 1023 action = 'delete'
996 1024 part2book.append((part.id, book, action))
997 1025 pushop.pkfailcb[part.id] = handlefailure
998 1026
999 1027 def handlereply(op):
1000 1028 ui = pushop.ui
1001 1029 for partid, book, action in part2book:
1002 1030 partrep = op.records.getreplies(partid)
1003 1031 results = partrep['pushkey']
1004 1032 assert len(results) <= 1
1005 1033 if not results:
1006 1034 pushop.ui.warn(_('server ignored bookmark %s update\n') % book)
1007 1035 else:
1008 1036 ret = int(results[0]['return'])
1009 1037 if ret:
1010 1038 ui.status(bookmsgmap[action][0] % book)
1011 1039 else:
1012 1040 ui.warn(bookmsgmap[action][1] % book)
1013 1041 if pushop.bkresult is not None:
1014 1042 pushop.bkresult = 1
1015 1043 return handlereply
1016 1044
1017 1045 @b2partsgenerator('pushvars', idx=0)
1018 1046 def _getbundlesendvars(pushop, bundler):
1019 1047 '''send shellvars via bundle2'''
1020 1048 pushvars = pushop.pushvars
1021 1049 if pushvars:
1022 1050 shellvars = {}
1023 1051 for raw in pushvars:
1024 1052 if '=' not in raw:
1025 1053 msg = ("unable to parse variable '%s', should follow "
1026 1054 "'KEY=VALUE' or 'KEY=' format")
1027 1055 raise error.Abort(msg % raw)
1028 1056 k, v = raw.split('=', 1)
1029 1057 shellvars[k] = v
1030 1058
1031 1059 part = bundler.newpart('pushvars')
1032 1060
1033 1061 for key, value in shellvars.iteritems():
1034 1062 part.addparam(key, value, mandatory=False)
1035 1063
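# For example (illustrative), pushvars are typically supplied on the
# command line and forwarded by the part built above:
#
#   hg push --pushvars "DEBUG=1" --pushvars "REASON=hotfix"
#
# Each KEY=VALUE pair becomes an advisory bundle2 parameter that
# server-side hooks can read from their environment.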
1036 1064 def _pushbundle2(pushop):
1037 1065 """push data to the remote using bundle2
1038 1066
1039 1067 The only currently supported type of data is changegroup, but this
1040 1068 will evolve in the future."""
1041 1069 bundler = bundle2.bundle20(pushop.ui, bundle2.bundle2caps(pushop.remote))
1042 1070 pushback = (pushop.trmanager
1043 1071 and pushop.ui.configbool('experimental', 'bundle2.pushback'))
1044 1072
1045 1073 # create reply capability
1046 1074 capsblob = bundle2.encodecaps(bundle2.getrepocaps(pushop.repo,
1047 1075 allowpushback=pushback,
1048 1076 role='client'))
1049 1077 bundler.newpart('replycaps', data=capsblob)
1050 1078 replyhandlers = []
1051 1079 for partgenname in b2partsgenorder:
1052 1080 partgen = b2partsgenmapping[partgenname]
1053 1081 ret = partgen(pushop, bundler)
1054 1082 if callable(ret):
1055 1083 replyhandlers.append(ret)
1056 1084 # do not push if nothing to push
1057 1085 if bundler.nbparts <= 1:
1058 1086 return
1059 1087 stream = util.chunkbuffer(bundler.getchunks())
1060 1088 try:
1061 1089 try:
1062 1090 reply = pushop.remote.unbundle(
1063 1091 stream, ['force'], pushop.remote.url())
1064 1092 except error.BundleValueError as exc:
1065 1093 raise error.Abort(_('missing support for %s') % exc)
1066 1094 try:
1067 1095 trgetter = None
1068 1096 if pushback:
1069 1097 trgetter = pushop.trmanager.transaction
1070 1098 op = bundle2.processbundle(pushop.repo, reply, trgetter)
1071 1099 except error.BundleValueError as exc:
1072 1100 raise error.Abort(_('missing support for %s') % exc)
1073 1101 except bundle2.AbortFromPart as exc:
1074 1102 pushop.ui.status(_('remote: %s\n') % exc)
1075 1103 if exc.hint is not None:
1076 1104 pushop.ui.status(_('remote: %s\n') % ('(%s)' % exc.hint))
1077 1105 raise error.Abort(_('push failed on remote'))
1078 1106 except error.PushkeyFailed as exc:
1079 1107 partid = int(exc.partid)
1080 1108 if partid not in pushop.pkfailcb:
1081 1109 raise
1082 1110 pushop.pkfailcb[partid](pushop, exc)
1083 1111 for rephand in replyhandlers:
1084 1112 rephand(op)
1085 1113
1086 1114 def _pushchangeset(pushop):
1087 1115 """Make the actual push of changeset bundle to remote repo"""
1088 1116 if 'changesets' in pushop.stepsdone:
1089 1117 return
1090 1118 pushop.stepsdone.add('changesets')
1091 1119 if not _pushcheckoutgoing(pushop):
1092 1120 return
1093 1121
1094 1122 # Should have verified this in push().
1095 1123 assert pushop.remote.capable('unbundle')
1096 1124
1097 1125 pushop.repo.prepushoutgoinghooks(pushop)
1098 1126 outgoing = pushop.outgoing
1099 1127 # TODO: get bundlecaps from remote
1100 1128 bundlecaps = None
1101 1129 # create a changegroup from local
1102 1130 if pushop.revs is None and not (outgoing.excluded
1103 1131 or pushop.repo.changelog.filteredrevs):
1104 1132 # push everything,
1105 1133 # use the fast path, no race possible on push
1106 1134 cg = changegroup.makechangegroup(pushop.repo, outgoing, '01', 'push',
1107 1135 fastpath=True, bundlecaps=bundlecaps)
1108 1136 else:
1109 1137 cg = changegroup.makechangegroup(pushop.repo, outgoing, '01',
1110 1138 'push', bundlecaps=bundlecaps)
1111 1139
1112 1140 # apply changegroup to remote
1113 1141 # local repo finds heads on server, finds out what
1114 1142 # revs it must push. once revs transferred, if server
1115 1143 # finds it has different heads (someone else won
1116 1144 # commit/push race), server aborts.
1117 1145 if pushop.force:
1118 1146 remoteheads = ['force']
1119 1147 else:
1120 1148 remoteheads = pushop.remoteheads
1121 1149 # ssh: return remote's addchangegroup()
1122 1150 # http: return remote's addchangegroup() or 0 for error
1123 1151 pushop.cgresult = pushop.remote.unbundle(cg, remoteheads,
1124 1152 pushop.repo.url())
1125 1153
1126 1154 def _pushsyncphase(pushop):
1127 1155 """synchronise phase information locally and remotely"""
1128 1156 cheads = pushop.commonheads
1129 1157 # even when we don't push, exchanging phase data is useful
1130 1158 remotephases = pushop.remote.listkeys('phases')
1131 1159 if (pushop.ui.configbool('ui', '_usedassubrepo')
1132 1160 and remotephases # server supports phases
1133 1161 and pushop.cgresult is None # nothing was pushed
1134 1162 and remotephases.get('publishing', False)):
1135 1163 # When:
1136 1164 # - this is a subrepo push
1137 1165 # - and the remote supports phases
1138 1166 # - and no changeset was pushed
1139 1167 # - and the remote is publishing
1140 1168 # we may be in the issue 3871 case!
1141 1169 # We drop the courtesy phase synchronisation that would
1142 1170 # otherwise publish changesets possibly still draft
1143 1171 # on the remote.
1144 1172 remotephases = {'publishing': 'True'}
1145 1173 if not remotephases: # old server or public-only reply from non-publishing
1146 1174 _localphasemove(pushop, cheads)
1147 1175 # don't push any phase data as there is nothing to push
1148 1176 else:
1149 1177 ana = phases.analyzeremotephases(pushop.repo, cheads,
1150 1178 remotephases)
1151 1179 pheads, droots = ana
1152 1180 ### Apply remote phase on local
1153 1181 if remotephases.get('publishing', False):
1154 1182 _localphasemove(pushop, cheads)
1155 1183 else: # publish = False
1156 1184 _localphasemove(pushop, pheads)
1157 1185 _localphasemove(pushop, cheads, phases.draft)
1158 1186 ### Apply local phase on remote
1159 1187
1160 1188 if pushop.cgresult:
1161 1189 if 'phases' in pushop.stepsdone:
1162 1190 # phases already pushed through bundle2
1163 1191 return
1164 1192 outdated = pushop.outdatedphases
1165 1193 else:
1166 1194 outdated = pushop.fallbackoutdatedphases
1167 1195
1168 1196 pushop.stepsdone.add('phases')
1169 1197
1170 1198 # filter heads already turned public by the push
1171 1199 outdated = [c for c in outdated if c.node() not in pheads]
1172 1200 # fallback to independent pushkey command
1173 1201 for newremotehead in outdated:
1174 1202 r = pushop.remote.pushkey('phases',
1175 1203 newremotehead.hex(),
1176 1204 ('%d' % phases.draft),
1177 1205 ('%d' % phases.public))
1178 1206 if not r:
1179 1207 pushop.ui.warn(_('updating %s to public failed!\n')
1180 1208 % newremotehead)
1181 1209
1182 1210 def _localphasemove(pushop, nodes, phase=phases.public):
1183 1211 """move <nodes> to <phase> in the local source repo"""
1184 1212 if pushop.trmanager:
1185 1213 phases.advanceboundary(pushop.repo,
1186 1214 pushop.trmanager.transaction(),
1187 1215 phase,
1188 1216 nodes)
1189 1217 else:
1190 1218 # repo is not locked, do not change any phases!
1191 1219 # Informs the user that phases should have been moved when
1192 1220 # applicable.
1193 1221 actualmoves = [n for n in nodes if phase < pushop.repo[n].phase()]
1194 1222 phasestr = phases.phasenames[phase]
1195 1223 if actualmoves:
1196 1224 pushop.ui.status(_('cannot lock source repo, skipping '
1197 1225 'local %s phase update\n') % phasestr)
1198 1226
1199 1227 def _pushobsolete(pushop):
1200 1228 """utility function to push obsolete markers to a remote"""
1201 1229 if 'obsmarkers' in pushop.stepsdone:
1202 1230 return
1203 1231 repo = pushop.repo
1204 1232 remote = pushop.remote
1205 1233 pushop.stepsdone.add('obsmarkers')
1206 1234 if pushop.outobsmarkers:
1207 1235 pushop.ui.debug('try to push obsolete markers to remote\n')
1208 1236 rslts = []
1209 1237 remotedata = obsolete._pushkeyescape(sorted(pushop.outobsmarkers))
1210 1238 for key in sorted(remotedata, reverse=True):
1211 1239 # reverse sort to ensure we end with dump0
1212 1240 data = remotedata[key]
1213 1241 rslts.append(remote.pushkey('obsolete', key, '', data))
1214 1242 if [r for r in rslts if not r]:
1215 1243 msg = _('failed to push some obsolete markers!\n')
1216 1244 repo.ui.warn(msg)
1217 1245
1218 1246 def _pushbookmark(pushop):
1219 1247 """Update bookmark position on remote"""
1220 1248 if pushop.cgresult == 0 or 'bookmarks' in pushop.stepsdone:
1221 1249 return
1222 1250 pushop.stepsdone.add('bookmarks')
1223 1251 ui = pushop.ui
1224 1252 remote = pushop.remote
1225 1253
1226 1254 for b, old, new in pushop.outbookmarks:
1227 1255 action = 'update'
1228 1256 if not old:
1229 1257 action = 'export'
1230 1258 elif not new:
1231 1259 action = 'delete'
1232 1260 if remote.pushkey('bookmarks', b, old, new):
1233 1261 ui.status(bookmsgmap[action][0] % b)
1234 1262 else:
1235 1263 ui.warn(bookmsgmap[action][1] % b)
1236 1264 # discovery may have set the value from an invalid entry
1237 1265 if pushop.bkresult is not None:
1238 1266 pushop.bkresult = 1
1239 1267
1240 1268 class pulloperation(object):
1241 1269 """A object that represent a single pull operation
1242 1270
1243 1271 It purpose is to carry pull related state and very common operation.
1244 1272
1245 1273 A new should be created at the beginning of each pull and discarded
1246 1274 afterward.
1247 1275 """
1248 1276
1249 1277 def __init__(self, repo, remote, heads=None, force=False, bookmarks=(),
1250 1278 remotebookmarks=None, streamclonerequested=None):
1251 1279 # repo we pull into
1252 1280 self.repo = repo
1253 1281 # repo we pull from
1254 1282 self.remote = remote
1255 1283 # revisions we try to pull (None is "all")
1256 1284 self.heads = heads
1257 1285 # bookmarks pulled explicitly
1258 1286 self.explicitbookmarks = [repo._bookmarks.expandname(bookmark)
1259 1287 for bookmark in bookmarks]
1260 1288 # do we force pull?
1261 1289 self.force = force
1262 1290 # whether a streaming clone was requested
1263 1291 self.streamclonerequested = streamclonerequested
1264 1292 # transaction manager
1265 1293 self.trmanager = None
1266 1294 # set of common changeset between local and remote before pull
1267 1295 self.common = None
1268 1296 # set of pulled heads
1269 1297 self.rheads = None
1270 1298 # list of missing changesets to fetch remotely
1271 1299 self.fetch = None
1272 1300 # remote bookmarks data
1273 1301 self.remotebookmarks = remotebookmarks
1274 1302 # result of changegroup pulling (used as return code by pull)
1275 1303 self.cgresult = None
1276 1304 # list of steps already done
1277 1305 self.stepsdone = set()
1278 1306 # Whether we attempted a clone from pre-generated bundles.
1279 1307 self.clonebundleattempted = False
1280 1308
1281 1309 @util.propertycache
1282 1310 def pulledsubset(self):
1283 1311 """heads of the set of changeset target by the pull"""
1284 1312 # compute target subset
1285 1313 if self.heads is None:
1286 1314 # We pulled every thing possible
1287 1315 # sync on everything common
1288 1316 c = set(self.common)
1289 1317 ret = list(self.common)
1290 1318 for n in self.rheads:
1291 1319 if n not in c:
1292 1320 ret.append(n)
1293 1321 return ret
1294 1322 else:
1295 1323 # We pulled a specific subset
1296 1324 # sync on this subset
1297 1325 return self.heads
1298 1326
1299 1327 @util.propertycache
1300 1328 def canusebundle2(self):
1301 1329 return not _forcebundle1(self)
1302 1330
1303 1331 @util.propertycache
1304 1332 def remotebundle2caps(self):
1305 1333 return bundle2.bundle2caps(self.remote)
1306 1334
1307 1335 def gettransaction(self):
1308 1336 # deprecated; talk to trmanager directly
1309 1337 return self.trmanager.transaction()
1310 1338
1311 1339 class transactionmanager(util.transactional):
1312 1340 """An object to manage the life cycle of a transaction
1313 1341
1314 1342 It creates the transaction on demand and calls the appropriate hooks when
1315 1343 closing the transaction."""
1316 1344 def __init__(self, repo, source, url):
1317 1345 self.repo = repo
1318 1346 self.source = source
1319 1347 self.url = url
1320 1348 self._tr = None
1321 1349
1322 1350 def transaction(self):
1323 1351 """Return an open transaction object, constructing if necessary"""
1324 1352 if not self._tr:
1325 1353 trname = '%s\n%s' % (self.source, util.hidepassword(self.url))
1326 1354 self._tr = self.repo.transaction(trname)
1327 1355 self._tr.hookargs['source'] = self.source
1328 1356 self._tr.hookargs['url'] = self.url
1329 1357 return self._tr
1330 1358
1331 1359 def close(self):
1332 1360 """close transaction if created"""
1333 1361 if self._tr is not None:
1334 1362 self._tr.close()
1335 1363
1336 1364 def release(self):
1337 1365 """release transaction if created"""
1338 1366 if self._tr is not None:
1339 1367 self._tr.release()
1340 1368
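# Illustrative usage sketch: transactionmanager is a util.transactional,
# so callers use it as a context manager; the underlying transaction is
# only opened if some step actually asks for one.
#
#   trmanager = transactionmanager(repo, 'pull', remote.url())
#   with trmanager:
#       tr = trmanager.transaction()  # created lazily on first call
#       ...
#   # closed on normal exit, released on error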
1341 1369 def pull(repo, remote, heads=None, force=False, bookmarks=(), opargs=None,
1342 1370 streamclonerequested=None):
1343 1371 """Fetch repository data from a remote.
1344 1372
1345 1373 This is the main function used to retrieve data from a remote repository.
1346 1374
1347 1375 ``repo`` is the local repository to clone into.
1348 1376 ``remote`` is a peer instance.
1349 1377 ``heads`` is an iterable of revisions we want to pull. ``None`` (the
1350 1378 default) means to pull everything from the remote.
1351 1379 ``bookmarks`` is an iterable of bookmarks requesting to be pulled. By
1352 1380 default, all remote bookmarks are pulled.
1353 1381 ``opargs`` are additional keyword arguments to pass to ``pulloperation``
1354 1382 initialization.
1355 1383 ``streamclonerequested`` is a boolean indicating whether a "streaming
1356 1384 clone" is requested. A "streaming clone" is essentially a raw file copy
1357 1385 of revlogs from the server. This only works when the local repository is
1358 1386 empty. The default value of ``None`` means to respect the server
1359 1387 configuration for preferring stream clones.
1360 1388
1361 1389 Returns the ``pulloperation`` created for this pull.
1362 1390 """
1363 1391 if opargs is None:
1364 1392 opargs = {}
1365 1393 pullop = pulloperation(repo, remote, heads, force, bookmarks=bookmarks,
1366 1394 streamclonerequested=streamclonerequested,
1367 1395 **pycompat.strkwargs(opargs))
1368 1396
1369 1397 peerlocal = pullop.remote.local()
1370 1398 if peerlocal:
1371 1399 missing = set(peerlocal.requirements) - pullop.repo.supported
1372 1400 if missing:
1373 1401 msg = _("required features are not"
1374 1402 " supported in the destination:"
1375 1403 " %s") % (', '.join(sorted(missing)))
1376 1404 raise error.Abort(msg)
1377 1405
1378 1406 pullop.trmanager = transactionmanager(repo, 'pull', remote.url())
1379 1407 with repo.wlock(), repo.lock(), pullop.trmanager:
1380 1408 # This should ideally be in _pullbundle2(). However, it needs to run
1381 1409 # before discovery to avoid extra work.
1382 1410 _maybeapplyclonebundle(pullop)
1383 1411 streamclone.maybeperformlegacystreamclone(pullop)
1384 1412 _pulldiscovery(pullop)
1385 1413 if pullop.canusebundle2:
1386 1414 _pullbundle2(pullop)
1387 1415 _pullchangeset(pullop)
1388 1416 _pullphase(pullop)
1389 1417 _pullbookmarks(pullop)
1390 1418 _pullobsolete(pullop)
1391 1419
1392 1420 # storing remotenames
1393 1421 if repo.ui.configbool('experimental', 'remotenames'):
1394 1422 logexchange.pullremotenames(repo, remote)
1395 1423
1396 1424 return pullop
1397 1425
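# Illustrative usage sketch (``repo`` and ``remote`` as documented in
# pull() above; the helper name and bookmark are hypothetical): pull
# everything plus one named bookmark and reuse the changegroup result.
def _pulleverything(repo, remote):
    pullop = pull(repo, remote, heads=None, bookmarks=('@',))
    # cgresult carries the combined changegroup return value
    return pullop.cgresult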
1398 1426 # list of steps to perform discovery before pull
1399 1427 pulldiscoveryorder = []
1400 1428
1401 1429 # Mapping between step name and function
1402 1430 #
1403 1431 # This exists to help extensions wrap steps if necessary
1404 1432 pulldiscoverymapping = {}
1405 1433
1406 1434 def pulldiscovery(stepname):
1407 1435 """decorator for function performing discovery before pull
1408 1436
1409 1437 The function is added to the step -> function mapping and appended to the
1410 1438 list of steps. Beware that decorated functions will be added in order (this
1411 1439 may matter).
1412 1440 
1413 1441 You can only use this decorator for a new step; if you want to wrap a step
1414 1442 from an extension, change the pulldiscoverymapping dictionary directly."""
1415 1443 def dec(func):
1416 1444 assert stepname not in pulldiscoverymapping
1417 1445 pulldiscoverymapping[stepname] = func
1418 1446 pulldiscoveryorder.append(stepname)
1419 1447 return func
1420 1448 return dec
1421 1449
1422 1450 def _pulldiscovery(pullop):
1423 1451 """Run all discovery steps"""
1424 1452 for stepname in pulldiscoveryorder:
1425 1453 step = pulldiscoverymapping[stepname]
1426 1454 step(pullop)
1427 1455
1428 1456 @pulldiscovery('b1:bookmarks')
1429 1457 def _pullbookmarkbundle1(pullop):
1430 1458 """fetch bookmark data in bundle1 case
1431 1459
1432 1460 If not using bundle2, we have to fetch bookmarks before changeset
1433 1461 discovery to reduce the chance and impact of race conditions."""
1434 1462 if pullop.remotebookmarks is not None:
1435 1463 return
1436 1464 if pullop.canusebundle2 and 'listkeys' in pullop.remotebundle2caps:
1437 1465 # all known bundle2 servers now support listkeys, but let's be nice to
1438 1466 # new implementations.
1439 1467 return
1440 1468 books = pullop.remote.listkeys('bookmarks')
1441 1469 pullop.remotebookmarks = bookmod.unhexlifybookmarks(books)
1442 1470
1443 1471
1444 1472 @pulldiscovery('changegroup')
1445 1473 def _pulldiscoverychangegroup(pullop):
1446 1474 """discovery phase for the pull
1447 1475
1448 1476 Currently handles changeset discovery only; will change to handle all
1449 1477 discovery at some point."""
1450 1478 tmp = discovery.findcommonincoming(pullop.repo,
1451 1479 pullop.remote,
1452 1480 heads=pullop.heads,
1453 1481 force=pullop.force)
1454 1482 common, fetch, rheads = tmp
1455 1483 nm = pullop.repo.unfiltered().changelog.nodemap
1456 1484 if fetch and rheads:
1457 1485 # If a remote head is filtered locally, put it back in common.
1458 1486 #
1459 1487 # This is a hackish solution to catch most of the "common but locally
1460 1488 # hidden" situations. We do not perform discovery on the unfiltered
1461 1489 # repository because it ends up doing a pathological amount of round
1462 1490 # trips for a huge amount of changesets we do not care about.
1463 1491 #
1464 1492 # If a set of such "common but filtered" changesets exists on the server
1465 1493 # but does not include a remote head, we'll not be able to detect it.
1466 1494 scommon = set(common)
1467 1495 for n in rheads:
1468 1496 if n in nm:
1469 1497 if n not in scommon:
1470 1498 common.append(n)
1471 1499 if set(rheads).issubset(set(common)):
1472 1500 fetch = []
1473 1501 pullop.common = common
1474 1502 pullop.fetch = fetch
1475 1503 pullop.rheads = rheads
1476 1504
1477 1505 def _pullbundle2(pullop):
1478 1506 """pull data using bundle2
1479 1507
1480 1508 For now, the only supported data is changegroup."""
1481 1509 kwargs = {'bundlecaps': caps20to10(pullop.repo, role='client')}
1482 1510
1483 1511 # make ui easier to access
1484 1512 ui = pullop.repo.ui
1485 1513
1486 1514 # At the moment we don't do stream clones over bundle2. If that is
1487 1515 # implemented then here's where the check for that will go.
1488 1516 streaming = streamclone.canperformstreamclone(pullop, bundle2=True)[0]
1489 1517
1490 1518 # declare the pull perimeter (common and heads)
1491 1519 kwargs['common'] = pullop.common
1492 1520 kwargs['heads'] = pullop.heads or pullop.rheads
1493 1521
1494 1522 if streaming:
1495 1523 kwargs['cg'] = False
1496 1524 kwargs['stream'] = True
1497 1525 pullop.stepsdone.add('changegroup')
1498 1526 pullop.stepsdone.add('phases')
1499 1527
1500 1528 else:
1501 1529 # pulling changegroup
1502 1530 pullop.stepsdone.add('changegroup')
1503 1531
1504 1532 kwargs['cg'] = pullop.fetch
1505 1533
1506 1534 legacyphase = 'phases' in ui.configlist('devel', 'legacy.exchange')
1507 1535 hasbinaryphase = 'heads' in pullop.remotebundle2caps.get('phases', ())
1508 1536 if (not legacyphase and hasbinaryphase):
1509 1537 kwargs['phases'] = True
1510 1538 pullop.stepsdone.add('phases')
1511 1539
1512 1540 if 'listkeys' in pullop.remotebundle2caps:
1513 1541 if 'phases' not in pullop.stepsdone:
1514 1542 kwargs['listkeys'] = ['phases']
1515 1543
1516 1544 bookmarksrequested = False
1517 1545 legacybookmark = 'bookmarks' in ui.configlist('devel', 'legacy.exchange')
1518 1546 hasbinarybook = 'bookmarks' in pullop.remotebundle2caps
1519 1547
1520 1548 if pullop.remotebookmarks is not None:
1521 1549 pullop.stepsdone.add('request-bookmarks')
1522 1550
1523 1551 if ('request-bookmarks' not in pullop.stepsdone
1524 1552 and pullop.remotebookmarks is None
1525 1553 and not legacybookmark and hasbinarybook):
1526 1554 kwargs['bookmarks'] = True
1527 1555 bookmarksrequested = True
1528 1556
1529 1557 if 'listkeys' in pullop.remotebundle2caps:
1530 1558 if 'request-bookmarks' not in pullop.stepsdone:
1531 1559 # make sure to always include bookmark data when migrating
1532 1560 # `hg incoming --bundle` to using this function.
1533 1561 pullop.stepsdone.add('request-bookmarks')
1534 1562 kwargs.setdefault('listkeys', []).append('bookmarks')
1535 1563
1536 1564 # If this is a full pull / clone and the server supports the clone bundles
1537 1565 # feature, tell the server whether we attempted a clone bundle. The
1538 1566 # presence of this flag indicates the client supports clone bundles. This
1539 1567 # will enable the server to treat clients that support clone bundles
1540 1568 # differently from those that don't.
1541 1569 if (pullop.remote.capable('clonebundles')
1542 1570 and pullop.heads is None and list(pullop.common) == [nullid]):
1543 1571 kwargs['cbattempted'] = pullop.clonebundleattempted
1544 1572
1545 1573 if streaming:
1546 1574 pullop.repo.ui.status(_('streaming all changes\n'))
1547 1575 elif not pullop.fetch:
1548 1576 pullop.repo.ui.status(_("no changes found\n"))
1549 1577 pullop.cgresult = 0
1550 1578 else:
1551 1579 if pullop.heads is None and list(pullop.common) == [nullid]:
1552 1580 pullop.repo.ui.status(_("requesting all changes\n"))
1553 1581 if obsolete.isenabled(pullop.repo, obsolete.exchangeopt):
1554 1582 remoteversions = bundle2.obsmarkersversion(pullop.remotebundle2caps)
1555 1583 if obsolete.commonversion(remoteversions) is not None:
1556 1584 kwargs['obsmarkers'] = True
1557 1585 pullop.stepsdone.add('obsmarkers')
1558 1586 _pullbundle2extraprepare(pullop, kwargs)
1559 1587 bundle = pullop.remote.getbundle('pull', **pycompat.strkwargs(kwargs))
1560 1588 try:
1561 1589 op = bundle2.bundleoperation(pullop.repo, pullop.gettransaction)
1562 1590 op.modes['bookmarks'] = 'records'
1563 1591 bundle2.processbundle(pullop.repo, bundle, op=op)
1564 1592 except bundle2.AbortFromPart as exc:
1565 1593 pullop.repo.ui.status(_('remote: abort: %s\n') % exc)
1566 1594 raise error.Abort(_('pull failed on remote'), hint=exc.hint)
1567 1595 except error.BundleValueError as exc:
1568 1596 raise error.Abort(_('missing support for %s') % exc)
1569 1597
1570 1598 if pullop.fetch:
1571 1599 pullop.cgresult = bundle2.combinechangegroupresults(op)
1572 1600
1573 1601 # processing phases change
1574 1602 for namespace, value in op.records['listkeys']:
1575 1603 if namespace == 'phases':
1576 1604 _pullapplyphases(pullop, value)
1577 1605
1578 1606 # processing bookmark update
1579 1607 if bookmarksrequested:
1580 1608 books = {}
1581 1609 for record in op.records['bookmarks']:
1582 1610 books[record['bookmark']] = record["node"]
1583 1611 pullop.remotebookmarks = books
1584 1612 else:
1585 1613 for namespace, value in op.records['listkeys']:
1586 1614 if namespace == 'bookmarks':
1587 1615 pullop.remotebookmarks = bookmod.unhexlifybookmarks(value)
1588 1616
1589 1617 # bookmark data were either already there or pulled in the bundle
1590 1618 if pullop.remotebookmarks is not None:
1591 1619 _pullbookmarks(pullop)
1592 1620
1593 1621 def _pullbundle2extraprepare(pullop, kwargs):
1594 1622 """hook function so that extensions can extend the getbundle call"""
1595 1623
1596 1624 def _pullchangeset(pullop):
1597 1625 """pull changeset from unbundle into the local repo"""
1598 1626 # We delay the open of the transaction as late as possible so we
1599 1627 # don't open transaction for nothing or you break future useful
1600 1628 # rollback call
1601 1629 if 'changegroup' in pullop.stepsdone:
1602 1630 return
1603 1631 pullop.stepsdone.add('changegroup')
1604 1632 if not pullop.fetch:
1605 1633 pullop.repo.ui.status(_("no changes found\n"))
1606 1634 pullop.cgresult = 0
1607 1635 return
1608 1636 tr = pullop.gettransaction()
1609 1637 if pullop.heads is None and list(pullop.common) == [nullid]:
1610 1638 pullop.repo.ui.status(_("requesting all changes\n"))
1611 1639 elif pullop.heads is None and pullop.remote.capable('changegroupsubset'):
1612 1640 # issue1320, avoid a race if remote changed after discovery
1613 1641 pullop.heads = pullop.rheads
1614 1642
1615 1643 if pullop.remote.capable('getbundle'):
1616 1644 # TODO: get bundlecaps from remote
1617 1645 cg = pullop.remote.getbundle('pull', common=pullop.common,
1618 1646 heads=pullop.heads or pullop.rheads)
1619 1647 elif pullop.heads is None:
1620 1648 cg = pullop.remote.changegroup(pullop.fetch, 'pull')
1621 1649 elif not pullop.remote.capable('changegroupsubset'):
1622 1650 raise error.Abort(_("partial pull cannot be done because "
1623 1651 "other repository doesn't support "
1624 1652 "changegroupsubset."))
1625 1653 else:
1626 1654 cg = pullop.remote.changegroupsubset(pullop.fetch, pullop.heads, 'pull')
1627 1655 bundleop = bundle2.applybundle(pullop.repo, cg, tr, 'pull',
1628 1656 pullop.remote.url())
1629 1657 pullop.cgresult = bundle2.combinechangegroupresults(bundleop)
1630 1658
1631 1659 def _pullphase(pullop):
1632 1660 # Get remote phases data from remote
1633 1661 if 'phases' in pullop.stepsdone:
1634 1662 return
1635 1663 remotephases = pullop.remote.listkeys('phases')
1636 1664 _pullapplyphases(pullop, remotephases)
1637 1665
1638 1666 def _pullapplyphases(pullop, remotephases):
1639 1667 """apply phase movement from observed remote state"""
1640 1668 if 'phases' in pullop.stepsdone:
1641 1669 return
1642 1670 pullop.stepsdone.add('phases')
1643 1671 publishing = bool(remotephases.get('publishing', False))
1644 1672 if remotephases and not publishing:
1645 1673 # remote is new and non-publishing
1646 1674 pheads, _dr = phases.analyzeremotephases(pullop.repo,
1647 1675 pullop.pulledsubset,
1648 1676 remotephases)
1649 1677 dheads = pullop.pulledsubset
1650 1678 else:
1651 1679 # Remote is old or publishing; all common changesets
1652 1680 # should be seen as public
1653 1681 pheads = pullop.pulledsubset
1654 1682 dheads = []
1655 1683 unfi = pullop.repo.unfiltered()
1656 1684 phase = unfi._phasecache.phase
1657 1685 rev = unfi.changelog.nodemap.get
1658 1686 public = phases.public
1659 1687 draft = phases.draft
1660 1688
1661 1689 # exclude changesets already public locally and update the others
1662 1690 pheads = [pn for pn in pheads if phase(unfi, rev(pn)) > public]
1663 1691 if pheads:
1664 1692 tr = pullop.gettransaction()
1665 1693 phases.advanceboundary(pullop.repo, tr, public, pheads)
1666 1694
1667 1695 # exclude changesets already draft locally and update the others
1668 1696 dheads = [pn for pn in dheads if phase(unfi, rev(pn)) > draft]
1669 1697 if dheads:
1670 1698 tr = pullop.gettransaction()
1671 1699 phases.advanceboundary(pullop.repo, tr, draft, dheads)
1672 1700
1673 1701 def _pullbookmarks(pullop):
1674 1702 """process the remote bookmark information to update the local one"""
1675 1703 if 'bookmarks' in pullop.stepsdone:
1676 1704 return
1677 1705 pullop.stepsdone.add('bookmarks')
1678 1706 repo = pullop.repo
1679 1707 remotebookmarks = pullop.remotebookmarks
1680 1708 bookmod.updatefromremote(repo.ui, repo, remotebookmarks,
1681 1709 pullop.remote.url(),
1682 1710 pullop.gettransaction,
1683 1711 explicit=pullop.explicitbookmarks)
1684 1712
1685 1713 def _pullobsolete(pullop):
1686 1714 """utility function to pull obsolete markers from a remote
1687 1715
1688 1716 `gettransaction` is a function that returns the pull transaction, creating
1689 1717 one if necessary. We return the transaction to inform the calling code that
1690 1718 a new transaction has been created (when applicable).
1691 1719 
1692 1720 Exists mostly to allow overriding for experimentation purposes."""
1693 1721 if 'obsmarkers' in pullop.stepsdone:
1694 1722 return
1695 1723 pullop.stepsdone.add('obsmarkers')
1696 1724 tr = None
1697 1725 if obsolete.isenabled(pullop.repo, obsolete.exchangeopt):
1698 1726 pullop.repo.ui.debug('fetching remote obsolete markers\n')
1699 1727 remoteobs = pullop.remote.listkeys('obsolete')
1700 1728 if 'dump0' in remoteobs:
1701 1729 tr = pullop.gettransaction()
1702 1730 markers = []
1703 1731 for key in sorted(remoteobs, reverse=True):
1704 1732 if key.startswith('dump'):
1705 1733 data = util.b85decode(remoteobs[key])
1706 1734 version, newmarks = obsolete._readmarkers(data)
1707 1735 markers += newmarks
1708 1736 if markers:
1709 1737 pullop.repo.obsstore.add(tr, markers)
1710 1738 pullop.repo.invalidatevolatilesets()
1711 1739 return tr
1712 1740
1713 1741 def caps20to10(repo, role):
1714 1742 """return a set with appropriate options to use bundle20 during getbundle"""
1715 1743 caps = {'HG20'}
1716 1744 capsblob = bundle2.encodecaps(bundle2.getrepocaps(repo, role=role))
1717 1745 caps.add('bundle2=' + urlreq.quote(capsblob))
1718 1746 return caps
1719 1747
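# For example (illustrative), the returned set has the shape:
#
#   {'HG20', 'bundle2=HG20%0A...'}
#
# i.e. the repo's bundle2 capability blob is URL-quoted into a single
# 'bundle2=' entry, suitable for passing as ``bundlecaps`` to getbundle.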
1720 1748 # List of names of steps to perform for a bundle2 for getbundle, order matters.
1721 1749 getbundle2partsorder = []
1722 1750
1723 1751 # Mapping between step name and function
1724 1752 #
1725 1753 # This exists to help extensions wrap steps if necessary
1726 1754 getbundle2partsmapping = {}
1727 1755
1728 1756 def getbundle2partsgenerator(stepname, idx=None):
1729 1757 """decorator for function generating bundle2 part for getbundle
1730 1758
1731 1759 The function is added to the step -> function mapping and appended to the
1732 1760 list of steps. Beware that decorated functions will be added in order
1733 1761 (this may matter).
1734 1762
1735 1763 You can only use this decorator for new steps; if you want to wrap a step
1736 1764 from an extension, change the getbundle2partsmapping dictionary directly."""
1737 1765 def dec(func):
1738 1766 assert stepname not in getbundle2partsmapping
1739 1767 getbundle2partsmapping[stepname] = func
1740 1768 if idx is None:
1741 1769 getbundle2partsorder.append(stepname)
1742 1770 else:
1743 1771 getbundle2partsorder.insert(idx, stepname)
1744 1772 return func
1745 1773 return dec
1746 1774
1747 1775 def bundle2requested(bundlecaps):
1748 1776 if bundlecaps is not None:
1749 1777 return any(cap.startswith('HG2') for cap in bundlecaps)
1750 1778 return False
1751 1779
1752 1780 def getbundlechunks(repo, source, heads=None, common=None, bundlecaps=None,
1753 1781 **kwargs):
1754 1782 """Return chunks constituting a bundle's raw data.
1755 1783
1756 1784 Could be a bundle HG10 or a bundle HG20 depending on bundlecaps
1757 1785 passed.
1758 1786
1759 1787 Returns a 2-tuple of a dict with metadata about the generated bundle
1760 1788 and an iterator over raw chunks (of varying sizes).
1761 1789 """
1762 1790 kwargs = pycompat.byteskwargs(kwargs)
1763 1791 info = {}
1764 1792 usebundle2 = bundle2requested(bundlecaps)
1765 1793 # bundle10 case
1766 1794 if not usebundle2:
1767 1795 if bundlecaps and not kwargs.get('cg', True):
1768 1796 raise ValueError(_('request for bundle10 must include changegroup'))
1769 1797
1770 1798 if kwargs:
1771 1799 raise ValueError(_('unsupported getbundle arguments: %s')
1772 1800 % ', '.join(sorted(kwargs.keys())))
1773 1801 outgoing = _computeoutgoing(repo, heads, common)
1774 1802 info['bundleversion'] = 1
1775 1803 return info, changegroup.makestream(repo, outgoing, '01', source,
1776 1804 bundlecaps=bundlecaps)
1777 1805
1778 1806 # bundle20 case
1779 1807 info['bundleversion'] = 2
1780 1808 b2caps = {}
1781 1809 for bcaps in bundlecaps:
1782 1810 if bcaps.startswith('bundle2='):
1783 1811 blob = urlreq.unquote(bcaps[len('bundle2='):])
1784 1812 b2caps.update(bundle2.decodecaps(blob))
1785 1813 bundler = bundle2.bundle20(repo.ui, b2caps)
1786 1814
1787 1815 kwargs['heads'] = heads
1788 1816 kwargs['common'] = common
1789 1817
1790 1818 for name in getbundle2partsorder:
1791 1819 func = getbundle2partsmapping[name]
1792 1820 func(bundler, repo, source, bundlecaps=bundlecaps, b2caps=b2caps,
1793 1821 **pycompat.strkwargs(kwargs))
1794 1822
1795 1823 info['prefercompressed'] = bundler.prefercompressed
1796 1824
1797 1825 return info, bundler.getchunks()
1798 1826
1799 1827 @getbundle2partsgenerator('stream2')
1800 1828 def _getbundlestream2(bundler, repo, source, bundlecaps=None,
1801 1829 b2caps=None, heads=None, common=None, **kwargs):
1802 1830 if not kwargs.get('stream', False):
1803 1831 return
1804 1832
1805 1833 if not streamclone.allowservergeneration(repo):
1806 1834 raise error.Abort(_('stream data requested but server does not allow '
1807 1835 'this feature'),
1808 1836 hint=_('well-behaved clients should not be '
1809 1837 'requesting stream data from servers not '
1810 1838 'advertising it; the client may be buggy'))
1811 1839
1812 1840 # Stream clones don't compress well. And compression undermines a
1813 1841 # goal of stream clones, which is to be fast. Communicate the desire
1814 1842 # to avoid compression to consumers of the bundle.
1815 1843 bundler.prefercompressed = False
1816 1844
1817 1845 filecount, bytecount, it = streamclone.generatev2(repo)
1818 1846 requirements = _formatrequirementsspec(repo.requirements)
1819 1847 part = bundler.newpart('stream2', data=it)
1820 1848 part.addparam('bytecount', '%d' % bytecount, mandatory=True)
1821 1849 part.addparam('filecount', '%d' % filecount, mandatory=True)
1822 1850 part.addparam('requirements', requirements, mandatory=True)
1823 1851
1824 1852 @getbundle2partsgenerator('changegroup')
1825 1853 def _getbundlechangegrouppart(bundler, repo, source, bundlecaps=None,
1826 1854 b2caps=None, heads=None, common=None, **kwargs):
1827 1855 """add a changegroup part to the requested bundle"""
1828 1856 cgstream = None
1829 1857 if kwargs.get(r'cg', True):
1830 1858 # build changegroup bundle here.
1831 1859 version = '01'
1832 1860 cgversions = b2caps.get('changegroup')
1833 1861 if cgversions: # 3.1 and 3.2 ship with an empty value
1834 1862 cgversions = [v for v in cgversions
1835 1863 if v in changegroup.supportedoutgoingversions(repo)]
1836 1864 if not cgversions:
1837 1865 raise ValueError(_('no common changegroup version'))
1838 1866 version = max(cgversions)
1839 1867 outgoing = _computeoutgoing(repo, heads, common)
1840 1868 if outgoing.missing:
1841 1869 cgstream = changegroup.makestream(repo, outgoing, version, source,
1842 1870 bundlecaps=bundlecaps)
1843 1871
1844 1872 if cgstream:
1845 1873 part = bundler.newpart('changegroup', data=cgstream)
1846 1874 if cgversions:
1847 1875 part.addparam('version', version)
1848 1876 part.addparam('nbchanges', '%d' % len(outgoing.missing),
1849 1877 mandatory=False)
1850 1878 if 'treemanifest' in repo.requirements:
1851 1879 part.addparam('treemanifest', '1')
1852 1880
1853 1881 @getbundle2partsgenerator('bookmarks')
1854 1882 def _getbundlebookmarkpart(bundler, repo, source, bundlecaps=None,
1855 1883 b2caps=None, **kwargs):
1856 1884 """add a bookmark part to the requested bundle"""
1857 1885 if not kwargs.get(r'bookmarks', False):
1858 1886 return
1859 1887 if 'bookmarks' not in b2caps:
1860 1888 raise ValueError(_('no common bookmarks exchange method'))
1861 1889 books = bookmod.listbinbookmarks(repo)
1862 1890 data = bookmod.binaryencode(books)
1863 1891 if data:
1864 1892 bundler.newpart('bookmarks', data=data)
1865 1893
1866 1894 @getbundle2partsgenerator('listkeys')
1867 1895 def _getbundlelistkeysparts(bundler, repo, source, bundlecaps=None,
1868 1896 b2caps=None, **kwargs):
1869 1897 """add parts containing listkeys namespaces to the requested bundle"""
1870 1898 listkeys = kwargs.get(r'listkeys', ())
1871 1899 for namespace in listkeys:
1872 1900 part = bundler.newpart('listkeys')
1873 1901 part.addparam('namespace', namespace)
1874 1902 keys = repo.listkeys(namespace).items()
1875 1903 part.data = pushkey.encodekeys(keys)
1876 1904
1877 1905 @getbundle2partsgenerator('obsmarkers')
1878 1906 def _getbundleobsmarkerpart(bundler, repo, source, bundlecaps=None,
1879 1907 b2caps=None, heads=None, **kwargs):
1880 1908 """add an obsolescence markers part to the requested bundle"""
1881 1909 if kwargs.get(r'obsmarkers', False):
1882 1910 if heads is None:
1883 1911 heads = repo.heads()
1884 1912 subset = [c.node() for c in repo.set('::%ln', heads)]
1885 1913 markers = repo.obsstore.relevantmarkers(subset)
1886 1914 markers = sorted(markers)
1887 1915 bundle2.buildobsmarkerspart(bundler, markers)
1888 1916
1889 1917 @getbundle2partsgenerator('phases')
1890 1918 def _getbundlephasespart(bundler, repo, source, bundlecaps=None,
1891 1919 b2caps=None, heads=None, **kwargs):
1892 1920 """add phase heads part to the requested bundle"""
1893 1921 if kwargs.get(r'phases', False):
1894 1922 if 'heads' not in b2caps.get('phases', ()):
1895 1923 raise ValueError(_('no common phases exchange method'))
1896 1924 if heads is None:
1897 1925 heads = repo.heads()
1898 1926
1899 1927 headsbyphase = collections.defaultdict(set)
1900 1928 if repo.publishing():
1901 1929 headsbyphase[phases.public] = heads
1902 1930 else:
1903 1931 # find the appropriate heads to move
1904 1932
1905 1933 phase = repo._phasecache.phase
1906 1934 node = repo.changelog.node
1907 1935 rev = repo.changelog.rev
1908 1936 for h in heads:
1909 1937 headsbyphase[phase(repo, rev(h))].add(h)
1910 1938 seenphases = list(headsbyphase.keys())
1911 1939
1912 1940 # We do not handle anything but public and draft phases for now
1913 1941 if seenphases:
1914 1942 assert max(seenphases) <= phases.draft
1915 1943
1916 1944 # if client is pulling non-public changesets, we need to find
1917 1945 # intermediate public heads.
1918 1946 draftheads = headsbyphase.get(phases.draft, set())
1919 1947 if draftheads:
1920 1948 publicheads = headsbyphase.get(phases.public, set())
1921 1949
1922 1950 revset = 'heads(only(%ln, %ln) and public())'
1923 1951 extraheads = repo.revs(revset, draftheads, publicheads)
1924 1952 for r in extraheads:
1925 1953 headsbyphase[phases.public].add(node(r))
1926 1954
1927 1955 # transform data into the format used by the encoding function
1928 1956 phasemapping = []
1929 1957 for phase in phases.allphases:
1930 1958 phasemapping.append(sorted(headsbyphase[phase]))
1931 1959
1932 1960 # generate the actual part
1933 1961 phasedata = phases.binaryencode(phasemapping)
1934 1962 bundler.newpart('phase-heads', data=phasedata)
1935 1963
1936 1964 @getbundle2partsgenerator('hgtagsfnodes')
1937 1965 def _getbundletagsfnodes(bundler, repo, source, bundlecaps=None,
1938 1966 b2caps=None, heads=None, common=None,
1939 1967 **kwargs):
1940 1968 """Transfer the .hgtags filenodes mapping.
1941 1969
1942 1970 Only values for heads in this bundle will be transferred.
1943 1971
1944 1972 The part data consists of pairs of 20 byte changeset node and .hgtags
1945 1973 filenodes raw values.
1946 1974 """
1947 1975 # Don't send unless:
1948 1976 # - changesets are being exchanged,
1949 1977 # - the client supports it.
1950 1978 if not (kwargs.get(r'cg', True) and 'hgtagsfnodes' in b2caps):
1951 1979 return
1952 1980
1953 1981 outgoing = _computeoutgoing(repo, heads, common)
1954 1982 bundle2.addparttagsfnodescache(repo, bundler, outgoing)
1955 1983
1956 1984 @getbundle2partsgenerator('cache:rev-branch-cache')
1957 1985 def _getbundlerevbranchcache(bundler, repo, source, bundlecaps=None,
1958 1986 b2caps=None, heads=None, common=None,
1959 1987 **kwargs):
1960 1988 """Transfer the rev-branch-cache mapping
1961 1989
1962 1990 The payload is a series of records, one per branch:
1963 1991
1964 1992 1) branch name length
1965 1993 2) number of open heads
1966 1994 3) number of closed heads
1967 1995 4) open heads nodes
1968 1996 5) closed heads nodes
1969 1997 """
1970 1998 # Don't send unless:
1971 1999 # - changesets are being exchanged,
1972 2000 # - the client supports it.
1973 2001 if not (kwargs.get(r'cg', True) and 'rev-branch-cache' in b2caps):
1974 2002 return
1975 2003 outgoing = _computeoutgoing(repo, heads, common)
1976 2004 bundle2.addpartrevbranchcache(repo, bundler, outgoing)
1977 2005
1978 2006 def check_heads(repo, their_heads, context):
1979 2007 """check if the heads of a repo have been modified
1980 2008
1981 2009 Used by peer for unbundling.
1982 2010 """
1983 2011 heads = repo.heads()
1984 2012 heads_hash = hashlib.sha1(''.join(sorted(heads))).digest()
1985 2013 if not (their_heads == ['force'] or their_heads == heads or
1986 2014 their_heads == ['hashed', heads_hash]):
1987 2015 # someone else committed/pushed/unbundled while we
1988 2016 # were transferring data
1989 2017 raise error.PushRaced('repository changed while %s - '
1990 2018 'please try again' % context)
1991 2019
1992 2020 def unbundle(repo, cg, heads, source, url):
1993 2021 """Apply a bundle to a repo.
1994 2022
1995 2023 This function makes sure the repo is locked during the application and has
1996 2024 a mechanism to check that no push race occurred between the creation of the
1997 2025 bundle and its application.
1998 2026
1999 2027 If the push was raced, a PushRaced exception is raised."""
2000 2028 r = 0
2001 2029 # need a transaction when processing a bundle2 stream
2002 2030 # [wlock, lock, tr] - needs to be an array so nested functions can modify it
2003 2031 lockandtr = [None, None, None]
2004 2032 recordout = None
2005 2033 # quick fix for output mismatch with bundle2 in 3.4
2006 2034 captureoutput = repo.ui.configbool('experimental', 'bundle2-output-capture')
2007 2035 if url.startswith('remote:http:') or url.startswith('remote:https:'):
2008 2036 captureoutput = True
2009 2037 try:
2010 2038 # note: outside bundle1, 'heads' is expected to be empty and this
2011 2039 # 'check_heads' call will be a no-op
2012 2040 check_heads(repo, heads, 'uploading changes')
2013 2041 # push can proceed
2014 2042 if not isinstance(cg, bundle2.unbundle20):
2015 2043 # legacy case: bundle1 (changegroup 01)
2016 2044 txnname = "\n".join([source, util.hidepassword(url)])
2017 2045 with repo.lock(), repo.transaction(txnname) as tr:
2018 2046 op = bundle2.applybundle(repo, cg, tr, source, url)
2019 2047 r = bundle2.combinechangegroupresults(op)
2020 2048 else:
2021 2049 r = None
2022 2050 try:
2023 2051 def gettransaction():
2024 2052 if not lockandtr[2]:
2025 2053 lockandtr[0] = repo.wlock()
2026 2054 lockandtr[1] = repo.lock()
2027 2055 lockandtr[2] = repo.transaction(source)
2028 2056 lockandtr[2].hookargs['source'] = source
2029 2057 lockandtr[2].hookargs['url'] = url
2030 2058 lockandtr[2].hookargs['bundle2'] = '1'
2031 2059 return lockandtr[2]
2032 2060
2033 2061 # Do greedy locking by default until we're satisfied with lazy
2034 2062 # locking.
2035 2063 if not repo.ui.configbool('experimental', 'bundle2lazylocking'):
2036 2064 gettransaction()
2037 2065
2038 2066 op = bundle2.bundleoperation(repo, gettransaction,
2039 2067 captureoutput=captureoutput)
2040 2068 try:
2041 2069 op = bundle2.processbundle(repo, cg, op=op)
2042 2070 finally:
2043 2071 r = op.reply
2044 2072 if captureoutput and r is not None:
2045 2073 repo.ui.pushbuffer(error=True, subproc=True)
2046 2074 def recordout(output):
2047 2075 r.newpart('output', data=output, mandatory=False)
2048 2076 if lockandtr[2] is not None:
2049 2077 lockandtr[2].close()
2050 2078 except BaseException as exc:
2051 2079 exc.duringunbundle2 = True
2052 2080 if captureoutput and r is not None:
2053 2081 parts = exc._bundle2salvagedoutput = r.salvageoutput()
2054 2082 def recordout(output):
2055 2083 part = bundle2.bundlepart('output', data=output,
2056 2084 mandatory=False)
2057 2085 parts.append(part)
2058 2086 raise
2059 2087 finally:
2060 2088 lockmod.release(lockandtr[2], lockandtr[1], lockandtr[0])
2061 2089 if recordout is not None:
2062 2090 recordout(repo.ui.popbuffer())
2063 2091 return r
2064 2092
2065 2093 def _maybeapplyclonebundle(pullop):
2066 2094 """Apply a clone bundle from a remote, if possible."""
2067 2095
2068 2096 repo = pullop.repo
2069 2097 remote = pullop.remote
2070 2098
2071 2099 if not repo.ui.configbool('ui', 'clonebundles'):
2072 2100 return
2073 2101
2074 2102 # Only run if local repo is empty.
2075 2103 if len(repo):
2076 2104 return
2077 2105
2078 2106 if pullop.heads:
2079 2107 return
2080 2108
2081 2109 if not remote.capable('clonebundles'):
2082 2110 return
2083 2111
2084 2112 res = remote._call('clonebundles')
2085 2113
2086 2114 # If we call the wire protocol command, that's good enough to record the
2087 2115 # attempt.
2088 2116 pullop.clonebundleattempted = True
2089 2117
2090 2118 entries = parseclonebundlesmanifest(repo, res)
2091 2119 if not entries:
2092 2120 repo.ui.note(_('no clone bundles available on remote; '
2093 2121 'falling back to regular clone\n'))
2094 2122 return
2095 2123
2096 2124 entries = filterclonebundleentries(
2097 2125 repo, entries, streamclonerequested=pullop.streamclonerequested)
2098 2126
2099 2127 if not entries:
2100 2128 # There is a thundering herd concern here. However, if a server
2101 2129 # operator doesn't advertise bundles appropriate for its clients,
2102 2130 # they deserve what's coming. Furthermore, from a client's
2103 2131 # perspective, no automatic fallback would mean not being able to
2104 2132 # clone!
2105 2133 repo.ui.warn(_('no compatible clone bundles available on server; '
2106 2134 'falling back to regular clone\n'))
2107 2135 repo.ui.warn(_('(you may want to report this to the server '
2108 2136 'operator)\n'))
2109 2137 return
2110 2138
2111 2139 entries = sortclonebundleentries(repo.ui, entries)
2112 2140
2113 2141 url = entries[0]['URL']
2114 2142 repo.ui.status(_('applying clone bundle from %s\n') % url)
2115 2143 if trypullbundlefromurl(repo.ui, repo, url):
2116 2144 repo.ui.status(_('finished applying clone bundle\n'))
2117 2145 # Bundle failed.
2118 2146 #
2119 2147 # We abort by default to avoid the thundering herd of
2120 2148 # clients flooding a server that was expecting expensive
2121 2149 # clone load to be offloaded.
2122 2150 elif repo.ui.configbool('ui', 'clonebundlefallback'):
2123 2151 repo.ui.warn(_('falling back to normal clone\n'))
2124 2152 else:
2125 2153 raise error.Abort(_('error applying bundle'),
2126 2154 hint=_('if this error persists, consider contacting '
2127 2155 'the server operator or disable clone '
2128 2156 'bundles via '
2129 2157 '"--config ui.clonebundles=false"'))
2130 2158
2131 2159 def parseclonebundlesmanifest(repo, s):
2132 2160 """Parses the raw text of a clone bundles manifest.
2133 2161
2134 2162 Returns a list of dicts. The dicts have a ``URL`` key corresponding
2135 2163 to the URL; the other keys are the attributes for the entry.
2136 2164 """
2137 2165 m = []
2138 2166 for line in s.splitlines():
2139 2167 fields = line.split()
2140 2168 if not fields:
2141 2169 continue
2142 2170 attrs = {'URL': fields[0]}
2143 2171 for rawattr in fields[1:]:
2144 2172 key, value = rawattr.split('=', 1)
2145 2173 key = urlreq.unquote(key)
2146 2174 value = urlreq.unquote(value)
2147 2175 attrs[key] = value
2148 2176
2149 2177 # Parse BUNDLESPEC into components. This makes client-side
2150 2178 # preferences easier to specify since you can prefer a single
2151 2179 # component of the BUNDLESPEC.
2152 2180 if key == 'BUNDLESPEC':
2153 2181 try:
2154 2182 bundlespec = parsebundlespec(repo, value,
2155 2183 externalnames=True)
2156 2184 attrs['COMPRESSION'] = bundlespec.compression
2157 2185 attrs['VERSION'] = bundlespec.version
2158 2186 except error.InvalidBundleSpecification:
2159 2187 pass
2160 2188 except error.UnsupportedBundleSpecification:
2161 2189 pass
2162 2190
2163 2191 m.append(attrs)
2164 2192
2165 2193 return m
2166 2194
2167 2195 def filterclonebundleentries(repo, entries, streamclonerequested=False):
2168 2196 """Remove incompatible clone bundle manifest entries.
2169 2197
2170 2198 Accepts a list of entries parsed with ``parseclonebundlesmanifest``
2171 2199 and returns a new list consisting of only the entries that this client
2172 2200 should be able to apply.
2173 2201
2174 2202 There is no guarantee we'll be able to apply all returned entries because
2175 2203 the metadata we use to filter on may be missing or wrong.
2176 2204 """
2177 2205 newentries = []
2178 2206 for entry in entries:
2179 2207 spec = entry.get('BUNDLESPEC')
2180 2208 if spec:
2181 2209 try:
2182 2210 bundlespec = parsebundlespec(repo, spec, strict=True)
2183 2211
2184 2212 # If a stream clone was requested, filter out non-streamclone
2185 2213 # entries.
2186 2214 comp = bundlespec.compression
2187 2215 version = bundlespec.version
2188 2216 if streamclonerequested and (comp != 'UN' or version != 's1'):
2189 2217 repo.ui.debug('filtering %s because not a stream clone\n' %
2190 2218 entry['URL'])
2191 2219 continue
2192 2220
2193 2221 except error.InvalidBundleSpecification as e:
2194 2222 repo.ui.debug(str(e) + '\n')
2195 2223 continue
2196 2224 except error.UnsupportedBundleSpecification as e:
2197 2225 repo.ui.debug('filtering %s because unsupported bundle '
2198 2226 'spec: %s\n' % (
2199 2227 entry['URL'], stringutil.forcebytestr(e)))
2200 2228 continue
2201 2229 # If we don't have a spec and requested a stream clone, we don't know
2202 2230 # what the entry is, so don't attempt to apply it.
2203 2231 elif streamclonerequested:
2204 2232 repo.ui.debug('filtering %s because cannot determine if a stream '
2205 2233 'clone bundle\n' % entry['URL'])
2206 2234 continue
2207 2235
2208 2236 if 'REQUIRESNI' in entry and not sslutil.hassni:
2209 2237 repo.ui.debug('filtering %s because SNI not supported\n' %
2210 2238 entry['URL'])
2211 2239 continue
2212 2240
2213 2241 newentries.append(entry)
2214 2242
2215 2243 return newentries
2216 2244
2217 2245 class clonebundleentry(object):
2218 2246 """Represents an item in a clone bundles manifest.
2219 2247
2220 2248 This rich class is needed to support sorting since sorted() in Python 3
2221 2249 doesn't support ``cmp`` and our comparison is complex enough that ``key=``
2222 2250 won't work.
2223 2251 """
2224 2252
2225 2253 def __init__(self, value, prefers):
2226 2254 self.value = value
2227 2255 self.prefers = prefers
2228 2256
2229 2257 def _cmp(self, other):
2230 2258 for prefkey, prefvalue in self.prefers:
2231 2259 avalue = self.value.get(prefkey)
2232 2260 bvalue = other.value.get(prefkey)
2233 2261
2234 2262 # Special case: b is missing the attribute and a matches exactly.
2235 2263 if avalue is not None and bvalue is None and avalue == prefvalue:
2236 2264 return -1
2237 2265
2238 2266 # Special case: a is missing the attribute and b matches exactly.
2239 2267 if bvalue is not None and avalue is None and bvalue == prefvalue:
2240 2268 return 1
2241 2269
2242 2270 # We can't compare unless attribute present on both.
2243 2271 if avalue is None or bvalue is None:
2244 2272 continue
2245 2273
2246 2274 # Same values should fall back to next attribute.
2247 2275 if avalue == bvalue:
2248 2276 continue
2249 2277
2250 2278 # Exact matches come first.
2251 2279 if avalue == prefvalue:
2252 2280 return -1
2253 2281 if bvalue == prefvalue:
2254 2282 return 1
2255 2283
2256 2284 # Fall back to next attribute.
2257 2285 continue
2258 2286
2259 2287 # If we got here we couldn't sort by attributes and prefers. Fall
2260 2288 # back to index order.
2261 2289 return 0
2262 2290
2263 2291 def __lt__(self, other):
2264 2292 return self._cmp(other) < 0
2265 2293
2266 2294 def __gt__(self, other):
2267 2295 return self._cmp(other) > 0
2268 2296
2269 2297 def __eq__(self, other):
2270 2298 return self._cmp(other) == 0
2271 2299
2272 2300 def __le__(self, other):
2273 2301 return self._cmp(other) <= 0
2274 2302
2275 2303 def __ge__(self, other):
2276 2304 return self._cmp(other) >= 0
2277 2305
2278 2306 def __ne__(self, other):
2279 2307 return self._cmp(other) != 0
2280 2308
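# Example (hypothetical config): with
#   [ui]
#   clonebundleprefers = VERSION=v2, COMPRESSION=gzip
# entries advertising VERSION=v2 sort first; ties fall back to
# COMPRESSION, then to the server-provided manifest order.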
2281 2309 def sortclonebundleentries(ui, entries):
2282 2310 prefers = ui.configlist('ui', 'clonebundleprefers')
2283 2311 if not prefers:
2284 2312 return list(entries)
2285 2313
2286 2314 prefers = [p.split('=', 1) for p in prefers]
2287 2315
2288 2316 items = sorted(clonebundleentry(v, prefers) for v in entries)
2289 2317 return [i.value for i in items]
2290 2318
2291 2319 def trypullbundlefromurl(ui, repo, url):
2292 2320 """Attempt to apply a bundle from a URL."""
2293 2321 with repo.lock(), repo.transaction('bundleurl') as tr:
2294 2322 try:
2295 2323 fh = urlmod.open(ui, url)
2296 2324 cg = readbundle(ui, fh, 'stream')
2297 2325
2298 2326 if isinstance(cg, streamclone.streamcloneapplier):
2299 2327 cg.apply(repo)
2300 2328 else:
2301 2329 bundle2.applybundle(repo, cg, tr, 'clonebundles', url)
2302 2330 return True
2303 2331 except urlerr.httperror as e:
2304 2332 ui.warn(_('HTTP error fetching bundle: %s\n') %
2305 2333 stringutil.forcebytestr(e))
2306 2334 except urlerr.urlerror as e:
2307 2335 ui.warn(_('error fetching bundle: %s\n') %
2308 2336 stringutil.forcebytestr(e.reason))
2309 2337
2310 2338 return False
@@ -1,142 +1,142 b''
1 1 # coding=UTF-8
2 2
3 3 from __future__ import absolute_import
4 4
5 5 import base64
6 6 import zlib
7 7
8 8 from mercurial import (
9 9 changegroup,
10 10 exchange,
11 11 extensions,
12 12 filelog,
13 13 revlog,
14 14 util,
15 15 )
16 16
17 17 # Test only: These flags are defined here only in the context of testing the
18 18 # behavior of the flag processor. The canonical way to add flags is to get in
19 19 # touch with the community and make them known in revlog.
20 20 REVIDX_NOOP = (1 << 3)
21 21 REVIDX_BASE64 = (1 << 2)
22 22 REVIDX_GZIP = (1 << 1)
23 23 REVIDX_FAIL = 1
24 24
25 25 def validatehash(self, text):
26 26 return True
27 27
28 28 def bypass(self, text):
29 29 return False
30 30
31 31 def noopdonothing(self, text):
32 32 return (text, True)
33 33
34 34 def b64encode(self, text):
35 35 return (base64.b64encode(text), False)
36 36
37 37 def b64decode(self, text):
38 38 return (base64.b64decode(text), True)
39 39
40 40 def gzipcompress(self, text):
41 41 return (zlib.compress(text), False)
42 42
43 43 def gzipdecompress(self, text):
44 44 return (zlib.decompress(text), True)
45 45
46 46 def supportedoutgoingversions(orig, repo):
47 47 versions = orig(repo)
48 48 versions.discard(b'01')
49 49 versions.discard(b'02')
50 50 versions.add(b'03')
51 51 return versions
52 52
53 53 def allsupportedversions(orig, ui):
54 54 versions = orig(ui)
55 55 versions.add(b'03')
56 56 return versions
57 57
58 58 def noopaddrevision(orig, self, text, transaction, link, p1, p2,
59 59 cachedelta=None, node=None,
60 60 flags=revlog.REVIDX_DEFAULT_FLAGS):
61 61 if b'[NOOP]' in text:
62 62 flags |= REVIDX_NOOP
63 63 return orig(self, text, transaction, link, p1, p2, cachedelta=cachedelta,
64 64 node=node, flags=flags)
65 65
66 66 def b64addrevision(orig, self, text, transaction, link, p1, p2,
67 67 cachedelta=None, node=None,
68 68 flags=revlog.REVIDX_DEFAULT_FLAGS):
69 69 if b'[BASE64]' in text:
70 70 flags |= REVIDX_BASE64
71 71 return orig(self, text, transaction, link, p1, p2, cachedelta=cachedelta,
72 72 node=node, flags=flags)
73 73
74 74 def gzipaddrevision(orig, self, text, transaction, link, p1, p2,
75 75 cachedelta=None, node=None,
76 76 flags=revlog.REVIDX_DEFAULT_FLAGS):
77 77 if b'[GZIP]' in text:
78 78 flags |= REVIDX_GZIP
79 79 return orig(self, text, transaction, link, p1, p2, cachedelta=cachedelta,
80 80 node=node, flags=flags)
81 81
82 82 def failaddrevision(orig, self, text, transaction, link, p1, p2,
83 83 cachedelta=None, node=None,
84 84 flags=revlog.REVIDX_DEFAULT_FLAGS):
85 85 # This addrevision wrapper is meant to add a flag we will not have
86 86 # transforms registered for, ensuring we handle this error case.
87 87 if b'[FAIL]' in text:
88 88 flags |= REVIDX_FAIL
89 89 return orig(self, text, transaction, link, p1, p2, cachedelta=cachedelta,
90 90 node=node, flags=flags)
91 91
92 92 def extsetup(ui):
93 93 # Enable changegroup3 for flags to be sent over the wire
94 94 wrapfunction = extensions.wrapfunction
95 95 wrapfunction(changegroup,
96 96 'supportedoutgoingversions',
97 97 supportedoutgoingversions)
98 98 wrapfunction(changegroup,
99 99 'allsupportedversions',
100 100 allsupportedversions)
101 101
102 102 # Teach revlog about our test flags
103 103 flags = [REVIDX_NOOP, REVIDX_BASE64, REVIDX_GZIP, REVIDX_FAIL]
104 104 revlog.REVIDX_KNOWN_FLAGS |= util.bitsfrom(flags)
105 105 revlog.REVIDX_FLAGS_ORDER.extend(flags)
106 106
107 107 # Teach exchange to use changegroup 3
108 for k in exchange._bundlespeccgversions.keys():
109 exchange._bundlespeccgversions[k] = b'03'
108 for k in exchange._bundlespeccontentopts.keys():
109 exchange._bundlespeccontentopts[k]["cg.version"] = "03"
110 110
111 111 # Add wrappers for addrevision, responsible to set flags depending on the
112 112 # revision data contents.
113 113 wrapfunction(filelog.filelog, 'addrevision', noopaddrevision)
114 114 wrapfunction(filelog.filelog, 'addrevision', b64addrevision)
115 115 wrapfunction(filelog.filelog, 'addrevision', gzipaddrevision)
116 116 wrapfunction(filelog.filelog, 'addrevision', failaddrevision)
117 117
118 118 # Register a flag processor for each test flag
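# Each processor is a (readtransform, writetransform, rawtransform)
# tuple; the raw transform reports whether the stored rawtext can be
# used directly for hash verification.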
119 119 revlog.addflagprocessor(
120 120 REVIDX_NOOP,
121 121 (
122 122 noopdonothing,
123 123 noopdonothing,
124 124 validatehash,
125 125 )
126 126 )
127 127 revlog.addflagprocessor(
128 128 REVIDX_BASE64,
129 129 (
130 130 b64decode,
131 131 b64encode,
132 132 bypass,
133 133 ),
134 134 )
135 135 revlog.addflagprocessor(
136 136 REVIDX_GZIP,
137 137 (
138 138 gzipdecompress,
139 139 gzipcompress,
140 140 bypass
141 141 )
142 142 )