@@ -1,393 +1,390
|
1 | 1 | # lfs - hash-preserving large file support using Git-LFS protocol |
|
2 | 2 | # |
|
3 | 3 | # Copyright 2017 Facebook, Inc. |
|
4 | 4 | # |
|
5 | 5 | # This software may be used and distributed according to the terms of the |
|
6 | 6 | # GNU General Public License version 2 or any later version. |
|
7 | 7 | |
|
8 | 8 | """lfs - large file support (EXPERIMENTAL) |
|
9 | 9 | |
|
10 | 10 | This extension allows large files to be tracked outside of the normal |
|
11 | 11 | repository storage and stored on a centralized server, similar to the |
|
12 | 12 | ``largefiles`` extension. The ``git-lfs`` protocol is used when |
|
13 | 13 | communicating with the server, so existing git infrastructure can be |
|
14 | 14 | harnessed. Even though the files are stored outside of the repository, |
|
15 | 15 | they are still integrity checked in the same manner as normal files. |
|
16 | 16 | |
|
17 | 17 | The files stored outside of the repository are downloaded on demand, |
|
18 | 18 | which reduces the time to clone, and possibly the local disk usage. |
|
19 | 19 | This changes fundamental workflows in a DVCS, so careful thought |
|
20 | 20 | should be given before deploying it. :hg:`convert` can be used to |
|
21 | 21 | convert LFS repositories to normal repositories that no longer |
|
22 | 22 | require this extension, and do so without changing the commit hashes. |
|
23 | 23 | This allows the extension to be disabled if the centralized workflow |
|
24 | 24 | becomes burdensome. However, the pre- and post-convert clones will |
|
25 | 25 | not be able to communicate with each other unless the extension is |
|
26 | 26 | enabled on both. |
|
27 | 27 | |
|
28 | 28 | To start a new repository, or to add LFS files to an existing one, just |
|
29 | 29 | create an ``.hglfs`` file as described below in the root directory of |
|
30 | 30 | the repository. Typically, this file should be put under version |
|
31 | 31 | control, so that the settings will propagate to other repositories with |
|
32 | 32 | push and pull. During any commit, Mercurial will consult this file to |
|
33 | 33 | determine if an added or modified file should be stored externally. The |
|
34 | 34 | type of storage depends on the characteristics of the file at each |
|
35 | 35 | commit. A file that is near a size threshold may switch back and forth |
|
36 | 36 | between LFS and normal storage, as needed. |
|
37 | 37 | |
|
38 | 38 | Alternatively, both normal repositories and largefile-controlled |
|
39 | 39 | repositories can be converted to LFS by using :hg:`convert` and the |
|
40 | 40 | ``lfs.track`` config option described below. The ``.hglfs`` file |
|
41 | 41 | should then be created and added, to control subsequent LFS selection. |
|
42 | 42 | The hashes are also unchanged in this case. The LFS and non-LFS |
|
43 | 43 | repositories can be distinguished because the LFS repository will |
|
44 | 44 | abort any command if this extension is disabled. |
|
45 | 45 | |
|
46 | 46 | Committed LFS files are held locally until the repository is pushed. |
|
47 | 47 | Prior to pushing the normal repository data, the LFS files that are |
|
48 | 48 | tracked by the outgoing commits are automatically uploaded to the |
|
49 | 49 | configured central server. No LFS files are transferred on |
|
50 | 50 | :hg:`pull` or :hg:`clone`. Instead, the files are downloaded on |
|
51 | 51 | demand as they need to be read, if a cached copy cannot be found |
|
52 | 52 | locally. Both committing and downloading an LFS file will link the |
|
53 | 53 | file to a usercache, to speed up future access. See the `usercache` |
|
54 | 54 | config setting described below. |
|
55 | 55 | |
|
56 | 56 | .hglfs:: |
|
57 | 57 | |
|
58 | 58 | The extension reads its configuration from a versioned ``.hglfs`` |
|
59 | 59 | configuration file found in the root of the working directory. The |
|
60 | 60 | ``.hglfs`` file uses the same syntax as all other Mercurial |
|
61 | 61 | configuration files. It uses a single section, ``[track]``. |
|
62 | 62 | |
|
63 | 63 | The ``[track]`` section specifies which files are stored as LFS (or |
|
64 | 64 | not). Each line is keyed by a file pattern, with a predicate value. |
|
65 | 65 | The first file pattern match is used, so put more specific patterns |
|
66 | 66 | first. The available predicates are ``all()``, ``none()``, and |
|
67 | 67 | ``size()``. See "hg help filesets.size" for the latter. |
|
68 | 68 | |
|
69 | 69 | Example versioned ``.hglfs`` file:: |
|
70 | 70 | |
|
71 | 71 | [track] |
|
72 | 72 | # No Makefile or python file, anywhere, will be LFS |
|
73 | 73 | **Makefile = none() |
|
74 | 74 | **.py = none() |
|
75 | 75 | |
|
76 | 76 | **.zip = all() |
|
77 | 77 | **.exe = size(">1MB") |
|
78 | 78 | |
|
79 | 79 | # Catchall for everything not matched above |
|
80 | 80 | ** = size(">10MB") |
|
81 | 81 | |
|
82 | 82 | Configs:: |
|
83 | 83 | |
|
84 | 84 | [lfs] |
|
85 | 85 | # Remote endpoint. Multiple protocols are supported: |
|
86 | 86 | # - http(s)://user:pass@example.com/path |
|
87 | 87 | # git-lfs endpoint |
|
88 | 88 | # - file:///tmp/path |
|
89 | 89 | # local filesystem, usually for testing |
|
90 | 90 | # if unset, lfs will prompt for this setting when it must use this value. |
|
91 | 91 | # (default: unset) |
|
92 | 92 | url = https://example.com/repo.git/info/lfs |
|
93 | 93 | |
|
94 | 94 | # Which files to track in LFS. Path tests are "**.extname" for file |
|
95 | 95 | # extensions, and "path:under/some/directory" for path prefix. Both |
|
96 | 96 | # are relative to the repository root. |
|
97 | 97 | # File size can be tested with the "size()" fileset, and tests can be |
|
98 | 98 | # joined with fileset operators. (See "hg help filesets.operators".) |
|
99 | 99 | # |
|
100 | 100 | # Some examples: |
|
101 | 101 | # - all() # everything |
|
102 | 102 | # - none() # nothing |
|
103 | 103 | # - size(">20MB") # larger than 20MB |
|
104 | 104 | # - !**.txt # anything not a *.txt file |
|
105 | 105 | # - **.zip | **.tar.gz | **.7z # some types of compressed files |
|
106 | 106 | # - path:bin # files under "bin" in the project root |
|
107 | 107 | # - (**.php & size(">2MB")) | (**.js & size(">5MB")) | **.tar.gz |
|
108 | 108 | # | (path:bin & !path:/bin/README) | size(">1GB") |
|
109 | 109 | # (default: none()) |
|
110 | 110 | # |
|
111 | 111 | # This is ignored if there is a tracked '.hglfs' file, and this setting |
|
112 | 112 | # will eventually be deprecated and removed. |
|
113 | 113 | track = size(">10M") |
|
114 | 114 | |
|
115 | 115 | # how many times to retry before giving up on transferring an object |
|
116 | 116 | retry = 5 |
|
117 | 117 | |
|
118 | 118 | # the local directory to store lfs files for sharing across local clones. |
|
119 | 119 | # If not set, the cache is located in an OS specific cache location. |
|
120 | 120 | usercache = /path/to/global/cache |
|
121 | 121 | """ |
|
122 | 122 | |
|
123 | 123 | from __future__ import absolute_import |
|
124 | 124 | |
|
125 | 125 | from mercurial.i18n import _ |
|
126 | 126 | |
|
127 | 127 | from mercurial import ( |
|
128 | 128 | bundle2, |
|
129 | 129 | changegroup, |
|
130 | 130 | cmdutil, |
|
131 | 131 | config, |
|
132 | 132 | context, |
|
133 | 133 | error, |
|
134 | 134 | exchange, |
|
135 | 135 | extensions, |
|
136 | 136 | filelog, |
|
137 | 137 | fileset, |
|
138 | 138 | hg, |
|
139 | 139 | localrepo, |
|
140 | merge, | |
|
141 | 140 | minifileset, |
|
142 | 141 | node, |
|
143 | 142 | pycompat, |
|
144 | 143 | registrar, |
|
145 | 144 | revlog, |
|
146 | 145 | scmutil, |
|
147 | 146 | templatekw, |
|
148 | 147 | upgrade, |
|
149 | 148 | util, |
|
150 | 149 | vfs as vfsmod, |
|
151 | 150 | wireproto, |
|
152 | 151 | ) |
|
153 | 152 | |
|
154 | 153 | from . import ( |
|
155 | 154 | blobstore, |
|
156 | 155 | wrapper, |
|
157 | 156 | ) |
|
158 | 157 | |
|
159 | 158 | # Note for extension authors: ONLY specify testedwith = 'ships-with-hg-core' for |
|
160 | 159 | # extensions which SHIP WITH MERCURIAL. Non-mainline extensions should |
|
161 | 160 | # be specifying the version(s) of Mercurial they are tested with, or |
|
162 | 161 | # leave the attribute unspecified. |
|
163 | 162 | testedwith = 'ships-with-hg-core' |
|
164 | 163 | |
|
165 | 164 | configtable = {} |
|
166 | 165 | configitem = registrar.configitem(configtable) |
|
167 | 166 | |
|
168 | 167 | configitem('experimental', 'lfs.user-agent', |
|
169 | 168 | default=None, |
|
170 | 169 | ) |
|
171 | 170 | configitem('experimental', 'lfs.worker-enable', |
|
172 | 171 | default=False, |
|
173 | 172 | ) |
|
174 | 173 | |
|
175 | 174 | configitem('lfs', 'url', |
|
176 | 175 | default=None, |
|
177 | 176 | ) |
|
178 | 177 | configitem('lfs', 'usercache', |
|
179 | 178 | default=None, |
|
180 | 179 | ) |
|
181 | 180 | # Deprecated |
|
182 | 181 | configitem('lfs', 'threshold', |
|
183 | 182 | default=None, |
|
184 | 183 | ) |
|
185 | 184 | configitem('lfs', 'track', |
|
186 | 185 | default='none()', |
|
187 | 186 | ) |
|
188 | 187 | configitem('lfs', 'retry', |
|
189 | 188 | default=5, |
|
190 | 189 | ) |
|
191 | 190 | |
|
192 | 191 | cmdtable = {} |
|
193 | 192 | command = registrar.command(cmdtable) |
|
194 | 193 | |
|
195 | 194 | templatekeyword = registrar.templatekeyword() |
|
196 | 195 | filesetpredicate = registrar.filesetpredicate() |
|
197 | 196 | |
|
198 | 197 | def featuresetup(ui, supported): |
|
199 | 198 | # don't die on seeing a repo with the lfs requirement |
|
200 | 199 | supported |= {'lfs'} |
|
201 | 200 | |
|
202 | 201 | def uisetup(ui): |
|
203 | 202 | localrepo.localrepository.featuresetupfuncs.add(featuresetup) |
|
204 | 203 | |
|
205 | 204 | def reposetup(ui, repo): |
|
206 | 205 | # Nothing to do with a remote repo |
|
207 | 206 | if not repo.local(): |
|
208 | 207 | return |
|
209 | 208 | |
|
210 | 209 | repo.svfs.lfslocalblobstore = blobstore.local(repo) |
|
211 | 210 | repo.svfs.lfsremoteblobstore = blobstore.remote(repo) |
|
212 | 211 | |
|
213 | 212 | class lfsrepo(repo.__class__): |
|
214 | 213 | @localrepo.unfilteredmethod |
|
215 | 214 | def commitctx(self, ctx, error=False): |
|
216 | 215 | repo.svfs.options['lfstrack'] = _trackedmatcher(self) |
|
217 | 216 | return super(lfsrepo, self).commitctx(ctx, error) |
|
218 | 217 | |
|
219 | 218 | repo.__class__ = lfsrepo |
|
220 | 219 | |
|
221 | 220 | if 'lfs' not in repo.requirements: |
|
222 | 221 | def checkrequireslfs(ui, repo, **kwargs): |
|
223 | 222 | if 'lfs' not in repo.requirements: |
|
224 | 223 | last = kwargs.get('node_last') |
|
225 | 224 | _bin = node.bin |
|
226 | 225 | if last: |
|
227 | 226 | s = repo.set('%n:%n', _bin(kwargs['node']), _bin(last)) |
|
228 | 227 | else: |
|
229 | 228 | s = repo.set('%n', _bin(kwargs['node'])) |
|
230 | 229 | for ctx in s: |
|
231 | 230 | # TODO: is there a way to just walk the files in the commit? |
|
232 | 231 | if any(ctx[f].islfs() for f in ctx.files() if f in ctx): |
|
233 | 232 | repo.requirements.add('lfs') |
|
234 | 233 | repo._writerequirements() |
|
235 | 234 | repo.prepushoutgoinghooks.add('lfs', wrapper.prepush) |
|
236 | 235 | break |
|
237 | 236 | |
|
238 | 237 | ui.setconfig('hooks', 'commit.lfs', checkrequireslfs, 'lfs') |
|
239 | 238 | ui.setconfig('hooks', 'pretxnchangegroup.lfs', checkrequireslfs, 'lfs') |
|
240 | 239 | else: |
|
241 | 240 | repo.prepushoutgoinghooks.add('lfs', wrapper.prepush) |
|
242 | 241 | |
|
243 | 242 | def _trackedmatcher(repo): |
|
244 | 243 | """Return a function (path, size) -> bool indicating whether or not to |
|
245 | 244 | track a given file with lfs.""" |
|
246 | 245 | if not repo.wvfs.exists('.hglfs'): |
|
247 | 246 | # No '.hglfs' in wdir. Fallback to config for now. |
|
248 | 247 | trackspec = repo.ui.config('lfs', 'track') |
|
249 | 248 | |
|
250 | 249 | # deprecated config: lfs.threshold |
|
251 | 250 | threshold = repo.ui.configbytes('lfs', 'threshold') |
|
252 | 251 | if threshold: |
|
253 | 252 | fileset.parse(trackspec) # make sure syntax errors are confined |
|
254 | 253 | trackspec = "(%s) | size('>%d')" % (trackspec, threshold) |
|
255 | 254 | |
|
256 | 255 | return minifileset.compile(trackspec) |
|
257 | 256 | |
|
258 | 257 | data = repo.wvfs.tryread('.hglfs') |
|
259 | 258 | if not data: |
|
260 | 259 | return lambda p, s: False |
|
261 | 260 | |
|
262 | 261 | # Parse errors here will abort with a message that points to the .hglfs file |
|
263 | 262 | # and line number. |
|
264 | 263 | cfg = config.config() |
|
265 | 264 | cfg.parse('.hglfs', data) |
|
266 | 265 | |
|
267 | 266 | try: |
|
268 | 267 | rules = [(minifileset.compile(pattern), minifileset.compile(rule)) |
|
269 | 268 | for pattern, rule in cfg.items('track')] |
|
270 | 269 | except error.ParseError as e: |
|
271 | 270 | # The original exception gives no indicator that the error is in the |
|
272 | 271 | # .hglfs file, so add that. |
|
273 | 272 | |
|
274 | 273 | # TODO: See if the line number of the file can be made available. |
|
275 | 274 | raise error.Abort(_('parse error in .hglfs: %s') % e) |
|
276 | 275 | |
|
277 | 276 | def _match(path, size): |
|
278 | 277 | for pat, rule in rules: |
|
279 | 278 | if pat(path, size): |
|
280 | 279 | return rule(path, size) |
|
281 | 280 | |
|
282 | 281 | return False |
|
283 | 282 | |
|
284 | 283 | return _match |
|
285 | 284 | |
|
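``_trackedmatcher`` above turns each ``[track]`` line into a (pattern, predicate) pair of ``(path, size) -> bool`` callables, and the first pattern that matches decides the outcome, as the module docstring says ("the first file pattern match is used"). A standalone sketch of that first-match-wins evaluation, with hand-written toy callables standing in for ``minifileset.compile`` (file names and thresholds below are illustrative only)::

    # Toy stand-ins for compiled minifileset rules: (path, size) -> bool.
    rules = [
        (lambda p, s: p.endswith('.py'),  lambda p, s: False),           # **.py = none()
        (lambda p, s: p.endswith('.zip'), lambda p, s: True),            # **.zip = all()
        (lambda p, s: True,               lambda p, s: s > 10 * 2**20),  # ** = size(">10MB")
    ]

    def trackedbylfs(path, size):
        # First matching pattern wins; later rules are never consulted.
        for pattern, predicate in rules:
            if pattern(path, size):
                return predicate(path, size)
        return False

    assert not trackedbylfs('big/script.py', 50 * 2**20)   # hit none() first
    assert trackedbylfs('dist/bundle.zip', 1)              # hit all()
    assert trackedbylfs('data/blob.bin', 11 * 2**20)       # catchall size rule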
286 | 285 | def wrapfilelog(filelog): |
|
287 | 286 | wrapfunction = extensions.wrapfunction |
|
288 | 287 | |
|
289 | 288 | wrapfunction(filelog, 'addrevision', wrapper.filelogaddrevision) |
|
290 | 289 | wrapfunction(filelog, 'renamed', wrapper.filelogrenamed) |
|
291 | 290 | wrapfunction(filelog, 'size', wrapper.filelogsize) |
|
292 | 291 | |
|
293 | 292 | def extsetup(ui): |
|
294 | 293 | wrapfilelog(filelog.filelog) |
|
295 | 294 | |
|
296 | 295 | wrapfunction = extensions.wrapfunction |
|
297 | 296 | |
|
298 | 297 | wrapfunction(cmdutil, '_updatecatformatter', wrapper._updatecatformatter) |
|
299 | 298 | wrapfunction(scmutil, 'wrapconvertsink', wrapper.convertsink) |
|
300 | 299 | |
|
301 | 300 | wrapfunction(upgrade, '_finishdatamigration', |
|
302 | 301 | wrapper.upgradefinishdatamigration) |
|
303 | 302 | |
|
304 | 303 | wrapfunction(upgrade, 'preservedrequirements', |
|
305 | 304 | wrapper.upgraderequirements) |
|
306 | 305 | |
|
307 | 306 | wrapfunction(upgrade, 'supporteddestrequirements', |
|
308 | 307 | wrapper.upgraderequirements) |
|
309 | 308 | |
|
310 | 309 | wrapfunction(changegroup, |
|
311 | 310 | 'supportedoutgoingversions', |
|
312 | 311 | wrapper.supportedoutgoingversions) |
|
313 | 312 | wrapfunction(changegroup, |
|
314 | 313 | 'allsupportedversions', |
|
315 | 314 | wrapper.allsupportedversions) |
|
316 | 315 | |
|
317 | 316 | wrapfunction(exchange, 'push', wrapper.push) |
|
318 | 317 | wrapfunction(wireproto, '_capabilities', wrapper._capabilities) |
|
319 | 318 | |
|
320 | 319 | wrapfunction(context.basefilectx, 'cmp', wrapper.filectxcmp) |
|
321 | 320 | wrapfunction(context.basefilectx, 'isbinary', wrapper.filectxisbinary) |
|
322 | 321 | context.basefilectx.islfs = wrapper.filectxislfs |
|
323 | 322 | |
|
324 | 323 | revlog.addflagprocessor( |
|
325 | 324 | revlog.REVIDX_EXTSTORED, |
|
326 | 325 | ( |
|
327 | 326 | wrapper.readfromstore, |
|
328 | 327 | wrapper.writetostore, |
|
329 | 328 | wrapper.bypasscheckhash, |
|
330 | 329 | ), |
|
331 | 330 | ) |
|
332 | 331 | |
|
333 | 332 | wrapfunction(hg, 'clone', wrapper.hgclone) |
|
334 | 333 | wrapfunction(hg, 'postshare', wrapper.hgpostshare) |
|
335 | 334 | |
|
336 | wrapfunction(merge, 'applyupdates', wrapper.mergemodapplyupdates) | |
|
337 | ||
|
338 | 335 | scmutil.fileprefetchhooks.add('lfs', wrapper._prefetchfiles) |
|
339 | 336 | |
|
340 | 337 | # Make bundle choose changegroup3 instead of changegroup2. This affects |
|
341 | 338 | # "hg bundle" command. Note: it does not cover all bundle formats like |
|
342 | 339 | # "packed1". Using "packed1" with lfs will likely cause trouble. |
|
343 | 340 | names = [k for k, v in exchange._bundlespeccgversions.items() if v == '02'] |
|
344 | 341 | for k in names: |
|
345 | 342 | exchange._bundlespeccgversions[k] = '03' |
|
346 | 343 | |
|
347 | 344 | # bundlerepo uses "vfsmod.readonlyvfs(othervfs)", we need to make sure lfs |
|
348 | 345 | # options and blob stores are passed from othervfs to the new readonlyvfs. |
|
349 | 346 | wrapfunction(vfsmod.readonlyvfs, '__init__', wrapper.vfsinit) |
|
350 | 347 | |
|
351 | 348 | # when writing a bundle via "hg bundle" command, upload related LFS blobs |
|
352 | 349 | wrapfunction(bundle2, 'writenewbundle', wrapper.writenewbundle) |
|
353 | 350 | |
|
354 | 351 | @filesetpredicate('lfs()', callstatus=True) |
|
355 | 352 | def lfsfileset(mctx, x): |
|
356 | 353 | """File that uses LFS storage.""" |
|
357 | 354 | # i18n: "lfs" is a keyword |
|
358 | 355 | fileset.getargs(x, 0, 0, _("lfs takes no arguments")) |
|
359 | 356 | return [f for f in mctx.subset |
|
360 | 357 | if wrapper.pointerfromctx(mctx.ctx, f, removed=True) is not None] |
|
361 | 358 | |
|
362 | 359 | @templatekeyword('lfs_files') |
|
363 | 360 | def lfsfiles(repo, ctx, **args): |
|
364 | 361 | """List of strings. All files modified, added, or removed by this |
|
365 | 362 | changeset.""" |
|
366 | 363 | args = pycompat.byteskwargs(args) |
|
367 | 364 | |
|
368 | 365 | pointers = wrapper.pointersfromctx(ctx, removed=True) # {path: pointer} |
|
369 | 366 | files = sorted(pointers.keys()) |
|
370 | 367 | |
|
371 | 368 | def pointer(v): |
|
372 | 369 | # In the file spec, version is first and the other keys are sorted. |
|
373 | 370 | sortkeyfunc = lambda x: (x[0] != 'version', x) |
|
374 | 371 | items = sorted(pointers[v].iteritems(), key=sortkeyfunc) |
|
375 | 372 | return util.sortdict(items) |
|
376 | 373 | |
|
377 | 374 | makemap = lambda v: { |
|
378 | 375 | 'file': v, |
|
379 | 376 | 'lfsoid': pointers[v].oid() if pointers[v] else None, |
|
380 | 377 | 'lfspointer': templatekw.hybriddict(pointer(v)), |
|
381 | 378 | } |
|
382 | 379 | |
|
383 | 380 | # TODO: make the separator ', '? |
|
384 | 381 | f = templatekw._showlist('lfs_file', files, args) |
|
385 | 382 | return templatekw._hybrid(f, files, makemap, pycompat.identity) |
|
386 | 383 | |
|
387 | 384 | @command('debuglfsupload', |
|
388 | 385 | [('r', 'rev', [], _('upload large files introduced by REV'))]) |
|
389 | 386 | def debuglfsupload(ui, repo, **opts): |
|
390 | 387 | """upload lfs blobs added by the working copy parent or given revisions""" |
|
391 | 388 | revs = opts.get('rev', []) |
|
392 | 389 | pointers = wrapper.extractpointers(repo, scmutil.revrange(repo, revs)) |
|
393 | 390 | wrapper.uploadblobs(repo, pointers) |
@@ -1,412 +1,391
|
1 | 1 | # wrapper.py - methods wrapping core mercurial logic |
|
2 | 2 | # |
|
3 | 3 | # Copyright 2017 Facebook, Inc. |
|
4 | 4 | # |
|
5 | 5 | # This software may be used and distributed according to the terms of the |
|
6 | 6 | # GNU General Public License version 2 or any later version. |
|
7 | 7 | |
|
8 | 8 | from __future__ import absolute_import |
|
9 | 9 | |
|
10 | 10 | import hashlib |
|
11 | 11 | |
|
12 | 12 | from mercurial.i18n import _ |
|
13 | 13 | from mercurial.node import bin, nullid, short |
|
14 | 14 | |
|
15 | 15 | from mercurial import ( |
|
16 | 16 | error, |
|
17 | 17 | filelog, |
|
18 | 18 | revlog, |
|
19 | 19 | util, |
|
20 | 20 | ) |
|
21 | 21 | |
|
22 | 22 | from ..largefiles import lfutil |
|
23 | 23 | |
|
24 | 24 | from . import ( |
|
25 | 25 | blobstore, |
|
26 | 26 | pointer, |
|
27 | 27 | ) |
|
28 | 28 | |
|
29 | 29 | def supportedoutgoingversions(orig, repo): |
|
30 | 30 | versions = orig(repo) |
|
31 | 31 | if 'lfs' in repo.requirements: |
|
32 | 32 | versions.discard('01') |
|
33 | 33 | versions.discard('02') |
|
34 | 34 | versions.add('03') |
|
35 | 35 | return versions |
|
36 | 36 | |
|
37 | 37 | def allsupportedversions(orig, ui): |
|
38 | 38 | versions = orig(ui) |
|
39 | 39 | versions.add('03') |
|
40 | 40 | return versions |
|
41 | 41 | |
|
42 | 42 | def _capabilities(orig, repo, proto): |
|
43 | 43 | '''Wrap server command to announce lfs server capability''' |
|
44 | 44 | caps = orig(repo, proto) |
|
45 | 45 | # XXX: change to 'lfs=serve' when separate git server isn't required? |
|
46 | 46 | caps.append('lfs') |
|
47 | 47 | return caps |
|
48 | 48 | |
|
49 | 49 | def bypasscheckhash(self, text): |
|
50 | 50 | return False |
|
51 | 51 | |
|
52 | 52 | def readfromstore(self, text): |
|
53 | 53 | """Read filelog content from the local blobstore; transform for flagprocessor. |
|
54 | 54 | |
|
55 | 55 | Default transform for flagprocessor, returning contents from blobstore. |
|
56 | 56 | Returns a 2-tuple (text, validatehash) where validatehash is True as the |
|
57 | 57 | contents of the blobstore should be checked using checkhash. |
|
58 | 58 | """ |
|
59 | 59 | p = pointer.deserialize(text) |
|
60 | 60 | oid = p.oid() |
|
61 | 61 | store = self.opener.lfslocalblobstore |
|
62 | 62 | if not store.has(oid): |
|
63 | 63 | p.filename = self.filename |
|
64 | 64 | self.opener.lfsremoteblobstore.readbatch([p], store) |
|
65 | 65 | |
|
66 | 66 | # The caller will validate the content |
|
67 | 67 | text = store.read(oid, verify=False) |
|
68 | 68 | |
|
69 | 69 | # pack hg filelog metadata |
|
70 | 70 | hgmeta = {} |
|
71 | 71 | for k in p.keys(): |
|
72 | 72 | if k.startswith('x-hg-'): |
|
73 | 73 | name = k[len('x-hg-'):] |
|
74 | 74 | hgmeta[name] = p[k] |
|
75 | 75 | if hgmeta or text.startswith('\1\n'): |
|
76 | 76 | text = filelog.packmeta(hgmeta, text) |
|
77 | 77 | |
|
78 | 78 | return (text, True) |
|
79 | 79 | |
|
80 | 80 | def writetostore(self, text): |
|
81 | 81 | # hg filelog metadata (includes rename, etc) |
|
82 | 82 | hgmeta, offset = filelog.parsemeta(text) |
|
83 | 83 | if offset and offset > 0: |
|
84 | 84 | # lfs blob does not contain hg filelog metadata |
|
85 | 85 | text = text[offset:] |
|
86 | 86 | |
|
87 | 87 | # git-lfs only supports sha256 |
|
88 | 88 | oid = hashlib.sha256(text).hexdigest() |
|
89 | 89 | self.opener.lfslocalblobstore.write(oid, text) |
|
90 | 90 | |
|
91 | 91 | # replace contents with metadata |
|
92 | 92 | longoid = 'sha256:%s' % oid |
|
93 | 93 | metadata = pointer.gitlfspointer(oid=longoid, size=str(len(text))) |
|
94 | 94 | |
|
95 | 95 | # by default, we expect the content to be binary. however, LFS could also |
|
96 | 96 | # be used for non-binary content. add a special entry for non-binary data. |
|
97 | 97 | # this will be used by filectx.isbinary(). |
|
98 | 98 | if not util.binary(text): |
|
99 | 99 | # not hg filelog metadata (affecting commit hash), no "x-hg-" prefix |
|
100 | 100 | metadata['x-is-binary'] = '0' |
|
101 | 101 | |
|
102 | 102 | # translate hg filelog metadata to lfs metadata with "x-hg-" prefix |
|
103 | 103 | if hgmeta is not None: |
|
104 | 104 | for k, v in hgmeta.iteritems(): |
|
105 | 105 | metadata['x-hg-%s' % k] = v |
|
106 | 106 | |
|
107 | 107 | rawtext = metadata.serialize() |
|
108 | 108 | return (rawtext, False) |
|
109 | 109 | |
|
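``writetostore`` above strips the hg filelog metadata, writes the raw blob into the local blobstore keyed by its sha256 digest, and stores a small git-lfs style pointer in the revlog instead. A rough standalone illustration of what such a pointer payload looks like, following the published git-lfs pointer layout (the blob contents are made up; hg-specific metadata such as rename info would additionally appear under ``x-hg-`` keys)::

    import hashlib

    blob = b'example large file contents\n'
    oid = hashlib.sha256(blob).hexdigest()

    # A pointer is a short key/value text: 'version' comes first and the
    # remaining keys are sorted.
    pointer_text = (
        b'version https://git-lfs.github.com/spec/v1\n'
        + b'oid sha256:' + oid.encode('ascii') + b'\n'
        + b'size ' + str(len(blob)).encode('ascii') + b'\n'
    )
    print(pointer_text.decode('ascii'))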
110 | 110 | def _islfs(rlog, node=None, rev=None): |
|
111 | 111 | if rev is None: |
|
112 | 112 | if node is None: |
|
113 | 113 | # both None - likely working copy content where node is not ready |
|
114 | 114 | return False |
|
115 | 115 | rev = rlog.rev(node) |
|
116 | 116 | else: |
|
117 | 117 | node = rlog.node(rev) |
|
118 | 118 | if node == nullid: |
|
119 | 119 | return False |
|
120 | 120 | flags = rlog.flags(rev) |
|
121 | 121 | return bool(flags & revlog.REVIDX_EXTSTORED) |
|
122 | 122 | |
|
123 | 123 | def filelogaddrevision(orig, self, text, transaction, link, p1, p2, |
|
124 | 124 | cachedelta=None, node=None, |
|
125 | 125 | flags=revlog.REVIDX_DEFAULT_FLAGS, **kwds): |
|
126 | 126 | textlen = len(text) |
|
127 | 127 | # exclude hg rename meta from file size |
|
128 | 128 | meta, offset = filelog.parsemeta(text) |
|
129 | 129 | if offset: |
|
130 | 130 | textlen -= offset |
|
131 | 131 | |
|
132 | 132 | lfstrack = self.opener.options['lfstrack'] |
|
133 | 133 | |
|
134 | 134 | if lfstrack(self.filename, textlen): |
|
135 | 135 | flags |= revlog.REVIDX_EXTSTORED |
|
136 | 136 | |
|
137 | 137 | return orig(self, text, transaction, link, p1, p2, cachedelta=cachedelta, |
|
138 | 138 | node=node, flags=flags, **kwds) |
|
139 | 139 | |
|
140 | 140 | def filelogrenamed(orig, self, node): |
|
141 | 141 | if _islfs(self, node): |
|
142 | 142 | rawtext = self.revision(node, raw=True) |
|
143 | 143 | if not rawtext: |
|
144 | 144 | return False |
|
145 | 145 | metadata = pointer.deserialize(rawtext) |
|
146 | 146 | if 'x-hg-copy' in metadata and 'x-hg-copyrev' in metadata: |
|
147 | 147 | return metadata['x-hg-copy'], bin(metadata['x-hg-copyrev']) |
|
148 | 148 | else: |
|
149 | 149 | return False |
|
150 | 150 | return orig(self, node) |
|
151 | 151 | |
|
152 | 152 | def filelogsize(orig, self, rev): |
|
153 | 153 | if _islfs(self, rev=rev): |
|
154 | 154 | # fast path: use lfs metadata to answer size |
|
155 | 155 | rawtext = self.revision(rev, raw=True) |
|
156 | 156 | metadata = pointer.deserialize(rawtext) |
|
157 | 157 | return int(metadata['size']) |
|
158 | 158 | return orig(self, rev) |
|
159 | 159 | |
|
160 | 160 | def filectxcmp(orig, self, fctx): |
|
161 | 161 | """returns True if text is different than fctx""" |
|
162 | 162 | # some fctx (e.g. hg-git) are not based on basefilectx and do not have islfs |
|
163 | 163 | if self.islfs() and getattr(fctx, 'islfs', lambda: False)(): |
|
164 | 164 | # fast path: check LFS oid |
|
165 | 165 | p1 = pointer.deserialize(self.rawdata()) |
|
166 | 166 | p2 = pointer.deserialize(fctx.rawdata()) |
|
167 | 167 | return p1.oid() != p2.oid() |
|
168 | 168 | return orig(self, fctx) |
|
169 | 169 | |
|
170 | 170 | def filectxisbinary(orig, self): |
|
171 | 171 | if self.islfs(): |
|
172 | 172 | # fast path: use lfs metadata to answer isbinary |
|
173 | 173 | metadata = pointer.deserialize(self.rawdata()) |
|
174 | 174 | # if lfs metadata says nothing, assume it's binary by default |
|
175 | 175 | return bool(int(metadata.get('x-is-binary', 1))) |
|
176 | 176 | return orig(self) |
|
177 | 177 | |
|
178 | 178 | def filectxislfs(self): |
|
179 | 179 | return _islfs(self.filelog(), self.filenode()) |
|
180 | 180 | |
|
181 | 181 | def _updatecatformatter(orig, fm, ctx, matcher, path, decode): |
|
182 | 182 | orig(fm, ctx, matcher, path, decode) |
|
183 | 183 | fm.data(rawdata=ctx[path].rawdata()) |
|
184 | 184 | |
|
185 | 185 | def convertsink(orig, sink): |
|
186 | 186 | sink = orig(sink) |
|
187 | 187 | if sink.repotype == 'hg': |
|
188 | 188 | class lfssink(sink.__class__): |
|
189 | 189 | def putcommit(self, files, copies, parents, commit, source, revmap, |
|
190 | 190 | full, cleanp2): |
|
191 | 191 | pc = super(lfssink, self).putcommit |
|
192 | 192 | node = pc(files, copies, parents, commit, source, revmap, full, |
|
193 | 193 | cleanp2) |
|
194 | 194 | |
|
195 | 195 | if 'lfs' not in self.repo.requirements: |
|
196 | 196 | ctx = self.repo[node] |
|
197 | 197 | |
|
198 | 198 | # The file list may contain removed files, so check for |
|
199 | 199 | # membership before assuming it is in the context. |
|
200 | 200 | if any(f in ctx and ctx[f].islfs() for f, n in files): |
|
201 | 201 | self.repo.requirements.add('lfs') |
|
202 | 202 | self.repo._writerequirements() |
|
203 | 203 | |
|
204 | 204 | # Permanently enable lfs locally |
|
205 | 205 | self.repo.vfs.append( |
|
206 | 206 | 'hgrc', util.tonativeeol('\n[extensions]\nlfs=\n')) |
|
207 | 207 | |
|
208 | 208 | return node |
|
209 | 209 | |
|
210 | 210 | sink.__class__ = lfssink |
|
211 | 211 | |
|
212 | 212 | return sink |
|
213 | 213 | |
|
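The ``sink.__class__ = lfssink`` assignment above (like ``repo.__class__ = lfsrepo`` in ``reposetup``) swaps one instance's class for a dynamically created subclass, so a single object can be intercepted without monkey-patching the original class for everyone. A generic standalone sketch of that pattern, using toy classes rather than anything Mercurial-specific::

    class Sink(object):
        def putcommit(self, data):
            return 'stored %s' % data

    def wrapsink(sink):
        class trackingsink(sink.__class__):
            def putcommit(self, data):
                # Run the real implementation, then add per-instance behaviour.
                result = super(trackingsink, self).putcommit(data)
                self.seen = getattr(self, 'seen', 0) + 1
                return result

        sink.__class__ = trackingsink
        return sink

    s = wrapsink(Sink())
    s.putcommit('a')
    s.putcommit('b')
    assert s.seen == 2                            # wrapped instance is tracked
    assert Sink().putcommit('c') == 'stored c'    # other instances untouched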
214 | 214 | def vfsinit(orig, self, othervfs): |
|
215 | 215 | orig(self, othervfs) |
|
216 | 216 | # copy lfs related options |
|
217 | 217 | for k, v in othervfs.options.items(): |
|
218 | 218 | if k.startswith('lfs'): |
|
219 | 219 | self.options[k] = v |
|
220 | 220 | # also copy lfs blobstores. note: this can run before reposetup, so lfs |
|
221 | 221 | # blobstore attributes are not always ready at this time. |
|
222 | 222 | for name in ['lfslocalblobstore', 'lfsremoteblobstore']: |
|
223 | 223 | if util.safehasattr(othervfs, name): |
|
224 | 224 | setattr(self, name, getattr(othervfs, name)) |
|
225 | 225 | |
|
226 | 226 | def hgclone(orig, ui, opts, *args, **kwargs): |
|
227 | 227 | result = orig(ui, opts, *args, **kwargs) |
|
228 | 228 | |
|
229 | 229 | if result is not None: |
|
230 | 230 | sourcerepo, destrepo = result |
|
231 | 231 | repo = destrepo.local() |
|
232 | 232 | |
|
233 | 233 | # When cloning to a remote repo (like through SSH), no repo is available |
|
234 | 234 | # from the peer. Therefore the hgrc can't be updated. |
|
235 | 235 | if not repo: |
|
236 | 236 | return result |
|
237 | 237 | |
|
238 | 238 | # If lfs is required for this repo, permanently enable it locally |
|
239 | 239 | if 'lfs' in repo.requirements: |
|
240 | 240 | repo.vfs.append('hgrc', |
|
241 | 241 | util.tonativeeol('\n[extensions]\nlfs=\n')) |
|
242 | 242 | |
|
243 | 243 | return result |
|
244 | 244 | |
|
245 | 245 | def hgpostshare(orig, sourcerepo, destrepo, bookmarks=True, defaultpath=None): |
|
246 | 246 | orig(sourcerepo, destrepo, bookmarks, defaultpath) |
|
247 | 247 | |
|
248 | 248 | # If lfs is required for this repo, permanently enable it locally |
|
249 | 249 | if 'lfs' in destrepo.requirements: |
|
250 | 250 | destrepo.vfs.append('hgrc', util.tonativeeol('\n[extensions]\nlfs=\n')) |
|
251 | 251 | |
|
252 | 252 | def _prefetchfiles(repo, ctx, files): |
|
253 | 253 | """Ensure that required LFS blobs are present, fetching them as a group if |
|
254 | needed. | |
|
255 | ||
|
256 | This is centralized logic for various prefetch hooks.""" | |
|
254 | needed.""" | |
|
257 | 255 | pointers = [] |
|
258 | 256 | localstore = repo.svfs.lfslocalblobstore |
|
259 | 257 | |
|
260 | 258 | for f in files: |
|
261 | 259 | p = pointerfromctx(ctx, f) |
|
262 | 260 | if p and not localstore.has(p.oid()): |
|
263 | 261 | p.filename = f |
|
264 | 262 | pointers.append(p) |
|
265 | 263 | |
|
266 | 264 | if pointers: |
|
267 | 265 | repo.svfs.lfsremoteblobstore.readbatch(pointers, localstore) |
|
268 | 266 | |
|
269 | def mergemodapplyupdates(orig, repo, actions, wctx, mctx, overwrite, | |
|
270 | labels=None): | |
|
271 | """Ensure that the required LFS blobs are present before applying updates, | |
|
272 | fetching them as a group if needed. | |
|
273 | ||
|
274 | This has the effect of ensuring all necessary LFS blobs are present before | |
|
275 | making working directory changes during an update (including after clone and | |
|
276 | share) or merge.""" | |
|
277 | ||
|
278 | # Skipping 'a', 'am', 'f', 'r', 'dm', 'e', 'k', 'p' and 'pr', because they | |
|
279 | # don't touch mctx. 'cd' is skipped, because changed/deleted never resolves | |
|
280 | # to something from the remote side. | |
|
281 | oplist = [actions[a] for a in 'g dc dg m'.split()] | |
|
282 | ||
|
283 | _prefetchfiles(repo, mctx, | |
|
284 | [f for sublist in oplist for f, args, msg in sublist]) | |
|
285 | ||
|
286 | return orig(repo, actions, wctx, mctx, overwrite, labels) | |
|
287 | ||
|
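The removal of ``mergemodapplyupdates`` above pairs with the ``scmutil.fileprefetchhooks.add('lfs', wrapper._prefetchfiles)`` call added in ``__init__.py``: rather than wrapping ``merge.applyupdates`` directly, the extension now registers a named prefetch hook which core code can invoke with ``(repo, ctx, files)`` before touching the working directory. A simplified standalone sketch of such a named-hook registry (a toy stand-in; the real container is provided by core Mercurial)::

    class hooks(object):
        """Minimal ordered registry of (name, callable) pairs."""

        def __init__(self):
            self._hooks = []

        def add(self, source, hook):
            self._hooks.append((source, hook))

        def __call__(self, *args):
            for source, hook in self._hooks:
                hook(*args)

    fileprefetchhooks = hooks()

    def lfsprefetch(repo, ctx, files):
        # The real hook batches all missing blobs into one readbatch() call.
        print('prefetching %d files from %s' % (len(files), repo))

    fileprefetchhooks.add('lfs', lfsprefetch)
    fileprefetchhooks('myrepo', 'somectx', ['a.bin', 'b.zip'])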
288 | 267 | def _canskipupload(repo): |
|
289 | 268 | # if remotestore is a null store, upload is a no-op and can be skipped |
|
290 | 269 | return isinstance(repo.svfs.lfsremoteblobstore, blobstore._nullremote) |
|
291 | 270 | |
|
292 | 271 | def candownload(repo): |
|
293 | 272 | # if remotestore is a null store, downloads will lead to nothing |
|
294 | 273 | return not isinstance(repo.svfs.lfsremoteblobstore, blobstore._nullremote) |
|
295 | 274 | |
|
296 | 275 | def uploadblobsfromrevs(repo, revs): |
|
297 | 276 | '''upload lfs blobs introduced by revs |
|
298 | 277 | |
|
299 | 278 | Note: also used by other extensions e. g. infinitepush. avoid renaming. |
|
300 | 279 | ''' |
|
301 | 280 | if _canskipupload(repo): |
|
302 | 281 | return |
|
303 | 282 | pointers = extractpointers(repo, revs) |
|
304 | 283 | uploadblobs(repo, pointers) |
|
305 | 284 | |
|
306 | 285 | def prepush(pushop): |
|
307 | 286 | """Prepush hook. |
|
308 | 287 | |
|
309 | 288 | Read through the revisions to push, looking for filelog entries that can be |
|
310 | 289 | deserialized into metadata so that we can block the push on their upload to |
|
311 | 290 | the remote blobstore. |
|
312 | 291 | """ |
|
313 | 292 | return uploadblobsfromrevs(pushop.repo, pushop.outgoing.missing) |
|
314 | 293 | |
|
315 | 294 | def push(orig, repo, remote, *args, **kwargs): |
|
316 | 295 | """bail on push if the extension isn't enabled on remote when needed""" |
|
317 | 296 | if 'lfs' in repo.requirements: |
|
318 | 297 | # If the remote peer is for a local repo, the requirement tests in the |
|
319 | 298 | # base class method enforce lfs support. Otherwise, some revisions in |
|
320 | 299 | # this repo use lfs, and the remote repo needs the extension loaded. |
|
321 | 300 | if not remote.local() and not remote.capable('lfs'): |
|
322 | 301 | # This is a copy of the message in exchange.push() when requirements |
|
323 | 302 | # are missing between local repos. |
|
324 | 303 | m = _("required features are not supported in the destination: %s") |
|
325 | 304 | raise error.Abort(m % 'lfs', |
|
326 | 305 | hint=_('enable the lfs extension on the server')) |
|
327 | 306 | return orig(repo, remote, *args, **kwargs) |
|
328 | 307 | |
|
329 | 308 | def writenewbundle(orig, ui, repo, source, filename, bundletype, outgoing, |
|
330 | 309 | *args, **kwargs): |
|
331 | 310 | """upload LFS blobs added by outgoing revisions on 'hg bundle'""" |
|
332 | 311 | uploadblobsfromrevs(repo, outgoing.missing) |
|
333 | 312 | return orig(ui, repo, source, filename, bundletype, outgoing, *args, |
|
334 | 313 | **kwargs) |
|
335 | 314 | |
|
336 | 315 | def extractpointers(repo, revs): |
|
337 | 316 | """return a list of lfs pointers added by given revs""" |
|
338 | 317 | repo.ui.debug('lfs: computing set of blobs to upload\n') |
|
339 | 318 | pointers = {} |
|
340 | 319 | for r in revs: |
|
341 | 320 | ctx = repo[r] |
|
342 | 321 | for p in pointersfromctx(ctx).values(): |
|
343 | 322 | pointers[p.oid()] = p |
|
344 | 323 | return sorted(pointers.values()) |
|
345 | 324 | |
|
346 | 325 | def pointerfromctx(ctx, f, removed=False): |
|
347 | 326 | """return a pointer for the named file from the given changectx, or None if |
|
348 | 327 | the file isn't LFS. |
|
349 | 328 | |
|
350 | 329 | Optionally, the pointer for a file deleted from the context can be returned. |
|
351 | 330 | Since no such pointer is actually stored, and to distinguish from a non LFS |
|
352 | 331 | file, this pointer is represented by an empty dict. |
|
353 | 332 | """ |
|
354 | 333 | _ctx = ctx |
|
355 | 334 | if f not in ctx: |
|
356 | 335 | if not removed: |
|
357 | 336 | return None |
|
358 | 337 | if f in ctx.p1(): |
|
359 | 338 | _ctx = ctx.p1() |
|
360 | 339 | elif f in ctx.p2(): |
|
361 | 340 | _ctx = ctx.p2() |
|
362 | 341 | else: |
|
363 | 342 | return None |
|
364 | 343 | fctx = _ctx[f] |
|
365 | 344 | if not _islfs(fctx.filelog(), fctx.filenode()): |
|
366 | 345 | return None |
|
367 | 346 | try: |
|
368 | 347 | p = pointer.deserialize(fctx.rawdata()) |
|
369 | 348 | if ctx == _ctx: |
|
370 | 349 | return p |
|
371 | 350 | return {} |
|
372 | 351 | except pointer.InvalidPointer as ex: |
|
373 | 352 | raise error.Abort(_('lfs: corrupted pointer (%s@%s): %s\n') |
|
374 | 353 | % (f, short(_ctx.node()), ex)) |
|
375 | 354 | |
|
376 | 355 | def pointersfromctx(ctx, removed=False): |
|
377 | 356 | """return a dict {path: pointer} for given single changectx. |
|
378 | 357 | |
|
379 | 358 | If ``removed`` == True and the LFS file was removed from ``ctx``, the value |
|
380 | 359 | stored for the path is an empty dict. |
|
381 | 360 | """ |
|
382 | 361 | result = {} |
|
383 | 362 | for f in ctx.files(): |
|
384 | 363 | p = pointerfromctx(ctx, f, removed=removed) |
|
385 | 364 | if p is not None: |
|
386 | 365 | result[f] = p |
|
387 | 366 | return result |
|
388 | 367 | |
|
389 | 368 | def uploadblobs(repo, pointers): |
|
390 | 369 | """upload given pointers from local blobstore""" |
|
391 | 370 | if not pointers: |
|
392 | 371 | return |
|
393 | 372 | |
|
394 | 373 | remoteblob = repo.svfs.lfsremoteblobstore |
|
395 | 374 | remoteblob.writebatch(pointers, repo.svfs.lfslocalblobstore) |
|
396 | 375 | |
|
397 | 376 | def upgradefinishdatamigration(orig, ui, srcrepo, dstrepo, requirements): |
|
398 | 377 | orig(ui, srcrepo, dstrepo, requirements) |
|
399 | 378 | |
|
400 | 379 | srclfsvfs = srcrepo.svfs.lfslocalblobstore.vfs |
|
401 | 380 | dstlfsvfs = dstrepo.svfs.lfslocalblobstore.vfs |
|
402 | 381 | |
|
403 | 382 | for dirpath, dirs, files in srclfsvfs.walk(): |
|
404 | 383 | for oid in files: |
|
405 | 384 | ui.write(_('copying lfs blob %s\n') % oid) |
|
406 | 385 | lfutil.link(srclfsvfs.join(oid), dstlfsvfs.join(oid)) |
|
407 | 386 | |
|
408 | 387 | def upgraderequirements(orig, repo): |
|
409 | 388 | reqs = orig(repo) |
|
410 | 389 | if 'lfs' in repo.requirements: |
|
411 | 390 | reqs.add('lfs') |
|
412 | 391 | return reqs |
@@ -1,2072 +1,2084
|
1 | 1 | # merge.py - directory-level update/merge handling for Mercurial |
|
2 | 2 | # |
|
3 | 3 | # Copyright 2006, 2007 Matt Mackall <mpm@selenic.com> |
|
4 | 4 | # |
|
5 | 5 | # This software may be used and distributed according to the terms of the |
|
6 | 6 | # GNU General Public License version 2 or any later version. |
|
7 | 7 | |
|
8 | 8 | from __future__ import absolute_import |
|
9 | 9 | |
|
10 | 10 | import errno |
|
11 | 11 | import hashlib |
|
12 | 12 | import shutil |
|
13 | 13 | import struct |
|
14 | 14 | |
|
15 | 15 | from .i18n import _ |
|
16 | 16 | from .node import ( |
|
17 | 17 | addednodeid, |
|
18 | 18 | bin, |
|
19 | 19 | hex, |
|
20 | 20 | modifiednodeid, |
|
21 | 21 | nullhex, |
|
22 | 22 | nullid, |
|
23 | 23 | nullrev, |
|
24 | 24 | ) |
|
25 | 25 | from . import ( |
|
26 | 26 | copies, |
|
27 | 27 | error, |
|
28 | 28 | filemerge, |
|
29 | 29 | match as matchmod, |
|
30 | 30 | obsutil, |
|
31 | 31 | pycompat, |
|
32 | 32 | scmutil, |
|
33 | 33 | subrepoutil, |
|
34 | 34 | util, |
|
35 | 35 | worker, |
|
36 | 36 | ) |
|
37 | 37 | |
|
38 | 38 | _pack = struct.pack |
|
39 | 39 | _unpack = struct.unpack |
|
40 | 40 | |
|
41 | 41 | def _droponode(data): |
|
42 | 42 | # used for compatibility for v1 |
|
43 | 43 | bits = data.split('\0') |
|
44 | 44 | bits = bits[:-2] + bits[-1:] |
|
45 | 45 | return '\0'.join(bits) |
|
46 | 46 | |
|
47 | 47 | class mergestate(object): |
|
48 | 48 | '''track 3-way merge state of individual files |
|
49 | 49 | |
|
50 | 50 | The merge state is stored on disk when needed. Two files are used: one with |
|
51 | 51 | an old format (version 1), and one with a new format (version 2). Version 2 |
|
52 | 52 | stores a superset of the data in version 1, including new kinds of records |
|
53 | 53 | in the future. For more about the new format, see the documentation for |
|
54 | 54 | `_readrecordsv2`. |
|
55 | 55 | |
|
56 | 56 | Each record can contain arbitrary content, and has an associated type. This |
|
57 | 57 | `type` should be a letter. If `type` is uppercase, the record is mandatory: |
|
58 | 58 | versions of Mercurial that don't support it should abort. If `type` is |
|
59 | 59 | lowercase, the record can be safely ignored. |
|
60 | 60 | |
|
61 | 61 | Currently known records: |
|
62 | 62 | |
|
63 | 63 | L: the node of the "local" part of the merge (hexified version) |
|
64 | 64 | O: the node of the "other" part of the merge (hexified version) |
|
65 | 65 | F: a file to be merged entry |
|
66 | 66 | C: a change/delete or delete/change conflict |
|
67 | 67 | D: a file that the external merge driver will merge internally |
|
68 | 68 | (experimental) |
|
69 | 69 | P: a path conflict (file vs directory) |
|
70 | 70 | m: the external merge driver defined for this merge plus its run state |
|
71 | 71 | (experimental) |
|
72 | 72 | f: a (filename, dictionary) tuple of optional values for a given file |
|
73 | 73 | X: unsupported mandatory record type (used in tests) |
|
74 | 74 | x: unsupported advisory record type (used in tests) |
|
75 | 75 | l: the labels for the parts of the merge. |
|
76 | 76 | |
|
77 | 77 | Merge driver run states (experimental): |
|
78 | 78 | u: driver-resolved files unmarked -- needs to be run next time we're about |
|
79 | 79 | to resolve or commit |
|
80 | 80 | m: driver-resolved files marked -- only needs to be run before commit |
|
81 | 81 | s: success/skipped -- does not need to be run any more |
|
82 | 82 | |
|
83 | 83 | Merge record states (stored in self._state, indexed by filename): |
|
84 | 84 | u: unresolved conflict |
|
85 | 85 | r: resolved conflict |
|
86 | 86 | pu: unresolved path conflict (file conflicts with directory) |
|
87 | 87 | pr: resolved path conflict |
|
88 | 88 | d: driver-resolved conflict |
|
89 | 89 | |
|
90 | 90 | The resolve command transitions between 'u' and 'r' for conflicts and |
|
91 | 91 | 'pu' and 'pr' for path conflicts. |
|
92 | 92 | ''' |
|
93 | 93 | statepathv1 = 'merge/state' |
|
94 | 94 | statepathv2 = 'merge/state2' |
|
95 | 95 | |
|
96 | 96 | @staticmethod |
|
97 | 97 | def clean(repo, node=None, other=None, labels=None): |
|
98 | 98 | """Initialize a brand new merge state, removing any existing state on |
|
99 | 99 | disk.""" |
|
100 | 100 | ms = mergestate(repo) |
|
101 | 101 | ms.reset(node, other, labels) |
|
102 | 102 | return ms |
|
103 | 103 | |
|
104 | 104 | @staticmethod |
|
105 | 105 | def read(repo): |
|
106 | 106 | """Initialize the merge state, reading it from disk.""" |
|
107 | 107 | ms = mergestate(repo) |
|
108 | 108 | ms._read() |
|
109 | 109 | return ms |
|
110 | 110 | |
|
111 | 111 | def __init__(self, repo): |
|
112 | 112 | """Initialize the merge state. |
|
113 | 113 | |
|
114 | 114 | Do not use this directly! Instead call read() or clean().""" |
|
115 | 115 | self._repo = repo |
|
116 | 116 | self._dirty = False |
|
117 | 117 | self._labels = None |
|
118 | 118 | |
|
119 | 119 | def reset(self, node=None, other=None, labels=None): |
|
120 | 120 | self._state = {} |
|
121 | 121 | self._stateextras = {} |
|
122 | 122 | self._local = None |
|
123 | 123 | self._other = None |
|
124 | 124 | self._labels = labels |
|
125 | 125 | for var in ('localctx', 'otherctx'): |
|
126 | 126 | if var in vars(self): |
|
127 | 127 | delattr(self, var) |
|
128 | 128 | if node: |
|
129 | 129 | self._local = node |
|
130 | 130 | self._other = other |
|
131 | 131 | self._readmergedriver = None |
|
132 | 132 | if self.mergedriver: |
|
133 | 133 | self._mdstate = 's' |
|
134 | 134 | else: |
|
135 | 135 | self._mdstate = 'u' |
|
136 | 136 | shutil.rmtree(self._repo.vfs.join('merge'), True) |
|
137 | 137 | self._results = {} |
|
138 | 138 | self._dirty = False |
|
139 | 139 | |
|
140 | 140 | def _read(self): |
|
141 | 141 | """Analyse each record's content to restore a serialized state from disk |
|
142 | 142 | |
|
143 | 143 | This function processes "record" entries produced by the de-serialization |
|
144 | 144 | of the on-disk file. |
|
145 | 145 | """ |
|
146 | 146 | self._state = {} |
|
147 | 147 | self._stateextras = {} |
|
148 | 148 | self._local = None |
|
149 | 149 | self._other = None |
|
150 | 150 | for var in ('localctx', 'otherctx'): |
|
151 | 151 | if var in vars(self): |
|
152 | 152 | delattr(self, var) |
|
153 | 153 | self._readmergedriver = None |
|
154 | 154 | self._mdstate = 's' |
|
155 | 155 | unsupported = set() |
|
156 | 156 | records = self._readrecords() |
|
157 | 157 | for rtype, record in records: |
|
158 | 158 | if rtype == 'L': |
|
159 | 159 | self._local = bin(record) |
|
160 | 160 | elif rtype == 'O': |
|
161 | 161 | self._other = bin(record) |
|
162 | 162 | elif rtype == 'm': |
|
163 | 163 | bits = record.split('\0', 1) |
|
164 | 164 | mdstate = bits[1] |
|
165 | 165 | if len(mdstate) != 1 or mdstate not in 'ums': |
|
166 | 166 | # the merge driver should be idempotent, so just rerun it |
|
167 | 167 | mdstate = 'u' |
|
168 | 168 | |
|
169 | 169 | self._readmergedriver = bits[0] |
|
170 | 170 | self._mdstate = mdstate |
|
171 | 171 | elif rtype in 'FDCP': |
|
172 | 172 | bits = record.split('\0') |
|
173 | 173 | self._state[bits[0]] = bits[1:] |
|
174 | 174 | elif rtype == 'f': |
|
175 | 175 | filename, rawextras = record.split('\0', 1) |
|
176 | 176 | extraparts = rawextras.split('\0') |
|
177 | 177 | extras = {} |
|
178 | 178 | i = 0 |
|
179 | 179 | while i < len(extraparts): |
|
180 | 180 | extras[extraparts[i]] = extraparts[i + 1] |
|
181 | 181 | i += 2 |
|
182 | 182 | |
|
183 | 183 | self._stateextras[filename] = extras |
|
184 | 184 | elif rtype == 'l': |
|
185 | 185 | labels = record.split('\0', 2) |
|
186 | 186 | self._labels = [l for l in labels if len(l) > 0] |
|
187 | 187 | elif not rtype.islower(): |
|
188 | 188 | unsupported.add(rtype) |
|
189 | 189 | self._results = {} |
|
190 | 190 | self._dirty = False |
|
191 | 191 | |
|
192 | 192 | if unsupported: |
|
193 | 193 | raise error.UnsupportedMergeRecords(unsupported) |
|
194 | 194 | |
|
195 | 195 | def _readrecords(self): |
|
196 | 196 | """Read merge state from disk and return a list of record (TYPE, data) |
|
197 | 197 | |
|
198 | 198 | We read data from both v1 and v2 files and decide which one to use. |
|
199 | 199 | |
|
200 | 200 | V1 has been used by versions prior to 2.9.1 and contains less data than |
|
201 | 201 | v2. We read both versions and check if no data in v2 contradicts |
|
202 | 202 | v1. If there is no contradiction we can safely assume that both v1 |
|
203 | 203 | and v2 were written at the same time and use the extra data in v2. If |
|
204 | 204 | there is a contradiction we ignore the v2 content, as we assume an old version |
|
205 | 205 | of Mercurial has overwritten the mergestate file and left an old v2 |
|
206 | 206 | file around. |
|
207 | 207 | |
|
208 | 208 | returns list of record [(TYPE, data), ...]""" |
|
209 | 209 | v1records = self._readrecordsv1() |
|
210 | 210 | v2records = self._readrecordsv2() |
|
211 | 211 | if self._v1v2match(v1records, v2records): |
|
212 | 212 | return v2records |
|
213 | 213 | else: |
|
214 | 214 | # v1 file is newer than v2 file, use it |
|
215 | 215 | # we have to infer the "other" changeset of the merge |
|
216 | 216 | # we cannot do better than that with v1 of the format |
|
217 | 217 | mctx = self._repo[None].parents()[-1] |
|
218 | 218 | v1records.append(('O', mctx.hex())) |
|
219 | 219 | # add place holder "other" file node information |
|
220 | 220 | # nobody is using it yet so we do not need to fetch the data |
|
221 | 221 | # if mctx was wrong, `mctx[bits[-2]]` may fail. |
|
222 | 222 | for idx, r in enumerate(v1records): |
|
223 | 223 | if r[0] == 'F': |
|
224 | 224 | bits = r[1].split('\0') |
|
225 | 225 | bits.insert(-2, '') |
|
226 | 226 | v1records[idx] = (r[0], '\0'.join(bits)) |
|
227 | 227 | return v1records |
|
228 | 228 | |
|
229 | 229 | def _v1v2match(self, v1records, v2records): |
|
230 | 230 | oldv2 = set() # old format version of v2 record |
|
231 | 231 | for rec in v2records: |
|
232 | 232 | if rec[0] == 'L': |
|
233 | 233 | oldv2.add(rec) |
|
234 | 234 | elif rec[0] == 'F': |
|
235 | 235 | # drop the onode data (not contained in v1) |
|
236 | 236 | oldv2.add(('F', _droponode(rec[1]))) |
|
237 | 237 | for rec in v1records: |
|
238 | 238 | if rec not in oldv2: |
|
239 | 239 | return False |
|
240 | 240 | else: |
|
241 | 241 | return True |
|
242 | 242 | |
|
243 | 243 | def _readrecordsv1(self): |
|
244 | 244 | """read on disk merge state for version 1 file |
|
245 | 245 | |
|
246 | 246 | returns list of record [(TYPE, data), ...] |
|
247 | 247 | |
|
248 | 248 | Note: the "F" data from this file are one entry short |
|
249 | 249 | (no "other file node" entry) |
|
250 | 250 | """ |
|
251 | 251 | records = [] |
|
252 | 252 | try: |
|
253 | 253 | f = self._repo.vfs(self.statepathv1) |
|
254 | 254 | for i, l in enumerate(f): |
|
255 | 255 | if i == 0: |
|
256 | 256 | records.append(('L', l[:-1])) |
|
257 | 257 | else: |
|
258 | 258 | records.append(('F', l[:-1])) |
|
259 | 259 | f.close() |
|
260 | 260 | except IOError as err: |
|
261 | 261 | if err.errno != errno.ENOENT: |
|
262 | 262 | raise |
|
263 | 263 | return records |
|
264 | 264 | |
|
265 | 265 | def _readrecordsv2(self): |
|
266 | 266 | """read on disk merge state for version 2 file |
|
267 | 267 | |
|
268 | 268 | This format is a list of arbitrary records of the form: |
|
269 | 269 | |
|
270 | 270 | [type][length][content] |
|
271 | 271 | |
|
272 | 272 | `type` is a single character, `length` is a 4 byte integer, and |
|
273 | 273 | `content` is an arbitrary byte sequence of length `length`. |
|
274 | 274 | |
|
275 | 275 | Mercurial versions prior to 3.7 have a bug where if there are |
|
276 | 276 | unsupported mandatory merge records, attempting to clear out the merge |
|
277 | 277 | state with hg update --clean or similar aborts. The 't' record type |
|
278 | 278 | works around that by writing out what those versions treat as an |
|
279 | 279 | advisory record, but later versions interpret as special: the first |
|
280 | 280 | character is the 'real' record type and everything onwards is the data. |
|
281 | 281 | |
|
282 | 282 | Returns list of records [(TYPE, data), ...].""" |
|
283 | 283 | records = [] |
|
284 | 284 | try: |
|
285 | 285 | f = self._repo.vfs(self.statepathv2) |
|
286 | 286 | data = f.read() |
|
287 | 287 | off = 0 |
|
288 | 288 | end = len(data) |
|
289 | 289 | while off < end: |
|
290 | 290 | rtype = data[off] |
|
291 | 291 | off += 1 |
|
292 | 292 | length = _unpack('>I', data[off:(off + 4)])[0] |
|
293 | 293 | off += 4 |
|
294 | 294 | record = data[off:(off + length)] |
|
295 | 295 | off += length |
|
296 | 296 | if rtype == 't': |
|
297 | 297 | rtype, record = record[0], record[1:] |
|
298 | 298 | records.append((rtype, record)) |
|
299 | 299 | f.close() |
|
300 | 300 | except IOError as err: |
|
301 | 301 | if err.errno != errno.ENOENT: |
|
302 | 302 | raise |
|
303 | 303 | return records |
|
304 | 304 | |
|
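As the docstring above describes, the v2 merge-state file is a flat sequence of ``[type][length][content]`` frames, and record types outside the v2 whitelist are smuggled past pre-3.7 clients inside advisory ``'t'`` records. A standalone round-trip sketch of that framing, using only ``struct`` (the sample records are made up)::

    import struct

    def packrecords(records, whitelist=b'LOF'):
        out = []
        for rtype, data in records:
            if rtype not in whitelist:
                # Wrap unknown types so old clients see an ignorable 't' record.
                rtype, data = b't', rtype + data
            out.append(struct.pack('>sI', rtype, len(data)) + data)
        return b''.join(out)

    def unpackrecords(data):
        records, off = [], 0
        while off < len(data):
            rtype = data[off:off + 1]
            length = struct.unpack('>I', data[off + 1:off + 5])[0]
            record = data[off + 5:off + 5 + length]
            off += 5 + length
            if rtype == b't':
                rtype, record = record[:1], record[1:]
            records.append((rtype, record))
        return records

    recs = [(b'L', b'0' * 40), (b'O', b'f' * 40), (b'P', b'path\0pu\0l')]
    assert unpackrecords(packrecords(recs)) == recs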
305 | 305 | @util.propertycache |
|
306 | 306 | def mergedriver(self): |
|
307 | 307 | # protect against the following: |
|
308 | 308 | # - A configures a malicious merge driver in their hgrc, then |
|
309 | 309 | # pauses the merge |
|
310 | 310 | # - A edits their hgrc to remove references to the merge driver |
|
311 | 311 | # - A gives a copy of their entire repo, including .hg, to B |
|
312 | 312 | # - B inspects .hgrc and finds it to be clean |
|
313 | 313 | # - B then continues the merge and the malicious merge driver |
|
314 | 314 | # gets invoked |
|
315 | 315 | configmergedriver = self._repo.ui.config('experimental', 'mergedriver') |
|
316 | 316 | if (self._readmergedriver is not None |
|
317 | 317 | and self._readmergedriver != configmergedriver): |
|
318 | 318 | raise error.ConfigError( |
|
319 | 319 | _("merge driver changed since merge started"), |
|
320 | 320 | hint=_("revert merge driver change or abort merge")) |
|
321 | 321 | |
|
322 | 322 | return configmergedriver |
|
323 | 323 | |
|
324 | 324 | @util.propertycache |
|
325 | 325 | def localctx(self): |
|
326 | 326 | if self._local is None: |
|
327 | 327 | msg = "localctx accessed but self._local isn't set" |
|
328 | 328 | raise error.ProgrammingError(msg) |
|
329 | 329 | return self._repo[self._local] |
|
330 | 330 | |
|
331 | 331 | @util.propertycache |
|
332 | 332 | def otherctx(self): |
|
333 | 333 | if self._other is None: |
|
334 | 334 | msg = "otherctx accessed but self._other isn't set" |
|
335 | 335 | raise error.ProgrammingError(msg) |
|
336 | 336 | return self._repo[self._other] |
|
337 | 337 | |
|
338 | 338 | def active(self): |
|
339 | 339 | """Whether mergestate is active. |
|
340 | 340 | |
|
341 | 341 | Returns True if there appears to be mergestate. This is a rough proxy |
|
342 | 342 | for "is a merge in progress." |
|
343 | 343 | """ |
|
344 | 344 | # Check local variables before looking at filesystem for performance |
|
345 | 345 | # reasons. |
|
346 | 346 | return bool(self._local) or bool(self._state) or \ |
|
347 | 347 | self._repo.vfs.exists(self.statepathv1) or \ |
|
348 | 348 | self._repo.vfs.exists(self.statepathv2) |
|
349 | 349 | |
|
350 | 350 | def commit(self): |
|
351 | 351 | """Write current state on disk (if necessary)""" |
|
352 | 352 | if self._dirty: |
|
353 | 353 | records = self._makerecords() |
|
354 | 354 | self._writerecords(records) |
|
355 | 355 | self._dirty = False |
|
356 | 356 | |
|
357 | 357 | def _makerecords(self): |
|
358 | 358 | records = [] |
|
359 | 359 | records.append(('L', hex(self._local))) |
|
360 | 360 | records.append(('O', hex(self._other))) |
|
361 | 361 | if self.mergedriver: |
|
362 | 362 | records.append(('m', '\0'.join([ |
|
363 | 363 | self.mergedriver, self._mdstate]))) |
|
364 | 364 | # Write out state items. In all cases, the value of the state map entry |
|
365 | 365 | # is written as the contents of the record. The record type depends on |
|
366 | 366 | # the type of state that is stored, and capital-letter records are used |
|
367 | 367 | # to prevent older versions of Mercurial that do not support the feature |
|
368 | 368 | # from loading them. |
|
369 | 369 | for filename, v in self._state.iteritems(): |
|
370 | 370 | if v[0] == 'd': |
|
371 | 371 | # Driver-resolved merge. These are stored in 'D' records. |
|
372 | 372 | records.append(('D', '\0'.join([filename] + v))) |
|
373 | 373 | elif v[0] in ('pu', 'pr'): |
|
374 | 374 | # Path conflicts. These are stored in 'P' records. The current |
|
375 | 375 | # resolution state ('pu' or 'pr') is stored within the record. |
|
376 | 376 | records.append(('P', '\0'.join([filename] + v))) |
|
377 | 377 | elif v[1] == nullhex or v[6] == nullhex: |
|
378 | 378 | # Change/Delete or Delete/Change conflicts. These are stored in |
|
379 | 379 | # 'C' records. v[1] is the local file, and is nullhex when the |
|
380 | 380 | # file is deleted locally ('dc'). v[6] is the remote file, and |
|
381 | 381 | # is nullhex when the file is deleted remotely ('cd'). |
|
382 | 382 | records.append(('C', '\0'.join([filename] + v))) |
|
383 | 383 | else: |
|
384 | 384 | # Normal files. These are stored in 'F' records. |
|
385 | 385 | records.append(('F', '\0'.join([filename] + v))) |
|
386 | 386 | for filename, extras in sorted(self._stateextras.iteritems()): |
|
387 | 387 | rawextras = '\0'.join('%s\0%s' % (k, v) for k, v in |
|
388 | 388 | extras.iteritems()) |
|
389 | 389 | records.append(('f', '%s\0%s' % (filename, rawextras))) |
|
390 | 390 | if self._labels is not None: |
|
391 | 391 | labels = '\0'.join(self._labels) |
|
392 | 392 | records.append(('l', labels)) |
|
393 | 393 | return records |
|
394 | 394 | |
|
395 | 395 | def _writerecords(self, records): |
|
396 | 396 | """Write current state on disk (both v1 and v2)""" |
|
397 | 397 | self._writerecordsv1(records) |
|
398 | 398 | self._writerecordsv2(records) |
|
399 | 399 | |
|
400 | 400 | def _writerecordsv1(self, records): |
|
401 | 401 | """Write current state on disk in a version 1 file""" |
|
402 | 402 | f = self._repo.vfs(self.statepathv1, 'w') |
|
403 | 403 | irecords = iter(records) |
|
404 | 404 | lrecords = next(irecords) |
|
405 | 405 | assert lrecords[0] == 'L' |
|
406 | 406 | f.write(hex(self._local) + '\n') |
|
407 | 407 | for rtype, data in irecords: |
|
408 | 408 | if rtype == 'F': |
|
409 | 409 | f.write('%s\n' % _droponode(data)) |
|
410 | 410 | f.close() |
|
411 | 411 | |
|
412 | 412 | def _writerecordsv2(self, records): |
|
413 | 413 | """Write current state on disk in a version 2 file |
|
414 | 414 | |
|
415 | 415 | See the docstring for _readrecordsv2 for why we use 't'.""" |
|
416 | 416 | # these are the records that all version 2 clients can read |
|
417 | 417 | whitelist = 'LOF' |
|
418 | 418 | f = self._repo.vfs(self.statepathv2, 'w') |
|
419 | 419 | for key, data in records: |
|
420 | 420 | assert len(key) == 1 |
|
421 | 421 | if key not in whitelist: |
|
422 | 422 | key, data = 't', '%s%s' % (key, data) |
|
423 | 423 | format = '>sI%is' % len(data) |
|
424 | 424 | f.write(_pack(format, key, len(data), data)) |
|
425 | 425 | f.close() |
|
426 | 426 | |
|
427 | 427 | def add(self, fcl, fco, fca, fd): |
|
428 | 428 | """add a new (potentially?) conflicting file to the merge state |
|
429 | 429 | fcl: file context for local, |
|
430 | 430 | fco: file context for remote, |
|
431 | 431 | fca: file context for ancestors, |
|
432 | 432 | fd: file path of the resulting merge. |
|
433 | 433 | |
|
434 | 434 | note: also write the local version to the `.hg/merge` directory. |
|
435 | 435 | """ |
|
436 | 436 | if fcl.isabsent(): |
|
437 | 437 | hash = nullhex |
|
438 | 438 | else: |
|
439 | 439 | hash = hex(hashlib.sha1(fcl.path()).digest()) |
|
440 | 440 | self._repo.vfs.write('merge/' + hash, fcl.data()) |
|
441 | 441 | self._state[fd] = ['u', hash, fcl.path(), |
|
442 | 442 | fca.path(), hex(fca.filenode()), |
|
443 | 443 | fco.path(), hex(fco.filenode()), |
|
444 | 444 | fcl.flags()] |
|
445 | 445 | self._stateextras[fd] = {'ancestorlinknode': hex(fca.node())} |
|
446 | 446 | self._dirty = True |
|
447 | 447 | |
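# Sketch of the 8-element list that add() records per conflicting file, in the
# order it is unpacked later in _resolve(); every value below is hypothetical.
entry = [
    'u',            # resolution state: 'u'nresolved here, flipped to 'r' by mark()
    '0f343b...',    # sha1 of the local file's path, keying .hg/merge/<hash>
    'src/app.py',   # lfile: local path
    'src/app.py',   # afile: ancestor path
    'a1b2c3...',    # anode: hex of the ancestor filenode
    'src/app.py',   # ofile: other (remote) path
    'd4e5f6...',    # onode: hex of the other filenode
    'x',            # flags of the local file
]
state, hash_, lfile, afile, anode, ofile, onode, flags = entry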
|
448 | 448 | def addpath(self, path, frename, forigin): |
|
449 | 449 | """add a new conflicting path to the merge state |
|
450 | 450 | path: the path that conflicts |
|
451 | 451 | frename: the filename the conflicting file was renamed to |
|
452 | 452 | forigin: origin of the file ('l' or 'r' for local/remote) |
|
453 | 453 | """ |
|
454 | 454 | self._state[path] = ['pu', frename, forigin] |
|
455 | 455 | self._dirty = True |
|
456 | 456 | |
|
457 | 457 | def __contains__(self, dfile): |
|
458 | 458 | return dfile in self._state |
|
459 | 459 | |
|
460 | 460 | def __getitem__(self, dfile): |
|
461 | 461 | return self._state[dfile][0] |
|
462 | 462 | |
|
463 | 463 | def __iter__(self): |
|
464 | 464 | return iter(sorted(self._state)) |
|
465 | 465 | |
|
466 | 466 | def files(self): |
|
467 | 467 | return self._state.keys() |
|
468 | 468 | |
|
469 | 469 | def mark(self, dfile, state): |
|
470 | 470 | self._state[dfile][0] = state |
|
471 | 471 | self._dirty = True |
|
472 | 472 | |
|
473 | 473 | def mdstate(self): |
|
474 | 474 | return self._mdstate |
|
475 | 475 | |
|
476 | 476 | def unresolved(self): |
|
477 | 477 | """Obtain the paths of unresolved files.""" |
|
478 | 478 | |
|
479 | 479 | for f, entry in self._state.iteritems(): |
|
480 | 480 | if entry[0] in ('u', 'pu'): |
|
481 | 481 | yield f |
|
482 | 482 | |
|
483 | 483 | def driverresolved(self): |
|
484 | 484 | """Obtain the paths of driver-resolved files.""" |
|
485 | 485 | |
|
486 | 486 | for f, entry in self._state.items(): |
|
487 | 487 | if entry[0] == 'd': |
|
488 | 488 | yield f |
|
489 | 489 | |
|
490 | 490 | def extras(self, filename): |
|
491 | 491 | return self._stateextras.setdefault(filename, {}) |
|
492 | 492 | |
|
493 | 493 | def _resolve(self, preresolve, dfile, wctx): |
|
494 | 494 | """rerun merge process for file path `dfile`""" |
|
495 | 495 | if self[dfile] in 'rd': |
|
496 | 496 | return True, 0 |
|
497 | 497 | stateentry = self._state[dfile] |
|
498 | 498 | state, hash, lfile, afile, anode, ofile, onode, flags = stateentry |
|
499 | 499 | octx = self._repo[self._other] |
|
500 | 500 | extras = self.extras(dfile) |
|
501 | 501 | anccommitnode = extras.get('ancestorlinknode') |
|
502 | 502 | if anccommitnode: |
|
503 | 503 | actx = self._repo[anccommitnode] |
|
504 | 504 | else: |
|
505 | 505 | actx = None |
|
506 | 506 | fcd = self._filectxorabsent(hash, wctx, dfile) |
|
507 | 507 | fco = self._filectxorabsent(onode, octx, ofile) |
|
508 | 508 | # TODO: move this to filectxorabsent |
|
509 | 509 | fca = self._repo.filectx(afile, fileid=anode, changeid=actx) |
|
510 | 510 | # "premerge" x flags |
|
511 | 511 | flo = fco.flags() |
|
512 | 512 | fla = fca.flags() |
|
513 | 513 | if 'x' in flags + flo + fla and 'l' not in flags + flo + fla: |
|
514 | 514 | if fca.node() == nullid and flags != flo: |
|
515 | 515 | if preresolve: |
|
516 | 516 | self._repo.ui.warn( |
|
517 | 517 | _('warning: cannot merge flags for %s ' |
|
518 | 518 | 'without common ancestor - keeping local flags\n') |
|
519 | 519 | % afile) |
|
520 | 520 | elif flags == fla: |
|
521 | 521 | flags = flo |
|
522 | 522 | if preresolve: |
|
523 | 523 | # restore local |
|
524 | 524 | if hash != nullhex: |
|
525 | 525 | f = self._repo.vfs('merge/' + hash) |
|
526 | 526 | wctx[dfile].write(f.read(), flags) |
|
527 | 527 | f.close() |
|
528 | 528 | else: |
|
529 | 529 | wctx[dfile].remove(ignoremissing=True) |
|
530 | 530 | complete, r, deleted = filemerge.premerge(self._repo, wctx, |
|
531 | 531 | self._local, lfile, fcd, |
|
532 | 532 | fco, fca, |
|
533 | 533 | labels=self._labels) |
|
534 | 534 | else: |
|
535 | 535 | complete, r, deleted = filemerge.filemerge(self._repo, wctx, |
|
536 | 536 | self._local, lfile, fcd, |
|
537 | 537 | fco, fca, |
|
538 | 538 | labels=self._labels) |
|
539 | 539 | if r is None: |
|
540 | 540 | # no real conflict |
|
541 | 541 | del self._state[dfile] |
|
542 | 542 | self._stateextras.pop(dfile, None) |
|
543 | 543 | self._dirty = True |
|
544 | 544 | elif not r: |
|
545 | 545 | self.mark(dfile, 'r') |
|
546 | 546 | |
|
547 | 547 | if complete: |
|
548 | 548 | action = None |
|
549 | 549 | if deleted: |
|
550 | 550 | if fcd.isabsent(): |
|
551 | 551 | # dc: local picked. Need to drop if present, which may |
|
552 | 552 | # happen on re-resolves. |
|
553 | 553 | action = 'f' |
|
554 | 554 | else: |
|
555 | 555 | # cd: remote picked (or otherwise deleted) |
|
556 | 556 | action = 'r' |
|
557 | 557 | else: |
|
558 | 558 | if fcd.isabsent(): # dc: remote picked |
|
559 | 559 | action = 'g' |
|
560 | 560 | elif fco.isabsent(): # cd: local picked |
|
561 | 561 | if dfile in self.localctx: |
|
562 | 562 | action = 'am' |
|
563 | 563 | else: |
|
564 | 564 | action = 'a' |
|
565 | 565 | # else: regular merges (no action necessary) |
|
566 | 566 | self._results[dfile] = r, action |
|
567 | 567 | |
|
568 | 568 | return complete, r |
|
569 | 569 | |
|
570 | 570 | def _filectxorabsent(self, hexnode, ctx, f): |
|
571 | 571 | if hexnode == nullhex: |
|
572 | 572 | return filemerge.absentfilectx(ctx, f) |
|
573 | 573 | else: |
|
574 | 574 | return ctx[f] |
|
575 | 575 | |
|
576 | 576 | def preresolve(self, dfile, wctx): |
|
577 | 577 | """run premerge process for dfile |
|
578 | 578 | |
|
579 | 579 | Returns whether the merge is complete, and the exit code.""" |
|
580 | 580 | return self._resolve(True, dfile, wctx) |
|
581 | 581 | |
|
582 | 582 | def resolve(self, dfile, wctx): |
|
583 | 583 | """run merge process (assuming premerge was run) for dfile |
|
584 | 584 | |
|
585 | 585 | Returns the exit code of the merge.""" |
|
586 | 586 | return self._resolve(False, dfile, wctx)[1] |
|
587 | 587 | |
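# The two-phase calling pattern that the merge loop in applyupdates() uses for
# each conflicting file: preresolve() may already settle the file during
# premerge, and only files it leaves incomplete go through resolve(). This is
# a sketch only; ms, files and wctx are placeholders for a real mergestate,
# file list and working context.
def resolve_all(ms, files, wctx):
    tocomplete = []
    for f in files:
        complete, r = ms.preresolve(f, wctx)
        if not complete:
            tocomplete.append(f)
    for f in tocomplete:
        ms.resolve(f, wctx)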
|
588 | 588 | def counts(self): |
|
589 | 589 | """return counts for updated, merged and removed files in this |
|
590 | 590 | session""" |
|
591 | 591 | updated, merged, removed = 0, 0, 0 |
|
592 | 592 | for r, action in self._results.itervalues(): |
|
593 | 593 | if r is None: |
|
594 | 594 | updated += 1 |
|
595 | 595 | elif r == 0: |
|
596 | 596 | if action == 'r': |
|
597 | 597 | removed += 1 |
|
598 | 598 | else: |
|
599 | 599 | merged += 1 |
|
600 | 600 | return updated, merged, removed |
|
601 | 601 | |
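# Sketch of how counts() classifies the per-file results recorded in
# self._results: r is None means no real conflict remained (updated), r == 0
# with a remove action counts as removed, and any other clean result as merged.
# The _results contents below are hypothetical.
results = {
    'clean.txt':   (None, 'g'),   # no real conflict -> updated
    'merged.txt':  (0, None),     # clean merge -> merged
    'removed.txt': (0, 'r'),      # clean merge resolving to a removal
}
updated = sum(1 for r, a in results.values() if r is None)
removed = sum(1 for r, a in results.values() if r == 0 and a == 'r')
merged = sum(1 for r, a in results.values() if r == 0 and a != 'r')
assert (updated, merged, removed) == (1, 1, 1)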
|
602 | 602 | def unresolvedcount(self): |
|
603 | 603 | """get unresolved count for this merge (persistent)""" |
|
604 | 604 | return len(list(self.unresolved())) |
|
605 | 605 | |
|
606 | 606 | def actions(self): |
|
607 | 607 | """return lists of actions to perform on the dirstate""" |
|
608 | 608 | actions = {'r': [], 'f': [], 'a': [], 'am': [], 'g': []} |
|
609 | 609 | for f, (r, action) in self._results.iteritems(): |
|
610 | 610 | if action is not None: |
|
611 | 611 | actions[action].append((f, None, "merge result")) |
|
612 | 612 | return actions |
|
613 | 613 | |
|
614 | 614 | def recordactions(self): |
|
615 | 615 | """record remove/add/get actions in the dirstate""" |
|
616 | 616 | branchmerge = self._repo.dirstate.p2() != nullid |
|
617 | 617 | recordupdates(self._repo, self.actions(), branchmerge) |
|
618 | 618 | |
|
619 | 619 | def queueremove(self, f): |
|
620 | 620 | """queues a file to be removed from the dirstate |
|
621 | 621 | |
|
622 | 622 | Meant for use by custom merge drivers.""" |
|
623 | 623 | self._results[f] = 0, 'r' |
|
624 | 624 | |
|
625 | 625 | def queueadd(self, f): |
|
626 | 626 | """queues a file to be added to the dirstate |
|
627 | 627 | |
|
628 | 628 | Meant for use by custom merge drivers.""" |
|
629 | 629 | self._results[f] = 0, 'a' |
|
630 | 630 | |
|
631 | 631 | def queueget(self, f): |
|
632 | 632 | """queues a file to be marked modified in the dirstate |
|
633 | 633 | |
|
634 | 634 | Meant for use by custom merge drivers.""" |
|
635 | 635 | self._results[f] = 0, 'g' |
|
636 | 636 | |
|
637 | 637 | def _getcheckunknownconfig(repo, section, name): |
|
638 | 638 | config = repo.ui.config(section, name) |
|
639 | 639 | valid = ['abort', 'ignore', 'warn'] |
|
640 | 640 | if config not in valid: |
|
641 | 641 | validstr = ', '.join(["'" + v + "'" for v in valid]) |
|
642 | 642 | raise error.ConfigError(_("%s.%s not valid " |
|
643 | 643 | "('%s' is none of %s)") |
|
644 | 644 | % (section, name, config, validstr)) |
|
645 | 645 | return config |
|
646 | 646 | |
|
647 | 647 | def _checkunknownfile(repo, wctx, mctx, f, f2=None): |
|
648 | 648 | if wctx.isinmemory(): |
|
649 | 649 | # Nothing to do in IMM because nothing in the "working copy" can be an |
|
650 | 650 | # unknown file. |
|
651 | 651 | # |
|
652 | 652 | # Note that we should bail out here, not in ``_checkunknownfiles()``, |
|
653 | 653 | # because that function does other useful work. |
|
654 | 654 | return False |
|
655 | 655 | |
|
656 | 656 | if f2 is None: |
|
657 | 657 | f2 = f |
|
658 | 658 | return (repo.wvfs.audit.check(f) |
|
659 | 659 | and repo.wvfs.isfileorlink(f) |
|
660 | 660 | and repo.dirstate.normalize(f) not in repo.dirstate |
|
661 | 661 | and mctx[f2].cmp(wctx[f])) |
|
662 | 662 | |
|
663 | 663 | class _unknowndirschecker(object): |
|
664 | 664 | """ |
|
665 | 665 | Look for any unknown files or directories that may have a path conflict |
|
666 | 666 | with a file. If any path prefix of the file exists as a file or link, |
|
667 | 667 | then it conflicts. If the file itself is a directory that contains any |
|
668 | 668 | file that is not tracked, then it conflicts. |
|
669 | 669 | |
|
670 | 670 | Returns the shortest path at which a conflict occurs, or None if there is |
|
671 | 671 | no conflict. |
|
672 | 672 | """ |
|
673 | 673 | def __init__(self): |
|
674 | 674 | # A set of paths known to be good. This prevents repeated checking of |
|
675 | 675 | # dirs. It will be updated with any new dirs that are checked and found |
|
676 | 676 | # to be safe. |
|
677 | 677 | self._unknowndircache = set() |
|
678 | 678 | |
|
679 | 679 | # A set of paths that are known to be absent. This prevents repeated |
|
680 | 680 | # checking of subdirectories that are known not to exist. It will be |
|
681 | 681 | # updated with any new dirs that are checked and found to be absent. |
|
682 | 682 | self._missingdircache = set() |
|
683 | 683 | |
|
684 | 684 | def __call__(self, repo, wctx, f): |
|
685 | 685 | if wctx.isinmemory(): |
|
686 | 686 | # Nothing to do in IMM for the same reason as ``_checkunknownfile``. |
|
687 | 687 | return False |
|
688 | 688 | |
|
689 | 689 | # Check for path prefixes that exist as unknown files. |
|
690 | 690 | for p in reversed(list(util.finddirs(f))): |
|
691 | 691 | if p in self._missingdircache: |
|
692 | 692 | return |
|
693 | 693 | if p in self._unknowndircache: |
|
694 | 694 | continue |
|
695 | 695 | if repo.wvfs.audit.check(p): |
|
696 | 696 | if (repo.wvfs.isfileorlink(p) |
|
697 | 697 | and repo.dirstate.normalize(p) not in repo.dirstate): |
|
698 | 698 | return p |
|
699 | 699 | if not repo.wvfs.lexists(p): |
|
700 | 700 | self._missingdircache.add(p) |
|
701 | 701 | return |
|
702 | 702 | self._unknowndircache.add(p) |
|
703 | 703 | |
|
704 | 704 | # Check if the file conflicts with a directory containing unknown files. |
|
705 | 705 | if repo.wvfs.audit.check(f) and repo.wvfs.isdir(f): |
|
706 | 706 | # Does the directory contain any files that are not in the dirstate? |
|
707 | 707 | for p, dirs, files in repo.wvfs.walk(f): |
|
708 | 708 | for fn in files: |
|
709 | 709 | relf = repo.dirstate.normalize(repo.wvfs.reljoin(p, fn)) |
|
710 | 710 | if relf not in repo.dirstate: |
|
711 | 711 | return f |
|
712 | 712 | return None |
|
713 | 713 | |
|
714 | 714 | def _checkunknownfiles(repo, wctx, mctx, force, actions, mergeforce): |
|
715 | 715 | """ |
|
716 | 716 | Considers any actions that care about the presence of conflicting unknown |
|
717 | 717 | files. For some actions, the result is to abort; for others, it is to |
|
718 | 718 | choose a different action. |
|
719 | 719 | """ |
|
720 | 720 | fileconflicts = set() |
|
721 | 721 | pathconflicts = set() |
|
722 | 722 | warnconflicts = set() |
|
723 | 723 | abortconflicts = set() |
|
724 | 724 | unknownconfig = _getcheckunknownconfig(repo, 'merge', 'checkunknown') |
|
725 | 725 | ignoredconfig = _getcheckunknownconfig(repo, 'merge', 'checkignored') |
|
726 | 726 | pathconfig = repo.ui.configbool('experimental', 'merge.checkpathconflicts') |
|
727 | 727 | if not force: |
|
728 | 728 | def collectconflicts(conflicts, config): |
|
729 | 729 | if config == 'abort': |
|
730 | 730 | abortconflicts.update(conflicts) |
|
731 | 731 | elif config == 'warn': |
|
732 | 732 | warnconflicts.update(conflicts) |
|
733 | 733 | |
|
734 | 734 | checkunknowndirs = _unknowndirschecker() |
|
735 | 735 | for f, (m, args, msg) in actions.iteritems(): |
|
736 | 736 | if m in ('c', 'dc'): |
|
737 | 737 | if _checkunknownfile(repo, wctx, mctx, f): |
|
738 | 738 | fileconflicts.add(f) |
|
739 | 739 | elif pathconfig and f not in wctx: |
|
740 | 740 | path = checkunknowndirs(repo, wctx, f) |
|
741 | 741 | if path is not None: |
|
742 | 742 | pathconflicts.add(path) |
|
743 | 743 | elif m == 'dg': |
|
744 | 744 | if _checkunknownfile(repo, wctx, mctx, f, args[0]): |
|
745 | 745 | fileconflicts.add(f) |
|
746 | 746 | |
|
747 | 747 | allconflicts = fileconflicts | pathconflicts |
|
748 | 748 | ignoredconflicts = set([c for c in allconflicts |
|
749 | 749 | if repo.dirstate._ignore(c)]) |
|
750 | 750 | unknownconflicts = allconflicts - ignoredconflicts |
|
751 | 751 | collectconflicts(ignoredconflicts, ignoredconfig) |
|
752 | 752 | collectconflicts(unknownconflicts, unknownconfig) |
|
753 | 753 | else: |
|
754 | 754 | for f, (m, args, msg) in actions.iteritems(): |
|
755 | 755 | if m == 'cm': |
|
756 | 756 | fl2, anc = args |
|
757 | 757 | different = _checkunknownfile(repo, wctx, mctx, f) |
|
758 | 758 | if repo.dirstate._ignore(f): |
|
759 | 759 | config = ignoredconfig |
|
760 | 760 | else: |
|
761 | 761 | config = unknownconfig |
|
762 | 762 | |
|
763 | 763 | # The behavior when force is True is described by this table: |
|
764 | 764 | # config different mergeforce | action backup |
|
765 | 765 | # * n * | get n |
|
766 | 766 | # * y y | merge - |
|
767 | 767 | # abort y n | merge - (1) |
|
768 | 768 | # warn y n | warn + get y |
|
769 | 769 | # ignore y n | get y |
|
770 | 770 | # |
|
771 | 771 | # (1) this is probably the wrong behavior here -- we should |
|
772 | 772 | # probably abort, but some actions like rebases currently |
|
773 | 773 | # don't like an abort happening in the middle of |
|
774 | 774 | # merge.update. |
|
775 | 775 | if not different: |
|
776 | 776 | actions[f] = ('g', (fl2, False), "remote created") |
|
777 | 777 | elif mergeforce or config == 'abort': |
|
778 | 778 | actions[f] = ('m', (f, f, None, False, anc), |
|
779 | 779 | "remote differs from untracked local") |
|
780 | 780 | elif config == 'abort': |
|
781 | 781 | abortconflicts.add(f) |
|
782 | 782 | else: |
|
783 | 783 | if config == 'warn': |
|
784 | 784 | warnconflicts.add(f) |
|
785 | 785 | actions[f] = ('g', (fl2, True), "remote created") |
|
786 | 786 | |
|
787 | 787 | for f in sorted(abortconflicts): |
|
788 | 788 | warn = repo.ui.warn |
|
789 | 789 | if f in pathconflicts: |
|
790 | 790 | if repo.wvfs.isfileorlink(f): |
|
791 | 791 | warn(_("%s: untracked file conflicts with directory\n") % f) |
|
792 | 792 | else: |
|
793 | 793 | warn(_("%s: untracked directory conflicts with file\n") % f) |
|
794 | 794 | else: |
|
795 | 795 | warn(_("%s: untracked file differs\n") % f) |
|
796 | 796 | if abortconflicts: |
|
797 | 797 | raise error.Abort(_("untracked files in working directory " |
|
798 | 798 | "differ from files in requested revision")) |
|
799 | 799 | |
|
800 | 800 | for f in sorted(warnconflicts): |
|
801 | 801 | if repo.wvfs.isfileorlink(f): |
|
802 | 802 | repo.ui.warn(_("%s: replacing untracked file\n") % f) |
|
803 | 803 | else: |
|
804 | 804 | repo.ui.warn(_("%s: replacing untracked files in directory\n") % f) |
|
805 | 805 | |
|
806 | 806 | for f, (m, args, msg) in actions.iteritems(): |
|
807 | 807 | if m == 'c': |
|
808 | 808 | backup = (f in fileconflicts or f in pathconflicts or |
|
809 | 809 | any(p in pathconflicts for p in util.finddirs(f))) |
|
810 | 810 | flags, = args |
|
811 | 811 | actions[f] = ('g', (flags, backup), msg) |
|
812 | 812 | |
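# A standalone restatement of the force=True decision table commented inside
# _checkunknownfiles() above, written as a pure function for readability only;
# it mirrors those branches (None stands for the '-' backup column, and the
# 'warn' case additionally records a warning) and is not used by the real code.
def _forced_action(config, different, mergeforce):
    if not different:
        return 'get', False          # untracked file matches remote content
    if mergeforce or config == 'abort':
        return 'merge', None         # (1) see the caveat in the table above
    # config is 'warn' or 'ignore': take the remote file but keep a backup
    return 'get', True

assert _forced_action('ignore', False, False) == ('get', False)
assert _forced_action('warn', True, False) == ('get', True)
assert _forced_action('abort', True, False) == ('merge', None)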
|
813 | 813 | def _forgetremoved(wctx, mctx, branchmerge): |
|
814 | 814 | """ |
|
815 | 815 | Forget removed files |
|
816 | 816 | |
|
817 | 817 | If we're jumping between revisions (as opposed to merging), and if |
|
818 | 818 | neither the working directory nor the target rev has the file, |
|
819 | 819 | then we need to remove it from the dirstate, to prevent the |
|
820 | 820 | dirstate from listing the file when it is no longer in the |
|
821 | 821 | manifest. |
|
822 | 822 | |
|
823 | 823 | If we're merging, and the other revision has removed a file |
|
824 | 824 | that is not present in the working directory, we need to mark it |
|
825 | 825 | as removed. |
|
826 | 826 | """ |
|
827 | 827 | |
|
828 | 828 | actions = {} |
|
829 | 829 | m = 'f' |
|
830 | 830 | if branchmerge: |
|
831 | 831 | m = 'r' |
|
832 | 832 | for f in wctx.deleted(): |
|
833 | 833 | if f not in mctx: |
|
834 | 834 | actions[f] = m, None, "forget deleted" |
|
835 | 835 | |
|
836 | 836 | if not branchmerge: |
|
837 | 837 | for f in wctx.removed(): |
|
838 | 838 | if f not in mctx: |
|
839 | 839 | actions[f] = 'f', None, "forget removed" |
|
840 | 840 | |
|
841 | 841 | return actions |
|
842 | 842 | |
|
843 | 843 | def _checkcollision(repo, wmf, actions): |
|
844 | 844 | # build provisional merged manifest up |
|
845 | 845 | pmmf = set(wmf) |
|
846 | 846 | |
|
847 | 847 | if actions: |
|
848 | 848 | # k, dr, e and rd are no-op |
|
849 | 849 | for m in 'a', 'am', 'f', 'g', 'cd', 'dc': |
|
850 | 850 | for f, args, msg in actions[m]: |
|
851 | 851 | pmmf.add(f) |
|
852 | 852 | for f, args, msg in actions['r']: |
|
853 | 853 | pmmf.discard(f) |
|
854 | 854 | for f, args, msg in actions['dm']: |
|
855 | 855 | f2, flags = args |
|
856 | 856 | pmmf.discard(f2) |
|
857 | 857 | pmmf.add(f) |
|
858 | 858 | for f, args, msg in actions['dg']: |
|
859 | 859 | pmmf.add(f) |
|
860 | 860 | for f, args, msg in actions['m']: |
|
861 | 861 | f1, f2, fa, move, anc = args |
|
862 | 862 | if move: |
|
863 | 863 | pmmf.discard(f1) |
|
864 | 864 | pmmf.add(f) |
|
865 | 865 | |
|
866 | 866 | # check case-folding collision in provisional merged manifest |
|
867 | 867 | foldmap = {} |
|
868 | 868 | for f in pmmf: |
|
869 | 869 | fold = util.normcase(f) |
|
870 | 870 | if fold in foldmap: |
|
871 | 871 | raise error.Abort(_("case-folding collision between %s and %s") |
|
872 | 872 | % (f, foldmap[fold])) |
|
873 | 873 | foldmap[fold] = f |
|
874 | 874 | |
|
875 | 875 | # check case-folding of directories |
|
876 | 876 | foldprefix = unfoldprefix = lastfull = '' |
|
877 | 877 | for fold, f in sorted(foldmap.items()): |
|
878 | 878 | if fold.startswith(foldprefix) and not f.startswith(unfoldprefix): |
|
879 | 879 | # the folded prefix matches but actual casing is different |
|
880 | 880 | raise error.Abort(_("case-folding collision between " |
|
881 | 881 | "%s and directory of %s") % (lastfull, f)) |
|
882 | 882 | foldprefix = fold + '/' |
|
883 | 883 | unfoldprefix = f + '/' |
|
884 | 884 | lastfull = f |
|
885 | 885 | |
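# Minimal sketch of the case-folding collision check above, using str.lower as
# a stand-in for util.normcase (which is roughly what it does on
# case-insensitive filesystems); the file names are hypothetical.
def find_collision(paths):
    foldmap = {}
    for f in paths:
        fold = f.lower()
        if fold in foldmap:
            return f, foldmap[fold]   # two paths collide after folding
        foldmap[fold] = f
    return None

assert find_collision(['README', 'docs/x']) is None
assert find_collision(['README', 'readme']) == ('readme', 'README')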
|
886 | 886 | def driverpreprocess(repo, ms, wctx, labels=None): |
|
887 | 887 | """run the preprocess step of the merge driver, if any |
|
888 | 888 | |
|
889 | 889 | This is currently not implemented -- it's an extension point.""" |
|
890 | 890 | return True |
|
891 | 891 | |
|
892 | 892 | def driverconclude(repo, ms, wctx, labels=None): |
|
893 | 893 | """run the conclude step of the merge driver, if any |
|
894 | 894 | |
|
895 | 895 | This is currently not implemented -- it's an extension point.""" |
|
896 | 896 | return True |
|
897 | 897 | |
|
898 | 898 | def _filesindirs(repo, manifest, dirs): |
|
899 | 899 | """ |
|
900 | 900 | Generator that yields pairs of all the files in the manifest that are found |
|
901 | 901 | inside the directories listed in dirs, and which directory they are found |
|
902 | 902 | in. |
|
903 | 903 | """ |
|
904 | 904 | for f in manifest: |
|
905 | 905 | for p in util.finddirs(f): |
|
906 | 906 | if p in dirs: |
|
907 | 907 | yield f, p |
|
908 | 908 | break |
|
909 | 909 | |
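# Sketch of what _filesindirs() yields, with a simplified stand-in for
# util.finddirs; the "manifest" here is just a list of hypothetical paths.
def finddirs(path):
    pos = path.rfind('/')
    while pos != -1:
        yield path[:pos]
        pos = path.rfind('/', 0, pos)

def filesindirs(manifest, dirs):
    for f in manifest:
        for p in finddirs(f):
            if p in dirs:
                yield f, p
                break

files = ['a/b/c.txt', 'a/d.txt', 'e.txt']
assert list(filesindirs(files, {'a/b'})) == [('a/b/c.txt', 'a/b')]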
|
910 | 910 | def checkpathconflicts(repo, wctx, mctx, actions): |
|
911 | 911 | """ |
|
912 | 912 | Check if any actions introduce path conflicts in the repository, updating |
|
913 | 913 | actions to record or handle the path conflict accordingly. |
|
914 | 914 | """ |
|
915 | 915 | mf = wctx.manifest() |
|
916 | 916 | |
|
917 | 917 | # The set of local files that conflict with a remote directory. |
|
918 | 918 | localconflicts = set() |
|
919 | 919 | |
|
920 | 920 | # The set of directories that conflict with a remote file, and so may cause |
|
921 | 921 | # conflicts if they still contain any files after the merge. |
|
922 | 922 | remoteconflicts = set() |
|
923 | 923 | |
|
924 | 924 | # The set of directories that appear as both a file and a directory in the |
|
925 | 925 | # remote manifest. These indicate an invalid remote manifest, which |
|
926 | 926 | # can't be cleanly updated to.
|
927 | 927 | invalidconflicts = set() |
|
928 | 928 | |
|
929 | 929 | # The set of directories that contain files that are being created. |
|
930 | 930 | createdfiledirs = set() |
|
931 | 931 | |
|
932 | 932 | # The set of files deleted by all the actions. |
|
933 | 933 | deletedfiles = set() |
|
934 | 934 | |
|
935 | 935 | for f, (m, args, msg) in actions.items(): |
|
936 | 936 | if m in ('c', 'dc', 'm', 'cm'): |
|
937 | 937 | # This action may create a new local file. |
|
938 | 938 | createdfiledirs.update(util.finddirs(f)) |
|
939 | 939 | if mf.hasdir(f): |
|
940 | 940 | # The file aliases a local directory. This might be ok if all |
|
941 | 941 | # the files in the local directory are being deleted. This |
|
942 | 942 | # will be checked once we know what all the deleted files are. |
|
943 | 943 | remoteconflicts.add(f) |
|
944 | 944 | # Track the names of all deleted files. |
|
945 | 945 | if m == 'r': |
|
946 | 946 | deletedfiles.add(f) |
|
947 | 947 | if m == 'm': |
|
948 | 948 | f1, f2, fa, move, anc = args |
|
949 | 949 | if move: |
|
950 | 950 | deletedfiles.add(f1) |
|
951 | 951 | if m == 'dm': |
|
952 | 952 | f2, flags = args |
|
953 | 953 | deletedfiles.add(f2) |
|
954 | 954 | |
|
955 | 955 | # Check all directories that contain created files for path conflicts. |
|
956 | 956 | for p in createdfiledirs: |
|
957 | 957 | if p in mf: |
|
958 | 958 | if p in mctx: |
|
959 | 959 | # A file is in a directory which aliases both a local |
|
960 | 960 | # and a remote file. This is an internal inconsistency |
|
961 | 961 | # within the remote manifest. |
|
962 | 962 | invalidconflicts.add(p) |
|
963 | 963 | else: |
|
964 | 964 | # A file is in a directory which aliases a local file. |
|
965 | 965 | # We will need to rename the local file. |
|
966 | 966 | localconflicts.add(p) |
|
967 | 967 | if p in actions and actions[p][0] in ('c', 'dc', 'm', 'cm'): |
|
968 | 968 | # The file is in a directory which aliases a remote file. |
|
969 | 969 | # This is an internal inconsistency within the remote |
|
970 | 970 | # manifest. |
|
971 | 971 | invalidconflicts.add(p) |
|
972 | 972 | |
|
973 | 973 | # Rename all local conflicting files that have not been deleted. |
|
974 | 974 | for p in localconflicts: |
|
975 | 975 | if p not in deletedfiles: |
|
976 | 976 | ctxname = bytes(wctx).rstrip('+') |
|
977 | 977 | pnew = util.safename(p, ctxname, wctx, set(actions.keys())) |
|
978 | 978 | actions[pnew] = ('pr', (p,), "local path conflict") |
|
979 | 979 | actions[p] = ('p', (pnew, 'l'), "path conflict") |
|
980 | 980 | |
|
981 | 981 | if remoteconflicts: |
|
982 | 982 | # Check if all files in the conflicting directories have been removed. |
|
983 | 983 | ctxname = bytes(mctx).rstrip('+') |
|
984 | 984 | for f, p in _filesindirs(repo, mf, remoteconflicts): |
|
985 | 985 | if f not in deletedfiles: |
|
986 | 986 | m, args, msg = actions[p] |
|
987 | 987 | pnew = util.safename(p, ctxname, wctx, set(actions.keys())) |
|
988 | 988 | if m in ('dc', 'm'): |
|
989 | 989 | # Action was merge, just update target. |
|
990 | 990 | actions[pnew] = (m, args, msg) |
|
991 | 991 | else: |
|
992 | 992 | # Action was create, change to renamed get action. |
|
993 | 993 | fl = args[0] |
|
994 | 994 | actions[pnew] = ('dg', (p, fl), "remote path conflict") |
|
995 | 995 | actions[p] = ('p', (pnew, 'r'), "path conflict") |
|
996 | 996 | remoteconflicts.remove(p) |
|
997 | 997 | break |
|
998 | 998 | |
|
999 | 999 | if invalidconflicts: |
|
1000 | 1000 | for p in invalidconflicts: |
|
1001 | 1001 | repo.ui.warn(_("%s: is both a file and a directory\n") % p) |
|
1002 | 1002 | raise error.Abort(_("destination manifest contains path conflicts")) |
|
1003 | 1003 | |
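# Toy walk-through of the simplest case handled above: the remote side creates
# 'a/b' while the local manifest still has a file named 'a', so the local file
# lands in localconflicts and will be renamed via a 'p'/'pr' action pair.
# Plain sets stand in for manifests; every name is hypothetical.
def prefixes(path):
    parts = path.split('/')
    return ['/'.join(parts[:i]) for i in range(1, len(parts))]

local_manifest = {'a', 'x/y'}
created = {'a/b'}
localconflicts = {p for f in created for p in prefixes(f)
                  if p in local_manifest}
assert localconflicts == {'a'}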
|
1004 | 1004 | def manifestmerge(repo, wctx, p2, pa, branchmerge, force, matcher, |
|
1005 | 1005 | acceptremote, followcopies, forcefulldiff=False): |
|
1006 | 1006 | """ |
|
1007 | 1007 | Merge wctx and p2 with ancestor pa and generate merge action list |
|
1008 | 1008 | |
|
1009 | 1009 | branchmerge and force are as passed in to update |
|
1010 | 1010 | matcher = matcher to filter file lists |
|
1011 | 1011 | acceptremote = accept the incoming changes without prompting |
|
1012 | 1012 | """ |
|
1013 | 1013 | if matcher is not None and matcher.always(): |
|
1014 | 1014 | matcher = None |
|
1015 | 1015 | |
|
1016 | 1016 | copy, movewithdir, diverge, renamedelete, dirmove = {}, {}, {}, {}, {} |
|
1017 | 1017 | |
|
1018 | 1018 | # manifests fetched in order are going to be faster, so prime the caches |
|
1019 | 1019 | [x.manifest() for x in |
|
1020 | 1020 | sorted(wctx.parents() + [p2, pa], key=scmutil.intrev)] |
|
1021 | 1021 | |
|
1022 | 1022 | if followcopies: |
|
1023 | 1023 | ret = copies.mergecopies(repo, wctx, p2, pa) |
|
1024 | 1024 | copy, movewithdir, diverge, renamedelete, dirmove = ret |
|
1025 | 1025 | |
|
1026 | 1026 | boolbm = pycompat.bytestr(bool(branchmerge)) |
|
1027 | 1027 | boolf = pycompat.bytestr(bool(force)) |
|
1028 | 1028 | boolm = pycompat.bytestr(bool(matcher)) |
|
1029 | 1029 | repo.ui.note(_("resolving manifests\n")) |
|
1030 | 1030 | repo.ui.debug(" branchmerge: %s, force: %s, partial: %s\n" |
|
1031 | 1031 | % (boolbm, boolf, boolm)) |
|
1032 | 1032 | repo.ui.debug(" ancestor: %s, local: %s, remote: %s\n" % (pa, wctx, p2)) |
|
1033 | 1033 | |
|
1034 | 1034 | m1, m2, ma = wctx.manifest(), p2.manifest(), pa.manifest() |
|
1035 | 1035 | copied = set(copy.values()) |
|
1036 | 1036 | copied.update(movewithdir.values()) |
|
1037 | 1037 | |
|
1038 | 1038 | if '.hgsubstate' in m1: |
|
1039 | 1039 | # check whether sub state is modified |
|
1040 | 1040 | if any(wctx.sub(s).dirty() for s in wctx.substate): |
|
1041 | 1041 | m1['.hgsubstate'] = modifiednodeid |
|
1042 | 1042 | |
|
1043 | 1043 | # Don't use m2-vs-ma optimization if: |
|
1044 | 1044 | # - ma is the same as m1 or m2, which we're just going to diff again later |
|
1045 | 1045 | # - The caller specifically asks for a full diff, which is useful during bid |
|
1046 | 1046 | # merge. |
|
1047 | 1047 | if (pa not in ([wctx, p2] + wctx.parents()) and not forcefulldiff): |
|
1048 | 1048 | # Identify which files are relevant to the merge, so we can limit the |
|
1049 | 1049 | # total m1-vs-m2 diff to just those files. This has significant |
|
1050 | 1050 | # performance benefits in large repositories. |
|
1051 | 1051 | relevantfiles = set(ma.diff(m2).keys()) |
|
1052 | 1052 | |
|
1053 | 1053 | # For copied and moved files, we need to add the source file too. |
|
1054 | 1054 | for copykey, copyvalue in copy.iteritems(): |
|
1055 | 1055 | if copyvalue in relevantfiles: |
|
1056 | 1056 | relevantfiles.add(copykey) |
|
1057 | 1057 | for movedirkey in movewithdir: |
|
1058 | 1058 | relevantfiles.add(movedirkey) |
|
1059 | 1059 | filesmatcher = scmutil.matchfiles(repo, relevantfiles) |
|
1060 | 1060 | matcher = matchmod.intersectmatchers(matcher, filesmatcher) |
|
1061 | 1061 | |
|
1062 | 1062 | diff = m1.diff(m2, match=matcher) |
|
1063 | 1063 | |
|
1064 | 1064 | if matcher is None: |
|
1065 | 1065 | matcher = matchmod.always('', '') |
|
1066 | 1066 | |
|
1067 | 1067 | actions = {} |
|
1068 | 1068 | for f, ((n1, fl1), (n2, fl2)) in diff.iteritems(): |
|
1069 | 1069 | if n1 and n2: # file exists on both local and remote side |
|
1070 | 1070 | if f not in ma: |
|
1071 | 1071 | fa = copy.get(f, None) |
|
1072 | 1072 | if fa is not None: |
|
1073 | 1073 | actions[f] = ('m', (f, f, fa, False, pa.node()), |
|
1074 | 1074 | "both renamed from " + fa) |
|
1075 | 1075 | else: |
|
1076 | 1076 | actions[f] = ('m', (f, f, None, False, pa.node()), |
|
1077 | 1077 | "both created") |
|
1078 | 1078 | else: |
|
1079 | 1079 | a = ma[f] |
|
1080 | 1080 | fla = ma.flags(f) |
|
1081 | 1081 | nol = 'l' not in fl1 + fl2 + fla |
|
1082 | 1082 | if n2 == a and fl2 == fla: |
|
1083 | 1083 | actions[f] = ('k', (), "remote unchanged") |
|
1084 | 1084 | elif n1 == a and fl1 == fla: # local unchanged - use remote |
|
1085 | 1085 | if n1 == n2: # optimization: keep local content |
|
1086 | 1086 | actions[f] = ('e', (fl2,), "update permissions") |
|
1087 | 1087 | else: |
|
1088 | 1088 | actions[f] = ('g', (fl2, False), "remote is newer") |
|
1089 | 1089 | elif nol and n2 == a: # remote only changed 'x' |
|
1090 | 1090 | actions[f] = ('e', (fl2,), "update permissions") |
|
1091 | 1091 | elif nol and n1 == a: # local only changed 'x' |
|
1092 | 1092 | actions[f] = ('g', (fl1, False), "remote is newer") |
|
1093 | 1093 | else: # both changed something |
|
1094 | 1094 | actions[f] = ('m', (f, f, f, False, pa.node()), |
|
1095 | 1095 | "versions differ") |
|
1096 | 1096 | elif n1: # file exists only on local side |
|
1097 | 1097 | if f in copied: |
|
1098 | 1098 | pass # we'll deal with it on m2 side |
|
1099 | 1099 | elif f in movewithdir: # directory rename, move local |
|
1100 | 1100 | f2 = movewithdir[f] |
|
1101 | 1101 | if f2 in m2: |
|
1102 | 1102 | actions[f2] = ('m', (f, f2, None, True, pa.node()), |
|
1103 | 1103 | "remote directory rename, both created") |
|
1104 | 1104 | else: |
|
1105 | 1105 | actions[f2] = ('dm', (f, fl1), |
|
1106 | 1106 | "remote directory rename - move from " + f) |
|
1107 | 1107 | elif f in copy: |
|
1108 | 1108 | f2 = copy[f] |
|
1109 | 1109 | actions[f] = ('m', (f, f2, f2, False, pa.node()), |
|
1110 | 1110 | "local copied/moved from " + f2) |
|
1111 | 1111 | elif f in ma: # clean, a different, no remote |
|
1112 | 1112 | if n1 != ma[f]: |
|
1113 | 1113 | if acceptremote: |
|
1114 | 1114 | actions[f] = ('r', None, "remote delete") |
|
1115 | 1115 | else: |
|
1116 | 1116 | actions[f] = ('cd', (f, None, f, False, pa.node()), |
|
1117 | 1117 | "prompt changed/deleted") |
|
1118 | 1118 | elif n1 == addednodeid: |
|
1119 | 1119 | # This extra 'a' is added by working copy manifest to mark |
|
1120 | 1120 | # the file as locally added. We should forget it instead of |
|
1121 | 1121 | # deleting it. |
|
1122 | 1122 | actions[f] = ('f', None, "remote deleted") |
|
1123 | 1123 | else: |
|
1124 | 1124 | actions[f] = ('r', None, "other deleted") |
|
1125 | 1125 | elif n2: # file exists only on remote side |
|
1126 | 1126 | if f in copied: |
|
1127 | 1127 | pass # we'll deal with it on m1 side |
|
1128 | 1128 | elif f in movewithdir: |
|
1129 | 1129 | f2 = movewithdir[f] |
|
1130 | 1130 | if f2 in m1: |
|
1131 | 1131 | actions[f2] = ('m', (f2, f, None, False, pa.node()), |
|
1132 | 1132 | "local directory rename, both created") |
|
1133 | 1133 | else: |
|
1134 | 1134 | actions[f2] = ('dg', (f, fl2), |
|
1135 | 1135 | "local directory rename - get from " + f) |
|
1136 | 1136 | elif f in copy: |
|
1137 | 1137 | f2 = copy[f] |
|
1138 | 1138 | if f2 in m2: |
|
1139 | 1139 | actions[f] = ('m', (f2, f, f2, False, pa.node()), |
|
1140 | 1140 | "remote copied from " + f2) |
|
1141 | 1141 | else: |
|
1142 | 1142 | actions[f] = ('m', (f2, f, f2, True, pa.node()), |
|
1143 | 1143 | "remote moved from " + f2) |
|
1144 | 1144 | elif f not in ma: |
|
1145 | 1145 | # local unknown, remote created: the logic is described by the |
|
1146 | 1146 | # following table: |
|
1147 | 1147 | # |
|
1148 | 1148 | # force branchmerge different | action |
|
1149 | 1149 | # n * * | create |
|
1150 | 1150 | # y n * | create |
|
1151 | 1151 | # y y n | create |
|
1152 | 1152 | # y y y | merge |
|
1153 | 1153 | # |
|
1154 | 1154 | # Checking whether the files are different is expensive, so we |
|
1155 | 1155 | # don't do that when we can avoid it. |
|
1156 | 1156 | if not force: |
|
1157 | 1157 | actions[f] = ('c', (fl2,), "remote created") |
|
1158 | 1158 | elif not branchmerge: |
|
1159 | 1159 | actions[f] = ('c', (fl2,), "remote created") |
|
1160 | 1160 | else: |
|
1161 | 1161 | actions[f] = ('cm', (fl2, pa.node()), |
|
1162 | 1162 | "remote created, get or merge") |
|
1163 | 1163 | elif n2 != ma[f]: |
|
1164 | 1164 | df = None |
|
1165 | 1165 | for d in dirmove: |
|
1166 | 1166 | if f.startswith(d): |
|
1167 | 1167 | # new file added in a directory that was moved |
|
1168 | 1168 | df = dirmove[d] + f[len(d):] |
|
1169 | 1169 | break |
|
1170 | 1170 | if df is not None and df in m1: |
|
1171 | 1171 | actions[df] = ('m', (df, f, f, False, pa.node()), |
|
1172 | 1172 | "local directory rename - respect move from " + f) |
|
1173 | 1173 | elif acceptremote: |
|
1174 | 1174 | actions[f] = ('c', (fl2,), "remote recreating") |
|
1175 | 1175 | else: |
|
1176 | 1176 | actions[f] = ('dc', (None, f, f, False, pa.node()), |
|
1177 | 1177 | "prompt deleted/changed") |
|
1178 | 1178 | |
|
1179 | 1179 | if repo.ui.configbool('experimental', 'merge.checkpathconflicts'): |
|
1180 | 1180 | # If we are merging, look for path conflicts. |
|
1181 | 1181 | checkpathconflicts(repo, wctx, p2, actions) |
|
1182 | 1182 | |
|
1183 | 1183 | return actions, diverge, renamedelete |
|
1184 | 1184 | |
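# The shape of what manifestmerge() returns, with hypothetical file names: a
# dict mapping path -> (action code, action-specific args, human-readable
# message), plus the diverge/renamedelete dicts from copy tracing.
actions = {
    'kept.txt':   ('k', (), "remote unchanged"),
    'pulled.txt': ('g', ('', False), "remote is newer"),
    'both.txt':   ('m', ('both.txt', 'both.txt', 'both.txt', False,
                         b'\x12' * 20), "versions differ"),
    'gone.txt':   ('r', None, "other deleted"),
}
diverge, renamedelete = {}, {}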
|
1185 | 1185 | def _resolvetrivial(repo, wctx, mctx, ancestor, actions): |
|
1186 | 1186 | """Resolves false conflicts where the nodeid changed but the content |
|
1187 | 1187 | remained the same.""" |
|
1188 | 1188 | |
|
1189 | 1189 | for f, (m, args, msg) in actions.items(): |
|
1190 | 1190 | if m == 'cd' and f in ancestor and not wctx[f].cmp(ancestor[f]): |
|
1191 | 1191 | # local did change but ended up with same content |
|
1192 | 1192 | actions[f] = 'r', None, "prompt same" |
|
1193 | 1193 | elif m == 'dc' and f in ancestor and not mctx[f].cmp(ancestor[f]): |
|
1194 | 1194 | # remote did change but ended up with same content |
|
1195 | 1195 | del actions[f] # don't get = keep local deleted |
|
1196 | 1196 | |
|
1197 | 1197 | def calculateupdates(repo, wctx, mctx, ancestors, branchmerge, force, |
|
1198 | 1198 | acceptremote, followcopies, matcher=None, |
|
1199 | 1199 | mergeforce=False): |
|
1200 | 1200 | """Calculate the actions needed to merge mctx into wctx using ancestors""" |
|
1201 | 1201 | # Avoid cycle. |
|
1202 | 1202 | from . import sparse |
|
1203 | 1203 | |
|
1204 | 1204 | if len(ancestors) == 1: # default |
|
1205 | 1205 | actions, diverge, renamedelete = manifestmerge( |
|
1206 | 1206 | repo, wctx, mctx, ancestors[0], branchmerge, force, matcher, |
|
1207 | 1207 | acceptremote, followcopies) |
|
1208 | 1208 | _checkunknownfiles(repo, wctx, mctx, force, actions, mergeforce) |
|
1209 | 1209 | |
|
1210 | 1210 | else: # only when merge.preferancestor=* - the default |
|
1211 | 1211 | repo.ui.note( |
|
1212 | 1212 | _("note: merging %s and %s using bids from ancestors %s\n") % |
|
1213 | 1213 | (wctx, mctx, _(' and ').join(pycompat.bytestr(anc) |
|
1214 | 1214 | for anc in ancestors))) |
|
1215 | 1215 | |
|
1216 | 1216 | # Call for bids |
|
1217 | 1217 | fbids = {} # mapping filename to bids (action method to list of actions)
|
1218 | 1218 | diverge, renamedelete = None, None |
|
1219 | 1219 | for ancestor in ancestors: |
|
1220 | 1220 | repo.ui.note(_('\ncalculating bids for ancestor %s\n') % ancestor) |
|
1221 | 1221 | actions, diverge1, renamedelete1 = manifestmerge( |
|
1222 | 1222 | repo, wctx, mctx, ancestor, branchmerge, force, matcher, |
|
1223 | 1223 | acceptremote, followcopies, forcefulldiff=True) |
|
1224 | 1224 | _checkunknownfiles(repo, wctx, mctx, force, actions, mergeforce) |
|
1225 | 1225 | |
|
1226 | 1226 | # Track the shortest set of warnings on the theory that bid
|
1227 | 1227 | # merge will correctly incorporate more information |
|
1228 | 1228 | if diverge is None or len(diverge1) < len(diverge): |
|
1229 | 1229 | diverge = diverge1 |
|
1230 | 1230 | if renamedelete is None or len(renamedelete) < len(renamedelete1): |
|
1231 | 1231 | renamedelete = renamedelete1 |
|
1232 | 1232 | |
|
1233 | 1233 | for f, a in sorted(actions.iteritems()): |
|
1234 | 1234 | m, args, msg = a |
|
1235 | 1235 | repo.ui.debug(' %s: %s -> %s\n' % (f, msg, m)) |
|
1236 | 1236 | if f in fbids: |
|
1237 | 1237 | d = fbids[f] |
|
1238 | 1238 | if m in d: |
|
1239 | 1239 | d[m].append(a) |
|
1240 | 1240 | else: |
|
1241 | 1241 | d[m] = [a] |
|
1242 | 1242 | else: |
|
1243 | 1243 | fbids[f] = {m: [a]} |
|
1244 | 1244 | |
|
1245 | 1245 | # Pick the best bid for each file |
|
1246 | 1246 | repo.ui.note(_('\nauction for merging merge bids\n')) |
|
1247 | 1247 | actions = {} |
|
1248 | 1248 | dms = [] # filenames that have dm actions |
|
1249 | 1249 | for f, bids in sorted(fbids.items()): |
|
1250 | 1250 | # bids is a mapping from action method to list of actions
|
1251 | 1251 | # Consensus? |
|
1252 | 1252 | if len(bids) == 1: # all bids are the same kind of method |
|
1253 | 1253 | m, l = list(bids.items())[0] |
|
1254 | 1254 | if all(a == l[0] for a in l[1:]): # len(bids) is > 1 |
|
1255 | 1255 | repo.ui.note(_(" %s: consensus for %s\n") % (f, m)) |
|
1256 | 1256 | actions[f] = l[0] |
|
1257 | 1257 | if m == 'dm': |
|
1258 | 1258 | dms.append(f) |
|
1259 | 1259 | continue |
|
1260 | 1260 | # If keep is an option, just do it. |
|
1261 | 1261 | if 'k' in bids: |
|
1262 | 1262 | repo.ui.note(_(" %s: picking 'keep' action\n") % f) |
|
1263 | 1263 | actions[f] = bids['k'][0] |
|
1264 | 1264 | continue |
|
1265 | 1265 | # If there are gets and they all agree [how could they not?], do it. |
|
1266 | 1266 | if 'g' in bids: |
|
1267 | 1267 | ga0 = bids['g'][0] |
|
1268 | 1268 | if all(a == ga0 for a in bids['g'][1:]): |
|
1269 | 1269 | repo.ui.note(_(" %s: picking 'get' action\n") % f) |
|
1270 | 1270 | actions[f] = ga0 |
|
1271 | 1271 | continue |
|
1272 | 1272 | # TODO: Consider other simple actions such as mode changes |
|
1273 | 1273 | # Handle inefficient democracy.
|
1274 | 1274 | repo.ui.note(_(' %s: multiple bids for merge action:\n') % f) |
|
1275 | 1275 | for m, l in sorted(bids.items()): |
|
1276 | 1276 | for _f, args, msg in l: |
|
1277 | 1277 | repo.ui.note(' %s -> %s\n' % (msg, m)) |
|
1278 | 1278 | # Pick random action. TODO: Instead, prompt user when resolving |
|
1279 | 1279 | m, l = list(bids.items())[0] |
|
1280 | 1280 | repo.ui.warn(_(' %s: ambiguous merge - picked %s action\n') % |
|
1281 | 1281 | (f, m)) |
|
1282 | 1282 | actions[f] = l[0] |
|
1283 | 1283 | if m == 'dm': |
|
1284 | 1284 | dms.append(f) |
|
1285 | 1285 | continue |
|
1286 | 1286 | # Work around 'dm' that can cause multiple actions for the same file |
|
1287 | 1287 | for f in dms: |
|
1288 | 1288 | dm, (f0, flags), msg = actions[f] |
|
1289 | 1289 | assert dm == 'dm', dm |
|
1290 | 1290 | if f0 in actions and actions[f0][0] == 'r': |
|
1291 | 1291 | # We have one bid for removing a file and another for moving it. |
|
1292 | 1292 | # These two could be merged as first move and then delete ... |
|
1293 | 1293 | # but instead drop moving and just delete. |
|
1294 | 1294 | del actions[f] |
|
1295 | 1295 | repo.ui.note(_('end of auction\n\n')) |
|
1296 | 1296 | |
|
1297 | 1297 | _resolvetrivial(repo, wctx, mctx, ancestors[0], actions) |
|
1298 | 1298 | |
|
1299 | 1299 | if wctx.rev() is None: |
|
1300 | 1300 | fractions = _forgetremoved(wctx, mctx, branchmerge) |
|
1301 | 1301 | actions.update(fractions) |
|
1302 | 1302 | |
|
1303 | 1303 | prunedactions = sparse.filterupdatesactions(repo, wctx, mctx, branchmerge, |
|
1304 | 1304 | actions) |
|
1305 | 1305 | |
|
1306 | 1306 | return prunedactions, diverge, renamedelete |
|
1307 | 1307 | |
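# Minimal sketch of the bid bookkeeping used in the multi-ancestor branch
# above: fbids maps each file to {action code: [bids]}, and a file reaches
# consensus when every ancestor produced the same single kind of bid. File
# names and bid contents are hypothetical.
fbids = {
    'a.txt': {'g': [('g', ('', False), "remote is newer"),
                    ('g', ('', False), "remote is newer")]},
    'b.txt': {'k': [('k', (), "remote unchanged")],
              'm': [('m', ('b.txt', 'b.txt', None, False, None),
                     "both created")]},
}

def has_consensus(bids):
    if len(bids) != 1:
        return False
    (_m, l), = bids.items()
    return all(a == l[0] for a in l[1:])

assert has_consensus(fbids['a.txt'])
assert not has_consensus(fbids['b.txt'])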
|
1308 | 1308 | def _getcwd(): |
|
1309 | 1309 | try: |
|
1310 | 1310 | return pycompat.getcwd() |
|
1311 | 1311 | except OSError as err: |
|
1312 | 1312 | if err.errno == errno.ENOENT: |
|
1313 | 1313 | return None |
|
1314 | 1314 | raise |
|
1315 | 1315 | |
|
1316 | 1316 | def batchremove(repo, wctx, actions): |
|
1317 | 1317 | """apply removes to the working directory |
|
1318 | 1318 | |
|
1319 | 1319 | yields tuples for progress updates |
|
1320 | 1320 | """ |
|
1321 | 1321 | verbose = repo.ui.verbose |
|
1322 | 1322 | cwd = _getcwd() |
|
1323 | 1323 | i = 0 |
|
1324 | 1324 | for f, args, msg in actions: |
|
1325 | 1325 | repo.ui.debug(" %s: %s -> r\n" % (f, msg)) |
|
1326 | 1326 | if verbose: |
|
1327 | 1327 | repo.ui.note(_("removing %s\n") % f) |
|
1328 | 1328 | wctx[f].audit() |
|
1329 | 1329 | try: |
|
1330 | 1330 | wctx[f].remove(ignoremissing=True) |
|
1331 | 1331 | except OSError as inst: |
|
1332 | 1332 | repo.ui.warn(_("update failed to remove %s: %s!\n") % |
|
1333 | 1333 | (f, inst.strerror)) |
|
1334 | 1334 | if i == 100: |
|
1335 | 1335 | yield i, f |
|
1336 | 1336 | i = 0 |
|
1337 | 1337 | i += 1 |
|
1338 | 1338 | if i > 0: |
|
1339 | 1339 | yield i, f |
|
1340 | 1340 | |
|
1341 | 1341 | if cwd and not _getcwd(): |
|
1342 | 1342 | # cwd was removed in the course of removing files; print a helpful |
|
1343 | 1343 | # warning. |
|
1344 | 1344 | repo.ui.warn(_("current directory was removed\n" |
|
1345 | 1345 | "(consider changing to repo root: %s)\n") % repo.root) |
|
1346 | 1346 | |
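# The progress-batching idiom used by batchremove() and batchget(), pulled out
# on its own: report roughly every 100 items and flush the remainder at the
# end. The worker items here are hypothetical.
def batched_progress(items, batchsize=100):
    i = 0
    last = None
    for item in items:
        last = item
        if i == batchsize:
            yield i, item
            i = 0
        i += 1
    if i > 0:
        yield i, last

counts = [n for n, _item in batched_progress(range(250))]
assert sum(counts) == 250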
|
1347 | 1347 | def batchget(repo, mctx, wctx, actions): |
|
1348 | 1348 | """apply gets to the working directory |
|
1349 | 1349 | |
|
1350 | 1350 | mctx is the context to get from |
|
1351 | 1351 | |
|
1352 | 1352 | yields tuples for progress updates |
|
1353 | 1353 | """ |
|
1354 | 1354 | verbose = repo.ui.verbose |
|
1355 | 1355 | fctx = mctx.filectx |
|
1356 | 1356 | ui = repo.ui |
|
1357 | 1357 | i = 0 |
|
1358 | 1358 | with repo.wvfs.backgroundclosing(ui, expectedcount=len(actions)): |
|
1359 | 1359 | for f, (flags, backup), msg in actions: |
|
1360 | 1360 | repo.ui.debug(" %s: %s -> g\n" % (f, msg)) |
|
1361 | 1361 | if verbose: |
|
1362 | 1362 | repo.ui.note(_("getting %s\n") % f) |
|
1363 | 1363 | |
|
1364 | 1364 | if backup: |
|
1365 | 1365 | # If a file or directory exists with the same name, back that |
|
1366 | 1366 | # up. Otherwise, look to see if there is a file that conflicts |
|
1367 | 1367 | # with a directory this file is in, and if so, back that up. |
|
1368 | 1368 | absf = repo.wjoin(f) |
|
1369 | 1369 | if not repo.wvfs.lexists(f): |
|
1370 | 1370 | for p in util.finddirs(f): |
|
1371 | 1371 | if repo.wvfs.isfileorlink(p): |
|
1372 | 1372 | absf = repo.wjoin(p) |
|
1373 | 1373 | break |
|
1374 | 1374 | orig = scmutil.origpath(ui, repo, absf) |
|
1375 | 1375 | if repo.wvfs.lexists(absf): |
|
1376 | 1376 | util.rename(absf, orig) |
|
1377 | 1377 | wctx[f].clearunknown() |
|
1378 | 1378 | atomictemp = ui.configbool("experimental", "update.atomic-file") |
|
1379 | 1379 | wctx[f].write(fctx(f).data(), flags, backgroundclose=True, |
|
1380 | 1380 | atomictemp=atomictemp) |
|
1381 | 1381 | if i == 100: |
|
1382 | 1382 | yield i, f |
|
1383 | 1383 | i = 0 |
|
1384 | 1384 | i += 1 |
|
1385 | 1385 | if i > 0: |
|
1386 | 1386 | yield i, f |
|
1387 | 1387 | |
|
1388 | def _prefetchfiles(repo, ctx, actions): | |
|
1389 | """Invoke ``scmutil.fileprefetchhooks()`` for the files relevant to the dict | |
|
1390 | of merge actions. ``ctx`` is the context being merged in.""" | |
|
1391 | ||
|
1392 | # Skipping 'a', 'am', 'f', 'r', 'dm', 'e', 'k', 'p' and 'pr', because they | |
|
1393 | # don't touch the context to be merged in. 'cd' is skipped, because | |
|
1394 | # changed/deleted never resolves to something from the remote side. | |
|
1395 | oplist = [actions[a] for a in 'g dc dg m'.split()] | |
|
1396 | prefetch = scmutil.fileprefetchhooks | |
|
1397 | prefetch(repo, ctx, [f for sublist in oplist for f, args, msg in sublist]) | |
|
1388 | 1398 | |
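# What the oplist flattening in _prefetchfiles() produces, shown with a
# hypothetical actions dict: one flat list of the file names touched by the
# 'g', 'dc', 'dg' and 'm' actions.
actions = {
    'g':  [('new.txt', ('', False), "remote created")],
    'dc': [],
    'dg': [],
    'm':  [('both.txt', (None,) * 5, "versions differ")],
    'r':  [('old.txt', None, "other deleted")],   # not prefetched
}
oplist = [actions[a] for a in 'g dc dg m'.split()]
files = [f for sublist in oplist for f, args, msg in sublist]
assert files == ['new.txt', 'both.txt']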
|
1389 | 1399 | def applyupdates(repo, actions, wctx, mctx, overwrite, labels=None): |
|
1390 | 1400 | """apply the merge action list to the working directory |
|
1391 | 1401 | |
|
1392 | 1402 | wctx is the working copy context |
|
1393 | 1403 | mctx is the context to be merged into the working copy |
|
1394 | 1404 | |
|
1395 | 1405 | Return a tuple of counts (updated, merged, removed, unresolved) that |
|
1396 | 1406 | describes how many files were affected by the update. |
|
1397 | 1407 | """ |
|
1398 | 1408 | |
|
1409 | _prefetchfiles(repo, mctx, actions) | |
|
1410 | ||
|
1399 | 1411 | updated, merged, removed = 0, 0, 0 |
|
1400 | 1412 | ms = mergestate.clean(repo, wctx.p1().node(), mctx.node(), labels) |
|
1401 | 1413 | moves = [] |
|
1402 | 1414 | for m, l in actions.items(): |
|
1403 | 1415 | l.sort() |
|
1404 | 1416 | |
|
1405 | 1417 | # 'cd' and 'dc' actions are treated like other merge conflicts |
|
1406 | 1418 | mergeactions = sorted(actions['cd']) |
|
1407 | 1419 | mergeactions.extend(sorted(actions['dc'])) |
|
1408 | 1420 | mergeactions.extend(actions['m']) |
|
1409 | 1421 | for f, args, msg in mergeactions: |
|
1410 | 1422 | f1, f2, fa, move, anc = args |
|
1411 | 1423 | if f == '.hgsubstate': # merged internally |
|
1412 | 1424 | continue |
|
1413 | 1425 | if f1 is None: |
|
1414 | 1426 | fcl = filemerge.absentfilectx(wctx, fa) |
|
1415 | 1427 | else: |
|
1416 | 1428 | repo.ui.debug(" preserving %s for resolve of %s\n" % (f1, f)) |
|
1417 | 1429 | fcl = wctx[f1] |
|
1418 | 1430 | if f2 is None: |
|
1419 | 1431 | fco = filemerge.absentfilectx(mctx, fa) |
|
1420 | 1432 | else: |
|
1421 | 1433 | fco = mctx[f2] |
|
1422 | 1434 | actx = repo[anc] |
|
1423 | 1435 | if fa in actx: |
|
1424 | 1436 | fca = actx[fa] |
|
1425 | 1437 | else: |
|
1426 | 1438 | # TODO: move to absentfilectx |
|
1427 | 1439 | fca = repo.filectx(f1, fileid=nullrev) |
|
1428 | 1440 | ms.add(fcl, fco, fca, f) |
|
1429 | 1441 | if f1 != f and move: |
|
1430 | 1442 | moves.append(f1) |
|
1431 | 1443 | |
|
1432 | 1444 | _updating = _('updating') |
|
1433 | 1445 | _files = _('files') |
|
1434 | 1446 | progress = repo.ui.progress |
|
1435 | 1447 | |
|
1436 | 1448 | # remove renamed files after safely stored |
|
1437 | 1449 | for f in moves: |
|
1438 | 1450 | if wctx[f].lexists(): |
|
1439 | 1451 | repo.ui.debug("removing %s\n" % f) |
|
1440 | 1452 | wctx[f].audit() |
|
1441 | 1453 | wctx[f].remove() |
|
1442 | 1454 | |
|
1443 | 1455 | numupdates = sum(len(l) for m, l in actions.items() if m != 'k') |
|
1444 | 1456 | z = 0 |
|
1445 | 1457 | |
|
1446 | 1458 | if [a for a in actions['r'] if a[0] == '.hgsubstate']: |
|
1447 | 1459 | subrepoutil.submerge(repo, wctx, mctx, wctx, overwrite, labels) |
|
1448 | 1460 | |
|
1449 | 1461 | # record path conflicts |
|
1450 | 1462 | for f, args, msg in actions['p']: |
|
1451 | 1463 | f1, fo = args |
|
1452 | 1464 | s = repo.ui.status |
|
1453 | 1465 | s(_("%s: path conflict - a file or link has the same name as a " |
|
1454 | 1466 | "directory\n") % f) |
|
1455 | 1467 | if fo == 'l': |
|
1456 | 1468 | s(_("the local file has been renamed to %s\n") % f1) |
|
1457 | 1469 | else: |
|
1458 | 1470 | s(_("the remote file has been renamed to %s\n") % f1) |
|
1459 | 1471 | s(_("resolve manually then use 'hg resolve --mark %s'\n") % f) |
|
1460 | 1472 | ms.addpath(f, f1, fo) |
|
1461 | 1473 | z += 1 |
|
1462 | 1474 | progress(_updating, z, item=f, total=numupdates, unit=_files) |
|
1463 | 1475 | |
|
1464 | 1476 | # When merging in-memory, we can't support worker processes, so set the |
|
1465 | 1477 | # per-item cost at 0 in that case. |
|
1466 | 1478 | cost = 0 if wctx.isinmemory() else 0.001 |
|
1467 | 1479 | |
|
1468 | 1480 | # remove in parallel (must come before resolving path conflicts and getting) |
|
1469 | 1481 | prog = worker.worker(repo.ui, cost, batchremove, (repo, wctx), |
|
1470 | 1482 | actions['r']) |
|
1471 | 1483 | for i, item in prog: |
|
1472 | 1484 | z += i |
|
1473 | 1485 | progress(_updating, z, item=item, total=numupdates, unit=_files) |
|
1474 | 1486 | removed = len(actions['r']) |
|
1475 | 1487 | |
|
1476 | 1488 | # resolve path conflicts (must come before getting) |
|
1477 | 1489 | for f, args, msg in actions['pr']: |
|
1478 | 1490 | repo.ui.debug(" %s: %s -> pr\n" % (f, msg)) |
|
1479 | 1491 | f0, = args |
|
1480 | 1492 | if wctx[f0].lexists(): |
|
1481 | 1493 | repo.ui.note(_("moving %s to %s\n") % (f0, f)) |
|
1482 | 1494 | wctx[f].audit() |
|
1483 | 1495 | wctx[f].write(wctx.filectx(f0).data(), wctx.filectx(f0).flags()) |
|
1484 | 1496 | wctx[f0].remove() |
|
1485 | 1497 | z += 1 |
|
1486 | 1498 | progress(_updating, z, item=f, total=numupdates, unit=_files) |
|
1487 | 1499 | |
|
1488 | 1500 | # get in parallel |
|
1489 | 1501 | prog = worker.worker(repo.ui, cost, batchget, (repo, mctx, wctx), |
|
1490 | 1502 | actions['g']) |
|
1491 | 1503 | for i, item in prog: |
|
1492 | 1504 | z += i |
|
1493 | 1505 | progress(_updating, z, item=item, total=numupdates, unit=_files) |
|
1494 | 1506 | updated = len(actions['g']) |
|
1495 | 1507 | |
|
1496 | 1508 | if [a for a in actions['g'] if a[0] == '.hgsubstate']: |
|
1497 | 1509 | subrepoutil.submerge(repo, wctx, mctx, wctx, overwrite, labels) |
|
1498 | 1510 | |
|
1499 | 1511 | # forget (manifest only, just log it) (must come first) |
|
1500 | 1512 | for f, args, msg in actions['f']: |
|
1501 | 1513 | repo.ui.debug(" %s: %s -> f\n" % (f, msg)) |
|
1502 | 1514 | z += 1 |
|
1503 | 1515 | progress(_updating, z, item=f, total=numupdates, unit=_files) |
|
1504 | 1516 | |
|
1505 | 1517 | # re-add (manifest only, just log it) |
|
1506 | 1518 | for f, args, msg in actions['a']: |
|
1507 | 1519 | repo.ui.debug(" %s: %s -> a\n" % (f, msg)) |
|
1508 | 1520 | z += 1 |
|
1509 | 1521 | progress(_updating, z, item=f, total=numupdates, unit=_files) |
|
1510 | 1522 | |
|
1511 | 1523 | # re-add/mark as modified (manifest only, just log it) |
|
1512 | 1524 | for f, args, msg in actions['am']: |
|
1513 | 1525 | repo.ui.debug(" %s: %s -> am\n" % (f, msg)) |
|
1514 | 1526 | z += 1 |
|
1515 | 1527 | progress(_updating, z, item=f, total=numupdates, unit=_files) |
|
1516 | 1528 | |
|
1517 | 1529 | # keep (noop, just log it) |
|
1518 | 1530 | for f, args, msg in actions['k']: |
|
1519 | 1531 | repo.ui.debug(" %s: %s -> k\n" % (f, msg)) |
|
1520 | 1532 | # no progress |
|
1521 | 1533 | |
|
1522 | 1534 | # directory rename, move local |
|
1523 | 1535 | for f, args, msg in actions['dm']: |
|
1524 | 1536 | repo.ui.debug(" %s: %s -> dm\n" % (f, msg)) |
|
1525 | 1537 | z += 1 |
|
1526 | 1538 | progress(_updating, z, item=f, total=numupdates, unit=_files) |
|
1527 | 1539 | f0, flags = args |
|
1528 | 1540 | repo.ui.note(_("moving %s to %s\n") % (f0, f)) |
|
1529 | 1541 | wctx[f].audit() |
|
1530 | 1542 | wctx[f].write(wctx.filectx(f0).data(), flags) |
|
1531 | 1543 | wctx[f0].remove() |
|
1532 | 1544 | updated += 1 |
|
1533 | 1545 | |
|
1534 | 1546 | # local directory rename, get |
|
1535 | 1547 | for f, args, msg in actions['dg']: |
|
1536 | 1548 | repo.ui.debug(" %s: %s -> dg\n" % (f, msg)) |
|
1537 | 1549 | z += 1 |
|
1538 | 1550 | progress(_updating, z, item=f, total=numupdates, unit=_files) |
|
1539 | 1551 | f0, flags = args |
|
1540 | 1552 | repo.ui.note(_("getting %s to %s\n") % (f0, f)) |
|
1541 | 1553 | wctx[f].write(mctx.filectx(f0).data(), flags) |
|
1542 | 1554 | updated += 1 |
|
1543 | 1555 | |
|
1544 | 1556 | # exec |
|
1545 | 1557 | for f, args, msg in actions['e']: |
|
1546 | 1558 | repo.ui.debug(" %s: %s -> e\n" % (f, msg)) |
|
1547 | 1559 | z += 1 |
|
1548 | 1560 | progress(_updating, z, item=f, total=numupdates, unit=_files) |
|
1549 | 1561 | flags, = args |
|
1550 | 1562 | wctx[f].audit() |
|
1551 | 1563 | wctx[f].setflags('l' in flags, 'x' in flags) |
|
1552 | 1564 | updated += 1 |
|
1553 | 1565 | |
|
1554 | 1566 | # the ordering is important here -- ms.mergedriver will raise if the merge |
|
1555 | 1567 | # driver has changed, and we want to be able to bypass it when overwrite is |
|
1556 | 1568 | # True |
|
1557 | 1569 | usemergedriver = not overwrite and mergeactions and ms.mergedriver |
|
1558 | 1570 | |
|
1559 | 1571 | if usemergedriver: |
|
1560 | 1572 | if wctx.isinmemory(): |
|
1561 | 1573 | raise error.InMemoryMergeConflictsError("in-memory merge does not " |
|
1562 | 1574 | "support mergedriver") |
|
1563 | 1575 | ms.commit() |
|
1564 | 1576 | proceed = driverpreprocess(repo, ms, wctx, labels=labels) |
|
1565 | 1577 | # the driver might leave some files unresolved |
|
1566 | 1578 | unresolvedf = set(ms.unresolved()) |
|
1567 | 1579 | if not proceed: |
|
1568 | 1580 | # XXX setting unresolved to at least 1 is a hack to make sure we |
|
1569 | 1581 | # error out |
|
1570 | 1582 | return updated, merged, removed, max(len(unresolvedf), 1) |
|
1571 | 1583 | newactions = [] |
|
1572 | 1584 | for f, args, msg in mergeactions: |
|
1573 | 1585 | if f in unresolvedf: |
|
1574 | 1586 | newactions.append((f, args, msg)) |
|
1575 | 1587 | mergeactions = newactions |
|
1576 | 1588 | |
|
1577 | 1589 | try: |
|
1578 | 1590 | # premerge |
|
1579 | 1591 | tocomplete = [] |
|
1580 | 1592 | for f, args, msg in mergeactions: |
|
1581 | 1593 | repo.ui.debug(" %s: %s -> m (premerge)\n" % (f, msg)) |
|
1582 | 1594 | z += 1 |
|
1583 | 1595 | progress(_updating, z, item=f, total=numupdates, unit=_files) |
|
1584 | 1596 | if f == '.hgsubstate': # subrepo states need updating |
|
1585 | 1597 | subrepoutil.submerge(repo, wctx, mctx, wctx.ancestor(mctx), |
|
1586 | 1598 | overwrite, labels) |
|
1587 | 1599 | continue |
|
1588 | 1600 | wctx[f].audit() |
|
1589 | 1601 | complete, r = ms.preresolve(f, wctx) |
|
1590 | 1602 | if not complete: |
|
1591 | 1603 | numupdates += 1 |
|
1592 | 1604 | tocomplete.append((f, args, msg)) |
|
1593 | 1605 | |
|
1594 | 1606 | # merge |
|
1595 | 1607 | for f, args, msg in tocomplete: |
|
1596 | 1608 | repo.ui.debug(" %s: %s -> m (merge)\n" % (f, msg)) |
|
1597 | 1609 | z += 1 |
|
1598 | 1610 | progress(_updating, z, item=f, total=numupdates, unit=_files) |
|
1599 | 1611 | ms.resolve(f, wctx) |
|
1600 | 1612 | |
|
1601 | 1613 | finally: |
|
1602 | 1614 | ms.commit() |
|
1603 | 1615 | |
|
1604 | 1616 | unresolved = ms.unresolvedcount() |
|
1605 | 1617 | |
|
1606 | 1618 | if usemergedriver and not unresolved and ms.mdstate() != 's': |
|
1607 | 1619 | if not driverconclude(repo, ms, wctx, labels=labels): |
|
1608 | 1620 | # XXX setting unresolved to at least 1 is a hack to make sure we |
|
1609 | 1621 | # error out |
|
1610 | 1622 | unresolved = max(unresolved, 1) |
|
1611 | 1623 | |
|
1612 | 1624 | ms.commit() |
|
1613 | 1625 | |
|
1614 | 1626 | msupdated, msmerged, msremoved = ms.counts() |
|
1615 | 1627 | updated += msupdated |
|
1616 | 1628 | merged += msmerged |
|
1617 | 1629 | removed += msremoved |
|
1618 | 1630 | |
|
1619 | 1631 | extraactions = ms.actions() |
|
1620 | 1632 | if extraactions: |
|
1621 | 1633 | mfiles = set(a[0] for a in actions['m']) |
|
1622 | 1634 | for k, acts in extraactions.iteritems(): |
|
1623 | 1635 | actions[k].extend(acts) |
|
1624 | 1636 | # Remove these files from actions['m'] as well. This is important |
|
1625 | 1637 | # because in recordupdates, files in actions['m'] are processed |
|
1626 | 1638 | # after files in other actions, and the merge driver might add |
|
1627 | 1639 | # files to those actions via extraactions above. This can lead to a |
|
1628 | 1640 | # file being recorded twice, with poor results. This is especially |
|
1629 | 1641 | # problematic for actions['r'] (currently only possible with the |
|
1630 | 1642 | # merge driver in the initial merge process; interrupted merges |
|
1631 | 1643 | # don't go through this flow). |
|
1632 | 1644 | # |
|
1633 | 1645 | # The real fix here is to have indexes by both file and action so |
|
1634 | 1646 | # that when the action for a file is changed it is automatically |
|
1635 | 1647 | # reflected in the other action lists. But that involves a more |
|
1636 | 1648 | # complex data structure, so this will do for now. |
|
1637 | 1649 | # |
|
1638 | 1650 | # We don't need to do the same operation for 'dc' and 'cd' because |
|
1639 | 1651 | # those lists aren't consulted again. |
|
1640 | 1652 | mfiles.difference_update(a[0] for a in acts) |
|
1641 | 1653 | |
|
1642 | 1654 | actions['m'] = [a for a in actions['m'] if a[0] in mfiles] |
|
1643 | 1655 | |
|
1644 | 1656 | progress(_updating, None, total=numupdates, unit=_files) |
|
1645 | 1657 | |
|
1646 | 1658 | return updated, merged, removed, unresolved |
|
1647 | 1659 | |
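The tuple returned just above carries the same (updated, merged, removed, unresolved) counters that update() later hands back to its callers. A minimal sketch of the shape only, with made-up numbers; the wording mirrors the summary Mercurial prints after an update, but treat it as an illustration rather than the actual reporting code:

stats = (3, 1, 0, 2)  # hypothetical (updated, merged, removed, unresolved)
updated, merged, removed, unresolved = stats
print("%d files updated, %d files merged, %d files removed, "
      "%d files unresolved" % stats)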
|
1648 | 1660 | def recordupdates(repo, actions, branchmerge): |
|
1649 | 1661 | "record merge actions to the dirstate" |
|
1650 | 1662 | # remove (must come first) |
|
1651 | 1663 | for f, args, msg in actions.get('r', []): |
|
1652 | 1664 | if branchmerge: |
|
1653 | 1665 | repo.dirstate.remove(f) |
|
1654 | 1666 | else: |
|
1655 | 1667 | repo.dirstate.drop(f) |
|
1656 | 1668 | |
|
1657 | 1669 | # forget (must come first) |
|
1658 | 1670 | for f, args, msg in actions.get('f', []): |
|
1659 | 1671 | repo.dirstate.drop(f) |
|
1660 | 1672 | |
|
1661 | 1673 | # resolve path conflicts |
|
1662 | 1674 | for f, args, msg in actions.get('pr', []): |
|
1663 | 1675 | f0, = args |
|
1664 | 1676 | origf0 = repo.dirstate.copied(f0) or f0 |
|
1665 | 1677 | repo.dirstate.add(f) |
|
1666 | 1678 | repo.dirstate.copy(origf0, f) |
|
1667 | 1679 | if f0 == origf0: |
|
1668 | 1680 | repo.dirstate.remove(f0) |
|
1669 | 1681 | else: |
|
1670 | 1682 | repo.dirstate.drop(f0) |
|
1671 | 1683 | |
|
1672 | 1684 | # re-add |
|
1673 | 1685 | for f, args, msg in actions.get('a', []): |
|
1674 | 1686 | repo.dirstate.add(f) |
|
1675 | 1687 | |
|
1676 | 1688 | # re-add/mark as modified |
|
1677 | 1689 | for f, args, msg in actions.get('am', []): |
|
1678 | 1690 | if branchmerge: |
|
1679 | 1691 | repo.dirstate.normallookup(f) |
|
1680 | 1692 | else: |
|
1681 | 1693 | repo.dirstate.add(f) |
|
1682 | 1694 | |
|
1683 | 1695 | # exec change |
|
1684 | 1696 | for f, args, msg in actions.get('e', []): |
|
1685 | 1697 | repo.dirstate.normallookup(f) |
|
1686 | 1698 | |
|
1687 | 1699 | # keep |
|
1688 | 1700 | for f, args, msg in actions.get('k', []): |
|
1689 | 1701 | pass |
|
1690 | 1702 | |
|
1691 | 1703 | # get |
|
1692 | 1704 | for f, args, msg in actions.get('g', []): |
|
1693 | 1705 | if branchmerge: |
|
1694 | 1706 | repo.dirstate.otherparent(f) |
|
1695 | 1707 | else: |
|
1696 | 1708 | repo.dirstate.normal(f) |
|
1697 | 1709 | |
|
1698 | 1710 | # merge |
|
1699 | 1711 | for f, args, msg in actions.get('m', []): |
|
1700 | 1712 | f1, f2, fa, move, anc = args |
|
1701 | 1713 | if branchmerge: |
|
1702 | 1714 | # We've done a branch merge, mark this file as merged |
|
1703 | 1715 | # so that we properly record the merger later |
|
1704 | 1716 | repo.dirstate.merge(f) |
|
1705 | 1717 | if f1 != f2: # copy/rename |
|
1706 | 1718 | if move: |
|
1707 | 1719 | repo.dirstate.remove(f1) |
|
1708 | 1720 | if f1 != f: |
|
1709 | 1721 | repo.dirstate.copy(f1, f) |
|
1710 | 1722 | else: |
|
1711 | 1723 | repo.dirstate.copy(f2, f) |
|
1712 | 1724 | else: |
|
1713 | 1725 | # We've update-merged a locally modified file, so |
|
1714 | 1726 | # we set the dirstate to emulate a normal checkout |
|
1715 | 1727 | # of that file some time in the past. Thus our |
|
1716 | 1728 | # merge will appear as a normal local file |
|
1717 | 1729 | # modification. |
|
1718 | 1730 | if f2 == f: # file not locally copied/moved |
|
1719 | 1731 | repo.dirstate.normallookup(f) |
|
1720 | 1732 | if move: |
|
1721 | 1733 | repo.dirstate.drop(f1) |
|
1722 | 1734 | |
|
1723 | 1735 | # directory rename, move local |
|
1724 | 1736 | for f, args, msg in actions.get('dm', []): |
|
1725 | 1737 | f0, flag = args |
|
1726 | 1738 | if branchmerge: |
|
1727 | 1739 | repo.dirstate.add(f) |
|
1728 | 1740 | repo.dirstate.remove(f0) |
|
1729 | 1741 | repo.dirstate.copy(f0, f) |
|
1730 | 1742 | else: |
|
1731 | 1743 | repo.dirstate.normal(f) |
|
1732 | 1744 | repo.dirstate.drop(f0) |
|
1733 | 1745 | |
|
1734 | 1746 | # directory rename, get |
|
1735 | 1747 | for f, args, msg in actions.get('dg', []): |
|
1736 | 1748 | f0, flag = args |
|
1737 | 1749 | if branchmerge: |
|
1738 | 1750 | repo.dirstate.add(f) |
|
1739 | 1751 | repo.dirstate.copy(f0, f) |
|
1740 | 1752 | else: |
|
1741 | 1753 | repo.dirstate.normal(f) |
|
1742 | 1754 | |
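recordupdates() above is driven entirely by the ``actions`` mapping from action code to a list of ``(file, args, msg)`` tuples. A standalone sketch of that shape, with invented file names and arguments, only to show what the loops above iterate over:

# Invented example data; the action codes ('r' remove, 'g' get, 'm' merge)
# and tuple layouts mirror the loops in recordupdates() above.
actions = {
    'r': [('removed.txt', None, "other deleted")],
    'g': [('fetched.txt', ('', False), "remote created")],
    'm': [('both.txt',
           ('both.txt', 'both.txt', 'both.txt', False, None),
           "versions differ")],
}
for f, args, msg in actions.get('r', []):
    print("remove %s: %s" % (f, msg))
for f, args, msg in actions.get('m', []):
    f1, f2, fa, move, anc = args
    print("merge %s (base %s)" % (f, fa))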
|
1743 | 1755 | def update(repo, node, branchmerge, force, ancestor=None, |
|
1744 | 1756 | mergeancestor=False, labels=None, matcher=None, mergeforce=False, |
|
1745 | 1757 | updatecheck=None, wc=None): |
|
1746 | 1758 | """ |
|
1747 | 1759 | Perform a merge between the working directory and the given node |
|
1748 | 1760 | |
|
1749 | 1761 | node = the node to update to |
|
1750 | 1762 | branchmerge = whether to merge between branches |
|
1751 | 1763 | force = whether to force branch merging or file overwriting |
|
1752 | 1764 | matcher = a matcher to filter file lists (dirstate not updated) |
|
1753 | 1765 | mergeancestor = whether it is merging with an ancestor. If true, |
|
1754 | 1766 | we should accept the incoming changes for any prompts that occur. |
|
1755 | 1767 | If false, merging with an ancestor (fast-forward) is only allowed |
|
1756 | 1768 | between different named branches. This flag is used by the rebase extension
|
1757 | 1769 | as a temporary fix and should be avoided in general. |
|
1758 | 1770 | labels = labels to use for base, local and other |
|
1759 | 1771 | mergeforce = whether the merge was run with 'merge --force' (deprecated): if |
|
1760 | 1772 | this is True, then 'force' should be True as well. |
|
1761 | 1773 | |
|
1762 | 1774 | The table below shows all the behaviors of the update command given the |
|
1763 | 1775 | -c/--check and -C/--clean or no options, whether the working directory is |
|
1764 | 1776 | dirty, whether a revision is specified, and the relationship of the parent |
|
1765 | 1777 | rev to the target rev (linear or not). Match from top first. The -n |
|
1766 | 1778 | option doesn't exist on the command line, but represents the |
|
1767 | 1779 | experimental.updatecheck=noconflict option. |
|
1768 | 1780 | |
|
1769 | 1781 | This logic is tested by test-update-branches.t. |
|
1770 | 1782 | |
|
1771 | 1783 | -c -C -n -m dirty rev linear | result |
|
1772 | 1784 | y y * * * * * | (1) |
|
1773 | 1785 | y * y * * * * | (1) |
|
1774 | 1786 | y * * y * * * | (1) |
|
1775 | 1787 | * y y * * * * | (1) |
|
1776 | 1788 | * y * y * * * | (1) |
|
1777 | 1789 | * * y y * * * | (1) |
|
1778 | 1790 | * * * * * n n | x |
|
1779 | 1791 | * * * * n * * | ok |
|
1780 | 1792 | n n n n y * y | merge |
|
1781 | 1793 | n n n n y y n | (2) |
|
1782 | 1794 | n n n y y * * | merge |
|
1783 | 1795 | n n y n y * * | merge if no conflict |
|
1784 | 1796 | n y n n y * * | discard |
|
1785 | 1797 | y n n n y * * | (3) |
|
1786 | 1798 | |
|
1787 | 1799 | x = can't happen |
|
1788 | 1800 | * = don't-care |
|
1789 | 1801 | 1 = incompatible options (checked in commands.py) |
|
1790 | 1802 | 2 = abort: uncommitted changes (commit or update --clean to discard changes) |
|
1791 | 1803 | 3 = abort: uncommitted changes (checked in commands.py) |
|
1792 | 1804 | |
|
1793 | 1805 | The merge is performed inside ``wc``, a workingctx-like object. It defaults
|
1794 | 1806 | to repo[None] if None is passed. |
|
1795 | 1807 | |
|
1796 | 1808 | Return the same tuple as applyupdates(). |
|
1797 | 1809 | """ |
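The ``-n`` column in the table above is not a real command-line flag; per the docstring it stands for the ``experimental.updatecheck=noconflict`` setting. As a sketch, that behavior would be switched on with an hgrc entry along these lines (the section and option name come straight from the docstring, but treat the snippet as illustrative):

[experimental]
updatecheck = noconflict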
|
1798 | 1810 | # Avoid cycle. |
|
1799 | 1811 | from . import sparse |
|
1800 | 1812 | |
|
1801 | 1813 | # This function used to find the default destination if node was None, but |
|
1802 | 1814 | # that's now in destutil.py. |
|
1803 | 1815 | assert node is not None |
|
1804 | 1816 | if not branchmerge and not force: |
|
1805 | 1817 | # TODO: remove the default once all callers that pass branchmerge=False |
|
1806 | 1818 | # and force=False pass a value for updatecheck. We may want to allow |
|
1807 | 1819 | # updatecheck='abort' to better support some of these callers.
|
1808 | 1820 | if updatecheck is None: |
|
1809 | 1821 | updatecheck = 'linear' |
|
1810 | 1822 | assert updatecheck in ('none', 'linear', 'noconflict') |
|
1811 | 1823 | # If we're doing a partial update, we need to skip updating |
|
1812 | 1824 | # the dirstate, so make a note of any partial-ness to the |
|
1813 | 1825 | # update here. |
|
1814 | 1826 | if matcher is None or matcher.always(): |
|
1815 | 1827 | partial = False |
|
1816 | 1828 | else: |
|
1817 | 1829 | partial = True |
|
1818 | 1830 | with repo.wlock(): |
|
1819 | 1831 | if wc is None: |
|
1820 | 1832 | wc = repo[None] |
|
1821 | 1833 | pl = wc.parents() |
|
1822 | 1834 | p1 = pl[0] |
|
1823 | 1835 | pas = [None] |
|
1824 | 1836 | if ancestor is not None: |
|
1825 | 1837 | pas = [repo[ancestor]] |
|
1826 | 1838 | |
|
1827 | 1839 | overwrite = force and not branchmerge |
|
1828 | 1840 | |
|
1829 | 1841 | p2 = repo[node] |
|
1830 | 1842 | if pas[0] is None: |
|
1831 | 1843 | if repo.ui.configlist('merge', 'preferancestor') == ['*']: |
|
1832 | 1844 | cahs = repo.changelog.commonancestorsheads(p1.node(), p2.node()) |
|
1833 | 1845 | pas = [repo[anc] for anc in (sorted(cahs) or [nullid])] |
|
1834 | 1846 | else: |
|
1835 | 1847 | pas = [p1.ancestor(p2, warn=branchmerge)] |
|
1836 | 1848 | |
|
1837 | 1849 | fp1, fp2, xp1, xp2 = p1.node(), p2.node(), str(p1), str(p2) |
|
1838 | 1850 | |
|
1839 | 1851 | ### check phase |
|
1840 | 1852 | if not overwrite: |
|
1841 | 1853 | if len(pl) > 1: |
|
1842 | 1854 | raise error.Abort(_("outstanding uncommitted merge")) |
|
1843 | 1855 | ms = mergestate.read(repo) |
|
1844 | 1856 | if list(ms.unresolved()): |
|
1845 | 1857 | raise error.Abort(_("outstanding merge conflicts")) |
|
1846 | 1858 | if branchmerge: |
|
1847 | 1859 | if pas == [p2]: |
|
1848 | 1860 | raise error.Abort(_("merging with a working directory ancestor" |
|
1849 | 1861 | " has no effect")) |
|
1850 | 1862 | elif pas == [p1]: |
|
1851 | 1863 | if not mergeancestor and wc.branch() == p2.branch(): |
|
1852 | 1864 | raise error.Abort(_("nothing to merge"), |
|
1853 | 1865 | hint=_("use 'hg update' " |
|
1854 | 1866 | "or check 'hg heads'")) |
|
1855 | 1867 | if not force and (wc.files() or wc.deleted()): |
|
1856 | 1868 | raise error.Abort(_("uncommitted changes"), |
|
1857 | 1869 | hint=_("use 'hg status' to list changes")) |
|
1858 | 1870 | if not wc.isinmemory(): |
|
1859 | 1871 | for s in sorted(wc.substate): |
|
1860 | 1872 | wc.sub(s).bailifchanged() |
|
1861 | 1873 | |
|
1862 | 1874 | elif not overwrite: |
|
1863 | 1875 | if p1 == p2: # no-op update |
|
1864 | 1876 | # call the hooks and exit early |
|
1865 | 1877 | repo.hook('preupdate', throw=True, parent1=xp2, parent2='') |
|
1866 | 1878 | repo.hook('update', parent1=xp2, parent2='', error=0) |
|
1867 | 1879 | return 0, 0, 0, 0 |
|
1868 | 1880 | |
|
1869 | 1881 | if (updatecheck == 'linear' and |
|
1870 | 1882 | pas not in ([p1], [p2])): # nonlinear |
|
1871 | 1883 | dirty = wc.dirty(missing=True) |
|
1872 | 1884 | if dirty: |
|
1873 | 1885 | # The branching here is a bit awkward, but it keeps the number of

1874 | 1886 | # calls to obsutil.foreground to a minimum.
|
1875 | 1887 | foreground = obsutil.foreground(repo, [p1.node()]) |
|
1876 | 1888 | # note: the <node> variable contains a random identifier |
|
1877 | 1889 | if repo[node].node() in foreground: |
|
1878 | 1890 | pass # allow updating to successors |
|
1879 | 1891 | else: |
|
1880 | 1892 | msg = _("uncommitted changes") |
|
1881 | 1893 | hint = _("commit or update --clean to discard changes") |
|
1882 | 1894 | raise error.UpdateAbort(msg, hint=hint) |
|
1883 | 1895 | else: |
|
1884 | 1896 | # Allow jumping branches if clean and specific rev given |
|
1885 | 1897 | pass |
|
1886 | 1898 | |
|
1887 | 1899 | if overwrite: |
|
1888 | 1900 | pas = [wc] |
|
1889 | 1901 | elif not branchmerge: |
|
1890 | 1902 | pas = [p1] |
|
1891 | 1903 | |
|
1892 | 1904 | # deprecated config: merge.followcopies |
|
1893 | 1905 | followcopies = repo.ui.configbool('merge', 'followcopies') |
|
1894 | 1906 | if overwrite: |
|
1895 | 1907 | followcopies = False |
|
1896 | 1908 | elif not pas[0]: |
|
1897 | 1909 | followcopies = False |
|
1898 | 1910 | if not branchmerge and not wc.dirty(missing=True): |
|
1899 | 1911 | followcopies = False |
|
1900 | 1912 | |
|
1901 | 1913 | ### calculate phase |
|
1902 | 1914 | actionbyfile, diverge, renamedelete = calculateupdates( |
|
1903 | 1915 | repo, wc, p2, pas, branchmerge, force, mergeancestor, |
|
1904 | 1916 | followcopies, matcher=matcher, mergeforce=mergeforce) |
|
1905 | 1917 | |
|
1906 | 1918 | if updatecheck == 'noconflict': |
|
1907 | 1919 | for f, (m, args, msg) in actionbyfile.iteritems(): |
|
1908 | 1920 | if m not in ('g', 'k', 'e', 'r', 'pr'): |
|
1909 | 1921 | msg = _("conflicting changes") |
|
1910 | 1922 | hint = _("commit or update --clean to discard changes") |
|
1911 | 1923 | raise error.Abort(msg, hint=hint) |
|
1912 | 1924 | |
|
1913 | 1925 | # Prompt and create actions. Most of this is in the resolve phase |
|
1914 | 1926 | # already, but we can't handle .hgsubstate in filemerge or |
|
1915 | 1927 | # subrepoutil.submerge yet so we have to keep prompting for it. |
|
1916 | 1928 | if '.hgsubstate' in actionbyfile: |
|
1917 | 1929 | f = '.hgsubstate' |
|
1918 | 1930 | m, args, msg = actionbyfile[f] |
|
1919 | 1931 | prompts = filemerge.partextras(labels) |
|
1920 | 1932 | prompts['f'] = f |
|
1921 | 1933 | if m == 'cd': |
|
1922 | 1934 | if repo.ui.promptchoice( |
|
1923 | 1935 | _("local%(l)s changed %(f)s which other%(o)s deleted\n" |
|
1924 | 1936 | "use (c)hanged version or (d)elete?" |
|
1925 | 1937 | "$$ &Changed $$ &Delete") % prompts, 0): |
|
1926 | 1938 | actionbyfile[f] = ('r', None, "prompt delete") |
|
1927 | 1939 | elif f in p1: |
|
1928 | 1940 | actionbyfile[f] = ('am', None, "prompt keep") |
|
1929 | 1941 | else: |
|
1930 | 1942 | actionbyfile[f] = ('a', None, "prompt keep") |
|
1931 | 1943 | elif m == 'dc': |
|
1932 | 1944 | f1, f2, fa, move, anc = args |
|
1933 | 1945 | flags = p2[f2].flags() |
|
1934 | 1946 | if repo.ui.promptchoice( |
|
1935 | 1947 | _("other%(o)s changed %(f)s which local%(l)s deleted\n" |
|
1936 | 1948 | "use (c)hanged version or leave (d)eleted?" |
|
1937 | 1949 | "$$ &Changed $$ &Deleted") % prompts, 0) == 0: |
|
1938 | 1950 | actionbyfile[f] = ('g', (flags, False), "prompt recreating") |
|
1939 | 1951 | else: |
|
1940 | 1952 | del actionbyfile[f] |
|
1941 | 1953 | |
|
1942 | 1954 | # Convert to dictionary-of-lists format |
|
1943 | 1955 | actions = dict((m, []) |
|
1944 | 1956 | for m in 'a am f g cd dc r dm dg m e k p pr'.split()) |
|
1945 | 1957 | for f, (m, args, msg) in actionbyfile.iteritems(): |
|
1946 | 1958 | if m not in actions: |
|
1947 | 1959 | actions[m] = [] |
|
1948 | 1960 | actions[m].append((f, args, msg)) |
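The two lines above, together with the comprehension that seeds ``actions``, invert the per-file mapping into a per-action-code mapping. A self-contained sketch of the same transformation, with invented file names:

# Invented data; the transformation matches the loop above.
actionbyfile = {
    'a.txt': ('g', ('', False), "remote created"),
    'b.txt': ('r', None, "other deleted"),
    'c.txt': ('g', ('x', False), "remote is newer"),
}
actions = dict((m, []) for m in 'a am f g cd dc r dm dg m e k p pr'.split())
for f, (m, args, msg) in actionbyfile.items():
    actions.setdefault(m, []).append((f, args, msg))
print(sorted(f for f, _args, _msg in actions['g']))  # ['a.txt', 'c.txt']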
|
1949 | 1961 | |
|
1950 | 1962 | if not util.fscasesensitive(repo.path): |
|
1951 | 1963 | # check collision between files only in p2 for clean update |
|
1952 | 1964 | if (not branchmerge and |
|
1953 | 1965 | (force or not wc.dirty(missing=True, branch=False))): |
|
1954 | 1966 | _checkcollision(repo, p2.manifest(), None) |
|
1955 | 1967 | else: |
|
1956 | 1968 | _checkcollision(repo, wc.manifest(), actions) |
|
1957 | 1969 | |
|
1958 | 1970 | # divergent renames |
|
1959 | 1971 | for f, fl in sorted(diverge.iteritems()): |
|
1960 | 1972 | repo.ui.warn(_("note: possible conflict - %s was renamed " |
|
1961 | 1973 | "multiple times to:\n") % f) |
|
1962 | 1974 | for nf in fl: |
|
1963 | 1975 | repo.ui.warn(" %s\n" % nf) |
|
1964 | 1976 | |
|
1965 | 1977 | # rename and delete |
|
1966 | 1978 | for f, fl in sorted(renamedelete.iteritems()): |
|
1967 | 1979 | repo.ui.warn(_("note: possible conflict - %s was deleted " |
|
1968 | 1980 | "and renamed to:\n") % f) |
|
1969 | 1981 | for nf in fl: |
|
1970 | 1982 | repo.ui.warn(" %s\n" % nf) |
|
1971 | 1983 | |
|
1972 | 1984 | ### apply phase |
|
1973 | 1985 | if not branchmerge: # just jump to the new rev |
|
1974 | 1986 | fp1, fp2, xp1, xp2 = fp2, nullid, xp2, '' |
|
1975 | 1987 | if not partial and not wc.isinmemory(): |
|
1976 | 1988 | repo.hook('preupdate', throw=True, parent1=xp1, parent2=xp2) |
|
1977 | 1989 | # note that we're in the middle of an update |
|
1978 | 1990 | repo.vfs.write('updatestate', p2.hex()) |
|
1979 | 1991 | |
|
1980 | 1992 | # Advertise fsmonitor when its presence could be useful. |
|
1981 | 1993 | # |
|
1982 | 1994 | # We only advertise when performing an update from an empty working |
|
1983 | 1995 | # directory. This typically only occurs during initial clone. |
|
1984 | 1996 | # |
|
1985 | 1997 | # We give users a mechanism to disable the warning in case it is |
|
1986 | 1998 | # annoying. |
|
1987 | 1999 | # |
|
1988 | 2000 | # We only advertise on Linux and MacOS because that's where fsmonitor is
|
1989 | 2001 | # considered stable. |
|
1990 | 2002 | fsmonitorwarning = repo.ui.configbool('fsmonitor', 'warn_when_unused') |
|
1991 | 2003 | fsmonitorthreshold = repo.ui.configint('fsmonitor', |
|
1992 | 2004 | 'warn_update_file_count') |
|
1993 | 2005 | try: |
|
1994 | 2006 | # avoid cycle: extensions -> cmdutil -> merge |
|
1995 | 2007 | from . import extensions |
|
1996 | 2008 | extensions.find('fsmonitor') |
|
1997 | 2009 | fsmonitorenabled = repo.ui.config('fsmonitor', 'mode') != 'off' |
|
1998 | 2010 | # We intentionally don't look at whether fsmonitor has disabled |
|
1999 | 2011 | # itself because a) fsmonitor may have already printed a warning |
|
2000 | 2012 | # b) we only care about the config state here. |
|
2001 | 2013 | except KeyError: |
|
2002 | 2014 | fsmonitorenabled = False |
|
2003 | 2015 | |
|
2004 | 2016 | if (fsmonitorwarning |
|
2005 | 2017 | and not fsmonitorenabled |
|
2006 | 2018 | and p1.node() == nullid |
|
2007 | 2019 | and len(actions['g']) >= fsmonitorthreshold |
|
2008 | 2020 | and pycompat.sysplatform.startswith(('linux', 'darwin'))): |
|
2009 | 2021 | repo.ui.warn( |
|
2010 | 2022 | _('(warning: large working directory being used without ' |
|
2011 | 2023 | 'fsmonitor enabled; enable fsmonitor to improve performance; ' |
|
2012 | 2024 | 'see "hg help -e fsmonitor")\n')) |
|
2013 | 2025 | |
|
2014 | 2026 | stats = applyupdates(repo, actions, wc, p2, overwrite, labels=labels) |
|
2015 | 2027 | |
|
2016 | 2028 | if not partial and not wc.isinmemory(): |
|
2017 | 2029 | with repo.dirstate.parentchange(): |
|
2018 | 2030 | repo.setparents(fp1, fp2) |
|
2019 | 2031 | recordupdates(repo, actions, branchmerge) |
|
2020 | 2032 | # update completed, clear state |
|
2021 | 2033 | util.unlink(repo.vfs.join('updatestate')) |
|
2022 | 2034 | |
|
2023 | 2035 | if not branchmerge: |
|
2024 | 2036 | repo.dirstate.setbranch(p2.branch()) |
|
2025 | 2037 | |
|
2026 | 2038 | # If we're updating to a location, clean up any stale temporary includes |
|
2027 | 2039 | # (ex: this happens during hg rebase --abort). |
|
2028 | 2040 | if not branchmerge: |
|
2029 | 2041 | sparse.prunetemporaryincludes(repo) |
|
2030 | 2042 | |
|
2031 | 2043 | if not partial: |
|
2032 | 2044 | repo.hook('update', parent1=xp1, parent2=xp2, error=stats[3]) |
|
2033 | 2045 | return stats |
|
2034 | 2046 | |
|
2035 | 2047 | def graft(repo, ctx, pctx, labels, keepparent=False): |
|
2036 | 2048 | """Do a graft-like merge. |
|
2037 | 2049 | |
|
2038 | 2050 | This is a merge where the merge ancestor is chosen such that one |
|
2039 | 2051 | or more changesets are grafted onto the current changeset. In |
|
2040 | 2052 | addition to the merge, this fixes up the dirstate to include only |
|
2041 | 2053 | a single parent (if keepparent is False) and tries to duplicate any |
|
2042 | 2054 | renames/copies appropriately. |
|
2043 | 2055 | |
|
2044 | 2056 | ctx - changeset to rebase |
|
2045 | 2057 | pctx - merge base, usually ctx.p1() |
|
2046 | 2058 | labels - merge labels, e.g. ['local', 'graft']
|
2047 | 2059 | keepparent - keep second parent if any |
|
2048 | 2060 | |
|
2049 | 2061 | """ |
|
2050 | 2062 | # If we're grafting a descendant onto an ancestor, be sure to pass |
|
2051 | 2063 | # mergeancestor=True to update. This does two things: 1) allows the merge if |
|
2052 | 2064 | # the destination is the same as the parent of the ctx (so we can use graft |
|
2053 | 2065 | # to copy commits), and 2) informs update that the incoming changes are |
|
2054 | 2066 | # newer than the destination so it doesn't prompt about "remote changed foo |
|
2055 | 2067 | # which local deleted". |
|
2056 | 2068 | mergeancestor = repo.changelog.isancestor(repo['.'].node(), ctx.node()) |
|
2057 | 2069 | |
|
2058 | 2070 | stats = update(repo, ctx.node(), True, True, pctx.node(), |
|
2059 | 2071 | mergeancestor=mergeancestor, labels=labels) |
|
2060 | 2072 | |
|
2061 | 2073 | pother = nullid |
|
2062 | 2074 | parents = ctx.parents() |
|
2063 | 2075 | if keepparent and len(parents) == 2 and pctx in parents: |
|
2064 | 2076 | parents.remove(pctx) |
|
2065 | 2077 | pother = parents[0].node() |
|
2066 | 2078 | |
|
2067 | 2079 | with repo.dirstate.parentchange(): |
|
2068 | 2080 | repo.setparents(repo['.'].node(), pother) |
|
2069 | 2081 | repo.dirstate.write(repo.currenttransaction()) |
|
2070 | 2082 | # fix up dirstate for copies and renames |
|
2071 | 2083 | copies.duplicatecopies(repo, repo[None], ctx.rev(), pctx.rev()) |
|
2072 | 2084 | return stats |
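Finally, a small sketch of how graft() might be invoked, following its own docstring: ``ctx`` is the changeset to graft and ``pctx`` is usually ``ctx.p1()``. The helper name and the revision argument are invented; a real caller (such as the graft command) derives them from command-line parsing:

def graft_one(repo, rev):
    # illustrative wrapper; 'repo' must be a real Mercurial repository object
    from mercurial import merge as mergemod
    ctx = repo[rev]           # changeset to graft
    pctx = ctx.p1()           # merge base, per the docstring above
    return mergemod.graft(repo, ctx, pctx, labels=['local', 'graft'])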