refactor: prefer checks against nullrev over nullid...
Joerg Sonnenberger
r47601:728d89f6 default
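
Background, not part of the commit itself: in mercurial/node.py, nullid is the 20-byte zero hash (b"\0" * 20) and nullrev is the integer revision number -1; both name the empty "null" revision. Checking the integer revision never touches the node hash, which also suits code that should not assume a particular hash width. A minimal sketch of the before/after pattern, using a hypothetical FakeCtx stand-in for a real changectx:

# Minimal sketch. The constant values match mercurial.node; FakeCtx is
# a hypothetical stand-in for a changectx, not Mercurial API.
nullid = b"\0" * 20  # node hash of the null revision
nullrev = -1  # revision number of the null revision


class FakeCtx(object):
    def __init__(self, node, rev):
        self._node, self._rev = node, rev

    def node(self):
        return self._node

    def rev(self):
        return self._rev


ctx = FakeCtx(nullid, nullrev)
assert ctx.node() == nullid  # old style: compare 20-byte node hashes
assert ctx.rev() == nullrev  # new style: compare integer revisions
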
@@ -1,803 +1,803 @@
# extdiff.py - external diff program support for mercurial
#
# Copyright 2006 Vadim Gelfer <vadim.gelfer@gmail.com>
#
# This software may be used and distributed according to the terms of the
# GNU General Public License version 2 or any later version.

'''command to allow external programs to compare revisions

The extdiff Mercurial extension allows you to use external programs
to compare revisions, or revision with working directory. The external
diff programs are called with a configurable set of options and two
non-option arguments: paths to directories containing snapshots of
files to compare.

If there is more than one file being compared and the "child" revision
is the working directory, any modifications made in the external diff
program will be copied back to the working directory from the temporary
directory.

The extdiff extension also allows you to configure new diff commands, so
you do not need to type :hg:`extdiff -p kdiff3` always. ::

  [extdiff]
  # add new command that runs GNU diff(1) in 'context diff' mode
  cdiff = gdiff -Nprc5
  ## or the old way:
  #cmd.cdiff = gdiff
  #opts.cdiff = -Nprc5

  # add new command called meld, runs meld (no need to name twice). If
  # the meld executable is not available, the meld tool in [merge-tools]
  # will be used, if available
  meld =

  # add new command called vimdiff, runs gvimdiff with DirDiff plugin
  # (see http://www.vim.org/scripts/script.php?script_id=102) Non
  # English user, be sure to put "let g:DirDiffDynamicDiffText = 1" in
  # your .vimrc
  vimdiff = gvim -f "+next" \\
            "+execute 'DirDiff' fnameescape(argv(0)) fnameescape(argv(1))"

Tool arguments can include variables that are expanded at runtime::

  $parent1, $plabel1 - filename, descriptive label of first parent
  $child,   $clabel  - filename, descriptive label of child revision
  $parent2, $plabel2 - filename, descriptive label of second parent
  $root              - repository root
  $parent is an alias for $parent1.

The extdiff extension will look in your [diff-tools] and [merge-tools]
sections for diff tool arguments, when none are specified in [extdiff].

::

  [extdiff]
  kdiff3 =

  [diff-tools]
  kdiff3.diffargs=--L1 '$plabel1' --L2 '$clabel' $parent $child

If a program has a graphical interface, it might be interesting to tell
Mercurial about it. It will prevent the program from being mistakenly
used in a terminal-only environment (such as an SSH terminal session),
and will make :hg:`extdiff --per-file` open multiple file diffs at once
instead of one by one (if you still want to open file diffs one by one,
you can use the --confirm option).

Declaring that a tool has a graphical interface can be done with the
``gui`` flag next to where ``diffargs`` are specified:

::

  [diff-tools]
  kdiff3.diffargs=--L1 '$plabel1' --L2 '$clabel' $parent $child
  kdiff3.gui = true

You can use -I/-X and list of file or directory names like normal
:hg:`diff` command. The extdiff extension makes snapshots of only
needed files, so running the external diff program will actually be
pretty fast (at least faster than having to compare the entire tree).
'''

from __future__ import absolute_import

import os
import re
import shutil
import stat
import subprocess

from mercurial.i18n import _
from mercurial.node import (
-    nullid,
+    nullrev,
    short,
)
from mercurial import (
    archival,
    cmdutil,
    encoding,
    error,
    filemerge,
    formatter,
    pycompat,
    registrar,
    scmutil,
    util,
)
from mercurial.utils import (
    procutil,
    stringutil,
)

cmdtable = {}
command = registrar.command(cmdtable)

configtable = {}
configitem = registrar.configitem(configtable)

configitem(
    b'extdiff',
    br'opts\..*',
    default=b'',
    generic=True,
)

configitem(
    b'extdiff',
    br'gui\..*',
    generic=True,
)

configitem(
    b'diff-tools',
    br'.*\.diffargs$',
    default=None,
    generic=True,
)

configitem(
    b'diff-tools',
    br'.*\.gui$',
    generic=True,
)

# Note for extension authors: ONLY specify testedwith = 'ships-with-hg-core' for
# extensions which SHIP WITH MERCURIAL. Non-mainline extensions should
# be specifying the version(s) of Mercurial they are tested with, or
# leave the attribute unspecified.
testedwith = b'ships-with-hg-core'


def snapshot(ui, repo, files, node, tmproot, listsubrepos):
    """snapshot files as of some revision
    if not using snapshot, -I/-X does not work and recursive diff
    in tools like kdiff3 and meld displays too many files."""
    dirname = os.path.basename(repo.root)
    if dirname == b"":
        dirname = b"root"
    if node is not None:
        dirname = b'%s.%s' % (dirname, short(node))
    base = os.path.join(tmproot, dirname)
    os.mkdir(base)
    fnsandstat = []

    if node is not None:
        ui.note(
            _(b'making snapshot of %d files from rev %s\n')
            % (len(files), short(node))
        )
    else:
        ui.note(
            _(b'making snapshot of %d files from working directory\n')
            % (len(files))
        )

    if files:
        repo.ui.setconfig(b"ui", b"archivemeta", False)

        archival.archive(
            repo,
            base,
            node,
            b'files',
            match=scmutil.matchfiles(repo, files),
            subrepos=listsubrepos,
        )

        for fn in sorted(files):
            wfn = util.pconvert(fn)
            ui.note(b' %s\n' % wfn)

            if node is None:
                dest = os.path.join(base, wfn)

                fnsandstat.append((dest, repo.wjoin(fn), os.lstat(dest)))
    return dirname, fnsandstat


def formatcmdline(
    cmdline,
    repo_root,
    do3way,
    parent1,
    plabel1,
    parent2,
    plabel2,
    child,
    clabel,
):
    # Function to quote file/dir names in the argument string.
    # When not operating in 3-way mode, an empty string is
    # returned for parent2
    replace = {
        b'parent': parent1,
        b'parent1': parent1,
        b'parent2': parent2,
        b'plabel1': plabel1,
        b'plabel2': plabel2,
        b'child': child,
        b'clabel': clabel,
        b'root': repo_root,
    }

    def quote(match):
        pre = match.group(2)
        key = match.group(3)
        if not do3way and key == b'parent2':
            return pre
        return pre + procutil.shellquote(replace[key])

    # Match parent2 first, so 'parent1?' will match both parent1 and parent
    regex = (
        br'''(['"]?)([^\s'"$]*)'''
        br'\$(parent2|parent1?|child|plabel1|plabel2|clabel|root)\1'
    )
    if not do3way and not re.search(regex, cmdline):
        cmdline += b' $parent1 $child'
    return re.sub(regex, quote, cmdline)


def _systembackground(cmd, environ=None, cwd=None):
    """like 'procutil.system', but returns the Popen object directly
    so we don't have to wait on it.
    """
    env = procutil.shellenviron(environ)
    proc = subprocess.Popen(
        procutil.tonativestr(cmd),
        shell=True,
        close_fds=procutil.closefds,
        env=procutil.tonativeenv(env),
        cwd=pycompat.rapply(procutil.tonativestr, cwd),
    )
    return proc


def _runperfilediff(
    cmdline,
    repo_root,
    ui,
    guitool,
    do3way,
    confirm,
    commonfiles,
    tmproot,
    dir1a,
    dir1b,
    dir2,
    rev1a,
    rev1b,
    rev2,
):
    # Note that we need to sort the list of files because it was
    # built in an "unstable" way and it's annoying to get files in a
    # random order, especially when "confirm" mode is enabled.
    waitprocs = []
    totalfiles = len(commonfiles)
    for idx, commonfile in enumerate(sorted(commonfiles)):
        path1a = os.path.join(dir1a, commonfile)
        label1a = commonfile + rev1a
        if not os.path.isfile(path1a):
            path1a = pycompat.osdevnull

        path1b = b''
        label1b = b''
        if do3way:
            path1b = os.path.join(dir1b, commonfile)
            label1b = commonfile + rev1b
            if not os.path.isfile(path1b):
                path1b = pycompat.osdevnull

        path2 = os.path.join(dir2, commonfile)
        label2 = commonfile + rev2

        if confirm:
            # Prompt before showing this diff
            difffiles = _(b'diff %s (%d of %d)') % (
                commonfile,
                idx + 1,
                totalfiles,
            )
            responses = _(
                b'[Yns?]'
                b'$$ &Yes, show diff'
                b'$$ &No, skip this diff'
                b'$$ &Skip remaining diffs'
                b'$$ &? (display help)'
            )
            r = ui.promptchoice(b'%s %s' % (difffiles, responses))
            if r == 3:  # ?
                while r == 3:
                    for c, t in ui.extractchoices(responses)[1]:
                        ui.write(b'%s - %s\n' % (c, encoding.lower(t)))
                    r = ui.promptchoice(b'%s %s' % (difffiles, responses))
            if r == 0:  # yes
                pass
            elif r == 1:  # no
                continue
            elif r == 2:  # skip
                break

        curcmdline = formatcmdline(
            cmdline,
            repo_root,
            do3way=do3way,
            parent1=path1a,
            plabel1=label1a,
            parent2=path1b,
            plabel2=label1b,
            child=path2,
            clabel=label2,
        )

        if confirm or not guitool:
            # Run the comparison program and wait for it to exit
            # before we show the next file.
            # This is because either we need to wait for confirmation
            # from the user between each invocation, or because, as far
            # as we know, the tool doesn't have a GUI, in which case
            # we can't run multiple CLI programs at the same time.
            ui.debug(
                b'running %r in %s\n' % (pycompat.bytestr(curcmdline), tmproot)
            )
            ui.system(curcmdline, cwd=tmproot, blockedtag=b'extdiff')
        else:
            # Run the comparison program but don't wait, as we're
            # going to rapid-fire each file diff and then wait on
            # the whole group.
            ui.debug(
                b'running %r in %s (backgrounded)\n'
                % (pycompat.bytestr(curcmdline), tmproot)
            )
            proc = _systembackground(curcmdline, cwd=tmproot)
            waitprocs.append(proc)

    if waitprocs:
        with ui.timeblockedsection(b'extdiff'):
            for proc in waitprocs:
                proc.wait()


def diffpatch(ui, repo, node1, node2, tmproot, matcher, cmdline):
    template = b'hg-%h.patch'
    # write patches to temporary files
    with formatter.nullformatter(ui, b'extdiff', {}) as fm:
        cmdutil.export(
            repo,
            [repo[node1].rev(), repo[node2].rev()],
            fm,
            fntemplate=repo.vfs.reljoin(tmproot, template),
            match=matcher,
        )
    label1 = cmdutil.makefilename(repo[node1], template)
    label2 = cmdutil.makefilename(repo[node2], template)
    file1 = repo.vfs.reljoin(tmproot, label1)
    file2 = repo.vfs.reljoin(tmproot, label2)
    cmdline = formatcmdline(
        cmdline,
        repo.root,
        # no 3way while comparing patches
        do3way=False,
        parent1=file1,
        plabel1=label1,
        # while comparing patches, there is no second parent
        parent2=None,
        plabel2=None,
        child=file2,
        clabel=label2,
    )
    ui.debug(b'running %r in %s\n' % (pycompat.bytestr(cmdline), tmproot))
    ui.system(cmdline, cwd=tmproot, blockedtag=b'extdiff')
    return 1


def diffrevs(
    ui,
    repo,
    ctx1a,
    ctx1b,
    ctx2,
    matcher,
    tmproot,
    cmdline,
    do3way,
    guitool,
    opts,
):

    subrepos = opts.get(b'subrepos')

    # calculate list of files changed between both revs
    st = ctx1a.status(ctx2, matcher, listsubrepos=subrepos)
    mod_a, add_a, rem_a = set(st.modified), set(st.added), set(st.removed)
    if do3way:
        stb = ctx1b.status(ctx2, matcher, listsubrepos=subrepos)
        mod_b, add_b, rem_b = (
            set(stb.modified),
            set(stb.added),
            set(stb.removed),
        )
    else:
        mod_b, add_b, rem_b = set(), set(), set()
    modadd = mod_a | add_a | mod_b | add_b
    common = modadd | rem_a | rem_b
    if not common:
        return 0

    # Always make a copy of ctx1a (and ctx1b, if applicable)
    # dir1a should contain files which are:
    #   * modified or removed from ctx1a to ctx2
    #   * modified or added from ctx1b to ctx2
    #     (except file added from ctx1a to ctx2 as they were not present in
    #      ctx1a)
    dir1a_files = mod_a | rem_a | ((mod_b | add_b) - add_a)
    dir1a = snapshot(ui, repo, dir1a_files, ctx1a.node(), tmproot, subrepos)[0]
    rev1a = b'' if ctx1a.rev() is None else b'@%d' % ctx1a.rev()
    if do3way:
        # file calculation criteria same as dir1a
        dir1b_files = mod_b | rem_b | ((mod_a | add_a) - add_b)
        dir1b = snapshot(
            ui, repo, dir1b_files, ctx1b.node(), tmproot, subrepos
        )[0]
        rev1b = b'@%d' % ctx1b.rev()
    else:
        dir1b = None
        rev1b = b''

    fnsandstat = []

    # If ctx2 is not the wc or there is >1 change, copy it
    dir2root = b''
    rev2 = b''
    if ctx2.node() is not None:
        dir2 = snapshot(ui, repo, modadd, ctx2.node(), tmproot, subrepos)[0]
        rev2 = b'@%d' % ctx2.rev()
    elif len(common) > 1:
        # we only actually need to get the files to copy back to
        # the working dir in this case (because the other cases
        # are: diffing 2 revisions or single file -- in which case
        # the file is already directly passed to the diff tool).
        dir2, fnsandstat = snapshot(ui, repo, modadd, None, tmproot, subrepos)
    else:
        # This lets the diff tool open the changed file directly
        dir2 = b''
        dir2root = repo.root

    label1a = rev1a
    label1b = rev1b
    label2 = rev2

    if not opts.get(b'per_file'):
        # If only one change, diff the files instead of the directories
        # Handle bogus modifies correctly by checking if the files exist
        if len(common) == 1:
            common_file = util.localpath(common.pop())
            dir1a = os.path.join(tmproot, dir1a, common_file)
            label1a = common_file + rev1a
            if not os.path.isfile(dir1a):
                dir1a = pycompat.osdevnull
            if do3way:
                dir1b = os.path.join(tmproot, dir1b, common_file)
                label1b = common_file + rev1b
                if not os.path.isfile(dir1b):
                    dir1b = pycompat.osdevnull
            dir2 = os.path.join(dir2root, dir2, common_file)
            label2 = common_file + rev2

        # Run the external tool on the 2 temp directories or the patches
        cmdline = formatcmdline(
            cmdline,
            repo.root,
            do3way=do3way,
            parent1=dir1a,
            plabel1=label1a,
            parent2=dir1b,
            plabel2=label1b,
            child=dir2,
            clabel=label2,
        )
        ui.debug(b'running %r in %s\n' % (pycompat.bytestr(cmdline), tmproot))
        ui.system(cmdline, cwd=tmproot, blockedtag=b'extdiff')
    else:
        # Run the external tool once for each pair of files
        _runperfilediff(
            cmdline,
            repo.root,
            ui,
            guitool=guitool,
            do3way=do3way,
            confirm=opts.get(b'confirm'),
            commonfiles=common,
            tmproot=tmproot,
            dir1a=os.path.join(tmproot, dir1a),
            dir1b=os.path.join(tmproot, dir1b) if do3way else None,
            dir2=os.path.join(dir2root, dir2),
            rev1a=rev1a,
            rev1b=rev1b,
            rev2=rev2,
        )

    for copy_fn, working_fn, st in fnsandstat:
        cpstat = os.lstat(copy_fn)
        # Some tools copy the file and attributes, so mtime may not detect
        # all changes. A size check will detect more cases, but not all.
        # The only certain way to detect every case is to diff all files,
        # which could be expensive.
        # copyfile() carries over the permission, so the mode check could
        # be in an 'elif' branch, but for the case where the file has
        # changed without affecting mtime or size.
        if (
            cpstat[stat.ST_MTIME] != st[stat.ST_MTIME]
            or cpstat.st_size != st.st_size
            or (cpstat.st_mode & 0o100) != (st.st_mode & 0o100)
        ):
            ui.debug(
                b'file changed while diffing. '
                b'Overwriting: %s (src: %s)\n' % (working_fn, copy_fn)
            )
            util.copyfile(copy_fn, working_fn)

    return 1


def dodiff(ui, repo, cmdline, pats, opts, guitool=False):
    """Do the actual diff:

    - copy to a temp structure if diffing 2 internal revisions
    - copy to a temp structure if diffing working revision with
      another one and more than 1 file is changed
    - just invoke the diff for a single file in the working dir
    """

    cmdutil.check_at_most_one_arg(opts, b'rev', b'change')
    revs = opts.get(b'rev')
    from_rev = opts.get(b'from')
    to_rev = opts.get(b'to')
    change = opts.get(b'change')
    do3way = b'$parent2' in cmdline

    if change:
        ctx2 = scmutil.revsingle(repo, change, None)
        ctx1a, ctx1b = ctx2.p1(), ctx2.p2()
    elif from_rev or to_rev:
        repo = scmutil.unhidehashlikerevs(
            repo, [from_rev] + [to_rev], b'nowarn'
        )
        ctx1a = scmutil.revsingle(repo, from_rev, None)
-        ctx1b = repo[nullid]
+        ctx1b = repo[nullrev]
        ctx2 = scmutil.revsingle(repo, to_rev, None)
    else:
        ctx1a, ctx2 = scmutil.revpair(repo, revs)
        if not revs:
            ctx1b = repo[None].p2()
        else:
-            ctx1b = repo[nullid]
+            ctx1b = repo[nullrev]

    # Disable 3-way merge if there is only one parent
    if do3way:
-        if ctx1b.node() == nullid:
+        if ctx1b.rev() == nullrev:
            do3way = False

    matcher = scmutil.match(ctx2, pats, opts)

    if opts.get(b'patch'):
        if opts.get(b'subrepos'):
            raise error.Abort(_(b'--patch cannot be used with --subrepos'))
        if opts.get(b'per_file'):
            raise error.Abort(_(b'--patch cannot be used with --per-file'))
        if ctx2.node() is None:
            raise error.Abort(_(b'--patch requires two revisions'))

    tmproot = pycompat.mkdtemp(prefix=b'extdiff.')
    try:
        if opts.get(b'patch'):
            return diffpatch(
                ui, repo, ctx1a.node(), ctx2.node(), tmproot, matcher, cmdline
            )

        return diffrevs(
            ui,
            repo,
            ctx1a,
            ctx1b,
            ctx2,
            matcher,
            tmproot,
            cmdline,
            do3way,
            guitool,
            opts,
        )

    finally:
        ui.note(_(b'cleaning up temp directory\n'))
        shutil.rmtree(tmproot)


extdiffopts = (
    [
        (
            b'o',
            b'option',
            [],
            _(b'pass option to comparison program'),
            _(b'OPT'),
        ),
        (b'r', b'rev', [], _(b'revision (DEPRECATED)'), _(b'REV')),
        (b'', b'from', b'', _(b'revision to diff from'), _(b'REV1')),
        (b'', b'to', b'', _(b'revision to diff to'), _(b'REV2')),
        (b'c', b'change', b'', _(b'change made by revision'), _(b'REV')),
        (
            b'',
            b'per-file',
            False,
            _(b'compare each file instead of revision snapshots'),
        ),
        (
            b'',
            b'confirm',
            False,
            _(b'prompt user before each external program invocation'),
        ),
        (b'', b'patch', None, _(b'compare patches for two revisions')),
    ]
    + cmdutil.walkopts
    + cmdutil.subrepoopts
)


@command(
    b'extdiff',
    [
        (b'p', b'program', b'', _(b'comparison program to run'), _(b'CMD')),
    ]
    + extdiffopts,
    _(b'hg extdiff [OPT]... [FILE]...'),
    helpcategory=command.CATEGORY_FILE_CONTENTS,
    inferrepo=True,
)
def extdiff(ui, repo, *pats, **opts):
    """use external program to diff repository (or selected files)

    Show differences between revisions for the specified files, using
    an external program. The default program used is diff, with
    default options "-Npru".

    To select a different program, use the -p/--program option. The
    program will be passed the names of two directories to compare,
    unless the --per-file option is specified (see below). To pass
    additional options to the program, use -o/--option. These will be
    passed before the names of the directories or files to compare.

    The --from, --to, and --change options work the same way they do for
    :hg:`diff`.

    The --per-file option runs the external program repeatedly on each
    file to diff, instead of once on two directories. By default,
    this happens one by one, where the next file diff is open in the
    external program only once the previous external program (for the
    previous file diff) has exited. If the external program has a
    graphical interface, it can open all the file diffs at once instead
    of one by one. See :hg:`help -e extdiff` for information about how
    to tell Mercurial that a given program has a graphical interface.

    The --confirm option will prompt the user before each invocation of
    the external program. It is ignored if --per-file isn't specified.
    """
    opts = pycompat.byteskwargs(opts)
    program = opts.get(b'program')
    option = opts.get(b'option')
    if not program:
        program = b'diff'
        option = option or [b'-Npru']
    cmdline = b' '.join(map(procutil.shellquote, [program] + option))
    return dodiff(ui, repo, cmdline, pats, opts)


class savedcmd(object):
    """use external program to diff repository (or selected files)

    Show differences between revisions for the specified files, using
    the following program::

        %(path)s

    When two revision arguments are given, then changes are shown
    between those revisions. If only one revision is specified then
    that revision is compared to the working directory, and, when no
    revisions are specified, the working directory files are compared
    to its parent.
    """

    def __init__(self, path, cmdline, isgui):
        # We can't pass non-ASCII through docstrings (and path is
        # in an unknown encoding anyway), but avoid double separators on
        # Windows
        docpath = stringutil.escapestr(path).replace(b'\\\\', b'\\')
        self.__doc__ %= {'path': pycompat.sysstr(stringutil.uirepr(docpath))}
        self._cmdline = cmdline
        self._isgui = isgui

    def __call__(self, ui, repo, *pats, **opts):
        opts = pycompat.byteskwargs(opts)
        options = b' '.join(map(procutil.shellquote, opts[b'option']))
        if options:
            options = b' ' + options
        return dodiff(
            ui, repo, self._cmdline + options, pats, opts, guitool=self._isgui
        )


def _gettooldetails(ui, cmd, path):
    """
    returns following things for a
    ```
    [extdiff]
    <cmd> = <path>
    ```
    entry:

    cmd: command/tool name
    path: path to the tool
    cmdline: the command which should be run
    isgui: whether the tool uses GUI or not

    Reads all external tools related configs, whether it be extdiff section,
    diff-tools or merge-tools section, or its specified in an old format or
    the latest format.
    """
    path = util.expandpath(path)
    if cmd.startswith(b'cmd.'):
        cmd = cmd[4:]
        if not path:
            path = procutil.findexe(cmd)
            if path is None:
                path = filemerge.findexternaltool(ui, cmd) or cmd
        diffopts = ui.config(b'extdiff', b'opts.' + cmd)
        cmdline = procutil.shellquote(path)
        if diffopts:
            cmdline += b' ' + diffopts
        isgui = ui.configbool(b'extdiff', b'gui.' + cmd)
    else:
        if path:
            # case "cmd = path opts"
            cmdline = path
            diffopts = len(pycompat.shlexsplit(cmdline)) > 1
        else:
            # case "cmd ="
            path = procutil.findexe(cmd)
            if path is None:
                path = filemerge.findexternaltool(ui, cmd) or cmd
            cmdline = procutil.shellquote(path)
            diffopts = False
        isgui = ui.configbool(b'extdiff', b'gui.' + cmd)
    # look for diff arguments in [diff-tools] then [merge-tools]
    if not diffopts:
        key = cmd + b'.diffargs'
        for section in (b'diff-tools', b'merge-tools'):
            args = ui.config(section, key)
            if args:
                cmdline += b' ' + args
                if isgui is None:
                    isgui = ui.configbool(section, cmd + b'.gui') or False
                break
    return cmd, path, cmdline, isgui


def uisetup(ui):
    for cmd, path in ui.configitems(b'extdiff'):
        if cmd.startswith(b'opts.') or cmd.startswith(b'gui.'):
            continue
        cmd, path, cmdline, isgui = _gettooldetails(ui, cmd, path)
        command(
            cmd,
            extdiffopts[:],
            _(b'hg %s [OPTION]... [FILE]...') % cmd,
            helpcategory=command.CATEGORY_FILE_CONTENTS,
            inferrepo=True,
        )(savedcmd(path, cmdline, isgui))


# tell hggettext to extract docstrings from these functions:
i18nfunctions = [savedcmd]
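
One note on the dodiff() hunk above: repo[...] accepts either a changeset node or an integer revision number, so repo[nullid] and repo[nullrev] both resolve to the same empty "null" changeset; the commit simply standardizes on the integer form. A short sketch, assuming an open repo object and the mercurial.node constants are in scope:

# Sketch only; assumes 'repo' is an open localrepository and nullid /
# nullrev are imported from mercurial.node. Both lookups below name
# the null changeset, so the refactor is behavior-preserving here.
ctx_old = repo[nullid]  # lookup by node hash (b"\0" * 20)
ctx_new = repo[nullrev]  # lookup by revision number (-1)
assert ctx_old.rev() == ctx_new.rev() == nullrev
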
@@ -1,197 +1,197 @@
1 # split.py - split a changeset into smaller ones
1 # split.py - split a changeset into smaller ones
2 #
2 #
3 # Copyright 2015 Laurent Charignon <lcharignon@fb.com>
3 # Copyright 2015 Laurent Charignon <lcharignon@fb.com>
4 # Copyright 2017 Facebook, Inc.
4 # Copyright 2017 Facebook, Inc.
5 #
5 #
6 # This software may be used and distributed according to the terms of the
6 # This software may be used and distributed according to the terms of the
7 # GNU General Public License version 2 or any later version.
7 # GNU General Public License version 2 or any later version.
8 """command to split a changeset into smaller ones (EXPERIMENTAL)"""
8 """command to split a changeset into smaller ones (EXPERIMENTAL)"""
9
9
10 from __future__ import absolute_import
10 from __future__ import absolute_import
11
11
12 from mercurial.i18n import _
12 from mercurial.i18n import _
13
13
14 from mercurial.node import (
14 from mercurial.node import (
15 nullid,
15 nullrev,
16 short,
16 short,
17 )
17 )
18
18
19 from mercurial import (
19 from mercurial import (
20 bookmarks,
20 bookmarks,
21 cmdutil,
21 cmdutil,
22 commands,
22 commands,
23 error,
23 error,
24 hg,
24 hg,
25 pycompat,
25 pycompat,
26 registrar,
26 registrar,
27 revsetlang,
27 revsetlang,
28 rewriteutil,
28 rewriteutil,
29 scmutil,
29 scmutil,
30 util,
30 util,
31 )
31 )
32
32
33 # allow people to use split without explicitly enabling rebase extension
33 # allow people to use split without explicitly enabling rebase extension
34 from . import rebase
34 from . import rebase
35
35
36 cmdtable = {}
36 cmdtable = {}
37 command = registrar.command(cmdtable)
37 command = registrar.command(cmdtable)
38
38
39 # Note for extension authors: ONLY specify testedwith = 'ships-with-hg-core' for
39 # Note for extension authors: ONLY specify testedwith = 'ships-with-hg-core' for
40 # extensions which SHIP WITH MERCURIAL. Non-mainline extensions should
40 # extensions which SHIP WITH MERCURIAL. Non-mainline extensions should
41 # be specifying the version(s) of Mercurial they are tested with, or
41 # be specifying the version(s) of Mercurial they are tested with, or
42 # leave the attribute unspecified.
42 # leave the attribute unspecified.
43 testedwith = b'ships-with-hg-core'
43 testedwith = b'ships-with-hg-core'
44
44
45
45
46 @command(
46 @command(
47 b'split',
47 b'split',
48 [
48 [
49 (b'r', b'rev', b'', _(b"revision to split"), _(b'REV')),
49 (b'r', b'rev', b'', _(b"revision to split"), _(b'REV')),
50 (b'', b'rebase', True, _(b'rebase descendants after split')),
50 (b'', b'rebase', True, _(b'rebase descendants after split')),
51 ]
51 ]
52 + cmdutil.commitopts2,
52 + cmdutil.commitopts2,
53 _(b'hg split [--no-rebase] [[-r] REV]'),
53 _(b'hg split [--no-rebase] [[-r] REV]'),
54 helpcategory=command.CATEGORY_CHANGE_MANAGEMENT,
54 helpcategory=command.CATEGORY_CHANGE_MANAGEMENT,
55 helpbasic=True,
55 helpbasic=True,
56 )
56 )
57 def split(ui, repo, *revs, **opts):
57 def split(ui, repo, *revs, **opts):
58 """split a changeset into smaller ones
58 """split a changeset into smaller ones
59
59
60 Repeatedly prompt changes and commit message for new changesets until there
60 Repeatedly prompt changes and commit message for new changesets until there
61 is nothing left in the original changeset.
61 is nothing left in the original changeset.
62
62
63 If --rev was not given, split the working directory parent.
63 If --rev was not given, split the working directory parent.
64
64
65 By default, rebase connected non-obsoleted descendants onto the new
65 By default, rebase connected non-obsoleted descendants onto the new
66 changeset. Use --no-rebase to avoid the rebase.
66 changeset. Use --no-rebase to avoid the rebase.
67 """
67 """
68 opts = pycompat.byteskwargs(opts)
68 opts = pycompat.byteskwargs(opts)
69 revlist = []
69 revlist = []
70 if opts.get(b'rev'):
70 if opts.get(b'rev'):
71 revlist.append(opts.get(b'rev'))
71 revlist.append(opts.get(b'rev'))
72 revlist.extend(revs)
72 revlist.extend(revs)
73 with repo.wlock(), repo.lock():
73 with repo.wlock(), repo.lock():
74 tr = repo.transaction(b'split')
74 tr = repo.transaction(b'split')
75 # If the rebase somehow runs into conflicts, make sure
75 # If the rebase somehow runs into conflicts, make sure
76 # we close the transaction so the user can continue it.
76 # we close the transaction so the user can continue it.
77 with util.acceptintervention(tr):
77 with util.acceptintervention(tr):
78 revs = scmutil.revrange(repo, revlist or [b'.'])
78 revs = scmutil.revrange(repo, revlist or [b'.'])
79 if len(revs) > 1:
79 if len(revs) > 1:
80 raise error.InputError(_(b'cannot split multiple revisions'))
80 raise error.InputError(_(b'cannot split multiple revisions'))
81
81
82 rev = revs.first()
82 rev = revs.first()
83 ctx = repo[rev]
83 # Handle nullrev specially here (instead of leaving for precheck()
84 # Handle nullid specially here (instead of leaving for precheck()
85 # below) so we get a nicer message and error code.
84 # below) so we get a nicer message and error code.
86 if rev is None or ctx.node() == nullid:
85 if rev is None or rev == nullrev:
87 ui.status(_(b'nothing to split\n'))
86 ui.status(_(b'nothing to split\n'))
88 return 1
87 return 1
88 ctx = repo[rev]
89 if ctx.node() is None:
89 if ctx.node() is None:
90 raise error.InputError(_(b'cannot split working directory'))
90 raise error.InputError(_(b'cannot split working directory'))
91
91
92 if opts.get(b'rebase'):
92 if opts.get(b'rebase'):
93 # Skip obsoleted descendants and their descendants so the rebase
93 # Skip obsoleted descendants and their descendants so the rebase
94 # won't cause conflicts for sure.
94 # won't cause conflicts for sure.
95 descendants = list(repo.revs(b'(%d::) - (%d)', rev, rev))
95 descendants = list(repo.revs(b'(%d::) - (%d)', rev, rev))
96 torebase = list(
            torebase = list(
                repo.revs(
                    b'%ld - (%ld & obsolete())::', descendants, descendants
                )
            )
        else:
            torebase = []
        rewriteutil.precheck(repo, [rev] + torebase, b'split')

        if len(ctx.parents()) > 1:
            raise error.InputError(_(b'cannot split a merge changeset'))

        cmdutil.bailifchanged(repo)

        # Deactivate bookmark temporarily so it won't get moved
        # unintentionally
        bname = repo._activebookmark
        if bname and repo._bookmarks[bname] != ctx.node():
            bookmarks.deactivate(repo)

        wnode = repo[b'.'].node()
        top = None
        try:
            top = dosplit(ui, repo, tr, ctx, opts)
        finally:
            # top is None: split failed, need update --clean recovery.
            # wnode == ctx.node(): wnode split, no need to update.
            if top is None or wnode != ctx.node():
                hg.clean(repo, wnode, show_stats=False)
            if bname:
                bookmarks.activate(repo, bname)
        if torebase and top:
            dorebase(ui, repo, torebase, top)


def dosplit(ui, repo, tr, ctx, opts):
    committed = []  # [ctx]

    # Set working parent to ctx.p1(), and keep working copy as ctx's content
    if ctx.node() != repo.dirstate.p1():
        hg.clean(repo, ctx.node(), show_stats=False)
    with repo.dirstate.parentchange():
        scmutil.movedirstate(repo, ctx.p1())

    # Any modified, added, removed, deleted result means split is incomplete
    def incomplete(repo):
        st = repo.status()
        return any((st.modified, st.added, st.removed, st.deleted))

    # Main split loop
    while incomplete(repo):
        if committed:
            header = _(
                b'HG: Splitting %s. So far it has been split into:\n'
            ) % short(ctx.node())
            # We don't want color codes in the commit message template, so
            # disable the label() template function while we render it.
            with ui.configoverride(
                {(b'templatealias', b'label(l,x)'): b"x"}, b'split'
            ):
                for c in committed:
                    summary = cmdutil.format_changeset_summary(ui, c, b'split')
                    header += _(b'HG: - %s\n') % summary
            header += _(
                b'HG: Write commit message for the next split changeset.\n'
            )
        else:
            header = _(
                b'HG: Splitting %s. Write commit message for the '
                b'first split changeset.\n'
            ) % short(ctx.node())
        opts.update(
            {
                b'edit': True,
                b'interactive': True,
                b'message': header + ctx.description(),
            }
        )
        commands.commit(ui, repo, **pycompat.strkwargs(opts))
        newctx = repo[b'.']
        committed.append(newctx)

    if not committed:
        raise error.InputError(_(b'cannot split an empty revision'))

    scmutil.cleanupnodes(
        repo,
        {ctx.node(): [c.node() for c in committed]},
        operation=b'split',
        fixphase=True,
    )

    return committed[-1]


def dorebase(ui, repo, src, destctx):
    rebase.rebase(
        ui,
        repo,
        rev=[revsetlang.formatspec(b'%ld', src)],
        dest=revsetlang.formatspec(b'%d', destctx.rev()),
    )
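

# Added commentary (not part of the original file): dosplit() returns the
# topmost commit of the split, and dorebase() then moves the previously
# collected descendants onto it. revsetlang.formatspec() is used so that
# revisions are quoted safely when building the revset strings, e.g.
# formatspec(b'%d', 5) yields the revset for revision 5, while b'%ld'
# expands a Python list of revisions into a revset expression.
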
@@ -1,3113 +1,3113 @@
# context.py - changeset and file context objects for mercurial
#
# Copyright 2006, 2007 Olivia Mackall <olivia@selenic.com>
#
# This software may be used and distributed according to the terms of the
# GNU General Public License version 2 or any later version.

from __future__ import absolute_import

import errno
import filecmp
import os
import stat

from .i18n import _
from .node import (
    addednodeid,
    hex,
    modifiednodeid,
    nullid,
    nullrev,
    short,
    wdirfilenodeids,
    wdirhex,
)
from .pycompat import (
    getattr,
    open,
)
from . import (
    dagop,
    encoding,
    error,
    fileset,
    match as matchmod,
    mergestate as mergestatemod,
    metadata,
    obsolete as obsmod,
    patch,
    pathutil,
    phases,
    pycompat,
    repoview,
    scmutil,
    sparse,
    subrepo,
    subrepoutil,
    util,
)
from .utils import (
    dateutil,
    stringutil,
)

propertycache = util.propertycache


class basectx(object):
    """A basectx object represents the common logic for its children:
    changectx: read-only context that is already present in the repo,
    workingctx: a context that represents the working directory and can
    be committed,
    memctx: a context that represents changes in-memory and can also
    be committed."""

    def __init__(self, repo):
        self._repo = repo

    def __bytes__(self):
        return short(self.node())

    __str__ = encoding.strmethod(__bytes__)

    def __repr__(self):
        return "<%s %s>" % (type(self).__name__, str(self))

    def __eq__(self, other):
        try:
            return type(self) == type(other) and self._rev == other._rev
        except AttributeError:
            return False

    def __ne__(self, other):
        return not (self == other)

    def __contains__(self, key):
        return key in self._manifest

    def __getitem__(self, key):
        return self.filectx(key)

    def __iter__(self):
        return iter(self._manifest)

    def _buildstatusmanifest(self, status):
        """Builds a manifest that includes the given status results, if this is
        a working copy context. For non-working copy contexts, it just returns
        the normal manifest."""
        return self.manifest()

    def _matchstatus(self, other, match):
        """This internal method provides a way for child objects to override the
        match operator.
        """
        return match

    def _buildstatus(
        self, other, s, match, listignored, listclean, listunknown
    ):
        """build a status with respect to another context"""
        # Load earliest manifest first for caching reasons. More specifically,
        # if you have revisions 1000 and 1001, 1001 is probably stored as a
        # delta against 1000. Thus, if you read 1000 first, we'll reconstruct
        # 1000 and cache it so that when you read 1001, we just need to apply a
        # delta to what's in the cache. So that's one full reconstruction + one
        # delta application.
        mf2 = None
        if self.rev() is not None and self.rev() < other.rev():
            mf2 = self._buildstatusmanifest(s)
        mf1 = other._buildstatusmanifest(s)
        if mf2 is None:
            mf2 = self._buildstatusmanifest(s)

        modified, added = [], []
        removed = []
        clean = []
        deleted, unknown, ignored = s.deleted, s.unknown, s.ignored
        deletedset = set(deleted)
        d = mf1.diff(mf2, match=match, clean=listclean)
        for fn, value in pycompat.iteritems(d):
            if fn in deletedset:
                continue
            if value is None:
                clean.append(fn)
                continue
            (node1, flag1), (node2, flag2) = value
            if node1 is None:
                added.append(fn)
            elif node2 is None:
                removed.append(fn)
            elif flag1 != flag2:
                modified.append(fn)
            elif node2 not in wdirfilenodeids:
                # When comparing files between two commits, we save time by
                # not comparing the file contents when the nodeids differ.
                # Note that this means we incorrectly report a reverted change
                # to a file as a modification.
                modified.append(fn)
            elif self[fn].cmp(other[fn]):
                modified.append(fn)
            else:
                clean.append(fn)

        if removed:
            # need to filter files if they are already reported as removed
            unknown = [
                fn
                for fn in unknown
                if fn not in mf1 and (not match or match(fn))
            ]
            ignored = [
                fn
                for fn in ignored
                if fn not in mf1 and (not match or match(fn))
            ]
            # if they're deleted, don't report them as removed
            removed = [fn for fn in removed if fn not in deletedset]

        return scmutil.status(
            modified, added, removed, deleted, unknown, ignored, clean
        )

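    # Added commentary (not part of the original file): manifest.diff() as
    # used above returns a mapping of the form
    #   {path: ((node1, flag1), (node2, flag2))}
    # where a side that lacks the file reports a None node, and (when clean
    # results are requested) a None value for a path marks it as clean.
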
    @propertycache
    def substate(self):
        return subrepoutil.state(self, self._repo.ui)

    def subrev(self, subpath):
        return self.substate[subpath][1]

    def rev(self):
        return self._rev

    def node(self):
        return self._node

    def hex(self):
        return hex(self.node())

    def manifest(self):
        return self._manifest

    def manifestctx(self):
        return self._manifestctx

    def repo(self):
        return self._repo

    def phasestr(self):
        return phases.phasenames[self.phase()]

    def mutable(self):
        return self.phase() > phases.public

    def matchfileset(self, cwd, expr, badfn=None):
        return fileset.match(self, cwd, expr, badfn=badfn)

    def obsolete(self):
        """True if the changeset is obsolete"""
        return self.rev() in obsmod.getrevs(self._repo, b'obsolete')

    def extinct(self):
        """True if the changeset is extinct"""
        return self.rev() in obsmod.getrevs(self._repo, b'extinct')

    def orphan(self):
        """True if the changeset is not obsolete, but its ancestor is"""
        return self.rev() in obsmod.getrevs(self._repo, b'orphan')

    def phasedivergent(self):
        """True if the changeset tries to be a successor of a public changeset

        Only non-public and non-obsolete changesets may be phase-divergent.
        """
        return self.rev() in obsmod.getrevs(self._repo, b'phasedivergent')

    def contentdivergent(self):
        """Is a successor of a changeset with multiple possible successor sets

        Only non-public and non-obsolete changesets may be content-divergent.
        """
        return self.rev() in obsmod.getrevs(self._repo, b'contentdivergent')

    def isunstable(self):
        """True if the changeset is either orphan, phase-divergent or
        content-divergent"""
        return self.orphan() or self.phasedivergent() or self.contentdivergent()

    def instabilities(self):
        """return the list of instabilities affecting this changeset.

        Instabilities are returned as strings. Possible values are:
        - orphan,
        - phase-divergent,
        - content-divergent.
        """
        instabilities = []
        if self.orphan():
            instabilities.append(b'orphan')
        if self.phasedivergent():
            instabilities.append(b'phase-divergent')
        if self.contentdivergent():
            instabilities.append(b'content-divergent')
        return instabilities
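
    # Hypothetical usage sketch (added commentary, not in the original file):
    #   if ctx.isunstable():
    #       ui.warn(b'unstable: %s\n' % b', '.join(ctx.instabilities()))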

    def parents(self):
        """return contexts for each parent changeset"""
        return self._parents

    def p1(self):
        return self._parents[0]

    def p2(self):
        parents = self._parents
        if len(parents) == 2:
            return parents[1]
        return self._repo[nullrev]

    def _fileinfo(self, path):
        if '_manifest' in self.__dict__:
            try:
                return self._manifest.find(path)
            except KeyError:
                raise error.ManifestLookupError(
                    self._node or b'None', path, _(b'not found in manifest')
                )
        if '_manifestdelta' in self.__dict__ or path in self.files():
            if path in self._manifestdelta:
                return (
                    self._manifestdelta[path],
                    self._manifestdelta.flags(path),
                )
        mfl = self._repo.manifestlog
        try:
            node, flag = mfl[self._changeset.manifest].find(path)
        except KeyError:
            raise error.ManifestLookupError(
                self._node or b'None', path, _(b'not found in manifest')
            )

        return node, flag

    def filenode(self, path):
        return self._fileinfo(path)[0]

    def flags(self, path):
        try:
            return self._fileinfo(path)[1]
        except error.LookupError:
            return b''

    @propertycache
    def _copies(self):
        return metadata.computechangesetcopies(self)

    def p1copies(self):
        return self._copies[0]

    def p2copies(self):
        return self._copies[1]

    def sub(self, path, allowcreate=True):
        '''return a subrepo for the stored revision of path, never wdir()'''
        return subrepo.subrepo(self, path, allowcreate=allowcreate)

    def nullsub(self, path, pctx):
        return subrepo.nullsubrepo(self, path, pctx)

    def workingsub(self, path):
        """return a subrepo for the stored revision, or wdir if this is a wdir
        context.
        """
        return subrepo.subrepo(self, path, allowwdir=True)

    def match(
        self,
        pats=None,
        include=None,
        exclude=None,
        default=b'glob',
        listsubrepos=False,
        badfn=None,
        cwd=None,
    ):
        r = self._repo
        if not cwd:
            cwd = r.getcwd()
        return matchmod.match(
            r.root,
            cwd,
            pats,
            include,
            exclude,
            default,
            auditor=r.nofsauditor,
            ctx=self,
            listsubrepos=listsubrepos,
            badfn=badfn,
        )

    def diff(
        self,
        ctx2=None,
        match=None,
        changes=None,
        opts=None,
        losedatafn=None,
        pathfn=None,
        copy=None,
        copysourcematch=None,
        hunksfilterfn=None,
    ):
        """Returns a diff generator for the given contexts and matcher"""
        if ctx2 is None:
            ctx2 = self.p1()
        if ctx2 is not None:
            ctx2 = self._repo[ctx2]
        return patch.diff(
            self._repo,
            ctx2,
            self,
            match=match,
            changes=changes,
            opts=opts,
            losedatafn=losedatafn,
            pathfn=pathfn,
            copy=copy,
            copysourcematch=copysourcematch,
            hunksfilterfn=hunksfilterfn,
        )
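
    # Hypothetical usage sketch (added commentary, not in the original file):
    # stream a changeset's diff against its first parent, the default when
    # ctx2 is None:
    #   for chunk in repo[b'tip'].diff():
    #       ui.write(chunk)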

    def dirs(self):
        return self._manifest.dirs()

    def hasdir(self, dir):
        return self._manifest.hasdir(dir)

    def status(
        self,
        other=None,
        match=None,
        listignored=False,
        listclean=False,
        listunknown=False,
        listsubrepos=False,
    ):
        """return status of files between two nodes or node and working
        directory.

        If other is None, compare this node with the working directory.

        ctx1.status(ctx2) returns the status of change from ctx1 to ctx2

        Returns a mercurial.scmutil.status object.

        Data can be accessed using either tuple notation:

        (modified, added, removed, deleted, unknown, ignored, clean)

        or direct attribute access:

        s.modified, s.added, ...
        """

        ctx1 = self
        ctx2 = self._repo[other]

        # This next code block is, admittedly, fragile logic that tests for
        # reversing the contexts and wouldn't need to exist if it weren't for
        # the fast (and common) code path of comparing the working directory
        # with its first parent.
        #
        # What we're aiming for here is the ability to call:
        #
        # workingctx.status(parentctx)
        #
        # If we always built the manifest for each context and compared those,
        # then we'd be done. But the special case of the above call means we
        # just copy the manifest of the parent.
        reversed = False
        if not isinstance(ctx1, changectx) and isinstance(ctx2, changectx):
            reversed = True
            ctx1, ctx2 = ctx2, ctx1

        match = self._repo.narrowmatch(match)
        match = ctx2._matchstatus(ctx1, match)
        r = scmutil.status([], [], [], [], [], [], [])
        r = ctx2._buildstatus(
            ctx1, r, match, listignored, listclean, listunknown
        )

        if reversed:
            # Reverse added and removed. Clear deleted, unknown and ignored as
            # these make no sense to reverse.
            r = scmutil.status(
                r.modified, r.removed, r.added, [], [], [], r.clean
            )

        if listsubrepos:
            for subpath, sub in scmutil.itersubrepos(ctx1, ctx2):
                try:
                    rev2 = ctx2.subrev(subpath)
                except KeyError:
                    # A subrepo that existed in node1 was deleted between
                    # node1 and node2 (inclusive). Thus, ctx2's substate
                    # won't contain that subpath. The best we can do is
                    # ignore it.
                    rev2 = None
                submatch = matchmod.subdirmatcher(subpath, match)
                s = sub.status(
                    rev2,
                    match=submatch,
                    ignored=listignored,
                    clean=listclean,
                    unknown=listunknown,
                    listsubrepos=True,
                )
                for k in (
                    'modified',
                    'added',
                    'removed',
                    'deleted',
                    'unknown',
                    'ignored',
                    'clean',
                ):
                    rfiles, sfiles = getattr(r, k), getattr(s, k)
                    rfiles.extend(b"%s/%s" % (subpath, f) for f in sfiles)

        r.modified.sort()
        r.added.sort()
        r.removed.sort()
        r.deleted.sort()
        r.unknown.sort()
        r.ignored.sort()
        r.clean.sort()

        return r
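
    # Hypothetical usage sketch (added commentary, not in the original file):
    # list files that differ between the working directory and '.':
    #   st = repo[None].status(b'.')
    #   for f in st.modified:
    #       ui.write(b'M %s\n' % f)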

    def mergestate(self, clean=False):
        """Get a mergestate object for this context."""
        raise NotImplementedError(
            '%s does not implement mergestate()' % self.__class__
        )

    def isempty(self):
        return not (
            len(self.parents()) > 1
            or self.branch() != self.p1().branch()
            or self.closesbranch()
            or self.files()
        )


class changectx(basectx):
    """A changecontext object makes access to data related to a particular
    changeset convenient. It represents a read-only context already present in
    the repo."""

    def __init__(self, repo, rev, node, maybe_filtered=True):
        super(changectx, self).__init__(repo)
        self._rev = rev
        self._node = node
        # When maybe_filtered is True, the revision might be affected by
        # changelog filtering and operations must go through the filtered
        # changelog.
        #
        # When maybe_filtered is False, the revision has already been checked
        # against filtering and is not filtered. Operations through the
        # unfiltered changelog might be used in some cases.
        self._maybe_filtered = maybe_filtered

    def __hash__(self):
        try:
            return hash(self._rev)
        except AttributeError:
            return id(self)

    def __nonzero__(self):
        return self._rev != nullrev

    __bool__ = __nonzero__

    @propertycache
    def _changeset(self):
        if self._maybe_filtered:
            repo = self._repo
        else:
            repo = self._repo.unfiltered()
        return repo.changelog.changelogrevision(self.rev())

    @propertycache
    def _manifest(self):
        return self._manifestctx.read()

    @property
    def _manifestctx(self):
        return self._repo.manifestlog[self._changeset.manifest]

    @propertycache
    def _manifestdelta(self):
        return self._manifestctx.readdelta()

    @propertycache
    def _parents(self):
        repo = self._repo
        if self._maybe_filtered:
            cl = repo.changelog
        else:
            cl = repo.unfiltered().changelog

        p1, p2 = cl.parentrevs(self._rev)
        if p2 == nullrev:
            return [changectx(repo, p1, cl.node(p1), maybe_filtered=False)]
        return [
            changectx(repo, p1, cl.node(p1), maybe_filtered=False),
            changectx(repo, p2, cl.node(p2), maybe_filtered=False),
        ]

    def changeset(self):
        c = self._changeset
        return (
            c.manifest,
            c.user,
            c.date,
            c.files,
            c.description,
            c.extra,
        )

    def manifestnode(self):
        return self._changeset.manifest

    def user(self):
        return self._changeset.user

    def date(self):
        return self._changeset.date

    def files(self):
        return self._changeset.files

    def filesmodified(self):
        modified = set(self.files())
        modified.difference_update(self.filesadded())
        modified.difference_update(self.filesremoved())
        return sorted(modified)

    def filesadded(self):
        filesadded = self._changeset.filesadded
        compute_on_none = True
        if self._repo.filecopiesmode == b'changeset-sidedata':
            compute_on_none = False
        else:
            source = self._repo.ui.config(b'experimental', b'copies.read-from')
            if source == b'changeset-only':
                compute_on_none = False
            elif source != b'compatibility':
                # filelog mode, ignore any changelog content
                filesadded = None
        if filesadded is None:
            if compute_on_none:
                filesadded = metadata.computechangesetfilesadded(self)
            else:
                filesadded = []
        return filesadded

    def filesremoved(self):
        filesremoved = self._changeset.filesremoved
        compute_on_none = True
        if self._repo.filecopiesmode == b'changeset-sidedata':
            compute_on_none = False
        else:
            source = self._repo.ui.config(b'experimental', b'copies.read-from')
            if source == b'changeset-only':
                compute_on_none = False
            elif source != b'compatibility':
                # filelog mode, ignore any changelog content
                filesremoved = None
        if filesremoved is None:
            if compute_on_none:
                filesremoved = metadata.computechangesetfilesremoved(self)
            else:
                filesremoved = []
        return filesremoved

    @propertycache
    def _copies(self):
        p1copies = self._changeset.p1copies
        p2copies = self._changeset.p2copies
        compute_on_none = True
        if self._repo.filecopiesmode == b'changeset-sidedata':
            compute_on_none = False
        else:
            source = self._repo.ui.config(b'experimental', b'copies.read-from')
            # If config says to get copy metadata only from changeset, then
            # return that, defaulting to {} if there was no copy metadata. In
            # compatibility mode, we return copy data from the changeset if it
            # was recorded there, and otherwise we fall back to getting it from
            # the filelogs (below).
            #
            # If we are in compatibility mode and there is no data in the
            # changeset, we get the copy metadata from the filelogs.
            #
            # Otherwise, when config said to read only from filelog, we get the
            # copy metadata from the filelogs.
            if source == b'changeset-only':
                compute_on_none = False
            elif source != b'compatibility':
                # filelog mode, ignore any changelog content
                p1copies = p2copies = None
        if p1copies is None:
            if compute_on_none:
                p1copies, p2copies = super(changectx, self)._copies
            else:
                if p1copies is None:
                    p1copies = {}
                if p2copies is None:
                    p2copies = {}
        return p1copies, p2copies
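
    # Illustrative configuration (added commentary, not in the original
    # file): reading copy metadata only from the changeset and never
    # recomputing it from the filelogs, matching the string compared above:
    #
    #   [experimental]
    #   copies.read-from = changeset-only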

    def description(self):
        return self._changeset.description

    def branch(self):
        return encoding.tolocal(self._changeset.extra.get(b"branch"))

    def closesbranch(self):
        return b'close' in self._changeset.extra

    def extra(self):
        """Return a dict of extra information."""
        return self._changeset.extra

    def tags(self):
        """Return a list of byte tag names"""
        return self._repo.nodetags(self._node)

    def bookmarks(self):
        """Return a list of byte bookmark names."""
        return self._repo.nodebookmarks(self._node)

    def phase(self):
        return self._repo._phasecache.phase(self._repo, self._rev)

    def hidden(self):
        return self._rev in repoview.filterrevs(self._repo, b'visible')

    def isinmemory(self):
        return False

    def children(self):
        """return list of changectx contexts for each child changeset.

        This returns only the immediate child changesets. Use descendants() to
        recursively walk children.
        """
        c = self._repo.changelog.children(self._node)
        return [self._repo[x] for x in c]

    def ancestors(self):
        for a in self._repo.changelog.ancestors([self._rev]):
            yield self._repo[a]

    def descendants(self):
        """Recursively yield all children of the changeset.

        For just the immediate children, use children()
        """
        for d in self._repo.changelog.descendants([self._rev]):
            yield self._repo[d]

    def filectx(self, path, fileid=None, filelog=None):
        """get a file context from this changeset"""
        if fileid is None:
            fileid = self.filenode(path)
        return filectx(
            self._repo, path, fileid=fileid, changectx=self, filelog=filelog
        )

    def ancestor(self, c2, warn=False):
        """return the "best" ancestor context of self and c2

        If there are multiple candidates, it will show a message and check
        merge.preferancestor configuration before falling back to the
        revlog ancestor."""
        # deal with workingctxs
        n2 = c2._node
        if n2 is None:
            n2 = c2._parents[0]._node
        cahs = self._repo.changelog.commonancestorsheads(self._node, n2)
        if not cahs:
            anc = nullid
        elif len(cahs) == 1:
            anc = cahs[0]
        else:
            # experimental config: merge.preferancestor
            for r in self._repo.ui.configlist(b'merge', b'preferancestor'):
                try:
                    ctx = scmutil.revsymbol(self._repo, r)
                except error.RepoLookupError:
                    continue
                anc = ctx.node()
                if anc in cahs:
                    break
            else:
                anc = self._repo.changelog.ancestor(self._node, n2)
            if warn:
                self._repo.ui.status(
                    (
                        _(b"note: using %s as ancestor of %s and %s\n")
                        % (short(anc), short(self._node), short(n2))
                    )
                    + b''.join(
                        _(
                            b" alternatively, use --config "
                            b"merge.preferancestor=%s\n"
                        )
                        % short(n)
                        for n in sorted(cahs)
                        if n != anc
                    )
                )
        return self._repo[anc]
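
    # Illustrative note (added commentary, not in the original file): when a
    # merge has several common ancestor heads, one of them can be pinned from
    # the command line, e.g.:
    #   hg merge --config merge.preferancestor=<rev> <other>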

    def isancestorof(self, other):
        """True if this changeset is an ancestor of other"""
        return self._repo.changelog.isancestorrev(self._rev, other._rev)

    def walk(self, match):
        '''Generates matching file names.'''

        # Wrap match.bad method to have message with nodeid
        def bad(fn, msg):
            # The manifest doesn't know about subrepos, so don't complain about
            # paths into valid subrepos.
            if any(fn == s or fn.startswith(s + b'/') for s in self.substate):
                return
            match.bad(fn, _(b'no such file in rev %s') % self)

        m = matchmod.badmatch(self._repo.narrowmatch(match), bad)
        return self._manifest.walk(m)

    def matches(self, match):
        return self.walk(match)


class basefilectx(object):
    """A filecontext object represents the common logic for its children:
    filectx: read-only access to a filerevision that is already present
    in the repo,
    workingfilectx: a filecontext that represents files from the working
    directory,
    memfilectx: a filecontext that represents files in-memory,
    """

    @propertycache
    def _filelog(self):
        return self._repo.file(self._path)

    @propertycache
    def _changeid(self):
        if '_changectx' in self.__dict__:
            return self._changectx.rev()
        elif '_descendantrev' in self.__dict__:
            # this file context was created from a revision with a known
            # descendant, we can (lazily) correct for linkrev aliases
            return self._adjustlinkrev(self._descendantrev)
        else:
            return self._filelog.linkrev(self._filerev)

    @propertycache
    def _filenode(self):
        if '_fileid' in self.__dict__:
            return self._filelog.lookup(self._fileid)
        else:
            return self._changectx.filenode(self._path)

    @propertycache
    def _filerev(self):
        return self._filelog.rev(self._filenode)

    @propertycache
    def _repopath(self):
        return self._path

    def __nonzero__(self):
        try:
            self._filenode
            return True
        except error.LookupError:
            # file is missing
            return False

    __bool__ = __nonzero__

    def __bytes__(self):
        try:
            return b"%s@%s" % (self.path(), self._changectx)
        except error.LookupError:
            return b"%s@???" % self.path()

    __str__ = encoding.strmethod(__bytes__)

    def __repr__(self):
        return "<%s %s>" % (type(self).__name__, str(self))

    def __hash__(self):
        try:
            return hash((self._path, self._filenode))
        except AttributeError:
            return id(self)

    def __eq__(self, other):
        try:
            return (
                type(self) == type(other)
                and self._path == other._path
                and self._filenode == other._filenode
            )
        except AttributeError:
            return False

    def __ne__(self, other):
        return not (self == other)

    def filerev(self):
        return self._filerev

    def filenode(self):
        return self._filenode

    @propertycache
    def _flags(self):
        return self._changectx.flags(self._path)

    def flags(self):
        return self._flags

    def filelog(self):
        return self._filelog

    def rev(self):
        return self._changeid

    def linkrev(self):
        return self._filelog.linkrev(self._filerev)

    def node(self):
        return self._changectx.node()

    def hex(self):
        return self._changectx.hex()

    def user(self):
        return self._changectx.user()

    def date(self):
        return self._changectx.date()

    def files(self):
        return self._changectx.files()

    def description(self):
        return self._changectx.description()

    def branch(self):
        return self._changectx.branch()

    def extra(self):
        return self._changectx.extra()

    def phase(self):
        return self._changectx.phase()

    def phasestr(self):
        return self._changectx.phasestr()

    def obsolete(self):
        return self._changectx.obsolete()

    def instabilities(self):
        return self._changectx.instabilities()

    def manifest(self):
        return self._changectx.manifest()

    def changectx(self):
        return self._changectx

    def renamed(self):
        return self._copied

    def copysource(self):
        return self._copied and self._copied[0]

    def repo(self):
        return self._repo

    def size(self):
        return len(self.data())

    def path(self):
        return self._path

    def isbinary(self):
        try:
            return stringutil.binary(self.data())
        except IOError:
            return False

    def isexec(self):
        return b'x' in self.flags()

    def islink(self):
        return b'l' in self.flags()

    def isabsent(self):
        """whether this filectx represents a file not in self._changectx

        This is mainly for merge code to detect change/delete conflicts. This is
        expected to be True for all subclasses of basectx."""
        return False

    _customcmp = False

    def cmp(self, fctx):
        """compare with other file context

        returns True if different from fctx.
        """
        if fctx._customcmp:
            return fctx.cmp(self)

        if self._filenode is None:
            raise error.ProgrammingError(
                b'filectx.cmp() must be reimplemented if not backed by revlog'
            )

        if fctx._filenode is None:
            if self._repo._encodefilterpats:
                # can't rely on size() because wdir content may be decoded
                return self._filelog.cmp(self._filenode, fctx.data())
            if self.size() - 4 == fctx.size():
                # size() can match:
                # if file data starts with '\1\n', empty metadata block is
                # prepended, which adds 4 bytes to filelog.size().
                return self._filelog.cmp(self._filenode, fctx.data())
            if self.size() == fctx.size() or self.flags() == b'l':
                # size() matches: need to compare content
                # issue6456: Always compare symlinks because size can represent
                # encrypted string for EXT-4 encryption(fscrypt).
                return self._filelog.cmp(self._filenode, fctx.data())

        # size() differs
        return True
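
    # Added note (commentary, not in the original file): filelog revisions
    # whose data starts with b'\x01\n' get an *empty* metadata block,
    # b'\x01\n\x01\n' (4 bytes), prepended when stored. That is why a stored
    # size() exactly 4 bytes larger than the working file can still mean
    # identical content, forcing the real comparison above.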

    def _adjustlinkrev(self, srcrev, inclusive=False, stoprev=None):
        """return the first ancestor of <srcrev> introducing <fnode>

        If the linkrev of the file revision does not point to an ancestor of
        srcrev, we'll walk down the ancestors until we find one introducing
        this file revision.

        :srcrev: the changeset revision we search ancestors from
        :inclusive: if true, the src revision will also be checked
        :stoprev: an optional revision to stop the walk at. If no introduction
                  of this file content could be found before this floor
                  revision, the function will return "None" and stop its
                  iteration.
        """
        repo = self._repo
        cl = repo.unfiltered().changelog
        mfl = repo.manifestlog
        # fetch the linkrev
        lkr = self.linkrev()
        if srcrev == lkr:
            return lkr
        # hack to reuse ancestor computation when searching for renames
        memberanc = getattr(self, '_ancestrycontext', None)
        iteranc = None
        if srcrev is None:
            # wctx case, used by workingfilectx during mergecopy
            revs = [p.rev() for p in self._repo[None].parents()]
            inclusive = True  # we skipped the real (revless) source
        else:
            revs = [srcrev]
        if memberanc is None:
            memberanc = iteranc = cl.ancestors(revs, lkr, inclusive=inclusive)
        # check if this linkrev is an ancestor of srcrev
        if lkr not in memberanc:
            if iteranc is None:
                iteranc = cl.ancestors(revs, lkr, inclusive=inclusive)
            fnode = self._filenode
            path = self._path
            for a in iteranc:
                if stoprev is not None and a < stoprev:
                    return None
                ac = cl.read(a)  # get changeset data (we avoid object creation)
                if path in ac[3]:  # checking the 'files' field.
                    # The file has been touched, check if the content is
                    # similar to the one we search for.
                    if fnode == mfl[ac[0]].readfast().get(path):
                        return a
            # In theory, we should never get out of that loop without a result.
            # But if the manifest uses a buggy file revision (not a child of
            # the one it replaces), we could. Such a buggy situation will
            # likely result in a crash somewhere else at some point.
        return lkr
1057
1057
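    # Editorial sketch of the "linkrev shadowing" problem _adjustlinkrev()
    # works around (hypothetical revision numbers): if changesets 2 and 5
    # introduce byte-identical content for a file, the filelog stores a
    # single file revision whose linkrev points at 2. Asked for the file as
    # seen from 5, linkrev() still answers 2 even when 2 is not an ancestor
    # of 5; the walk above descends from srcrev until it finds a changeset
    # that actually touches the file with the same file node, e.g.:
    #
    #   fctx = repo[5][b'f']
    #   fctx._adjustlinkrev(5)  # may return 5 where fctx.linkrev() == 2
    #
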
    def isintroducedafter(self, changelogrev):
        """True if a filectx has been introduced after a given floor revision"""
        if self.linkrev() >= changelogrev:
            return True
        introrev = self._introrev(stoprev=changelogrev)
        if introrev is None:
            return False
        return introrev >= changelogrev

    def introrev(self):
        """return the rev of the changeset which introduced this file revision

        This method is different from linkrev because it takes into account the
        changeset the filectx was created from. It ensures the returned
        revision is one of its ancestors. This prevents bugs from
        'linkrev-shadowing' when a file revision is used by multiple
        changesets.
        """
        return self._introrev()

    def _introrev(self, stoprev=None):
        """
        Same as `introrev`, but with an extra argument to limit the changelog
        iteration range in some internal use cases.

        If `stoprev` is set, the `introrev` will not be searched past that
        `stoprev` revision and "None" might be returned. This is useful to
        limit the iteration range.
        """
        toprev = None
        attrs = vars(self)
        if '_changeid' in attrs:
            # We have a cached value already
            toprev = self._changeid
        elif '_changectx' in attrs:
            # We know which changelog entry we are coming from
            toprev = self._changectx.rev()

        if toprev is not None:
            return self._adjustlinkrev(toprev, inclusive=True, stoprev=stoprev)
        elif '_descendantrev' in attrs:
            introrev = self._adjustlinkrev(self._descendantrev, stoprev=stoprev)
            # be nice and cache the result of the computation
            if introrev is not None:
                self._changeid = introrev
            return introrev
        else:
            return self.linkrev()

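    # Editorial usage sketch for introrev() (hypothetical, assuming an open
    # repository object `repo`): linkrev() is cheap but may point outside the
    # ancestry of the context the filectx came from, while introrev() is
    # guaranteed to stay inside it.
    #
    #   fctx = repo[b'tip'][b'path/to/file']
    #   fctx.linkrev()   # fast, but possibly "shadowed"
    #   fctx.introrev()  # adjusted to an ancestor of repo[b'tip']
    #
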
    def introfilectx(self):
        """Return filectx having identical contents, but pointing to the
        changeset revision where this filectx was introduced"""
        introrev = self.introrev()
        if self.rev() == introrev:
            return self
        return self.filectx(self.filenode(), changeid=introrev)

    def _parentfilectx(self, path, fileid, filelog):
        """create parent filectx keeping ancestry info for _adjustlinkrev()"""
        fctx = filectx(self._repo, path, fileid=fileid, filelog=filelog)
        if '_changeid' in vars(self) or '_changectx' in vars(self):
            # If self is associated with a changeset (probably explicitly
            # fed), ensure the created filectx is associated with a
            # changeset that is an ancestor of self.changectx.
            # This lets us later use _adjustlinkrev to get a correct link.
            fctx._descendantrev = self.rev()
            fctx._ancestrycontext = getattr(self, '_ancestrycontext', None)
        elif '_descendantrev' in vars(self):
            # Otherwise propagate _descendantrev if we have one associated.
            fctx._descendantrev = self._descendantrev
            fctx._ancestrycontext = getattr(self, '_ancestrycontext', None)
        return fctx

    def parents(self):
        _path = self._path
        fl = self._filelog
        parents = self._filelog.parents(self._filenode)
        pl = [(_path, node, fl) for node in parents if node != nullid]

        r = fl.renamed(self._filenode)
        if r:
            # - In the simple rename case, both parents are nullid and pl is
            #   empty.
            # - In case of merge, only one of the parents is nullid and should
            #   be replaced with the rename information. This parent is
            #   -always- the first one.
            #
            # As nullid parents have always been filtered out in the list
            # comprehension above, inserting at index 0 will always result in
            # "replacing the first nullid parent with the rename information".
            pl.insert(0, (r[0], r[1], self._repo.file(r[0])))

        return [self._parentfilectx(path, fnode, l) for path, fnode, l in pl]

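    # Editorial note with a hypothetical sketch: for a renamed file, the
    # filelog entry records (oldpath, oldfilenode) in place of a regular
    # first parent, which is why parents() can cross path boundaries:
    #
    #   fctx = repo[b'.'][b'new-name']
    #   [p.path() for p in fctx.parents()]  # may yield [b'old-name']
    #
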
    def p1(self):
        return self.parents()[0]

    def p2(self):
        p = self.parents()
        if len(p) == 2:
            return p[1]
        return filectx(self._repo, self._path, fileid=-1, filelog=self._filelog)

    def annotate(self, follow=False, skiprevs=None, diffopts=None):
        """Returns a list of annotateline objects for each line in the file

        - line.fctx is the filectx of the node where that line was last changed
        - line.lineno is the line number at the first appearance in the managed
          file
        - line.text is the data on that line (including newline character)
        """
        getlog = util.lrucachefunc(lambda x: self._repo.file(x))

        def parents(f):
            # Cut _descendantrev here to mitigate the penalty of lazy linkrev
            # adjustment. Otherwise, p._adjustlinkrev() would walk changelog
            # from the topmost introrev (= srcrev) down to p.linkrev() if it
            # isn't an ancestor of the srcrev.
            f._changeid
            pl = f.parents()

            # Don't return renamed parents if we aren't following.
            if not follow:
                pl = [p for p in pl if p.path() == f.path()]

            # renamed filectx won't have a filelog yet, so set it
            # from the cache to save time
            for p in pl:
                if '_filelog' not in p.__dict__:
                    p._filelog = getlog(p.path())

            return pl

        # use linkrev to find the first changeset where self appeared
        base = self.introfilectx()
        if getattr(base, '_ancestrycontext', None) is None:
            # it is safe to use an unfiltered repository here because we are
            # walking ancestors only.
            cl = self._repo.unfiltered().changelog
            if base.rev() is None:
                # wctx is not inclusive, but works because _ancestrycontext
                # is used to test filelog revisions
                ac = cl.ancestors(
                    [p.rev() for p in base.parents()], inclusive=True
                )
            else:
                ac = cl.ancestors([base.rev()], inclusive=True)
            base._ancestrycontext = ac

        return dagop.annotate(
            base, parents, skiprevs=skiprevs, diffopts=diffopts
        )

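    # Editorial usage sketch for annotate() above (hypothetical, assuming an
    # open repository `repo`): a minimal blame-style listing built from the
    # attributes documented in the docstring.
    #
    #   for line in repo[b'.'][b'README'].annotate(follow=True):
    #       print(line.fctx.rev(), line.lineno, line.text.rstrip())
    #
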
    def ancestors(self, followfirst=False):
        visit = {}
        c = self
        if followfirst:
            cut = 1
        else:
            cut = None

        while True:
            for parent in c.parents()[:cut]:
                visit[(parent.linkrev(), parent.filenode())] = parent
            if not visit:
                break
            c = visit.pop(max(visit))
            yield c

    def decodeddata(self):
        """Returns `data()` after running repository decoding filters.

        This is often equivalent to how the data would be expressed on disk.
        """
        return self._repo.wwritedata(self.path(), self.data())


class filectx(basefilectx):
    """A filecontext object makes access to data related to a particular
    filerevision convenient."""

    def __init__(
        self,
        repo,
        path,
        changeid=None,
        fileid=None,
        filelog=None,
        changectx=None,
    ):
        """changeid must be a revision number, if specified.
        fileid can be a file revision or node."""
        self._repo = repo
        self._path = path

        assert (
            changeid is not None or fileid is not None or changectx is not None
        ), b"bad args: changeid=%r, fileid=%r, changectx=%r" % (
            changeid,
            fileid,
            changectx,
        )

        if filelog is not None:
            self._filelog = filelog

        if changeid is not None:
            self._changeid = changeid
        if changectx is not None:
            self._changectx = changectx
        if fileid is not None:
            self._fileid = fileid

    @propertycache
    def _changectx(self):
        try:
            return self._repo[self._changeid]
        except error.FilteredRepoLookupError:
            # Linkrev may point to any revision in the repository. When the
            # repository is filtered this may lead to `filectx` trying to
            # build `changectx` for a filtered revision. In such a case we
            # fall back to creating `changectx` on the unfiltered version of
            # the repository.
            # This fallback should not be an issue because `changectx` from
            # `filectx` are not used in complex operations that care about
            # filtering.
            #
            # This fallback is a cheap and dirty fix that prevents several
            # crashes. It does not ensure the behavior is correct. However the
            # behavior was not correct before filtering either, and "incorrect
            # behavior" is seen as better than a crash.
            #
            # Linkrevs have several serious troubles with filtering that are
            # complicated to solve. Proper handling of the issue here should
            # be considered when solving the linkrev issues is on the table.
            return self._repo.unfiltered()[self._changeid]

    def filectx(self, fileid, changeid=None):
        """opens an arbitrary revision of the file without
        opening a new filelog"""
        return filectx(
            self._repo,
            self._path,
            fileid=fileid,
            filelog=self._filelog,
            changeid=changeid,
        )

    def rawdata(self):
        return self._filelog.rawdata(self._filenode)

    def rawflags(self):
        """low-level revlog flags"""
        return self._filelog.flags(self._filerev)

    def data(self):
        try:
            return self._filelog.read(self._filenode)
        except error.CensoredNodeError:
            if self._repo.ui.config(b"censor", b"policy") == b"ignore":
                return b""
            raise error.Abort(
                _(b"censored node: %s") % short(self._filenode),
                hint=_(b"set censor.policy to ignore errors"),
            )

    def size(self):
        return self._filelog.size(self._filerev)

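    # Editorial note: per the error handling in data() above, reading a
    # censored file revision aborts unless censor.policy is set to ignore.
    # Sketch of one way to set that, via an hgrc:
    #
    #   [censor]
    #   policy = ignore
    #
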
    @propertycache
    def _copied(self):
        """check if file was actually renamed in this changeset revision

        If a rename is logged in the file revision, we report the copy for the
        changeset only if the file revision's linkrev points back to the
        changeset in question or both changeset parents contain different file
        revisions.
        """

        renamed = self._filelog.renamed(self._filenode)
        if not renamed:
            return None

        if self.rev() == self.linkrev():
            return renamed

        name = self.path()
        fnode = self._filenode
        for p in self._changectx.parents():
            try:
                if fnode == p.filenode(name):
                    return None
            except error.LookupError:
                pass
        return renamed

    def children(self):
        # hard for renames
        c = self._filelog.children(self._filenode)
        return [
            filectx(self._repo, self._path, fileid=x, filelog=self._filelog)
            for x in c
        ]


class committablectx(basectx):
    """A committablectx object provides common functionality for a context that
    wants the ability to commit, e.g. workingctx or memctx."""

    def __init__(
        self,
        repo,
        text=b"",
        user=None,
        date=None,
        extra=None,
        changes=None,
        branch=None,
    ):
        super(committablectx, self).__init__(repo)
        self._rev = None
        self._node = None
        self._text = text
        if date:
            self._date = dateutil.parsedate(date)
        if user:
            self._user = user
        if changes:
            self._status = changes

        self._extra = {}
        if extra:
            self._extra = extra.copy()
        if branch is not None:
            self._extra[b'branch'] = encoding.fromlocal(branch)
        if not self._extra.get(b'branch'):
            self._extra[b'branch'] = b'default'

    def __bytes__(self):
        return bytes(self._parents[0]) + b"+"

    __str__ = encoding.strmethod(__bytes__)

    def __nonzero__(self):
        return True

    __bool__ = __nonzero__

    @propertycache
    def _status(self):
        return self._repo.status()

    @propertycache
    def _user(self):
        return self._repo.ui.username()

    @propertycache
    def _date(self):
        ui = self._repo.ui
        date = ui.configdate(b'devel', b'default-date')
        if date is None:
            date = dateutil.makedate()
        return date

    def subrev(self, subpath):
        return None

    def manifestnode(self):
        return None

    def user(self):
        return self._user or self._repo.ui.username()

    def date(self):
        return self._date

    def description(self):
        return self._text

    def files(self):
        return sorted(
            self._status.modified + self._status.added + self._status.removed
        )

    def modified(self):
        return self._status.modified

    def added(self):
        return self._status.added

    def removed(self):
        return self._status.removed

    def deleted(self):
        return self._status.deleted

    filesmodified = modified
    filesadded = added
    filesremoved = removed

    def branch(self):
        return encoding.tolocal(self._extra[b'branch'])

    def closesbranch(self):
        return b'close' in self._extra

    def extra(self):
        return self._extra

    def isinmemory(self):
        return False

    def tags(self):
        return []

    def bookmarks(self):
        b = []
        for p in self.parents():
            b.extend(p.bookmarks())
        return b

    def phase(self):
        phase = phases.newcommitphase(self._repo.ui)
        for p in self.parents():
            phase = max(phase, p.phase())
        return phase

    def hidden(self):
        return False

    def children(self):
        return []

    def flags(self, path):
        if '_manifest' in self.__dict__:
            try:
                return self._manifest.flags(path)
            except KeyError:
                return b''

        try:
            return self._flagfunc(path)
        except OSError:
            return b''

    def ancestor(self, c2):
        """return the "best" ancestor context of self and c2"""
        return self._parents[0].ancestor(c2)  # punt on two parents for now

    def ancestors(self):
        for p in self._parents:
            yield p
        for a in self._repo.changelog.ancestors(
            [p.rev() for p in self._parents]
        ):
            yield self._repo[a]

    def markcommitted(self, node):
        """Perform post-commit cleanup necessary after committing this ctx

        Specifically, this updates backing stores this working context
        wraps to reflect the fact that the changes reflected by this
        workingctx have been committed. For example, it marks
        modified and added files as normal in the dirstate.

        """

    def dirty(self, missing=False, merge=True, branch=True):
        return False


class workingctx(committablectx):
    """A workingctx object makes access to data related to
    the current working directory convenient.
    date - any valid date string or (unixtime, offset), or None.
    user - username string, or None.
    extra - a dictionary of extra values, or None.
    changes - a list of file lists as returned by localrepo.status()
               or None to use the repository status.
    """

    def __init__(
        self, repo, text=b"", user=None, date=None, extra=None, changes=None
    ):
        branch = None
        if not extra or b'branch' not in extra:
            try:
                branch = repo.dirstate.branch()
            except UnicodeDecodeError:
                raise error.Abort(_(b'branch name not in UTF-8!'))
        super(workingctx, self).__init__(
            repo, text, user, date, extra, changes, branch=branch
        )

    def __iter__(self):
        d = self._repo.dirstate
        for f in d:
            if d[f] != b'r':
                yield f

    def __contains__(self, key):
        return self._repo.dirstate[key] not in b"?r"

    def hex(self):
        return wdirhex

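    # Editorial legend for the single-letter dirstate states checked above
    # and below (an assumption drawn from their use in this file): b'n'
    # (normal/tracked), b'a' (added), b'r' (removed), b'm' (merge state) and
    # b'?' (untracked). So __iter__ yields everything not marked removed,
    # and __contains__ excludes untracked and removed entries, e.g.:
    #
    #   b'somefile' in repo[None]  # False once 'hg remove somefile' ran
    #
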
    @propertycache
    def _parents(self):
        p = self._repo.dirstate.parents()
        if p[1] == nullid:
            p = p[:-1]
        # use unfiltered repo to delay/avoid loading obsmarkers
        unfi = self._repo.unfiltered()
        return [
            changectx(
                self._repo, unfi.changelog.rev(n), n, maybe_filtered=False
            )
            for n in p
        ]

    def setparents(self, p1node, p2node=nullid):
        dirstate = self._repo.dirstate
        with dirstate.parentchange():
            copies = dirstate.setparents(p1node, p2node)
            pctx = self._repo[p1node]
            if copies:
                # Adjust copy records, the dirstate cannot do it, it
                # requires access to the parents' manifests. Preserve them
                # only for entries added to the first parent.
                for f in copies:
                    if f not in pctx and copies[f] in pctx:
                        dirstate.copy(copies[f], f)
            if p2node == nullid:
                for f, s in sorted(dirstate.copies().items()):
                    if f not in pctx and s not in pctx:
                        dirstate.copy(None, f)

    def _fileinfo(self, path):
        # populate __dict__['_manifest'] as workingctx has no _manifestdelta
        self._manifest
        return super(workingctx, self)._fileinfo(path)

    def _buildflagfunc(self):
        # Create a fallback function for getting file flags when the
        # filesystem doesn't support them

        copiesget = self._repo.dirstate.copies().get
        parents = self.parents()
        if len(parents) < 2:
            # when we have one parent, it's easy: copy from parent
            man = parents[0].manifest()

            def func(f):
                f = copiesget(f, f)
                return man.flags(f)

        else:
            # merges are tricky: we try to reconstruct the unstored
            # result from the merge (issue1802)
            p1, p2 = parents
            pa = p1.ancestor(p2)
            m1, m2, ma = p1.manifest(), p2.manifest(), pa.manifest()

            def func(f):
                f = copiesget(f, f)  # may be wrong for merges with copies
                fl1, fl2, fla = m1.flags(f), m2.flags(f), ma.flags(f)
                if fl1 == fl2:
                    return fl1
                if fl1 == fla:
                    return fl2
                if fl2 == fla:
                    return fl1
                return b''  # punt for conflicts

        return func

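    # Editorial walk-through of the three-way flag merge in func() above: for
    # flags (b'x' executable, b'l' symlink, b'' plain), a side's flag wins
    # when the other side still matches the merge ancestor, i.e. we take the
    # change made on exactly one side. A small table (fl1, fl2, fla -> out):
    #
    #   (b'x', b'x', b'')  -> b'x'  # both sides agree
    #   (b'x', b'',  b'')  -> b'x'  # only p1 changed the flag
    #   (b'',  b'x', b'x') -> b''   # only p1 changed the flag
    #   (b'x', b'l', b'')  -> b''   # both changed, conflicting: punt
    #
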
    @propertycache
    def _flagfunc(self):
        return self._repo.dirstate.flagfunc(self._buildflagfunc)

    def flags(self, path):
        try:
            return self._flagfunc(path)
        except OSError:
            return b''

    def filectx(self, path, filelog=None):
        """get a file context from the working directory"""
        return workingfilectx(
            self._repo, path, workingctx=self, filelog=filelog
        )

    def dirty(self, missing=False, merge=True, branch=True):
        """check whether a working directory is modified"""
        # check subrepos first
        for s in sorted(self.substate):
            if self.sub(s).dirty(missing=missing):
                return True
        # check current working dir
        return (
            (merge and self.p2())
            or (branch and self.branch() != self.p1().branch())
            or self.modified()
            or self.added()
            or self.removed()
            or (missing and self.deleted())
        )

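    # Editorial usage sketch for dirty() above (hypothetical `repo`): the
    # working context is repo[None]; note the result is truthy rather than a
    # strict bool, since the final expression may be a non-empty file list.
    #
    #   wctx = repo[None]
    #   if wctx.dirty(missing=True):
    #       ...  # uncommitted changes (or a pending merge/branch change)
    #
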
    def add(self, list, prefix=b""):
        with self._repo.wlock():
            ui, ds = self._repo.ui, self._repo.dirstate
            uipath = lambda f: ds.pathto(pathutil.join(prefix, f))
            rejected = []
            lstat = self._repo.wvfs.lstat
            for f in list:
                # ds.pathto() returns an absolute file when this is invoked from
                # the keyword extension. That gets flagged as non-portable on
                # Windows, since it contains the drive letter and colon.
                scmutil.checkportable(ui, os.path.join(prefix, f))
                try:
                    st = lstat(f)
                except OSError:
                    ui.warn(_(b"%s does not exist!\n") % uipath(f))
                    rejected.append(f)
                    continue
                limit = ui.configbytes(b'ui', b'large-file-limit')
                if limit != 0 and st.st_size > limit:
                    ui.warn(
                        _(
                            b"%s: up to %d MB of RAM may be required "
                            b"to manage this file\n"
                            b"(use 'hg revert %s' to cancel the "
                            b"pending addition)\n"
                        )
                        % (f, 3 * st.st_size // 1000000, uipath(f))
                    )
                if not (stat.S_ISREG(st.st_mode) or stat.S_ISLNK(st.st_mode)):
                    ui.warn(
                        _(
                            b"%s not added: only files and symlinks "
                            b"supported currently\n"
                        )
                        % uipath(f)
                    )
                    rejected.append(f)
                elif ds[f] in b'amn':
                    ui.warn(_(b"%s already tracked!\n") % uipath(f))
                elif ds[f] == b'r':
                    ds.normallookup(f)
                else:
                    ds.add(f)
            return rejected

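    # Editorial note on the large-file warning in add() above: the threshold
    # comes from ui.large-file-limit, read with configbytes() so suffixed
    # values should work. Sketch of raising it via an hgrc:
    #
    #   [ui]
    #   large-file-limit = 10MB
    #
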
    def forget(self, files, prefix=b""):
        with self._repo.wlock():
            ds = self._repo.dirstate
            uipath = lambda f: ds.pathto(pathutil.join(prefix, f))
            rejected = []
            for f in files:
                if f not in ds:
                    self._repo.ui.warn(_(b"%s not tracked!\n") % uipath(f))
                    rejected.append(f)
                elif ds[f] != b'a':
                    ds.remove(f)
                else:
                    ds.drop(f)
            return rejected

    def copy(self, source, dest):
        try:
            st = self._repo.wvfs.lstat(dest)
        except OSError as err:
            if err.errno != errno.ENOENT:
                raise
            self._repo.ui.warn(
                _(b"%s does not exist!\n") % self._repo.dirstate.pathto(dest)
            )
            return
        if not (stat.S_ISREG(st.st_mode) or stat.S_ISLNK(st.st_mode)):
            self._repo.ui.warn(
                _(b"copy failed: %s is not a file or a symbolic link\n")
                % self._repo.dirstate.pathto(dest)
            )
        else:
            with self._repo.wlock():
                ds = self._repo.dirstate
                if ds[dest] in b'?':
                    ds.add(dest)
                elif ds[dest] in b'r':
                    ds.normallookup(dest)
                ds.copy(source, dest)

    def match(
        self,
        pats=None,
        include=None,
        exclude=None,
        default=b'glob',
        listsubrepos=False,
        badfn=None,
        cwd=None,
    ):
        r = self._repo
        if not cwd:
            cwd = r.getcwd()

        # Only a case insensitive filesystem needs magic to translate user input
        # to actual case in the filesystem.
        icasefs = not util.fscasesensitive(r.root)
        return matchmod.match(
            r.root,
            cwd,
            pats,
            include,
            exclude,
            default,
            auditor=r.auditor,
            ctx=self,
            listsubrepos=listsubrepos,
            badfn=badfn,
            icasefs=icasefs,
        )

    def _filtersuspectsymlink(self, files):
        if not files or self._repo.dirstate._checklink:
            return files

        # Symlink placeholders may get non-symlink-like contents
        # via user error or dereferencing by NFS or Samba servers,
        # so we filter out any placeholders that don't look like a
        # symlink
        sane = []
        for f in files:
            if self.flags(f) == b'l':
                d = self[f].data()
                if (
                    d == b''
                    or len(d) >= 1024
                    or b'\n' in d
                    or stringutil.binary(d)
                ):
                    self._repo.ui.debug(
                        b'ignoring suspect symlink placeholder "%s"\n' % f
                    )
                    continue
            sane.append(f)
        return sane

    def _checklookup(self, files):
        # check for any possibly clean files
        if not files:
            return [], [], []

        modified = []
        deleted = []
        fixup = []
        pctx = self._parents[0]
        # do a full compare of any files that might have changed
        for f in sorted(files):
            try:
                # This will return True for a file that got replaced by a
                # directory in the interim, but fixing that is pretty hard.
                if (
                    f not in pctx
                    or self.flags(f) != pctx.flags(f)
                    or pctx[f].cmp(self[f])
                ):
                    modified.append(f)
                else:
                    fixup.append(f)
            except (IOError, OSError):
                # A file became inaccessible in between? Mark it as deleted,
                # matching dirstate behavior (issue5584).
                # The dirstate has more complex behavior around whether a
                # missing file matches a directory, etc, but we don't need to
                # bother with that: if f has made it to this point, we're sure
                # it's in the dirstate.
                deleted.append(f)

        return modified, deleted, fixup

    def _poststatusfixup(self, status, fixup):
        """update dirstate for files that are actually clean"""
        poststatus = self._repo.postdsstatus()
        if fixup or poststatus:
            try:
                oldid = self._repo.dirstate.identity()

                # updating the dirstate is optional
                # so we don't wait on the lock
                # wlock can invalidate the dirstate, so cache normal _after_
                # taking the lock
                with self._repo.wlock(False):
                    if self._repo.dirstate.identity() == oldid:
                        if fixup:
                            normal = self._repo.dirstate.normal
                            for f in fixup:
                                normal(f)
                            # write changes out explicitly, because nesting
                            # wlock at runtime may prevent 'wlock.release()'
                            # after this block from doing so for subsequent
                            # changing files
                            tr = self._repo.currenttransaction()
                            self._repo.dirstate.write(tr)

                        if poststatus:
                            for ps in poststatus:
                                ps(self, status)
                    else:
                        # in this case, writing changes out breaks
                        # consistency, because .hg/dirstate was
                        # already changed simultaneously after last
                        # caching (see also issue5584 for detail)
                        self._repo.ui.debug(
                            b'skip updating dirstate: identity mismatch\n'
                        )
            except error.LockError:
                pass
            finally:
                # Even if the wlock couldn't be grabbed, clear out the list.
                self._repo.clearpostdsstatus()

    def _dirstatestatus(self, match, ignored=False, clean=False, unknown=False):
        '''Gets the status from the dirstate -- internal use only.'''
        subrepos = []
        if b'.hgsub' in self:
            subrepos = sorted(self.substate)
        cmp, s = self._repo.dirstate.status(
            match, subrepos, ignored=ignored, clean=clean, unknown=unknown
        )

        # check for any possibly clean files
        fixup = []
        if cmp:
            modified2, deleted2, fixup = self._checklookup(cmp)
            s.modified.extend(modified2)
            s.deleted.extend(deleted2)

        if fixup and clean:
            s.clean.extend(fixup)

        self._poststatusfixup(s, fixup)

        if match.always():
            # cache for performance
            if s.unknown or s.ignored or s.clean:
                # "_status" is cached with list*=False in the normal route
                self._status = scmutil.status(
                    s.modified, s.added, s.removed, s.deleted, [], [], []
                )
            else:
                self._status = s

        return s

    @propertycache
    def _copies(self):
        p1copies = {}
        p2copies = {}
        parents = self._repo.dirstate.parents()
        p1manifest = self._repo[parents[0]].manifest()
        p2manifest = self._repo[parents[1]].manifest()
        changedset = set(self.added()) | set(self.modified())
        narrowmatch = self._repo.narrowmatch()
        for dst, src in self._repo.dirstate.copies().items():
            if dst not in changedset or not narrowmatch(dst):
                continue
            if src in p1manifest:
                p1copies[dst] = src
            elif src in p2manifest:
                p2copies[dst] = src
        return p1copies, p2copies

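    # Editorial sketch for _copies above (hypothetical repository with a
    # pending 'hg copy a b'): dirstate copies are split by which parent
    # manifest still contains the source, so with a single parent holding
    # b'a' we would expect:
    #
    #   p1copies, p2copies = repo[None]._copies
    #   p1copies  # -> {b'b': b'a'}; p2copies stays empty
    #
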
    @propertycache
    def _manifest(self):
        """generate a manifest corresponding to the values in self._status

        This reuses the file nodeids from the parent, but we use special node
        identifiers for added and modified files. This is used by manifest
        merge to see that files are different and by the update logic to avoid
        deleting newly added files.
        """
        return self._buildstatusmanifest(self._status)

    def _buildstatusmanifest(self, status):
        """Builds a manifest that includes the given status results."""
        parents = self.parents()

        man = parents[0].manifest().copy()

        ff = self._flagfunc
        for i, l in (
            (addednodeid, status.added),
            (modifiednodeid, status.modified),
        ):
            for f in l:
                man[f] = i
                try:
                    man.setflag(f, ff(f))
                except OSError:
                    pass

        for f in status.deleted + status.removed:
            if f in man:
                del man[f]

        return man

    def _buildstatus(
        self, other, s, match, listignored, listclean, listunknown
    ):
        """build a status with respect to another context

        This includes logic for maintaining the fast path of status when
        comparing the working directory against its parent, which is to skip
        building a new manifest if self (working directory) is not comparing
        against its parent (repo['.']).
        """
        s = self._dirstatestatus(match, listignored, listclean, listunknown)
        # Filter out symlinks that, in the case of FAT32 and NTFS filesystems,
        # might have accidentally ended up with the entire contents of the file
        # they are supposed to be linking to.
        s.modified[:] = self._filtersuspectsymlink(s.modified)
        if other != self._repo[b'.']:
            s = super(workingctx, self)._buildstatus(
                other, s, match, listignored, listclean, listunknown
            )
        return s

    def _matchstatus(self, other, match):
        """override the match method with a filter for directory patterns

        We use inheritance to customize the match.bad method only in cases of
        workingctx since it belongs only to the working directory when
        comparing against the parent changeset.

        If we aren't comparing against the working directory's parent, then we
        just use the default match object sent to us.
        """
        if other != self._repo[b'.']:

            def bad(f, msg):
                # 'f' may be a directory pattern from 'match.files()',
                # so 'f not in ctx1' is not enough
                if f not in other and not other.hasdir(f):
                    self._repo.ui.warn(
                        b'%s: %s\n' % (self._repo.dirstate.pathto(f), msg)
                    )

            match.bad = bad
        return match

    def walk(self, match):
        '''Generates matching file names.'''
        return sorted(
            self._repo.dirstate.walk(
                self._repo.narrowmatch(match),
                subrepos=sorted(self.substate),
                unknown=True,
                ignored=False,
            )
        )

    def matches(self, match):
        match = self._repo.narrowmatch(match)
        ds = self._repo.dirstate
        return sorted(f for f in ds.matches(match) if ds[f] != b'r')

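    # Called once this working context has been committed as ``node``:
    # mark the touched files clean in the dirstate and move its parent
    # to the new changeset.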
    def markcommitted(self, node):
        with self._repo.dirstate.parentchange():
            for f in self.modified() + self.added():
                self._repo.dirstate.normal(f)
            for f in self.removed():
                self._repo.dirstate.drop(f)
            self._repo.dirstate.setparents(node)
            self._repo._quick_access_changeid_invalidate()

            # write changes out explicitly, because nesting wlock at
            # runtime may prevent 'wlock.release()' in 'repo.commit()'
            # from immediately doing so for subsequent changing files
            self._repo.dirstate.write(self._repo.currenttransaction())

        sparse.aftercommit(self._repo, node)

    def mergestate(self, clean=False):
        if clean:
            return mergestatemod.mergestate.clean(self._repo)
        return mergestatemod.mergestate.read(self._repo)


class committablefilectx(basefilectx):
    """A committablefilectx provides common functionality for a file context
    that wants the ability to commit, e.g. workingfilectx or memfilectx."""

    def __init__(self, repo, path, filelog=None, ctx=None):
        self._repo = repo
        self._path = path
        self._changeid = None
        self._filerev = self._filenode = None

        if filelog is not None:
            self._filelog = filelog
        if ctx:
            self._changectx = ctx

    def __nonzero__(self):
        return True

    __bool__ = __nonzero__

    def linkrev(self):
        # linked to self._changectx no matter if file is modified or not
        return self.rev()

    def renamed(self):
        path = self.copysource()
        if not path:
            return None
        return path, self._changectx._parents[0]._manifest.get(path, nullid)

    def parents(self):
        '''return parent filectxs, following copies if necessary'''

        def filenode(ctx, path):
            return ctx._manifest.get(path, nullid)

        path = self._path
        fl = self._filelog
        pcl = self._changectx._parents
        renamed = self.renamed()

        if renamed:
            pl = [renamed + (None,)]
        else:
            pl = [(path, filenode(pcl[0], path), fl)]

        for pc in pcl[1:]:
            pl.append((path, filenode(pc, path), fl))

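        # a file node of nullid means the file was absent from that parent,
        # so such entries are filtered out below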
        return [
            self._parentfilectx(p, fileid=n, filelog=l)
            for p, n, l in pl
            if n != nullid
        ]

    def children(self):
        return []


class workingfilectx(committablefilectx):
    """A workingfilectx object makes access to data related to a particular
    file in the working directory convenient."""

    def __init__(self, repo, path, filelog=None, workingctx=None):
        super(workingfilectx, self).__init__(repo, path, filelog, workingctx)

    @propertycache
    def _changectx(self):
        return workingctx(self._repo)

    def data(self):
        return self._repo.wread(self._path)

    def copysource(self):
        return self._repo.dirstate.copied(self._path)

    def size(self):
        return self._repo.wvfs.lstat(self._path).st_size

    def lstat(self):
        return self._repo.wvfs.lstat(self._path)

    def date(self):
        t, tz = self._changectx.date()
        try:
            return (self._repo.wvfs.lstat(self._path)[stat.ST_MTIME], tz)
        except OSError as err:
            if err.errno != errno.ENOENT:
                raise
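        # the file is gone from disk; fall back to the changectx's date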
        return (t, tz)

    def exists(self):
        return self._repo.wvfs.exists(self._path)

    def lexists(self):
        return self._repo.wvfs.lexists(self._path)

    def audit(self):
        return self._repo.wvfs.audit(self._path)

    def cmp(self, fctx):
        """compare with other file context

        returns True if different than fctx.
        """
        # fctx should be a filectx (not a workingfilectx)
        # invert comparison to reuse the same code path
        return fctx.cmp(self)

    def remove(self, ignoremissing=False):
        """wraps unlink for a repo's working directory"""
        rmdir = self._repo.ui.configbool(b'experimental', b'removeemptydirs')
        self._repo.wvfs.unlinkpath(
            self._path, ignoremissing=ignoremissing, rmdir=rmdir
        )

    def write(self, data, flags, backgroundclose=False, **kwargs):
        """wraps repo.wwrite"""
        return self._repo.wwrite(
            self._path, data, flags, backgroundclose=backgroundclose, **kwargs
        )

    def markcopied(self, src):
        """marks this file a copy of `src`"""
        self._repo.dirstate.copy(src, self._path)

    def clearunknown(self):
        """Removes conflicting items in the working directory so that
        ``write()`` can be called successfully.
        """
        wvfs = self._repo.wvfs
        f = self._path
        wvfs.audit(f)
        if self._repo.ui.configbool(
            b'experimental', b'merge.checkpathconflicts'
        ):
            # remove files under the directory as they should already be
            # warned and backed up
            if wvfs.isdir(f) and not wvfs.islink(f):
                wvfs.rmtree(f, forcibly=True)
            for p in reversed(list(pathutil.finddirs(f))):
                if wvfs.isfileorlink(p):
                    wvfs.unlink(p)
                    break
        else:
            # don't remove files if path conflicts are not processed
            if wvfs.isdir(f) and not wvfs.islink(f):
                wvfs.removedirs(f)

    def setflags(self, l, x):
        self._repo.wvfs.setflags(self._path, l, x)


class overlayworkingctx(committablectx):
    """Wraps another mutable context with a write-back cache that can be
    converted into a commit context.

    self._cache[path] maps to a dict with keys: {
      'exists': bool?
      'date': date?
      'data': str?
      'flags': str?
      'copied': str? (path or None)
    }
    If `exists` is True, `flags` must be non-None and `date` is non-None. If it
    is `False`, the file was deleted.
    """

    def __init__(self, repo):
        super(overlayworkingctx, self).__init__(repo)
        self.clean()

    def setbase(self, wrappedctx):
        self._wrappedctx = wrappedctx
        self._parents = [wrappedctx]
        # Drop old manifest cache as it is now out of date.
        # This is necessary when, e.g., rebasing several nodes with one
        # ``overlayworkingctx`` (e.g. with --collapse).
        util.clearcachedproperty(self, b'_manifest')

    def setparents(self, p1node, p2node=nullid):
        assert p1node == self._wrappedctx.node()
        self._parents = [self._wrappedctx, self._repo.unfiltered()[p2node]]

    def data(self, path):
        if self.isdirty(path):
            if self._cache[path][b'exists']:
                if self._cache[path][b'data'] is not None:
                    return self._cache[path][b'data']
                else:
                    # Must fallback here, too, because we only set flags.
                    return self._wrappedctx[path].data()
            else:
                raise error.ProgrammingError(
                    b"No such file or directory: %s" % path
                )
        else:
            return self._wrappedctx[path].data()

    @propertycache
    def _manifest(self):
        parents = self.parents()
        man = parents[0].manifest().copy()

        flag = self._flagfunc
        for path in self.added():
            man[path] = addednodeid
            man.setflag(path, flag(path))
        for path in self.modified():
            man[path] = modifiednodeid
            man.setflag(path, flag(path))
        for path in self.removed():
            del man[path]
        return man

    @propertycache
    def _flagfunc(self):
        def f(path):
            return self._cache[path][b'flags']

        return f

    def files(self):
        return sorted(self.added() + self.modified() + self.removed())

    def modified(self):
        return [
            f
            for f in self._cache.keys()
            if self._cache[f][b'exists'] and self._existsinparent(f)
        ]

    def added(self):
        return [
            f
            for f in self._cache.keys()
            if self._cache[f][b'exists'] and not self._existsinparent(f)
        ]

    def removed(self):
        return [
            f
            for f in self._cache.keys()
            if not self._cache[f][b'exists'] and self._existsinparent(f)
        ]

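    # The overlay cache records a single 'copied' source per file, so
    # p1copies() and p2copies() below both report the same mapping.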
    def p1copies(self):
        copies = {}
        narrowmatch = self._repo.narrowmatch()
        for f in self._cache.keys():
            if not narrowmatch(f):
                continue
            copies.pop(f, None)  # delete if it exists
            source = self._cache[f][b'copied']
            if source:
                copies[f] = source
        return copies

    def p2copies(self):
        copies = {}
        narrowmatch = self._repo.narrowmatch()
        for f in self._cache.keys():
            if not narrowmatch(f):
                continue
            copies.pop(f, None)  # delete if it exists
            source = self._cache[f][b'copied']
            if source:
                copies[f] = source
        return copies

    def isinmemory(self):
        return True

    def filedate(self, path):
        if self.isdirty(path):
            return self._cache[path][b'date']
        else:
            return self._wrappedctx[path].date()

    def markcopied(self, path, origin):
        self._markdirty(
            path,
            exists=True,
            date=self.filedate(path),
            flags=self.flags(path),
            copied=origin,
        )

    def copydata(self, path):
        if self.isdirty(path):
            return self._cache[path][b'copied']
        else:
            return None

    def flags(self, path):
        if self.isdirty(path):
            if self._cache[path][b'exists']:
                return self._cache[path][b'flags']
            else:
                raise error.ProgrammingError(
                    b"No such file or directory: %s" % path
                )
        else:
            return self._wrappedctx[path].flags()

    def __contains__(self, key):
        if key in self._cache:
            return self._cache[key][b'exists']
        return key in self.p1()

    def _existsinparent(self, path):
        try:
            # ``commitctx`` raises a ``ManifestLookupError`` if a path does not
            # exist, unlike ``workingctx``, which returns a ``workingfilectx``
            # with an ``exists()`` function.
            self._wrappedctx[path]
            return True
        except error.ManifestLookupError:
            return False

    def _auditconflicts(self, path):
        """Replicates conflict checks done by wvfs.write().

        Since we never write to the filesystem and never call `applyupdates` in
        IMM, we'll never check that a path is actually writable -- e.g., because
        it adds `a/foo`, but `a` is actually a file in the other commit.
        """

        def fail(path, component):
            # p1() is the base and we're receiving "writes" for p2()'s
            # files.
            if b'l' in self.p1()[component].flags():
                raise error.Abort(
                    b"error: %s conflicts with symlink %s "
                    b"in %d." % (path, component, self.p1().rev())
                )
            else:
                raise error.Abort(
                    b"error: '%s' conflicts with file '%s' in "
                    b"%d." % (path, component, self.p1().rev())
                )

        # Test that each new directory to be created to write this path from p2
        # is not a file in p1.
        components = path.split(b'/')
        for i in pycompat.xrange(len(components)):
            component = b"/".join(components[0:i])
            if component in self:
                fail(path, component)

        # Test the other direction -- that this path from p2 isn't a directory
        # in p1 (test that p1 doesn't have any paths matching `path/*`).
        match = self.match([path], default=b'path')
        mfiles = list(self.p1().manifest().walk(match))
        if len(mfiles) > 0:
            if len(mfiles) == 1 and mfiles[0] == path:
                return
            # omit the files which are deleted in current IMM wctx
            mfiles = [m for m in mfiles if m in self]
            if not mfiles:
                return
            raise error.Abort(
                b"error: file '%s' cannot be written because "
                b" '%s/' is a directory in %s (containing %d "
                b"entries: %s)"
                % (path, path, self.p1(), len(mfiles), b', '.join(mfiles))
            )

    def write(self, path, data, flags=b'', **kwargs):
        if data is None:
            raise error.ProgrammingError(b"data must be non-None")
        self._auditconflicts(path)
        self._markdirty(
            path, exists=True, data=data, date=dateutil.makedate(), flags=flags
        )

    def setflags(self, path, l, x):
        flag = b''
        if l:
            flag = b'l'
        elif x:
            flag = b'x'
        self._markdirty(path, exists=True, date=dateutil.makedate(), flags=flag)

    def remove(self, path):
        self._markdirty(path, exists=False)

    def exists(self, path):
        """exists behaves like `lexists`, but needs to follow symlinks and
        return False if they are broken.
        """
        if self.isdirty(path):
            # If this path exists and is a symlink, "follow" it by calling
            # exists on the destination path.
            if (
                self._cache[path][b'exists']
                and b'l' in self._cache[path][b'flags']
            ):
                return self.exists(self._cache[path][b'data'].strip())
            else:
                return self._cache[path][b'exists']

        return self._existsinparent(path)

    def lexists(self, path):
        """lexists returns True if the path exists"""
        if self.isdirty(path):
            return self._cache[path][b'exists']

        return self._existsinparent(path)

    def size(self, path):
        if self.isdirty(path):
            if self._cache[path][b'exists']:
                return len(self._cache[path][b'data'])
            else:
                raise error.ProgrammingError(
                    b"No such file or directory: %s" % path
                )
        return self._wrappedctx[path].size()

    def tomemctx(
        self,
        text,
        branch=None,
        extra=None,
        date=None,
        parents=None,
        user=None,
        editor=None,
    ):
        """Converts this ``overlayworkingctx`` into a ``memctx`` ready to be
        committed.

        ``text`` is the commit message.
        ``parents`` (optional) are rev numbers.
        """
        # Default parents to the wrapped context if not passed.
        if parents is None:
            parents = self.parents()
            if len(parents) == 1:
                parents = (parents[0], None)

        # ``parents`` is passed as rev numbers; convert to ``commitctxs``.
        if parents[1] is None:
            parents = (self._repo[parents[0]], None)
        else:
            parents = (self._repo[parents[0]], self._repo[parents[1]])

        files = self.files()

        def getfile(repo, memctx, path):
            if self._cache[path][b'exists']:
                return memfilectx(
                    repo,
                    memctx,
                    path,
                    self._cache[path][b'data'],
                    b'l' in self._cache[path][b'flags'],
                    b'x' in self._cache[path][b'flags'],
                    self._cache[path][b'copied'],
                )
            else:
                # Returning None, but including the path in `files`, is
                # necessary for memctx to register a deletion.
                return None

        if branch is None:
            branch = self._wrappedctx.branch()

        return memctx(
            self._repo,
            parents,
            text,
            files,
            getfile,
            date=date,
            extra=extra,
            user=user,
            branch=branch,
            editor=editor,
        )

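    # Convenience wrapper around tomemctx() for amends: reuse the
    # precursor's metadata and record it in the 'amend_source' extra.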
    def tomemctx_for_amend(self, precursor):
        extra = precursor.extra().copy()
        extra[b'amend_source'] = precursor.hex()
        return self.tomemctx(
            text=precursor.description(),
            branch=precursor.branch(),
            extra=extra,
            date=precursor.date(),
            user=precursor.user(),
        )

    def isdirty(self, path):
        return path in self._cache

    def clean(self):
        self._mergestate = None
        self._cache = {}

    def _compact(self):
        """Removes keys from the cache that are actually clean, by comparing
        them with the underlying context.

        This can occur during the merge process, e.g. by passing --tool :local
        to resolve a conflict.
        """
        keys = []
        # This won't be perfect, but can help performance significantly when
        # using things like remotefilelog.
        scmutil.prefetchfiles(
            self.repo(),
            [
                (
                    self.p1().rev(),
                    scmutil.matchfiles(self.repo(), self._cache.keys()),
                )
            ],
        )

        for path in self._cache.keys():
            cache = self._cache[path]
            try:
                underlying = self._wrappedctx[path]
                if (
                    underlying.data() == cache[b'data']
                    and underlying.flags() == cache[b'flags']
                ):
                    keys.append(path)
            except error.ManifestLookupError:
                # Path not in the underlying manifest (created).
                continue

        for path in keys:
            del self._cache[path]
        return keys

    def _markdirty(
        self, path, exists, data=None, date=None, flags=b'', copied=None
    ):
        # data not provided, let's see if we already have some; if not, let's
        # grab it from our underlying context, so that we always have data if
        # the file is marked as existing.
        if exists and data is None:
            oldentry = self._cache.get(path) or {}
            data = oldentry.get(b'data')
            if data is None:
                data = self._wrappedctx[path].data()

        self._cache[path] = {
            b'exists': exists,
            b'data': data,
            b'date': date,
            b'flags': flags,
            b'copied': copied,
        }
        util.clearcachedproperty(self, b'_manifest')

    def filectx(self, path, filelog=None):
        return overlayworkingfilectx(
            self._repo, path, parent=self, filelog=filelog
        )

    def mergestate(self, clean=False):
        if clean or self._mergestate is None:
            self._mergestate = mergestatemod.memmergestate(self._repo)
        return self._mergestate


class overlayworkingfilectx(committablefilectx):
    """Wraps a ``workingfilectx`` but intercepts all writes into an in-memory
    cache, which can be flushed through later by calling ``flush()``."""

    def __init__(self, repo, path, filelog=None, parent=None):
        super(overlayworkingfilectx, self).__init__(repo, path, filelog, parent)
        self._repo = repo
        self._parent = parent
        self._path = path

    def cmp(self, fctx):
        return self.data() != fctx.data()

    def changectx(self):
        return self._parent

    def data(self):
        return self._parent.data(self._path)

    def date(self):
        return self._parent.filedate(self._path)

    def exists(self):
        return self.lexists()

    def lexists(self):
        return self._parent.exists(self._path)

    def copysource(self):
        return self._parent.copydata(self._path)

    def size(self):
        return self._parent.size(self._path)

    def markcopied(self, origin):
        self._parent.markcopied(self._path, origin)

    def audit(self):
        pass

    def flags(self):
        return self._parent.flags(self._path)

    def setflags(self, islink, isexec):
        return self._parent.setflags(self._path, islink, isexec)

    def write(self, data, flags, backgroundclose=False, **kwargs):
        return self._parent.write(self._path, data, flags, **kwargs)

    def remove(self, ignoremissing=False):
        return self._parent.remove(self._path)

    def clearunknown(self):
        pass


class workingcommitctx(workingctx):
    """A workingcommitctx object makes access to data related to
    the revision being committed convenient.

    This hides changes in the working directory, if they aren't
    committed in this context.
    """

    def __init__(
        self, repo, changes, text=b"", user=None, date=None, extra=None
    ):
        super(workingcommitctx, self).__init__(
            repo, text, user, date, extra, changes
        )

    def _dirstatestatus(self, match, ignored=False, clean=False, unknown=False):
        """Return matched files only in ``self._status``

        Uncommitted files appear "clean" via this context, even if
        they aren't actually so in the working directory.
        """
        if clean:
            clean = [f for f in self._manifest if f not in self._changedset]
        else:
            clean = []
        return scmutil.status(
            [f for f in self._status.modified if match(f)],
            [f for f in self._status.added if match(f)],
            [f for f in self._status.removed if match(f)],
            [],
            [],
            [],
            clean,
        )

    @propertycache
    def _changedset(self):
        """Return the set of files changed in this context"""
        changed = set(self._status.modified)
        changed.update(self._status.added)
        changed.update(self._status.removed)
        return changed


def makecachingfilectxfn(func):
    """Create a filectxfn that caches based on the path.

    We can't use util.cachefunc because it uses all arguments as the cache
    key and this creates a cycle since the arguments include the repo and
    memctx.
    """
    cache = {}

    def getfilectx(repo, memctx, path):
        if path not in cache:
            cache[path] = func(repo, memctx, path)
        return cache[path]

    return getfilectx


def memfilefromctx(ctx):
    """Given a context return a memfilectx for ctx[path]

    This is a convenience method for building a memctx based on another
    context.
    """

    def getfilectx(repo, memctx, path):
        fctx = ctx[path]
        copysource = fctx.copysource()
        return memfilectx(
            repo,
            memctx,
            path,
            fctx.data(),
            islink=fctx.islink(),
            isexec=fctx.isexec(),
            copysource=copysource,
        )

    return getfilectx


def memfilefrompatch(patchstore):
    """Given a patch (e.g. patchstore object) return a memfilectx

    This is a convenience method for building a memctx based on a patchstore.
    """

    def getfilectx(repo, memctx, path):
        data, mode, copysource = patchstore.getfile(path)
        if data is None:
            return None
        islink, isexec = mode
        return memfilectx(
            repo,
            memctx,
            path,
            data,
            islink=islink,
            isexec=isexec,
            copysource=copysource,
        )

    return getfilectx


class memctx(committablectx):
    """Use memctx to perform in-memory commits via localrepo.commitctx().

    Revision information is supplied at initialization time, while the
    related file data is made available through a callback mechanism.
    'repo' is the current localrepo, 'parents' is a sequence of two parent
    revision identifiers (pass None for every missing parent), 'text' is
    the commit message and 'files' lists names of files touched by the
    revision (normalized and relative to repository root).

    filectxfn(repo, memctx, path) is a callable receiving the
    repository, the current memctx object and the normalized path of
    the requested file, relative to repository root. It is fired by the
    commit function for every file in 'files', but the call order is
    undefined. If the file is available in the revision being
    committed (updated or added), filectxfn returns a memfilectx
    object. If the file was removed, filectxfn returns None for recent
    Mercurial. Moved files are represented by marking the source file
    removed and the new file added with copy information (see
    memfilectx).

    user receives the committer name and defaults to current
    repository username, date is the commit date in any format
    supported by dateutil.parsedate() and defaults to current date, extra
    is a dictionary of metadata or is left empty.
    """

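    # A minimal usage sketch (hypothetical repo/message/file names): build a
    # memctx whose single file is produced by a callback, then commit it
    # in-memory via commit() below, which calls localrepo.commitctx():
    #
    #   def getfilectx(repo, memctx, path):
    #       return memfilectx(repo, memctx, path, b'contents\n')
    #
    #   ctx = memctx(repo, [repo[b'.'].node(), None], b'commit message',
    #                [b'a.txt'], getfilectx, user=b'user@example.com')
    #   node = ctx.commit()
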
    # Mercurial <= 3.1 expects the filectxfn to raise IOError for missing files.
    # Extensions that need to retain compatibility across Mercurial 3.1 can use
    # this field to determine what to do in filectxfn.
    _returnnoneformissingfiles = True

    def __init__(
        self,
        repo,
        parents,
        text,
        files,
        filectxfn,
        user=None,
        date=None,
        extra=None,
        branch=None,
        editor=None,
    ):
        super(memctx, self).__init__(
            repo, text, user, date, extra, branch=branch
        )
        self._rev = None
        self._node = None
        parents = [(p or nullid) for p in parents]
        p1, p2 = parents
        self._parents = [self._repo[p] for p in (p1, p2)]
        files = sorted(set(files))
        self._files = files
        self.substate = {}

        if isinstance(filectxfn, patch.filestore):
            filectxfn = memfilefrompatch(filectxfn)
        elif not callable(filectxfn):
            # if store is not callable, wrap it in a function
            filectxfn = memfilefromctx(filectxfn)

        # memoizing increases performance for e.g. vcs convert scenarios.
        self._filectxfn = makecachingfilectxfn(filectxfn)

        if editor:
            self._text = editor(self._repo, self, [])
            self._repo.savecommitmessage(self._text)

    def filectx(self, path, filelog=None):
        """get a file context from the working directory

        Returns None if file doesn't exist and should be removed."""
        return self._filectxfn(self._repo, self, path)

    def commit(self):
        """commit context to the repo"""
        return self._repo.commitctx(self)

    @propertycache
    def _manifest(self):
        """generate a manifest based on the return values of filectxfn"""

        # keep this simple for now; just worry about p1
        pctx = self._parents[0]
        man = pctx.manifest().copy()

        for f in self._status.modified:
            man[f] = modifiednodeid

        for f in self._status.added:
            man[f] = addednodeid

        for f in self._status.removed:
            if f in man:
                del man[f]

        return man

    @propertycache
    def _status(self):
        """Calculate exact status from ``files`` specified at construction"""
        man1 = self.p1().manifest()
        p2 = self._parents[1]
        # "1 < len(self._parents)" can't be used for checking
        # existence of the 2nd parent, because "memctx._parents" is
        # explicitly initialized to a list of length 2.
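        # This check was refactored from ``p2.node() != nullid`` to compare
        # revision numbers against nullrev instead.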
        if p2.rev() != nullrev:
            man2 = p2.manifest()
            managing = lambda f: f in man1 or f in man2
        else:
            managing = lambda f: f in man1

        modified, added, removed = [], [], []
        for f in self._files:
            if not managing(f):
                added.append(f)
            elif self[f]:
                modified.append(f)
            else:
                removed.append(f)

        return scmutil.status(modified, added, removed, [], [], [], [])

    def parents(self):
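        # as in _status above, the second-parent check compares rev() against
        # nullrev rather than node() against nullid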
2906 if self._parents[1].node() == nullid:
2906 if self._parents[1].rev() == nullrev:
2907 return [self._parents[0]]
2907 return [self._parents[0]]
2908 return self._parents
2908 return self._parents
2909
2909
2910
2910
class memfilectx(committablefilectx):
    """memfilectx represents an in-memory file to commit.

    See memctx and committablefilectx for more details.
    """

    def __init__(
        self,
        repo,
        changectx,
        path,
        data,
        islink=False,
        isexec=False,
        copysource=None,
    ):
        """
        path is the normalized file path relative to repository root.
        data is the file content as a string.
        islink is True if the file is a symbolic link.
        isexec is True if the file is executable.
        copysource is the source file path if the current file was copied in
        the revision being committed, or None."""
        super(memfilectx, self).__init__(repo, path, None, changectx)
        self._data = data
        if islink:
            self._flags = b'l'
        elif isexec:
            self._flags = b'x'
        else:
            self._flags = b''
        self._copysource = copysource

    def copysource(self):
        return self._copysource

    def cmp(self, fctx):
        return self.data() != fctx.data()

    def data(self):
        return self._data

    def remove(self, ignoremissing=False):
        """wraps unlink for a repo's working directory"""
        # need to figure out what to do here
        del self._changectx[self._path]

    def write(self, data, flags, **kwargs):
        """wraps repo.wwrite"""
        self._data = data


class metadataonlyctx(committablectx):
    """Like memctx, but it reuses the manifest of a different commit.
    Intended to be used by lightweight operations that are creating
    metadata-only changes.

    Revision information is supplied at initialization time. 'repo' is the
    current localrepo, 'originalctx' is the original revision whose manifest
    we're reusing, 'parents' is a sequence of two parent revisions
    identifiers (pass None for every missing parent), 'text' is the commit
    message.

    user receives the committer name and defaults to current repository
    username, date is the commit date in any format supported by
    dateutil.parsedate() and defaults to current date, extra is a dictionary of
    metadata or is left empty.
    """

    def __init__(
        self,
        repo,
        originalctx,
        parents=None,
        text=None,
        user=None,
        date=None,
        extra=None,
        editor=None,
    ):
        if text is None:
            text = originalctx.description()
        super(metadataonlyctx, self).__init__(repo, text, user, date, extra)
        self._rev = None
        self._node = None
        self._originalctx = originalctx
        self._manifestnode = originalctx.manifestnode()
        if parents is None:
            parents = originalctx.parents()
        else:
            parents = [repo[p] for p in parents if p is not None]
        parents = parents[:]
        while len(parents) < 2:
            parents.append(repo[nullrev])
        p1, p2 = self._parents = parents

        # sanity check to ensure that the reused manifest parents are
        # manifests of our commit parents
        mp1, mp2 = self.manifestctx().parents
        if p1 != nullid and p1.manifestnode() != mp1:
            raise RuntimeError(
                r"can't reuse the manifest: its p1 "
                r"doesn't match the new ctx p1"
            )
        if p2 != nullid and p2.manifestnode() != mp2:
            raise RuntimeError(
                r"can't reuse the manifest: "
                r"its p2 doesn't match the new ctx p2"
            )

        self._files = originalctx.files()
        self.substate = {}

        if editor:
            self._text = editor(self._repo, self, [])
            self._repo.savecommitmessage(self._text)

    def manifestnode(self):
        return self._manifestnode

    @property
    def _manifestctx(self):
        return self._repo.manifestlog[self._manifestnode]

    def filectx(self, path, filelog=None):
        return self._originalctx.filectx(path, filelog=filelog)

    def commit(self):
        """commit context to the repo"""
        return self._repo.commitctx(self)

    @property
    def _manifest(self):
        return self._originalctx.manifest()

    @propertycache
    def _status(self):
        """Calculate exact status from ``files`` specified in the
        ``originalctx`` and parents manifests.
        """
        man1 = self.p1().manifest()
        p2 = self._parents[1]
        # "1 < len(self._parents)" can't be used to check for the
        # existence of the 2nd parent, because "metadataonlyctx._parents" is
        # explicitly initialized as a list whose length is always 2.
-        if p2.node() != nullid:
+        if p2.rev() != nullrev:
            man2 = p2.manifest()
            managing = lambda f: f in man1 or f in man2
        else:
            managing = lambda f: f in man1

        modified, added, removed = [], [], []
        for f in self._files:
            if not managing(f):
                added.append(f)
            elif f in self:
                modified.append(f)
            else:
                removed.append(f)

        return scmutil.status(modified, added, removed, [], [], [], [])


class arbitraryfilectx(object):
    """Allows you to use filectx-like functions on a file in an arbitrary
    location on disk, possibly not in the working directory.
    """

    def __init__(self, path, repo=None):
        # Repo is optional because contrib/simplemerge uses this class.
        self._repo = repo
        self._path = path

    def cmp(self, fctx):
        # filecmp follows symlinks whereas `cmp` should not, so skip the fast
        # path if either side is a symlink.
        symlinks = b'l' in self.flags() or b'l' in fctx.flags()
        if not symlinks and isinstance(fctx, workingfilectx) and self._repo:
            # Add a fast-path for merge if both sides are disk-backed.
            # Note that filecmp uses the opposite return values (True if same)
            # from our cmp functions (True if different).
            return not filecmp.cmp(self.path(), self._repo.wjoin(fctx.path()))
        return self.data() != fctx.data()

    def path(self):
        return self._path

    def flags(self):
        return b''

    def data(self):
        return util.readfile(self._path)

    def decodeddata(self):
        with open(self._path, b"rb") as f:
            return f.read()

    def remove(self):
        util.unlink(self._path)

    def write(self, data, flags, **kwargs):
        assert not flags
        with open(self._path, b"wb") as f:
            f.write(data)
@@ -1,1308 +1,1308 @@ mercurial/copies.py
# coding: utf8
# copies.py - copy detection for Mercurial
#
# Copyright 2008 Olivia Mackall <olivia@selenic.com>
#
# This software may be used and distributed according to the terms of the
# GNU General Public License version 2 or any later version.

from __future__ import absolute_import

import collections
import os

from .i18n import _
from .node import (
    nullid,
    nullrev,
)

from . import (
    match as matchmod,
    pathutil,
    policy,
    pycompat,
    util,
)


from .utils import stringutil

from .revlogutils import (
    flagutil,
    sidedata as sidedatamod,
)

rustmod = policy.importrust("copy_tracing")


def _filter(src, dst, t):
    """filters out invalid copies after chaining"""

    # When _chain()'ing copies in 'a' (from 'src' via some other commit 'mid')
    # with copies in 'b' (from 'mid' to 'dst'), we can get the different cases
    # in the following table (not including trivial cases). For example, case 6
    # is where a file existed in 'src' and remained under that name in 'mid' and
    # then was renamed between 'mid' and 'dst'.
    #
    # case  src  mid  dst  result
    #   1    x    y    -    -
    #   2    x    y    y    x->y
    #   3    x    y    x    -
    #   4    x    y    z    x->z
    #   5    -    x    y    -
    #   6    x    x    y    x->y
    #
    # _chain() takes care of chaining the copies in 'a' and 'b', but it
    # cannot tell the difference between cases 1 and 2, between 3 and 4, or
    # between 5 and 6, so it includes all cases in its result.
    # Cases 1, 3, and 5 are then removed by _filter().

    for k, v in list(t.items()):
        if k == v:  # case 3
            del t[k]
        elif v not in src:  # case 5
            # remove copies from files that didn't exist
            del t[k]
        elif k not in dst:  # case 1
            # remove copies to files that were then removed
            del t[k]


def _chain(prefix, suffix):
    """chain two sets of copies 'prefix' and 'suffix'"""
    result = prefix.copy()
    for key, value in pycompat.iteritems(suffix):
        result[key] = prefix.get(value, value)
    return result
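
# Illustrative sketch (comment added for exposition, not part of the original
# change): chaining prefix = {b'b': b'a'} (copies from 'src' to 'mid') with
# suffix = {b'c': b'b'} (copies from 'mid' to 'dst') gives
# _chain(prefix, suffix) == {b'b': b'a', b'c': b'a'}; _filter() would then
# drop the b'b' entry if that file was removed in 'dst' (case 1 above).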


def _tracefile(fctx, am, basemf):
    """return file context that is the ancestor of fctx present in ancestor
    manifest am

    Note: we used to try and stop after a given limit, however checking if that
    limit is reached turned out to be very expensive. We are better off
    disabling that feature."""

    for f in fctx.ancestors():
        path = f.path()
        if am.get(path, None) == f.filenode():
            return path
        if basemf and basemf.get(path, None) == f.filenode():
            return path


def _dirstatecopies(repo, match=None):
    ds = repo.dirstate
    c = ds.copies().copy()
    for k in list(c):
        if ds[k] not in b'anm' or (match and not match(k)):
            del c[k]
    return c


def _computeforwardmissing(a, b, match=None):
    """Computes which files are in b but not a.
    This is its own function so extensions can easily wrap this call to see what
    files _forwardcopies is about to process.
    """
    ma = a.manifest()
    mb = b.manifest()
    return mb.filesnotin(ma, match=match)


def usechangesetcentricalgo(repo):
    """Checks if we should use changeset-centric copy algorithms"""
    if repo.filecopiesmode == b'changeset-sidedata':
        return True
    readfrom = repo.ui.config(b'experimental', b'copies.read-from')
    changesetsource = (b'changeset-only', b'compatibility')
    return readfrom in changesetsource
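
# For example (illustrative comment added for exposition): a configuration
# such as
#
#   [experimental]
#   copies.read-from = changeset-only
#
# makes the functions below read copy information from the changeset itself
# rather than from the filelogs.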


def _committedforwardcopies(a, b, base, match):
    """Like _forwardcopies(), but b.rev() cannot be None (working copy)"""
    # files might have to be traced back to the fctx parent of the last
    # one-side-only changeset, but not further back than that
    repo = a._repo

    if usechangesetcentricalgo(repo):
        return _changesetforwardcopies(a, b, match)

    debug = repo.ui.debugflag and repo.ui.configbool(b'devel', b'debug.copies')
    dbg = repo.ui.debug
    if debug:
        dbg(b'debug.copies: looking into rename from %s to %s\n' % (a, b))
    am = a.manifest()
    basemf = None if base is None else base.manifest()

    # find where new files came from
    # we currently don't try to find where old files went, too expensive
    # this means we can miss a case like 'hg rm b; hg cp a b'
    cm = {}

    # Computing the forward missing is quite expensive on large manifests, since
    # it compares the entire manifests. We can optimize it in the common use
    # case of computing what copies are in a commit versus its parent (like
    # during a rebase or histedit). Note, we exclude merge commits from this
    # optimization, since the ctx.files() for a merge commit is not correct for
    # this comparison.
    forwardmissingmatch = match
-    if b.p1() == a and b.p2().node() == nullid:
+    if b.p1() == a and b.p2().rev() == nullrev:
        filesmatcher = matchmod.exact(b.files())
        forwardmissingmatch = matchmod.intersectmatchers(match, filesmatcher)
    if repo.ui.configbool(b'devel', b'copy-tracing.trace-all-files'):
        missing = list(b.walk(match))
        # _computeforwardmissing(a, b, match=forwardmissingmatch)
        if debug:
            dbg(b'debug.copies: searching all files: %d\n' % len(missing))
    else:
        missing = _computeforwardmissing(a, b, match=forwardmissingmatch)
        if debug:
            dbg(
                b'debug.copies: missing files to search: %d\n'
                % len(missing)
            )

    ancestrycontext = a._repo.changelog.ancestors([b.rev()], inclusive=True)

    for f in sorted(missing):
        if debug:
            dbg(b'debug.copies: tracing file: %s\n' % f)
        fctx = b[f]
        fctx._ancestrycontext = ancestrycontext

        if debug:
            start = util.timer()
        opath = _tracefile(fctx, am, basemf)
        if opath:
            if debug:
                dbg(b'debug.copies: rename of: %s\n' % opath)
            cm[f] = opath
        if debug:
            dbg(
                b'debug.copies: time: %f seconds\n'
                % (util.timer() - start)
            )
    return cm


def _revinfo_getter(repo, match):
    """returns a function that returns the following data given a <rev>:

    * p1: revision number of first parent
    * p2: revision number of second parent
    * changes: a ChangingFiles object
    """
    cl = repo.changelog
    parents = cl.parentrevs
    flags = cl.flags

    HASCOPIESINFO = flagutil.REVIDX_HASCOPIESINFO

    changelogrevision = cl.changelogrevision

    if rustmod is not None:

        def revinfo(rev):
            p1, p2 = parents(rev)
            if flags(rev) & HASCOPIESINFO:
                raw = changelogrevision(rev)._sidedata.get(sidedatamod.SD_FILES)
            else:
                raw = None
            return (p1, p2, raw)

    else:

        def revinfo(rev):
            p1, p2 = parents(rev)
            if flags(rev) & HASCOPIESINFO:
                changes = changelogrevision(rev).changes
            else:
                changes = None
            return (p1, p2, changes)

    return revinfo


def cached_is_ancestor(is_ancestor):
    """return a cached version of is_ancestor"""
    cache = {}

    def _is_ancestor(anc, desc):
        if anc > desc:
            return False
        elif anc == desc:
            return True
        key = (anc, desc)
        ret = cache.get(key)
        if ret is None:
            ret = cache[key] = is_ancestor(anc, desc)
        return ret

    return _is_ancestor
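
# Note (comment added for exposition): revision numbers are topologically
# ordered, so an ancestor always has a smaller number than its descendant;
# that is why `anc > desc` can answer False immediately, before consulting
# the cache or the underlying implementation.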


def _changesetforwardcopies(a, b, match):
    if a.rev() in (nullrev, b.rev()):
        return {}

    repo = a.repo().unfiltered()
    children = {}

    cl = repo.changelog
    isancestor = cl.isancestorrev

    # To track a rename from "A" to "B", we need to gather all parent →
    # children edges that are contained in `::B` but not in `::A`.
    #
    # To do so, we need to gather all revisions exclusive¹ to "B" (ie¹: `::b -
    # ::a`) and also all the "root points", ie the parents of the exclusive
    # set that belong to ::a. These are exactly all the revisions needed to
    # express the parent → children edges we need to combine.
    #
    # [1] actually, we need to gather all the edges within `(::a)::b`, ie:
    # excluding paths that lead to roots that are not ancestors of `a`. We
    # keep this out of the explanation because it is hard enough without this
    # special case.
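    #
    # As a small illustration (comment added for exposition, not part of the
    # original source): for a graph
    #
    #   a --- x --- b
    #
    # the exclusive set `::b - ::a` is {x, b} and the only "root point" is
    # "a" itself, so the edges to combine are a -> x and x -> b.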

    parents = cl._uncheckedparentrevs
    graph_roots = (nullrev, nullrev)

    ancestors = cl.ancestors([a.rev()], inclusive=True)
    revs = cl.findmissingrevs(common=[a.rev()], heads=[b.rev()])
    roots = set()
    has_graph_roots = False
    multi_thread = repo.ui.configbool(b'devel', b'copy-tracing.multi-thread')

    # iterate over `only(B, A)`
    for r in revs:
        ps = parents(r)
        if ps == graph_roots:
            has_graph_roots = True
        else:
            p1, p2 = ps

            # find all the "root points" (see larger comment above)
            if p1 != nullrev and p1 in ancestors:
                roots.add(p1)
            if p2 != nullrev and p2 in ancestors:
                roots.add(p2)
    if not roots:
        # no common revision to track copies from
        return {}
    if has_graph_roots:
        # this deals with the special case mentioned in the [1] footnote. We
        # must filter out revisions that lead to non-common graph roots.
        roots = list(roots)
        m = min(roots)
        h = [b.rev()]
        roots_to_head = cl.reachableroots(m, h, roots, includepath=True)
        roots_to_head = set(roots_to_head)
        revs = [r for r in revs if r in roots_to_head]

    if repo.filecopiesmode == b'changeset-sidedata':
        # When using side-data, we will process the edges "from" the children.
        # We iterate over the children, gathering previously collected data
        # for the parents. To know when the parents' data is no longer
        # necessary, we keep a counter of how many children each revision has.
        #
        # An interesting property of `children_count` is that it only contains
        # revisions that will be relevant for an edge of the graph. So if a
        # child has a parent not in `children_count`, that edge should not be
        # processed.
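        #
        # Illustrative sketch (comment added for exposition): for a linear
        # chain root -> r1 -> r2 -> b, the loop below produces
        # children_count == {root: 1, r1: 1, r2: 1, b: 0}, so each parent's
        # copy data can be popped as soon as its single child has used it.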
        children_count = dict((r, 0) for r in roots)
        for r in revs:
            for p in cl.parentrevs(r):
                if p == nullrev:
                    continue
                children_count[r] = 0
                if p in children_count:
                    children_count[p] += 1
        revinfo = _revinfo_getter(repo, match)
        return _combine_changeset_copies(
            revs,
            children_count,
            b.rev(),
            revinfo,
            match,
            isancestor,
            multi_thread,
        )
    else:
        # When not using side-data, we will process the edges "from" the
        # parent, so we need a full mapping of the parent -> children
        # relation.
        children = dict((r, []) for r in roots)
        for r in revs:
            for p in cl.parentrevs(r):
                if p == nullrev:
                    continue
                children[r] = []
                if p in children:
                    children[p].append(r)
        x = revs.pop()
        assert x == b.rev()
        revs.extend(roots)
        revs.sort()

        revinfo = _revinfo_getter_extra(repo)
        return _combine_changeset_copies_extra(
            revs, children, b.rev(), revinfo, match, isancestor
        )


def _combine_changeset_copies(
    revs, children_count, targetrev, revinfo, match, isancestor, multi_thread
):
    """combine the copies information for each item of iterrevs

    revs: sorted iterable of revisions to visit
    children_count: a {parent: <number-of-relevant-children>} mapping.
    targetrev: the final copies destination revision (not in iterrevs)
    revinfo(rev): a function that returns (p1, p2, p1copies, p2copies, removed)
    match: a matcher

    It returns the aggregated copies information for `targetrev`.
    """

    alwaysmatch = match.always()

    if rustmod is not None:
        final_copies = rustmod.combine_changeset_copies(
            list(revs), children_count, targetrev, revinfo, multi_thread
        )
    else:
        isancestor = cached_is_ancestor(isancestor)

        all_copies = {}
        # iterate over all the "children" side of copy tracing "edge"
        for current_rev in revs:
            p1, p2, changes = revinfo(current_rev)
            current_copies = None
            # iterate over all parents to chain the existing data with the
            # data from the parent → child edge.
            for parent, parent_rev in ((1, p1), (2, p2)):
                if parent_rev == nullrev:
                    continue
                remaining_children = children_count.get(parent_rev)
                if remaining_children is None:
                    continue
                remaining_children -= 1
                children_count[parent_rev] = remaining_children
                if remaining_children:
                    copies = all_copies.get(parent_rev, None)
                else:
                    copies = all_copies.pop(parent_rev, None)

                if copies is None:
                    # this is a root
                    newcopies = copies = {}
                elif remaining_children:
                    newcopies = copies.copy()
                else:
                    newcopies = copies
                # chain the data in the edge with the existing data
                if changes is not None:
                    childcopies = {}
                    if parent == 1:
                        childcopies = changes.copied_from_p1
                    elif parent == 2:
                        childcopies = changes.copied_from_p2

                    if childcopies:
                        newcopies = copies.copy()
                        for dest, source in pycompat.iteritems(childcopies):
                            prev = copies.get(source)
                            if prev is not None and prev[1] is not None:
                                source = prev[1]
                            newcopies[dest] = (current_rev, source)
                        assert newcopies is not copies
                    if changes.removed:
                        for f in changes.removed:
                            if f in newcopies:
                                if newcopies is copies:
                                    # copy on write to avoid affecting
                                    # potential other branches. when there are
                                    # no other branches, this could be avoided.
                                    newcopies = copies.copy()
                                newcopies[f] = (current_rev, None)
                # check potential need to combine the data from another parent
                # (for that child). See comment below for details.
                if current_copies is None:
                    current_copies = newcopies
                else:
                    # we are the second parent to work on c, we need to merge
                    # our work with the other.
                    #
                    # In case of conflict, parent 1 takes precedence over
                    # parent 2. This is an arbitrary choice made anew when
                    # implementing changeset based copies. It was made without
                    # regard to potential filelog-related behavior.
                    assert parent == 2
                    current_copies = _merge_copies_dict(
                        newcopies,
                        current_copies,
                        isancestor,
                        changes,
                        current_rev,
                    )
            all_copies[current_rev] = current_copies

        # filter out internal details and return a {dest: source} mapping
        final_copies = {}
        for dest, (tt, source) in all_copies[targetrev].items():
            if source is not None:
                final_copies[dest] = source
    if not alwaysmatch:
        for filename in list(final_copies.keys()):
            if not match(filename):
                del final_copies[filename]
    return final_copies


# constants used to decide which side to pick with _merge_copies_dict
PICK_MINOR = 0
PICK_MAJOR = 1
PICK_EITHER = 2


def _merge_copies_dict(minor, major, isancestor, changes, current_merge):
    """merge two copies-mappings together, minor and major

    In case of conflict, the value from "major" will be picked.

    - `isancestor(low_rev, high_rev)`: callable returning True if `low_rev`
      is an ancestor of `high_rev`,

    - `ismerged(path)`: callable returning True if `path` has been merged in
      the current revision,

    return the resulting dict (in practice, the "minor" object, updated)
    """
    for dest, value in major.items():
        other = minor.get(dest)
        if other is None:
            minor[dest] = value
        else:
            pick, overwrite = _compare_values(
                changes, isancestor, dest, other, value
            )
            if overwrite:
                if pick == PICK_MAJOR:
                    minor[dest] = (current_merge, value[1])
                else:
                    minor[dest] = (current_merge, other[1])
            elif pick == PICK_MAJOR:
                minor[dest] = value
    return minor


def _compare_values(changes, isancestor, dest, minor, major):
    """compare two values within a _merge_copies_dict loop iteration

    return (pick, overwrite).

    - pick is one of PICK_MINOR, PICK_MAJOR or PICK_EITHER
    - overwrite is True if the pick is the resolution of an ambiguity that
      needs to be recorded.
    """
    major_tt, major_value = major
    minor_tt, minor_value = minor

    if major_tt == minor_tt:
        # if it comes from the same revision it must be the same value
        assert major_value == minor_value
        return PICK_EITHER, False
    elif (
        changes is not None
        and minor_value is not None
        and major_value is None
        and dest in changes.salvaged
    ):
        # In this case, a deletion was reverted, the "alive" value overwrites
        # the deleted one.
        return PICK_MINOR, True
    elif (
        changes is not None
        and major_value is not None
        and minor_value is None
        and dest in changes.salvaged
    ):
        # In this case, a deletion was reverted, the "alive" value overwrites
        # the deleted one.
        return PICK_MAJOR, True
    elif isancestor(minor_tt, major_tt):
        if changes is not None and dest in changes.merged:
            # change to dest happened on the branch without copy-source
            # change, so both sources are valid and "major" wins.
            return PICK_MAJOR, True
        else:
            return PICK_MAJOR, False
    elif isancestor(major_tt, minor_tt):
        if changes is not None and dest in changes.merged:
            # change to dest happened on the branch without copy-source
            # change, so both sources are valid and "major" wins.
            return PICK_MAJOR, True
        else:
            return PICK_MINOR, False
    elif minor_value is None:
        # in case of conflict, the "alive" side wins.
        return PICK_MAJOR, True
    elif major_value is None:
        # in case of conflict, the "alive" side wins.
        return PICK_MINOR, True
    else:
        # in case of conflict where both sides are alive, major wins.
        return PICK_MAJOR, True
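
# For instance (illustrative comment added for exposition): if only the
# "minor" side recorded a removal (minor_value is None) while the "major"
# side still has a live source, the branch above returns (PICK_MAJOR, True)
# and _merge_copies_dict() then records the resolution as
# (current_merge, major_value).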


def _revinfo_getter_extra(repo):
    """return a function that returns multiple data items given a <rev>:

    * p1: revision number of first parent
    * p2: revision number of second parent
    * p1copies: mapping of copies from p1
    * p2copies: mapping of copies from p2
    * removed: a list of removed files
    * ismerged: a callback to know if file was merged in that revision
    """
    cl = repo.changelog
    parents = cl.parentrevs

    def get_ismerged(rev):
        ctx = repo[rev]

        def ismerged(path):
            if path not in ctx.files():
                return False
            fctx = ctx[path]
            parents = fctx._filelog.parents(fctx._filenode)
            nb_parents = 0
            for n in parents:
                if n != nullid:
                    nb_parents += 1
            return nb_parents >= 2

        return ismerged
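
    # Note (comment added for exposition): a file counts as merged when its
    # filelog revision has two non-null parents, i.e. both sides of the
    # merge contributed to its content.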

    def revinfo(rev):
        p1, p2 = parents(rev)
        ctx = repo[rev]
        p1copies, p2copies = ctx._copies
        removed = ctx.filesremoved()
        return p1, p2, p1copies, p2copies, removed, get_ismerged(rev)

    return revinfo


def _combine_changeset_copies_extra(
    revs, children, targetrev, revinfo, match, isancestor
):
    """version of `_combine_changeset_copies` that works with the
    Google-specific "extra" based storage for copy information"""
    all_copies = {}
    alwaysmatch = match.always()
    for r in revs:
        copies = all_copies.pop(r, None)
        if copies is None:
            # this is a root
            copies = {}
        for i, c in enumerate(children[r]):
            p1, p2, p1copies, p2copies, removed, ismerged = revinfo(c)
            if r == p1:
                parent = 1
                childcopies = p1copies
            else:
                assert r == p2
                parent = 2
                childcopies = p2copies
            if not alwaysmatch:
                childcopies = {
                    dst: src for dst, src in childcopies.items() if match(dst)
                }
            newcopies = copies
            if childcopies:
                newcopies = copies.copy()
                for dest, source in pycompat.iteritems(childcopies):
                    prev = copies.get(source)
                    if prev is not None and prev[1] is not None:
                        source = prev[1]
                    newcopies[dest] = (c, source)
                assert newcopies is not copies
            for f in removed:
                if f in newcopies:
                    if newcopies is copies:
                        # copy on write to avoid affecting potential other
                        # branches. when there are no other branches, this
                        # could be avoided.
                        newcopies = copies.copy()
                    newcopies[f] = (c, None)
            othercopies = all_copies.get(c)
            if othercopies is None:
                all_copies[c] = newcopies
            else:
                # we are the second parent to work on c, we need to merge our
                # work with the other.
                #
                # In case of conflict, parent 1 takes precedence over parent 2.
                # This is an arbitrary choice made anew when implementing
                # changeset based copies. It was made without regard to
                # potential filelog-related behavior.
                if parent == 1:
                    _merge_copies_dict_extra(
                        othercopies, newcopies, isancestor, ismerged
                    )
                else:
                    _merge_copies_dict_extra(
                        newcopies, othercopies, isancestor, ismerged
                    )
                    all_copies[c] = newcopies

    final_copies = {}
    for dest, (tt, source) in all_copies[targetrev].items():
        if source is not None:
            final_copies[dest] = source
    return final_copies


def _merge_copies_dict_extra(minor, major, isancestor, ismerged):
    """version of `_merge_copies_dict` that works with the
    Google-specific "extra" based storage for copy information"""
    for dest, value in major.items():
        other = minor.get(dest)
        if other is None:
            minor[dest] = value
        else:
            new_tt = value[0]
            other_tt = other[0]
            if value[1] == other[1]:
                continue
            # content from "major" wins, unless it is older
            # than the branch point or there is a merge
            if (
                new_tt == other_tt
                or not isancestor(new_tt, other_tt)
                or ismerged(dest)
            ):
                minor[dest] = value
689
689
690 def _forwardcopies(a, b, base=None, match=None):
690 def _forwardcopies(a, b, base=None, match=None):
691 """find {dst@b: src@a} copy mapping where a is an ancestor of b"""
691 """find {dst@b: src@a} copy mapping where a is an ancestor of b"""
692
692
693 if base is None:
693 if base is None:
694 base = a
694 base = a
695 match = a.repo().narrowmatch(match)
695 match = a.repo().narrowmatch(match)
696 # check for working copy
696 # check for working copy
697 if b.rev() is None:
697 if b.rev() is None:
698 cm = _committedforwardcopies(a, b.p1(), base, match)
698 cm = _committedforwardcopies(a, b.p1(), base, match)
699 # combine copies from dirstate if necessary
699 # combine copies from dirstate if necessary
700 copies = _chain(cm, _dirstatecopies(b._repo, match))
700 copies = _chain(cm, _dirstatecopies(b._repo, match))
701 else:
701 else:
702 copies = _committedforwardcopies(a, b, base, match)
702 copies = _committedforwardcopies(a, b, base, match)
703 return copies
703 return copies


def _backwardrenames(a, b, match):
    """find renames from a to b"""
    if a._repo.ui.config(b'experimental', b'copytrace') == b'off':
        return {}

    # We don't want to pass in "match" here, since that would filter
    # the destination by it. Since we're reversing the copies, we want
    # to filter the source instead.
    copies = _forwardcopies(b, a)
    return _reverse_renames(copies, a, match)


def _reverse_renames(copies, dst, match):
    """given copies to context 'dst', finds renames from that context"""
    # Even though we're not taking copies into account, 1:n rename situations
    # can still exist (e.g. hg cp a b; hg mv a c). In those cases we
    # arbitrarily pick one of the renames.
    r = {}
    for k, v in sorted(pycompat.iteritems(copies)):
        if match and not match(v):
            continue
        # remove copies
        if v in dst:
            continue
        r[v] = k
    return r
732
732
733
733
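For readers tracing the data flow: _reverse_renames() is essentially a guarded dictionary inversion. A minimal standalone sketch, with plain dicts and a set standing in for "file still exists in the dst context" (the names here are illustrative, not part of this module)::

    def reverse_renames(copies, files_in_dst):
        # copies: {dst: src}. An entry whose source still exists in the
        # destination context is a copy, not a rename, so it is dropped;
        # the surviving pairs are flipped to {src: dst}.
        r = {}
        for dst, src in sorted(copies.items()):
            if src in files_in_dst:
                continue  # source survived: a copy, not a rename
            r[src] = dst  # 1:n renames collapse arbitrarily, as above
        return r

    # e.g. after "hg cp a b; hg mv a c": {'b': 'a', 'c': 'a'} -> {'a': 'c'}
    print(reverse_renames({'b': 'a', 'c': 'a'}, files_in_dst=set()))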
734 def pathcopies(x, y, match=None):
735     """find {dst@y: src@x} copy mapping for directed compare"""
736     repo = x._repo
737     debug = repo.ui.debugflag and repo.ui.configbool(b'devel', b'debug.copies')
738     if debug:
739         repo.ui.debug(
740             b'debug.copies: searching copies from %s to %s\n' % (x, y)
741         )
742     if x == y or not x or not y:
743         return {}
744     if y.rev() is None and x == y.p1():
745         if debug:
746             repo.ui.debug(b'debug.copies: search mode: dirstate\n')
747         # short-circuit to avoid issues with merge states
748         return _dirstatecopies(repo, match)
749     a = y.ancestor(x)
750     if a == x:
751         if debug:
752             repo.ui.debug(b'debug.copies: search mode: forward\n')
753         copies = _forwardcopies(x, y, match=match)
754     elif a == y:
755         if debug:
756             repo.ui.debug(b'debug.copies: search mode: backward\n')
757         copies = _backwardrenames(x, y, match=match)
758     else:
759         if debug:
760             repo.ui.debug(b'debug.copies: search mode: combined\n')
761         base = None
762         if a.rev() != nullrev:
763             base = x
764         x_copies = _forwardcopies(a, x)
765         y_copies = _forwardcopies(a, y, base, match=match)
766         same_keys = set(x_copies) & set(y_copies)
767         for k in same_keys:
768             if x_copies.get(k) == y_copies.get(k):
769                 del x_copies[k]
770                 del y_copies[k]
771         x_backward_renames = _reverse_renames(x_copies, x, match)
772         copies = _chain(
773             x_backward_renames,
774             y_copies,
775         )
776     _filter(x, y, copies)
777     return copies
778
779
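The combined branch above stitches two copy maps together with _chain(), which is defined earlier in this file and not shown in this hunk. Under the assumption that _chain() composes two {dst: src} mappings by resolving the second map's sources through the first, its effect can be sketched as::

    def chain(prefix, suffix):
        # hypothetical stand-in for _chain(): compose {dst: src} maps,
        # resolving each source of 'suffix' through 'prefix' when possible
        result = dict(prefix)
        for dst, src in suffix.items():
            result[dst] = prefix.get(src, src)
        return result

    # 'a' traces back to 'c' on one side; 'd' was copied from 'a' on the
    # other side, so 'd' ultimately traces back to 'c' as well
    print(chain({'a': 'c'}, {'d': 'a'}))  # {'a': 'c', 'd': 'c'}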
780 def mergecopies(repo, c1, c2, base):
781     """
782     Finds moves and copies between context c1 and c2 that are relevant for
783     merging. 'base' will be used as the merge base.
784
785     Copytracing is used in commands like rebase, merge, unshelve, etc to merge
786     files that were moved/copied in one merge parent and modified in another.
787     For example:
788
789     o          ---> 4 another commit
790     |
791     | o        ---> 3 commit that modifies a.txt
792     | /
793     o /        ---> 2 commit that moves a.txt to b.txt
794     |/
795     o          ---> 1 merge base
796
797     If we try to rebase revision 3 on revision 4, since there is no a.txt in
798     revision 4, and the user has copytrace disabled, we print the following
799     message:
800
801     ```other changed <file> which local deleted```
802
803     Returns a tuple where:
804
805     "branch_copies" is an instance of branch_copies.
806
807     "diverge" is a mapping of source name -> list of destination names
808     for divergent renames.
809
810     This function calls different copytracing algorithms based on config.
811     """
812     # avoid silly behavior for update from empty dir
813     if not c1 or not c2 or c1 == c2:
814         return branch_copies(), branch_copies(), {}
815
816     narrowmatch = c1.repo().narrowmatch()
817
818     # avoid silly behavior for parent -> working dir
819     if c2.node() is None and c1.node() == repo.dirstate.p1():
820         return (
821             branch_copies(_dirstatecopies(repo, narrowmatch)),
822             branch_copies(),
823             {},
824         )
825
826     copytracing = repo.ui.config(b'experimental', b'copytrace')
827     if stringutil.parsebool(copytracing) is False:
828         # stringutil.parsebool() returns None when it is unable to parse the
829         # value, so we should rely on making sure copytracing is on in such cases
830         return branch_copies(), branch_copies(), {}
831
832     if usechangesetcentricalgo(repo):
833         # The heuristics don't make sense when we need changeset-centric algos
834         return _fullcopytracing(repo, c1, c2, base)
835
836     # Copy trace disabling is explicitly below the node == p1 logic above
837     # because the logic above is required for a simple copy to be kept across a
838     # rebase.
839     if copytracing == b'heuristics':
840         # Do full copytracing if only non-public revisions are involved as
841         # that will be fast enough and will also cover the copies which could
842         # be missed by heuristics
843         if _isfullcopytraceable(repo, c1, base):
844             return _fullcopytracing(repo, c1, c2, base)
845         return _heuristicscopytracing(repo, c1, c2, base)
846     else:
847         return _fullcopytracing(repo, c1, c2, base)
848
849
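All of the dispatch above is configuration-driven. A typical hgrc enabling the heuristic tracer, together with the two limits referenced later in this file (the numeric values here are only illustrative), might look like::

    [experimental]
    # off / on (default) / heuristics
    copytrace = heuristics
    # bound the ranges for which full tracing is still attempted
    copytrace.sourcecommitlimit = 100
    copytrace.movecandidateslimit = 100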
850 def _isfullcopytraceable(repo, c1, base):
851     """Checks whether base, source and destination are all non-public
852     branches; if so, use the full copytrace algorithm for increased
853     capabilities, since it will be fast enough.
854
855     `experimental.copytrace.sourcecommitlimit` can be used to set a limit on
856     the number of changesets from c1 to base; if the number of changesets is
857     more than the limit, the full copytracing algorithm won't be used.
858     """
859     if c1.rev() is None:
860         c1 = c1.p1()
861     if c1.mutable() and base.mutable():
862         sourcecommitlimit = repo.ui.configint(
863             b'experimental', b'copytrace.sourcecommitlimit'
864         )
865         commits = len(repo.revs(b'%d::%d', base.rev(), c1.rev()))
866         return commits < sourcecommitlimit
867     return False
868
869
870 def _checksinglesidecopies(
871     src, dsts1, m1, m2, mb, c2, base, copy, renamedelete
872 ):
873     if src not in m2:
874         # deleted on side 2
875         if src not in m1:
876             # renamed on side 1, deleted on side 2
877             renamedelete[src] = dsts1
878     elif src not in mb:
879         # Work around the "short-circuit to avoid issues with merge states"
880         # thing in pathcopies(): pathcopies(x, y) can return a copy where the
881         # destination doesn't exist in y.
882         pass
883     elif mb[src] != m2[src] and not _related(c2[src], base[src]):
884         return
885     elif mb[src] != m2[src] or mb.flags(src) != m2.flags(src):
886         # modified on side 2
887         for dst in dsts1:
888             copy[dst] = src
889
890
891 class branch_copies(object):
892     """Information about copies made on one side of a merge/graft.
893
894     "copy" is a mapping from destination name -> source name,
895     where source is in c1 and destination is in c2 or vice-versa.
896
897     "movewithdir" is a mapping from source name -> destination name,
898     where the file at source, present in one context but not the other,
899     needs to be moved to destination by the merge process, because the
900     other context moved the directory it is in.
901
902     "renamedelete" is a mapping of source name -> list of destination
903     names for files deleted in c1 that were renamed in c2 or vice-versa.
904
905     "dirmove" is a mapping of detected source dir -> destination dir renames.
906     This is needed for handling changes to new files previously grafted into
907     renamed directories.
908     """
909
910     def __init__(
911         self, copy=None, renamedelete=None, dirmove=None, movewithdir=None
912     ):
913         self.copy = {} if copy is None else copy
914         self.renamedelete = {} if renamedelete is None else renamedelete
915         self.dirmove = {} if dirmove is None else dirmove
916         self.movewithdir = {} if movewithdir is None else movewithdir
917
918     def __repr__(self):
919         return '<branch_copies\n copy=%r\n renamedelete=%r\n dirmove=%r\n movewithdir=%r\n>' % (
920             self.copy,
921             self.renamedelete,
922             self.dirmove,
923             self.movewithdir,
924         )
925
926
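Since every attribute defaults to a fresh dict, call sites can build instances incrementally; a quick illustration of the container's shape, using the constructor exactly as defined above::

    bc = branch_copies(copy={b'b.txt': b'a.txt'})
    bc.renamedelete[b'old.txt'] = [b'new1.txt', b'new2.txt']
    bc.dirmove[b'src/'] = b'dst/'
    print(bc)  # __repr__ prints all four mappings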
927 def _fullcopytracing(repo, c1, c2, base):
928     """The full copytracing algorithm which finds all the new files that were
929     added from merge base up to the top commit and for each file it checks if
930     this file was copied from another file.
931
932     This is pretty slow when a lot of changesets are involved but will track all
933     the copies.
934     """
935     m1 = c1.manifest()
936     m2 = c2.manifest()
937     mb = base.manifest()
938
939     copies1 = pathcopies(base, c1)
940     copies2 = pathcopies(base, c2)
941
942     if not (copies1 or copies2):
943         return branch_copies(), branch_copies(), {}
944
945     inversecopies1 = {}
946     inversecopies2 = {}
947     for dst, src in copies1.items():
948         inversecopies1.setdefault(src, []).append(dst)
949     for dst, src in copies2.items():
950         inversecopies2.setdefault(src, []).append(dst)
951
952     copy1 = {}
953     copy2 = {}
954     diverge = {}
955     renamedelete1 = {}
956     renamedelete2 = {}
957     allsources = set(inversecopies1) | set(inversecopies2)
958     for src in allsources:
959         dsts1 = inversecopies1.get(src)
960         dsts2 = inversecopies2.get(src)
961         if dsts1 and dsts2:
962             # copied/renamed on both sides
963             if src not in m1 and src not in m2:
964                 # renamed on both sides
965                 dsts1 = set(dsts1)
966                 dsts2 = set(dsts2)
967                 # If there's some overlap in the rename destinations, we
968                 # consider it not divergent. For example, if side 1 copies 'a'
969                 # to 'b' and 'c' and deletes 'a', and side 2 copies 'a' to 'c'
970                 # and 'd' and deletes 'a'.
971                 if dsts1 & dsts2:
972                     for dst in dsts1 & dsts2:
973                         copy1[dst] = src
974                         copy2[dst] = src
975                 else:
976                     diverge[src] = sorted(dsts1 | dsts2)
977             elif src in m1 and src in m2:
978                 # copied on both sides
979                 dsts1 = set(dsts1)
980                 dsts2 = set(dsts2)
981                 for dst in dsts1 & dsts2:
982                     copy1[dst] = src
983                     copy2[dst] = src
984             # TODO: Handle cases where it was renamed on one side and copied
985             # on the other side
986         elif dsts1:
987             # copied/renamed only on side 1
988             _checksinglesidecopies(
989                 src, dsts1, m1, m2, mb, c2, base, copy1, renamedelete1
990             )
991         elif dsts2:
992             # copied/renamed only on side 2
993             _checksinglesidecopies(
994                 src, dsts2, m2, m1, mb, c1, base, copy2, renamedelete2
995             )
996
997     # find interesting file sets from manifests
998     cache = []
999
1000     def _get_addedfiles(idx):
1001         if not cache:
1002             addedinm1 = m1.filesnotin(mb, repo.narrowmatch())
1003             addedinm2 = m2.filesnotin(mb, repo.narrowmatch())
1004             u1 = sorted(addedinm1 - addedinm2)
1005             u2 = sorted(addedinm2 - addedinm1)
1006             cache.extend((u1, u2))
1007         return cache[idx]
1008
1009     u1fn = lambda: _get_addedfiles(0)
1010     u2fn = lambda: _get_addedfiles(1)
1011     if repo.ui.debugflag:
1012         u1 = u1fn()
1013         u2 = u2fn()
1014
1015         header = b"  unmatched files in %s"
1016         if u1:
1017             repo.ui.debug(
1018                 b"%s:\n   %s\n" % (header % b'local', b"\n   ".join(u1))
1019             )
1020         if u2:
1021             repo.ui.debug(
1022                 b"%s:\n   %s\n" % (header % b'other', b"\n   ".join(u2))
1023             )
1024
1025         renamedeleteset = set()
1026         divergeset = set()
1027         for dsts in diverge.values():
1028             divergeset.update(dsts)
1029         for dsts in renamedelete1.values():
1030             renamedeleteset.update(dsts)
1031         for dsts in renamedelete2.values():
1032             renamedeleteset.update(dsts)
1033
1034         repo.ui.debug(
1035             b"  all copies found (* = to merge, ! = divergent, "
1036             b"% = renamed and deleted):\n"
1037         )
1038         for side, copies in ((b"local", copies1), (b"remote", copies2)):
1039             if not copies:
1040                 continue
1041             repo.ui.debug(b"  on %s side:\n" % side)
1042             for f in sorted(copies):
1043                 note = b""
1044                 if f in copy1 or f in copy2:
1045                     note += b"*"
1046                 if f in divergeset:
1047                     note += b"!"
1048                 if f in renamedeleteset:
1049                     note += b"%"
1050                 repo.ui.debug(
1051                     b"   src: '%s' -> dst: '%s' %s\n" % (copies[f], f, note)
1052                 )
1053         del renamedeleteset
1054         del divergeset
1055
1056     repo.ui.debug(b"  checking for directory renames\n")
1057
1058     dirmove1, movewithdir2 = _dir_renames(repo, c1, copy1, copies1, u2fn)
1059     dirmove2, movewithdir1 = _dir_renames(repo, c2, copy2, copies2, u1fn)
1060
1061     branch_copies1 = branch_copies(copy1, renamedelete1, dirmove1, movewithdir1)
1062     branch_copies2 = branch_copies(copy2, renamedelete2, dirmove2, movewithdir2)
1063
1064     return branch_copies1, branch_copies2, diverge
1065
1066
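Stripped of the manifest checks, the both-sides classification above reduces to inverting the two {dst: src} maps and intersecting destination sets. A self-contained sketch of just the divergence test (it deliberately ignores the "source still present" cases handled above)::

    def find_divergence(copies1, copies2):
        inverse1, inverse2 = {}, {}
        for dst, src in copies1.items():
            inverse1.setdefault(src, set()).add(dst)
        for dst, src in copies2.items():
            inverse2.setdefault(src, set()).add(dst)
        diverge = {}
        for src in set(inverse1) & set(inverse2):
            dsts1, dsts2 = inverse1[src], inverse2[src]
            if not dsts1 & dsts2:
                # renamed to disjoint names on each side: divergent
                diverge[src] = sorted(dsts1 | dsts2)
        return diverge

    print(find_divergence({'b': 'a'}, {'c': 'a'}))  # {'a': ['b', 'c']}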
1067 def _dir_renames(repo, ctx, copy, fullcopy, addedfilesfn):
1068     """Finds moved directories and files that should move with them.
1069
1070     ctx: the context for one of the sides
1071     copy: files copied on the same side (as ctx)
1072     fullcopy: files copied on the same side (as ctx), including those that
1073       merge.manifestmerge() won't care about
1074     addedfilesfn: function returning added files on the other side (compared to
1075       ctx)
1076     """
1077     # generate a directory move map
1078     invalid = set()
1079     dirmove = {}
1080
1081     # examine each file copy for a potential directory move, which is
1082     # when all the files in a directory are moved to a new directory
1083     for dst, src in pycompat.iteritems(fullcopy):
1084         dsrc, ddst = pathutil.dirname(src), pathutil.dirname(dst)
1085         if dsrc in invalid:
1086             # already seen to be uninteresting
1087             continue
1088         elif ctx.hasdir(dsrc) and ctx.hasdir(ddst):
1089             # directory wasn't entirely moved locally
1090             invalid.add(dsrc)
1091         elif dsrc in dirmove and dirmove[dsrc] != ddst:
1092             # files from the same directory moved to two different places
1093             invalid.add(dsrc)
1094         else:
1095             # looks good so far
1096             dirmove[dsrc] = ddst
1097
1098     for i in invalid:
1099         if i in dirmove:
1100             del dirmove[i]
1101     del invalid
1102
1103     if not dirmove:
1104         return {}, {}
1105
1106     dirmove = {k + b"/": v + b"/" for k, v in pycompat.iteritems(dirmove)}
1107
1108     for d in dirmove:
1109         repo.ui.debug(
1110             b"   discovered dir src: '%s' -> dst: '%s'\n" % (d, dirmove[d])
1111         )
1112
1113     # Sort the directories in reverse order, so we find children first
1114     # For example, if dir1/ was renamed to dir2/, and dir1/subdir1/
1115     # was renamed to dir2/subdir2/, we want to move dir1/subdir1/file
1116     # to dir2/subdir2/file (not dir2/subdir1/file)
1117     dirmove_children_first = sorted(dirmove, reverse=True)
1118
1119     movewithdir = {}
1120     # check unaccounted nonoverlapping files against directory moves
1121     for f in addedfilesfn():
1122         if f not in fullcopy:
1123             for d in dirmove_children_first:
1124                 if f.startswith(d):
1125                     # new file added in a directory that was moved, move it
1126                     df = dirmove[d] + f[len(d) :]
1127                     if df not in copy:
1128                         movewithdir[f] = df
1129                         repo.ui.debug(
1130                             b"   pending file src: '%s' -> dst: '%s'\n"
1131                             % (f, df)
1132                         )
1133                     break
1134
1135     return dirmove, movewithdir
1136
1137
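The per-file loop above generalizes individual file copies into whole-directory moves via an invalidation set. Here is a minimal, self-contained rendition of that logic, using posixpath.dirname in place of pathutil.dirname and a plain set standing in for ctx.hasdir() (names are illustrative)::

    import posixpath

    def find_dirmoves(fullcopy, dirs_still_present):
        invalid, dirmove = set(), {}
        for dst, src in fullcopy.items():
            dsrc = posixpath.dirname(src)
            ddst = posixpath.dirname(dst)
            if dsrc in invalid:
                continue
            elif dsrc in dirs_still_present and ddst in dirs_still_present:
                invalid.add(dsrc)   # directory wasn't entirely moved
            elif dirmove.get(dsrc, ddst) != ddst:
                invalid.add(dsrc)   # moved to two different places
            else:
                dirmove[dsrc] = ddst
        return {k: v for k, v in dirmove.items() if k not in invalid}

    moves = find_dirmoves({'dir2/f1': 'dir1/f1', 'dir2/f2': 'dir1/f2'},
                          dirs_still_present={'dir2'})
    print(moves)  # {'dir1': 'dir2'}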
1138 def _heuristicscopytracing(repo, c1, c2, base):
1139     """Fast copytracing using filename heuristics
1140
1141     Assumes that moves or renames are of the following two types:
1142
1143     1) Inside a directory only (same directory name but different filenames)
1144     2) Move from one directory to another
1145        (same filenames but different directory names)
1146
1147     Works only when there are no merge commits in the "source branch".
1148     The source branch is commits from base up to c2, not including base.
1149
1150     If a merge is involved, it falls back to _fullcopytracing().
1151
1152     Can be used by setting the following config:
1153
1154         [experimental]
1155         copytrace = heuristics
1156
1157     In some cases the copy/move candidates found by heuristics can be very large
1158     in number and that will make the algorithm slow. The number of possible
1159     candidates to check can be limited by using the config
1160     `experimental.copytrace.movecandidateslimit` which defaults to 100.
1161     """
1162
1163     if c1.rev() is None:
1164         c1 = c1.p1()
1165     if c2.rev() is None:
1166         c2 = c2.p1()
1167
1168     changedfiles = set()
1169     m1 = c1.manifest()
1170     if not repo.revs(b'%d::%d', base.rev(), c2.rev()):
1171         # If base is not in c2 branch, we switch to fullcopytracing
1172         repo.ui.debug(
1173             b"switching to full copytracing as base is not "
1174             b"an ancestor of c2\n"
1175         )
1176         return _fullcopytracing(repo, c1, c2, base)
1177
1178     ctx = c2
1179     while ctx != base:
1180         if len(ctx.parents()) == 2:
1181             # To keep things simple let's not handle merges
1182             repo.ui.debug(b"switching to full copytracing because of merges\n")
1183             return _fullcopytracing(repo, c1, c2, base)
1184         changedfiles.update(ctx.files())
1185         ctx = ctx.p1()
1186
1187     copies2 = {}
1188     cp = _forwardcopies(base, c2)
1189     for dst, src in pycompat.iteritems(cp):
1190         if src in m1:
1191             copies2[dst] = src
1192
1193     # file is missing if it isn't present in the destination, but is present in
1194     # the base and present in the source.
1195     # Presence in the base is important to exclude added files, presence in the
1196     # source is important to exclude removed files.
1197     filt = lambda f: f not in m1 and f in base and f in c2
1198     missingfiles = [f for f in changedfiles if filt(f)]
1199
1200     copies1 = {}
1201     if missingfiles:
1202         basenametofilename = collections.defaultdict(list)
1203         dirnametofilename = collections.defaultdict(list)
1204
1205         for f in m1.filesnotin(base.manifest()):
1206             basename = os.path.basename(f)
1207             dirname = os.path.dirname(f)
1208             basenametofilename[basename].append(f)
1209             dirnametofilename[dirname].append(f)
1210
1211         for f in missingfiles:
1212             basename = os.path.basename(f)
1213             dirname = os.path.dirname(f)
1214             samebasename = basenametofilename[basename]
1215             samedirname = dirnametofilename[dirname]
1216             movecandidates = samebasename + samedirname
1217             # f is guaranteed to be present in c2, that's why
1218             # c2.filectx(f) won't fail
1219             f2 = c2.filectx(f)
1220             # we can have a lot of candidates which can slow down the heuristics
1221             # config value to limit the number of candidate moves to check
1222             maxcandidates = repo.ui.configint(
1223                 b'experimental', b'copytrace.movecandidateslimit'
1224             )
1225
1226             if len(movecandidates) > maxcandidates:
1227                 repo.ui.status(
1228                     _(
1229                         b"skipping copytracing for '%s', more "
1230                         b"candidates than the limit: %d\n"
1231                     )
1232                     % (f, len(movecandidates))
1233                 )
1234                 continue
1235
1236             for candidate in movecandidates:
1237                 f1 = c1.filectx(candidate)
1238                 if _related(f1, f2):
1239                     # if there are a few related copies then we'll merge
1240                     # changes into all of them. This matches the behaviour
1241                     # of upstream copytracing
1242                     copies1[candidate] = f
1243
1244     return branch_copies(copies1), branch_copies(copies2), {}
1245
1246
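The candidate search above is just two defaultdict indexes over the files added on the c1 side; the same lookup, extracted into a small runnable sketch (function and variable names are illustrative)::

    import collections
    import os

    def move_candidates(added_in_c1, missing_file):
        # index the added files by basename and by directory, then look up
        # the missing file's basename and directory, exactly as above
        by_base = collections.defaultdict(list)
        by_dir = collections.defaultdict(list)
        for f in added_in_c1:
            by_base[os.path.basename(f)].append(f)
            by_dir[os.path.dirname(f)].append(f)
        return (by_base[os.path.basename(missing_file)]
                + by_dir[os.path.dirname(missing_file)])

    # 'a/x.c' vanished; same-name and same-directory files are candidates
    print(move_candidates(['b/x.c', 'a/y.c'], 'a/x.c'))  # ['b/x.c', 'a/y.c']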
1247 def _related(f1, f2):
1248     """return True if f1 and f2 filectx have a common ancestor
1249
1250     Walk back to common ancestor to see if the two files originate
1251     from the same file. Since workingfilectx's rev() is None it messes
1252     up the integer comparison logic, hence the pre-step check for
1253     None (f1 and f2 can only be workingfilectx's initially).
1254     """
1255
1256     if f1 == f2:
1257         return True  # a match
1258
1259     g1, g2 = f1.ancestors(), f2.ancestors()
1260     try:
1261         f1r, f2r = f1.linkrev(), f2.linkrev()
1262
1263         if f1r is None:
1264             f1 = next(g1)
1265         if f2r is None:
1266             f2 = next(g2)
1267
1268         while True:
1269             f1r, f2r = f1.linkrev(), f2.linkrev()
1270             if f1r > f2r:
1271                 f1 = next(g1)
1272             elif f2r > f1r:
1273                 f2 = next(g2)
1274             else:  # f1 and f2 point to files in the same linkrev
1275                 return f1 == f2  # true if they point to the same file
1276     except StopIteration:
1277         return False
1278
1279
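The try/except walk above is a classic two-pointer descent: always step back the side sitting at the higher linkrev until both sides meet or one runs out of history. Abstracted over plain (linkrev, file_id) tuples instead of filectx objects, it can be read as::

    def related(h1, h2):
        # h1/h2: ancestor chains as [(linkrev, file_id), ...] sorted by
        # decreasing linkrev (a stand-in for filectx plus its ancestors())
        i = j = 0
        while i < len(h1) and j < len(h2):
            r1, r2 = h1[i][0], h2[j][0]
            if r1 > r2:
                i += 1          # step the newer side back in history
            elif r2 > r1:
                j += 1
            else:
                return h1[i] == h2[j]  # same linkrev: same file?
        return False            # one side ran out: unrelated

    print(related([(5, 'a'), (2, 'root')], [(4, 'b'), (2, 'root')]))  # True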
1280 def graftcopies(wctx, ctx, base):
1281     """reproduce copies between base and ctx in the wctx
1282
1283     Unlike mergecopies(), this function will only consider copies between base
1284     and ctx; it will ignore copies between base and wctx. Also unlike
1285     mergecopies(), this function will apply copies to the working copy (instead
1286     of just returning information about the copies). That makes it cheaper
1287     (especially in the common case of base==ctx.p1()) and useful also when
1288     experimental.copytrace=off.
1289
1290     merge.update() will have already marked most copies, but it will only
1291     mark copies if it thinks the source files are related (see
1292     merge._related()). It will also not mark copies if the file wasn't modified
1293     on the local side. This function adds the copies that were "missed"
1294     by merge.update().
1295     """
1296     new_copies = pathcopies(base, ctx)
1297     parent = wctx.p1()
1298     _filter(parent, wctx, new_copies)
1299     # Extra filtering to drop copy information for files that existed before
1300     # the graft. This is to handle the case of grafting a rename onto a commit
1301     # that already has the rename. Otherwise the presence of copy information
1302     # would result in the creation of an empty commit where we would prefer to
1303     # not create one.
1304     for dest, __ in list(new_copies.items()):
1305         if dest in parent:
1306             del new_copies[dest]
1307     for dst, src in pycompat.iteritems(new_copies):
1308         wctx[dst].markcopied(src)
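The pre-existence filter at the end is worth seeing in isolation: destinations already present in the parent are dropped so the graft does not re-record a rename and produce an empty commit. A tiny sketch over plain containers (names here are illustrative)::

    def drop_preexisting(new_copies, files_in_parent):
        # new_copies: {dst: src} from pathcopies(base, ctx); a rename the
        # parent already has must not be re-recorded by the graft
        return {dst: src for dst, src in new_copies.items()
                if dst not in files_in_parent}

    print(drop_preexisting({'b': 'a', 'c': 'a'}, files_in_parent={'b'}))
    # {'c': 'a'}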
@@ -1,1251 +1,1252 b''
1 # logcmdutil.py - utility for log-like commands
2 #
3 # Copyright 2005-2007 Olivia Mackall <olivia@selenic.com>
4 #
5 # This software may be used and distributed according to the terms of the
6 # GNU General Public License version 2 or any later version.
7
8 from __future__ import absolute_import
9
10 import itertools
11 import os
12 import posixpath
13
14 from .i18n import _
15 from .node import (
16     nullid,
17     nullrev,
18     wdirid,
19     wdirrev,
20 )
21
22 from .thirdparty import attr
23
24 from . import (
25     dagop,
26     error,
27     formatter,
28     graphmod,
29     match as matchmod,
30     mdiff,
31     merge,
32     patch,
33     pathutil,
34     pycompat,
35     revset,
36     revsetlang,
37     scmutil,
38     smartset,
39     templatekw,
40     templater,
41     util,
42 )
43 from .utils import (
44     dateutil,
45     stringutil,
46 )
47
48
49 if pycompat.TYPE_CHECKING:
50     from typing import (
51         Any,
52         Callable,
53         Dict,
54         List,
55         Optional,
56         Sequence,
57         Tuple,
58     )
59
60     for t in (Any, Callable, Dict, List, Optional, Tuple):
61         assert t
62
63
64 def getlimit(opts):
65     """get the log limit according to option -l/--limit"""
66     limit = opts.get(b'limit')
67     if limit:
68         try:
69             limit = int(limit)
70         except ValueError:
71             raise error.Abort(_(b'limit must be a positive integer'))
72         if limit <= 0:
73             raise error.Abort(_(b'limit must be positive'))
74     else:
75         limit = None
76     return limit
77
78
79 def diff_parent(ctx):
80     """get the context object to use as parent when diffing
81
82
83     If diff.merge is enabled, an overlayworkingctx of the auto-merged parents will be returned.
84     """
85     repo = ctx.repo()
86     if repo.ui.configbool(b"diff", b"merge") and ctx.p2().rev() != nullrev:
87         # avoid cycle context -> subrepo -> cmdutil -> logcmdutil
88         from . import context
89
90         wctx = context.overlayworkingctx(repo)
91         wctx.setbase(ctx.p1())
92         with repo.ui.configoverride(
93             {
94                 (
95                     b"ui",
96                     b"forcemerge",
97                 ): b"internal:merge3-lie-about-conflicts",
98             },
99             b"merge-diff",
100         ):
101             repo.ui.pushbuffer()
102             merge.merge(ctx.p2(), wc=wctx)
103             repo.ui.popbuffer()
104         return wctx
105     else:
106         return ctx.p1()
107
108
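The auto-merge behaviour above hangs off a single knob, read via repo.ui.configbool(b"diff", b"merge"); a minimal hgrc sketch to turn it on::

    [diff]
    # diff merge commits against an auto-merge of their two parents,
    # as computed by diff_parent() above
    merge = True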
109 def diffordiffstat(
110     ui,
111     repo,
112     diffopts,
113     ctx1,
114     ctx2,
115     match,
116     changes=None,
117     stat=False,
118     fp=None,
119     graphwidth=0,
120     prefix=b'',
121     root=b'',
122     listsubrepos=False,
123     hunksfilterfn=None,
124 ):
125     '''show diff or diffstat.'''
126     if root:
127         relroot = pathutil.canonpath(repo.root, repo.getcwd(), root)
128     else:
129         relroot = b''
130     copysourcematch = None
131
132     def compose(f, g):
133         return lambda x: f(g(x))
134
135     def pathfn(f):
136         return posixpath.join(prefix, f)
137
138     if relroot != b'':
139         # XXX relative roots currently don't work if the root is within a
140         # subrepo
141         uipathfn = scmutil.getuipathfn(repo, legacyrelativevalue=True)
142         uirelroot = uipathfn(pathfn(relroot))
143         relroot += b'/'
144         for matchroot in match.files():
145             if not matchroot.startswith(relroot):
146                 ui.warn(
147                     _(b'warning: %s not inside relative root %s\n')
148                     % (uipathfn(pathfn(matchroot)), uirelroot)
149                 )
150
151         relrootmatch = scmutil.match(ctx2, pats=[relroot], default=b'path')
152         match = matchmod.intersectmatchers(match, relrootmatch)
153         copysourcematch = relrootmatch
154
155         checkroot = repo.ui.configbool(
156             b'devel', b'all-warnings'
157         ) or repo.ui.configbool(b'devel', b'check-relroot')
158
159         def relrootpathfn(f):
160             if checkroot and not f.startswith(relroot):
161                 raise AssertionError(
162                     b"file %s doesn't start with relroot %s" % (f, relroot)
163                 )
164             return f[len(relroot) :]
165
166         pathfn = compose(relrootpathfn, pathfn)
167
168     if stat:
169         diffopts = diffopts.copy(context=0, noprefix=False)
170         width = 80
171         if not ui.plain():
172             width = ui.termwidth() - graphwidth
173     # If an explicit --root was given, don't respect ui.relative-paths
174     if not relroot:
175         pathfn = compose(scmutil.getuipathfn(repo), pathfn)
176
177     chunks = ctx2.diff(
178         ctx1,
179         match,
180         changes,
181         opts=diffopts,
182         pathfn=pathfn,
183         copysourcematch=copysourcematch,
184         hunksfilterfn=hunksfilterfn,
185     )
186
187     if fp is not None or ui.canwritewithoutlabels():
188         out = fp or ui
189         if stat:
190             chunks = [patch.diffstat(util.iterlines(chunks), width=width)]
191         for chunk in util.filechunkiter(util.chunkbuffer(chunks)):
192             out.write(chunk)
193     else:
194         if stat:
195             chunks = patch.diffstatui(util.iterlines(chunks), width=width)
196         else:
197             chunks = patch.difflabel(
198                 lambda chunks, **kwargs: chunks, chunks, opts=diffopts
199             )
200         if ui.canbatchlabeledwrites():
201
202             def gen():
203                 for chunk, label in chunks:
204                     yield ui.label(chunk, label=label)
205
206             for chunk in util.filechunkiter(util.chunkbuffer(gen())):
207                 ui.write(chunk)
208         else:
209             for chunk, label in chunks:
210                 ui.write(chunk, label=label)
211
212     node2 = ctx2.node()
213     for subpath, sub in scmutil.itersubrepos(ctx1, ctx2):
214         tempnode2 = node2
215         try:
216             if node2 is not None:
217                 tempnode2 = ctx2.substate[subpath][1]
218         except KeyError:
219             # A subrepo that existed in node1 was deleted between node1 and
220             # node2 (inclusive). Thus, ctx2's substate won't contain that
221             # subpath. The best we can do is to ignore it.
222             tempnode2 = None
223         submatch = matchmod.subdirmatcher(subpath, match)
224         subprefix = repo.wvfs.reljoin(prefix, subpath)
225         if listsubrepos or match.exact(subpath) or any(submatch.files()):
226             sub.diff(
227                 ui,
228                 diffopts,
229                 tempnode2,
230                 submatch,
231                 changes=changes,
232                 stat=stat,
233                 fp=fp,
234                 prefix=subprefix,
235             )
236
237
238 class changesetdiffer(object):
239     """Generate diff of changeset with pre-configured filtering functions"""
240
241     def _makefilematcher(self, ctx):
242         return scmutil.matchall(ctx.repo())
243
244     def _makehunksfilter(self, ctx):
245         return None
246
247     def showdiff(self, ui, ctx, diffopts, graphwidth=0, stat=False):
248         diffordiffstat(
249             ui,
250             ctx.repo(),
251             diffopts,
252             diff_parent(ctx),
253             ctx,
254             match=self._makefilematcher(ctx),
255             stat=stat,
256             graphwidth=graphwidth,
257             hunksfilterfn=self._makehunksfilter(ctx),
258         )
259
260
261 def changesetlabels(ctx):
262     labels = [b'log.changeset', b'changeset.%s' % ctx.phasestr()]
263     if ctx.obsolete():
264         labels.append(b'changeset.obsolete')
265     if ctx.isunstable():
266         labels.append(b'changeset.unstable')
267         for instability in ctx.instabilities():
268             labels.append(b'instability.%s' % instability)
269     return b' '.join(labels)
270
271
272 class changesetprinter(object):
273     '''show changeset information when templating not requested.'''
274
275     def __init__(self, ui, repo, differ=None, diffopts=None, buffered=False):
276         self.ui = ui
277         self.repo = repo
278         self.buffered = buffered
279         self._differ = differ or changesetdiffer()
280         self._diffopts = patch.diffallopts(ui, diffopts)
281         self._includestat = diffopts and diffopts.get(b'stat')
282         self._includediff = diffopts and diffopts.get(b'patch')
283         self.header = {}
284         self.hunk = {}
285         self.lastheader = None
286         self.footer = None
287         self._columns = templatekw.getlogcolumns()
288
289     def flush(self, ctx):
290         rev = ctx.rev()
291         if rev in self.header:
292             h = self.header[rev]
293             if h != self.lastheader:
294                 self.lastheader = h
295                 self.ui.write(h)
296             del self.header[rev]
297         if rev in self.hunk:
298             self.ui.write(self.hunk[rev])
299             del self.hunk[rev]
300
301     def close(self):
302         if self.footer:
303             self.ui.write(self.footer)
304
305     def show(self, ctx, copies=None, **props):
306         props = pycompat.byteskwargs(props)
307         if self.buffered:
308             self.ui.pushbuffer(labeled=True)
309             self._show(ctx, copies, props)
310             self.hunk[ctx.rev()] = self.ui.popbuffer()
311         else:
312             self._show(ctx, copies, props)
313
314     def _show(self, ctx, copies, props):
315         '''show a single changeset or file revision'''
316         changenode = ctx.node()
317         graphwidth = props.get(b'graphwidth', 0)
318
319         if self.ui.quiet:
320             self.ui.write(
321                 b"%s\n" % scmutil.formatchangeid(ctx), label=b'log.node'
322             )
323             return
324
325         columns = self._columns
326         self.ui.write(
327             columns[b'changeset'] % scmutil.formatchangeid(ctx),
328             label=changesetlabels(ctx),
329         )
330
331         # branches are shown first before any other names due to backwards
332         # compatibility
333         branch = ctx.branch()
334         # don't show the default branch name
335         if branch != b'default':
336             self.ui.write(columns[b'branch'] % branch, label=b'log.branch')
337
338         for nsname, ns in pycompat.iteritems(self.repo.names):
339             # branches has special logic already handled above, so here we just
340             # skip it
341             if nsname == b'branches':
342                 continue
343             # we will use the templatename as the color name since those two
344             # should be the same
345             for name in ns.names(self.repo, changenode):
346                 self.ui.write(ns.logfmt % name, label=b'log.%s' % ns.colorname)
347         if self.ui.debugflag:
348             self.ui.write(
349                 columns[b'phase'] % ctx.phasestr(), label=b'log.phase'
350             )
351         for pctx in scmutil.meaningfulparents(self.repo, ctx):
352             label = b'log.parent changeset.%s' % pctx.phasestr()
353             self.ui.write(
354                 columns[b'parent'] % scmutil.formatchangeid(pctx), label=label
355             )
356
357         if self.ui.debugflag:
358             mnode = ctx.manifestnode()
359             if mnode is None:
360                 mnode = wdirid
361                 mrev = wdirrev
362             else:
363                 mrev = self.repo.manifestlog.rev(mnode)
364             self.ui.write(
365                 columns[b'manifest']
366                 % scmutil.formatrevnode(self.ui, mrev, mnode),
367                 label=b'ui.debug log.manifest',
368             )
369         self.ui.write(columns[b'user'] % ctx.user(), label=b'log.user')
370         self.ui.write(
371             columns[b'date'] % dateutil.datestr(ctx.date()), label=b'log.date'
372         )
373
374         if ctx.isunstable():
375             instabilities = ctx.instabilities()
376             self.ui.write(
377                 columns[b'instability'] % b', '.join(instabilities),
378                 label=b'log.instability',
379             )
380
381         elif ctx.obsolete():
382             self._showobsfate(ctx)
383
384         self._exthook(ctx)
385
386         if self.ui.debugflag:
387             files = ctx.p1().status(ctx)
388             for key, value in zip(
389                 [b'files', b'files+', b'files-'],
390                 [files.modified, files.added, files.removed],
391             ):
392                 if value:
393                     self.ui.write(
394                         columns[key] % b" ".join(value),
395                         label=b'ui.debug log.files',
396                     )
397         elif ctx.files() and self.ui.verbose:
398             self.ui.write(
399                 columns[b'files'] % b" ".join(ctx.files()),
400                 label=b'ui.note log.files',
401             )
402         if copies and self.ui.verbose:
403             copies = [b'%s (%s)' % c for c in copies]
404             self.ui.write(
405                 columns[b'copies'] % b' '.join(copies),
406                 label=b'ui.note log.copies',
407             )
408
409         extra = ctx.extra()
410         if extra and self.ui.debugflag:
411             for key, value in sorted(extra.items()):
411 self.ui.write(
412 self.ui.write(
412 columns[b'extra'] % (key, stringutil.escapestr(value)),
413 columns[b'extra'] % (key, stringutil.escapestr(value)),
413 label=b'ui.debug log.extra',
414 label=b'ui.debug log.extra',
414 )
415 )
415
416
416 description = ctx.description().strip()
417 description = ctx.description().strip()
417 if description:
418 if description:
418 if self.ui.verbose:
419 if self.ui.verbose:
419 self.ui.write(
420 self.ui.write(
420 _(b"description:\n"), label=b'ui.note log.description'
421 _(b"description:\n"), label=b'ui.note log.description'
421 )
422 )
422 self.ui.write(description, label=b'ui.note log.description')
423 self.ui.write(description, label=b'ui.note log.description')
423 self.ui.write(b"\n\n")
424 self.ui.write(b"\n\n")
424 else:
425 else:
425 self.ui.write(
426 self.ui.write(
426 columns[b'summary'] % description.splitlines()[0],
427 columns[b'summary'] % description.splitlines()[0],
427 label=b'log.summary',
428 label=b'log.summary',
428 )
429 )
429 self.ui.write(b"\n")
430 self.ui.write(b"\n")
430
431
431 self._showpatch(ctx, graphwidth)
432 self._showpatch(ctx, graphwidth)
432
433
433 def _showobsfate(self, ctx):
434 def _showobsfate(self, ctx):
434 # TODO: do not depend on templater
435 # TODO: do not depend on templater
435 tres = formatter.templateresources(self.repo.ui, self.repo)
436 tres = formatter.templateresources(self.repo.ui, self.repo)
436 t = formatter.maketemplater(
437 t = formatter.maketemplater(
437 self.repo.ui,
438 self.repo.ui,
438 b'{join(obsfate, "\n")}',
439 b'{join(obsfate, "\n")}',
439 defaults=templatekw.keywords,
440 defaults=templatekw.keywords,
440 resources=tres,
441 resources=tres,
441 )
442 )
442 obsfate = t.renderdefault({b'ctx': ctx}).splitlines()
443 obsfate = t.renderdefault({b'ctx': ctx}).splitlines()
443
444
444 if obsfate:
445 if obsfate:
445 for obsfateline in obsfate:
446 for obsfateline in obsfate:
446 self.ui.write(
447 self.ui.write(
447 self._columns[b'obsolete'] % obsfateline,
448 self._columns[b'obsolete'] % obsfateline,
448 label=b'log.obsfate',
449 label=b'log.obsfate',
449 )
450 )
450
451
451 def _exthook(self, ctx):
452 def _exthook(self, ctx):
452 """empty method used by extension as a hook point"""
453 """empty method used by extension as a hook point"""
453
454
454 def _showpatch(self, ctx, graphwidth=0):
455 def _showpatch(self, ctx, graphwidth=0):
455 if self._includestat:
456 if self._includestat:
456 self._differ.showdiff(
457 self._differ.showdiff(
457 self.ui, ctx, self._diffopts, graphwidth, stat=True
458 self.ui, ctx, self._diffopts, graphwidth, stat=True
458 )
459 )
459 if self._includestat and self._includediff:
460 if self._includestat and self._includediff:
460 self.ui.write(b"\n")
461 self.ui.write(b"\n")
461 if self._includediff:
462 if self._includediff:
462 self._differ.showdiff(
463 self._differ.showdiff(
463 self.ui, ctx, self._diffopts, graphwidth, stat=False
464 self.ui, ctx, self._diffopts, graphwidth, stat=False
464 )
465 )
465 if self._includestat or self._includediff:
466 if self._includestat or self._includediff:
466 self.ui.write(b"\n")
467 self.ui.write(b"\n")
467
468
468
469
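

# Added sketch (not part of the original module): how a caller is expected
# to drive the buffered show()/flush()/close() protocol defined above. The
# revs argument is a hypothetical iterable of revision numbers.
def _example_buffered_display(ui, repo, revs):
    displayer = changesetdisplayer(ui, repo, {}, buffered=True)
    for rev in revs:
        ctx = repo[rev]
        displayer.show(ctx)  # rendered text is buffered in displayer.hunk
        displayer.flush(ctx)  # writes the header once, then the hunk
    displayer.close()  # emits any footer accumulated by templates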


class changesetformatter(changesetprinter):
    """Format changeset information by generic formatter"""

    def __init__(
        self, ui, repo, fm, differ=None, diffopts=None, buffered=False
    ):
        changesetprinter.__init__(self, ui, repo, differ, diffopts, buffered)
        self._diffopts = patch.difffeatureopts(ui, diffopts, git=True)
        self._fm = fm

    def close(self):
        self._fm.end()

    def _show(self, ctx, copies, props):
        '''show a single changeset or file revision'''
        fm = self._fm
        fm.startitem()
        fm.context(ctx=ctx)
        fm.data(rev=scmutil.intrev(ctx), node=fm.hexfunc(scmutil.binnode(ctx)))

        datahint = fm.datahint()
        if self.ui.quiet and not datahint:
            return

        fm.data(
            branch=ctx.branch(),
            phase=ctx.phasestr(),
            user=ctx.user(),
            date=fm.formatdate(ctx.date()),
            desc=ctx.description(),
            bookmarks=fm.formatlist(ctx.bookmarks(), name=b'bookmark'),
            tags=fm.formatlist(ctx.tags(), name=b'tag'),
            parents=fm.formatlist(
                [fm.hexfunc(c.node()) for c in ctx.parents()], name=b'node'
            ),
        )

        if self.ui.debugflag or b'manifest' in datahint:
            fm.data(manifest=fm.hexfunc(ctx.manifestnode() or wdirid))
        if self.ui.debugflag or b'extra' in datahint:
            fm.data(extra=fm.formatdict(ctx.extra()))

        if (
            self.ui.debugflag
            or b'modified' in datahint
            or b'added' in datahint
            or b'removed' in datahint
        ):
            files = ctx.p1().status(ctx)
            fm.data(
                modified=fm.formatlist(files.modified, name=b'file'),
                added=fm.formatlist(files.added, name=b'file'),
                removed=fm.formatlist(files.removed, name=b'file'),
            )

        verbose = not self.ui.debugflag and self.ui.verbose
        if verbose or b'files' in datahint:
            fm.data(files=fm.formatlist(ctx.files(), name=b'file'))
        if verbose and copies or b'copies' in datahint:
            fm.data(
                copies=fm.formatdict(copies or {}, key=b'name', value=b'source')
            )

        if self._includestat or b'diffstat' in datahint:
            self.ui.pushbuffer()
            self._differ.showdiff(self.ui, ctx, self._diffopts, stat=True)
            fm.data(diffstat=self.ui.popbuffer())
        if self._includediff or b'diff' in datahint:
            self.ui.pushbuffer()
            self._differ.showdiff(self.ui, ctx, self._diffopts, stat=False)
            fm.data(diff=self.ui.popbuffer())
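

# Added sketch (not upstream code): 'hg log -Tjson' reaches the class above,
# pairing a ui.formatter() backend with changesetformatter instead of the
# template engine.
def _example_json_log(ui, repo, revs):
    displayer = changesetdisplayer(ui, repo, {b'template': b'json'})
    for rev in revs:
        displayer.show(repo[rev])
    displayer.close()  # fm.end() flushes the JSON document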


class changesettemplater(changesetprinter):
    """format changeset information.

    Note: there are a variety of convenience functions to build a
    changesettemplater for common cases. See functions such as:
    maketemplater, changesetdisplayer, buildcommittemplate, or other
    functions that use changesettemplater.
    """

    # Arguments before "buffered" used to be positional. Consider not
    # adding/removing arguments before "buffered" to not break callers.
    def __init__(
        self, ui, repo, tmplspec, differ=None, diffopts=None, buffered=False
    ):
        changesetprinter.__init__(self, ui, repo, differ, diffopts, buffered)
        # tres is shared with _graphnodeformatter()
        self._tresources = tres = formatter.templateresources(ui, repo)
        self.t = formatter.loadtemplater(
            ui,
            tmplspec,
            defaults=templatekw.keywords,
            resources=tres,
            cache=templatekw.defaulttempl,
        )
        self._counter = itertools.count()

        self._tref = tmplspec.ref
        self._parts = {
            b'header': b'',
            b'footer': b'',
            tmplspec.ref: tmplspec.ref,
            b'docheader': b'',
            b'docfooter': b'',
            b'separator': b'',
        }
        if tmplspec.mapfile:
            # find correct templates for current mode, for backward
            # compatibility with 'log -v/-q/--debug' using a mapfile
            tmplmodes = [
                (True, b''),
                (self.ui.verbose, b'_verbose'),
                (self.ui.quiet, b'_quiet'),
                (self.ui.debugflag, b'_debug'),
            ]
            for mode, postfix in tmplmodes:
                for t in self._parts:
                    cur = t + postfix
                    if mode and cur in self.t:
                        self._parts[t] = cur
        else:
            partnames = [p for p in self._parts.keys() if p != tmplspec.ref]
            m = formatter.templatepartsmap(tmplspec, self.t, partnames)
            self._parts.update(m)

        if self._parts[b'docheader']:
            self.ui.write(self.t.render(self._parts[b'docheader'], {}))

    def close(self):
        if self._parts[b'docfooter']:
            if not self.footer:
                self.footer = b""
            self.footer += self.t.render(self._parts[b'docfooter'], {})
        return super(changesettemplater, self).close()

    def _show(self, ctx, copies, props):
        '''show a single changeset or file revision'''
        props = props.copy()
        props[b'ctx'] = ctx
        props[b'index'] = index = next(self._counter)
        props[b'revcache'] = {b'copies': copies}
        graphwidth = props.get(b'graphwidth', 0)

        # write separator, which wouldn't work well with the header part below
        # since there's inherently a conflict between header (across items) and
        # separator (per item)
        if self._parts[b'separator'] and index > 0:
            self.ui.write(self.t.render(self._parts[b'separator'], {}))

        # write header
        if self._parts[b'header']:
            h = self.t.render(self._parts[b'header'], props)
            if self.buffered:
                self.header[ctx.rev()] = h
            else:
                if self.lastheader != h:
                    self.lastheader = h
                    self.ui.write(h)

        # write changeset metadata, then patch if requested
        key = self._parts[self._tref]
        self.ui.write(self.t.render(key, props))
        self._exthook(ctx)
        self._showpatch(ctx, graphwidth)

        if self._parts[b'footer']:
            if not self.footer:
                self.footer = self.t.render(self._parts[b'footer'], props)
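

# Added illustration of the mapfile mode fallback implemented above: given a
# style file containing the (hypothetical) entries below, 'hg log -v' selects
# 'changeset_verbose' while plain 'hg log' falls back to 'changeset'.
#
#   changeset = 'rev: {rev}\n'
#   changeset_verbose = 'rev: {rev}\nfiles: {files}\n'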


def templatespec(tmpl, mapfile):
    assert not (tmpl and mapfile)
    if mapfile:
        return formatter.mapfile_templatespec(b'changeset', mapfile)
    else:
        return formatter.literal_templatespec(tmpl)


def _lookuptemplate(ui, tmpl, style):
    """Find the template matching the given template spec or style

    See formatter.lookuptemplate() for details.
    """

    # ui settings
    if not tmpl and not style:  # templates are stronger than style
        tmpl = ui.config(b'command-templates', b'log')
        if tmpl:
            return formatter.literal_templatespec(templater.unquotestring(tmpl))
        else:
            style = util.expandpath(ui.config(b'ui', b'style'))

    if not tmpl and style:
        mapfile = style
        fp = None
        if not os.path.split(mapfile)[0]:
            (mapname, fp) = templater.try_open_template(
                b'map-cmdline.' + mapfile
            ) or templater.try_open_template(mapfile)
            if mapname:
                mapfile = mapname
        return formatter.mapfile_templatespec(b'changeset', mapfile, fp)

    return formatter.lookuptemplate(ui, b'changeset', tmpl)
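

# Added example of the precedence implemented by _lookuptemplate(): with the
# config below and no -T/--style on the command line, the command-templates.log
# value wins and [ui] style is never consulted.
#
#   [command-templates]
#   log = "{rev}:{node|short} {desc|firstline}\n"
#
#   [ui]
#   style = compact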


def maketemplater(ui, repo, tmpl, buffered=False):
    """Create a changesettemplater from a literal template 'tmpl'
    byte-string."""
    spec = formatter.literal_templatespec(tmpl)
    return changesettemplater(ui, repo, spec, buffered=buffered)


def changesetdisplayer(ui, repo, opts, differ=None, buffered=False):
    """show one changeset using template or regular display.

    Display format will be the first non-empty hit of:
    1. option 'template'
    2. option 'style'
    3. [command-templates] setting 'log'
    4. [ui] setting 'style'
    If all of these values are either unset or the empty string,
    regular display via changesetprinter() is done.
    """
    postargs = (differ, opts, buffered)
    spec = _lookuptemplate(ui, opts.get(b'template'), opts.get(b'style'))

    # machine-readable formats have slightly different keyword set than
    # plain templates, which are handled by changesetformatter.
    # note that {b'pickle', b'debug'} can also be added to the list if needed.
    if spec.ref in {b'cbor', b'json'}:
        fm = ui.formatter(b'log', opts)
        return changesetformatter(ui, repo, fm, *postargs)

    if not spec.ref and not spec.tmpl and not spec.mapfile:
        return changesetprinter(ui, repo, *postargs)

    return changesettemplater(ui, repo, spec, *postargs)
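

# Added summary of the dispatch above (illustrative, not exhaustive):
#   -Tjson / -Tcbor               -> changesetformatter (generic formatter)
#   -T'{rev}\n' or --style FILE   -> changesettemplater
#   no template or style options  -> changesetprinter (default rendering)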


@attr.s
class walkopts(object):
    """Options to configure a set of revisions and file matcher factory
    to scan revision/file history
    """

    # raw command-line parameters, which a matcher will be built from
    pats = attr.ib()  # type: List[bytes]
    opts = attr.ib()  # type: Dict[bytes, Any]

    # a list of revset expressions to be traversed; if follow, it specifies
    # the start revisions
    revspec = attr.ib()  # type: List[bytes]

    # miscellaneous queries to filter revisions (see "hg help log" for details)
    bookmarks = attr.ib(default=attr.Factory(list))  # type: List[bytes]
    branches = attr.ib(default=attr.Factory(list))  # type: List[bytes]
    date = attr.ib(default=None)  # type: Optional[bytes]
    keywords = attr.ib(default=attr.Factory(list))  # type: List[bytes]
    no_merges = attr.ib(default=False)  # type: bool
    only_merges = attr.ib(default=False)  # type: bool
    prune_ancestors = attr.ib(default=attr.Factory(list))  # type: List[bytes]
    users = attr.ib(default=attr.Factory(list))  # type: List[bytes]

    # miscellaneous matcher arguments
    include_pats = attr.ib(default=attr.Factory(list))  # type: List[bytes]
    exclude_pats = attr.ib(default=attr.Factory(list))  # type: List[bytes]

    # 0: no follow, 1: follow first, 2: follow both parents
    follow = attr.ib(default=0)  # type: int

    # do not attempt filelog-based traversal, which may be fast but cannot
    # include revisions where files were removed
    force_changelog_traversal = attr.ib(default=False)  # type: bool

    # filter revisions by file patterns, which should be disabled only if
    # you want to include revisions where files were unmodified
    filter_revisions_by_pats = attr.ib(default=True)  # type: bool

    # sort revisions prior to traversal: 'desc', 'topo', or None
    sort_revisions = attr.ib(default=None)  # type: Optional[bytes]

    # limit number of changes displayed; None means unlimited
    limit = attr.ib(default=None)  # type: Optional[int]
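

# Added sketch (not upstream): extensions can build walkopts directly instead
# of going through parseopts(); unspecified fields fall back to the attr.ib
# defaults declared above.
def _example_walkopts():
    return walkopts(
        pats=[],
        opts={},
        revspec=[],
        users=[b'alice'],  # hypothetical filter, like 'hg log -u alice'
        follow=2,  # follow both parents, as --follow does
    )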


def parseopts(ui, pats, opts):
    # type: (Any, Sequence[bytes], Dict[bytes, Any]) -> walkopts
    """Parse log command options into walkopts

    The returned walkopts will be passed in to getrevs() or makewalker().
    """
    if opts.get(b'follow_first'):
        follow = 1
    elif opts.get(b'follow'):
        follow = 2
    else:
        follow = 0

    if opts.get(b'graph'):
        if ui.configbool(b'experimental', b'log.topo'):
            sort_revisions = b'topo'
        else:
            sort_revisions = b'desc'
    else:
        sort_revisions = None

    return walkopts(
        pats=pats,
        opts=opts,
        revspec=opts.get(b'rev', []),
        bookmarks=opts.get(b'bookmark', []),
        # branch and only_branch are really aliases and must be handled at
        # the same time
        branches=opts.get(b'branch', []) + opts.get(b'only_branch', []),
        date=opts.get(b'date'),
        keywords=opts.get(b'keyword', []),
        no_merges=bool(opts.get(b'no_merges')),
        only_merges=bool(opts.get(b'only_merges')),
        prune_ancestors=opts.get(b'prune', []),
        users=opts.get(b'user', []),
        include_pats=opts.get(b'include', []),
        exclude_pats=opts.get(b'exclude', []),
        follow=follow,
        force_changelog_traversal=bool(opts.get(b'removed')),
        sort_revisions=sort_revisions,
        limit=getlimit(opts),
    )
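

# Added examples of the option translation performed by parseopts():
#   hg log --follow-first -> follow == 1
#   hg log --follow       -> follow == 2
#   hg log --removed      -> force_changelog_traversal == True
#   hg log -G             -> sort_revisions == b'desc' (b'topo' when
#                            experimental.log.topo is enabled)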


def _makematcher(repo, revs, wopts):
    """Build matcher and expanded patterns from log options

    If --follow, revs are the revisions to follow from.

    Returns (match, pats, slowpath) where
    - match: a matcher built from the given pats and -I/-X opts
    - pats: patterns used (globs are expanded on Windows)
    - slowpath: True if patterns aren't as simple as scanning filelogs
    """
    # pats/include/exclude are passed to match.match() directly in
    # _matchfiles() revset, but a log-like command should build its matcher
    # with scmutil.match(). The difference is input pats are globbed on
    # platforms without shell expansion (windows).
    wctx = repo[None]
    match, pats = scmutil.matchandpats(wctx, wopts.pats, wopts.opts)
    slowpath = match.anypats() or (
        not match.always() and wopts.force_changelog_traversal
    )
    if not slowpath:
        if wopts.follow and wopts.revspec:
            # There may be the case that a path doesn't exist in some (but
            # not all) of the specified start revisions, but let's consider
            # the path is valid. Missing files will be warned by the matcher.
            startctxs = [repo[r] for r in revs]
            for f in match.files():
                found = False
                for c in startctxs:
                    if f in c:
                        found = True
                    elif c.hasdir(f):
                        # If a directory exists in any of the start revisions,
                        # take the slow path.
                        found = slowpath = True
                if not found:
                    raise error.Abort(
                        _(
                            b'cannot follow file not in any of the specified '
                            b'revisions: "%s"'
                        )
                        % f
                    )
        elif wopts.follow:
            for f in match.files():
                if f not in wctx:
                    # If the file exists, it may be a directory, so let it
                    # take the slow path.
                    if os.path.exists(repo.wjoin(f)):
                        slowpath = True
                        continue
                    else:
                        raise error.Abort(
                            _(
                                b'cannot follow file not in parent '
                                b'revision: "%s"'
                            )
                            % f
                        )
                filelog = repo.file(f)
                if not filelog:
                    # A file exists in wdir but not in history, which means
                    # the file isn't committed yet.
                    raise error.Abort(
                        _(b'cannot follow nonexistent file: "%s"') % f
                    )
        else:
            for f in match.files():
                filelog = repo.file(f)
                if not filelog:
                    # A zero count may be a directory or deleted file, so
                    # try to find matching entries on the slow path.
                    slowpath = True

        # We decided to fall back to the slowpath because at least one
        # of the paths was not a file. Check to see if at least one of them
        # existed in history - in that case, we'll continue down the
        # slowpath; otherwise, we can turn off the slowpath
        if slowpath:
            for path in match.files():
                if not path or path in repo.store:
                    break
            else:
                slowpath = False

    return match, pats, slowpath
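

# Added examples of the slow-path decision above:
#   hg log 'glob:**/*.py'   -> match.anypats() is True -> slowpath
#   hg log --removed file.c -> forced changelog walk   -> slowpath
#   hg log file.c           -> plain file pattern      -> filelog fast path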


def _fileancestors(repo, revs, match, followfirst):
    fctxs = []
    for r in revs:
        ctx = repo[r]
        fctxs.extend(ctx[f].introfilectx() for f in ctx.walk(match))

    # When displaying a revision with --patch --follow FILE, we have
    # to know which file of the revision must be diffed. With
    # --follow, we want the names of the ancestors of FILE in the
    # revision, stored in "fcache". "fcache" is populated as a side effect
    # of the graph traversal.
    fcache = {}

    def filematcher(ctx):
        return scmutil.matchfiles(repo, fcache.get(scmutil.intrev(ctx), []))

    def revgen():
        for rev, cs in dagop.filectxancestors(fctxs, followfirst=followfirst):
            fcache[rev] = [c.path() for c in cs]
            yield rev

    return smartset.generatorset(revgen(), iterasc=False), filematcher
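

# Added commentary on the laziness contract above: revgen() fills fcache as
# the generatorset is iterated, so the returned filematcher(ctx) is only
# meaningful for revisions the traversal has already yielded, which is
# exactly how the log machinery consumes it.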


def _makenofollowfilematcher(repo, pats, opts):
    '''hook for extensions to override the filematcher for non-follow cases'''
    return None


_opt2logrevset = {
    b'no_merges': (b'not merge()', None),
    b'only_merges': (b'merge()', None),
    b'_matchfiles': (None, b'_matchfiles(%ps)'),
    b'date': (b'date(%s)', None),
    b'branch': (b'branch(%s)', b'%lr'),
    b'_patslog': (b'filelog(%s)', b'%lr'),
    b'keyword': (b'keyword(%s)', b'%lr'),
    b'prune': (b'ancestors(%s)', b'not %lr'),
    b'user': (b'user(%s)', b'%lr'),
}


def _makerevset(repo, wopts, slowpath):
    """Return a revset string built from log options and file patterns"""
    opts = {
        b'branch': [b'literal:' + repo.lookupbranch(b) for b in wopts.branches],
        b'date': wopts.date,
        b'keyword': wopts.keywords,
        b'no_merges': wopts.no_merges,
        b'only_merges': wopts.only_merges,
        b'prune': wopts.prune_ancestors,
        b'user': [b'literal:' + v for v in wopts.users],
    }

    if wopts.filter_revisions_by_pats and slowpath:
        # pats/include/exclude cannot be represented as separate
        # revset expressions as their filtering logic applies at file
        # level. For instance "-I a -X b" matches a revision touching
        # "a" and "b" while "file(a) and not file(b)" does
        # not. Besides, filesets are evaluated against the working
        # directory.
        matchargs = [b'r:', b'd:relpath']
        for p in wopts.pats:
            matchargs.append(b'p:' + p)
        for p in wopts.include_pats:
            matchargs.append(b'i:' + p)
        for p in wopts.exclude_pats:
            matchargs.append(b'x:' + p)
        opts[b'_matchfiles'] = matchargs
    elif wopts.filter_revisions_by_pats and not wopts.follow:
        opts[b'_patslog'] = list(wopts.pats)

    expr = []
    for op, val in sorted(pycompat.iteritems(opts)):
        if not val:
            continue
        revop, listop = _opt2logrevset[op]
        if revop and b'%' not in revop:
            expr.append(revop)
        elif not listop:
            expr.append(revsetlang.formatspec(revop, val))
        else:
            if revop:
                val = [revsetlang.formatspec(revop, v) for v in val]
            expr.append(revsetlang.formatspec(listop, val))

    if wopts.bookmarks:
        expr.append(
            revsetlang.formatspec(
                b'%lr',
                [scmutil.format_bookmark_revspec(v) for v in wopts.bookmarks],
            )
        )

    if expr:
        expr = b'(' + b' and '.join(expr) + b')'
    else:
        expr = None
    return expr
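

# Added example of the revset string produced above (quoting is schematic;
# revsetlang.formatspec handles the exact escaping):
#   hg log -k bug -u alice --no-merges
#     -> (keyword('bug') and not merge() and user('literal:alice'))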


def _initialrevs(repo, wopts):
    """Return the initial set of revisions to be filtered or followed"""
    if wopts.revspec:
        revs = scmutil.revrange(repo, wopts.revspec)
    elif wopts.follow and repo.dirstate.p1() == nullid:
        revs = smartset.baseset()
    elif wopts.follow:
        revs = repo.revs(b'.')
    else:
        revs = smartset.spanset(repo)
        revs.reverse()
    return revs
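

# Added commentary (not upstream): in _initialrevs() above, the
# repo.dirstate.p1() == nullid test detects a working directory with no
# parent (e.g. a freshly created repository), in which case --follow has
# nothing to follow; nullid is the 20-byte zero node id.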


def makewalker(repo, wopts):
    # type: (Any, walkopts) -> Tuple[smartset.abstractsmartset, Optional[Callable[[Any], matchmod.basematcher]]]
    """Build (revs, makefilematcher) to scan revision/file history

    - revs is the smartset to be traversed.
    - makefilematcher is a function to map ctx to a matcher for that revision
    """
    revs = _initialrevs(repo, wopts)
    if not revs:
        return smartset.baseset(), None
    # TODO: might want to merge slowpath with wopts.force_changelog_traversal
    match, pats, slowpath = _makematcher(repo, revs, wopts)
    wopts = attr.evolve(wopts, pats=pats)

    filematcher = None
    if wopts.follow:
        if slowpath or match.always():
            revs = dagop.revancestors(repo, revs, followfirst=wopts.follow == 1)
        else:
            assert not wopts.force_changelog_traversal
            revs, filematcher = _fileancestors(
                repo, revs, match, followfirst=wopts.follow == 1
            )
        revs.reverse()
    if filematcher is None:
        filematcher = _makenofollowfilematcher(repo, wopts.pats, wopts.opts)
    if filematcher is None:

        def filematcher(ctx):
            return match

    expr = _makerevset(repo, wopts, slowpath)
    if wopts.sort_revisions:
        assert wopts.sort_revisions in {b'topo', b'desc'}
        if wopts.sort_revisions == b'topo':
            if not revs.istopo():
                revs = dagop.toposort(revs, repo.changelog.parentrevs)
                # TODO: try to iterate the set lazily
                revs = revset.baseset(list(revs), istopo=True)
        elif not (revs.isdescending() or revs.istopo()):
            # User-specified revs might be unsorted
            revs.sort(reverse=True)
    if expr:
        matcher = revset.match(None, expr)
        revs = matcher(repo, revs)
    if wopts.limit is not None:
        revs = revs.slice(0, wopts.limit)

    return revs, filematcher


def getrevs(repo, wopts):
    # type: (Any, walkopts) -> Tuple[smartset.abstractsmartset, Optional[changesetdiffer]]
    """Return (revs, differ) where revs is a smartset

    differ is a changesetdiffer with pre-configured file matcher.
    """
    revs, filematcher = makewalker(repo, wopts)
    if not revs:
        return revs, None
    differ = changesetdiffer()
    differ._makefilematcher = filematcher
    return revs, differ
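

# Added sketch (not upstream) of the typical pipeline built from the helpers
# above; the path literal is a hypothetical example.
def _example_follow_log(ui, repo):
    wopts = parseopts(ui, [b'README'], {b'follow': True})
    revs, differ = getrevs(repo, wopts)
    for rev in revs:
        ui.write(b'%d\n' % rev)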


def _parselinerangeopt(repo, opts):
    """Parse --line-range log option and return a list of tuples (filename,
    (fromline, toline)).
    """
    linerangebyfname = []
    for pat in opts.get(b'line_range', []):
        try:
            pat, linerange = pat.rsplit(b',', 1)
        except ValueError:
            raise error.Abort(_(b'malformatted line-range pattern %s') % pat)
        try:
            fromline, toline = map(int, linerange.split(b':'))
        except ValueError:
            raise error.Abort(_(b"invalid line range for %s") % pat)
        msg = _(b"line range pattern '%s' must match exactly one file") % pat
        fname = scmutil.parsefollowlinespattern(repo, None, pat, msg)
        linerangebyfname.append(
            (fname, util.processlinerange(fromline, toline))
        )
    return linerangebyfname
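

# Added examples of --line-range values accepted above:
#   foo.c,5:10 -> (b'foo.c', util.processlinerange(5, 10))
#   foo.c      -> abort: malformatted line-range pattern
#   foo.c,5:x  -> abort: invalid line range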


def getlinerangerevs(repo, userrevs, opts):
    """Return (revs, differ).

    "revs" are revisions obtained by processing "line-range" log options and
    walking block ancestors of each specified file/line-range.

    "differ" is a changesetdiffer with pre-configured file matcher and hunks
    filter.
    """
    wctx = repo[None]

    # Two-level map of "rev -> file ctx -> [line range]".
    linerangesbyrev = {}
    for fname, (fromline, toline) in _parselinerangeopt(repo, opts):
        if fname not in wctx:
            raise error.Abort(
                _(b'cannot follow file not in parent revision: "%s"') % fname
            )
        fctx = wctx.filectx(fname)
        for fctx, linerange in dagop.blockancestors(fctx, fromline, toline):
            rev = fctx.introrev()
            if rev is None:
                rev = wdirrev
            if rev not in userrevs:
                continue
            linerangesbyrev.setdefault(rev, {}).setdefault(
                fctx.path(), []
            ).append(linerange)

    def nofilterhunksfn(fctx, hunks):
        return hunks

    def hunksfilter(ctx):
        fctxlineranges = linerangesbyrev.get(scmutil.intrev(ctx))
        if fctxlineranges is None:
            return nofilterhunksfn

        def filterfn(fctx, hunks):
            lineranges = fctxlineranges.get(fctx.path())
            if lineranges is not None:
                for hr, lines in hunks:
                    if hr is None:  # binary
                        yield hr, lines
                        continue
                    if any(mdiff.hunkinrange(hr[2:], lr) for lr in lineranges):
                        yield hr, lines
            else:
                for hunk in hunks:
                    yield hunk

        return filterfn

    def filematcher(ctx):
        files = list(linerangesbyrev.get(scmutil.intrev(ctx), []))
        return scmutil.matchfiles(repo, files)

    revs = sorted(linerangesbyrev, reverse=True)

    differ = changesetdiffer()
    differ._makefilematcher = filematcher
    differ._makehunksfilter = hunksfilter
    return smartset.baseset(revs), differ
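

# Added sketch (not upstream, error handling omitted): roughly how
# 'hg log -L FILE,5:10 -p' wires the helpers above together.
def _example_linerange_log(ui, repo, opts):
    userrevs = smartset.spanset(repo)
    revs, differ = getlinerangerevs(repo, userrevs, opts)
    displayer = changesetdisplayer(ui, repo, opts, differ=differ)
    displayrevs(ui, repo, revs, displayer, getcopies=None)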


def _graphnodeformatter(ui, displayer):
    spec = ui.config(b'command-templates', b'graphnode')
    if not spec:
        return templatekw.getgraphnode  # fast path for "{graphnode}"

    spec = templater.unquotestring(spec)
    if isinstance(displayer, changesettemplater):
        # reuse cache of slow templates
        tres = displayer._tresources
    else:
        tres = formatter.templateresources(ui)
    templ = formatter.maketemplater(
        ui, spec, defaults=templatekw.keywords, resources=tres
    )

    def formatnode(repo, ctx, cache):
        props = {b'ctx': ctx, b'repo': repo}
        return templ.renderdefault(props)

    return formatnode
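

# Added config example for the hook above (illustrative): draw commits on
# the 'stable' branch with '*' instead of the default node character.
#
#   [command-templates]
#   graphnode = "{ifeq(branch, 'stable', '*', 'o')}"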


def displaygraph(ui, repo, dag, displayer, edgefn, getcopies=None, props=None):
    props = props or {}
    formatnode = _graphnodeformatter(ui, displayer)
    state = graphmod.asciistate()
    styles = state.styles

    # only set graph styling if HGPLAIN is not set.
    if ui.plain(b'graph'):
        # set all edge styles to |, the default pre-3.8 behaviour
        styles.update(dict.fromkeys(styles, b'|'))
    else:
        edgetypes = {
            b'parent': graphmod.PARENT,
            b'grandparent': graphmod.GRANDPARENT,
            b'missing': graphmod.MISSINGPARENT,
        }
        for name, key in edgetypes.items():
            # experimental config: experimental.graphstyle.*
            styles[key] = ui.config(
                b'experimental', b'graphstyle.%s' % name, styles[key]
            )
            if not styles[key]:
                styles[key] = None

        # experimental config: experimental.graphshorten
        state.graphshorten = ui.configbool(b'experimental', b'graphshorten')

    formatnode_cache = {}
    for rev, type, ctx, parents in dag:
        char = formatnode(repo, ctx, formatnode_cache)
        copies = getcopies(ctx) if getcopies else None
        edges = edgefn(type, char, state, rev, parents)
        firstedge = next(edges)
        width = firstedge[2]
        displayer.show(
            ctx, copies=copies, graphwidth=width, **pycompat.strkwargs(props)
        )
        lines = displayer.hunk.pop(rev).split(b'\n')
        if not lines[-1]:
            del lines[-1]
        displayer.flush(ctx)
        for type, char, width, coldata in itertools.chain([firstedge], edges):
            graphmod.ascii(ui, state, type, char, lines, coldata)
            lines = []
    displayer.close()
1223 def displaygraphrevs(ui, repo, revs, displayer, getrenamed):
1224 def displaygraphrevs(ui, repo, revs, displayer, getrenamed):
1224 revdag = graphmod.dagwalker(repo, revs)
1225 revdag = graphmod.dagwalker(repo, revs)
1225 displaygraph(ui, repo, revdag, displayer, graphmod.asciiedges, getrenamed)
1226 displaygraph(ui, repo, revdag, displayer, graphmod.asciiedges, getrenamed)
1226
1227
1227
1228
1228 def displayrevs(ui, repo, revs, displayer, getcopies):
1229 def displayrevs(ui, repo, revs, displayer, getcopies):
1229 for rev in revs:
1230 for rev in revs:
1230 ctx = repo[rev]
1231 ctx = repo[rev]
1231 copies = getcopies(ctx) if getcopies else None
1232 copies = getcopies(ctx) if getcopies else None
1232 displayer.show(ctx, copies=copies)
1233 displayer.show(ctx, copies=copies)
1233 displayer.flush(ctx)
1234 displayer.flush(ctx)
1234 displayer.close()
1235 displayer.close()
1235
1236
1236
1237
1237 def checkunsupportedgraphflags(pats, opts):
1238 def checkunsupportedgraphflags(pats, opts):
1238 for op in [b"newest_first"]:
1239 for op in [b"newest_first"]:
1239 if op in opts and opts[op]:
1240 if op in opts and opts[op]:
1240 raise error.Abort(
1241 raise error.Abort(
1241 _(b"-G/--graph option is incompatible with --%s")
1242 _(b"-G/--graph option is incompatible with --%s")
1242 % op.replace(b"_", b"-")
1243 % op.replace(b"_", b"-")
1243 )
1244 )
1244
1245
1245
1246
1246 def graphrevs(repo, nodes, opts):
1247 def graphrevs(repo, nodes, opts):
1247 limit = getlimit(opts)
1248 limit = getlimit(opts)
1248 nodes.reverse()
1249 nodes.reverse()
1249 if limit is not None:
1250 if limit is not None:
1250 nodes = nodes[:limit]
1251 nodes = nodes[:limit]
1251 return graphmod.nodes(repo, nodes)
1252 return graphmod.nodes(repo, nodes)
@@ -1,840 +1,841 b''
1 from __future__ import absolute_import
1 from __future__ import absolute_import
2
2
3 import collections
3 import collections
4 import errno
4 import errno
5 import shutil
5 import shutil
6 import struct
6 import struct
7
7
8 from .i18n import _
8 from .i18n import _
9 from .node import (
9 from .node import (
10 bin,
10 bin,
11 hex,
11 hex,
12 nullhex,
12 nullhex,
13 nullid,
13 nullid,
14 nullrev,
14 )
15 )
15 from . import (
16 from . import (
16 error,
17 error,
17 filemerge,
18 filemerge,
18 pycompat,
19 pycompat,
19 util,
20 util,
20 )
21 )
21 from .utils import hashutil
22 from .utils import hashutil
22
23
23 _pack = struct.pack
24 _pack = struct.pack
24 _unpack = struct.unpack
25 _unpack = struct.unpack
25
26
26
27
27 def _droponode(data):
28 def _droponode(data):
28 # used for compatibility for v1
29 # used for compatibility for v1
29 bits = data.split(b'\0')
30 bits = data.split(b'\0')
30 bits = bits[:-2] + bits[-1:]
31 bits = bits[:-2] + bits[-1:]
31 return b'\0'.join(bits)
32 return b'\0'.join(bits)
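
# A minimal sketch (not from the original source) of what _droponode does:
# it removes the second-to-last NUL-separated field, the "other file node",
# which v1 records do not carry. The field values here are hypothetical.
_v2record = b'\0'.join([b'u', b'key', b'lfile', b'afile', b'anode',
                        b'ofile', b'onode', b'x'])
assert _droponode(_v2record) == b'\0'.join(
    [b'u', b'key', b'lfile', b'afile', b'anode', b'ofile', b'x'])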
32
33
33
34
34 def _filectxorabsent(hexnode, ctx, f):
35 def _filectxorabsent(hexnode, ctx, f):
35 if hexnode == nullhex:
36 if hexnode == nullhex:
36 return filemerge.absentfilectx(ctx, f)
37 return filemerge.absentfilectx(ctx, f)
37 else:
38 else:
38 return ctx[f]
39 return ctx[f]
39
40
40
41
41 # Merge state record types. See ``mergestate`` docs for more.
42 # Merge state record types. See ``mergestate`` docs for more.
42
43
43 #####
44 #####
44 # merge records which record metadata about a current merge;
45 # merge records which record metadata about a current merge;
45 # these exist only once in a mergestate
46 # these exist only once in a mergestate
46 #####
47 #####
47 RECORD_LOCAL = b'L'
48 RECORD_LOCAL = b'L'
48 RECORD_OTHER = b'O'
49 RECORD_OTHER = b'O'
49 # record merge labels
50 # record merge labels
50 RECORD_LABELS = b'l'
51 RECORD_LABELS = b'l'
51
52
52 #####
53 #####
53 # record extra information about files, with one entry containing info about one
54 # record extra information about files, with one entry containing info about one
54 # file. Hence, multiple of them can exist
55 # file. Hence, multiple of them can exist
55 #####
56 #####
56 RECORD_FILE_VALUES = b'f'
57 RECORD_FILE_VALUES = b'f'
57
58
58 #####
59 #####
59 # merge records which represent the state of individual merges of files/folders.
60 # merge records which represent the state of individual merges of files/folders.
60 # These are top-level records, one per entry, containing merge-related info.
61 # These are top-level records, one per entry, containing merge-related info.
61 # Each of these records has info about one file. Hence multiple of them can
62 # Each of these records has info about one file. Hence multiple of them can
62 # exist
63 # exist
63 #####
64 #####
64 RECORD_MERGED = b'F'
65 RECORD_MERGED = b'F'
65 RECORD_CHANGEDELETE_CONFLICT = b'C'
66 RECORD_CHANGEDELETE_CONFLICT = b'C'
66 # the path was dir on one side of merge and file on another
67 # the path was dir on one side of merge and file on another
67 RECORD_PATH_CONFLICT = b'P'
68 RECORD_PATH_CONFLICT = b'P'
68
69
69 #####
70 #####
70 # possible state which a merge entry can have. These are stored inside top-level
71 # possible state which a merge entry can have. These are stored inside top-level
71 # merge records mentioned just above.
72 # merge records mentioned just above.
72 #####
73 #####
73 MERGE_RECORD_UNRESOLVED = b'u'
74 MERGE_RECORD_UNRESOLVED = b'u'
74 MERGE_RECORD_RESOLVED = b'r'
75 MERGE_RECORD_RESOLVED = b'r'
75 MERGE_RECORD_UNRESOLVED_PATH = b'pu'
76 MERGE_RECORD_UNRESOLVED_PATH = b'pu'
76 MERGE_RECORD_RESOLVED_PATH = b'pr'
77 MERGE_RECORD_RESOLVED_PATH = b'pr'
77 # represents that the file was automatically merged in favor
78 # represents that the file was automatically merged in favor
78 # of the other version. This info is used on commit.
79 # of the other version. This info is used on commit.
79 # This is deprecated; commit-related information is now
80 # This is deprecated; commit-related information is now
80 # stored in RECORD_FILE_VALUES
81 # stored in RECORD_FILE_VALUES
81 MERGE_RECORD_MERGED_OTHER = b'o'
82 MERGE_RECORD_MERGED_OTHER = b'o'
82
83
83 #####
84 #####
84 # top level record which stores other unknown records. Multiple of these can
85 # top level record which stores other unknown records. Multiple of these can
85 # exist
86 # exist
86 #####
87 #####
87 RECORD_OVERRIDE = b't'
88 RECORD_OVERRIDE = b't'
88
89
89 #####
90 #####
90 # legacy records which are no longer used but kept to prevent breaking BC
91 # legacy records which are no longer used but kept to prevent breaking BC
91 #####
92 #####
92 # This record was released in 5.4 and usage was removed in 5.5
93 # This record was released in 5.4 and usage was removed in 5.5
93 LEGACY_RECORD_RESOLVED_OTHER = b'R'
94 LEGACY_RECORD_RESOLVED_OTHER = b'R'
94 # This record was released in 3.7 and usage was removed in 5.6
95 # This record was released in 3.7 and usage was removed in 5.6
95 LEGACY_RECORD_DRIVER_RESOLVED = b'd'
96 LEGACY_RECORD_DRIVER_RESOLVED = b'd'
96 # This record was released in 3.7 and usage was removed in 5.6
97 # This record was released in 3.7 and usage was removed in 5.6
97 LEGACY_MERGE_DRIVER_STATE = b'm'
98 LEGACY_MERGE_DRIVER_STATE = b'm'
98 # This record was released in 3.7 and usage was removed in 5.6
99 # This record was released in 3.7 and usage was removed in 5.6
99 LEGACY_MERGE_DRIVER_MERGE = b'D'
100 LEGACY_MERGE_DRIVER_MERGE = b'D'
100
101
101
102
102 ACTION_FORGET = b'f'
103 ACTION_FORGET = b'f'
103 ACTION_REMOVE = b'r'
104 ACTION_REMOVE = b'r'
104 ACTION_ADD = b'a'
105 ACTION_ADD = b'a'
105 ACTION_GET = b'g'
106 ACTION_GET = b'g'
106 ACTION_PATH_CONFLICT = b'p'
107 ACTION_PATH_CONFLICT = b'p'
107 ACTION_PATH_CONFLICT_RESOLVE = b'pr'
108 ACTION_PATH_CONFLICT_RESOLVE = b'pr'
108 ACTION_ADD_MODIFIED = b'am'
109 ACTION_ADD_MODIFIED = b'am'
109 ACTION_CREATED = b'c'
110 ACTION_CREATED = b'c'
110 ACTION_DELETED_CHANGED = b'dc'
111 ACTION_DELETED_CHANGED = b'dc'
111 ACTION_CHANGED_DELETED = b'cd'
112 ACTION_CHANGED_DELETED = b'cd'
112 ACTION_MERGE = b'm'
113 ACTION_MERGE = b'm'
113 ACTION_LOCAL_DIR_RENAME_GET = b'dg'
114 ACTION_LOCAL_DIR_RENAME_GET = b'dg'
114 ACTION_DIR_RENAME_MOVE_LOCAL = b'dm'
115 ACTION_DIR_RENAME_MOVE_LOCAL = b'dm'
115 ACTION_KEEP = b'k'
116 ACTION_KEEP = b'k'
116 # the file was absent on the local side before merge and we should
117 # the file was absent on the local side before merge and we should
117 # keep it absent (absent means file not present; it can be a result
118 # keep it absent (absent means file not present; it can be a result
118 # of file deletion, rename, etc.)
119 # of file deletion, rename, etc.)
119 ACTION_KEEP_ABSENT = b'ka'
120 ACTION_KEEP_ABSENT = b'ka'
120 # the file is absent on the ancestor and remote side of the merge
121 # the file is absent on the ancestor and remote side of the merge
121 # hence this file is new and we should keep it
122 # hence this file is new and we should keep it
122 ACTION_KEEP_NEW = b'kn'
123 ACTION_KEEP_NEW = b'kn'
123 ACTION_EXEC = b'e'
124 ACTION_EXEC = b'e'
124 ACTION_CREATED_MERGE = b'cm'
125 ACTION_CREATED_MERGE = b'cm'
125
126
126 # actions which are no-ops
127 # actions which are no-ops
127 NO_OP_ACTIONS = (
128 NO_OP_ACTIONS = (
128 ACTION_KEEP,
129 ACTION_KEEP,
129 ACTION_KEEP_ABSENT,
130 ACTION_KEEP_ABSENT,
130 ACTION_KEEP_NEW,
131 ACTION_KEEP_NEW,
131 )
132 )
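
# (these three actions require no dirstate change; recordupdates() at the
# bottom of this module iterates over them with empty bodies)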
132
133
133
134
134 class _mergestate_base(object):
135 class _mergestate_base(object):
135 """track 3-way merge state of individual files
136 """track 3-way merge state of individual files
136
137
137 The merge state is stored on disk when needed. Two files are used: one with
138 The merge state is stored on disk when needed. Two files are used: one with
138 an old format (version 1), and one with a new format (version 2). Version 2
139 an old format (version 1), and one with a new format (version 2). Version 2
139 stores a superset of the data in version 1, including new kinds of records
140 stores a superset of the data in version 1, including new kinds of records
140 in the future. For more about the new format, see the documentation for
141 in the future. For more about the new format, see the documentation for
141 `_readrecordsv2`.
142 `_readrecordsv2`.
142
143
143 Each record can contain arbitrary content, and has an associated type. This
144 Each record can contain arbitrary content, and has an associated type. This
144 `type` should be a letter. If `type` is uppercase, the record is mandatory:
145 `type` should be a letter. If `type` is uppercase, the record is mandatory:
145 versions of Mercurial that don't support it should abort. If `type` is
146 versions of Mercurial that don't support it should abort. If `type` is
146 lowercase, the record can be safely ignored.
147 lowercase, the record can be safely ignored.
147
148
148 Currently known records:
149 Currently known records:
149
150
150 L: the node of the "local" part of the merge (hexified version)
151 L: the node of the "local" part of the merge (hexified version)
151 O: the node of the "other" part of the merge (hexified version)
152 O: the node of the "other" part of the merge (hexified version)
152 F: a "file to be merged" entry
153 F: a "file to be merged" entry
153 C: a change/delete or delete/change conflict
154 C: a change/delete or delete/change conflict
154 P: a path conflict (file vs directory)
155 P: a path conflict (file vs directory)
155 f: a (filename, dictionary) tuple of optional values for a given file
156 f: a (filename, dictionary) tuple of optional values for a given file
156 l: the labels for the parts of the merge.
157 l: the labels for the parts of the merge.
157
158
158 Merge record states (stored in self._state, indexed by filename):
159 Merge record states (stored in self._state, indexed by filename):
159 u: unresolved conflict
160 u: unresolved conflict
160 r: resolved conflict
161 r: resolved conflict
161 pu: unresolved path conflict (file conflicts with directory)
162 pu: unresolved path conflict (file conflicts with directory)
162 pr: resolved path conflict
163 pr: resolved path conflict
163 o: file was merged in favor of other parent of merge (DEPRECATED)
164 o: file was merged in favor of other parent of merge (DEPRECATED)
164
165
165 The resolve command transitions between 'u' and 'r' for conflicts and
166 The resolve command transitions between 'u' and 'r' for conflicts and
166 'pu' and 'pr' for path conflicts.
167 'pu' and 'pr' for path conflicts.
167 """
168 """
168
169
169 def __init__(self, repo):
170 def __init__(self, repo):
170 """Initialize the merge state.
171 """Initialize the merge state.
171
172
172 Do not use this directly! Instead call read() or clean()."""
173 Do not use this directly! Instead call read() or clean()."""
173 self._repo = repo
174 self._repo = repo
174 self._state = {}
175 self._state = {}
175 self._stateextras = collections.defaultdict(dict)
176 self._stateextras = collections.defaultdict(dict)
176 self._local = None
177 self._local = None
177 self._other = None
178 self._other = None
178 self._labels = None
179 self._labels = None
179 # contains a mapping of the form:
180 # contains a mapping of the form:
180 # {filename: (merge_return_value, action_to_be_performed)}
181 # {filename: (merge_return_value, action_to_be_performed)}
181 # these are the results of re-running the merge process;
182 # these are the results of re-running the merge process;
182 # this dict is used to perform actions on the dirstate caused by re-running
183 # this dict is used to perform actions on the dirstate caused by re-running
183 # the merge
184 # the merge
184 self._results = {}
185 self._results = {}
185 self._dirty = False
186 self._dirty = False
186
187
187 def reset(self):
188 def reset(self):
188 pass
189 pass
189
190
190 def start(self, node, other, labels=None):
191 def start(self, node, other, labels=None):
191 self._local = node
192 self._local = node
192 self._other = other
193 self._other = other
193 self._labels = labels
194 self._labels = labels
194
195
195 @util.propertycache
196 @util.propertycache
196 def local(self):
197 def local(self):
197 if self._local is None:
198 if self._local is None:
198 msg = b"local accessed but self._local isn't set"
199 msg = b"local accessed but self._local isn't set"
199 raise error.ProgrammingError(msg)
200 raise error.ProgrammingError(msg)
200 return self._local
201 return self._local
201
202
202 @util.propertycache
203 @util.propertycache
203 def localctx(self):
204 def localctx(self):
204 return self._repo[self.local]
205 return self._repo[self.local]
205
206
206 @util.propertycache
207 @util.propertycache
207 def other(self):
208 def other(self):
208 if self._other is None:
209 if self._other is None:
209 msg = b"other accessed but self._other isn't set"
210 msg = b"other accessed but self._other isn't set"
210 raise error.ProgrammingError(msg)
211 raise error.ProgrammingError(msg)
211 return self._other
212 return self._other
212
213
213 @util.propertycache
214 @util.propertycache
214 def otherctx(self):
215 def otherctx(self):
215 return self._repo[self.other]
216 return self._repo[self.other]
216
217
217 def active(self):
218 def active(self):
218 """Whether mergestate is active.
219 """Whether mergestate is active.
219
220
220 Returns True if there appears to be mergestate. This is a rough proxy
221 Returns True if there appears to be mergestate. This is a rough proxy
221 for "is a merge in progress."
222 for "is a merge in progress."
222 """
223 """
223 return bool(self._local) or bool(self._state)
224 return bool(self._local) or bool(self._state)
224
225
225 def commit(self):
226 def commit(self):
226 """Write current state on disk (if necessary)"""
227 """Write current state on disk (if necessary)"""
227
228
228 @staticmethod
229 @staticmethod
229 def getlocalkey(path):
230 def getlocalkey(path):
230 """hash the path of a local file context for storage in the .hg/merge
231 """hash the path of a local file context for storage in the .hg/merge
231 directory."""
232 directory."""
232
233
233 return hex(hashutil.sha1(path).digest())
234 return hex(hashutil.sha1(path).digest())
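
# Sketch (standard library only): the local key is simply the hex SHA-1 of
# the file path, later used as the backup file name under .hg/merge/.
#
#   import hashlib
#   key = hashlib.sha1(b'foo/bar.txt').hexdigest()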
234
235
235 def _make_backup(self, fctx, localkey):
236 def _make_backup(self, fctx, localkey):
236 raise NotImplementedError()
237 raise NotImplementedError()
237
238
238 def _restore_backup(self, fctx, localkey, flags):
239 def _restore_backup(self, fctx, localkey, flags):
239 raise NotImplementedError()
240 raise NotImplementedError()
240
241
241 def add(self, fcl, fco, fca, fd):
242 def add(self, fcl, fco, fca, fd):
242 """add a new (potentially?) conflicting file the merge state
243 """add a new (potentially?) conflicting file the merge state
243 fcl: file context for local,
244 fcl: file context for local,
244 fco: file context for remote,
245 fco: file context for remote,
245 fca: file context for ancestors,
246 fca: file context for ancestors,
246 fd: file path of the resulting merge.
247 fd: file path of the resulting merge.
247
248
248 note: also write the local version to the `.hg/merge` directory.
249 note: also write the local version to the `.hg/merge` directory.
249 """
250 """
250 if fcl.isabsent():
251 if fcl.isabsent():
251 localkey = nullhex
252 localkey = nullhex
252 else:
253 else:
253 localkey = mergestate.getlocalkey(fcl.path())
254 localkey = mergestate.getlocalkey(fcl.path())
254 self._make_backup(fcl, localkey)
255 self._make_backup(fcl, localkey)
255 self._state[fd] = [
256 self._state[fd] = [
256 MERGE_RECORD_UNRESOLVED,
257 MERGE_RECORD_UNRESOLVED,
257 localkey,
258 localkey,
258 fcl.path(),
259 fcl.path(),
259 fca.path(),
260 fca.path(),
260 hex(fca.filenode()),
261 hex(fca.filenode()),
261 fco.path(),
262 fco.path(),
262 hex(fco.filenode()),
263 hex(fco.filenode()),
263 fcl.flags(),
264 fcl.flags(),
264 ]
265 ]
265 self._stateextras[fd][b'ancestorlinknode'] = hex(fca.node())
266 self._stateextras[fd][b'ancestorlinknode'] = hex(fca.node())
266 self._dirty = True
267 self._dirty = True
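
# For orientation (derived from the assignments above): the _state entry
# written by add() is
#   [state, localkey, local path, ancestor path, ancestor filenode (hex),
#    other path, other filenode (hex), local flags]
# which is exactly the tuple that _resolve() later unpacks.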
267
268
268 def addpathconflict(self, path, frename, forigin):
269 def addpathconflict(self, path, frename, forigin):
269 """add a new conflicting path to the merge state
270 """add a new conflicting path to the merge state
270 path: the path that conflicts
271 path: the path that conflicts
271 frename: the filename the conflicting file was renamed to
272 frename: the filename the conflicting file was renamed to
272 forigin: origin of the file ('l' or 'r' for local/remote)
273 forigin: origin of the file ('l' or 'r' for local/remote)
273 """
274 """
274 self._state[path] = [MERGE_RECORD_UNRESOLVED_PATH, frename, forigin]
275 self._state[path] = [MERGE_RECORD_UNRESOLVED_PATH, frename, forigin]
275 self._dirty = True
276 self._dirty = True
276
277
277 def addcommitinfo(self, path, data):
278 def addcommitinfo(self, path, data):
278 """stores information which is required at commit
279 """stores information which is required at commit
279 into _stateextras"""
280 into _stateextras"""
280 self._stateextras[path].update(data)
281 self._stateextras[path].update(data)
281 self._dirty = True
282 self._dirty = True
282
283
283 def __contains__(self, dfile):
284 def __contains__(self, dfile):
284 return dfile in self._state
285 return dfile in self._state
285
286
286 def __getitem__(self, dfile):
287 def __getitem__(self, dfile):
287 return self._state[dfile][0]
288 return self._state[dfile][0]
288
289
289 def __iter__(self):
290 def __iter__(self):
290 return iter(sorted(self._state))
291 return iter(sorted(self._state))
291
292
292 def files(self):
293 def files(self):
293 return self._state.keys()
294 return self._state.keys()
294
295
295 def mark(self, dfile, state):
296 def mark(self, dfile, state):
296 self._state[dfile][0] = state
297 self._state[dfile][0] = state
297 self._dirty = True
298 self._dirty = True
298
299
299 def unresolved(self):
300 def unresolved(self):
300 """Obtain the paths of unresolved files."""
301 """Obtain the paths of unresolved files."""
301
302
302 for f, entry in pycompat.iteritems(self._state):
303 for f, entry in pycompat.iteritems(self._state):
303 if entry[0] in (
304 if entry[0] in (
304 MERGE_RECORD_UNRESOLVED,
305 MERGE_RECORD_UNRESOLVED,
305 MERGE_RECORD_UNRESOLVED_PATH,
306 MERGE_RECORD_UNRESOLVED_PATH,
306 ):
307 ):
307 yield f
308 yield f
308
309
309 def allextras(self):
310 def allextras(self):
310 """ return all extras information stored with the mergestate """
311 """ return all extras information stored with the mergestate """
311 return self._stateextras
312 return self._stateextras
312
313
313 def extras(self, filename):
314 def extras(self, filename):
314 """ return extras stored with the mergestate for the given filename """
315 """ return extras stored with the mergestate for the given filename """
315 return self._stateextras[filename]
316 return self._stateextras[filename]
316
317
317 def _resolve(self, preresolve, dfile, wctx):
318 def _resolve(self, preresolve, dfile, wctx):
318 """rerun merge process for file path `dfile`.
319 """rerun merge process for file path `dfile`.
319 Returns whether the merge was completed and the return value of merge
320 Returns whether the merge was completed and the return value of merge
320 obtained from filemerge._filemerge().
321 obtained from filemerge._filemerge().
321 """
322 """
322 if self[dfile] in (
323 if self[dfile] in (
323 MERGE_RECORD_RESOLVED,
324 MERGE_RECORD_RESOLVED,
324 LEGACY_RECORD_DRIVER_RESOLVED,
325 LEGACY_RECORD_DRIVER_RESOLVED,
325 ):
326 ):
326 return True, 0
327 return True, 0
327 stateentry = self._state[dfile]
328 stateentry = self._state[dfile]
328 state, localkey, lfile, afile, anode, ofile, onode, flags = stateentry
329 state, localkey, lfile, afile, anode, ofile, onode, flags = stateentry
329 octx = self._repo[self._other]
330 octx = self._repo[self._other]
330 extras = self.extras(dfile)
331 extras = self.extras(dfile)
331 anccommitnode = extras.get(b'ancestorlinknode')
332 anccommitnode = extras.get(b'ancestorlinknode')
332 if anccommitnode:
333 if anccommitnode:
333 actx = self._repo[anccommitnode]
334 actx = self._repo[anccommitnode]
334 else:
335 else:
335 actx = None
336 actx = None
336 fcd = _filectxorabsent(localkey, wctx, dfile)
337 fcd = _filectxorabsent(localkey, wctx, dfile)
337 fco = _filectxorabsent(onode, octx, ofile)
338 fco = _filectxorabsent(onode, octx, ofile)
338 # TODO: move this to filectxorabsent
339 # TODO: move this to filectxorabsent
339 fca = self._repo.filectx(afile, fileid=anode, changectx=actx)
340 fca = self._repo.filectx(afile, fileid=anode, changectx=actx)
340 # "premerge" x flags
341 # "premerge" x flags
341 flo = fco.flags()
342 flo = fco.flags()
342 fla = fca.flags()
343 fla = fca.flags()
343 if b'x' in flags + flo + fla and b'l' not in flags + flo + fla:
344 if b'x' in flags + flo + fla and b'l' not in flags + flo + fla:
344 if fca.node() == nullid and flags != flo:
345 if fca.rev() == nullrev and flags != flo:
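# fca.rev() == nullrev means this file has no ancestor version at all,
# i.e. there is no common base to arbitrate the flag merge (see the
# warning below).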
345 if preresolve:
346 if preresolve:
346 self._repo.ui.warn(
347 self._repo.ui.warn(
347 _(
348 _(
348 b'warning: cannot merge flags for %s '
349 b'warning: cannot merge flags for %s '
349 b'without common ancestor - keeping local flags\n'
350 b'without common ancestor - keeping local flags\n'
350 )
351 )
351 % afile
352 % afile
352 )
353 )
353 elif flags == fla:
354 elif flags == fla:
354 flags = flo
355 flags = flo
355 if preresolve:
356 if preresolve:
356 # restore local
357 # restore local
357 if localkey != nullhex:
358 if localkey != nullhex:
358 self._restore_backup(wctx[dfile], localkey, flags)
359 self._restore_backup(wctx[dfile], localkey, flags)
359 else:
360 else:
360 wctx[dfile].remove(ignoremissing=True)
361 wctx[dfile].remove(ignoremissing=True)
361 complete, merge_ret, deleted = filemerge.premerge(
362 complete, merge_ret, deleted = filemerge.premerge(
362 self._repo,
363 self._repo,
363 wctx,
364 wctx,
364 self._local,
365 self._local,
365 lfile,
366 lfile,
366 fcd,
367 fcd,
367 fco,
368 fco,
368 fca,
369 fca,
369 labels=self._labels,
370 labels=self._labels,
370 )
371 )
371 else:
372 else:
372 complete, merge_ret, deleted = filemerge.filemerge(
373 complete, merge_ret, deleted = filemerge.filemerge(
373 self._repo,
374 self._repo,
374 wctx,
375 wctx,
375 self._local,
376 self._local,
376 lfile,
377 lfile,
377 fcd,
378 fcd,
378 fco,
379 fco,
379 fca,
380 fca,
380 labels=self._labels,
381 labels=self._labels,
381 )
382 )
382 if merge_ret is None:
383 if merge_ret is None:
383 # If the return value of the merge is None, then there are no real conflicts
384 # If the return value of the merge is None, then there are no real conflicts
384 del self._state[dfile]
385 del self._state[dfile]
385 self._dirty = True
386 self._dirty = True
386 elif not merge_ret:
387 elif not merge_ret:
387 self.mark(dfile, MERGE_RECORD_RESOLVED)
388 self.mark(dfile, MERGE_RECORD_RESOLVED)
388
389
389 if complete:
390 if complete:
390 action = None
391 action = None
391 if deleted:
392 if deleted:
392 if fcd.isabsent():
393 if fcd.isabsent():
393 # dc: local picked. Need to drop if present, which may
394 # dc: local picked. Need to drop if present, which may
394 # happen on re-resolves.
395 # happen on re-resolves.
395 action = ACTION_FORGET
396 action = ACTION_FORGET
396 else:
397 else:
397 # cd: remote picked (or otherwise deleted)
398 # cd: remote picked (or otherwise deleted)
398 action = ACTION_REMOVE
399 action = ACTION_REMOVE
399 else:
400 else:
400 if fcd.isabsent(): # dc: remote picked
401 if fcd.isabsent(): # dc: remote picked
401 action = ACTION_GET
402 action = ACTION_GET
402 elif fco.isabsent(): # cd: local picked
403 elif fco.isabsent(): # cd: local picked
403 if dfile in self.localctx:
404 if dfile in self.localctx:
404 action = ACTION_ADD_MODIFIED
405 action = ACTION_ADD_MODIFIED
405 else:
406 else:
406 action = ACTION_ADD
407 action = ACTION_ADD
407 # else: regular merges (no action necessary)
408 # else: regular merges (no action necessary)
408 self._results[dfile] = merge_ret, action
409 self._results[dfile] = merge_ret, action
409
410
410 return complete, merge_ret
411 return complete, merge_ret
411
412
412 def preresolve(self, dfile, wctx):
413 def preresolve(self, dfile, wctx):
413 """run premerge process for dfile
414 """run premerge process for dfile
414
415
415 Returns whether the merge is complete, and the exit code."""
416 Returns whether the merge is complete, and the exit code."""
416 return self._resolve(True, dfile, wctx)
417 return self._resolve(True, dfile, wctx)
417
418
418 def resolve(self, dfile, wctx):
419 def resolve(self, dfile, wctx):
419 """run merge process (assuming premerge was run) for dfile
420 """run merge process (assuming premerge was run) for dfile
420
421
421 Returns the exit code of the merge."""
422 Returns the exit code of the merge."""
422 return self._resolve(False, dfile, wctx)[1]
423 return self._resolve(False, dfile, wctx)[1]
423
424
424 def counts(self):
425 def counts(self):
425 """return counts for updated, merged and removed files in this
426 """return counts for updated, merged and removed files in this
426 session"""
427 session"""
427 updated, merged, removed = 0, 0, 0
428 updated, merged, removed = 0, 0, 0
428 for r, action in pycompat.itervalues(self._results):
429 for r, action in pycompat.itervalues(self._results):
429 if r is None:
430 if r is None:
430 updated += 1
431 updated += 1
431 elif r == 0:
432 elif r == 0:
432 if action == ACTION_REMOVE:
433 if action == ACTION_REMOVE:
433 removed += 1
434 removed += 1
434 else:
435 else:
435 merged += 1
436 merged += 1
436 return updated, merged, removed
437 return updated, merged, removed
437
438
438 def unresolvedcount(self):
439 def unresolvedcount(self):
439 """get unresolved count for this merge (persistent)"""
440 """get unresolved count for this merge (persistent)"""
440 return len(list(self.unresolved()))
441 return len(list(self.unresolved()))
441
442
442 def actions(self):
443 def actions(self):
443 """return lists of actions to perform on the dirstate"""
444 """return lists of actions to perform on the dirstate"""
444 actions = {
445 actions = {
445 ACTION_REMOVE: [],
446 ACTION_REMOVE: [],
446 ACTION_FORGET: [],
447 ACTION_FORGET: [],
447 ACTION_ADD: [],
448 ACTION_ADD: [],
448 ACTION_ADD_MODIFIED: [],
449 ACTION_ADD_MODIFIED: [],
449 ACTION_GET: [],
450 ACTION_GET: [],
450 }
451 }
451 for f, (r, action) in pycompat.iteritems(self._results):
452 for f, (r, action) in pycompat.iteritems(self._results):
452 if action is not None:
453 if action is not None:
453 actions[action].append((f, None, b"merge result"))
454 actions[action].append((f, None, b"merge result"))
454 return actions
455 return actions
455
456
456
457
457 class mergestate(_mergestate_base):
458 class mergestate(_mergestate_base):
458
459
459 statepathv1 = b'merge/state'
460 statepathv1 = b'merge/state'
460 statepathv2 = b'merge/state2'
461 statepathv2 = b'merge/state2'
461
462
462 @staticmethod
463 @staticmethod
463 def clean(repo):
464 def clean(repo):
464 """Initialize a brand new merge state, removing any existing state on
465 """Initialize a brand new merge state, removing any existing state on
465 disk."""
466 disk."""
466 ms = mergestate(repo)
467 ms = mergestate(repo)
467 ms.reset()
468 ms.reset()
468 return ms
469 return ms
469
470
470 @staticmethod
471 @staticmethod
471 def read(repo):
472 def read(repo):
472 """Initialize the merge state, reading it from disk."""
473 """Initialize the merge state, reading it from disk."""
473 ms = mergestate(repo)
474 ms = mergestate(repo)
474 ms._read()
475 ms._read()
475 return ms
476 return ms
476
477
477 def _read(self):
478 def _read(self):
478 """Analyse each record content to restore a serialized state from disk
479 """Analyse each record content to restore a serialized state from disk
479
480
480 This function processes the "record" entries produced by the de-serialization
481 This function processes the "record" entries produced by the de-serialization
481 of the on-disk file.
482 of the on-disk file.
482 """
483 """
483 unsupported = set()
484 unsupported = set()
484 records = self._readrecords()
485 records = self._readrecords()
485 for rtype, record in records:
486 for rtype, record in records:
486 if rtype == RECORD_LOCAL:
487 if rtype == RECORD_LOCAL:
487 self._local = bin(record)
488 self._local = bin(record)
488 elif rtype == RECORD_OTHER:
489 elif rtype == RECORD_OTHER:
489 self._other = bin(record)
490 self._other = bin(record)
490 elif rtype == LEGACY_MERGE_DRIVER_STATE:
491 elif rtype == LEGACY_MERGE_DRIVER_STATE:
491 pass
492 pass
492 elif rtype in (
493 elif rtype in (
493 RECORD_MERGED,
494 RECORD_MERGED,
494 RECORD_CHANGEDELETE_CONFLICT,
495 RECORD_CHANGEDELETE_CONFLICT,
495 RECORD_PATH_CONFLICT,
496 RECORD_PATH_CONFLICT,
496 LEGACY_MERGE_DRIVER_MERGE,
497 LEGACY_MERGE_DRIVER_MERGE,
497 LEGACY_RECORD_RESOLVED_OTHER,
498 LEGACY_RECORD_RESOLVED_OTHER,
498 ):
499 ):
499 bits = record.split(b'\0')
500 bits = record.split(b'\0')
500 # merge entry type MERGE_RECORD_MERGED_OTHER is deprecated
501 # merge entry type MERGE_RECORD_MERGED_OTHER is deprecated
501 # and we now store related information in _stateextras, so
502 # and we now store related information in _stateextras, so
502 # let's write to _stateextras directly
503 # let's write to _stateextras directly
503 if bits[1] == MERGE_RECORD_MERGED_OTHER:
504 if bits[1] == MERGE_RECORD_MERGED_OTHER:
504 self._stateextras[bits[0]][b'filenode-source'] = b'other'
505 self._stateextras[bits[0]][b'filenode-source'] = b'other'
505 else:
506 else:
506 self._state[bits[0]] = bits[1:]
507 self._state[bits[0]] = bits[1:]
507 elif rtype == RECORD_FILE_VALUES:
508 elif rtype == RECORD_FILE_VALUES:
508 filename, rawextras = record.split(b'\0', 1)
509 filename, rawextras = record.split(b'\0', 1)
509 extraparts = rawextras.split(b'\0')
510 extraparts = rawextras.split(b'\0')
510 extras = {}
511 extras = {}
511 i = 0
512 i = 0
512 while i < len(extraparts):
513 while i < len(extraparts):
513 extras[extraparts[i]] = extraparts[i + 1]
514 extras[extraparts[i]] = extraparts[i + 1]
514 i += 2
515 i += 2
515
516
516 self._stateextras[filename] = extras
517 self._stateextras[filename] = extras
517 elif rtype == RECORD_LABELS:
518 elif rtype == RECORD_LABELS:
518 labels = record.split(b'\0', 2)
519 labels = record.split(b'\0', 2)
519 self._labels = [l for l in labels if len(l) > 0]
520 self._labels = [l for l in labels if len(l) > 0]
520 elif not rtype.islower():
521 elif not rtype.islower():
521 unsupported.add(rtype)
522 unsupported.add(rtype)
522
523
523 if unsupported:
524 if unsupported:
524 raise error.UnsupportedMergeRecords(unsupported)
525 raise error.UnsupportedMergeRecords(unsupported)
525
526
526 def _readrecords(self):
527 def _readrecords(self):
527 """Read merge state from disk and return a list of record (TYPE, data)
528 """Read merge state from disk and return a list of record (TYPE, data)
528
529
529 We read data from both v1 and v2 files and decide which one to use.
530 We read data from both v1 and v2 files and decide which one to use.
530
531
531 V1 has been used by versions prior to 2.9.1 and contains less data than
532 V1 has been used by versions prior to 2.9.1 and contains less data than
532 v2. We read both versions and check that no data in v2 contradicts
533 v2. We read both versions and check that no data in v2 contradicts
533 v1. If there is no contradiction, we can safely assume that both v1
534 v1. If there is no contradiction, we can safely assume that both v1
534 and v2 were written at the same time and use the extra data in v2. If
535 and v2 were written at the same time and use the extra data in v2. If
535 there is a contradiction, we ignore the v2 content, as we assume an old
536 there is a contradiction, we ignore the v2 content, as we assume an old
536 version of Mercurial has overwritten the mergestate file and left a stale
537 version of Mercurial has overwritten the mergestate file and left a stale
537 v2 file around.
538 v2 file around.
538
539
539 returns list of record [(TYPE, data), ...]"""
540 returns list of record [(TYPE, data), ...]"""
540 v1records = self._readrecordsv1()
541 v1records = self._readrecordsv1()
541 v2records = self._readrecordsv2()
542 v2records = self._readrecordsv2()
542 if self._v1v2match(v1records, v2records):
543 if self._v1v2match(v1records, v2records):
543 return v2records
544 return v2records
544 else:
545 else:
545 # v1 file is newer than v2 file, use it
546 # v1 file is newer than v2 file, use it
546 # we have to infer the "other" changeset of the merge
547 # we have to infer the "other" changeset of the merge
547 # we cannot do better than that with v1 of the format
548 # we cannot do better than that with v1 of the format
548 mctx = self._repo[None].parents()[-1]
549 mctx = self._repo[None].parents()[-1]
549 v1records.append((RECORD_OTHER, mctx.hex()))
550 v1records.append((RECORD_OTHER, mctx.hex()))
550 # add placeholder "other" file node information;
551 # add placeholder "other" file node information;
551 # nobody is using it yet so we do not need to fetch the data.
552 # nobody is using it yet so we do not need to fetch the data.
552 # if mctx was wrong, `mctx[bits[-2]]` may fail.
553 # if mctx was wrong, `mctx[bits[-2]]` may fail.
553 for idx, r in enumerate(v1records):
554 for idx, r in enumerate(v1records):
554 if r[0] == RECORD_MERGED:
555 if r[0] == RECORD_MERGED:
555 bits = r[1].split(b'\0')
556 bits = r[1].split(b'\0')
556 bits.insert(-2, b'')
557 bits.insert(-2, b'')
557 v1records[idx] = (r[0], b'\0'.join(bits))
558 v1records[idx] = (r[0], b'\0'.join(bits))
558 return v1records
559 return v1records
559
560
560 def _v1v2match(self, v1records, v2records):
561 def _v1v2match(self, v1records, v2records):
561 oldv2 = set() # old format version of v2 record
562 oldv2 = set() # old format version of v2 record
562 for rec in v2records:
563 for rec in v2records:
563 if rec[0] == RECORD_LOCAL:
564 if rec[0] == RECORD_LOCAL:
564 oldv2.add(rec)
565 oldv2.add(rec)
565 elif rec[0] == RECORD_MERGED:
566 elif rec[0] == RECORD_MERGED:
566 # drop the onode data (not contained in v1)
567 # drop the onode data (not contained in v1)
567 oldv2.add((RECORD_MERGED, _droponode(rec[1])))
568 oldv2.add((RECORD_MERGED, _droponode(rec[1])))
568 for rec in v1records:
569 for rec in v1records:
569 if rec not in oldv2:
570 if rec not in oldv2:
570 return False
571 return False
571 else:
572 else:
572 return True
573 return True
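
# In short: every v1 record must reappear in v2 once v2's extra "other
# node" field is dropped; otherwise the v1 file is treated as newer and
# wins (see _readrecords above).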
573
574
574 def _readrecordsv1(self):
575 def _readrecordsv1(self):
575 """read on disk merge state for version 1 file
576 """read on disk merge state for version 1 file
576
577
577 returns list of record [(TYPE, data), ...]
578 returns list of record [(TYPE, data), ...]
578
579
579 Note: the "F" data from this file are one entry short
580 Note: the "F" data from this file are one entry short
580 (no "other file node" entry)
581 (no "other file node" entry)
581 """
582 """
582 records = []
583 records = []
583 try:
584 try:
584 f = self._repo.vfs(self.statepathv1)
585 f = self._repo.vfs(self.statepathv1)
585 for i, l in enumerate(f):
586 for i, l in enumerate(f):
586 if i == 0:
587 if i == 0:
587 records.append((RECORD_LOCAL, l[:-1]))
588 records.append((RECORD_LOCAL, l[:-1]))
588 else:
589 else:
589 records.append((RECORD_MERGED, l[:-1]))
590 records.append((RECORD_MERGED, l[:-1]))
590 f.close()
591 f.close()
591 except IOError as err:
592 except IOError as err:
592 if err.errno != errno.ENOENT:
593 if err.errno != errno.ENOENT:
593 raise
594 raise
594 return records
595 return records
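
# v1 on-disk layout, per the loop above: line 0 is the hex node of the
# local changeset; every later line is one RECORD_MERGED entry with
# NUL-separated fields and no "other node" column.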
595
596
596 def _readrecordsv2(self):
597 def _readrecordsv2(self):
597 """read on disk merge state for version 2 file
598 """read on disk merge state for version 2 file
598
599
599 This format is a list of arbitrary records of the form:
600 This format is a list of arbitrary records of the form:
600
601
601 [type][length][content]
602 [type][length][content]
602
603
603 `type` is a single character, `length` is a 4 byte integer, and
604 `type` is a single character, `length` is a 4 byte integer, and
604 `content` is an arbitrary byte sequence of length `length`.
605 `content` is an arbitrary byte sequence of length `length`.
605
606
606 Mercurial versions prior to 3.7 have a bug where if there are
607 Mercurial versions prior to 3.7 have a bug where if there are
607 unsupported mandatory merge records, attempting to clear out the merge
608 unsupported mandatory merge records, attempting to clear out the merge
608 state with hg update --clean or similar aborts. The 't' record type
609 state with hg update --clean or similar aborts. The 't' record type
609 works around that by writing out what those versions treat as an
610 works around that by writing out what those versions treat as an
610 advisory record, but later versions interpret as special: the first
611 advisory record, but later versions interpret as special: the first
611 character is the 'real' record type and everything onwards is the data.
612 character is the 'real' record type and everything onwards is the data.
612
613
613 Returns list of records [(TYPE, data), ...]."""
614 Returns list of records [(TYPE, data), ...]."""
614 records = []
615 records = []
615 try:
616 try:
616 f = self._repo.vfs(self.statepathv2)
617 f = self._repo.vfs(self.statepathv2)
617 data = f.read()
618 data = f.read()
618 off = 0
619 off = 0
619 end = len(data)
620 end = len(data)
620 while off < end:
621 while off < end:
621 rtype = data[off : off + 1]
622 rtype = data[off : off + 1]
622 off += 1
623 off += 1
623 length = _unpack(b'>I', data[off : (off + 4)])[0]
624 length = _unpack(b'>I', data[off : (off + 4)])[0]
624 off += 4
625 off += 4
625 record = data[off : (off + length)]
626 record = data[off : (off + length)]
626 off += length
627 off += length
627 if rtype == RECORD_OVERRIDE:
628 if rtype == RECORD_OVERRIDE:
628 rtype, record = record[0:1], record[1:]
629 rtype, record = record[0:1], record[1:]
629 records.append((rtype, record))
630 records.append((rtype, record))
630 f.close()
631 f.close()
631 except IOError as err:
632 except IOError as err:
632 if err.errno != errno.ENOENT:
633 if err.errno != errno.ENOENT:
633 raise
634 raise
634 return records
635 return records
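
# A framing sketch (illustrative, standard library only): one-byte type,
# big-endian 4-byte length, then the payload, matching the parser above
# and _writerecordsv2 below. Unknown-but-mandatory record types are
# wrapped in a 't' record whose payload starts with the real type byte.
#
#   import struct
#   def frame(rtype, payload):
#       return struct.pack(b'>sI%ds' % len(payload),
#                          rtype, len(payload), payload)
#   blob = frame(b'L', b'0' * 40) + frame(b't', b'Xpayload')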
635
636
636 def commit(self):
637 def commit(self):
637 if self._dirty:
638 if self._dirty:
638 records = self._makerecords()
639 records = self._makerecords()
639 self._writerecords(records)
640 self._writerecords(records)
640 self._dirty = False
641 self._dirty = False
641
642
642 def _makerecords(self):
643 def _makerecords(self):
643 records = []
644 records = []
644 records.append((RECORD_LOCAL, hex(self._local)))
645 records.append((RECORD_LOCAL, hex(self._local)))
645 records.append((RECORD_OTHER, hex(self._other)))
646 records.append((RECORD_OTHER, hex(self._other)))
646 # Write out state items. In all cases, the value of the state map entry
647 # Write out state items. In all cases, the value of the state map entry
647 # is written as the contents of the record. The record type depends on
648 # is written as the contents of the record. The record type depends on
648 # the type of state that is stored, and capital-letter records are used
649 # the type of state that is stored, and capital-letter records are used
649 # to prevent older versions of Mercurial that do not support the feature
650 # to prevent older versions of Mercurial that do not support the feature
650 # from loading them.
651 # from loading them.
651 for filename, v in pycompat.iteritems(self._state):
652 for filename, v in pycompat.iteritems(self._state):
652 if v[0] in (
653 if v[0] in (
653 MERGE_RECORD_UNRESOLVED_PATH,
654 MERGE_RECORD_UNRESOLVED_PATH,
654 MERGE_RECORD_RESOLVED_PATH,
655 MERGE_RECORD_RESOLVED_PATH,
655 ):
656 ):
656 # Path conflicts. These are stored in 'P' records. The current
657 # Path conflicts. These are stored in 'P' records. The current
657 # resolution state ('pu' or 'pr') is stored within the record.
658 # resolution state ('pu' or 'pr') is stored within the record.
658 records.append(
659 records.append(
659 (RECORD_PATH_CONFLICT, b'\0'.join([filename] + v))
660 (RECORD_PATH_CONFLICT, b'\0'.join([filename] + v))
660 )
661 )
661 elif v[1] == nullhex or v[6] == nullhex:
662 elif v[1] == nullhex or v[6] == nullhex:
662 # Change/Delete or Delete/Change conflicts. These are stored in
663 # Change/Delete or Delete/Change conflicts. These are stored in
663 # 'C' records. v[1] is the local file, and is nullhex when the
664 # 'C' records. v[1] is the local file, and is nullhex when the
664 # file is deleted locally ('dc'). v[6] is the remote file, and
665 # file is deleted locally ('dc'). v[6] is the remote file, and
665 # is nullhex when the file is deleted remotely ('cd').
666 # is nullhex when the file is deleted remotely ('cd').
666 records.append(
667 records.append(
667 (RECORD_CHANGEDELETE_CONFLICT, b'\0'.join([filename] + v))
668 (RECORD_CHANGEDELETE_CONFLICT, b'\0'.join([filename] + v))
668 )
669 )
669 else:
670 else:
670 # Normal files. These are stored in 'F' records.
671 # Normal files. These are stored in 'F' records.
671 records.append((RECORD_MERGED, b'\0'.join([filename] + v)))
672 records.append((RECORD_MERGED, b'\0'.join([filename] + v)))
672 for filename, extras in sorted(pycompat.iteritems(self._stateextras)):
673 for filename, extras in sorted(pycompat.iteritems(self._stateextras)):
673 rawextras = b'\0'.join(
674 rawextras = b'\0'.join(
674 b'%s\0%s' % (k, v) for k, v in pycompat.iteritems(extras)
675 b'%s\0%s' % (k, v) for k, v in pycompat.iteritems(extras)
675 )
676 )
676 records.append(
677 records.append(
677 (RECORD_FILE_VALUES, b'%s\0%s' % (filename, rawextras))
678 (RECORD_FILE_VALUES, b'%s\0%s' % (filename, rawextras))
678 )
679 )
679 if self._labels is not None:
680 if self._labels is not None:
680 labels = b'\0'.join(self._labels)
681 labels = b'\0'.join(self._labels)
681 records.append((RECORD_LABELS, labels))
682 records.append((RECORD_LABELS, labels))
682 return records
683 return records
683
684
684 def _writerecords(self, records):
685 def _writerecords(self, records):
685 """Write current state on disk (both v1 and v2)"""
686 """Write current state on disk (both v1 and v2)"""
686 self._writerecordsv1(records)
687 self._writerecordsv1(records)
687 self._writerecordsv2(records)
688 self._writerecordsv2(records)
688
689
689 def _writerecordsv1(self, records):
690 def _writerecordsv1(self, records):
690 """Write current state on disk in a version 1 file"""
691 """Write current state on disk in a version 1 file"""
691 f = self._repo.vfs(self.statepathv1, b'wb')
692 f = self._repo.vfs(self.statepathv1, b'wb')
692 irecords = iter(records)
693 irecords = iter(records)
693 lrecords = next(irecords)
694 lrecords = next(irecords)
694 assert lrecords[0] == RECORD_LOCAL
695 assert lrecords[0] == RECORD_LOCAL
695 f.write(hex(self._local) + b'\n')
696 f.write(hex(self._local) + b'\n')
696 for rtype, data in irecords:
697 for rtype, data in irecords:
697 if rtype == RECORD_MERGED:
698 if rtype == RECORD_MERGED:
698 f.write(b'%s\n' % _droponode(data))
699 f.write(b'%s\n' % _droponode(data))
699 f.close()
700 f.close()
700
701
701 def _writerecordsv2(self, records):
702 def _writerecordsv2(self, records):
702 """Write current state on disk in a version 2 file
703 """Write current state on disk in a version 2 file
703
704
704 See the docstring for _readrecordsv2 for why we use 't'."""
705 See the docstring for _readrecordsv2 for why we use 't'."""
705 # these are the records that all version 2 clients can read
706 # these are the records that all version 2 clients can read
706 allowlist = (RECORD_LOCAL, RECORD_OTHER, RECORD_MERGED)
707 allowlist = (RECORD_LOCAL, RECORD_OTHER, RECORD_MERGED)
707 f = self._repo.vfs(self.statepathv2, b'wb')
708 f = self._repo.vfs(self.statepathv2, b'wb')
708 for key, data in records:
709 for key, data in records:
709 assert len(key) == 1
710 assert len(key) == 1
710 if key not in allowlist:
711 if key not in allowlist:
711 key, data = RECORD_OVERRIDE, b'%s%s' % (key, data)
712 key, data = RECORD_OVERRIDE, b'%s%s' % (key, data)
712 format = b'>sI%is' % len(data)
713 format = b'>sI%is' % len(data)
713 f.write(_pack(format, key, len(data), data))
714 f.write(_pack(format, key, len(data), data))
714 f.close()
715 f.close()
715
716
716 def _make_backup(self, fctx, localkey):
717 def _make_backup(self, fctx, localkey):
717 self._repo.vfs.write(b'merge/' + localkey, fctx.data())
718 self._repo.vfs.write(b'merge/' + localkey, fctx.data())
718
719
719 def _restore_backup(self, fctx, localkey, flags):
720 def _restore_backup(self, fctx, localkey, flags):
720 with self._repo.vfs(b'merge/' + localkey) as f:
721 with self._repo.vfs(b'merge/' + localkey) as f:
721 fctx.write(f.read(), flags)
722 fctx.write(f.read(), flags)
722
723
723 def reset(self):
724 def reset(self):
724 shutil.rmtree(self._repo.vfs.join(b'merge'), True)
725 shutil.rmtree(self._repo.vfs.join(b'merge'), True)
725
726
726
727
727 class memmergestate(_mergestate_base):
728 class memmergestate(_mergestate_base):
728 def __init__(self, repo):
729 def __init__(self, repo):
729 super(memmergestate, self).__init__(repo)
730 super(memmergestate, self).__init__(repo)
730 self._backups = {}
731 self._backups = {}
731
732
732 def _make_backup(self, fctx, localkey):
733 def _make_backup(self, fctx, localkey):
733 self._backups[localkey] = fctx.data()
734 self._backups[localkey] = fctx.data()
734
735
735 def _restore_backup(self, fctx, localkey, flags):
736 def _restore_backup(self, fctx, localkey, flags):
736 fctx.write(self._backups[localkey], flags)
737 fctx.write(self._backups[localkey], flags)
737
738
738
739
739 def recordupdates(repo, actions, branchmerge, getfiledata):
740 def recordupdates(repo, actions, branchmerge, getfiledata):
740 """record merge actions to the dirstate"""
741 """record merge actions to the dirstate"""
741 # remove (must come first)
742 # remove (must come first)
742 for f, args, msg in actions.get(ACTION_REMOVE, []):
743 for f, args, msg in actions.get(ACTION_REMOVE, []):
743 if branchmerge:
744 if branchmerge:
744 repo.dirstate.remove(f)
745 repo.dirstate.remove(f)
745 else:
746 else:
746 repo.dirstate.drop(f)
747 repo.dirstate.drop(f)
747
748
748 # forget (must come first)
749 # forget (must come first)
749 for f, args, msg in actions.get(ACTION_FORGET, []):
750 for f, args, msg in actions.get(ACTION_FORGET, []):
750 repo.dirstate.drop(f)
751 repo.dirstate.drop(f)
751
752
752 # resolve path conflicts
753 # resolve path conflicts
753 for f, args, msg in actions.get(ACTION_PATH_CONFLICT_RESOLVE, []):
754 for f, args, msg in actions.get(ACTION_PATH_CONFLICT_RESOLVE, []):
754 (f0, origf0) = args
755 (f0, origf0) = args
755 repo.dirstate.add(f)
756 repo.dirstate.add(f)
756 repo.dirstate.copy(origf0, f)
757 repo.dirstate.copy(origf0, f)
757 if f0 == origf0:
758 if f0 == origf0:
758 repo.dirstate.remove(f0)
759 repo.dirstate.remove(f0)
759 else:
760 else:
760 repo.dirstate.drop(f0)
761 repo.dirstate.drop(f0)
761
762
762 # re-add
763 # re-add
763 for f, args, msg in actions.get(ACTION_ADD, []):
764 for f, args, msg in actions.get(ACTION_ADD, []):
764 repo.dirstate.add(f)
765 repo.dirstate.add(f)
765
766
766 # re-add/mark as modified
767 # re-add/mark as modified
767 for f, args, msg in actions.get(ACTION_ADD_MODIFIED, []):
768 for f, args, msg in actions.get(ACTION_ADD_MODIFIED, []):
768 if branchmerge:
769 if branchmerge:
769 repo.dirstate.normallookup(f)
770 repo.dirstate.normallookup(f)
770 else:
771 else:
771 repo.dirstate.add(f)
772 repo.dirstate.add(f)
772
773
773 # exec change
774 # exec change
774 for f, args, msg in actions.get(ACTION_EXEC, []):
775 for f, args, msg in actions.get(ACTION_EXEC, []):
775 repo.dirstate.normallookup(f)
776 repo.dirstate.normallookup(f)
776
777
777 # keep
778 # keep
778 for f, args, msg in actions.get(ACTION_KEEP, []):
779 for f, args, msg in actions.get(ACTION_KEEP, []):
779 pass
780 pass
780
781
781 # keep deleted
782 # keep deleted
782 for f, args, msg in actions.get(ACTION_KEEP_ABSENT, []):
783 for f, args, msg in actions.get(ACTION_KEEP_ABSENT, []):
783 pass
784 pass
784
785
785 # keep new
786 # keep new
786 for f, args, msg in actions.get(ACTION_KEEP_NEW, []):
787 for f, args, msg in actions.get(ACTION_KEEP_NEW, []):
787 pass
788 pass
788
789
789 # get
790 # get
790 for f, args, msg in actions.get(ACTION_GET, []):
791 for f, args, msg in actions.get(ACTION_GET, []):
791 if branchmerge:
792 if branchmerge:
792 repo.dirstate.otherparent(f)
793 repo.dirstate.otherparent(f)
793 else:
794 else:
794 parentfiledata = getfiledata[f] if getfiledata else None
795 parentfiledata = getfiledata[f] if getfiledata else None
795 repo.dirstate.normal(f, parentfiledata=parentfiledata)
796 repo.dirstate.normal(f, parentfiledata=parentfiledata)
796
797
797 # merge
798 # merge
798 for f, args, msg in actions.get(ACTION_MERGE, []):
799 for f, args, msg in actions.get(ACTION_MERGE, []):
799 f1, f2, fa, move, anc = args
800 f1, f2, fa, move, anc = args
800 if branchmerge:
801 if branchmerge:
801 # We've done a branch merge, mark this file as merged
802 # We've done a branch merge, mark this file as merged
802 # so that we properly record the merger later
803 # so that we properly record the merger later
803 repo.dirstate.merge(f)
804 repo.dirstate.merge(f)
804 if f1 != f2: # copy/rename
805 if f1 != f2: # copy/rename
805 if move:
806 if move:
806 repo.dirstate.remove(f1)
807 repo.dirstate.remove(f1)
807 if f1 != f:
808 if f1 != f:
808 repo.dirstate.copy(f1, f)
809 repo.dirstate.copy(f1, f)
809 else:
810 else:
810 repo.dirstate.copy(f2, f)
811 repo.dirstate.copy(f2, f)
811 else:
812 else:
812 # We've update-merged a locally modified file, so
813 # We've update-merged a locally modified file, so
813 # we set the dirstate to emulate a normal checkout
814 # we set the dirstate to emulate a normal checkout
814 # of that file some time in the past. Thus our
815 # of that file some time in the past. Thus our
815 # merge will appear as a normal local file
816 # merge will appear as a normal local file
816 # modification.
817 # modification.
817 if f2 == f: # file not locally copied/moved
818 if f2 == f: # file not locally copied/moved
818 repo.dirstate.normallookup(f)
819 repo.dirstate.normallookup(f)
819 if move:
820 if move:
820 repo.dirstate.drop(f1)
821 repo.dirstate.drop(f1)
821
822
822 # directory rename, move local
823 # directory rename, move local
823 for f, args, msg in actions.get(ACTION_DIR_RENAME_MOVE_LOCAL, []):
824 for f, args, msg in actions.get(ACTION_DIR_RENAME_MOVE_LOCAL, []):
824 f0, flag = args
825 f0, flag = args
825 if branchmerge:
826 if branchmerge:
826 repo.dirstate.add(f)
827 repo.dirstate.add(f)
827 repo.dirstate.remove(f0)
828 repo.dirstate.remove(f0)
828 repo.dirstate.copy(f0, f)
829 repo.dirstate.copy(f0, f)
829 else:
830 else:
830 repo.dirstate.normal(f)
831 repo.dirstate.normal(f)
831 repo.dirstate.drop(f0)
832 repo.dirstate.drop(f0)
832
833
833 # directory rename, get
834 # directory rename, get
834 for f, args, msg in actions.get(ACTION_LOCAL_DIR_RENAME_GET, []):
835 for f, args, msg in actions.get(ACTION_LOCAL_DIR_RENAME_GET, []):
835 f0, flag = args
836 f0, flag = args
836 if branchmerge:
837 if branchmerge:
837 repo.dirstate.add(f)
838 repo.dirstate.add(f)
838 repo.dirstate.copy(f0, f)
839 repo.dirstate.copy(f0, f)
839 else:
840 else:
840 repo.dirstate.normal(f)
841 repo.dirstate.normal(f)
@@ -1,1188 +1,1188 b''
1 # shelve.py - save/restore working directory state
1 # shelve.py - save/restore working directory state
2 #
2 #
3 # Copyright 2013 Facebook, Inc.
3 # Copyright 2013 Facebook, Inc.
4 #
4 #
5 # This software may be used and distributed according to the terms of the
5 # This software may be used and distributed according to the terms of the
6 # GNU General Public License version 2 or any later version.
6 # GNU General Public License version 2 or any later version.
7
7
8 """save and restore changes to the working directory
8 """save and restore changes to the working directory
9
9
10 The "hg shelve" command saves changes made to the working directory
10 The "hg shelve" command saves changes made to the working directory
11 and reverts those changes, resetting the working directory to a clean
11 and reverts those changes, resetting the working directory to a clean
12 state.
12 state.
13
13
14 Later on, the "hg unshelve" command restores the changes saved by "hg
14 Later on, the "hg unshelve" command restores the changes saved by "hg
15 shelve". Changes can be restored even after updating to a different
15 shelve". Changes can be restored even after updating to a different
16 parent, in which case Mercurial's merge machinery will resolve any
16 parent, in which case Mercurial's merge machinery will resolve any
17 conflicts if necessary.
17 conflicts if necessary.
18
18
19 You can have more than one shelved change outstanding at a time; each
19 You can have more than one shelved change outstanding at a time; each
20 shelved change has a distinct name. For details, see the help for "hg
20 shelved change has a distinct name. For details, see the help for "hg
21 shelve".
21 shelve".
22 """
22 """
23 from __future__ import absolute_import
23 from __future__ import absolute_import
24
24
25 import collections
25 import collections
26 import errno
26 import errno
27 import itertools
27 import itertools
28 import stat
28 import stat
29
29
30 from .i18n import _
30 from .i18n import _
31 from .node import (
31 from .node import (
32 bin,
32 bin,
33 hex,
33 hex,
34 nullid,
34 nullid,
35 nullrev,
35 nullrev,
36 )
36 )
37 from . import (
37 from . import (
38 bookmarks,
38 bookmarks,
39 bundle2,
39 bundle2,
40 changegroup,
40 changegroup,
41 cmdutil,
41 cmdutil,
42 discovery,
42 discovery,
43 error,
43 error,
44 exchange,
44 exchange,
45 hg,
45 hg,
46 lock as lockmod,
46 lock as lockmod,
47 mdiff,
47 mdiff,
48 merge,
48 merge,
49 mergestate as mergestatemod,
49 mergestate as mergestatemod,
50 patch,
50 patch,
51 phases,
51 phases,
52 pycompat,
52 pycompat,
53 repair,
53 repair,
54 scmutil,
54 scmutil,
55 templatefilters,
55 templatefilters,
56 util,
56 util,
57 vfs as vfsmod,
57 vfs as vfsmod,
58 )
58 )
59 from .utils import (
59 from .utils import (
60 dateutil,
60 dateutil,
61 stringutil,
61 stringutil,
62 )
62 )
63
63
64 backupdir = b'shelve-backup'
64 backupdir = b'shelve-backup'
65 shelvedir = b'shelved'
65 shelvedir = b'shelved'
66 shelvefileextensions = [b'hg', b'patch', b'shelve']
66 shelvefileextensions = [b'hg', b'patch', b'shelve']
67
67
68 # we never need the user, so we use a
68 # we never need the user, so we use a
69 # generic user for all shelve operations
69 # generic user for all shelve operations
70 shelveuser = b'shelve@localhost'
70 shelveuser = b'shelve@localhost'
71
71
72
72
73 class ShelfDir(object):
73 class ShelfDir(object):
74 def __init__(self, repo, for_backups=False):
74 def __init__(self, repo, for_backups=False):
75 if for_backups:
75 if for_backups:
76 self.vfs = vfsmod.vfs(repo.vfs.join(backupdir))
76 self.vfs = vfsmod.vfs(repo.vfs.join(backupdir))
77 else:
77 else:
78 self.vfs = vfsmod.vfs(repo.vfs.join(shelvedir))
78 self.vfs = vfsmod.vfs(repo.vfs.join(shelvedir))
79
79
80 def get(self, name):
80 def get(self, name):
81 return Shelf(self.vfs, name)
81 return Shelf(self.vfs, name)
82
82
83 def listshelves(self):
83 def listshelves(self):
84 """return all shelves in repo as list of (time, name)"""
84 """return all shelves in repo as list of (time, name)"""
        try:
            names = self.vfs.listdir()
        except OSError as err:
            if err.errno != errno.ENOENT:
                raise
            return []
        info = []
        seen = set()
        for filename in names:
            name = filename.rsplit(b'.', 1)[0]
            if name in seen:
                continue
            seen.add(name)
            shelf = self.get(name)
            if not shelf.exists():
                continue
            mtime = shelf.mtime()
            info.append((mtime, name))
        return sorted(info, reverse=True)


class Shelf(object):
    """Represents a shelf, including possibly multiple files storing it.

    Old shelves will have a .patch and a .hg file. Newer shelves will
    also have a .shelve file. This class abstracts away some of the
    differences and lets you work with the shelf as a whole.
    """

    def __init__(self, vfs, name):
        self.vfs = vfs
        self.name = name

    def exists(self):
        return self.vfs.exists(self.name + b'.patch') and self.vfs.exists(
            self.name + b'.hg'
        )

    def mtime(self):
        return self.vfs.stat(self.name + b'.patch')[stat.ST_MTIME]

    def writeinfo(self, info):
        scmutil.simplekeyvaluefile(self.vfs, self.name + b'.shelve').write(info)

    def hasinfo(self):
        return self.vfs.exists(self.name + b'.shelve')

    def readinfo(self):
        return scmutil.simplekeyvaluefile(
            self.vfs, self.name + b'.shelve'
        ).read()

    def writebundle(self, repo, bases, node):
        cgversion = changegroup.safeversion(repo)
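        # cg1 only fits the legacy HG10BZ envelope; newer changegroup
        # versions travel in an HG20 bundle with explicit BZ compression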
        if cgversion == b'01':
            btype = b'HG10BZ'
            compression = None
        else:
            btype = b'HG20'
            compression = b'BZ'

        repo = repo.unfiltered()

        outgoing = discovery.outgoing(
            repo, missingroots=bases, ancestorsof=[node]
        )
        cg = changegroup.makechangegroup(repo, outgoing, cgversion, b'shelve')

        bundle_filename = self.vfs.join(self.name + b'.hg')
        bundle2.writebundle(
            repo.ui,
            cg,
            bundle_filename,
            btype,
            self.vfs,
            compression=compression,
        )

    def applybundle(self, repo, tr):
        filename = self.name + b'.hg'
        fp = self.vfs(filename)
        try:
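            # restored commits should stay invisible to exchange: use the
            # internal phase when the repo supports it, secret otherwise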
            targetphase = phases.internal
            if not phases.supportinternal(repo):
                targetphase = phases.secret
            gen = exchange.readbundle(repo.ui, fp, filename, self.vfs)
            pretip = repo[b'tip']
            bundle2.applybundle(
                repo,
                gen,
                tr,
                source=b'unshelve',
                url=b'bundle:' + self.vfs.join(filename),
                targetphase=targetphase,
            )
            shelvectx = repo[b'tip']
            if pretip == shelvectx:
                shelverev = tr.changes[b'revduplicates'][-1]
                shelvectx = repo[shelverev]
            return shelvectx
        finally:
            fp.close()

    def open_patch(self, mode=b'rb'):
        return self.vfs(self.name + b'.patch', mode)

    def _backupfilename(self, backupvfs, filename):
        def gennames(base):
            yield base
            base, ext = base.rsplit(b'.', 1)
            for i in itertools.count(1):
                yield b'%s-%d.%s' % (base, i, ext)
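        # candidate names, e.g.: foo.patch, foo-1.patch, foo-2.patch, ...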

        for n in gennames(filename):
            if not backupvfs.exists(n):
                return backupvfs.join(n)

    def movetobackup(self, backupvfs):
        if not backupvfs.isdir():
            backupvfs.makedir()
        for suffix in shelvefileextensions:
            filename = self.name + b'.' + suffix
            if self.vfs.exists(filename):
                util.rename(
                    self.vfs.join(filename),
                    self._backupfilename(backupvfs, filename),
                )

    def delete(self):
        for ext in shelvefileextensions:
            self.vfs.tryunlink(self.name + b'.' + ext)


class shelvedstate(object):
    """Handle persistence during unshelving operations.

    Handles saving and restoring a shelved state. Ensures that different
    versions of a shelved state are possible and handles them appropriately.
    """

    _version = 2
    _filename = b'shelvedstate'
    _keep = b'keep'
    _nokeep = b'nokeep'
    # colon is essential to differentiate from a real bookmark name
    _noactivebook = b':no-active-bookmark'
    _interactive = b'interactive'

    @classmethod
    def _verifyandtransform(cls, d):
        """Some basic shelvestate syntactic verification and transformation"""
        try:
            d[b'originalwctx'] = bin(d[b'originalwctx'])
            d[b'pendingctx'] = bin(d[b'pendingctx'])
            d[b'parents'] = [bin(h) for h in d[b'parents'].split(b' ')]
            d[b'nodestoremove'] = [
                bin(h) for h in d[b'nodestoremove'].split(b' ')
            ]
        except (ValueError, TypeError, KeyError) as err:
            raise error.CorruptedState(stringutil.forcebytestr(err))

    @classmethod
    def _getversion(cls, repo):
        """Read version information from shelvestate file"""
        fp = repo.vfs(cls._filename)
        try:
            version = int(fp.readline().strip())
        except ValueError as err:
            raise error.CorruptedState(stringutil.forcebytestr(err))
        finally:
            fp.close()
        return version

    @classmethod
    def _readold(cls, repo):
        """Read the old position-based version of a shelvestate file"""
        # Order is important, because the old shelvestate file uses it
        # to determine values of fields (e.g. name is on the second line,
        # originalwctx is on the third and so forth). Please do not change.
        keys = [
            b'version',
            b'name',
            b'originalwctx',
            b'pendingctx',
            b'parents',
            b'nodestoremove',
            b'branchtorestore',
            b'keep',
            b'activebook',
        ]
        # this is executed only rarely, so it is not a big deal
        # that we open this file twice
        fp = repo.vfs(cls._filename)
        d = {}
        try:
            for key in keys:
                d[key] = fp.readline().strip()
        finally:
            fp.close()
        return d

    @classmethod
    def load(cls, repo):
        version = cls._getversion(repo)
        if version < cls._version:
            d = cls._readold(repo)
        elif version == cls._version:
            d = scmutil.simplekeyvaluefile(repo.vfs, cls._filename).read(
                firstlinenonkeyval=True
            )
        else:
            raise error.Abort(
                _(
                    b'this version of shelve is incompatible '
                    b'with the version used in this repo'
                )
            )

        cls._verifyandtransform(d)
        try:
            obj = cls()
            obj.name = d[b'name']
            obj.wctx = repo[d[b'originalwctx']]
            obj.pendingctx = repo[d[b'pendingctx']]
            obj.parents = d[b'parents']
            obj.nodestoremove = d[b'nodestoremove']
            obj.branchtorestore = d.get(b'branchtorestore', b'')
            obj.keep = d.get(b'keep') == cls._keep
            obj.activebookmark = b''
            if d.get(b'activebook', b'') != cls._noactivebook:
                obj.activebookmark = d.get(b'activebook', b'')
            obj.interactive = d.get(b'interactive') == cls._interactive
        except (error.RepoLookupError, KeyError) as err:
            raise error.CorruptedState(pycompat.bytestr(err))

        return obj

    @classmethod
    def save(
        cls,
        repo,
        name,
        originalwctx,
        pendingctx,
        nodestoremove,
        branchtorestore,
        keep=False,
        activebook=b'',
        interactive=False,
    ):
        info = {
            b"name": name,
            b"originalwctx": hex(originalwctx.node()),
            b"pendingctx": hex(pendingctx.node()),
            b"parents": b' '.join([hex(p) for p in repo.dirstate.parents()]),
            b"nodestoremove": b' '.join([hex(n) for n in nodestoremove]),
            b"branchtorestore": branchtorestore,
            b"keep": cls._keep if keep else cls._nokeep,
            b"activebook": activebook or cls._noactivebook,
        }
        if interactive:
            info[b'interactive'] = cls._interactive
        scmutil.simplekeyvaluefile(repo.vfs, cls._filename).write(
            info, firstline=(b"%d" % cls._version)
        )

    @classmethod
    def clear(cls, repo):
        repo.vfs.unlinkpath(cls._filename, ignoremissing=True)


def cleanupoldbackups(repo):
    maxbackups = repo.ui.configint(b'shelve', b'maxbackups')
    backup_dir = ShelfDir(repo, for_backups=True)
    hgfiles = backup_dir.listshelves()
    if maxbackups > 0 and maxbackups < len(hgfiles):
        bordermtime = hgfiles[maxbackups - 1][0]
    else:
        bordermtime = None
    for mtime, name in hgfiles[maxbackups:]:
        if mtime == bordermtime:
            # keep it, because timestamp can't decide exact order of backups
            continue
        backup_dir.get(name).delete()


def _backupactivebookmark(repo):
    activebookmark = repo._activebookmark
    if activebookmark:
        bookmarks.deactivate(repo)
    return activebookmark


def _restoreactivebookmark(repo, mark):
    if mark:
        bookmarks.activate(repo, mark)


def _aborttransaction(repo, tr):
    """Abort current transaction for shelve/unshelve, but keep dirstate"""
    dirstatebackupname = b'dirstate.shelve'
    repo.dirstate.savebackup(tr, dirstatebackupname)
    tr.abort()
    repo.dirstate.restorebackup(None, dirstatebackupname)


def getshelvename(repo, parent, opts):
    """Decide on the name this shelve is going to have"""

    def gennames():
        yield label
        for i in itertools.count(1):
            yield b'%s-%02d' % (label, i)
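        # e.g. with an active bookmark "work": work, work-01, work-02, ...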

    name = opts.get(b'name')
    label = repo._activebookmark or parent.branch() or b'default'
    # slashes aren't allowed in filenames, therefore we replace them
    label = label.replace(b'/', b'_')
    label = label.replace(b'\\', b'_')
    # filenames must not start with '.', as that would make them hidden
    if label.startswith(b'.'):
        label = label.replace(b'.', b'_', 1)

    if name:
        if ShelfDir(repo).get(name).exists():
            e = _(b"a shelved change named '%s' already exists") % name
            raise error.Abort(e)

        # ensure we are not creating a subdirectory or a hidden file
        if b'/' in name or b'\\' in name:
            raise error.Abort(
                _(b'shelved change names can not contain slashes')
            )
        if name.startswith(b'.'):
            raise error.Abort(_(b"shelved change names can not start with '.'"))

    else:
        shelf_dir = ShelfDir(repo)
        for n in gennames():
            if not shelf_dir.get(n).exists():
                name = n
                break

    return name


def mutableancestors(ctx):
    """return all mutable ancestors for ctx (included)

    Much faster than the revset ancestors(ctx) & draft()"""
    seen = {nullrev}
    visit = collections.deque()
    visit.append(ctx)
    while visit:
        ctx = visit.popleft()
        yield ctx.node()
        for parent in ctx.parents():
            rev = parent.rev()
            if rev not in seen:
                seen.add(rev)
                if parent.mutable():
                    visit.append(parent)


def getcommitfunc(extra, interactive, editor=False):
    def commitfunc(ui, repo, message, match, opts):
        hasmq = util.safehasattr(repo, b'mq')
        if hasmq:
            saved, repo.mq.checkapplied = repo.mq.checkapplied, False

        targetphase = phases.internal
        if not phases.supportinternal(repo):
            targetphase = phases.secret
        overrides = {(b'phases', b'new-commit'): targetphase}
        try:
            editor_ = False
            if editor:
                editor_ = cmdutil.getcommiteditor(
                    editform=b'shelve.shelve', **pycompat.strkwargs(opts)
                )
            with repo.ui.configoverride(overrides):
                return repo.commit(
                    message,
                    shelveuser,
                    opts.get(b'date'),
                    match,
                    editor=editor_,
                    extra=extra,
                )
        finally:
            if hasmq:
                repo.mq.checkapplied = saved

    def interactivecommitfunc(ui, repo, *pats, **opts):
        opts = pycompat.byteskwargs(opts)
        match = scmutil.match(repo[b'.'], pats, {})
        message = opts[b'message']
        return commitfunc(ui, repo, message, match, opts)

    return interactivecommitfunc if interactive else commitfunc


def _nothingtoshelvemessaging(ui, repo, pats, opts):
    stat = repo.status(match=scmutil.match(repo[None], pats, opts))
    if stat.deleted:
        ui.status(
            _(b"nothing changed (%d missing files, see 'hg status')\n")
            % len(stat.deleted)
        )
    else:
        ui.status(_(b"nothing changed\n"))


def _shelvecreatedcommit(repo, node, name, match):
    info = {b'node': hex(node)}
    shelf = ShelfDir(repo).get(name)
    shelf.writeinfo(info)
    bases = list(mutableancestors(repo[node]))
    shelf.writebundle(repo, bases, node)
    with shelf.open_patch(b'wb') as fp:
        cmdutil.exportfile(
            repo, [node], fp, opts=mdiff.diffopts(git=True), match=match
        )


def _includeunknownfiles(repo, pats, opts, extra):
    s = repo.status(match=scmutil.match(repo[None], pats, opts), unknown=True)
    if s.unknown:
        extra[b'shelve_unknown'] = b'\0'.join(s.unknown)
        repo[None].add(s.unknown)


def _finishshelve(repo, tr):
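    # internal-phase repos can keep the hidden temporary commit, so the
    # transaction is committed; otherwise the whole transaction is rolled
    # back while the current dirstate is kept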
    if phases.supportinternal(repo):
        tr.close()
    else:
        _aborttransaction(repo, tr)


def createcmd(ui, repo, pats, opts):
    """subcommand that creates a new shelve"""
    with repo.wlock():
        cmdutil.checkunfinished(repo)
        return _docreatecmd(ui, repo, pats, opts)


def _docreatecmd(ui, repo, pats, opts):
    wctx = repo[None]
    parents = wctx.parents()
    parent = parents[0]
    origbranch = wctx.branch()

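    # the default shelve message quotes the parent's summary line; with a
    # nullrev parent (empty repository, or working copy parked on the null
    # revision) there is no changeset to quote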
    if parent.rev() != nullrev:
        desc = b"changes to: %s" % parent.description().split(b'\n', 1)[0]
    else:
        desc = b'(changes in empty repository)'

    if not opts.get(b'message'):
        opts[b'message'] = desc

    lock = tr = activebookmark = None
    try:
        lock = repo.lock()

        # use an uncommitted transaction to generate the bundle to avoid
        # pull races. Ensure we don't print the abort message to stderr.
        tr = repo.transaction(b'shelve', report=lambda x: None)

        interactive = opts.get(b'interactive', False)
        includeunknown = opts.get(b'unknown', False) and not opts.get(
            b'addremove', False
        )

        name = getshelvename(repo, parent, opts)
        activebookmark = _backupactivebookmark(repo)
        extra = {b'internal': b'shelve'}
        if includeunknown:
            _includeunknownfiles(repo, pats, opts, extra)

        if _iswctxonnewbranch(repo) and not _isbareshelve(pats, opts):
            # In non-bare shelve we don't store newly created branch
            # at bundled commit
            repo.dirstate.setbranch(repo[b'.'].branch())

        commitfunc = getcommitfunc(extra, interactive, editor=True)
        if not interactive:
            node = cmdutil.commit(ui, repo, commitfunc, pats, opts)
        else:
            node = cmdutil.dorecord(
                ui,
                repo,
                commitfunc,
                None,
                False,
                cmdutil.recordfilter,
                *pats,
                **pycompat.strkwargs(opts)
            )
        if not node:
            _nothingtoshelvemessaging(ui, repo, pats, opts)
            return 1

        # Create a matcher so that prefetch doesn't attempt to fetch
        # the entire repository pointlessly, and as an optimisation
        # for movedirstate, if needed.
        match = scmutil.matchfiles(repo, repo[node].files())
        _shelvecreatedcommit(repo, node, name, match)

        ui.status(_(b'shelved as %s\n') % name)
        if opts[b'keep']:
            with repo.dirstate.parentchange():
                scmutil.movedirstate(repo, parent, match)
        else:
            hg.update(repo, parent.node())
            ms = mergestatemod.mergestate.read(repo)
            if not ms.unresolvedcount():
                ms.reset()

        if origbranch != repo[b'.'].branch() and not _isbareshelve(pats, opts):
            repo.dirstate.setbranch(origbranch)

        _finishshelve(repo, tr)
    finally:
        _restoreactivebookmark(repo, activebookmark)
        lockmod.release(tr, lock)


def _isbareshelve(pats, opts):
    return (
        not pats
        and not opts.get(b'interactive', False)
        and not opts.get(b'include', False)
        and not opts.get(b'exclude', False)
    )


def _iswctxonnewbranch(repo):
    return repo[None].branch() != repo[b'.'].branch()


def cleanupcmd(ui, repo):
    """subcommand that deletes all shelves"""

    with repo.wlock():
        shelf_dir = ShelfDir(repo)
        backupvfs = vfsmod.vfs(repo.vfs.join(backupdir))
        for _mtime, name in shelf_dir.listshelves():
            shelf_dir.get(name).movetobackup(backupvfs)
        cleanupoldbackups(repo)


def deletecmd(ui, repo, pats):
    """subcommand that deletes a specific shelve"""
    if not pats:
        raise error.InputError(_(b'no shelved changes specified!'))
    with repo.wlock():
        backupvfs = vfsmod.vfs(repo.vfs.join(backupdir))
        for name in pats:
            shelf = ShelfDir(repo).get(name)
            if not shelf.exists():
                raise error.InputError(
                    _(b"shelved change '%s' not found") % name
                )
            shelf.movetobackup(backupvfs)
        cleanupoldbackups(repo)


def listcmd(ui, repo, pats, opts):
    """subcommand that displays the list of shelves"""
    pats = set(pats)
    width = 80
    if not ui.plain():
        width = ui.termwidth()
    namelabel = b'shelve.newest'
    ui.pager(b'shelve')
    shelf_dir = ShelfDir(repo)
    for mtime, name in shelf_dir.listshelves():
        if pats and name not in pats:
            continue
        ui.write(name, label=namelabel)
        namelabel = b'shelve.name'
        if ui.quiet:
            ui.write(b'\n')
            continue
        ui.write(b' ' * (16 - len(name)))
        used = 16
        date = dateutil.makedate(mtime)
        age = b'(%s)' % templatefilters.age(date, abbrev=True)
        ui.write(age, label=b'shelve.age')
        ui.write(b' ' * (12 - len(age)))
        used += 12
        with shelf_dir.get(name).open_patch() as fp:
            while True:
                line = fp.readline()
                if not line:
                    break
                if not line.startswith(b'#'):
                    desc = line.rstrip()
                    if ui.formatted():
                        desc = stringutil.ellipsis(desc, width - used)
                    ui.write(desc)
                    break
            ui.write(b'\n')
            if not (opts[b'patch'] or opts[b'stat']):
                continue
            difflines = fp.readlines()
            if opts[b'patch']:
                for chunk, label in patch.difflabel(iter, difflines):
                    ui.write(chunk, label=label)
            if opts[b'stat']:
                for chunk, label in patch.diffstatui(difflines, width=width):
                    ui.write(chunk, label=label)


def patchcmds(ui, repo, pats, opts):
    """subcommand that displays shelves"""
    shelf_dir = ShelfDir(repo)
    if len(pats) == 0:
        shelves = shelf_dir.listshelves()
        if not shelves:
            raise error.Abort(_(b"there are no shelves to show"))
        mtime, name = shelves[0]
        pats = [name]

    for shelfname in pats:
        if not shelf_dir.get(shelfname).exists():
            raise error.Abort(_(b"cannot find shelf %s") % shelfname)

    listcmd(ui, repo, pats, opts)


def checkparents(repo, state):
    """check parent while resuming an unshelve"""
    if state.parents != repo.dirstate.parents():
        raise error.Abort(
            _(b'working directory parents do not match unshelve state')
        )


def _loadshelvedstate(ui, repo, opts):
    try:
        state = shelvedstate.load(repo)
        if opts.get(b'keep') is None:
            opts[b'keep'] = state.keep
    except IOError as err:
        if err.errno != errno.ENOENT:
            raise
        cmdutil.wrongtooltocontinue(repo, _(b'unshelve'))
    except error.CorruptedState as err:
        ui.debug(pycompat.bytestr(err) + b'\n')
        if opts.get(b'continue'):
            msg = _(b'corrupted shelved state file')
            hint = _(
                b'please run hg unshelve --abort to abort unshelve '
                b'operation'
            )
            raise error.Abort(msg, hint=hint)
        elif opts.get(b'abort'):
            shelvedstate.clear(repo)
            raise error.Abort(
                _(
                    b'could not read shelved state file, your '
                    b'working copy may be in an unexpected state\n'
                    b'please update to some commit\n'
                )
            )
    return state


def unshelveabort(ui, repo, state):
    """subcommand that aborts an in-progress unshelve"""
    with repo.lock():
        try:
            checkparents(repo, state)

            merge.clean_update(state.pendingctx)
            if state.activebookmark and state.activebookmark in repo._bookmarks:
                bookmarks.activate(repo, state.activebookmark)
            mergefiles(ui, repo, state.wctx, state.pendingctx)
            if not phases.supportinternal(repo):
                repair.strip(
                    ui, repo, state.nodestoremove, backup=False, topic=b'shelve'
                )
        finally:
            shelvedstate.clear(repo)
            ui.warn(_(b"unshelve of '%s' aborted\n") % state.name)


def hgabortunshelve(ui, repo):
    """logic to abort unshelve using 'hg abort'"""
    with repo.wlock():
        state = _loadshelvedstate(ui, repo, {b'abort': True})
        return unshelveabort(ui, repo, state)


def mergefiles(ui, repo, wctx, shelvectx):
    """updates to wctx and merges the changes from shelvectx into the
    dirstate."""
    with ui.configoverride({(b'ui', b'quiet'): True}):
        hg.update(repo, wctx.node())
        ui.pushbuffer(True)
        cmdutil.revert(ui, repo, shelvectx)
        ui.popbuffer()


def restorebranch(ui, repo, branchtorestore):
    if branchtorestore and branchtorestore != repo.dirstate.branch():
        repo.dirstate.setbranch(branchtorestore)
        ui.status(
            _(b'marked working directory as branch %s\n') % branchtorestore
        )


def unshelvecleanup(ui, repo, name, opts):
    """remove related files after an unshelve"""
    if not opts.get(b'keep'):
        backupvfs = vfsmod.vfs(repo.vfs.join(backupdir))
        ShelfDir(repo).get(name).movetobackup(backupvfs)
        cleanupoldbackups(repo)


def unshelvecontinue(ui, repo, state, opts):
    """subcommand to continue an in-progress unshelve"""
    # We're finishing off a merge. First parent is our original
    # parent, second is the temporary "fake" commit we're unshelving.
    interactive = state.interactive
    basename = state.name
    with repo.lock():
        checkparents(repo, state)
        ms = mergestatemod.mergestate.read(repo)
        if ms.unresolvedcount():
            raise error.Abort(
                _(b"unresolved conflicts, can't continue"),
                hint=_(b"see 'hg resolve', then 'hg unshelve --continue'"),
            )

        shelvectx = repo[state.parents[1]]
        pendingctx = state.pendingctx

        with repo.dirstate.parentchange():
            repo.setparents(state.pendingctx.node(), nullid)
            repo.dirstate.write(repo.currenttransaction())

        targetphase = phases.internal
        if not phases.supportinternal(repo):
            targetphase = phases.secret
        overrides = {(b'phases', b'new-commit'): targetphase}
        with repo.ui.configoverride(overrides, b'unshelve'):
            with repo.dirstate.parentchange():
                repo.setparents(state.parents[0], nullid)
                newnode, ispartialunshelve = _createunshelvectx(
                    ui, repo, shelvectx, basename, interactive, opts
                )

        if newnode is None:
            shelvectx = state.pendingctx
            msg = _(
                b'note: unshelved changes already existed '
                b'in the working copy\n'
            )
            ui.status(msg)
        else:
            # only strip the shelvectx if we produced one
            state.nodestoremove.append(newnode)
            shelvectx = repo[newnode]

        merge.update(pendingctx)
        mergefiles(ui, repo, state.wctx, shelvectx)
        restorebranch(ui, repo, state.branchtorestore)

        if not phases.supportinternal(repo):
            repair.strip(
                ui, repo, state.nodestoremove, backup=False, topic=b'shelve'
            )
        shelvedstate.clear(repo)
        if not ispartialunshelve:
            unshelvecleanup(ui, repo, state.name, opts)
        _restoreactivebookmark(repo, state.activebookmark)
        ui.status(_(b"unshelve of '%s' complete\n") % state.name)


def hgcontinueunshelve(ui, repo):
    """logic to resume unshelve using 'hg continue'"""
    with repo.wlock():
        state = _loadshelvedstate(ui, repo, {b'continue': True})
        return unshelvecontinue(ui, repo, state, {b'keep': state.keep})


def _commitworkingcopychanges(ui, repo, opts, tmpwctx):
    """Temporarily commit working copy changes before moving unshelve commit"""
    # Store pending changes in a commit and remember added in case a shelve
    # contains unknown files that are part of the pending change
    s = repo.status()
    addedbefore = frozenset(s.added)
    if not (s.modified or s.added or s.removed):
        return tmpwctx, addedbefore
    ui.status(
        _(
            b"temporarily committing pending changes "
            b"(restore with 'hg unshelve --abort')\n"
        )
    )
    extra = {b'internal': b'shelve'}
    commitfunc = getcommitfunc(extra=extra, interactive=False, editor=False)
    tempopts = {}
    tempopts[b'message'] = b"pending changes temporary commit"
    tempopts[b'date'] = opts.get(b'date')
    with ui.configoverride({(b'ui', b'quiet'): True}):
        node = cmdutil.commit(ui, repo, commitfunc, [], tempopts)
    tmpwctx = repo[node]
    return tmpwctx, addedbefore


def _unshelverestorecommit(ui, repo, tr, basename):
    """Recreate commit in the repository during the unshelve"""
    repo = repo.unfiltered()
    node = None
    shelf = ShelfDir(repo).get(basename)
    if shelf.hasinfo():
        node = shelf.readinfo()[b'node']
    if node is None or node not in repo:
        with ui.configoverride({(b'ui', b'quiet'): True}):
            shelvectx = shelf.applybundle(repo, tr)
        # We might not strip the unbundled changeset, so we should keep track of
        # the unshelve node in case we need to reuse it (eg: unshelve --keep)
        if node is None:
            info = {b'node': hex(shelvectx.node())}
            shelf.writeinfo(info)
    else:
        shelvectx = repo[node]

    return repo, shelvectx


def _createunshelvectx(ui, repo, shelvectx, basename, interactive, opts):
    """Handles the creation of the unshelve commit and updates the shelve
    if it was partially unshelved.

    If interactive is:

    * False: Commits all the changes in the working directory.
    * True: Prompts the user to select changes to unshelve and commit them.
      Update the shelve with remaining changes.

    Returns the node of the new commit formed and a bool indicating whether
    the shelve was partially unshelved. Creates a commit ctx to unshelve
    interactively or non-interactively.

    In interactive mode the user might want to unshelve only some of the
    stored changes, so we create two commits: one with the changes requested
    now, while the rest are shelved again for later.

    Here we return both the newnode just created and a bool indicating
    whether the shelve is partly or completely done.
    """
940 opts[b'message'] = shelvectx.description()
940 opts[b'message'] = shelvectx.description()
941 opts[b'interactive-unshelve'] = True
941 opts[b'interactive-unshelve'] = True
942 pats = []
942 pats = []
943 if not interactive:
943 if not interactive:
944 newnode = repo.commit(
944 newnode = repo.commit(
945 text=shelvectx.description(),
945 text=shelvectx.description(),
946 extra=shelvectx.extra(),
946 extra=shelvectx.extra(),
947 user=shelvectx.user(),
947 user=shelvectx.user(),
948 date=shelvectx.date(),
948 date=shelvectx.date(),
949 )
949 )
950 return newnode, False
950 return newnode, False
951
951
952 commitfunc = getcommitfunc(shelvectx.extra(), interactive=True, editor=True)
952 commitfunc = getcommitfunc(shelvectx.extra(), interactive=True, editor=True)
953 newnode = cmdutil.dorecord(
953 newnode = cmdutil.dorecord(
954 ui,
954 ui,
955 repo,
955 repo,
956 commitfunc,
956 commitfunc,
957 None,
957 None,
958 False,
958 False,
959 cmdutil.recordfilter,
959 cmdutil.recordfilter,
960 *pats,
960 *pats,
961 **pycompat.strkwargs(opts)
961 **pycompat.strkwargs(opts)
962 )
962 )
963 snode = repo.commit(
963 snode = repo.commit(
964 text=shelvectx.description(),
964 text=shelvectx.description(),
965 extra=shelvectx.extra(),
965 extra=shelvectx.extra(),
966 user=shelvectx.user(),
966 user=shelvectx.user(),
967 )
967 )
968 if snode:
968 if snode:
969 m = scmutil.matchfiles(repo, repo[snode].files())
969 m = scmutil.matchfiles(repo, repo[snode].files())
970 _shelvecreatedcommit(repo, snode, basename, m)
970 _shelvecreatedcommit(repo, snode, basename, m)
971
971
972 return newnode, bool(snode)
972 return newnode, bool(snode)
973
973
974
974
975 def _rebaserestoredcommit(
def _rebaserestoredcommit(
    ui,
    repo,
    opts,
    tr,
    oldtiprev,
    basename,
    pctx,
    tmpwctx,
    shelvectx,
    branchtorestore,
    activebookmark,
):
    """Rebase restored commit from its original location to a destination"""
    # If the shelve is not immediately on top of the commit
    # we'll be merging with, rebase it to be on top.
    interactive = opts.get(b'interactive')
    if tmpwctx.node() == shelvectx.p1().node() and not interactive:
        # We don't skip in interactive mode because the user might want to
        # unshelve only certain changes.
        return shelvectx, False

    overrides = {
        (b'ui', b'forcemerge'): opts.get(b'tool', b''),
        (b'phases', b'new-commit'): phases.secret,
    }
    with repo.ui.configoverride(overrides, b'unshelve'):
        ui.status(_(b'rebasing shelved changes\n'))
        stats = merge.graft(
            repo,
            shelvectx,
            labels=[b'working-copy', b'shelve'],
            keepconflictparent=True,
        )
        if stats.unresolvedcount:
            tr.close()

            nodestoremove = [
                repo.changelog.node(rev)
                for rev in pycompat.xrange(oldtiprev, len(repo))
            ]
            shelvedstate.save(
                repo,
                basename,
                pctx,
                tmpwctx,
                nodestoremove,
                branchtorestore,
                opts.get(b'keep'),
                activebookmark,
                interactive,
            )
            raise error.ConflictResolutionRequired(b'unshelve')

        with repo.dirstate.parentchange():
            repo.setparents(tmpwctx.node(), nullid)
            newnode, ispartialunshelve = _createunshelvectx(
                ui, repo, shelvectx, basename, interactive, opts
            )

        if newnode is None:
            shelvectx = tmpwctx
            msg = _(
                b'note: unshelved changes already existed '
                b'in the working copy\n'
            )
            ui.status(msg)
        else:
            shelvectx = repo[newnode]
            merge.update(tmpwctx)

    return shelvectx, ispartialunshelve

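# Editor's note (hedged): on the conflict path above, the open transaction
# is closed first so the partially rebased commits survive, the unshelve
# state is written to disk, and ConflictResolutionRequired aborts the
# command; a later `hg unshelve --continue` resumes from that state file.
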
def _forgetunknownfiles(repo, shelvectx, addedbefore):
    # Forget files that were unknown both before the shelve was created
    # and before the unshelve started, but that are now reported as added.
    shelveunknown = shelvectx.extra().get(b'shelve_unknown')
    if not shelveunknown:
        return
    shelveunknown = frozenset(shelveunknown.split(b'\0'))
    addedafter = frozenset(repo.status().added)
    toforget = (addedafter & shelveunknown) - addedbefore
    repo[None].forget(toforget)

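# Editor's sketch (hypothetical file names, not part of this change): the
# set algebra used by _forgetunknownfiles() above.
def _demo_forget_selection():
    """
    >>> shelveunknown = frozenset([b'scratch.txt', b'notes.txt'])
    >>> addedbefore = frozenset([b'notes.txt'])
    >>> addedafter = frozenset([b'scratch.txt', b'notes.txt', b'new.py'])
    >>> sorted((addedafter & shelveunknown) - addedbefore)
    [b'scratch.txt']
    """
    # Only scratch.txt is forgotten: it was unknown when the shelf was
    # made and only became "added" as a side effect of unshelving.
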
def _finishunshelve(repo, oldtiprev, tr, activebookmark):
    _restoreactivebookmark(repo, activebookmark)
    # Aborting the transaction will strip all the commits for us, but it
    # doesn't update the in-memory structures, so addchangegroup hooks
    # still fire and try to operate on the missing commits.
    # Clean up manually to prevent this.
    repo.unfiltered().changelog.strip(oldtiprev, tr)
    _aborttransaction(repo, tr)

def _checkunshelveuntrackedproblems(ui, repo, shelvectx):
    """Check for potential problems that may result from the working
    copy having untracked changes."""
    wcdeleted = set(repo.status().deleted)
    shelvetouched = set(shelvectx.files())
    intersection = wcdeleted.intersection(shelvetouched)
    if intersection:
        m = _(b"shelved change touches missing files")
        hint = _(b"run hg status to see which files are missing")
        raise error.Abort(m, hint=hint)

def unshelvecmd(ui, repo, *shelved, **opts):
    opts = pycompat.byteskwargs(opts)
    abortf = opts.get(b'abort')
    continuef = opts.get(b'continue')
    interactive = opts.get(b'interactive')
    if not abortf and not continuef:
        cmdutil.checkunfinished(repo)
    shelved = list(shelved)
    if opts.get(b"name"):
        shelved.append(opts[b"name"])

    if interactive and opts.get(b'keep'):
        raise error.InputError(
            _(b'--keep on --interactive is not yet supported')
        )
    if abortf or continuef:
        if abortf and continuef:
            raise error.InputError(_(b'cannot use both abort and continue'))
        if shelved:
            raise error.InputError(
                _(
                    b'cannot combine abort/continue with '
                    b'naming a shelved change'
                )
            )
        if abortf and opts.get(b'tool', False):
            ui.warn(_(b'tool option will be ignored\n'))

        state = _loadshelvedstate(ui, repo, opts)
        if abortf:
            return unshelveabort(ui, repo, state)
        elif continuef and interactive:
            raise error.InputError(
                _(b'cannot use both continue and interactive')
            )
        elif continuef:
            return unshelvecontinue(ui, repo, state, opts)
    elif len(shelved) > 1:
        raise error.InputError(_(b'can only unshelve one change at a time'))
    elif not shelved:
        shelved = ShelfDir(repo).listshelves()
        if not shelved:
            raise error.StateError(_(b'no shelved changes to apply!'))
        basename = shelved[0][1]
        ui.status(_(b"unshelving change '%s'\n") % basename)
    else:
        basename = shelved[0]

    if not ShelfDir(repo).get(basename).exists():
        raise error.InputError(_(b"shelved change '%s' not found") % basename)

    return _dounshelve(ui, repo, basename, opts)

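# Editor's note (hedged): the option handling above corresponds to these
# command lines:
#   hg unshelve                # apply the most recently created shelf
#   hg unshelve NAME           # apply a specific shelf
#   hg unshelve --continue     # resume after resolving conflicts
#   hg unshelve --abort        # give up and restore the pre-unshelve state
# --abort/--continue exclude naming a shelf, and --keep is rejected in
# combination with --interactive.
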
def _dounshelve(ui, repo, basename, opts):
    repo = repo.unfiltered()
    lock = tr = None
    try:
        lock = repo.lock()
        tr = repo.transaction(b'unshelve', report=lambda x: None)
        oldtiprev = len(repo)

        pctx = repo[b'.']
        tmpwctx = pctx
        # The goal is to have a commit structure like so:
        # ...-> pctx -> tmpwctx -> shelvectx
        # where tmpwctx is an optional commit with the user's pending changes
        # and shelvectx is the unshelved changes. Then we merge it all down
        # to the original pctx.

        activebookmark = _backupactivebookmark(repo)
        tmpwctx, addedbefore = _commitworkingcopychanges(
            ui, repo, opts, tmpwctx
        )
        repo, shelvectx = _unshelverestorecommit(ui, repo, tr, basename)
        _checkunshelveuntrackedproblems(ui, repo, shelvectx)
        branchtorestore = b''
        if shelvectx.branch() != shelvectx.p1().branch():
            branchtorestore = shelvectx.branch()

        shelvectx, ispartialunshelve = _rebaserestoredcommit(
            ui,
            repo,
            opts,
            tr,
            oldtiprev,
            basename,
            pctx,
            tmpwctx,
            shelvectx,
            branchtorestore,
            activebookmark,
        )
        overrides = {(b'ui', b'forcemerge'): opts.get(b'tool', b'')}
        with ui.configoverride(overrides, b'unshelve'):
            mergefiles(ui, repo, pctx, shelvectx)
        restorebranch(ui, repo, branchtorestore)
        shelvedstate.clear(repo)
        _finishunshelve(repo, oldtiprev, tr, activebookmark)
        _forgetunknownfiles(repo, shelvectx, addedbefore)
        if not ispartialunshelve:
            unshelvecleanup(ui, repo, basename, opts)
    finally:
        if tr:
            tr.release()
        lockmod.release(lock)
@@ -1,566 +1,566 @@
# Copyright (C) 2004, 2005 Canonical Ltd
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation; either version 2 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, see <http://www.gnu.org/licenses/>.

# mbp: "you know that thing where cvs gives you conflict markers?"
# s: "i hate that."

from __future__ import absolute_import

from .i18n import _
-from .node import nullid
+from .node import nullrev
from . import (
    error,
    mdiff,
    pycompat,
    util,
)
from .utils import stringutil


class CantReprocessAndShowBase(Exception):
    pass

def intersect(ra, rb):
    """Given two ranges return the range where they intersect or None.

    >>> intersect((0, 10), (0, 6))
    (0, 6)
    >>> intersect((0, 10), (5, 15))
    (5, 10)
    >>> intersect((0, 10), (10, 15))
    >>> intersect((0, 9), (10, 15))
    >>> intersect((0, 9), (7, 15))
    (7, 9)
    """
    assert ra[0] <= ra[1]
    assert rb[0] <= rb[1]

    sa = max(ra[0], rb[0])
    sb = min(ra[1], rb[1])
    if sa < sb:
        return sa, sb
    else:
        return None

def compare_range(a, astart, aend, b, bstart, bend):
    """Compare a[astart:aend] == b[bstart:bend], without slicing."""
    if (aend - astart) != (bend - bstart):
        return False
    for ia, ib in zip(
        pycompat.xrange(astart, aend), pycompat.xrange(bstart, bend)
    ):
        if a[ia] != b[ib]:
            return False
    else:
        return True

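# Editor's sketch (hypothetical helper, not part of this change): the
# window comparison above, without materializing the slices.
def _demo_compare_range():
    """
    >>> a = [b'x', b'y', b'z']
    >>> b = [b'w', b'x', b'y']
    >>> compare_range(a, 0, 2, b, 1, 3)   # a[0:2] == b[1:3]
    True
    >>> compare_range(a, 0, 2, b, 0, 2)   # a[0:2] != b[0:2]
    False
    """
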
class Merge3Text(object):
    """3-way merge of texts.

    Given strings BASE, OTHER, THIS, tries to produce a combined text
    incorporating the changes from both BASE->OTHER and BASE->THIS."""

    def __init__(self, basetext, atext, btext, base=None, a=None, b=None):
        self.basetext = basetext
        self.atext = atext
        self.btext = btext
        if base is None:
            base = mdiff.splitnewlines(basetext)
        if a is None:
            a = mdiff.splitnewlines(atext)
        if b is None:
            b = mdiff.splitnewlines(btext)
        self.base = base
        self.a = a
        self.b = b

    def merge_lines(
        self,
        name_a=None,
        name_b=None,
        name_base=None,
        start_marker=b'<<<<<<<',
        mid_marker=b'=======',
        end_marker=b'>>>>>>>',
        base_marker=None,
        localorother=None,
        minimize=False,
    ):
        """Return merge in cvs-like form."""
        self.conflicts = False
        newline = b'\n'
        if len(self.a) > 0:
            if self.a[0].endswith(b'\r\n'):
                newline = b'\r\n'
            elif self.a[0].endswith(b'\r'):
                newline = b'\r'
        if name_a and start_marker:
            start_marker = start_marker + b' ' + name_a
        if name_b and end_marker:
            end_marker = end_marker + b' ' + name_b
        if name_base and base_marker:
            base_marker = base_marker + b' ' + name_base
        merge_regions = self.merge_regions()
        if minimize:
            merge_regions = self.minimize(merge_regions)
        for t in merge_regions:
            what = t[0]
            if what == b'unchanged':
                for i in range(t[1], t[2]):
                    yield self.base[i]
            elif what == b'a' or what == b'same':
                for i in range(t[1], t[2]):
                    yield self.a[i]
            elif what == b'b':
                for i in range(t[1], t[2]):
                    yield self.b[i]
            elif what == b'conflict':
                if localorother == b'local':
                    for i in range(t[3], t[4]):
                        yield self.a[i]
                elif localorother == b'other':
                    for i in range(t[5], t[6]):
                        yield self.b[i]
                else:
                    self.conflicts = True
                    if start_marker is not None:
                        yield start_marker + newline
                    for i in range(t[3], t[4]):
                        yield self.a[i]
                    if base_marker is not None:
                        yield base_marker + newline
                        for i in range(t[1], t[2]):
                            yield self.base[i]
                    if mid_marker is not None:
                        yield mid_marker + newline
                    for i in range(t[5], t[6]):
                        yield self.b[i]
                    if end_marker is not None:
                        yield end_marker + newline
            else:
                raise ValueError(what)

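    # Editor's illustration (hedged sketch, not part of this change):
    # merge_lines() is a generator, so joining its output both builds the
    # merged text and sets self.conflicts as a side effect.  For
    # base=b'1\n2\n3\n', this=b'1\nA\n3\n', other=b'1\nB\n3\n':
    #
    #   m3 = Merge3Text(b'1\n2\n3\n', b'1\nA\n3\n', b'1\nB\n3\n')
    #   b''.join(m3.merge_lines(name_a=b'local', name_b=b'other'))
    #
    # returns b'1\n<<<<<<< local\nA\n=======\nB\n>>>>>>> other\n3\n'
    # and leaves m3.conflicts == True, since both sides edited line 2.
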
    def merge_groups(self):
        """Yield sequence of line groups. Each one is a tuple:

        'unchanged', lines
            Lines unchanged from base

        'a', lines
            Lines taken from a

        'same', lines
            Lines taken from a (and equal to b)

        'b', lines
            Lines taken from b

        'conflict', base_lines, a_lines, b_lines
            Lines from base were changed to either a or b and conflict.
        """
        for t in self.merge_regions():
            what = t[0]
            if what == b'unchanged':
                yield what, self.base[t[1] : t[2]]
            elif what == b'a' or what == b'same':
                yield what, self.a[t[1] : t[2]]
            elif what == b'b':
                yield what, self.b[t[1] : t[2]]
            elif what == b'conflict':
                yield (
                    what,
                    self.base[t[1] : t[2]],
                    self.a[t[3] : t[4]],
                    self.b[t[5] : t[6]],
                )
            else:
                raise ValueError(what)

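    # Editor's illustration (hedged): with base b'a\nb\nc\nd\ne\n', one
    # side changing line 2 (to b'B\n') and the other changing line 4 (to
    # b'D\n'), merge_groups() yields
    #   (b'unchanged', [b'a\n']), (b'a', [b'B\n']),
    #   (b'unchanged', [b'c\n']), (b'b', [b'D\n']),
    #   (b'unchanged', [b'e\n'])
    # -- the two edits merge cleanly because an unchanged sync line
    # separates them.
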
    def merge_regions(self):
        """Return sequences of matching and conflicting regions.

        This returns tuples, where the first value says what kind we
        have:

        'unchanged', start, end
            Take a region of base[start:end]

        'same', astart, aend
            b and a are different from base but give the same result

        'a', start, end
            Non-clashing insertion from a[start:end]

        'b', start, end
            Non-clashing insertion from b[start:end]

        'conflict', zstart, zend, astart, aend, bstart, bend
            Conflict between a and b, with z as common ancestor

        Method is as follows:

        The two sequences align only on regions which match the base
        and both descendants. These are found by doing a two-way diff
        of each one against the base, and then finding the
        intersections between those regions. These "sync regions"
        are by definition unchanged in both and easily dealt with.

        The regions in between can be in any of three cases:
        changed only in a, changed only in b, or conflicting.
        """

        # section a[0:ia] has been disposed of, etc
        iz = ia = ib = 0

        for region in self.find_sync_regions():
            zmatch, zend, amatch, aend, bmatch, bend = region
            # print 'match base [%d:%d]' % (zmatch, zend)

            matchlen = zend - zmatch
            assert matchlen >= 0
            assert matchlen == (aend - amatch)
            assert matchlen == (bend - bmatch)

            len_a = amatch - ia
            len_b = bmatch - ib
            len_base = zmatch - iz
            assert len_a >= 0
            assert len_b >= 0
            assert len_base >= 0

            # print 'unmatched a=%d, b=%d' % (len_a, len_b)

            if len_a or len_b:
                # try to avoid actually slicing the lists
                equal_a = compare_range(
                    self.a, ia, amatch, self.base, iz, zmatch
                )
                equal_b = compare_range(
                    self.b, ib, bmatch, self.base, iz, zmatch
                )
                same = compare_range(self.a, ia, amatch, self.b, ib, bmatch)

                if same:
                    yield b'same', ia, amatch
                elif equal_a and not equal_b:
                    yield b'b', ib, bmatch
                elif equal_b and not equal_a:
                    yield b'a', ia, amatch
                elif not equal_a and not equal_b:
                    yield b'conflict', iz, zmatch, ia, amatch, ib, bmatch
                else:
                    raise AssertionError(b"can't handle a=b=base but unmatched")

            ia = amatch
            ib = bmatch
            iz = zmatch

            # if the same part of the base was deleted on both sides
            # that's OK, we can just skip it.

            if matchlen > 0:
                assert ia == amatch
                assert ib == bmatch
                assert iz == zmatch

                yield b'unchanged', zmatch, zend
                iz = zend
                ia = aend
                ib = bend

    def minimize(self, merge_regions):
        """Trim conflict regions of lines where A and B sides match.

        Lines where both A and B have made the same changes at the beginning
        or the end of each merge region are eliminated from the conflict
        region and are instead considered the same.
        """
        for region in merge_regions:
            if region[0] != b"conflict":
                yield region
                continue
            # pytype thinks this tuple contains only 3 things, but
            # that's clearly not true because this code successfully
            # executes. It might be wise to rework merge_regions to be
            # some kind of attrs type.
            (
                issue,
                z1,
                z2,
                a1,
                a2,
                b1,
                b2,
            ) = region  # pytype: disable=bad-unpacking
            alen = a2 - a1
            blen = b2 - b1

            # find matches at the front
            ii = 0
            while (
                ii < alen and ii < blen and self.a[a1 + ii] == self.b[b1 + ii]
            ):
                ii += 1
            startmatches = ii

            # find matches at the end
            ii = 0
            while (
                ii < alen
                and ii < blen
                and self.a[a2 - ii - 1] == self.b[b2 - ii - 1]
            ):
                ii += 1
            endmatches = ii

            if startmatches > 0:
                yield b'same', a1, a1 + startmatches

            yield (
                b'conflict',
                z1,
                z2,
                a1 + startmatches,
                a2 - endmatches,
                b1 + startmatches,
                b2 - endmatches,
            )

            if endmatches > 0:
                yield b'same', a2 - endmatches, a2

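    # Editor's illustration (hedged): if both sides replaced base line
    # b'b\n' with a shared leading line plus one differing line, e.g.
    # a = [b'same\n', b'A\n'] and b = [b'same\n', b'B\n'], merge_regions()
    # reports a single two-line conflict; minimize() re-yields the common
    # prefix as (b'same', 0, 1) and narrows the conflict to the last line:
    # (b'conflict', 0, 1, 1, 2, 1, 2).
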
    def find_sync_regions(self):
        """Return a list of sync regions, where both descendants match the base.

        Generates a list of (base1, base2, a1, a2, b1, b2). There is
        always a zero-length sync region at the end of all the files.
        """

        ia = ib = 0
        amatches = mdiff.get_matching_blocks(self.basetext, self.atext)
        bmatches = mdiff.get_matching_blocks(self.basetext, self.btext)
        len_a = len(amatches)
        len_b = len(bmatches)

        sl = []

        while ia < len_a and ib < len_b:
            abase, amatch, alen = amatches[ia]
            bbase, bmatch, blen = bmatches[ib]

            # there is an unconflicted block at i; how long does it
            # extend? until whichever one ends earlier.
            i = intersect((abase, abase + alen), (bbase, bbase + blen))
            if i:
                intbase = i[0]
                intend = i[1]
                intlen = intend - intbase

                # found a match of base[i[0], i[1]]; this may be less than
                # the region that matches in either one
                assert intlen <= alen
                assert intlen <= blen
                assert abase <= intbase
                assert bbase <= intbase

                asub = amatch + (intbase - abase)
                bsub = bmatch + (intbase - bbase)
                aend = asub + intlen
                bend = bsub + intlen

                assert self.base[intbase:intend] == self.a[asub:aend], (
                    self.base[intbase:intend],
                    self.a[asub:aend],
                )

                assert self.base[intbase:intend] == self.b[bsub:bend]

                sl.append((intbase, intend, asub, aend, bsub, bend))

            # advance whichever one ends first in the base text
            if (abase + alen) < (bbase + blen):
                ia += 1
            else:
                ib += 1

        intbase = len(self.base)
        abase = len(self.a)
        bbase = len(self.b)
        sl.append((intbase, intbase, abase, abase, bbase, bbase))

        return sl


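# Editor's sketch (hypothetical helper, not part of this change): the sync
# regions for the clean-merge example sketched in merge_groups() above.
# Note the mandatory zero-length sentinel region at the end of the list.
def _demo_find_sync_regions():
    r"""
    >>> m3 = Merge3Text(b'a\nb\nc\nd\ne\n',
    ...                 b'a\nB\nc\nd\ne\n',
    ...                 b'a\nb\nc\nD\ne\n')
    >>> m3.find_sync_regions()
    [(0, 1, 0, 1, 0, 1), (2, 3, 2, 3, 2, 3), (4, 5, 4, 5, 4, 5), (5, 5, 5, 5, 5, 5)]
    """
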
def _verifytext(text, path, ui, opts):
    """verifies that text is non-binary (unless opts['text'] is passed,
    in which case we just warn)"""
    if stringutil.binary(text):
        msg = _(b"%s looks like a binary file.") % path
        if not opts.get('quiet'):
            ui.warn(_(b'warning: %s\n') % msg)
        if not opts.get('text'):
            raise error.Abort(msg)
    return text

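# Editor's note (hedged): stringutil.binary() treats data containing a
# NUL byte as binary, so input like b'foo\0bar' aborts the merge here
# unless the text override is given, while ordinary text passes through
# unchanged.
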
def _picklabels(defaults, overrides):
    if len(overrides) > 3:
        raise error.Abort(_(b"can only specify three labels."))
    result = defaults[:]
    for i, override in enumerate(overrides):
        result[i] = override
    return result

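# Editor's sketch (hypothetical helper): overrides replace the defaults
# positionally, so a single override only renames the local side.
def _demo_picklabels():
    """
    >>> _picklabels([b'local', b'other', None], [b'mine'])
    [b'mine', b'other', None]
    """
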
def is_not_null(ctx):
    if not util.safehasattr(ctx, "node"):
        return False
-    return ctx.node() != nullid
+    return ctx.rev() != nullrev

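# Editor's note on the change above (hedged): nullrev is the integer
# revision number -1, whereas nullid is a full-length hash of zero bytes.
# Comparing the integer revision is cheaper and does not depend on the
# hash width, which is presumably why this series prefers nullrev checks.
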
def _mergediff(m3, name_a, name_b, name_base):
    lines = []
    conflicts = False
    for group in m3.merge_groups():
        if group[0] == b'conflict':
            base_lines, a_lines, b_lines = group[1:]
            base_text = b''.join(base_lines)
            b_blocks = list(
                mdiff.allblocks(
                    base_text,
                    b''.join(b_lines),
                    lines1=base_lines,
                    lines2=b_lines,
                )
            )
            a_blocks = list(
                mdiff.allblocks(
                    base_text,
                    b''.join(a_lines),
                    lines1=base_lines,
                    lines2=a_lines,
                )
            )

            def matching_lines(blocks):
                return sum(
                    block[1] - block[0]
                    for block, kind in blocks
                    if kind == b'='
                )

            def diff_lines(blocks, lines1, lines2):
                for block, kind in blocks:
                    if kind == b'=':
                        for line in lines1[block[0] : block[1]]:
                            yield b' ' + line
                    else:
                        for line in lines1[block[0] : block[1]]:
                            yield b'-' + line
                        for line in lines2[block[2] : block[3]]:
                            yield b'+' + line

            lines.append(b"<<<<<<<\n")
            if matching_lines(a_blocks) < matching_lines(b_blocks):
                lines.append(b"======= %s\n" % name_a)
                lines.extend(a_lines)
                lines.append(b"------- %s\n" % name_base)
                lines.append(b"+++++++ %s\n" % name_b)
                lines.extend(diff_lines(b_blocks, base_lines, b_lines))
            else:
                lines.append(b"------- %s\n" % name_base)
                lines.append(b"+++++++ %s\n" % name_a)
                lines.extend(diff_lines(a_blocks, base_lines, a_lines))
                lines.append(b"======= %s\n" % name_b)
                lines.extend(b_lines)
            lines.append(b">>>>>>>\n")
            conflicts = True
        else:
            lines.extend(group[1])
    return lines, conflicts

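# Editor's note (hedged): in the conflict rendering above, the side with
# fewer lines matching the base -- i.e. the side that diverged more -- is
# printed as a plain snapshot, while the side closer to the base is shown
# as a diff against it, which is usually the more compact presentation.
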
def simplemerge(ui, localctx, basectx, otherctx, **opts):
    """Performs the simplemerge algorithm.

    The merged result is written into `localctx`.
    """

    def readctx(ctx):
        # Merges were always run in the working copy before, which means
        # they used decoded data, if the user defined any repository
        # filters.
        #
        # Maintain that behavior today for BC, though perhaps in the future
        # it'd be worth considering whether merging encoded data (what the
        # repository usually sees) might be more useful.
        return _verifytext(ctx.decodeddata(), ctx.path(), ui, opts)

    mode = opts.get('mode', b'merge')
    name_a, name_b, name_base = None, None, None
    if mode != b'union':
        name_a, name_b, name_base = _picklabels(
            [localctx.path(), otherctx.path(), None], opts.get('label', [])
        )

    try:
        localtext = readctx(localctx)
        basetext = readctx(basectx)
        othertext = readctx(otherctx)
    except error.Abort:
        return 1

    m3 = Merge3Text(basetext, localtext, othertext)
    extrakwargs = {
        b"localorother": opts.get("localorother", None),
        b'minimize': True,
    }
    if mode == b'union':
        extrakwargs[b'start_marker'] = None
        extrakwargs[b'mid_marker'] = None
        extrakwargs[b'end_marker'] = None
    elif name_base is not None:
        extrakwargs[b'base_marker'] = b'|||||||'
        extrakwargs[b'name_base'] = name_base
        extrakwargs[b'minimize'] = False

    if mode == b'mergediff':
        lines, conflicts = _mergediff(m3, name_a, name_b, name_base)
    else:
        lines = list(
            m3.merge_lines(
                name_a=name_a, name_b=name_b, **pycompat.strkwargs(extrakwargs)
            )
        )
        conflicts = m3.conflicts

    # merge flags if necessary
    flags = localctx.flags()
    localflags = set(pycompat.iterbytestr(flags))
    otherflags = set(pycompat.iterbytestr(otherctx.flags()))
    if is_not_null(basectx) and localflags != otherflags:
        baseflags = set(pycompat.iterbytestr(basectx.flags()))
        commonflags = localflags & otherflags
        addedflags = (localflags ^ otherflags) - baseflags
        flags = b''.join(sorted(commonflags | addedflags))

    mergedtext = b''.join(lines)
    if opts.get('print'):
        ui.fout.write(mergedtext)
    else:
        localctx.write(mergedtext, flags)

    if conflicts and mode != b'union':
        return 1
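

# Editor's sketch (hypothetical helper) of the flag-merge rule above: keep
# the flags both sides agree on, plus flags newly added on either side; a
# flag that one side removed relative to the base drops out.
def _demo_flagmerge():
    """
    >>> localflags, otherflags, baseflags = {b'x'}, set(), {b'x'}
    >>> sorted((localflags & otherflags) | ((localflags ^ otherflags) - baseflags))
    []
    """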