@@ -0,0 +1,115 @@
Bid merge is a merge algorithm introduced in Mercurial 3.0 for dealing with
complicated merges.

Bid merge is controlled by the `merge.preferancestor` configuration option. The
default is `merge.preferancestor=*`, which enables bid merge. Mercurial will
perform a bid merge in the cases where a merge otherwise would emit a
"note: using X as ancestor of X and X" message.
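
For reference, that default corresponds to the following configuration file
entry (shown here as a plain hgrc excerpt)::

    [merge]
    preferancestor = *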

Problem it is solving
=====================

Mercurial's core merge algorithm is the traditional "three-way merge". This
algorithm combines all the changes in two changesets relative to a common
ancestor. But with complex DAGs, it is often possible to have more than one
"best" common ancestor, with no easy way to distinguish between them.

For example, C and D have two common ancestors in the following graph::

    C   D
    |\ /|
    | x |
    |/ \|
    A   B
     \ /
      R

Mercurial used to arbitrarily choose the first of these, which can result in
various issues:

* unexpected hard 3-way merges that would have been completely trivial if
  another ancestor had been used

* conflicts that have already been resolved may reappear

* changes that have been reversed can silently oscillate

One common problem is a merge which with the "right" ancestor would be trivial
to resolve because only one side changed. With another ancestor where the same
lines differ, it becomes an annoying 3-way merge.

Other systems like Git have attacked some of these problems with a so-called
"recursive" merge strategy, which internally merges all the possible ancestors
to produce a single "virtual" ancestor to merge against. This is awkward, as
the internal merge itself may involve conflicts (and possibly even multiple
levels of recursion), which either requires choosing a conflict disposition
(e.g. always choose the local version) or exposing the user to extremely
confusing merge prompts for old revisions. Generating the virtual merge also
potentially involves invoking filters and extensions.

Concept
=======

(Bid merge is pretty much the same as Consensus merge.)

Bid merge is a strategy that attempts to sensibly combine the results of the
multiple possible three-way merges directly, without producing a virtual
ancestor. The basic idea is that for each ancestor, we perform a top-level
manifest merge and generate a list of proposed actions, which we consider
"bids". We then hold an "auction" among all the bids for each file and pick the
most favourable. Some files might be trivial to merge with one ancestor, other
files with another ancestor.

The most obvious advantage of considering multiple ancestors is the case where
some of the bids for a file are a "real" (interactive) merge but where one or
more bids just take one of the parent revisions. A bid for just taking an
existing revision is very simple and low risk and is an obvious winner.

The auction algorithm for merging the bids is so far very simple (a sketch
follows this list):

* If there is consensus from all the ancestors, there is no doubt what to do. A
  clever result will be indistinguishable from just picking a random bid. The
  consensus case is thus not only trivial, it is also already handled
  perfectly.

* If a "keep local" or "get from other" action is an option (and there is only
  one such option), just do it.

* If the auction doesn't have a single clear winner, pick one of the bids
  "randomly" - just as it would have done if only one ancestor was considered.

This meta merge algorithm has room for future improvements, especially for
doing better than picking a random bid.
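
A minimal, illustrative sketch of that auction rule follows. It is not
Mercurial's actual implementation; the bid representation and the 'keep'/'get'
labels are hypothetical stand-ins for the real manifest-merge actions::

    def pick_bid(bids):
        """Pick one action for a file from per-ancestor proposals.

        ``bids`` is a non-empty list with one proposed (action, args)
        tuple per considered ancestor.
        """
        # Consensus: every ancestor proposes the same action.
        if all(bid == bids[0] for bid in bids):
            return bids[0]
        # A single bid that just keeps the local file or gets the other
        # parent's file is trivial and low risk, so it wins.
        trivial = [bid for bid in bids if bid[0] in ('keep', 'get')]
        if len(trivial) == 1:
            return trivial[0]
        # No clear winner: fall back to an arbitrary bid, just as a
        # single-ancestor merge would have done.
        return bids[0]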

Some observations
=================

Experience with bid merge shows that many merges that actually have a very
simple solution (because only one side changed) can only be solved efficiently
when we start looking at file content in filemerge ... and it thus also
requires all ancestors to be passed to filemerge. That is because Mercurial
includes the history in filelog hashes. A file with changes that end up not
changing the content (could be change + backout, graft + merge, or criss-cross
merges) still shows up as a changed file to manifestmerge. (The git data model
has an advantage here when it uses hashes of content without history.) One way
to handle that would be to refactor manifestmerge, mergestate/resolve and
filemerge so they become more of the same thing.

There are also cases where different conflicting chunks could benefit from
using multiple ancestors in filemerge - but that will require merge tools with
fancy support for using multiple ancestors in 3+-way merge. That is left as an
exercise for another day. That seems to be a case where "recursive merge" has
an advantage.

The current manifest merge actions are very low level, imperative and not
symmetrical. They do not only describe how two manifests should be merged, they
also describe a strategy for changing a context from a state where it is one of
the parents to the state where it is the result of the merge with the other
parent. I can imagine that manifestmerge could be simplified (and made more
suitable for in-memory merges) by separating the abstract merge actions from
the actual file system operation actions. A more clever wcontext could perhaps
also take care of some of the branchmerge special cases.

We assume that the definition of Mercurial manifest merge will make sure that
exactly the same files will be produced, no matter which ancestor is used. That
assumption might be wrong in very rare cases, but that really is not a problem.
@@ -0,0 +1,327 @@
# metadata.py -- code related to various metadata computation and access.
#
# Copyright 2019 Google, Inc <martinvonz@google.com>
# Copyright 2020 Pierre-Yves David <pierre-yves.david@octobus.net>
#
# This software may be used and distributed according to the terms of the
# GNU General Public License version 2 or any later version.
from __future__ import absolute_import, print_function

import multiprocessing

from . import (
    error,
    node,
    pycompat,
    util,
)

from .revlogutils import (
    flagutil as sidedataflag,
    sidedata as sidedatamod,
)


def computechangesetfilesadded(ctx):
    """return the list of files added in a changeset"""
    added = []
    for f in ctx.files():
        if not any(f in p for p in ctx.parents()):
            added.append(f)
    return added


def get_removal_filter(ctx, x=None):
    """return a function to detect files "wrongly" detected as `removed`

    When a file is removed relative to p1 in a merge, this
    function determines whether the absence is due to a
    deletion from a parent, or whether the merge commit
    itself deletes the file. We decide this by doing a
    simplified three way merge of the manifest entry for
    the file. There are two ways we decide the merge
    itself didn't delete a file:
    - neither parent (nor the merge) contains the file
    - exactly one parent contains the file, and that
      parent has the same filelog entry as the merge
      ancestor (or all of them if there are two). In other
      words, that parent left the file unchanged while the
      other one deleted it.
    One way to think about this is that deleting a file is
    similar to emptying it, so the list of changed files
    should be similar either way. The computation
    described above is not done directly in _filecommit
    when creating the list of changed files, however
    it does something very similar by comparing filelog
    nodes.
    """

    if x is not None:
        p1, p2, m1, m2 = x
    else:
        p1 = ctx.p1()
        p2 = ctx.p2()
        m1 = p1.manifest()
        m2 = p2.manifest()

    @util.cachefunc
    def mas():
        p1n = p1.node()
        p2n = p2.node()
        cahs = ctx.repo().changelog.commonancestorsheads(p1n, p2n)
        if not cahs:
            cahs = [node.nullrev]
        return [ctx.repo()[r].manifest() for r in cahs]

    def deletionfromparent(f):
        if f in m1:
            return f not in m2 and all(
                f in ma and ma.find(f) == m1.find(f) for ma in mas()
            )
        elif f in m2:
            return all(f in ma and ma.find(f) == m2.find(f) for ma in mas())
        else:
            return True

    return deletionfromparent


def computechangesetfilesremoved(ctx):
    """return the list of files removed in a changeset"""
    removed = []
    for f in ctx.files():
        if f not in ctx:
            removed.append(f)
    if removed:
        rf = get_removal_filter(ctx)
        removed = [r for r in removed if not rf(r)]
    return removed


def computechangesetcopies(ctx):
    """return the copies data for a changeset

    The copies data are returned as a pair of dictionaries (p1copies, p2copies).

    Each dictionary is in the form: `{newname: oldname}`
    """
    p1copies = {}
    p2copies = {}
    p1 = ctx.p1()
    p2 = ctx.p2()
    narrowmatch = ctx._repo.narrowmatch()
    for dst in ctx.files():
        if not narrowmatch(dst) or dst not in ctx:
            continue
        copied = ctx[dst].renamed()
        if not copied:
            continue
        src, srcnode = copied
        if src in p1 and p1[src].filenode() == srcnode:
            p1copies[dst] = src
        elif src in p2 and p2[src].filenode() == srcnode:
            p2copies[dst] = src
    return p1copies, p2copies


def encodecopies(files, copies):
    items = []
    for i, dst in enumerate(files):
        if dst in copies:
            items.append(b'%d\0%s' % (i, copies[dst]))
    if len(items) != len(copies):
        raise error.ProgrammingError(
            b'some copy targets missing from file list'
        )
    return b"\n".join(items)


def decodecopies(files, data):
    try:
        copies = {}
        if not data:
            return copies
        for l in data.split(b'\n'):
            strindex, src = l.split(b'\0')
            i = int(strindex)
            dst = files[i]
            copies[dst] = src
        return copies
    except (ValueError, IndexError):
        # Perhaps someone had chosen the same key name (e.g. "p1copies") and
        # used different syntax for the value.
        return None


def encodefileindices(files, subset):
    subset = set(subset)
    indices = []
    for i, f in enumerate(files):
        if f in subset:
            indices.append(b'%d' % i)
    return b'\n'.join(indices)


def decodefileindices(files, data):
    try:
        subset = []
        if not data:
            return subset
        for strindex in data.split(b'\n'):
            i = int(strindex)
            if i < 0 or i >= len(files):
                return None
            subset.append(files[i])
        return subset
    except (ValueError, IndexError):
        # Perhaps someone had chosen the same key name (e.g. "added") and
        # used different syntax for the value.
        return None


def _getsidedata(srcrepo, rev):
    ctx = srcrepo[rev]
    filescopies = computechangesetcopies(ctx)
    filesadded = computechangesetfilesadded(ctx)
    filesremoved = computechangesetfilesremoved(ctx)
    sidedata = {}
    if any([filescopies, filesadded, filesremoved]):
        sortedfiles = sorted(ctx.files())
        p1copies, p2copies = filescopies
        p1copies = encodecopies(sortedfiles, p1copies)
        p2copies = encodecopies(sortedfiles, p2copies)
        filesadded = encodefileindices(sortedfiles, filesadded)
        filesremoved = encodefileindices(sortedfiles, filesremoved)
        if p1copies:
            sidedata[sidedatamod.SD_P1COPIES] = p1copies
        if p2copies:
            sidedata[sidedatamod.SD_P2COPIES] = p2copies
        if filesadded:
            sidedata[sidedatamod.SD_FILESADDED] = filesadded
        if filesremoved:
            sidedata[sidedatamod.SD_FILESREMOVED] = filesremoved
    return sidedata


def getsidedataadder(srcrepo, destrepo):
    use_w = srcrepo.ui.configbool(b'experimental', b'worker.repository-upgrade')
    if pycompat.iswindows or not use_w:
        return _get_simple_sidedata_adder(srcrepo, destrepo)
    else:
        return _get_worker_sidedata_adder(srcrepo, destrepo)


def _sidedata_worker(srcrepo, revs_queue, sidedata_queue, tokens):
    """The function used by worker processes precomputing sidedata

    It reads an input queue containing revision numbers.
    It writes to an output queue containing (rev, <sidedata-map>) pairs.

    The `None` input value is used as a stop signal.

    The `tokens` semaphore is used to avoid having too many unprocessed
    entries. The workers need to acquire one token before fetching a task.
    They will be released by the consumer of the produced data.
    """
    tokens.acquire()
    rev = revs_queue.get()
    while rev is not None:
        data = _getsidedata(srcrepo, rev)
        sidedata_queue.put((rev, data))
        tokens.acquire()
        rev = revs_queue.get()
    # processing of `None` is completed, release the token.
    tokens.release()


BUFF_PER_WORKER = 50


def _get_worker_sidedata_adder(srcrepo, destrepo):
    """The parallel version of the sidedata computation

    This code spawns a pool of workers that precompute a buffer of sidedata
    before we actually need them"""
    # avoid circular import copies -> scmutil -> worker -> copies
    from . import worker

    nbworkers = worker._numworkers(srcrepo.ui)

    tokens = multiprocessing.BoundedSemaphore(nbworkers * BUFF_PER_WORKER)
    revsq = multiprocessing.Queue()
    sidedataq = multiprocessing.Queue()

    assert srcrepo.filtername is None
    # queue all tasks beforehand, revision numbers are small and it makes
    # synchronisation simpler
    #
    # Since the computation for each node can be quite expensive, the overhead
    # of using a single queue is not relevant. In practice, most computations
    # are fast but some are very expensive and dominate all the other smaller
    # costs.
    for r in srcrepo.changelog.revs():
        revsq.put(r)
    # queue the "no more tasks" markers
    for i in range(nbworkers):
        revsq.put(None)

    allworkers = []
    for i in range(nbworkers):
        args = (srcrepo, revsq, sidedataq, tokens)
        w = multiprocessing.Process(target=_sidedata_worker, args=args)
        allworkers.append(w)
        w.start()

    # dictionary to store results for revisions higher than the one we are
    # looking for. For example, if we need the sidedatamap for 42, and 43 is
    # received, we shelve 43 for later use.
    staging = {}

    def sidedata_companion(revlog, rev):
        sidedata = {}
        if util.safehasattr(revlog, b'filteredrevs'):  # this is a changelog
            # Is the data previously shelved ?
            sidedata = staging.pop(rev, None)
            if sidedata is None:
                # look at the queued results until we find the one we are
                # looking for (shelve the other ones)
                r, sidedata = sidedataq.get()
                while r != rev:
                    staging[r] = sidedata
                    r, sidedata = sidedataq.get()
            tokens.release()
        return False, (), sidedata

    return sidedata_companion


def _get_simple_sidedata_adder(srcrepo, destrepo):
    """The simple version of the sidedata computation

    It just computes it in the same thread on request"""

    def sidedatacompanion(revlog, rev):
        sidedata = {}
        if util.safehasattr(revlog, 'filteredrevs'):  # this is a changelog
            sidedata = _getsidedata(srcrepo, rev)
        return False, (), sidedata

    return sidedatacompanion


def getsidedataremover(srcrepo, destrepo):
    def sidedatacompanion(revlog, rev):
        f = ()
        if util.safehasattr(revlog, 'filteredrevs'):  # this is a changelog
            if revlog.flags(rev) & sidedataflag.REVIDX_SIDEDATA:
                f = (
                    sidedatamod.SD_P1COPIES,
                    sidedatamod.SD_P2COPIES,
                    sidedatamod.SD_FILESADDED,
                    sidedatamod.SD_FILESREMOVED,
                )
        return False, f, {}

    return sidedatacompanion
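
As a quick, hypothetical illustration of the copy encoding used by
encodecopies/decodecopies above (not part of the change itself; it simply
exercises the functions as defined in this file, assuming they are in scope):

    files = [b'a.txt', b'b.txt', b'c.txt']
    copies = {b'b.txt': b'a.txt'}       # b.txt was copied from a.txt
    data = encodecopies(files, copies)  # -> b'1\x00a.txt' (index into files, NUL, source)
    assert decodecopies(files, data) == copies
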
@@ -52,6 +52,7 @@ cscope.*
 .idea/*
 .asv/*
 .pytype/*
+.mypy_cache
 i18n/hg.pot
 locale/*/LC_MESSAGES/hg.mo
 hgext/__index__.py
@@ -232,7 +232,7 @@ static void execcmdserver(const struct c
 		abortmsgerrno("failed to putenv CHG_CLEAR_LC_CTYPE");
 	} else {
 		if (setenv("CHGORIG_LC_CTYPE", lc_ctype_env, 1) != 0) {
-			abortmsgerrno("failed to setenv CHGORIG_LC_CTY
+			abortmsgerrno("failed to setenv CHGORIG_LC_CTYPE");
 		}
 	}
@@ -28,7 +28,7 @@ binopen.options = {}
 
 def printb(data, end=b'\n'):
     sys.stdout.flush()
-    p
+    procutil.stdout.write(data + end)
 
 
 for f in sys.argv[1:]:
@@ -11,6 +11,7 @@ CXX = clang++
 LIB_FUZZING_ENGINE ?= standalone_fuzz_target_runner.o
 
 PYTHON_CONFIG ?= $$OUT/sanpy/bin/python-config
+PYTHON_CONFIG_FLAGS ?= --ldflags
 
 CXXFLAGS += -Wno-deprecated-register
 
@@ -67,7 +68,7 @@ dirs_fuzzer: dirs.cc pyutil.o $(PARSERS_
 	  -Wno-register -Wno-macro-redefined \
 	  -I../../mercurial dirs.cc \
 	  pyutil.o $(PARSERS_OBJS) \
-	  $(LIB_FUZZING_ENGINE) `$(PYTHON_CONFIG)
+	  $(LIB_FUZZING_ENGINE) `$(PYTHON_CONFIG) $(PYTHON_CONFIG_FLAGS)` \
 	  -o $$OUT/dirs_fuzzer
 
 fncache_fuzzer: fncache.cc
@@ -75,7 +76,7 @@ fncache_fuzzer: fncache.cc
 	  -Wno-register -Wno-macro-redefined \
 	  -I../../mercurial fncache.cc \
 	  pyutil.o $(PARSERS_OBJS) \
-	  $(LIB_FUZZING_ENGINE) `$(PYTHON_CONFIG)
+	  $(LIB_FUZZING_ENGINE) `$(PYTHON_CONFIG) $(PYTHON_CONFIG_FLAGS)` \
 	  -o $$OUT/fncache_fuzzer
 
 jsonescapeu8fast_fuzzer: jsonescapeu8fast.cc pyutil.o $(PARSERS_OBJS)
@@ -83,7 +84,7 @@ jsonescapeu8fast_fuzzer: jsonescapeu8fas
 	  -Wno-register -Wno-macro-redefined \
 	  -I../../mercurial jsonescapeu8fast.cc \
 	  pyutil.o $(PARSERS_OBJS) \
-	  $(LIB_FUZZING_ENGINE) `$(PYTHON_CONFIG)
+	  $(LIB_FUZZING_ENGINE) `$(PYTHON_CONFIG) $(PYTHON_CONFIG_FLAGS)` \
 	  -o $$OUT/jsonescapeu8fast_fuzzer
 
 manifest_fuzzer: manifest.cc pyutil.o $(PARSERS_OBJS) $$OUT/manifest_fuzzer_seed_corpus.zip
@@ -91,7 +92,7 @@ manifest_fuzzer: manifest.cc pyutil.o $(
 	  -Wno-register -Wno-macro-redefined \
 	  -I../../mercurial manifest.cc \
 	  pyutil.o $(PARSERS_OBJS) \
-	  $(LIB_FUZZING_ENGINE) `$(PYTHON_CONFIG)
+	  $(LIB_FUZZING_ENGINE) `$(PYTHON_CONFIG) $(PYTHON_CONFIG_FLAGS)` \
 	  -o $$OUT/manifest_fuzzer
 
 revlog_fuzzer: revlog.cc pyutil.o $(PARSERS_OBJS) $$OUT/revlog_fuzzer_seed_corpus.zip
@@ -99,7 +100,7 @@ revlog_fuzzer: revlog.cc pyutil.o $(PARS
 	  -Wno-register -Wno-macro-redefined \
 	  -I../../mercurial revlog.cc \
 	  pyutil.o $(PARSERS_OBJS) \
-	  $(LIB_FUZZING_ENGINE) `$(PYTHON_CONFIG)
+	  $(LIB_FUZZING_ENGINE) `$(PYTHON_CONFIG) $(PYTHON_CONFIG_FLAGS)` \
 	  -o $$OUT/revlog_fuzzer
 
 dirstate_fuzzer: dirstate.cc pyutil.o $(PARSERS_OBJS) $$OUT/dirstate_fuzzer_seed_corpus.zip
@@ -107,7 +108,7 @@ dirstate_fuzzer: dirstate.cc pyutil.o $(
 	  -Wno-register -Wno-macro-redefined \
 	  -I../../mercurial dirstate.cc \
 	  pyutil.o $(PARSERS_OBJS) \
-	  $(LIB_FUZZING_ENGINE) `$(PYTHON_CONFIG)
+	  $(LIB_FUZZING_ENGINE) `$(PYTHON_CONFIG) $(PYTHON_CONFIG_FLAGS)` \
 	  -o $$OUT/dirstate_fuzzer
 
 fm1readmarkers_fuzzer: fm1readmarkers.cc pyutil.o $(PARSERS_OBJS) $$OUT/fm1readmarkers_fuzzer_seed_corpus.zip
@@ -115,7 +116,7 @@ fm1readmarkers_fuzzer: fm1readmarkers.cc
 	  -Wno-register -Wno-macro-redefined \
 	  -I../../mercurial fm1readmarkers.cc \
 	  pyutil.o $(PARSERS_OBJS) \
-	  $(LIB_FUZZING_ENGINE) `$(PYTHON_CONFIG)
+	  $(LIB_FUZZING_ENGINE) `$(PYTHON_CONFIG) $(PYTHON_CONFIG_FLAGS)` \
 	  -o $$OUT/fm1readmarkers_fuzzer
 
 clean:
@@ -3,6 +3,7 @@
 #include <stdlib.h>
 #include <unistd.h>
 
+#include "FuzzedDataProvider.h"
 #include "pyutil.h"
 
 #include <string>
@@ -24,7 +25,7 @@ try:
   lm[e]
   e in lm
   (e + 'nope') in lm
-  lm[b'xyzzy'] = (b'\0' *
+  lm[b'xyzzy'] = (b'\0' * nlen, 'x')
   # do an insert, text should change
   assert lm.text() != mdata, "insert should change text and didn't: %r %r" % (lm.text(), mdata)
   cloned = lm.filtercopy(lambda x: x != 'xyzzy')
@@ -51,10 +52,14 @@ int LLVMFuzzerTestOneInput(const uint8_t
 	if (Size > 100000) {
 		return 0;
 	}
+	FuzzedDataProvider provider(Data, Size);
+	Py_ssize_t nodelength = provider.ConsumeBool() ? 20 : 32;
+	PyObject *nlen = PyLong_FromSsize_t(nodelength);
 	PyObject *mtext =
 	    PyBytes_FromStringAndSize((const char *)Data, (Py_ssize_t)Size);
 	PyObject *locals = PyDict_New();
 	PyDict_SetItemString(locals, "mdata", mtext);
+	PyDict_SetItemString(locals, "nlen", nlen);
 	PyObject *res = PyEval_EvalCode(code, contrib::pyglobals(), locals);
 	if (!res) {
 		PyErr_Print();
@@ -10,7 +10,7 @@ args = ap.parse_args()
 with zipfile.ZipFile(args.out[0], "w", zipfile.ZIP_STORED) as zf:
     zf.writestr(
         "manifest_zero",
-        '''PKG-INFO\09b3ed8f2b81095a13064402e930565f083346e9a
+        '''\0PKG-INFO\09b3ed8f2b81095a13064402e930565f083346e9a
 README\080b6e76643dcb44d4bc729e932fc464b3e36dbe3
 hg\0b6444347c629cc058d478023905cfb83b7f5bb9d
 mercurial/__init__.py\0b80de5d138758541c5f05265ad144ab9fa86d1db
@@ -25,9 +25,14 @@ setup.py\0ccf3f6daf0f13101ca73631f7a1769
 tkmerge\03c922edb43a9c143682f7bc7b00f98b3c756ebe7
 ''',
     )
-    zf.writestr("badmanifest_shorthashes", "narf\0aa\nnarf2\0aaa\n")
+    zf.writestr("badmanifest_shorthashes", "\0narf\0aa\nnarf2\0aaa\n")
     zf.writestr(
         "badmanifest_nonull",
-        "narf\0cccccccccccccccccccccccccccccccccccccccc\n"
+        "\0narf\0cccccccccccccccccccccccccccccccccccccccc\n"
         "narf2aaaaaaaaaaaaaaaaaaaa\n",
     )
+
+    zf.writestr(
+        "manifest_long_nodes",
+        "\1a\0ffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff\n",
+    )
@@ -21,7 +21,7 @@ static PyObject *globals;
 void initpy(const char *cselfpath)
 {
 #ifdef HG_FUZZER_PY3
-	const std::string subdir = "/sanpy/lib/python3.
+	const std::string subdir = "/sanpy/lib/python3.8";
 #else
 	const std::string subdir = "/sanpy/lib/python2.7";
 #endif
@@ -5,6 +5,8 @@ image: octobus/ci-mercurial-core
 before_script:
   - hg clone . /tmp/mercurial-ci/ --noupdate
   - hg -R /tmp/mercurial-ci/ update `hg log --rev '.' --template '{node}'`
+  - cd /tmp/mercurial-ci/rust/rhg
+  - cargo build
   - cd /tmp/mercurial-ci/
   - ls -1 tests/test-check-*.* > /tmp/check-tests.txt
 
@@ -79,3 +81,9 @@ test-py3-rust:
         RUNTEST_ARGS: "--rust --blacklist /tmp/check-tests.txt"
         PYTHON: python3
         TEST_HGMODULEPOLICY: "rust+c"
+
+test-py2-chg:
+    <<: *runtests
+    variables:
+        RUNTEST_ARGS: "--blacklist /tmp/check-tests.txt --chg"
+        TEST_HGMODULEPOLICY: "c"
@@ -2,14 +2,51 @@
 # Uncomment this to turn on verbose mode.
 # export DH_VERBOSE=1
 
+# By default we build a .deb where the native components are built with the
+# current "default" version of py3 on the build machine. If you wish to build a
+# .deb that has native components built for multiple versions of py3:
+#
+#  1. install python3.x and python3.x-dev for each version you want
+#  2. set DEB_HG_MULTI_VERSION=1 or DEB_HG_PYTHON_VERSIONS in your environment
+#     (if both are set, DEB_HG_PYTHON_VERSIONS has precedence)
+#
+# If you choose `DEB_HG_MULTI_VERSION=1`, it will build for every "supported"
+# version of py3 that's installed on the build machine. This may not be equal to
+# the actual versions that are installed, see the comment above where we set
+# DEB_HG_PYTHON_VERSIONS below. If you choose to set `DEB_HG_PYTHON_VERSIONS`
+# yourself, set it to a space-separated string of python version numbers, like:
+#   DEB_HG_PYTHON_VERSIONS="3.7 3.8" make deb
+DEB_HG_MULTI_VERSION?=0
+
 CPUS=$(shell cat /proc/cpuinfo | grep -E ^processor | wc -l)
 
+# By default, only build for the version of python3 that the system considers
+# the 'default' (which should be the one invoked by just running 'python3'
+# without a minor version). If DEB_HG_PYTHON_VERSIONS is set, this is ignored.
+ifeq ($(DEB_HG_MULTI_VERSION), 1)
+# If we're building for multiple versions, use all of the "supported" versions
+# on the build machine. Note: the mechanism in use here (`py3versions`) is the
+# recommended one, but it relies on a file written by the python3-minimal
+# package, and this file is not dynamic and does not account for manual
+# installations, just the ones that would be installed by `python3-all`. This
+# includes the `-i` flag, which claims it's to list all "installed" versions,
+# but it doesn't. This was quite confusing, hence this tale of woe. :)
+DEB_HG_PYTHON_VERSIONS?=$(shell py3versions -vs)
+else
+# If we're building for only one version, identify the "default" version on
+# the build machine and use that when building; this is just so that we don't
+# have to duplicate the rules below for multi-version vs. single-version. The
+# shebang line will still be /usr/bin/python3 (no minor version).
+DEB_HG_PYTHON_VERSIONS?=$(shell py3versions -vd)
+endif
+
 export HGPYTHON3=1
 export PYTHON=python3
 
 %:
 	dh $@ --with python3
 
+# Note: testing can be disabled using the standard `DEB_BUILD_OPTIONS=nocheck`
 override_dh_auto_test:
 	http_proxy='' dh_auto_test -- TESTFLAGS="-j$(CPUS)"
 
@@ -24,8 +61,15 @@ override_dh_auto_build:
 	$(MAKE) all
 	$(MAKE) -C contrib/chg all
 
-override_dh_auto_install:
-	python3 setup.py install --root "$(CURDIR)"/debian/mercurial --install-layout=deb
+# Build the native extensions for a specific python3 version (which must be
+# installed on the build machine).
+install-python%:
+	python$* setup.py install --root "$(CURDIR)"/debian/mercurial --install-layout=deb
+
+# Build the final package. This rule has a dependencies section that causes the
+# native extensions to be compiled for every version of python3 listed in
+# DEB_HG_PYTHON_VERSIONS.
+override_dh_auto_install: $(DEB_HG_PYTHON_VERSIONS:%=install-python%)
 	# chg
 	make -C contrib/chg \
 		DESTDIR="$(CURDIR)"/debian/mercurial \
@@ -3794,19 +3794,47 @@ def perflrucache(
     fm.end()
 
 
-@command(b'perfwrite', formatteropts)
+@command(
+    b'perfwrite',
+    formatteropts
+    + [
+        (b'', b'write-method', b'write', b'ui write method'),
+        (b'', b'nlines', 100, b'number of lines'),
+        (b'', b'nitems', 100, b'number of items (per line)'),
+        (b'', b'item', b'x', b'item that is written'),
+        (b'', b'batch-line', None, b'pass whole line to write method at once'),
+        (b'', b'flush-line', None, b'flush after each line'),
+    ],
+)
 def perfwrite(ui, repo, **opts):
-    """microbenchmark ui.write
+    """microbenchmark ui.write (and others)
     """
     opts = _byteskwargs(opts)
 
+    write = getattr(ui, _sysstr(opts[b'write_method']))
+    nlines = int(opts[b'nlines'])
+    nitems = int(opts[b'nitems'])
+    item = opts[b'item']
+    batch_line = opts.get(b'batch_line')
+    flush_line = opts.get(b'flush_line')
+
+    if batch_line:
+        line = item * nitems + b'\n'
+
+    def benchmark():
+        for i in pycompat.xrange(nlines):
+            if batch_line:
+                write(line)
+            else:
+                for i in pycompat.xrange(nitems):
+                    write(item)
+                write(b'\n')
+            if flush_line:
+                ui.flush()
+        ui.flush()
+
     timer, fm = gettimer(ui, opts)
-
-    def write():
-        for i in range(100000):
-            ui.writenoi18n(b'Testing write performance\n')
-
-    timer(write)
+    timer(benchmark)
     fm.end()
 
 
@@ -45,8 +45,8 @@ class ParseError(Exception):
 
 
 def showhelp():
-    p
-    p
+    procutil.stdout.write(usage)
+    procutil.stdout.write(b'\noptions:\n')
 
     out_opts = []
     for shortopt, longopt, default, desc in options:
@@ -62,11 +62,11 @@ def showhelp():
     )
     opts_len = max([len(opt[0]) for opt in out_opts])
     for first, second in out_opts:
-        p
+        procutil.stdout.write(b' %-*s %s\n' % (opts_len, first, second))
 
 
 try:
-    for fp in (sys.stdin, p
+    for fp in (sys.stdin, procutil.stdout, sys.stderr):
         procutil.setbinary(fp)
 
     opts = {}
@@ -92,11 +92,11 @@ try:
     )
 except ParseError as e:
     e = stringutil.forcebytestr(e)
-    p
+    procutil.stdout.write(b"%s: %s\n" % (sys.argv[0].encode('utf8'), e))
     showhelp()
     sys.exit(1)
 except error.Abort as e:
-    p
+    procutil.stderr.write(b"abort: %s\n" % e)
     sys.exit(255)
 except KeyboardInterrupt:
     sys.exit(255)
@@ -9,7 +9,6 @@ import sys
 from mercurial import (
     encoding,
     node,
-    pycompat,
     revlog,
     transaction,
     vfs as vfsmod,
@@ -30,7 +29,7 @@ while True:
     if l.startswith("file:"):
         f = encoding.strtolocal(l[6:-1])
         r = revlog.revlog(opener, f)
-        p
+        procutil.stdout.write(b'%s\n' % f)
     elif l.startswith("node:"):
         n = node.bin(l[6:-1])
     elif l.startswith("linkrev:"):
@@ -50,6 +50,7 @@ from mercurial import (
     phases,
     pycompat,
     registrar,
+    rewriteutil,
     scmutil,
     util,
 )
@@ -782,7 +783,10 @@ class fixupstate(object):
                 # nothing changed, nothing commited
                 nextp1 = ctx
                 continue
-            if self._willbecomenoop(memworkingcopy, ctx, nextp1):
+            willbecomenoop = ctx.files() and self._willbecomenoop(
+                memworkingcopy, ctx, nextp1
+            )
+            if self.skip_empty_successor and willbecomenoop:
                 # changeset is no longer necessary
                 self.replacemap[ctx.node()] = None
                 msg = _(b'became empty and was dropped')
@@ -793,7 +797,11 @@ class fixupstate(object):
             nextp1 = lastcommitted
             self.replacemap[ctx.node()] = lastcommitted.node()
             if memworkingcopy:
-                msg = _(b'%d file(s) changed, became %s') % (
+                if willbecomenoop:
+                    msg = _(b'%d file(s) changed, became empty as %s')
+                else:
+                    msg = _(b'%d file(s) changed, became %s')
+                msg = msg % (
                     len(memworkingcopy),
                     self._ctx2str(lastcommitted),
                 )
@@ -887,6 +895,10 @@ class fixupstate(object):
         if len(parents) != 1:
             return False
         pctx = parents[0]
+        if ctx.branch() != pctx.branch():
+            return False
+        if ctx.extra().get(b'close'):
+            return False
         # ctx changes more files (not a subset of memworkingcopy)
         if not set(ctx.files()).issubset(set(memworkingcopy)):
             return False
@@ -929,6 +941,10 @@ class fixupstate(object):
             self.repo, replacements, operation=b'absorb', fixphase=True
         )
 
+    @util.propertycache
+    def skip_empty_successor(self):
+        return rewriteutil.skip_empty_successor(self.ui, b'absorb')
+
 
 def _parsechunk(hunk):
     """(crecord.uihunk or patch.recordhunk) -> (path, (a1, a2, [bline]))"""
@@ -1045,7 +1061,7 @@ def absorb(ui, repo, stack=None, targetc
         not opts.get(b'apply_changes')
         and state.ctxaffected
         and ui.promptchoice(
-            b"apply changes (y
+            b"apply changes (y/N)? $$ &Yes $$ &No", default=1
         )
     ):
         raise error.Abort(_(b'absorb cancelled\n'))
@@ -226,8 +226,7 @@ class convert_cvs(converter_source):
                 cmd = [rsh, host] + cmd
 
             # popen2 does not support argument lists under Windows
-            cmd =
-            cmd = procutil.quotecommand(b' '.join(cmd))
+            cmd = b' '.join(procutil.shellquote(arg) for arg in cmd)
             self.writep, self.readp = procutil.popen2(cmd)
 
         self.realroot = root
@@ -217,7 +217,7 @@ class gnuarch_source(common.converter_so
         cmdline = [procutil.shellquote(arg) for arg in cmdline]
         bdevnull = pycompat.bytestr(os.devnull)
         cmdline += [b'>', bdevnull, b'2>', bdevnull]
-        cmdline =
+        cmdline = b' '.join(cmdline)
         self.ui.debug(cmdline, b'\n')
         return os.system(pycompat.rapply(procutil.tonativestr, cmdline))
 
@@ -1366,7 +1366,7 @@ class svn_source(converter_source):
         arg = encodeargs(args)
         hgexe = procutil.hgexecutable()
         cmd = b'%s debugsvnlog' % procutil.shellquote(hgexe)
-        stdin, stdout = procutil.popen2(
+        stdin, stdout = procutil.popen2(cmd)
         stdin.write(arg)
         try:
             stdin.close()
@@ -233,7 +233,6 @@ def _systembackground(cmd, environ=None,
     ''' like 'procutil.system', but returns the Popen object directly
         so we don't have to wait on it.
     '''
-    cmd = procutil.quotecommand(cmd)
     env = procutil.shellenviron(environ)
     proc = subprocess.Popen(
         procutil.tonativestr(cmd),
@@ -351,6 +350,187 b' def _runperfilediff(' | |||||
351 | proc.wait() |
|
350 | proc.wait() | |
352 |
|
351 | |||
353 |
|
352 | |||
|
353 | def diffpatch(ui, repo, node1, node2, tmproot, matcher, cmdline): | |||
|
354 | template = b'hg-%h.patch' | |||
|
355 | # write patches to temporary files | |||
|
356 | with formatter.nullformatter(ui, b'extdiff', {}) as fm: | |||
|
357 | cmdutil.export( | |||
|
358 | repo, | |||
|
359 | [repo[node1].rev(), repo[node2].rev()], | |||
|
360 | fm, | |||
|
361 | fntemplate=repo.vfs.reljoin(tmproot, template), | |||
|
362 | match=matcher, | |||
|
363 | ) | |||
|
364 | label1 = cmdutil.makefilename(repo[node1], template) | |||
|
365 | label2 = cmdutil.makefilename(repo[node2], template) | |||
|
366 | file1 = repo.vfs.reljoin(tmproot, label1) | |||
|
367 | file2 = repo.vfs.reljoin(tmproot, label2) | |||
|
368 | cmdline = formatcmdline( | |||
|
369 | cmdline, | |||
|
370 | repo.root, | |||
|
371 | # no 3way while comparing patches | |||
|
372 | do3way=False, | |||
|
373 | parent1=file1, | |||
|
374 | plabel1=label1, | |||
|
375 | # while comparing patches, there is no second parent | |||
|
376 | parent2=None, | |||
|
377 | plabel2=None, | |||
|
378 | child=file2, | |||
|
379 | clabel=label2, | |||
|
380 | ) | |||
|
381 | ui.debug(b'running %r in %s\n' % (pycompat.bytestr(cmdline), tmproot)) | |||
|
382 | ui.system(cmdline, cwd=tmproot, blockedtag=b'extdiff') | |||
|
383 | return 1 | |||
|
384 | ||||
|
385 | ||||
|
386 | def diffrevs( | |||
|
387 | ui, | |||
|
388 | repo, | |||
|
389 | node1a, | |||
|
390 | node1b, | |||
|
391 | node2, | |||
|
392 | matcher, | |||
|
393 | tmproot, | |||
|
394 | cmdline, | |||
|
395 | do3way, | |||
|
396 | guitool, | |||
|
397 | opts, | |||
|
398 | ): | |||
|
399 | ||||
|
400 | subrepos = opts.get(b'subrepos') | |||
|
401 | ||||
|
402 | # calculate list of files changed between both revs | |||
|
403 | st = repo.status(node1a, node2, matcher, listsubrepos=subrepos) | |||
|
404 | mod_a, add_a, rem_a = set(st.modified), set(st.added), set(st.removed) | |||
|
405 | if do3way: | |||
|
406 | stb = repo.status(node1b, node2, matcher, listsubrepos=subrepos) | |||
|
407 | mod_b, add_b, rem_b = ( | |||
|
408 | set(stb.modified), | |||
|
409 | set(stb.added), | |||
|
410 | set(stb.removed), | |||
|
411 | ) | |||
|
412 | else: | |||
|
413 | mod_b, add_b, rem_b = set(), set(), set() | |||
|
414 | modadd = mod_a | add_a | mod_b | add_b | |||
|
415 | common = modadd | rem_a | rem_b | |||
|
416 | if not common: | |||
|
417 | return 0 | |||
|
418 | ||||
|
419 | # Always make a copy of node1a (and node1b, if applicable) | |||
|
420 | # dir1a should contain files which are: | |||
|
421 | # * modified or removed from node1a to node2 | |||
|
422 | # * modified or added from node1b to node2 | |||
|
423 | # (except file added from node1a to node2 as they were not present in | |||
|
424 | # node1a) | |||
|
425 | dir1a_files = mod_a | rem_a | ((mod_b | add_b) - add_a) | |||
|
426 | dir1a = snapshot(ui, repo, dir1a_files, node1a, tmproot, subrepos)[0] | |||
|
427 | rev1a = b'@%d' % repo[node1a].rev() | |||
|
428 | if do3way: | |||
|
429 | # file calculation criteria same as dir1a | |||
|
430 | dir1b_files = mod_b | rem_b | ((mod_a | add_a) - add_b) | |||
|
431 | dir1b = snapshot(ui, repo, dir1b_files, node1b, tmproot, subrepos)[0] | |||
|
432 | rev1b = b'@%d' % repo[node1b].rev() | |||
|
433 | else: | |||
|
434 | dir1b = None | |||
|
435 | rev1b = b'' | |||
|
436 | ||||
|
437 | fnsandstat = [] | |||
|
438 | ||||
|
439 | # If node2 in not the wc or there is >1 change, copy it | |||
|
440 | dir2root = b'' | |||
|
441 | rev2 = b'' | |||
|
442 | if node2: | |||
|
443 | dir2 = snapshot(ui, repo, modadd, node2, tmproot, subrepos)[0] | |||
|
444 | rev2 = b'@%d' % repo[node2].rev() | |||
|
445 | elif len(common) > 1: | |||
|
446 | # we only actually need to get the files to copy back to | |||
|
447 | # the working dir in this case (because the other cases | |||
|
448 | # are: diffing 2 revisions or single file -- in which case | |||
|
449 | # the file is already directly passed to the diff tool). | |||
|
450 | dir2, fnsandstat = snapshot(ui, repo, modadd, None, tmproot, subrepos) | |||
|
451 | else: | |||
|
452 | # This lets the diff tool open the changed file directly | |||
|
453 | dir2 = b'' | |||
|
454 | dir2root = repo.root | |||
|
455 | ||||
|
456 | label1a = rev1a | |||
|
457 | label1b = rev1b | |||
|
458 | label2 = rev2 | |||
|
459 | ||||
|
460 | # If only one change, diff the files instead of the directories | |||
|
461 | # Handle bogus modifies correctly by checking if the files exist | |||
|
462 | if len(common) == 1: | |||
|
463 | common_file = util.localpath(common.pop()) | |||
|
464 | dir1a = os.path.join(tmproot, dir1a, common_file) | |||
|
465 | label1a = common_file + rev1a | |||
|
466 | if not os.path.isfile(dir1a): | |||
|
467 | dir1a = pycompat.osdevnull | |||
|
468 | if do3way: | |||
|
469 | dir1b = os.path.join(tmproot, dir1b, common_file) | |||
|
470 | label1b = common_file + rev1b | |||
|
471 | if not os.path.isfile(dir1b): | |||
|
472 | dir1b = pycompat.osdevnull | |||
|
473 | dir2 = os.path.join(dir2root, dir2, common_file) | |||
|
474 | label2 = common_file + rev2 | |||
|
475 | ||||
|
476 | if not opts.get(b'per_file'): | |||
|
477 | # Run the external tool on the 2 temp directories or the patches | |||
|
478 | cmdline = formatcmdline( | |||
|
479 | cmdline, | |||
|
480 | repo.root, | |||
|
481 | do3way=do3way, | |||
|
482 | parent1=dir1a, | |||
|
483 | plabel1=label1a, | |||
|
484 | parent2=dir1b, | |||
|
485 | plabel2=label1b, | |||
|
486 | child=dir2, | |||
|
487 | clabel=label2, | |||
|
488 | ) | |||
|
489 | ui.debug(b'running %r in %s\n' % (pycompat.bytestr(cmdline), tmproot)) | |||
|
490 | ui.system(cmdline, cwd=tmproot, blockedtag=b'extdiff') | |||
|
491 | else: | |||
|
492 | # Run the external tool once for each pair of files | |||
|
493 | _runperfilediff( | |||
|
494 | cmdline, | |||
|
495 | repo.root, | |||
|
496 | ui, | |||
|
497 | guitool=guitool, | |||
|
498 | do3way=do3way, | |||
|
499 | confirm=opts.get(b'confirm'), | |||
|
500 | commonfiles=common, | |||
|
501 | tmproot=tmproot, | |||
|
502 | dir1a=dir1a, | |||
|
503 | dir1b=dir1b, | |||
|
504 | dir2root=dir2root, | |||
|
505 | dir2=dir2, | |||
|
506 | rev1a=rev1a, | |||
|
507 | rev1b=rev1b, | |||
|
508 | rev2=rev2, | |||
|
509 | ) | |||
|
510 | ||||
|
511 | for copy_fn, working_fn, st in fnsandstat: | |||
|
512 | cpstat = os.lstat(copy_fn) | |||
|
513 | # Some tools copy the file and attributes, so mtime may not detect | |||
|
514 | # all changes. A size check will detect more cases, but not all. | |||
|
515 | # The only certain way to detect every case is to diff all files, | |||
|
516 | # which could be expensive. | |||
|
517 | # copyfile() carries over the permission, so the mode check could | |||
|
518 | # be in an 'elif' branch, but for the case where the file has | |||
|
519 | # changed without affecting mtime or size. | |||
|
520 | if ( | |||
|
521 | cpstat[stat.ST_MTIME] != st[stat.ST_MTIME] | |||
|
522 | or cpstat.st_size != st.st_size | |||
|
523 | or (cpstat.st_mode & 0o100) != (st.st_mode & 0o100) | |||
|
524 | ): | |||
|
525 | ui.debug( | |||
|
526 | b'file changed while diffing. ' | |||
|
527 | b'Overwriting: %s (src: %s)\n' % (working_fn, copy_fn) | |||
|
528 | ) | |||
|
529 | util.copyfile(copy_fn, working_fn) | |||
|
530 | ||||
|
531 | return 1 | |||
|
532 | ||||
|
533 | ||||
354 | def dodiff(ui, repo, cmdline, pats, opts, guitool=False): |
|
534 | def dodiff(ui, repo, cmdline, pats, opts, guitool=False): | |
355 | '''Do the actual diff: |
|
535 | '''Do the actual diff: | |
356 |
|
536 | |||
@@ -360,14 +540,12 b' def dodiff(ui, repo, cmdline, pats, opts' | |||||
360 | - just invoke the diff for a single file in the working dir |
|
540 | - just invoke the diff for a single file in the working dir | |
361 | ''' |
|
541 | ''' | |
362 |
|
542 | |||
|
543 | cmdutil.check_at_most_one_arg(opts, b'rev', b'change') | |||
363 | revs = opts.get(b'rev') |
|
544 | revs = opts.get(b'rev') | |
364 | change = opts.get(b'change') |
|
545 | change = opts.get(b'change') | |
365 | do3way = b'$parent2' in cmdline |
|
546 | do3way = b'$parent2' in cmdline | |
366 |
|
547 | |||
367 | if revs and change: |
|
548 | if change: | |
368 | msg = _(b'cannot specify --rev and --change at the same time') |
|
|||
369 | raise error.Abort(msg) |
|
|||
370 | elif change: |
|
|||
371 | ctx2 = scmutil.revsingle(repo, change, None) |
|
549 | ctx2 = scmutil.revsingle(repo, change, None) | |
372 | ctx1a, ctx1b = ctx2.p1(), ctx2.p2() |
|
550 | ctx1a, ctx1b = ctx2.p1(), ctx2.p2() | |
373 | else: |
|
551 | else: | |
@@ -377,9 +555,6 b' def dodiff(ui, repo, cmdline, pats, opts' | |||||
377 | else: |
|
555 | else: | |
378 | ctx1b = repo[nullid] |
|
556 | ctx1b = repo[nullid] | |
379 |
|
557 | |||
380 | perfile = opts.get(b'per_file') |
|
|||
381 | confirm = opts.get(b'confirm') |
|
|||
382 |
|
||||
383 | node1a = ctx1a.node() |
|
558 | node1a = ctx1a.node() | |
384 | node1b = ctx1b.node() |
|
559 | node1b = ctx1b.node() | |
385 | node2 = ctx2.node() |
|
560 | node2 = ctx2.node() | |
@@ -389,169 +564,35 b' def dodiff(ui, repo, cmdline, pats, opts' | |||||
389 | if node1b == nullid: |
|
564 | if node1b == nullid: | |
390 | do3way = False |
|
565 | do3way = False | |
391 |
|
566 | |||
392 | subrepos = opts.get(b'subrepos') |
|
|||
393 |
|
||||
394 | matcher = scmutil.match(repo[node2], pats, opts) |
|
567 | matcher = scmutil.match(repo[node2], pats, opts) | |
395 |
|
568 | |||
396 | if opts.get(b'patch'): |
|
569 | if opts.get(b'patch'): | |
397 | if subrepos: |
|
570 | if opts.get(b'subrepos'): | |
398 | raise error.Abort(_(b'--patch cannot be used with --subrepos')) |
|
571 | raise error.Abort(_(b'--patch cannot be used with --subrepos')) | |
399 | if perfile: |
|
572 | if opts.get(b'per_file'): | |
400 | raise error.Abort(_(b'--patch cannot be used with --per-file')) |
|
573 | raise error.Abort(_(b'--patch cannot be used with --per-file')) | |
401 | if node2 is None: |
|
574 | if node2 is None: | |
402 | raise error.Abort(_(b'--patch requires two revisions')) |
|
575 | raise error.Abort(_(b'--patch requires two revisions')) | |
403 | else: |
|
|||
404 | st = repo.status(node1a, node2, matcher, listsubrepos=subrepos) |
|
|||
405 | mod_a, add_a, rem_a = set(st.modified), set(st.added), set(st.removed) |
|
|||
406 | if do3way: |
|
|||
407 | stb = repo.status(node1b, node2, matcher, listsubrepos=subrepos) |
|
|||
408 | mod_b, add_b, rem_b = ( |
|
|||
409 | set(stb.modified), |
|
|||
410 | set(stb.added), |
|
|||
411 | set(stb.removed), |
|
|||
412 | ) |
|
|||
413 | else: |
|
|||
414 | mod_b, add_b, rem_b = set(), set(), set() |
|
|||
415 | modadd = mod_a | add_a | mod_b | add_b |
|
|||
416 | common = modadd | rem_a | rem_b |
|
|||
417 | if not common: |
|
|||
418 | return 0 |
|
|||
419 |
|
576 | |||
420 | tmproot = pycompat.mkdtemp(prefix=b'extdiff.') |
|
577 | tmproot = pycompat.mkdtemp(prefix=b'extdiff.') | |
421 | try: |
|
578 | try: | |
422 | if not opts.get(b'patch'): |
|
579 | if opts.get(b'patch'): | |
423 | # Always make a copy of node1a (and node1b, if applicable) |
|
580 | return diffpatch(ui, repo, node1a, node2, tmproot, matcher, cmdline) | |
424 | dir1a_files = mod_a | rem_a | ((mod_b | add_b) - add_a) |
|
|||
425 | dir1a = snapshot(ui, repo, dir1a_files, node1a, tmproot, subrepos)[ |
|
|||
426 | 0 |
|
|||
427 | ] |
|
|||
428 | rev1a = b'@%d' % repo[node1a].rev() |
|
|||
429 | if do3way: |
|
|||
430 | dir1b_files = mod_b | rem_b | ((mod_a | add_a) - add_b) |
|
|||
431 | dir1b = snapshot( |
|
|||
432 | ui, repo, dir1b_files, node1b, tmproot, subrepos |
|
|||
433 | )[0] |
|
|||
434 | rev1b = b'@%d' % repo[node1b].rev() |
|
|||
435 | else: |
|
|||
436 | dir1b = None |
|
|||
437 | rev1b = b'' |
|
|||
438 |
|
||||
439 | fnsandstat = [] |
|
|||
440 |
|
||||
441 | # If node2 in not the wc or there is >1 change, copy it |
|
|||
442 | dir2root = b'' |
|
|||
443 | rev2 = b'' |
|
|||
444 | if node2: |
|
|||
445 | dir2 = snapshot(ui, repo, modadd, node2, tmproot, subrepos)[0] |
|
|||
446 | rev2 = b'@%d' % repo[node2].rev() |
|
|||
447 | elif len(common) > 1: |
|
|||
448 | # we only actually need to get the files to copy back to |
|
|||
449 | # the working dir in this case (because the other cases |
|
|||
450 | # are: diffing 2 revisions or single file -- in which case |
|
|||
451 | # the file is already directly passed to the diff tool). |
|
|||
452 | dir2, fnsandstat = snapshot( |
|
|||
453 | ui, repo, modadd, None, tmproot, subrepos |
|
|||
454 | ) |
|
|||
455 | else: |
|
|||
456 | # This lets the diff tool open the changed file directly |
|
|||
457 | dir2 = b'' |
|
|||
458 | dir2root = repo.root |
|
|||
459 |
|
||||
460 | label1a = rev1a |
|
|||
461 | label1b = rev1b |
|
|||
462 | label2 = rev2 |
|
|||
463 |
|
581 | |||
464 | # If only one change, diff the files instead of the directories |
|
582 | return diffrevs( | |
465 | # Handle bogus modifies correctly by checking if the files exist |
|
583 | ui, | |
466 | if len(common) == 1: |
|
584 | repo, | |
467 | common_file = util.localpath(common.pop()) |
|
585 | node1a, | |
468 | dir1a = os.path.join(tmproot, dir1a, common_file) |
|
586 | node1b, | |
469 | label1a = common_file + rev1a |
|
587 | node2, | |
470 | if not os.path.isfile(dir1a): |
|
588 | matcher, | |
471 | dir1a = pycompat.osdevnull |
|
589 | tmproot, | |
472 | if do3way: |
|
590 | cmdline, | |
473 | dir1b = os.path.join(tmproot, dir1b, common_file) |
|
591 | do3way, | |
474 | label1b = common_file + rev1b |
|
592 | guitool, | |
475 | if not os.path.isfile(dir1b): |
|
593 | opts, | |
476 | dir1b = pycompat.osdevnull |
|
594 | ) | |
477 | dir2 = os.path.join(dir2root, dir2, common_file) |
|
|||
478 | label2 = common_file + rev2 |
|
|||
479 | else: |
|
|||
480 | template = b'hg-%h.patch' |
|
|||
481 | with formatter.nullformatter(ui, b'extdiff', {}) as fm: |
|
|||
482 | cmdutil.export( |
|
|||
483 | repo, |
|
|||
484 | [repo[node1a].rev(), repo[node2].rev()], |
|
|||
485 | fm, |
|
|||
486 | fntemplate=repo.vfs.reljoin(tmproot, template), |
|
|||
487 | match=matcher, |
|
|||
488 | ) |
|
|||
489 | label1a = cmdutil.makefilename(repo[node1a], template) |
|
|||
490 | label2 = cmdutil.makefilename(repo[node2], template) |
|
|||
491 | dir1a = repo.vfs.reljoin(tmproot, label1a) |
|
|||
492 | dir2 = repo.vfs.reljoin(tmproot, label2) |
|
|||
493 | dir1b = None |
|
|||
494 | label1b = None |
|
|||
495 | fnsandstat = [] |
|
|||
496 |
|
595 | |||
497 | if not perfile: |
|
|||
498 | # Run the external tool on the 2 temp directories or the patches |
|
|||
499 | cmdline = formatcmdline( |
|
|||
500 | cmdline, |
|
|||
501 | repo.root, |
|
|||
502 | do3way=do3way, |
|
|||
503 | parent1=dir1a, |
|
|||
504 | plabel1=label1a, |
|
|||
505 | parent2=dir1b, |
|
|||
506 | plabel2=label1b, |
|
|||
507 | child=dir2, |
|
|||
508 | clabel=label2, |
|
|||
509 | ) |
|
|||
510 | ui.debug( |
|
|||
511 | b'running %r in %s\n' % (pycompat.bytestr(cmdline), tmproot) |
|
|||
512 | ) |
|
|||
513 | ui.system(cmdline, cwd=tmproot, blockedtag=b'extdiff') |
|
|||
514 | else: |
|
|||
515 | # Run the external tool once for each pair of files |
|
|||
516 | _runperfilediff( |
|
|||
517 | cmdline, |
|
|||
518 | repo.root, |
|
|||
519 | ui, |
|
|||
520 | guitool=guitool, |
|
|||
521 | do3way=do3way, |
|
|||
522 | confirm=confirm, |
|
|||
523 | commonfiles=common, |
|
|||
524 | tmproot=tmproot, |
|
|||
525 | dir1a=dir1a, |
|
|||
526 | dir1b=dir1b, |
|
|||
527 | dir2root=dir2root, |
|
|||
528 | dir2=dir2, |
|
|||
529 | rev1a=rev1a, |
|
|||
530 | rev1b=rev1b, |
|
|||
531 | rev2=rev2, |
|
|||
532 | ) |
|
|||
533 |
|
||||
534 | for copy_fn, working_fn, st in fnsandstat: |
|
|||
535 | cpstat = os.lstat(copy_fn) |
|
|||
536 | # Some tools copy the file and attributes, so mtime may not detect |
|
|||
537 | # all changes. A size check will detect more cases, but not all. |
|
|||
538 | # The only certain way to detect every case is to diff all files, |
|
|||
539 | # which could be expensive. |
|
|||
540 | # copyfile() carries over the permission, so the mode check could |
|
|||
541 | # be in an 'elif' branch, but for the case where the file has |
|
|||
542 | # changed without affecting mtime or size. |
|
|||
543 | if ( |
|
|||
544 | cpstat[stat.ST_MTIME] != st[stat.ST_MTIME] |
|
|||
545 | or cpstat.st_size != st.st_size |
|
|||
546 | or (cpstat.st_mode & 0o100) != (st.st_mode & 0o100) |
|
|||
547 | ): |
|
|||
548 | ui.debug( |
|
|||
549 | b'file changed while diffing. ' |
|
|||
550 | b'Overwriting: %s (src: %s)\n' % (working_fn, copy_fn) |
|
|||
551 | ) |
|
|||
552 | util.copyfile(copy_fn, working_fn) |
|
|||
553 |
|
||||
554 | return 1 |
|
|||
555 | finally: |
|
596 | finally: | |
556 | ui.note(_(b'cleaning up temp directory\n')) |
|
597 | ui.note(_(b'cleaning up temp directory\n')) | |
557 | shutil.rmtree(tmproot) |
|
598 | shutil.rmtree(tmproot) |
@@ -144,6 +144,7 b' from mercurial import (' | |||||
144 | match as matchmod, |
|
144 | match as matchmod, | |
145 | mdiff, |
|
145 | mdiff, | |
146 | merge, |
|
146 | merge, | |
|
147 | mergestate as mergestatemod, | |||
147 | pycompat, |
|
148 | pycompat, | |
148 | registrar, |
|
149 | registrar, | |
149 | rewriteutil, |
|
150 | rewriteutil, | |
@@ -267,8 +268,14 b' def fix(ui, repo, *pats, **opts):' | |||||
267 | workqueue, numitems = getworkqueue( |
|
268 | workqueue, numitems = getworkqueue( | |
268 | ui, repo, pats, opts, revstofix, basectxs |
|
269 | ui, repo, pats, opts, revstofix, basectxs | |
269 | ) |
|
270 | ) | |
|
271 | basepaths = getbasepaths(repo, opts, workqueue, basectxs) | |||
270 | fixers = getfixers(ui) |
|
272 | fixers = getfixers(ui) | |
271 |
|
273 | |||
|
274 | # Rather than letting each worker independently fetch the files | |||
|
275 | # (which also would add complications for shared/keepalive | |||
|
276 | # connections), prefetch them all first. | |||
|
277 | _prefetchfiles(repo, workqueue, basepaths) | |||
|
278 | ||||
272 | # There are no data dependencies between the workers fixing each file |
|
279 | # There are no data dependencies between the workers fixing each file | |
273 | # revision, so we can use all available parallelism. |
|
280 | # revision, so we can use all available parallelism. | |
274 | def getfixes(items): |
|
281 | def getfixes(items): | |
@@ -276,7 +283,7 b' def fix(ui, repo, *pats, **opts):' | |||||
276 | ctx = repo[rev] |
|
283 | ctx = repo[rev] | |
277 | olddata = ctx[path].data() |
|
284 | olddata = ctx[path].data() | |
278 | metadata, newdata = fixfile( |
|
285 | metadata, newdata = fixfile( | |
279 | ui, repo, opts, fixers, ctx, path, basectxs[rev] |
|
286 | ui, repo, opts, fixers, ctx, path, basepaths, basectxs[rev] | |
280 | ) |
|
287 | ) | |
281 | # Don't waste memory/time passing unchanged content back, but |
|
288 | # Don't waste memory/time passing unchanged content back, but | |
282 | # produce one result per item either way. |
|
289 | # produce one result per item either way. | |
@@ -426,7 +433,9 b' def getrevstofix(ui, repo, opts):' | |||||
426 | if not (len(revs) == 1 and wdirrev in revs): |
|
433 | if not (len(revs) == 1 and wdirrev in revs): | |
427 | cmdutil.checkunfinished(repo) |
|
434 | cmdutil.checkunfinished(repo) | |
428 | rewriteutil.precheck(repo, revs, b'fix') |
|
435 | rewriteutil.precheck(repo, revs, b'fix') | |
429 | if wdirrev in revs and list(merge.mergestate.read(repo).unresolved()): |
|
436 | if wdirrev in revs and list( | |
|
437 | mergestatemod.mergestate.read(repo).unresolved() | |||
|
438 | ): | |||
430 | raise error.Abort(b'unresolved conflicts', hint=b"use 'hg resolve'") |
|
439 | raise error.Abort(b'unresolved conflicts', hint=b"use 'hg resolve'") | |
431 | if not revs: |
|
440 | if not revs: | |
432 | raise error.Abort( |
|
441 | raise error.Abort( | |
@@ -470,7 +479,7 b' def pathstofix(ui, repo, pats, opts, mat' | |||||
470 | return files |
|
479 | return files | |
471 |
|
480 | |||
472 |
|
481 | |||
473 | def lineranges(opts, path, basectxs, fixctx, content2): |
|
482 | def lineranges(opts, path, basepaths, basectxs, fixctx, content2): | |
474 | """Returns the set of line ranges that should be fixed in a file |
|
483 | """Returns the set of line ranges that should be fixed in a file | |
475 |
|
484 | |||
476 | Of the form [(10, 20), (30, 40)]. |
|
485 | Of the form [(10, 20), (30, 40)]. | |
@@ -489,7 +498,8 b' def lineranges(opts, path, basectxs, fix' | |||||
489 |
|
498 | |||
490 | rangeslist = [] |
|
499 | rangeslist = [] | |
491 | for basectx in basectxs: |
|
500 | for basectx in basectxs: | |
492 | basepath = copies.pathcopies(basectx, fixctx).get(path, path) |
|
501 | basepath = basepaths.get((basectx.rev(), fixctx.rev(), path), path) | |
|
502 | ||||
493 | if basepath in basectx: |
|
503 | if basepath in basectx: | |
494 | content1 = basectx[basepath].data() |
|
504 | content1 = basectx[basepath].data() | |
495 | else: |
|
505 | else: | |
@@ -498,6 +508,21 b' def lineranges(opts, path, basectxs, fix' | |||||
498 | return unionranges(rangeslist) |
|
508 | return unionranges(rangeslist) | |
499 |
|
509 | |||
500 |
|
510 | |||
|
511 | def getbasepaths(repo, opts, workqueue, basectxs): | |||
|
512 | if opts.get(b'whole'): | |||
|
513 | # Base paths will never be fetched for line range determination. | |||
|
514 | return {} | |||
|
515 | ||||
|
516 | basepaths = {} | |||
|
517 | for rev, path in workqueue: | |||
|
518 | fixctx = repo[rev] | |||
|
519 | for basectx in basectxs[rev]: | |||
|
520 | basepath = copies.pathcopies(basectx, fixctx).get(path, path) | |||
|
521 | if basepath in basectx: | |||
|
522 | basepaths[(basectx.rev(), fixctx.rev(), path)] = basepath | |||
|
523 | return basepaths | |||
|
524 | ||||
|
525 | ||||
501 | def unionranges(rangeslist): |
|
526 | def unionranges(rangeslist): | |
502 | """Return the union of some closed intervals |
|
527 | """Return the union of some closed intervals | |
503 |
|
528 | |||
@@ -610,7 +635,30 b' def getbasectxs(repo, opts, revstofix):' | |||||
610 | return basectxs |
|
635 | return basectxs | |
611 |
|
636 | |||
612 |
|
637 | |||
613 | def fixfile(ui, repo, opts, fixers, fixctx, path, basectxs): |
|
638 | def _prefetchfiles(repo, workqueue, basepaths): | |
|
639 | toprefetch = set() | |||
|
640 | ||||
|
641 | # Prefetch the files that will be fixed. | |||
|
642 | for rev, path in workqueue: | |||
|
643 | if rev == wdirrev: | |||
|
644 | continue | |||
|
645 | toprefetch.add((rev, path)) | |||
|
646 | ||||
|
647 | # Prefetch the base contents for lineranges(). | |||
|
648 | for (baserev, fixrev, path), basepath in basepaths.items(): | |||
|
649 | toprefetch.add((baserev, basepath)) | |||
|
650 | ||||
|
651 | if toprefetch: | |||
|
652 | scmutil.prefetchfiles( | |||
|
653 | repo, | |||
|
654 | [ | |||
|
655 | (rev, scmutil.matchfiles(repo, [path])) | |||
|
656 | for rev, path in toprefetch | |||
|
657 | ], | |||
|
658 | ) | |||
|
659 | ||||
|
660 | ||||
|
661 | def fixfile(ui, repo, opts, fixers, fixctx, path, basepaths, basectxs): | |||
614 | """Run any configured fixers that should affect the file in this context |
|
662 | """Run any configured fixers that should affect the file in this context | |
615 |
|
663 | |||
616 | Returns the file content that results from applying the fixers in some order |
|
664 | Returns the file content that results from applying the fixers in some order | |
@@ -626,7 +674,9 b' def fixfile(ui, repo, opts, fixers, fixc' | |||||
626 | newdata = fixctx[path].data() |
|
674 | newdata = fixctx[path].data() | |
627 | for fixername, fixer in pycompat.iteritems(fixers): |
|
675 | for fixername, fixer in pycompat.iteritems(fixers): | |
628 | if fixer.affects(opts, fixctx, path): |
|
676 | if fixer.affects(opts, fixctx, path): | |
629 | ranges = lineranges(opts, path, basectxs, fixctx, newdata) |
|
677 | ranges = lineranges( | |
|
678 | opts, path, basepaths, basectxs, fixctx, newdata | |||
|
679 | ) | |||
630 | command = fixer.command(ui, path, ranges) |
|
680 | command = fixer.command(ui, path, ranges) | |
631 | if command is None: |
|
681 | if command is None: | |
632 | continue |
|
682 | continue |
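
The new getbasepaths() and _prefetchfiles() helpers above work together: rename-aware base paths are computed once per (baserev, fixrev, path) triple and then reused both for prefetching file contents and for lineranges(). A rough standalone sketch of that grouping follows; plan_prefetch() and the sample data are illustrative only and stand in for scmutil.prefetchfiles() and a real work queue::

  # Collect every (revision, path) pair whose contents will be needed,
  # mirroring the grouping done by _prefetchfiles() in the hunk above.
  def plan_prefetch(workqueue, basepaths, wdirrev=None):
      toprefetch = set()
      # The files being fixed (the working directory needs no prefetch).
      for rev, path in workqueue:
          if rev != wdirrev:
              toprefetch.add((rev, path))
      # The base contents that lineranges() diffs against, following renames.
      for (baserev, fixrev, path), basepath in basepaths.items():
          toprefetch.add((baserev, basepath))
      return sorted(toprefetch)

  # Two files fixed in revision 5; one was renamed since base revision 3.
  workqueue = [(5, b'src/a.py'), (5, b'src/b.py')]
  basepaths = {(3, 5, b'src/a.py'): b'src/old_a.py',
               (3, 5, b'src/b.py'): b'src/b.py'}
  print(plan_prefetch(workqueue, basepaths))
  # [(3, b'src/b.py'), (3, b'src/old_a.py'), (5, b'src/a.py'), (5, b'src/b.py')]
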
@@ -16,6 +16,7 b' from mercurial import (' | |||||
16 | extensions, |
|
16 | extensions, | |
17 | localrepo, |
|
17 | localrepo, | |
18 | pycompat, |
|
18 | pycompat, | |
|
19 | registrar, | |||
19 | scmutil, |
|
20 | scmutil, | |
20 | store, |
|
21 | store, | |
21 | util, |
|
22 | util, | |
@@ -28,6 +29,13 b' from . import (' | |||||
28 | index, |
|
29 | index, | |
29 | ) |
|
30 | ) | |
30 |
|
31 | |||
|
32 | configtable = {} | |||
|
33 | configitem = registrar.configitem(configtable) | |||
|
34 | # git.log-index-cache-miss: internal knob for testing | |||
|
35 | configitem( | |||
|
36 | b"git", b"log-index-cache-miss", default=False, | |||
|
37 | ) | |||
|
38 | ||||
31 | # TODO: extract an interface for this in core |
|
39 | # TODO: extract an interface for this in core | |
32 | class gitstore(object): # store.basicstore): |
|
40 | class gitstore(object): # store.basicstore): | |
33 | def __init__(self, path, vfstype): |
|
41 | def __init__(self, path, vfstype): | |
@@ -41,13 +49,14 b' class gitstore(object): # store.basicst' | |||||
41 | os.path.normpath(os.path.join(path, b'..', b'.git')) |
|
49 | os.path.normpath(os.path.join(path, b'..', b'.git')) | |
42 | ) |
|
50 | ) | |
43 | self._progress_factory = lambda *args, **kwargs: None |
|
51 | self._progress_factory = lambda *args, **kwargs: None | |
|
52 | self._logfn = lambda x: None | |||
44 |
|
53 | |||
45 | @util.propertycache |
|
54 | @util.propertycache | |
46 | def _db(self): |
|
55 | def _db(self): | |
47 | # We lazy-create the database because we want to thread a |
|
56 | # We lazy-create the database because we want to thread a | |
48 | # progress callback down to the indexing process if it's |
|
57 | # progress callback down to the indexing process if it's | |
49 | # required, and we don't have a ui handle in makestore(). |
|
58 | # required, and we don't have a ui handle in makestore(). | |
50 | return index.get_index(self.git, self._progress_factory) |
|
59 | return index.get_index(self.git, self._logfn, self._progress_factory) | |
51 |
|
60 | |||
52 | def join(self, f): |
|
61 | def join(self, f): | |
53 | """Fake store.join method for git repositories. |
|
62 | """Fake store.join method for git repositories. | |
@@ -276,6 +285,8 b' def reposetup(ui, repo):' | |||||
276 | if repo.local() and isinstance(repo.store, gitstore): |
|
285 | if repo.local() and isinstance(repo.store, gitstore): | |
277 | orig = repo.__class__ |
|
286 | orig = repo.__class__ | |
278 | repo.store._progress_factory = repo.ui.makeprogress |
|
287 | repo.store._progress_factory = repo.ui.makeprogress | |
|
288 | if ui.configbool(b'git', b'log-index-cache-miss'): | |||
|
289 | repo.store._logfn = repo.ui.warn | |||
279 |
|
290 | |||
280 | class gitlocalrepo(orig): |
|
291 | class gitlocalrepo(orig): | |
281 | def _makedirstate(self): |
|
292 | def _makedirstate(self): |
@@ -288,6 +288,10 b' class gitdirstate(object):' | |||||
288 | # TODO: track copies? |
|
288 | # TODO: track copies? | |
289 | return None |
|
289 | return None | |
290 |
|
290 | |||
|
291 | def prefetch_parents(self): | |||
|
292 | # TODO | |||
|
293 | pass | |||
|
294 | ||||
291 | @contextlib.contextmanager |
|
295 | @contextlib.contextmanager | |
292 | def parentchange(self): |
|
296 | def parentchange(self): | |
293 | # TODO: track this maybe? |
|
297 | # TODO: track this maybe? |
@@ -247,6 +247,60 b' class changelog(baselog):' | |||||
247 | def descendants(self, revs): |
|
247 | def descendants(self, revs): | |
248 | return dagop.descendantrevs(revs, self.revs, self.parentrevs) |
|
248 | return dagop.descendantrevs(revs, self.revs, self.parentrevs) | |
249 |
|
249 | |||
|
250 | def incrementalmissingrevs(self, common=None): | |||
|
251 | """Return an object that can be used to incrementally compute the | |||
|
252 | revision numbers of the ancestors of arbitrary sets that are not | |||
|
253 | ancestors of common. This is an ancestor.incrementalmissingancestors | |||
|
254 | object. | |||
|
255 | ||||
|
256 | 'common' is a list of revision numbers. If common is not supplied, uses | |||
|
257 | nullrev. | |||
|
258 | """ | |||
|
259 | if common is None: | |||
|
260 | common = [nodemod.nullrev] | |||
|
261 | ||||
|
262 | return ancestor.incrementalmissingancestors(self.parentrevs, common) | |||
|
263 | ||||
|
264 | def findmissing(self, common=None, heads=None): | |||
|
265 | """Return the ancestors of heads that are not ancestors of common. | |||
|
266 | ||||
|
267 | More specifically, return a list of nodes N such that every N | |||
|
268 | satisfies the following constraints: | |||
|
269 | ||||
|
270 | 1. N is an ancestor of some node in 'heads' | |||
|
271 | 2. N is not an ancestor of any node in 'common' | |||
|
272 | ||||
|
273 | The list is sorted by revision number, meaning it is | |||
|
274 | topologically sorted. | |||
|
275 | ||||
|
276 | 'heads' and 'common' are both lists of node IDs. If heads is | |||
|
277 | not supplied, uses all of the revlog's heads. If common is not | |||
|
278 | supplied, uses nullid.""" | |||
|
279 | if common is None: | |||
|
280 | common = [nodemod.nullid] | |||
|
281 | if heads is None: | |||
|
282 | heads = self.heads() | |||
|
283 | ||||
|
284 | common = [self.rev(n) for n in common] | |||
|
285 | heads = [self.rev(n) for n in heads] | |||
|
286 | ||||
|
287 | inc = self.incrementalmissingrevs(common=common) | |||
|
288 | return [self.node(r) for r in inc.missingancestors(heads)] | |||
|
289 | ||||
|
290 | def children(self, node): | |||
|
291 | """find the children of a given node""" | |||
|
292 | c = [] | |||
|
293 | p = self.rev(node) | |||
|
294 | for r in self.revs(start=p + 1): | |||
|
295 | prevs = [pr for pr in self.parentrevs(r) if pr != nodemod.nullrev] | |||
|
296 | if prevs: | |||
|
297 | for pr in prevs: | |||
|
298 | if pr == p: | |||
|
299 | c.append(self.node(r)) | |||
|
300 | elif p == nodemod.nullrev: | |||
|
301 | c.append(self.node(r)) | |||
|
302 | return c | |||
|
303 | ||||
250 | def reachableroots(self, minroot, heads, roots, includepath=False): |
|
304 | def reachableroots(self, minroot, heads, roots, includepath=False): | |
251 | return dagop._reachablerootspure( |
|
305 | return dagop._reachablerootspure( | |
252 | self.parentrevs, minroot, roots, heads, includepath |
|
306 | self.parentrevs, minroot, roots, heads, includepath | |
@@ -270,7 +324,10 b' class changelog(baselog):' | |||||
270 | def parentrevs(self, rev): |
|
324 | def parentrevs(self, rev): | |
271 | n = self.node(rev) |
|
325 | n = self.node(rev) | |
272 | hn = gitutil.togitnode(n) |
|
326 | hn = gitutil.togitnode(n) | |
273 | c = self.gitrepo[hn] |
|
327 | if hn != gitutil.nullgit: | |
|
328 | c = self.gitrepo[hn] | |||
|
329 | else: | |||
|
330 | return nodemod.nullrev, nodemod.nullrev | |||
274 | p1 = p2 = nodemod.nullrev |
|
331 | p1 = p2 = nodemod.nullrev | |
275 | if c.parents: |
|
332 | if c.parents: | |
276 | p1 = self.rev(c.parents[0].id.raw) |
|
333 | p1 = self.rev(c.parents[0].id.raw) | |
@@ -342,7 +399,7 b' class changelog(baselog):' | |||||
342 | 'refs/hg/internal/latest-commit', oid, force=True |
|
399 | 'refs/hg/internal/latest-commit', oid, force=True | |
343 | ) |
|
400 | ) | |
344 | # Reindex now to pick up changes. We omit the progress |
|
401 | # Reindex now to pick up changes. We omit the progress | |
345 | # callback because this will be very quick. |
|
402 | # and log callbacks because this will be very quick. | |
346 | index._index_repo(self.gitrepo, self._db) |
|
403 | index._index_repo(self.gitrepo, self._db) | |
347 | return oid.raw |
|
404 | return oid.raw | |
348 |
|
405 |
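
The findmissing() and children() methods added above rely only on parentrevs() as the graph primitive. The same "ancestors of heads that are not ancestors of common" idea is shown below on a tiny hand-written DAG, using a plain stack walk; the helper is a simplified stand-in for ancestor.incrementalmissingancestors, not the real implementation::

  # DAG: 0 <- 1 <- 2 <- 4 on one branch, 0 <- 3 on another.
  PARENTS = {0: [-1], 1: [0], 2: [1], 3: [0], 4: [2]}

  def ancestors(revs):
      # A revision counts as its own ancestor here, matching findmissing().
      seen, stack = set(), list(revs)
      while stack:
          r = stack.pop()
          if r != -1 and r not in seen:
              seen.add(r)
              stack.extend(PARENTS[r])
      return seen

  def findmissing(common, heads):
      # Ancestors of heads that are not ancestors of common, in revision order.
      return sorted(ancestors(heads) - ancestors(common))

  print(findmissing(common=[3], heads=[4]))   # [1, 2, 4]
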
@@ -216,7 +216,12 b' def fill_in_filelog(gitrepo, db, startco' | |||||
216 | db.commit() |
|
216 | db.commit() | |
217 |
|
217 | |||
218 |
|
218 | |||
219 | def _index_repo(gitrepo, db, progress_factory=lambda *args, **kwargs: None): |
|
219 | def _index_repo( | |
|
220 | gitrepo, | |||
|
221 | db, | |||
|
222 | logfn=lambda x: None, | |||
|
223 | progress_factory=lambda *args, **kwargs: None, | |||
|
224 | ): | |||
220 | # Identify all references so we can tell the walker to visit all of them. |
|
225 | # Identify all references so we can tell the walker to visit all of them. | |
221 | all_refs = gitrepo.listall_references() |
|
226 | all_refs = gitrepo.listall_references() | |
222 | possible_heads = set() |
|
227 | possible_heads = set() | |
@@ -245,11 +250,15 b' def _index_repo(gitrepo, db, progress_fa' | |||||
245 | # TODO: we should figure out how to incrementally index history |
|
250 | # TODO: we should figure out how to incrementally index history | |
246 | # (preferably by detecting rewinds!) so that we don't have to do a |
|
251 | # (preferably by detecting rewinds!) so that we don't have to do a | |
247 | # full changelog walk every time a new commit is created. |
|
252 | # full changelog walk every time a new commit is created. | |
248 | cache_heads = {x[0] for x in db.execute('SELECT node FROM possible_heads')} |
|
253 | cache_heads = { | |
|
254 | pycompat.sysstr(x[0]) | |||
|
255 | for x in db.execute('SELECT node FROM possible_heads') | |||
|
256 | } | |||
249 | walker = None |
|
257 | walker = None | |
250 | cur_cache_heads = {h.hex for h in possible_heads} |
|
258 | cur_cache_heads = {h.hex for h in possible_heads} | |
251 | if cur_cache_heads == cache_heads: |
|
259 | if cur_cache_heads == cache_heads: | |
252 | return |
|
260 | return | |
|
261 | logfn(b'heads mismatch, rebuilding dagcache\n') | |||
253 | for start in possible_heads: |
|
262 | for start in possible_heads: | |
254 | if walker is None: |
|
263 | if walker is None: | |
255 | walker = gitrepo.walk(start, _OUR_ORDER) |
|
264 | walker = gitrepo.walk(start, _OUR_ORDER) | |
@@ -336,7 +345,9 b' def _index_repo(gitrepo, db, progress_fa' | |||||
336 | prog.complete() |
|
345 | prog.complete() | |
337 |
|
346 | |||
338 |
|
347 | |||
339 | def get_index(gitrepo, progress_factory=lambda *args, **kwargs: None): |
|
348 | def get_index( | |
|
349 | gitrepo, logfn=lambda x: None, progress_factory=lambda *args, **kwargs: None | |||
|
350 | ): | |||
340 | cachepath = os.path.join( |
|
351 | cachepath = os.path.join( | |
341 | pycompat.fsencode(gitrepo.path), b'..', b'.hg', b'cache' |
|
352 | pycompat.fsencode(gitrepo.path), b'..', b'.hg', b'cache' | |
342 | ) |
|
353 | ) | |
@@ -346,5 +357,5 b' def get_index(gitrepo, progress_factory=' | |||||
346 | db = _createdb(dbpath) |
|
357 | db = _createdb(dbpath) | |
347 | # TODO check against gitrepo heads before doing a full index |
|
358 | # TODO check against gitrepo heads before doing a full index | |
348 | # TODO thread a ui.progress call into this layer |
|
359 | # TODO thread a ui.progress call into this layer | |
349 | _index_repo(gitrepo, db, progress_factory) |
|
360 | _index_repo(gitrepo, db, logfn, progress_factory) | |
350 | return db |
|
361 | return db |
@@ -56,8 +56,9 b' class gittreemanifest(object):' | |||||
56 | return val |
|
56 | return val | |
57 | t = self._tree |
|
57 | t = self._tree | |
58 | comps = upath.split('/') |
|
58 | comps = upath.split('/') | |
|
59 | te = self._tree | |||
59 | for comp in comps[:-1]: |
|
60 | for comp in comps[:-1]: | |
60 |
te = |
|
61 | te = te[comp] | |
61 | t = self._git_repo[te.id] |
|
62 | t = self._git_repo[te.id] | |
62 | ent = t[comps[-1]] |
|
63 | ent = t[comps[-1]] | |
63 | if ent.filemode == pygit2.GIT_FILEMODE_BLOB: |
|
64 | if ent.filemode == pygit2.GIT_FILEMODE_BLOB: | |
@@ -125,9 +126,79 b' class gittreemanifest(object):' | |||||
125 | def hasdir(self, dir): |
|
126 | def hasdir(self, dir): | |
126 | return dir in self._dirs |
|
127 | return dir in self._dirs | |
127 |
|
128 | |||
128 | def diff(self, other, match=None, clean=False): |
|
129 | def diff(self, other, match=lambda x: True, clean=False): | |
129 | # TODO |
|
130 | '''Finds changes between the current manifest and m2. | |
130 | assert False |
|
131 | ||
|
132 | The result is returned as a dict with filename as key and | |||
|
133 | values of the form ((n1,fl1),(n2,fl2)), where n1/n2 is the | |||
|
134 | nodeid in the current/other manifest and fl1/fl2 is the flag | |||
|
135 | in the current/other manifest. Where the file does not exist, | |||
|
136 | the nodeid will be None and the flags will be the empty | |||
|
137 | string. | |||
|
138 | ''' | |||
|
139 | result = {} | |||
|
140 | ||||
|
141 | def _iterativediff(t1, t2, subdir): | |||
|
142 | """compares two trees and appends new tree nodes to examine to | |||
|
143 | the stack""" | |||
|
144 | if t1 is None: | |||
|
145 | t1 = {} | |||
|
146 | if t2 is None: | |||
|
147 | t2 = {} | |||
|
148 | ||||
|
149 | for e1 in t1: | |||
|
150 | realname = subdir + pycompat.fsencode(e1.name) | |||
|
151 | ||||
|
152 | if e1.type == pygit2.GIT_OBJ_TREE: | |||
|
153 | try: | |||
|
154 | e2 = t2[e1.name] | |||
|
155 | if e2.type != pygit2.GIT_OBJ_TREE: | |||
|
156 | e2 = None | |||
|
157 | except KeyError: | |||
|
158 | e2 = None | |||
|
159 | ||||
|
160 | stack.append((realname + b'/', e1, e2)) | |||
|
161 | else: | |||
|
162 | n1, fl1 = self.find(realname) | |||
|
163 | ||||
|
164 | try: | |||
|
165 | e2 = t2[e1.name] | |||
|
166 | n2, fl2 = other.find(realname) | |||
|
167 | except KeyError: | |||
|
168 | e2 = None | |||
|
169 | n2, fl2 = (None, b'') | |||
|
170 | ||||
|
171 | if e2 is not None and e2.type == pygit2.GIT_OBJ_TREE: | |||
|
172 | stack.append((realname + b'/', None, e2)) | |||
|
173 | ||||
|
174 | if not match(realname): | |||
|
175 | continue | |||
|
176 | ||||
|
177 | if n1 != n2 or fl1 != fl2: | |||
|
178 | result[realname] = ((n1, fl1), (n2, fl2)) | |||
|
179 | elif clean: | |||
|
180 | result[realname] = None | |||
|
181 | ||||
|
182 | for e2 in t2: | |||
|
183 | if e2.name in t1: | |||
|
184 | continue | |||
|
185 | ||||
|
186 | realname = subdir + pycompat.fsencode(e2.name) | |||
|
187 | ||||
|
188 | if e2.type == pygit2.GIT_OBJ_TREE: | |||
|
189 | stack.append((realname + b'/', None, e2)) | |||
|
190 | elif match(realname): | |||
|
191 | n2, fl2 = other.find(realname) | |||
|
192 | result[realname] = ((None, b''), (n2, fl2)) | |||
|
193 | ||||
|
194 | stack = [] | |||
|
195 | _iterativediff(self._tree, other._tree, b'') | |||
|
196 | while stack: | |||
|
197 | subdir, t1, t2 = stack.pop() | |||
|
198 | # stack is populated in the function call | |||
|
199 | _iterativediff(t1, t2, subdir) | |||
|
200 | ||||
|
201 | return result | |||
131 |
|
202 | |||
132 | def setflag(self, path, flag): |
|
203 | def setflag(self, path, flag): | |
133 | node, unused_flag = self._resolve_entry(path) |
|
204 | node, unused_flag = self._resolve_entry(path) | |
@@ -168,14 +239,13 b' class gittreemanifest(object):' | |||||
168 | for te in tree: |
|
239 | for te in tree: | |
169 | # TODO: can we prune dir walks with the matcher? |
|
240 | # TODO: can we prune dir walks with the matcher? | |
170 | realname = subdir + pycompat.fsencode(te.name) |
|
241 | realname = subdir + pycompat.fsencode(te.name) | |
171 |
if te.type == |
|
242 | if te.type == pygit2.GIT_OBJ_TREE: | |
172 | for inner in self._walkonetree( |
|
243 | for inner in self._walkonetree( | |
173 | self._git_repo[te.id], match, realname + b'/' |
|
244 | self._git_repo[te.id], match, realname + b'/' | |
174 | ): |
|
245 | ): | |
175 | yield inner |
|
246 | yield inner | |
176 | if not match(realname): |
|
247 | elif match(realname): | |
177 | continue |
|
248 | yield pycompat.fsencode(realname) | |
178 | yield pycompat.fsencode(realname) |
|
|||
179 |
|
249 | |||
180 | def walk(self, match): |
|
250 | def walk(self, match): | |
181 | # TODO: this is a very lazy way to merge in the pending |
|
251 | # TODO: this is a very lazy way to merge in the pending | |
@@ -205,7 +275,7 b' class gittreemanifestctx(object):' | |||||
205 | return memgittreemanifestctx(self._repo, self._tree) |
|
275 | return memgittreemanifestctx(self._repo, self._tree) | |
206 |
|
276 | |||
207 | def find(self, path): |
|
277 | def find(self, path): | |
208 | self.read()[path] |
|
278 | return self.read()[path] | |
209 |
|
279 | |||
210 |
|
280 | |||
211 | @interfaceutil.implementer(repository.imanifestrevisionwritable) |
|
281 | @interfaceutil.implementer(repository.imanifestrevisionwritable) |
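
The diff() implementation added above avoids recursion by keeping an explicit stack of (subdir, tree1, tree2) work items. The same pattern is sketched below on plain nested dicts instead of pygit2 trees; tree_diff() and the sample trees are illustrative only (a file replaced by a directory is not reported in this simplified version)::

  def tree_diff(t1, t2):
      # Leaves map name -> content; sub-dicts are directories.
      result = {}
      stack = [(b'', t1, t2)]
      while stack:
          prefix, a, b = stack.pop()
          a, b = a or {}, b or {}
          for name in sorted(set(a) | set(b)):
              va, vb, path = a.get(name), b.get(name), prefix + name
              if isinstance(va, dict) or isinstance(vb, dict):
                  # Queue the directory pair for a later iteration.
                  stack.append((path + b'/',
                                va if isinstance(va, dict) else None,
                                vb if isinstance(vb, dict) else None))
              elif va != vb:
                  result[path] = (va, vb)
      return result

  old = {b'a.txt': b'1', b'dir': {b'b.txt': b'2'}}
  new = {b'a.txt': b'1', b'dir': {b'b.txt': b'3', b'c.txt': b'4'}}
  print(tree_diff(old, new))
  # {b'dir/b.txt': (b'2', b'3'), b'dir/c.txt': (None, b'4')}
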
@@ -628,8 +628,17 b' def log(ui, repo, *args, **kwargs):' | |||||
628 | (b'', b'stat', None, b''), |
|
628 | (b'', b'stat', None, b''), | |
629 | (b'', b'graph', None, b''), |
|
629 | (b'', b'graph', None, b''), | |
630 | (b'p', b'patch', None, b''), |
|
630 | (b'p', b'patch', None, b''), | |
|
631 | (b'G', b'grep-diff', b'', b''), | |||
|
632 | (b'S', b'pickaxe-regex', b'', b''), | |||
631 | ] |
|
633 | ] | |
632 | args, opts = parseoptions(ui, cmdoptions, args) |
|
634 | args, opts = parseoptions(ui, cmdoptions, args) | |
|
635 | grep_pat = opts.get(b'grep_diff') or opts.get(b'pickaxe_regex') | |||
|
636 | if grep_pat: | |||
|
637 | cmd = Command(b'grep') | |||
|
638 | cmd[b'--diff'] = grep_pat | |||
|
639 | ui.status(b'%s\n' % bytes(cmd)) | |||
|
640 | return | |||
|
641 | ||||
633 | ui.status( |
|
642 | ui.status( | |
634 | _( |
|
643 | _( | |
635 | b'note: -v prints the entire commit message like Git does. To ' |
|
644 | b'note: -v prints the entire commit message like Git does. To ' |
@@ -223,6 +223,7 b' from mercurial import (' | |||||
223 | hg, |
|
223 | hg, | |
224 | logcmdutil, |
|
224 | logcmdutil, | |
225 | merge as mergemod, |
|
225 | merge as mergemod, | |
|
226 | mergestate as mergestatemod, | |||
226 | mergeutil, |
|
227 | mergeutil, | |
227 | node, |
|
228 | node, | |
228 | obsolete, |
|
229 | obsolete, | |
@@ -2285,7 +2286,7 b' def _getsummary(ctx):' | |||||
2285 | def bootstrapcontinue(ui, state, opts): |
|
2286 | def bootstrapcontinue(ui, state, opts): | |
2286 | repo = state.repo |
|
2287 | repo = state.repo | |
2287 |
|
2288 | |||
2288 | ms = mergemod.mergestate.read(repo) |
|
2289 | ms = mergestatemod.mergestate.read(repo) | |
2289 | mergeutil.checkunresolved(ms) |
|
2290 | mergeutil.checkunresolved(ms) | |
2290 |
|
2291 | |||
2291 | if state.actions: |
|
2292 | if state.actions: |
@@ -122,10 +122,18 b' def _report_commit(ui, repo, ctx):' | |||||
122 | ) |
|
122 | ) | |
123 |
|
123 | |||
124 |
|
124 | |||
|
125 | def has_successor(repo, rev): | |||
|
126 | return any( | |||
|
127 | r for r in obsutil.allsuccessors(repo.obsstore, [rev]) if r != rev | |||
|
128 | ) | |||
|
129 | ||||
|
130 | ||||
125 | def hook(ui, repo, hooktype, node=None, **kwargs): |
|
131 | def hook(ui, repo, hooktype, node=None, **kwargs): | |
126 | if hooktype != b"pretxnclose": |
|
132 | if hooktype != b"txnclose": | |
127 | raise error.Abort( |
|
133 | raise error.Abort( | |
128 | _(b'Unsupported hook type %r') % pycompat.bytestr(hooktype) |
|
134 | _(b'Unsupported hook type %r') % pycompat.bytestr(hooktype) | |
129 | ) |
|
135 | ) | |
130 |
for rev in obsutil.getobsoleted(repo, |
|
136 | for rev in obsutil.getobsoleted(repo, changes=kwargs['changes']): | |
131 | _report_commit(ui, repo, repo.unfiltered()[rev]) |
|
137 | ctx = repo.unfiltered()[rev] | |
|
138 | if not has_successor(repo, ctx.node()): | |||
|
139 | _report_commit(ui, repo, ctx) |
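
has_successor() above distinguishes changesets that were rewritten (a successor exists elsewhere) from changesets that were simply pruned, and only the latter are reported. A toy version of that check; the successors table is illustrative and stands in for obsutil.allsuccessors() over a real obsstore::

  # node -> known successors; the node itself is listed too, which is why the
  # hunk above filters with 'if r != rev'.
  SUCCESSORS = {b'a': [b'a', b'b'], b'c': [b'c']}

  def has_successor(node):
      return any(s for s in SUCCESSORS.get(node, [node]) if s != node)

  print(has_successor(b'a'))  # True  -> rewritten, not reported as dropped
  print(has_successor(b'c'))  # False -> pruned, the hook reports it
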
@@ -466,7 +466,7 b' def _rebundle(bundlerepo, bundleroots, u' | |||||
466 |
|
466 | |||
467 | version = b'02' |
|
467 | version = b'02' | |
468 | outgoing = discovery.outgoing( |
|
468 | outgoing = discovery.outgoing( | |
469 | bundlerepo, commonheads=bundleroots, missingheads=[unknownhead] |
|
469 | bundlerepo, commonheads=bundleroots, ancestorsof=[unknownhead] | |
470 | ) |
|
470 | ) | |
471 | cgstream = changegroup.makestream(bundlerepo, outgoing, version, b'pull') |
|
471 | cgstream = changegroup.makestream(bundlerepo, outgoing, version, b'pull') | |
472 | cgstream = util.chunkbuffer(cgstream).read() |
|
472 | cgstream = util.chunkbuffer(cgstream).read() |
@@ -163,7 +163,7 b' def lfconvert(ui, src, dest, *pats, **op' | |||||
163 | # to the destination repository's requirements. |
|
163 | # to the destination repository's requirements. | |
164 | if lfiles: |
|
164 | if lfiles: | |
165 | rdst.requirements.add(b'largefiles') |
|
165 | rdst.requirements.add(b'largefiles') | |
166 | rdst._writerequirements() |
|
166 | scmutil.writereporequirements(rdst) | |
167 | else: |
|
167 | else: | |
168 |
|
168 | |||
169 | class lfsource(filemap.filemap_source): |
|
169 | class lfsource(filemap.filemap_source): |
@@ -31,6 +31,7 b' from mercurial import (' | |||||
31 | logcmdutil, |
|
31 | logcmdutil, | |
32 | match as matchmod, |
|
32 | match as matchmod, | |
33 | merge, |
|
33 | merge, | |
|
34 | mergestate as mergestatemod, | |||
34 | pathutil, |
|
35 | pathutil, | |
35 | pycompat, |
|
36 | pycompat, | |
36 | scmutil, |
|
37 | scmutil, | |
@@ -622,7 +623,7 b' def overridecalculateupdates(' | |||||
622 | return actions, diverge, renamedelete |
|
623 | return actions, diverge, renamedelete | |
623 |
|
624 | |||
624 |
|
625 | |||
625 | @eh.wrapfunction(merge, b'recordupdates') |
|
626 | @eh.wrapfunction(mergestatemod, b'recordupdates') | |
626 | def mergerecordupdates(orig, repo, actions, branchmerge, getfiledata): |
|
627 | def mergerecordupdates(orig, repo, actions, branchmerge, getfiledata): | |
627 | if b'lfmr' in actions: |
|
628 | if b'lfmr' in actions: | |
628 | lfdirstate = lfutil.openlfdirstate(repo.ui, repo) |
|
629 | lfdirstate = lfutil.openlfdirstate(repo.ui, repo) |
@@ -448,7 +448,7 b' def reposetup(ui, repo):' | |||||
448 | lfutil.shortname + b'/' in f[0] for f in repo.store.datafiles() |
|
448 | lfutil.shortname + b'/' in f[0] for f in repo.store.datafiles() | |
449 | ): |
|
449 | ): | |
450 | repo.requirements.add(b'largefiles') |
|
450 | repo.requirements.add(b'largefiles') | |
451 | repo._writerequirements() |
|
451 | scmutil.writereporequirements(repo) | |
452 |
|
452 | |||
453 | ui.setconfig( |
|
453 | ui.setconfig( | |
454 | b'hooks', b'changegroup.lfiles', checkrequireslfiles, b'largefiles' |
|
454 | b'hooks', b'changegroup.lfiles', checkrequireslfiles, b'largefiles' |
@@ -255,7 +255,7 b' def _reposetup(ui, repo):' | |||||
255 | ): |
|
255 | ): | |
256 | repo.requirements.add(b'lfs') |
|
256 | repo.requirements.add(b'lfs') | |
257 | repo.features.add(repository.REPO_FEATURE_LFS) |
|
257 | repo.features.add(repository.REPO_FEATURE_LFS) | |
258 | repo._writerequirements() |
|
258 | scmutil.writereporequirements(repo) | |
259 | repo.prepushoutgoinghooks.add(b'lfs', wrapper.prepush) |
|
259 | repo.prepushoutgoinghooks.add(b'lfs', wrapper.prepush) | |
260 | break |
|
260 | break | |
261 |
|
261 |
@@ -312,7 +312,7 b' def convertsink(orig, sink):' | |||||
312 | # membership before assuming it is in the context. |
|
312 | # membership before assuming it is in the context. | |
313 | if any(f in ctx and ctx[f].islfs() for f, n in files): |
|
313 | if any(f in ctx and ctx[f].islfs() for f, n in files): | |
314 | self.repo.requirements.add(b'lfs') |
|
314 | self.repo.requirements.add(b'lfs') | |
315 | self.repo._writerequirements() |
|
315 | scmutil.writereporequirements(self.repo) | |
316 |
|
316 | |||
317 | return node |
|
317 | return node | |
318 |
|
318 | |||
@@ -337,7 +337,7 b' def vfsinit(orig, self, othervfs):' | |||||
337 | setattr(self, name, getattr(othervfs, name)) |
|
337 | setattr(self, name, getattr(othervfs, name)) | |
338 |
|
338 | |||
339 |
|
339 | |||
340 | def _prefetchfiles(repo, revs, match): |
|
340 | def _prefetchfiles(repo, revmatches): | |
341 | """Ensure that required LFS blobs are present, fetching them as a group if |
|
341 | """Ensure that required LFS blobs are present, fetching them as a group if | |
342 | needed.""" |
|
342 | needed.""" | |
343 | if not util.safehasattr(repo.svfs, b'lfslocalblobstore'): |
|
343 | if not util.safehasattr(repo.svfs, b'lfslocalblobstore'): | |
@@ -347,7 +347,7 b' def _prefetchfiles(repo, revs, match):' | |||||
347 | oids = set() |
|
347 | oids = set() | |
348 | localstore = repo.svfs.lfslocalblobstore |
|
348 | localstore = repo.svfs.lfslocalblobstore | |
349 |
|
349 | |||
350 | for rev in revs: |
|
350 | for rev, match in revmatches: | |
351 | ctx = repo[rev] |
|
351 | ctx = repo[rev] | |
352 | for f in ctx.walk(match): |
|
352 | for f in ctx.walk(match): | |
353 | p = pointerfromctx(ctx, f) |
|
353 | p = pointerfromctx(ctx, f) |
@@ -836,7 +836,15 b' class queue(object):' | |||||
836 | stat = opts.get(b'stat') |
|
836 | stat = opts.get(b'stat') | |
837 | m = scmutil.match(repo[node1], files, opts) |
|
837 | m = scmutil.match(repo[node1], files, opts) | |
838 | logcmdutil.diffordiffstat( |
|
838 | logcmdutil.diffordiffstat( | |
839 | self.ui, repo, diffopts, node1, node2, m, changes, stat, fp |
|
839 | self.ui, | |
|
840 | repo, | |||
|
841 | diffopts, | |||
|
842 | repo[node1], | |||
|
843 | repo[node2], | |||
|
844 | m, | |||
|
845 | changes, | |||
|
846 | stat, | |||
|
847 | fp, | |||
840 | ) |
|
848 | ) | |
841 |
|
849 | |||
842 | def mergeone(self, repo, mergeq, head, patch, rev, diffopts): |
|
850 | def mergeone(self, repo, mergeq, head, patch, rev, diffopts): |
@@ -20,6 +20,7 b' from mercurial import (' | |||||
20 | localrepo, |
|
20 | localrepo, | |
21 | narrowspec, |
|
21 | narrowspec, | |
22 | repair, |
|
22 | repair, | |
|
23 | scmutil, | |||
23 | util, |
|
24 | util, | |
24 | wireprototypes, |
|
25 | wireprototypes, | |
25 | ) |
|
26 | ) | |
@@ -179,7 +180,7 b' def _handlechangespec_2(op, inpart):' | |||||
179 |
|
180 | |||
180 | if not repository.NARROW_REQUIREMENT in op.repo.requirements: |
|
181 | if not repository.NARROW_REQUIREMENT in op.repo.requirements: | |
181 | op.repo.requirements.add(repository.NARROW_REQUIREMENT) |
|
182 | op.repo.requirements.add(repository.NARROW_REQUIREMENT) | |
182 | op.repo._writerequirements() |
|
183 | scmutil.writereporequirements(op.repo) | |
183 | op.repo.setnarrowpats(includepats, excludepats) |
|
184 | op.repo.setnarrowpats(includepats, excludepats) | |
184 | narrowspec.copytoworkingcopy(op.repo) |
|
185 | narrowspec.copytoworkingcopy(op.repo) | |
185 |
|
186 | |||
@@ -195,7 +196,7 b' def _handlenarrowspecs(op, inpart):' | |||||
195 |
|
196 | |||
196 | if repository.NARROW_REQUIREMENT not in op.repo.requirements: |
|
197 | if repository.NARROW_REQUIREMENT not in op.repo.requirements: | |
197 | op.repo.requirements.add(repository.NARROW_REQUIREMENT) |
|
198 | op.repo.requirements.add(repository.NARROW_REQUIREMENT) | |
198 | op.repo._writerequirements() |
|
199 | scmutil.writereporequirements(op.repo) | |
199 | op.repo.setnarrowpats(includepats, excludepats) |
|
200 | op.repo.setnarrowpats(includepats, excludepats) | |
200 | narrowspec.copytoworkingcopy(op.repo) |
|
201 | narrowspec.copytoworkingcopy(op.repo) | |
201 |
|
202 |
@@ -238,8 +238,8 b' def vcrcommand(name, flags, spec, helpca' | |||||
238 |
|
238 | |||
239 | def decorate(fn): |
|
239 | def decorate(fn): | |
240 | def inner(*args, **kwargs): |
|
240 | def inner(*args, **kwargs): | |
241 | cassette = pycompat.fsdecode(kwargs.pop('test_vcr', None)) |
|
241 | if kwargs.get('test_vcr'): | |
242 | if cassette: |
|
242 | cassette = pycompat.fsdecode(kwargs.pop('test_vcr')) | |
243 | import hgdemandimport |
|
243 | import hgdemandimport | |
244 |
|
244 | |||
245 | with hgdemandimport.deactivated(): |
|
245 | with hgdemandimport.deactivated(): | |
@@ -1311,8 +1311,8 b' def phabsend(ui, repo, *revs, **opts):' | |||||
1311 | # --fold option implies this, and the auto restacking of orphans requires |
|
1311 | # --fold option implies this, and the auto restacking of orphans requires | |
1312 | # it. Otherwise A+C in A->B->C will cause B to be orphaned, and C' to |
|
1312 | # it. Otherwise A+C in A->B->C will cause B to be orphaned, and C' to | |
1313 | # get A' as a parent. |
|
1313 | # get A' as a parent. | |
1314 |
def _fail_nonlinear_revs(revs, |
|
1314 | def _fail_nonlinear_revs(revs, revtype): | |
1315 |
badnodes = [repo[r].node() for r in revs |
|
1315 | badnodes = [repo[r].node() for r in revs] | |
1316 | raise error.Abort( |
|
1316 | raise error.Abort( | |
1317 | _(b"cannot phabsend multiple %s revisions: %s") |
|
1317 | _(b"cannot phabsend multiple %s revisions: %s") | |
1318 | % (revtype, scmutil.nodesummaries(repo, badnodes)), |
|
1318 | % (revtype, scmutil.nodesummaries(repo, badnodes)), | |
@@ -1321,11 +1321,11 b' def phabsend(ui, repo, *revs, **opts):' | |||||
1321 |
|
1321 | |||
1322 | heads = repo.revs(b'heads(%ld)', revs) |
|
1322 | heads = repo.revs(b'heads(%ld)', revs) | |
1323 | if len(heads) > 1: |
|
1323 | if len(heads) > 1: | |
1324 |
_fail_nonlinear_revs(heads |
|
1324 | _fail_nonlinear_revs(heads, b"head") | |
1325 |
|
1325 | |||
1326 | roots = repo.revs(b'roots(%ld)', revs) |
|
1326 | roots = repo.revs(b'roots(%ld)', revs) | |
1327 | if len(roots) > 1: |
|
1327 | if len(roots) > 1: | |
1328 |
_fail_nonlinear_revs(roots |
|
1328 | _fail_nonlinear_revs(roots, b"root") | |
1329 |
|
1329 | |||
1330 | fold = opts.get(b'fold') |
|
1330 | fold = opts.get(b'fold') | |
1331 | if fold: |
|
1331 | if fold: | |
@@ -1650,7 +1650,7 b' def _confirmbeforesend(repo, revs, oldma' | |||||
1650 | ) |
|
1650 | ) | |
1651 |
|
1651 | |||
1652 | if ui.promptchoice( |
|
1652 | if ui.promptchoice( | |
1653 | _(b'Send the above changes to %s (yn)?$$ &Yes $$ &No') % url |
|
1653 | _(b'Send the above changes to %s (Y/n)?$$ &Yes $$ &No') % url | |
1654 | ): |
|
1654 | ): | |
1655 | return False |
|
1655 | return False | |
1656 |
|
1656 | |||
@@ -2162,8 +2162,14 b' def phabimport(ui, repo, *specs, **opts)' | |||||
2162 | [ |
|
2162 | [ | |
2163 | (b'', b'accept', False, _(b'accept revisions')), |
|
2163 | (b'', b'accept', False, _(b'accept revisions')), | |
2164 | (b'', b'reject', False, _(b'reject revisions')), |
|
2164 | (b'', b'reject', False, _(b'reject revisions')), | |
|
2165 | (b'', b'request-review', False, _(b'request review on revisions')), | |||
2165 | (b'', b'abandon', False, _(b'abandon revisions')), |
|
2166 | (b'', b'abandon', False, _(b'abandon revisions')), | |
2166 | (b'', b'reclaim', False, _(b'reclaim revisions')), |
|
2167 | (b'', b'reclaim', False, _(b'reclaim revisions')), | |
|
2168 | (b'', b'close', False, _(b'close revisions')), | |||
|
2169 | (b'', b'reopen', False, _(b'reopen revisions')), | |||
|
2170 | (b'', b'plan-changes', False, _(b'plan changes for revisions')), | |||
|
2171 | (b'', b'resign', False, _(b'resign as a reviewer from revisions')), | |||
|
2172 | (b'', b'commandeer', False, _(b'commandeer revisions')), | |||
2167 | (b'm', b'comment', b'', _(b'comment on the last revision')), |
|
2173 | (b'm', b'comment', b'', _(b'comment on the last revision')), | |
2168 | ], |
|
2174 | ], | |
2169 | _(b'DREVSPEC... [OPTIONS]'), |
|
2175 | _(b'DREVSPEC... [OPTIONS]'), | |
@@ -2176,7 +2182,19 b' def phabupdate(ui, repo, *specs, **opts)' | |||||
2176 | DREVSPEC selects revisions. See :hg:`help phabread` for its usage. |
|
2182 | DREVSPEC selects revisions. See :hg:`help phabread` for its usage. | |
2177 | """ |
|
2183 | """ | |
2178 | opts = pycompat.byteskwargs(opts) |
|
2184 | opts = pycompat.byteskwargs(opts) | |
2179 | flags = [n for n in b'accept reject abandon reclaim'.split() if opts.get(n)] |
|
2185 | transactions = [ | |
|
2186 | b'abandon', | |||
|
2187 | b'accept', | |||
|
2188 | b'close', | |||
|
2189 | b'commandeer', | |||
|
2190 | b'plan-changes', | |||
|
2191 | b'reclaim', | |||
|
2192 | b'reject', | |||
|
2193 | b'reopen', | |||
|
2194 | b'request-review', | |||
|
2195 | b'resign', | |||
|
2196 | ] | |||
|
2197 | flags = [n for n in transactions if opts.get(n.replace(b'-', b'_'))] | |||
2180 | if len(flags) > 1: |
|
2198 | if len(flags) > 1: | |
2181 | raise error.Abort(_(b'%s cannot be used together') % b', '.join(flags)) |
|
2199 | raise error.Abort(_(b'%s cannot be used together') % b', '.join(flags)) | |
2182 |
|
2200 |
@@ -64,7 +64,7 b" testedwith = b'ships-with-hg-core'" | |||||
64 | ] |
|
64 | ] | |
65 | + cmdutil.walkopts, |
|
65 | + cmdutil.walkopts, | |
66 | _(b'hg purge [OPTION]... [DIR]...'), |
|
66 | _(b'hg purge [OPTION]... [DIR]...'), | |
67 | helpcategory=command.CATEGORY_MAINTENANCE, |
|
67 | helpcategory=command.CATEGORY_WORKING_DIRECTORY, | |
68 | ) |
|
68 | ) | |
69 | def purge(ui, repo, *dirs, **opts): |
|
69 | def purge(ui, repo, *dirs, **opts): | |
70 | '''removes files not tracked by Mercurial |
|
70 | '''removes files not tracked by Mercurial |
@@ -36,6 +36,7 b' from mercurial import (' | |||||
36 | extensions, |
|
36 | extensions, | |
37 | hg, |
|
37 | hg, | |
38 | merge as mergemod, |
|
38 | merge as mergemod, | |
|
39 | mergestate as mergestatemod, | |||
39 | mergeutil, |
|
40 | mergeutil, | |
40 | node as nodemod, |
|
41 | node as nodemod, | |
41 | obsolete, |
|
42 | obsolete, | |
@@ -205,6 +206,9 b' class rebaseruntime(object):' | |||||
205 | self.backupf = ui.configbool(b'rewrite', b'backup-bundle') |
|
206 | self.backupf = ui.configbool(b'rewrite', b'backup-bundle') | |
206 | self.keepf = opts.get(b'keep', False) |
|
207 | self.keepf = opts.get(b'keep', False) | |
207 | self.keepbranchesf = opts.get(b'keepbranches', False) |
|
208 | self.keepbranchesf = opts.get(b'keepbranches', False) | |
|
209 | self.skipemptysuccessorf = rewriteutil.skip_empty_successor( | |||
|
210 | repo.ui, b'rebase' | |||
|
211 | ) | |||
208 | self.obsoletenotrebased = {} |
|
212 | self.obsoletenotrebased = {} | |
209 | self.obsoletewithoutsuccessorindestination = set() |
|
213 | self.obsoletewithoutsuccessorindestination = set() | |
210 | self.inmemory = inmemory |
|
214 | self.inmemory = inmemory | |
@@ -528,11 +532,11 b' class rebaseruntime(object):' | |||||
528 | extra = {b'rebase_source': ctx.hex()} |
|
532 | extra = {b'rebase_source': ctx.hex()} | |
529 | for c in self.extrafns: |
|
533 | for c in self.extrafns: | |
530 | c(ctx, extra) |
|
534 | c(ctx, extra) | |
531 | keepbranch = self.keepbranchesf and repo[p1].branch() != ctx.branch() |
|
|||
532 | destphase = max(ctx.phase(), phases.draft) |
|
535 | destphase = max(ctx.phase(), phases.draft) | |
533 | overrides = {(b'phases', b'new-commit'): destphase} |
|
536 | overrides = { | |
534 | if keepbranch: |
|
537 | (b'phases', b'new-commit'): destphase, | |
535 |
|
|
538 | (b'ui', b'allowemptycommit'): not self.skipemptysuccessorf, | |
|
539 | } | |||
536 | with repo.ui.configoverride(overrides, b'rebase'): |
|
540 | with repo.ui.configoverride(overrides, b'rebase'): | |
537 | if self.inmemory: |
|
541 | if self.inmemory: | |
538 | newnode = commitmemorynode( |
|
542 | newnode = commitmemorynode( | |
@@ -544,7 +548,7 b' class rebaseruntime(object):' | |||||
544 | user=ctx.user(), |
|
548 | user=ctx.user(), | |
545 | date=date, |
|
549 | date=date, | |
546 | ) |
|
550 | ) | |
547 | mergemod.mergestate.clean(repo) |
|
551 | mergestatemod.mergestate.clean(repo) | |
548 | else: |
|
552 | else: | |
549 | newnode = commitnode( |
|
553 | newnode = commitnode( | |
550 | repo, |
|
554 | repo, | |
@@ -626,12 +630,7 b' class rebaseruntime(object):' | |||||
626 | if self.inmemory: |
|
630 | if self.inmemory: | |
627 | raise error.InMemoryMergeConflictsError() |
|
631 | raise error.InMemoryMergeConflictsError() | |
628 | else: |
|
632 | else: | |
629 | raise error.InterventionRequired( |
|
633 | raise error.ConflictResolutionRequired(b'rebase') | |
630 | _( |
|
|||
631 | b'unresolved conflicts (see hg ' |
|
|||
632 | b'resolve, then hg rebase --continue)' |
|
|||
633 | ) |
|
|||
634 | ) |
|
|||
635 | if not self.collapsef: |
|
634 | if not self.collapsef: | |
636 | merging = p2 != nullrev |
|
635 | merging = p2 != nullrev | |
637 | editform = cmdutil.mergeeditform(merging, b'rebase') |
|
636 | editform = cmdutil.mergeeditform(merging, b'rebase') | |
@@ -652,6 +651,14 b' class rebaseruntime(object):' | |||||
652 | if newnode is not None: |
|
651 | if newnode is not None: | |
653 | self.state[rev] = repo[newnode].rev() |
|
652 | self.state[rev] = repo[newnode].rev() | |
654 | ui.debug(b'rebased as %s\n' % short(newnode)) |
|
653 | ui.debug(b'rebased as %s\n' % short(newnode)) | |
|
654 | if repo[newnode].isempty(): | |||
|
655 | ui.warn( | |||
|
656 | _( | |||
|
657 | b'note: created empty successor for %s, its ' | |||
|
658 | b'destination already has all its changes\n' | |||
|
659 | ) | |||
|
660 | % desc | |||
|
661 | ) | |||
655 | else: |
|
662 | else: | |
656 | if not self.collapsef: |
|
663 | if not self.collapsef: | |
657 | ui.warn( |
|
664 | ui.warn( | |
@@ -1084,7 +1091,7 b' def rebase(ui, repo, **opts):' | |||||
1084 | ) |
|
1091 | ) | |
1085 | # TODO: Make in-memory merge not use the on-disk merge state, so |
|
1092 | # TODO: Make in-memory merge not use the on-disk merge state, so | |
1086 | # we don't have to clean it here |
|
1093 | # we don't have to clean it here | |
1087 | mergemod.mergestate.clean(repo) |
|
1094 | mergestatemod.mergestate.clean(repo) | |
1088 | clearstatus(repo) |
|
1095 | clearstatus(repo) | |
1089 | clearcollapsemsg(repo) |
|
1096 | clearcollapsemsg(repo) | |
1090 | return _dorebase(ui, repo, action, opts, inmemory=False) |
|
1097 | return _dorebase(ui, repo, action, opts, inmemory=False) | |
@@ -1191,7 +1198,7 b' def _origrebase(' | |||||
1191 | if action == b'abort' and opts.get(b'tool', False): |
|
1198 | if action == b'abort' and opts.get(b'tool', False): | |
1192 | ui.warn(_(b'tool option will be ignored\n')) |
|
1199 | ui.warn(_(b'tool option will be ignored\n')) | |
1193 | if action == b'continue': |
|
1200 | if action == b'continue': | |
1194 | ms = mergemod.mergestate.read(repo) |
|
1201 | ms = mergestatemod.mergestate.read(repo) | |
1195 | mergeutil.checkunresolved(ms) |
|
1202 | mergeutil.checkunresolved(ms) | |
1196 |
|
1203 | |||
1197 | retcode = rbsrt._prepareabortorcontinue( |
|
1204 | retcode = rbsrt._prepareabortorcontinue( | |
@@ -1429,10 +1436,6 b' def externalparent(repo, state, destance' | |||||
1429 | def commitmemorynode(repo, wctx, editor, extra, user, date, commitmsg): |
|
1436 | def commitmemorynode(repo, wctx, editor, extra, user, date, commitmsg): | |
1430 | '''Commit the memory changes with parents p1 and p2. |
|
1437 | '''Commit the memory changes with parents p1 and p2. | |
1431 | Return node of committed revision.''' |
|
1438 | Return node of committed revision.''' | |
1432 | # Replicates the empty check in ``repo.commit``. |
|
|||
1433 | if wctx.isempty() and not repo.ui.configbool(b'ui', b'allowemptycommit'): |
|
|||
1434 | return None |
|
|||
1435 |
|
||||
1436 | # By convention, ``extra['branch']`` (set by extrafn) clobbers |
|
1439 | # By convention, ``extra['branch']`` (set by extrafn) clobbers | |
1437 | # ``branch`` (used when passing ``--keepbranches``). |
|
1440 | # ``branch`` (used when passing ``--keepbranches``). | |
1438 | branch = None |
|
1441 | branch = None | |
@@ -1447,6 +1450,8 b' def commitmemorynode(repo, wctx, editor,' | |||||
1447 | branch=branch, |
|
1450 | branch=branch, | |
1448 | editor=editor, |
|
1451 | editor=editor, | |
1449 | ) |
|
1452 | ) | |
|
1453 | if memctx.isempty() and not repo.ui.configbool(b'ui', b'allowemptycommit'): | |||
|
1454 | return None | |||
1450 | commitres = repo.commitctx(memctx) |
|
1455 | commitres = repo.commitctx(memctx) | |
1451 | wctx.clean() # Might be reused |
|
1456 | wctx.clean() # Might be reused | |
1452 | return commitres |
|
1457 | return commitres | |
@@ -2201,7 +2206,7 b' def abortrebase(ui, repo):' | |||||
2201 | def continuerebase(ui, repo): |
|
2206 | def continuerebase(ui, repo): | |
2202 | with repo.wlock(), repo.lock(): |
|
2207 | with repo.wlock(), repo.lock(): | |
2203 | rbsrt = rebaseruntime(repo, ui) |
|
2208 | rbsrt = rebaseruntime(repo, ui) | |
2204 | ms = mergemod.mergestate.read(repo) |
|
2209 | ms = mergestatemod.mergestate.read(repo) | |
2205 | mergeutil.checkunresolved(ms) |
|
2210 | mergeutil.checkunresolved(ms) | |
2206 | retcode = rbsrt._prepareabortorcontinue(isabort=False) |
|
2211 | retcode = rbsrt._prepareabortorcontinue(isabort=False) | |
2207 | if retcode is not None: |
|
2212 | if retcode is not None: |
@@ -30,7 +30,10 b' from mercurial import (' | |||||
30 | scmutil, |
|
30 | scmutil, | |
31 | util, |
|
31 | util, | |
32 | ) |
|
32 | ) | |
33 | from mercurial.utils import stringutil |
|
33 | from mercurial.utils import ( | |
|
34 | procutil, | |||
|
35 | stringutil, | |||
|
36 | ) | |||
34 |
|
37 | |||
35 | cmdtable = {} |
|
38 | cmdtable = {} | |
36 | command = registrar.command(cmdtable) |
|
39 | command = registrar.command(cmdtable) | |
@@ -689,7 +692,7 b' def releasenotes(ui, repo, file_=None, *' | |||||
689 | def debugparsereleasenotes(ui, path, repo=None): |
|
692 | def debugparsereleasenotes(ui, path, repo=None): | |
690 | """parse release notes and print resulting data structure""" |
|
693 | """parse release notes and print resulting data structure""" | |
691 | if path == b'-': |
|
694 | if path == b'-': | |
692 |
text = p |
|
695 | text = procutil.stdin.read() | |
693 | else: |
|
696 | else: | |
694 | with open(path, b'rb') as fh: |
|
697 | with open(path, b'rb') as fh: | |
695 | text = fh.read() |
|
698 | text = fh.read() |
@@ -148,7 +148,7 b' from mercurial import ('
     extensions,
     hg,
     localrepo,
-    match,
+    match as matchmod,
     merge,
     node as nodemod,
     patch,
@@ -361,7 +361,7 b' def cloneshallow(orig, ui, repo, *args, '
                     self.unfiltered().__class__,
                 )
                 self.requirements.add(constants.SHALLOWREPO_REQUIREMENT)
-                self._writerequirements()
+                scmutil.writereporequirements(self)

                 # Since setupclient hadn't been called, exchange.pull was not
                 # wrapped. So we need to manually invoke our version of it.
@@ -824,12 +824,12 b' def filelogrevset(orig, repo, subset, x)'

     # i18n: "filelog" is a keyword
     pat = revset.getstring(x, _(b"filelog requires a pattern"))
-    m = match.match(
+    m = matchmod.match(
         repo.root, repo.getcwd(), [pat], default=b'relpath', ctx=repo[None]
     )
     s = set()

-    if not match.patkind(pat):
+    if not matchmod.patkind(pat):
         # slow
         for r in subset:
             ctx = repo[r]
@@ -1118,10 +1118,10 b' def exchangepull(orig, repo, remote, *ar'
         return orig(repo, remote, *args, **kwargs)


-def _fileprefetchhook(repo, revs, match):
+def _fileprefetchhook(repo, revmatches):
     if isenabled(repo):
         allfiles = []
-        for rev in revs:
+        for rev, match in revmatches:
             if rev == nodemod.wdirrev or rev is None:
                 continue
             ctx = repo[rev]
@@ -13,7 +13,7 b' from mercurial import ('
     error,
     hg,
     lock as lockmod,
-    merge,
+    mergestate as mergestatemod,
     node as nodemod,
     pycompat,
     registrar,
@@ -269,7 +269,7 b' def stripcmd(ui, repo, *revs, **opts):'
             repo.dirstate.write(repo.currenttransaction())

             # clear resolve state
-            merge.mergestate.clean(repo, repo[b'.'].node())
+            mergestatemod.mergestate.clean(repo, repo[b'.'].node())

             update = False

@@ -369,7 +369,7 b' def archive('
     if total:
         files.sort()
         scmutil.prefetchfiles(
-            repo, [ctx.rev()], scmutil.matchfiles(repo, files)
+            repo, [(ctx.rev(), scmutil.matchfiles(repo, files))]
         )
         progress = repo.ui.makeprogress(
             _(b'archiving'), unit=_(b'files'), total=total
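This archive hunk, together with the cat, revert, and _prefetchchangedfiles hunks below, tracks a single API change: `scmutil.prefetchfiles()` now takes a list of `(revision, matcher)` pairs instead of a list of revisions plus one matcher. A rough sketch of the new call shape, assuming `repo`, a changectx `ctx`, and a `files` list (the wrapper function is hypothetical)::

    from mercurial import scmutil

    def prefetch_for_archive(repo, ctx, files):
        # Pair the single revision being archived with one matcher for the
        # files it contains; further (rev, match) pairs could be appended.
        revmatches = [(ctx.rev(), scmutil.matchfiles(repo, files))]
        scmutil.prefetchfiles(repo, revmatches)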
@@ -166,6 +166,7 b' from . import ('
     phases,
     pushkey,
     pycompat,
+    scmutil,
     streamclone,
     tags,
     url,
@@ -1710,7 +1711,7 b' def _addpartsfromopts(ui, repo, bundler,'
             b'nbchanges', b'%d' % cg.extras[b'clcount'], mandatory=False
         )
     if opts.get(b'phases') and repo.revs(
-        b'%ln and secret()', outgoing.missingheads
+        b'%ln and secret()', outgoing.ancestorsof
     ):
         part.addparam(
             b'targetphase', b'%d' % phases.secret, mandatory=False
@@ -1752,7 +1753,7 b' def addparttagsfnodescache(repo, bundler'
     # consume little memory (1M heads is 40MB) b) we don't want to send the
     # part if we don't have entries and knowing if we have entries requires
     # cache lookups.
-    for node in outgoing.missingheads:
+    for node in outgoing.ancestorsof:
         # Don't compute missing, as this may slow down serving.
         fnode = cache.getfnode(node, computemissing=False)
         if fnode is not None:
@@ -1977,7 +1978,7 b' def handlechangegroup(op, inpart):'
             op.repo.svfs.options = localrepo.resolvestorevfsoptions(
                 op.repo.ui, op.repo.requirements, op.repo.features
             )
-            op.repo._writerequirements()
+            scmutil.writereporequirements(op.repo)

     bundlesidedata = bool(b'exp-sidedata' in inpart.params)
     reposidedata = bool(b'exp-sidedata-flag' in op.repo.requirements)
@@ -2207,7 +2208,7 b' def handlecheckphases(op, inpart):'
         b'remote repository changed while pushing - please try again '
         b'(%s is %s expected %s)'
     )
-    for expectedphase, nodes in enumerate(phasetonodes):
+    for expectedphase, nodes in pycompat.iteritems(phasetonodes):
         for n in nodes:
             actualphase = phasecache.phase(unfi, cl.rev(n))
             if actualphase != expectedphase:
@@ -49,23 +49,35 b' static Py_ssize_t pathlen(line *l)'
 }

 /* get the node value of a single line */
-static PyObject *nodeof(line *l)
+static PyObject *nodeof(line *l, char *flag)
 {
        char *s = l->start;
        Py_ssize_t llen = pathlen(l);
        Py_ssize_t hlen = l->len - llen - 2;
-       Py_ssize_t hlen_raw = 20;
+       Py_ssize_t hlen_raw;
        PyObject *hash;
        if (llen + 1 + 40 + 1 > l->len) { /* path '\0' hash '\n' */
                PyErr_SetString(PyExc_ValueError, "manifest line too short");
                return NULL;
        }
+       /* Detect flags after the hash first. */
+       switch (s[llen + hlen]) {
+       case 'l':
+       case 't':
+       case 'x':
+               *flag = s[llen + hlen];
+               --hlen;
+               break;
+       default:
+               *flag = '\0';
+               break;
+       }
+
        switch (hlen) {
        case 40: /* sha1 */
-       case 41: /* sha1 with cruft for a merge */
+               hlen_raw = 20;
                break;
        case 64: /* new hash */
-       case 65: /* new hash with cruft for a merge */
                hlen_raw = 32;
                break;
        default:
@@ -89,24 +101,14 b' static PyObject *nodeof(line *l)'
 /* get the node hash and flags of a line as a tuple */
 static PyObject *hashflags(line *l)
 {
-       char *s = l->start;
-       Py_ssize_t plen = pathlen(l);
-       PyObject *hash = nodeof(l);
-       ssize_t hlen;
-       Py_ssize_t hplen, flen;
+       char flag;
+       PyObject *hash = nodeof(l, &flag);
        PyObject *flags;
        PyObject *tup;

        if (!hash)
                return NULL;
-       /* hash is either 20 or 21 bytes for an old hash, so we use a
-          ternary here to get the "real" hexlified sha length. */
-       hlen = PyBytes_GET_SIZE(hash) < 22 ? 40 : 64;
-       /* 1 for null byte, 1 for newline */
-       hplen = plen + hlen + 2;
-       flen = l->len - hplen;
-
-       flags = PyBytes_FromStringAndSize(s + hplen - 1, flen);
+       flags = PyBytes_FromStringAndSize(&flag, flag ? 1 : 0);
        if (!flags) {
                Py_DECREF(hash);
                return NULL;
@@ -291,7 +293,7 b' static PyObject *lmiter_iterentriesnext('
 {
        Py_ssize_t pl;
        line *l;
-       Py_ssize_t consumed;
+       char flag;
        PyObject *ret = NULL, *path = NULL, *hash = NULL, *flags = NULL;
        l = lmiter_nextline((lmIter *)o);
        if (!l) {
@@ -299,13 +301,11 b' static PyObject *lmiter_iterentriesnext('
        }
        pl = pathlen(l);
        path = PyBytes_FromStringAndSize(l->start, pl);
-       hash = nodeof(l);
+       hash = nodeof(l, &flag);
        if (!path || !hash) {
                goto done;
        }
-       consumed = pl + 41;
-       flags = PyBytes_FromStringAndSize(l->start + consumed,
-                                         l->len - consumed - 1);
+       flags = PyBytes_FromStringAndSize(&flag, flag ? 1 : 0);
        if (!flags) {
                goto done;
        }
@@ -568,19 +568,13 b' static int lazymanifest_setitem('
        pyhash = PyTuple_GetItem(value, 0);
        if (!PyBytes_Check(pyhash)) {
                PyErr_Format(PyExc_TypeError,
-                            "node must be a 20 bytes string");
+                            "node must be a 20 or 32 bytes string");
                return -1;
        }
        hlen = PyBytes_Size(pyhash);
-       /* Some parts of the codebase try and set 21 or 22
-        * byte "hash" values in order to perturb things for
-        * status. We have to preserve at least the 21st
-        * byte. Sigh. If there's a 22nd byte, we drop it on
-        * the floor, which works fine.
-        */
-       if (hlen != 20 && hlen != 21 && hlen != 22) {
+       if (hlen != 20 && hlen != 32) {
                PyErr_Format(PyExc_TypeError,
-                            "node must be a 20 bytes string");
+                            "node must be a 20 or 32 bytes string");
                return -1;
        }
        hash = PyBytes_AsString(pyhash);
@@ -588,28 +582,39 b' static int lazymanifest_setitem('
        pyflags = PyTuple_GetItem(value, 1);
        if (!PyBytes_Check(pyflags) || PyBytes_Size(pyflags) > 1) {
                PyErr_Format(PyExc_TypeError,
-                            "flags must a 0 or 1 byte string");
+                            "flags must a 0 or 1 bytes string");
                return -1;
        }
        if (PyBytes_AsStringAndSize(pyflags, &flags, &flen) == -1) {
                return -1;
        }
+       if (flen == 1) {
+               switch (*flags) {
+               case 'l':
+               case 't':
+               case 'x':
+                       break;
+               default:
+                       PyErr_Format(PyExc_TypeError, "invalid manifest flag");
+                       return -1;
+               }
+       }
        /* one null byte and one newline */
-       dlen = plen + 41 + flen + 1;
+       dlen = plen + hlen * 2 + 1 + flen + 1;
        dest = malloc(dlen);
        if (!dest) {
                PyErr_NoMemory();
                return -1;
        }
        memcpy(dest, path, plen + 1);
-       for (i = 0; i < 20; i++) {
+       for (i = 0; i < hlen; i++) {
                /* Cast to unsigned, so it will not get sign-extended when promoted
                 * to int (as is done when passing to a variadic function)
                 */
                sprintf(dest + plen + 1 + (i * 2), "%02x", (unsigned char)hash[i]);
        }
-       memcpy(dest + plen + 41, flags, flen);
-       dest[plen + 41 + flen] = '\n';
+       memcpy(dest + plen + 2 * hlen + 1, flags, flen);
+       dest[plen + 2 * hlen + 1 + flen] = '\n';
        new.start = dest;
        new.len = dlen;
        new.hash_suffix = '\0';
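The manifest.c hunks above teach the lazymanifest parser to track the optional flag character separately from the node hash and to accept both 40-hex-digit (sha1) and 64-hex-digit nodes. As a reading aid only, here is a rough Python sketch of the line layout those C routines handle: path, NUL, hex node, optional flag (`l`, `t`, or `x`), newline. The function is illustrative and not part of the changeset::

    def parse_manifest_line(line):
        # Split b"<path>\0<hex node>[flag]\n" into its three parts.
        path, rest = line.split(b'\0', 1)
        rest = rest.rstrip(b'\n')
        flag = b''
        if rest[-1:] in (b'l', b't', b'x'):
            # hex digits never end in 'l', 't' or 'x', so this is unambiguous
            flag, rest = rest[-1:], rest[:-1]
        assert len(rest) in (40, 64), 'sha1 or the longer new hash'
        return path, rest, flag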
@@ -336,7 +336,7 b' static PyObject *makestat(const struct s'
 static PyObject *_listdir_stat(char *path, int pathlen, int keepstat,
                                char *skip)
 {
-       PyObject *list, *elem, *stat = NULL, *ret = NULL;
+       PyObject *list, *elem, *ret = NULL;
        char fullpath[PATH_MAX + 10];
        int kind, err;
        struct stat st;
@@ -409,7 +409,7 b' static PyObject *_listdir_stat(char *pat'
        }

        if (keepstat) {
-               stat = makestat(&st);
+               PyObject *stat = makestat(&st);
                if (!stat)
                        goto error;
                elem = Py_BuildValue(PY23("siN", "yiN"), ent->d_name,
@@ -419,7 +419,6 b' static PyObject *_listdir_stat(char *pat'
                                     kind);
                if (!elem)
                        goto error;
-               stat = NULL;

                PyList_Append(list, elem);
                Py_DECREF(elem);
@@ -430,7 +429,6 b' static PyObject *_listdir_stat(char *pat'

 error:
        Py_DECREF(list);
-       Py_XDECREF(stat);
 error_list:
        closedir(dir);
        /* closedir also closes its dirfd */
@@ -480,7 +478,7 b' int attrkind(attrbuf_entry *entry)'
 static PyObject *_listdir_batch(char *path, int pathlen, int keepstat,
                                 char *skip, bool *fallback)
 {
-       PyObject *list, *elem, *stat = NULL, *ret = NULL;
+       PyObject *list, *elem, *ret = NULL;
        int kind, err;
        unsigned long index;
        unsigned int count, old_state, new_state;
@@ -586,6 +584,7 b' static PyObject *_listdir_batch(char *pa'
                }

                if (keepstat) {
+                       PyObject *stat = NULL;
                        /* from the getattrlist(2) man page: "Only the
                           permission bits ... are valid". */
                        st.st_mode = (entry->access_mask & ~S_IFMT) | kind;
@@ -601,7 +600,6 b' static PyObject *_listdir_batch(char *pa'
                                             filename, kind);
                        if (!elem)
                                goto error;
-                       stat = NULL;

                        PyList_Append(list, elem);
                        Py_DECREF(elem);
@@ -615,7 +613,6 b' static PyObject *_listdir_batch(char *pa'

 error:
        Py_DECREF(list);
-       Py_XDECREF(stat);
 error_dir:
        close(dfd);
 error_value:
@@ -667,7 +667,7 b' void dirs_module_init(PyObject *mod);'
 void manifest_module_init(PyObject *mod);
 void revlog_module_init(PyObject *mod);

-static const int version = 16;
+static const int version = 17;

 static void module_init(PyObject *mod)
 {
@@ -109,6 +109,9 b' static const Py_ssize_t nullrev = -1;'

 static Py_ssize_t inline_scan(indexObject *self, const char **offsets);

+static int index_find_node(indexObject *self, const char *node,
+                           Py_ssize_t nodelen);
+
 #if LONG_MAX == 0x7fffffffL
 static const char *const tuple_format = PY23("Kiiiiiis#", "Kiiiiiiy#");
 #else
@@ -577,34 +580,6 b' static int check_filter(PyObject *filter'
        }
 }

-static Py_ssize_t add_roots_get_min(indexObject *self, PyObject *list,
-                                    Py_ssize_t marker, char *phases)
-{
-       PyObject *iter = NULL;
-       PyObject *iter_item = NULL;
-       Py_ssize_t min_idx = index_length(self) + 2;
-       long iter_item_long;
-
-       if (PyList_GET_SIZE(list) != 0) {
-               iter = PyObject_GetIter(list);
-               if (iter == NULL)
-                       return -2;
-               while ((iter_item = PyIter_Next(iter))) {
-                       if (!pylong_to_long(iter_item, &iter_item_long)) {
-                               Py_DECREF(iter_item);
-                               return -2;
-                       }
-                       Py_DECREF(iter_item);
-                       if (iter_item_long < min_idx)
-                               min_idx = iter_item_long;
-                       phases[iter_item_long] = (char)marker;
-               }
-               Py_DECREF(iter);
-       }
-
-       return min_idx;
-}
-
 static inline void set_phase_from_parents(char *phases, int parent_1,
                                           int parent_2, Py_ssize_t i)
 {
@@ -773,99 +748,164 b' bail:'
        return NULL;
 }

+static int add_roots_get_min(indexObject *self, PyObject *roots, char *phases,
+                             char phase)
+{
+       Py_ssize_t len = index_length(self);
+       PyObject *item;
+       PyObject *iterator;
+       int rev, minrev = -1;
+       char *node;
+
+       if (!PySet_Check(roots)) {
+               PyErr_SetString(PyExc_TypeError,
+                               "roots must be a set of nodes");
+               return -2;
+       }
+       iterator = PyObject_GetIter(roots);
+       if (iterator == NULL)
+               return -2;
+       while ((item = PyIter_Next(iterator))) {
+               if (node_check(item, &node) == -1)
+                       goto failed;
+               rev = index_find_node(self, node, 20);
+               /* null is implicitly public, so negative is invalid */
+               if (rev < 0 || rev >= len)
+                       goto failed;
+               phases[rev] = phase;
+               if (minrev == -1 || minrev > rev)
+                       minrev = rev;
+               Py_DECREF(item);
+       }
+       Py_DECREF(iterator);
+       return minrev;
+failed:
+       Py_DECREF(iterator);
+       Py_DECREF(item);
+       return -2;
+}
+
 static PyObject *compute_phases_map_sets(indexObject *self, PyObject *args)
 {
+       /* 0: public (untracked), 1: draft, 2: secret, 32: archive,
+          96: internal */
+       static const char trackedphases[] = {1, 2, 32, 96};
        PyObject *roots = Py_None;
-       PyObject *ret = NULL;
-       PyObject *phasessize = NULL;
-       PyObject *phaseroots = NULL;
-       PyObject *phaseset = NULL;
-       PyObject *phasessetlist = NULL;
-       PyObject *rev = NULL;
+       PyObject *phasesetsdict = NULL;
+       PyObject *phasesets[4] = {NULL, NULL, NULL, NULL};
        Py_ssize_t len = index_length(self);
-       Py_ssize_t numphase = 0;
-       Py_ssize_t minrevallphases = 0;
-       Py_ssize_t minrevphase = 0;
-       Py_ssize_t i = 0;
        char *phases = NULL;
-       long phase;
+       int minphaserev = -1, rev, i;
+       const int numphases = (int)(sizeof(phasesets) / sizeof(phasesets[0]));

        if (!PyArg_ParseTuple(args, "O", &roots))
-               goto done;
-       if (roots == NULL || !PyList_Check(roots)) {
-               PyErr_SetString(PyExc_TypeError, "roots must be a list");
-               goto done;
+               return NULL;
+       if (roots == NULL || !PyDict_Check(roots)) {
+               PyErr_SetString(PyExc_TypeError, "roots must be a dictionary");
+               return NULL;
        }

-       phases = calloc(
-           len, 1); /* phase per rev: {0: public, 1: draft, 2: secret} */
+       phases = calloc(len, 1);
        if (phases == NULL) {
                PyErr_NoMemory();
-               goto done;
+               return NULL;
        }
-       /* Put the phase information of all the roots in phases */
-       numphase = PyList_GET_SIZE(roots) + 1;
-       minrevallphases = len + 1;
-       phasessetlist = PyList_New(numphase);
-       if (phasessetlist == NULL)
-               goto done;

-       PyList_SET_ITEM(phasessetlist, 0, Py_None);
-       Py_INCREF(Py_None);
-
-       for (i = 0; i < numphase - 1; i++) {
-               phaseroots = PyList_GET_ITEM(roots, i);
-               phaseset = PySet_New(NULL);
-               if (phaseset == NULL)
+       for (i = 0; i < numphases; ++i) {
+               PyObject *pyphase = PyInt_FromLong(trackedphases[i]);
+               PyObject *phaseroots = NULL;
+               if (pyphase == NULL)
                        goto release;
-               PyList_SET_ITEM(phasessetlist, i + 1, phaseset);
-               if (!PyList_Check(phaseroots)) {
-                       PyErr_SetString(PyExc_TypeError,
-                                       "roots item must be a list");
+               phaseroots = PyDict_GetItem(roots, pyphase);
+               Py_DECREF(pyphase);
+               if (phaseroots == NULL)
+                       continue;
+               rev = add_roots_get_min(self, phaseroots, phases,
+                                       trackedphases[i]);
+               if (rev == -2)
+                       goto release;
+               if (rev != -1 && (minphaserev == -1 || rev < minphaserev))
+                       minphaserev = rev;
+       }
+
+       for (i = 0; i < numphases; ++i) {
+               phasesets[i] = PySet_New(NULL);
+               if (phasesets[i] == NULL)
+                       goto release;
+       }
+
+       if (minphaserev == -1)
+               minphaserev = len;
+       for (rev = minphaserev; rev < len; ++rev) {
+               PyObject *pyphase = NULL;
+               PyObject *pyrev = NULL;
+               int parents[2];
+               /*
+                * The parent lookup could be skipped for phaseroots, but
+                * phase --force would historically not recompute them
+                * correctly, leaving descendents with a lower phase around.
+                * As such, unconditionally recompute the phase.
+                */
+               if (index_get_parents(self, rev, parents, (int)len - 1) < 0)
+                       goto release;
+               set_phase_from_parents(phases, parents[0], parents[1], rev);
+               switch (phases[rev]) {
+               case 0:
+                       continue;
+               case 1:
+                       pyphase = phasesets[0];
+                       break;
+               case 2:
+                       pyphase = phasesets[1];
+                       break;
+               case 32:
+                       pyphase = phasesets[2];
+                       break;
+               case 96:
+                       pyphase = phasesets[3];
+                       break;
+               default:
+                       /* this should never happen since the phase number is
+                        * specified by this function. */
+                       PyErr_SetString(PyExc_SystemError,
+                                       "bad phase number in internal list");
                        goto release;
                }
-               minrevphase =
-                   add_roots_get_min(self, phaseroots, i + 1, phases);
-               if (minrevphase == -2) /* Error from add_roots_get_min */
+               pyrev = PyInt_FromLong(rev);
+               if (pyrev == NULL)
                        goto release;
-               minrevallphases = MIN(minrevallphases, minrevphase);
-       }
-       /* Propagate the phase information from the roots to the revs */
-       if (minrevallphases != -1) {
-               int parents[2];
-               for (i = minrevallphases; i < len; i++) {
-                       if (index_get_parents(self, i, parents, (int)len - 1) <
-                           0)
-                               goto release;
-                       set_phase_from_parents(phases, parents[0], parents[1],
-                                              i);
+               if (PySet_Add(pyphase, pyrev) == -1) {
+                       Py_DECREF(pyrev);
+                       goto release;
                }
+               Py_DECREF(pyrev);
        }
-       /* Transform phase list to a python list */
-       phasessize = PyInt_FromSsize_t(len);
-       if (phasessize == NULL)
+
+       phasesetsdict = _dict_new_presized(numphases);
+       if (phasesetsdict == NULL)
                goto release;
-       for (i = 0; i < len; i++) {
-               phase = phases[i];
-               /* We only store the sets of phase for non public phase, the
-                * public phase is computed as a difference */
-               if (phase != 0) {
-                       phaseset = PyList_GET_ITEM(phasessetlist, phase);
-                       rev = PyInt_FromSsize_t(i);
-                       if (rev == NULL)
-                               goto release;
-                       PySet_Add(phaseset, rev);
-                       Py_XDECREF(rev);
+       for (i = 0; i < numphases; ++i) {
+               PyObject *pyphase = PyInt_FromLong(trackedphases[i]);
+               if (pyphase == NULL)
+                       goto release;
+               if (PyDict_SetItem(phasesetsdict, pyphase, phasesets[i]) ==
+                   -1) {
+                       Py_DECREF(pyphase);
+                       goto release;
                }
+               Py_DECREF(phasesets[i]);
+               phasesets[i] = NULL;
        }
-       ret = PyTuple_Pack(2, phasessize, phasessetlist);
+
+       return Py_BuildValue("nN", len, phasesetsdict);

 release:
-       Py_XDECREF(phasessize);
-       Py_XDECREF(phasessetlist);
-done:
+       for (i = 0; i < numphases; ++i)
+               Py_XDECREF(phasesets[i]);
+       Py_XDECREF(phasesetsdict);
+
        free(phases);
-       return ret;
+       return NULL;
 }

 static PyObject *index_headrevs(indexObject *self, PyObject *args)
@@ -2847,7 +2887,7 b' PyTypeObject HgRevlogIndex_Type = {'
  */
 PyObject *parse_index2(PyObject *self, PyObject *args)
 {
-       PyObject *tuple = NULL, *cache = NULL;
+       PyObject *cache = NULL;
        indexObject *idx;
        int ret;

@@ -2868,15 +2908,11 b' PyObject *parse_index2(PyObject *self, P'
                Py_INCREF(cache);
        }

-       tuple = Py_BuildValue("NN", idx, cache);
-       if (!tuple)
-               goto bail;
-       return tuple;
+       return Py_BuildValue("NN", idx, cache);

 bail:
        Py_XDECREF(idx);
        Py_XDECREF(cache);
-       Py_XDECREF(tuple);
        return NULL;
 }

@@ -1629,7 +1629,7 b' def makestream('
     repo = repo.unfiltered()
     commonrevs = outgoing.common
     csets = outgoing.missing
-    heads = outgoing.missingheads
+    heads = outgoing.ancestorsof
     # We go through the fast path if we get told to, or if all (unfiltered
     # heads have been requested (since we then know there all linkrevs will
     # be pulled by the client).
@@ -16,9 +16,9 b' from .node import ('
 from .thirdparty import attr

 from . import (
-    copies,
     encoding,
     error,
+    metadata,
     pycompat,
     revlog,
 )
@@ -318,7 +318,7 b' class changelogrevision(object):'
         rawindices = self.extra.get(b'filesadded')
         if rawindices is None:
             return None
-        return copies.decodefileindices(self.files, rawindices)
+        return metadata.decodefileindices(self.files, rawindices)

     @property
     def filesremoved(self):
@@ -330,7 +330,7 b' class changelogrevision(object):'
         rawindices = self.extra.get(b'filesremoved')
         if rawindices is None:
             return None
-        return copies.decodefileindices(self.files, rawindices)
+        return metadata.decodefileindices(self.files, rawindices)

     @property
     def p1copies(self):
@@ -342,7 +342,7 b' class changelogrevision(object):'
         rawcopies = self.extra.get(b'p1copies')
         if rawcopies is None:
             return None
-        return copies.decodecopies(self.files, rawcopies)
+        return metadata.decodecopies(self.files, rawcopies)

     @property
     def p2copies(self):
@@ -354,7 +354,7 b' class changelogrevision(object):'
         rawcopies = self.extra.get(b'p2copies')
         if rawcopies is None:
             return None
-        return copies.decodecopies(self.files, rawcopies)
+        return metadata.decodecopies(self.files, rawcopies)

     @property
     def description(self):
@@ -385,9 +385,7 b' class changelog(revlog.revlog):'
             datafile=datafile,
             checkambig=True,
             mmaplargeindex=True,
-            persistentnodemap=opener.options.get(
-                b'exp-persistent-nodemap', False
-            ),
+            persistentnodemap=opener.options.get(b'persistent-nodemap', False),
         )

         if self._initempty and (self.version & 0xFFFF == revlog.REVLOGV1):
@@ -572,13 +570,13 b' class changelog(revlog.revlog):'
         ):
             extra.pop(name, None)
         if p1copies is not None:
-            p1copies = copies.encodecopies(sortedfiles, p1copies)
+            p1copies = metadata.encodecopies(sortedfiles, p1copies)
         if p2copies is not None:
-            p2copies = copies.encodecopies(sortedfiles, p2copies)
+            p2copies = metadata.encodecopies(sortedfiles, p2copies)
         if filesadded is not None:
-            filesadded = copies.encodefileindices(sortedfiles, filesadded)
+            filesadded = metadata.encodefileindices(sortedfiles, filesadded)
         if filesremoved is not None:
-            filesremoved = copies.encodefileindices(sortedfiles, filesremoved)
+            filesremoved = metadata.encodefileindices(sortedfiles, filesremoved)
         if self._copiesstorage == b'extra':
             extrasentries = p1copies, p2copies, filesadded, filesremoved
             if extra is None and any(x is not None for x in extrasentries):
@@ -320,7 +320,7 b' class channeledsystem(object):'
         self.channel = channel

     def __call__(self, cmd, environ, cwd=None, type=b'system', cmdtable=None):
-        args = [type, procutil.quotecommand(cmd), os.path.abspath(cwd or b'.')]
+        args = [type, cmd, os.path.abspath(cwd or b'.')]
         args.extend(b'%s=%s' % (k, v) for k, v in pycompat.iteritems(environ))
         data = b'\0'.join(args)
         self.out.write(struct.pack(b'>cI', self.channel, len(data)))
@@ -442,7 +442,20 b' class chgcmdserver(commandserver.server)'
             if newfp is not fp:
                 newfp.close()
             # restore original fd: fp is open again
-            os.dup2(fd, fp.fileno())
+            try:
+                os.dup2(fd, fp.fileno())
+            except OSError as err:
+                # According to issue6330, running chg on heavy loaded systems
+                # can lead to EBUSY. [man dup2] indicates that, on Linux,
+                # EBUSY comes from a race condition between open() and dup2().
+                # However it's not clear why open() race occurred for
+                # newfd=stdin/out/err.
+                self.ui.log(
+                    b'chgserver',
+                    b'got %s while duplicating %s\n',
+                    stringutil.forcebytestr(err),
+                    fn,
+                )
             os.close(fd)
             setattr(self, cn, ch)
             setattr(ui, fn, fp)
@@ -38,6 +38,7 b' from . import ('
     logcmdutil,
     match as matchmod,
     merge as mergemod,
+    mergestate as mergestatemod,
     mergeutil,
     obsolete,
     patch,
@@ -890,7 +891,7 b' To mark files as resolved: hg resolve -'
 def readmorestatus(repo):
     """Returns a morestatus object if the repo has unfinished state."""
     statetuple = statemod.getrepostate(repo)
-    mergestate = mergemod.mergestate.read(repo)
+    mergestate = mergestatemod.mergestate.read(repo)
     activemerge = mergestate.active()
     if not statetuple and not activemerge:
         return None
@@ -2137,7 +2138,9 b' def _prefetchchangedfiles(repo, revs, ma'
         for file in repo[rev].files():
             if not match or match(file):
                 allfiles.add(file)
-    scmutil.prefetchfiles(repo, revs, scmutil.matchfiles(repo, allfiles))
+    match = scmutil.matchfiles(repo, allfiles)
+    revmatches = [(rev, match) for rev in revs]
+    scmutil.prefetchfiles(repo, revmatches)


 def export(
@@ -2751,15 +2754,28 b' def files(ui, ctx, m, uipathfn, fm, fmt,'
     ret = 1

     needsfctx = ui.verbose or {b'size', b'flags'} & fm.datahint()
-    for f in ctx.matches(m):
-        fm.startitem()
-        fm.context(ctx=ctx)
-        if needsfctx:
-            fc = ctx[f]
-            fm.write(b'size flags', b'% 10d % 1s ', fc.size(), fc.flags())
-        fm.data(path=f)
-        fm.plain(fmt % uipathfn(f))
-        ret = 0
+    if fm.isplain() and not needsfctx:
+        # Fast path. The speed-up comes from skipping the formatter, and batching
+        # calls to ui.write.
+        buf = []
+        for f in ctx.matches(m):
+            buf.append(fmt % uipathfn(f))
+            if len(buf) > 100:
+                ui.write(b''.join(buf))
+                del buf[:]
+            ret = 0
+        if buf:
+            ui.write(b''.join(buf))
+    else:
+        for f in ctx.matches(m):
+            fm.startitem()
+            fm.context(ctx=ctx)
+            if needsfctx:
+                fc = ctx[f]
+                fm.write(b'size flags', b'% 10d % 1s ', fc.size(), fc.flags())
+            fm.data(path=f)
+            fm.plain(fmt % uipathfn(f))
+            ret = 0

     for subpath in sorted(ctx.substate):
         submatch = matchmod.subdirmatcher(subpath, m)
@@ -2983,14 +2999,14 b' def cat(ui, repo, ctx, matcher, basefm, '
         try:
             if mfnode and mfl[mfnode].find(file)[0]:
                 if _catfmtneedsdata(basefm):
-                    scmutil.prefetchfiles(repo, [ctx.rev()], matcher)
+                    scmutil.prefetchfiles(repo, [(ctx.rev(), matcher)])
                 write(file)
                 return 0
         except KeyError:
             pass

     if _catfmtneedsdata(basefm):
-        scmutil.prefetchfiles(repo, [ctx.rev()], matcher)
+        scmutil.prefetchfiles(repo, [(ctx.rev(), matcher)])

     for abs in ctx.walk(matcher):
         write(abs)
@@ -3127,7 +3143,7 b' def amend(ui, repo, old, extra, pats, op'
     if subs:
         subrepoutil.writestate(repo, newsubstate)

-    ms = mergemod.mergestate.read(repo)
+    ms = mergestatemod.mergestate.read(repo)
     mergeutil.checkunresolved(ms)

     filestoamend = {f for f in wctx.files() if matcher(f)}
@@ -3423,9 +3439,9 b' def commitstatus(repo, node, branch, bhe'
         not opts.get(b'amend')
         and bheads
         and node not in bheads
-        and not [
-            x for x in parents if x.node() in bheads and x.branch() == branch
-        ]
+        and not any(
+            p.node() in bheads and p.branch() == branch for p in parents
+        )
     ):
         repo.ui.status(_(b'created new head\n'))
         # The message is not printed for initial roots. For the other
@@ -3755,11 +3771,11 b' def revert(ui, repo, ctx, parents, *pats'
         needdata = (b'revert', b'add', b'undelete')
         oplist = [actions[name][0] for name in needdata]
         prefetch = scmutil.prefetchfiles
-        matchfiles = scmutil.matchfiles
+        matchfiles = scmutil.matchfiles(
+            repo, [f for sublist in oplist for f in sublist]
+        )
         prefetch(
-            repo,
-            [ctx.rev()],
-            matchfiles(repo, [f for sublist in oplist for f in sublist]),
+            repo, [(ctx.rev(), matchfiles)],
         )
         match = scmutil.match(repo[None], pats)
         _performrevert(
@@ -46,6 +46,7 b' from . import (' | |||||
46 | hg, |
|
46 | hg, | |
47 | logcmdutil, |
|
47 | logcmdutil, | |
48 | merge as mergemod, |
|
48 | merge as mergemod, | |
|
49 | mergestate as mergestatemod, | |||
49 | narrowspec, |
|
50 | narrowspec, | |
50 | obsolete, |
|
51 | obsolete, | |
51 | obsutil, |
|
52 | obsutil, | |
@@ -2183,7 +2184,8 b' def config(ui, repo, *values, **opts):' | |||||
2183 | """ |
|
2184 | """ | |
2184 |
|
2185 | |||
2185 | opts = pycompat.byteskwargs(opts) |
|
2186 | opts = pycompat.byteskwargs(opts) | |
2186 |
|
|
2187 | editopts = (b'edit', b'local', b'global') | |
|
2188 | if any(opts.get(o) for o in editopts): | |||
2187 | if opts.get(b'local') and opts.get(b'global'): |
|
2189 | if opts.get(b'local') and opts.get(b'global'): | |
2188 | raise error.Abort(_(b"can't use --local and --global together")) |
|
2190 | raise error.Abort(_(b"can't use --local and --global together")) | |
2189 |
|
2191 | |||
@@ -2350,7 +2352,7 b' def copy(ui, repo, *pats, **opts):' | |||||
2350 | Returns 0 on success, 1 if errors are encountered. |
|
2352 | Returns 0 on success, 1 if errors are encountered. | |
2351 | """ |
|
2353 | """ | |
2352 | opts = pycompat.byteskwargs(opts) |
|
2354 | opts = pycompat.byteskwargs(opts) | |
2353 |
with repo.wlock( |
|
2355 | with repo.wlock(): | |
2354 | return cmdutil.copy(ui, repo, pats, opts) |
|
2356 | return cmdutil.copy(ui, repo, pats, opts) | |
2355 |
|
2357 | |||
2356 |
|
2358 | |||
@@ -2475,26 +2477,27 b' def diff(ui, repo, *pats, **opts):' | |||||
2475 | Returns 0 on success. |
|
2477 | Returns 0 on success. | |
2476 | """ |
|
2478 | """ | |
2477 |
|
2479 | |||
|
2480 | cmdutil.check_at_most_one_arg(opts, 'rev', 'change') | |||
2478 | opts = pycompat.byteskwargs(opts) |
|
2481 | opts = pycompat.byteskwargs(opts) | |
2479 | revs = opts.get(b'rev') |
|
2482 | revs = opts.get(b'rev') | |
2480 | change = opts.get(b'change') |
|
2483 | change = opts.get(b'change') | |
2481 | stat = opts.get(b'stat') |
|
2484 | stat = opts.get(b'stat') | |
2482 | reverse = opts.get(b'reverse') |
|
2485 | reverse = opts.get(b'reverse') | |
2483 |
|
2486 | |||
2484 |
if |
|
2487 | if change: | |
2485 | msg = _(b'cannot specify --rev and --change at the same time') |
|
|||
2486 | raise error.Abort(msg) |
|
|||
2487 | elif change: |
|
|||
2488 | repo = scmutil.unhidehashlikerevs(repo, [change], b'nowarn') |
|
2488 | repo = scmutil.unhidehashlikerevs(repo, [change], b'nowarn') | |
2489 | ctx2 = scmutil.revsingle(repo, change, None) |
|
2489 | ctx2 = scmutil.revsingle(repo, change, None) | |
2490 | ctx1 = ctx2.p1() |
|
2490 | ctx1 = ctx2.p1() | |
2491 | else: |
|
2491 | else: | |
2492 | repo = scmutil.unhidehashlikerevs(repo, revs, b'nowarn') |
|
2492 | repo = scmutil.unhidehashlikerevs(repo, revs, b'nowarn') | |
2493 | ctx1, ctx2 = scmutil.revpair(repo, revs) |
|
2493 | ctx1, ctx2 = scmutil.revpair(repo, revs) | |
2494 | node1, node2 = ctx1.node(), ctx2.node() |
|
|||
2495 |
|
2494 | |||
2496 | if reverse: |
|
2495 | if reverse: | |
2497 | node1, node2 = node2, node1 |
|
2496 | ctxleft = ctx2 | |
|
2497 | ctxright = ctx1 | |||
|
2498 | else: | |||
|
2499 | ctxleft = ctx1 | |||
|
2500 | ctxright = ctx2 | |||
2498 |
|
2501 | |||
2499 | diffopts = patch.diffallopts(ui, opts) |
|
2502 | diffopts = patch.diffallopts(ui, opts) | |
2500 | m = scmutil.match(ctx2, pats, opts) |
|
2503 | m = scmutil.match(ctx2, pats, opts) | |
@@ -2504,8 +2507,8 b' def diff(ui, repo, *pats, **opts):' | |||||
2504 | ui, |
|
2507 | ui, | |
2505 | repo, |
|
2508 | repo, | |
2506 | diffopts, |
|
2509 | diffopts, | |
2507 |
|
|
2510 | ctxleft, | |
2508 |
|
|
2511 | ctxright, | |
2509 | m, |
|
2512 | m, | |
2510 | stat=stat, |
|
2513 | stat=stat, | |
2511 | listsubrepos=opts.get(b'subrepos'), |
|
2514 | listsubrepos=opts.get(b'subrepos'), | |
@@ -2980,68 +2983,47 b' def _dograft(ui, repo, *revs, **opts):' | |||||
2980 | editform=b'graft', **pycompat.strkwargs(opts) |
|
2983 | editform=b'graft', **pycompat.strkwargs(opts) | |
2981 | ) |
|
2984 | ) | |
2982 |
|
2985 | |||
|
2986 | cmdutil.check_at_most_one_arg(opts, b'abort', b'stop', b'continue') | |||
|
2987 | ||||
2983 | cont = False |
|
2988 | cont = False | |
2984 | if opts.get(b'no_commit'): |
|
2989 | if opts.get(b'no_commit'): | |
2985 | if opts.get(b'edit'): |
|
2990 | cmdutil.check_incompatible_arguments( | |
2986 | raise error.Abort( |
|
2991 | opts, | |
2987 | _(b"cannot specify --no-commit and --edit together") |
|
2992 | b'no_commit', | |
2988 | ) |
|
2993 | [b'edit', b'currentuser', b'currentdate', b'log'], | |
2989 | if opts.get(b'currentuser'): |
|
2994 | ) | |
2990 | raise error.Abort( |
|
|||
2991 | _(b"cannot specify --no-commit and --currentuser together") |
|
|||
2992 | ) |
|
|||
2993 | if opts.get(b'currentdate'): |
|
|||
2994 | raise error.Abort( |
|
|||
2995 | _(b"cannot specify --no-commit and --currentdate together") |
|
|||
2996 | ) |
|
|||
2997 | if opts.get(b'log'): |
|
|||
2998 | raise error.Abort( |
|
|||
2999 | _(b"cannot specify --no-commit and --log together") |
|
|||
3000 | ) |
|
|||
3001 |
|
2995 | |||
3002 | graftstate = statemod.cmdstate(repo, b'graftstate') |
|
2996 | graftstate = statemod.cmdstate(repo, b'graftstate') | |
3003 |
|
2997 | |||
3004 | if opts.get(b'stop'): |
|
2998 | if opts.get(b'stop'): | |
3005 | if opts.get(b'continue'): |
|
2999 | cmdutil.check_incompatible_arguments( | |
3006 | raise error.Abort( |
|
3000 | opts, | |
3007 | _(b"cannot use '--continue' and '--stop' together") |
|
3001 | b'stop', | |
3008 |
|
|
3002 | [ | |
3009 | if opts.get(b'abort'): |
|
3003 | b'edit', | |
3010 | raise error.Abort(_(b"cannot use '--abort' and '--stop' together")) |
|
3004 | b'log', | |
3011 |
|
3005 | b'user', | ||
3012 | if any( |
|
3006 | b'date', | |
3013 | ( |
|
3007 | b'currentdate', | |
3014 |
|
|
3008 | b'currentuser', | |
3015 |
|
|
3009 | b'rev', | |
3016 | opts.get(b'user'), |
|
3010 | ], | |
3017 | opts.get(b'date'), |
|
3011 | ) | |
3018 | opts.get(b'currentdate'), |
|
|||
3019 | opts.get(b'currentuser'), |
|
|||
3020 | opts.get(b'rev'), |
|
|||
3021 | ) |
|
|||
3022 | ): |
|
|||
3023 | raise error.Abort(_(b"cannot specify any other flag with '--stop'")) |
|
|||
3024 | return _stopgraft(ui, repo, graftstate) |
|
3012 | return _stopgraft(ui, repo, graftstate) | |
3025 | elif opts.get(b'abort'): |
|
3013 | elif opts.get(b'abort'): | |
3026 | if opts.get(b'continue'): |
|
3014 | cmdutil.check_incompatible_arguments( | |
3027 | raise error.Abort( |
|
3015 | opts, | |
3028 | _(b"cannot use '--continue' and '--abort' together") |
|
3016 | b'abort', | |
3029 |
|
|
3017 | [ | |
3030 | if any( |
|
3018 | b'edit', | |
3031 |
|
|
3019 | b'log', | |
3032 |
|
|
3020 | b'user', | |
3033 |
|
|
3021 | b'date', | |
3034 |
|
|
3022 | b'currentdate', | |
3035 |
|
|
3023 | b'currentuser', | |
3036 |
|
|
3024 | b'rev', | |
3037 | opts.get(b'currentuser'), |
|
3025 | ], | |
3038 | opts.get(b'rev'), |
|
3026 | ) | |
3039 | ) |
|
|||
3040 | ): |
|
|||
3041 | raise error.Abort( |
|
|||
3042 | _(b"cannot specify any other flag with '--abort'") |
|
|||
3043 | ) |
|
|||
3044 |
|
||||
3045 | return cmdutil.abortgraft(ui, repo, graftstate) |
|
3027 | return cmdutil.abortgraft(ui, repo, graftstate) | |
3046 | elif opts.get(b'continue'): |
|
3028 | elif opts.get(b'continue'): | |
3047 | cont = True |
|
3029 | cont = True | |
@@ -3431,8 +3413,11 b' def grep(ui, repo, pattern, *pats, **opt' | |||||
3431 | m = regexp.search(self.line, p) |
|
3413 | m = regexp.search(self.line, p) | |
3432 | if not m: |
|
3414 | if not m: | |
3433 | break |
|
3415 | break | |
3434 |
|
|
3416 | if m.end() == p: | |
3435 |
p = |
|
3417 | p += 1 | |
|
3418 | else: | |||
|
3419 | yield m.span() | |||
|
3420 | p = m.end() | |||
3436 |
|
3421 | |||
3437 | matches = {} |
|
3422 | matches = {} | |
3438 | copies = {} |
|
3423 | copies = {} | |
@@ -3578,56 +3563,68 b' def grep(ui, repo, pattern, *pats, **opt' | |||||
3578 |
|
3563 | |||
3579 | getrenamed = scmutil.getrenamedfn(repo) |
|
3564 | getrenamed = scmutil.getrenamedfn(repo) | |
3580 |
|
3565 | |||
3581 | def get_file_content(filename, filelog, filenode, context, revision): |
|
3566 | def readfile(ctx, fn): | |
3582 | try: |
|
3567 | rev = ctx.rev() | |
3583 | content = filelog.read(filenode) |
|
3568 | if rev is None: | |
3584 | except error.WdirUnsupported: |
|
3569 | fctx = ctx[fn] | |
3585 | content = context[filename].data() |
|
3570 | try: | |
3586 | except error.CensoredNodeError: |
|
3571 | return fctx.data() | |
3587 |
|
|
3572 | except IOError as e: | |
3588 | ui.warn( |
|
3573 | if e.errno != errno.ENOENT: | |
3589 | _(b'cannot search in censored file: %(filename)s:%(revnum)s\n') |
|
3574 | raise | |
3590 | % {b'filename': filename, b'revnum': pycompat.bytestr(revision)} |
|
3575 | else: | |
3591 | ) |
|
3576 | flog = getfile(fn) | |
3592 | return content |
|
3577 | fnode = ctx.filenode(fn) | |
|
3578 | try: | |||
|
3579 | return flog.read(fnode) | |||
|
3580 | except error.CensoredNodeError: | |||
|
3581 | ui.warn( | |||
|
3582 | _( | |||
|
3583 | b'cannot search in censored file: %(filename)s:%(revnum)s\n' | |||
|
3584 | ) | |||
|
3585 | % {b'filename': fn, b'revnum': pycompat.bytestr(rev),} | |||
|
3586 | ) | |||
3593 |
|
3587 | |||
3594 | def prep(ctx, fns): |
|
3588 | def prep(ctx, fns): | |
3595 | rev = ctx.rev() |
|
3589 | rev = ctx.rev() | |
3596 | pctx = ctx.p1() |
|
3590 | pctx = ctx.p1() | |
3597 | parent = pctx.rev() |
|
|||
3598 | matches.setdefault(rev, {}) |
|
3591 | matches.setdefault(rev, {}) | |
3599 | matches.setdefault(parent, {}) |
|
3592 | if diff: | |
|
3593 | parent = pctx.rev() | |||
|
3594 | matches.setdefault(parent, {}) | |||
3600 | files = revfiles.setdefault(rev, []) |
|
3595 | files = revfiles.setdefault(rev, []) | |
3601 |
f |
|
3596 | if rev is None: | |
3602 | flog = getfile(fn) |
|
3597 | # in `hg grep pattern`, 2/3 of the time is spent is spent in | |
3603 | try: |
|
3598 | # pathauditor checks without this in mozilla-central | |
3604 | fnode = ctx.filenode(fn) |
|
3599 | contextmanager = repo.wvfs.audit.cached | |
3605 | except error.LookupError: |
|
3600 | else: | |
3606 | continue |
|
3601 | contextmanager = util.nullcontextmanager | |
3607 |
|
3602 | with contextmanager(): | ||
3608 |
|
|
3603 | for fn in fns: | |
3609 | if follow: |
|
3604 | # fn might not exist in the revision (could be a file removed by | |
3610 | copy = getrenamed(fn, rev) |
|
3605 | # the revision). We could check `fn not in ctx` even when rev is | |
3611 | if copy: |
|
3606 | # None, but it's less racy to protect againt that in readfile. | |
3612 | copies.setdefault(rev, {})[fn] = copy |
|
3607 | if rev is not None and fn not in ctx: | |
3613 |
|
|
3608 | continue | |
3614 | skip.add(copy) |
|
3609 | ||
3615 | if fn in skip: |
|
3610 | copy = None | |
3616 |
|
|
3611 | if follow: | |
3617 | files.append(fn) |
|
3612 | copy = getrenamed(fn, rev) | |
3618 |
|
3613 | if copy: | ||
3619 | if fn not in matches[rev]: |
|
3614 | copies.setdefault(rev, {})[fn] = copy | |
3620 | content = get_file_content(fn, flog, fnode, ctx, rev) |
|
3615 | if fn in skip: | |
3621 | grepbody(fn, rev, content) |
|
3616 | skip.add(copy) | |
3622 |
|
3617 | if fn in skip: | ||
3623 |
|
|
3618 | continue | |
3624 | if pfn not in matches[parent]: |
|
3619 | files.append(fn) | |
3625 | try: |
|
3620 | ||
3626 | pfnode = pctx.filenode(pfn) |
|
3621 | if fn not in matches[rev]: | |
3627 | pcontent = get_file_content(pfn, flog, pfnode, pctx, parent) |
|
3622 | grepbody(fn, rev, readfile(ctx, fn)) | |
3628 | grepbody(pfn, parent, pcontent) |
|
3623 | ||
3629 | except error.LookupError: |
|
3624 | if diff: | |
3630 |
p |
|
3625 | pfn = copy or fn | |
|
3626 | if pfn not in matches[parent] and pfn in pctx: | |||
|
3627 | grepbody(pfn, parent, readfile(pctx, pfn)) | |||
3631 |
|
3628 | |||
3632 | ui.pager(b'grep') |
|
3629 | ui.pager(b'grep') | |
3633 | fm = ui.formatter(b'grep', opts) |
|
3630 | fm = ui.formatter(b'grep', opts) | |
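
Note on the `hg grep` hunk above: per the inline comment, most of the time of a working-directory grep goes into pathauditor checks, so the new code picks a caching context manager up front (`repo.wvfs.audit.cached` when `rev is None`, `util.nullcontextmanager` otherwise) and wraps the whole per-file loop in it. A minimal, self-contained sketch of that "choose the context manager once" pattern, using stand-in names (`cached_audit`, the loop body) that are not part of Mercurial's API::

    import contextlib

    @contextlib.contextmanager
    def cached_audit():
        # stand-in for repo.wvfs.audit.cached: build a cache, drop it afterwards
        cache = {}
        try:
            yield cache
        finally:
            cache.clear()

    def grep_files(files, in_working_dir):
        # pick the context manager once, outside the loop
        cm = cached_audit if in_working_dir else contextlib.nullcontext
        with cm():
            for fn in files:
                pass  # read and match each file here
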
@@ -5812,7 +5809,7 b' def rename(ui, repo, *pats, **opts):'
     Returns 0 on success, 1 if errors are encountered.
     """
     opts = pycompat.byteskwargs(opts)
-    with repo.wlock(
+    with repo.wlock():
         return cmdutil.copy(ui, repo, pats, opts, rename=True)
 
 
@@ -5934,7 +5931,7 b' def resolve(ui, repo, *pats, **opts):'
     if show:
         ui.pager(b'resolve')
         fm = ui.formatter(b'resolve', opts)
-        ms = mergemod.mergestate.read(repo)
+        ms = mergestatemod.mergestate.read(repo)
         wctx = repo[None]
         m = scmutil.match(wctx, pats, opts)
 
@@ -5942,14 +5939,20 b' def resolve(ui, repo, *pats, **opts):'
         # as 'P'. Resolved path conflicts show as 'R', the same as normal
         # resolved conflicts.
         mergestateinfo = {
-            mergemod.MERGE_RECORD_UNRESOLVED: (b'resolve.unresolved', b'U'),
-            mergemod.MERGE_RECORD_RESOLVED: (b'resolve.resolved', b'R'),
-            mergemod.MERGE_RECORD_UNRESOLVED_PATH: (
+            mergestatemod.MERGE_RECORD_UNRESOLVED: (
+                b'resolve.unresolved',
+                b'U',
+            ),
+            mergestatemod.MERGE_RECORD_RESOLVED: (b'resolve.resolved', b'R'),
+            mergestatemod.MERGE_RECORD_UNRESOLVED_PATH: (
                 b'resolve.unresolved',
                 b'P',
             ),
-            mergemod.MERGE_RECORD_RESOLVED_PATH: (b'resolve.resolved', b'R'),
-            mergemod.MERGE_RECORD_DRIVER_RESOLVED: (
+            mergestatemod.MERGE_RECORD_RESOLVED_PATH: (
+                b'resolve.resolved',
+                b'R',
+            ),
+            mergestatemod.MERGE_RECORD_DRIVER_RESOLVED: (
                 b'resolve.driverresolved',
                 b'D',
             ),
@@ -5959,7 +5962,7 b' def resolve(ui, repo, *pats, **opts):'
             if not m(f):
                 continue
 
-            if ms[f] == mergemod.MERGE_RECORD_MERGED_OTHER:
+            if ms[f] == mergestatemod.MERGE_RECORD_MERGED_OTHER:
                 continue
             label, key = mergestateinfo[ms[f]]
             fm.startitem()
5971 | return 0 |
|
5974 | return 0 | |
5972 |
|
5975 | |||
5973 | with repo.wlock(): |
|
5976 | with repo.wlock(): | |
5974 | ms = mergemod.mergestate.read(repo) |
|
5977 | ms = mergestatemod.mergestate.read(repo) | |
5975 |
|
5978 | |||
5976 | if not (ms.active() or repo.dirstate.p2() != nullid): |
|
5979 | if not (ms.active() or repo.dirstate.p2() != nullid): | |
5977 | raise error.Abort( |
|
5980 | raise error.Abort( | |
@@ -5982,7 +5985,7 b' def resolve(ui, repo, *pats, **opts):'
 
         if (
             ms.mergedriver
-            and ms.mdstate() == mergemod.MERGE_DRIVER_STATE_UNMARKED
+            and ms.mdstate() == mergestatemod.MERGE_DRIVER_STATE_UNMARKED
         ):
             proceed = mergemod.driverpreprocess(repo, ms, wctx)
             ms.commit()
@@ -6008,12 +6011,12 b' def resolve(ui, repo, *pats, **opts):'
 
             didwork = True
 
-            if ms[f] == mergemod.MERGE_RECORD_MERGED_OTHER:
+            if ms[f] == mergestatemod.MERGE_RECORD_MERGED_OTHER:
                 continue
 
             # don't let driver-resolved files be marked, and run the conclude
             # step if asked to resolve
-            if ms[f] == mergemod.MERGE_RECORD_DRIVER_RESOLVED:
+            if ms[f] == mergestatemod.MERGE_RECORD_DRIVER_RESOLVED:
                 exact = m.exact(f)
                 if mark:
                     if exact:
@@ -6033,14 +6036,14 b' def resolve(ui, repo, *pats, **opts):'
 
             # path conflicts must be resolved manually
             if ms[f] in (
-                mergemod.MERGE_RECORD_UNRESOLVED_PATH,
-                mergemod.MERGE_RECORD_RESOLVED_PATH,
+                mergestatemod.MERGE_RECORD_UNRESOLVED_PATH,
+                mergestatemod.MERGE_RECORD_RESOLVED_PATH,
             ):
                 if mark:
-                    ms.mark(f, mergemod.MERGE_RECORD_RESOLVED_PATH)
+                    ms.mark(f, mergestatemod.MERGE_RECORD_RESOLVED_PATH)
                 elif unmark:
-                    ms.mark(f, mergemod.MERGE_RECORD_UNRESOLVED_PATH)
-                elif ms[f] == mergemod.MERGE_RECORD_UNRESOLVED_PATH:
+                    ms.mark(f, mergestatemod.MERGE_RECORD_UNRESOLVED_PATH)
+                elif ms[f] == mergestatemod.MERGE_RECORD_UNRESOLVED_PATH:
                     ui.warn(
                         _(b'%s: path conflict must be resolved manually\n')
                         % uipathfn(f)
@@ -6052,12 +6055,12 b' def resolve(ui, repo, *pats, **opts):'
                     fdata = repo.wvfs.tryread(f)
                     if (
                         filemerge.hasconflictmarkers(fdata)
-                        and ms[f] != mergemod.MERGE_RECORD_RESOLVED
+                        and ms[f] != mergestatemod.MERGE_RECORD_RESOLVED
                     ):
                         hasconflictmarkers.append(f)
-                ms.mark(f, mergemod.MERGE_RECORD_RESOLVED)
+                ms.mark(f, mergestatemod.MERGE_RECORD_RESOLVED)
             elif unmark:
-                ms.mark(f, mergemod.MERGE_RECORD_UNRESOLVED)
+                ms.mark(f, mergestatemod.MERGE_RECORD_UNRESOLVED)
             else:
                 # backup pre-resolve (merge uses .orig for its own purposes)
                 a = repo.wjoin(f)
@@ -6126,7 +6129,8 b' def resolve(ui, repo, *pats, **opts):'
                     raise
 
         ms.commit()
-        ms.recordactions()
+        branchmerge = repo.dirstate.p2() != nullid
+        mergestatemod.recordupdates(repo, ms.actions(), branchmerge, None)
 
         if not didwork and pats:
             hint = None
@@ -6660,7 +6664,7 b' def shelve(ui, repo, *pats, **opts):' | |||||
6660 | (b'm', b'modified', None, _(b'show only modified files')), |
|
6664 | (b'm', b'modified', None, _(b'show only modified files')), | |
6661 | (b'a', b'added', None, _(b'show only added files')), |
|
6665 | (b'a', b'added', None, _(b'show only added files')), | |
6662 | (b'r', b'removed', None, _(b'show only removed files')), |
|
6666 | (b'r', b'removed', None, _(b'show only removed files')), | |
6663 |
(b'd', b'deleted', None, _(b'show only |
|
6667 | (b'd', b'deleted', None, _(b'show only missing files')), | |
6664 | (b'c', b'clean', None, _(b'show only files without changes')), |
|
6668 | (b'c', b'clean', None, _(b'show only files without changes')), | |
6665 | (b'u', b'unknown', None, _(b'show only unknown (not tracked) files')), |
|
6669 | (b'u', b'unknown', None, _(b'show only unknown (not tracked) files')), | |
6666 | (b'i', b'ignored', None, _(b'show only ignored files')), |
|
6670 | (b'i', b'ignored', None, _(b'show only ignored files')), | |
@@ -6791,6 +6795,7 b' def status(ui, repo, *pats, **opts):'
 
     """
 
+    cmdutil.check_at_most_one_arg(opts, 'rev', 'change')
     opts = pycompat.byteskwargs(opts)
     revs = opts.get(b'rev')
     change = opts.get(b'change')
@@ -6801,10 +6806,7 b' def status(ui, repo, *pats, **opts):'
     else:
         terse = ui.config(b'commands', b'status.terse')
 
-    if revs and change:
-        msg = _(b'cannot specify --rev and --change at the same time')
-        raise error.Abort(msg)
-    elif revs and terse:
+    if revs and terse:
         msg = _(b'cannot use --terse with --rev')
         raise error.Abort(msg)
     elif change:
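
The status() hunks above replace the hand-rolled `--rev`/`--change` exclusivity check with a call to `cmdutil.check_at_most_one_arg()`. The helper's body is not part of this diff; the sketch below only illustrates the contract being relied on (abort when more than one of the named options is set) and is not Mercurial's actual implementation::

    def check_at_most_one_arg(opts, *args):
        """Abort if more than one of the named options is set.

        Illustrative stand-in; the real helper raises error.Abort with a
        translated message.
        """
        previous = None
        for x in args:
            if opts.get(x):
                if previous is not None:
                    raise ValueError(
                        'cannot specify both --%s and --%s' % (previous, x)
                    )
                previous = x
        return previous
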
@@ -6940,7 +6942,7 b' def summary(ui, repo, **opts):'
         marks = []
 
     try:
-        ms = mergemod.mergestate.read(repo)
+        ms = mergestatemod.mergestate.read(repo)
     except error.UnsupportedMergeRecords as e:
         s = b' '.join(e.recordtypes)
         ui.warn(
@@ -7809,7 +7811,7 b' def version_(ui, **opts):'
     names = []
     vers = []
     isinternals = []
-    for name, module in extensions.extensions():
+    for name, module in sorted(extensions.extensions()):
         names.append(name)
         vers.append(extensions.moduleversion(module) or None)
         isinternals.append(extensions.ismoduleinternal(module))
@@ -191,7 +191,6 b' class channeledinput(object):'
 
 
 def _selectmessageencoder(ui):
-    # experimental config: cmdserver.message-encodings
     encnames = ui.configlist(b'cmdserver', b'message-encodings')
     for n in encnames:
         f = _messageencoders.get(n)
@@ -234,9 +233,6 b' class server(object):'
             self.ui = self.ui.copy()
             setuplogging(self.ui, repo=None, fp=self.cdebug)
 
-        # TODO: add this to help/config.txt when stabilized
-        # ``channel``
-        #   Use separate channel for structured output. (Command-server only)
         self.cmsg = None
         if ui.config(b'ui', b'message-output') == b'channel':
             encname, encfn = _selectmessageencoder(ui)
@@ -244,8 +240,23 b' class server(object):'
 
         self.client = fin
 
+        # If shutdown-on-interrupt is off, the default SIGINT handler is
+        # removed so that client-server communication wouldn't be interrupted.
+        # For example, 'runcommand' handler will issue three short read()s.
+        # If one of the first two read()s were interrupted, the communication
+        # channel would be left at dirty state and the subsequent request
+        # wouldn't be parsed. So catching KeyboardInterrupt isn't enough.
+        self._shutdown_on_interrupt = ui.configbool(
+            b'cmdserver', b'shutdown-on-interrupt'
+        )
+        self._old_inthandler = None
+        if not self._shutdown_on_interrupt:
+            self._old_inthandler = signal.signal(signal.SIGINT, signal.SIG_IGN)
+
     def cleanup(self):
         """release and restore resources taken during server session"""
+        if not self._shutdown_on_interrupt:
+            signal.signal(signal.SIGINT, self._old_inthandler)
 
     def _read(self, size):
         if not size:
@@ -278,6 +289,32 b' class server(object):'
         else:
             return []
 
+    def _dispatchcommand(self, req):
+        from . import dispatch  # avoid cycle
+
+        if self._shutdown_on_interrupt:
+            # no need to restore SIGINT handler as it is unmodified.
+            return dispatch.dispatch(req)
+
+        try:
+            signal.signal(signal.SIGINT, self._old_inthandler)
+            return dispatch.dispatch(req)
+        except error.SignalInterrupt:
+            # propagate SIGBREAK, SIGHUP, or SIGTERM.
+            raise
+        except KeyboardInterrupt:
+            # SIGINT may be received out of the try-except block of dispatch(),
+            # so catch it as last ditch. Another KeyboardInterrupt may be
+            # raised while handling exceptions here, but there's no way to
+            # avoid that except for doing everything in C.
+            pass
+        finally:
+            signal.signal(signal.SIGINT, signal.SIG_IGN)
+        # On KeyboardInterrupt, print error message and exit *after* SIGINT
+        # handler removed.
+        req.ui.error(_(b'interrupted!\n'))
+        return -1
+
     def runcommand(self):
         """ reads a list of \0 terminated arguments, executes
         and writes the return code to the result channel """
@@ -318,7 +355,10 b' class server(object):'
         )
 
         try:
-            ret = 
+            ret = self._dispatchcommand(req) & 255
+            # If shutdown-on-interrupt is off, it's important to write the
+            # result code *after* SIGINT handler removed. If the result code
+            # were lost, the client wouldn't be able to continue processing.
             self.cresult.write(struct.pack(b'>i', int(ret)))
         finally:
             # restore old cwd
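
The command-server hunks above are all about one thing: while the server is reading or writing protocol frames, SIGINT must be ignored (an interrupted read would leave the channel in a dirty state), and the original handler is only restored around dispatch() when `cmdserver.shutdown-on-interrupt` is off. A stripped-down sketch of that save/ignore/restore dance in plain Python (generic names, not the Mercurial classes)::

    import signal

    class Server(object):
        def __init__(self, shutdown_on_interrupt=True):
            self._shutdown_on_interrupt = shutdown_on_interrupt
            self._old_inthandler = None
            if not shutdown_on_interrupt:
                # protocol I/O must not be interrupted mid-frame
                self._old_inthandler = signal.signal(signal.SIGINT, signal.SIG_IGN)

        def dispatch(self, run):
            if self._shutdown_on_interrupt:
                return run()
            try:
                # only the user's command may be interrupted
                signal.signal(signal.SIGINT, self._old_inthandler)
                return run()
            except KeyboardInterrupt:
                return -1
            finally:
                signal.signal(signal.SIGINT, signal.SIG_IGN)
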
@@ -204,7 +204,7 b' coreconfigitem(' | |||||
204 | b'cmdserver', b'max-repo-cache', default=0, experimental=True, |
|
204 | b'cmdserver', b'max-repo-cache', default=0, experimental=True, | |
205 | ) |
|
205 | ) | |
206 | coreconfigitem( |
|
206 | coreconfigitem( | |
207 |
b'cmdserver', b'message-encodings', default=list, |
|
207 | b'cmdserver', b'message-encodings', default=list, | |
208 | ) |
|
208 | ) | |
209 | coreconfigitem( |
|
209 | coreconfigitem( | |
210 | b'cmdserver', |
|
210 | b'cmdserver', | |
@@ -212,6 +212,9 b' coreconfigitem('
     default=lambda: [b'chgserver', b'cmdserver', b'repocache'],
 )
 coreconfigitem(
+    b'cmdserver', b'shutdown-on-interrupt', default=True,
+)
+coreconfigitem(
     b'color', b'.*', default=None, generic=True,
 )
 coreconfigitem(
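
The hunk above registers the new `cmdserver.shutdown-on-interrupt` option with a default of True, which keeps the historical behaviour (Ctrl-C terminates the server). A client that wants the server to keep running across interrupts would presumably disable it in its configuration, for example::

    [cmdserver]
    shutdown-on-interrupt = False
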
@@ -405,18 +408,6 b' coreconfigitem('
 coreconfigitem(
     b'devel', b'legacy.exchange', default=list,
 )
-# TODO before getting `persistent-nodemap` out of experimental
-#
-# * decide for a "status" of the persistent nodemap and associated location
-#   - part of the store next the revlog itself (new requirements)
-#   - part of the cache directory
-#   - part of an `index` directory
-#     (https://www.mercurial-scm.org/wiki/ComputedIndexPlan)
-# * do we want to use this for more than just changelog? if so we need:
-#   - simpler "pending" logic for them
-#   - double check the memory story (we dont want to keep all revlog in memory)
-#   - think about the naming scheme if we are in "cache"
-# * increment the version format to "1" and freeze it.
 coreconfigitem(
     b'devel', b'persistent-nodemap', default=False,
 )
@@ -675,12 +666,6 b' coreconfigitem('
     b'experimental', b'rust.index', default=False,
 )
 coreconfigitem(
-    b'experimental', b'exp-persistent-nodemap', default=False,
-)
-coreconfigitem(
-    b'experimental', b'exp-persistent-nodemap.mmap', default=True,
-)
-coreconfigitem(
     b'experimental', b'server.filesdata.recommended-batch-size', default=50000,
 )
 coreconfigitem(
@@ -783,6 +768,12 b' coreconfigitem('
 coreconfigitem(
     b'format', b'usestore', default=True,
 )
+# Right now, the only efficient implement of the nodemap logic is in Rust, so
+# the persistent nodemap feature needs to stay experimental as long as the Rust
+# extensions are an experimental feature.
+coreconfigitem(
+    b'format', b'use-persistent-nodemap', default=False, experimental=True
+)
 coreconfigitem(
     b'format',
     b'exp-use-copies-side-data-changeset',
@@ -820,9 +811,6 b' coreconfigitem(' | |||||
820 | b'hostsecurity', b'ciphers', default=None, |
|
811 | b'hostsecurity', b'ciphers', default=None, | |
821 | ) |
|
812 | ) | |
822 | coreconfigitem( |
|
813 | coreconfigitem( | |
823 | b'hostsecurity', b'disabletls10warning', default=False, |
|
|||
824 | ) |
|
|||
825 | coreconfigitem( |
|
|||
826 | b'hostsecurity', b'minimumprotocol', default=dynamicdefault, |
|
814 | b'hostsecurity', b'minimumprotocol', default=dynamicdefault, | |
827 | ) |
|
815 | ) | |
828 | coreconfigitem( |
|
816 | coreconfigitem( | |
@@ -1080,6 +1068,9 b' coreconfigitem('
     b'rewrite', b'update-timestamp', default=False,
 )
 coreconfigitem(
+    b'rewrite', b'empty-successor', default=b'skip', experimental=True,
+)
+coreconfigitem(
     b'storage', b'new-repo-backend', default=b'revlogv1', experimental=True,
 )
 coreconfigitem(
@@ -1088,6 +1079,14 b' coreconfigitem('
     default=True,
     alias=[(b'format', b'aggressivemergedeltas')],
 )
+# experimental as long as rust is experimental (or a C version is implemented)
+coreconfigitem(
+    b'storage', b'revlog.nodemap.mmap', default=True, experimental=True
+)
+# experimental as long as format.use-persistent-nodemap is.
+coreconfigitem(
+    b'storage', b'revlog.nodemap.mode', default=b'compat', experimental=True
+)
 coreconfigitem(
     b'storage', b'revlog.reuse-external-delta', default=True,
 )
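
Taken together, the configitems hunks above retire `experimental.exp-persistent-nodemap*` and re-home the feature under `format.use-persistent-nodemap`, with the low-level knobs as `storage.revlog.nodemap.mmap` and `storage.revlog.nodemap.mode` (default `compat`). Assuming an hgrc is the intended way to opt in while the feature remains experimental, enabling it would look roughly like::

    [format]
    use-persistent-nodemap = yes

    [storage]
    revlog.nodemap.mmap = yes
    revlog.nodemap.mode = compat
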
@@ -1235,6 +1234,10 b' coreconfigitem(' | |||||
1235 | b'ui', b'askusername', default=False, |
|
1234 | b'ui', b'askusername', default=False, | |
1236 | ) |
|
1235 | ) | |
1237 | coreconfigitem( |
|
1236 | coreconfigitem( | |
|
1237 | b'ui', b'available-memory', default=None, | |||
|
1238 | ) | |||
|
1239 | ||||
|
1240 | coreconfigitem( | |||
1238 | b'ui', b'clonebundlefallback', default=False, |
|
1241 | b'ui', b'clonebundlefallback', default=False, | |
1239 | ) |
|
1242 | ) | |
1240 | coreconfigitem( |
|
1243 | coreconfigitem( | |
@@ -1391,6 +1394,9 b' coreconfigitem(' | |||||
1391 | b'ui', b'timeout.warn', default=0, |
|
1394 | b'ui', b'timeout.warn', default=0, | |
1392 | ) |
|
1395 | ) | |
1393 | coreconfigitem( |
|
1396 | coreconfigitem( | |
|
1397 | b'ui', b'timestamp-output', default=False, | |||
|
1398 | ) | |||
|
1399 | coreconfigitem( | |||
1394 | b'ui', b'traceback', default=False, |
|
1400 | b'ui', b'traceback', default=False, | |
1395 | ) |
|
1401 | ) | |
1396 | coreconfigitem( |
|
1402 | coreconfigitem( |
@@ -28,12 +28,13 b' from .pycompat import ('
     open,
 )
 from . import (
-    copies,
     dagop,
     encoding,
     error,
     fileset,
     match as matchmod,
+    mergestate as mergestatemod,
+    metadata,
     obsolete as obsmod,
     patch,
     pathutil,
@@ -299,7 +300,7 b' class basectx(object):'
 
     @propertycache
     def _copies(self):
-        return copies.computechangesetcopies(self)
+        return metadata.computechangesetcopies(self)
 
     def p1copies(self):
         return self._copies[0]
@@ -474,6 +475,20 b' class basectx(object):'
 
         return r
 
+    def mergestate(self, clean=False):
+        """Get a mergestate object for this context."""
+        raise NotImplementedError(
+            '%s does not implement mergestate()' % self.__class__
+        )
+
+    def isempty(self):
+        return not (
+            len(self.parents()) > 1
+            or self.branch() != self.p1().branch()
+            or self.closesbranch()
+            or self.files()
+        )
+
 
 class changectx(basectx):
     """A changecontext object makes access to data related to a particular
@@ -582,7 +597,7 b' class changectx(basectx):' | |||||
582 | filesadded = None |
|
597 | filesadded = None | |
583 | if filesadded is None: |
|
598 | if filesadded is None: | |
584 | if compute_on_none: |
|
599 | if compute_on_none: | |
585 |
filesadded = |
|
600 | filesadded = metadata.computechangesetfilesadded(self) | |
586 | else: |
|
601 | else: | |
587 | filesadded = [] |
|
602 | filesadded = [] | |
588 | return filesadded |
|
603 | return filesadded | |
@@ -601,7 +616,7 b' class changectx(basectx):' | |||||
601 | filesremoved = None |
|
616 | filesremoved = None | |
602 | if filesremoved is None: |
|
617 | if filesremoved is None: | |
603 | if compute_on_none: |
|
618 | if compute_on_none: | |
604 |
filesremoved = |
|
619 | filesremoved = metadata.computechangesetfilesremoved(self) | |
605 | else: |
|
620 | else: | |
606 | filesremoved = [] |
|
621 | filesremoved = [] | |
607 | return filesremoved |
|
622 | return filesremoved | |
@@ -2009,6 +2024,11 b' class workingctx(committablectx):'
 
         sparse.aftercommit(self._repo, node)
 
+    def mergestate(self, clean=False):
+        if clean:
+            return mergestatemod.mergestate.clean(self._repo)
+        return mergestatemod.mergestate.read(self._repo)
+
 
 class committablefilectx(basefilectx):
     """A committablefilectx provides common functionality for a file context
@@ -2310,7 +2330,7 b' class overlayworkingctx(committablectx):' | |||||
2310 | return self._cache[path][b'flags'] |
|
2330 | return self._cache[path][b'flags'] | |
2311 | else: |
|
2331 | else: | |
2312 | raise error.ProgrammingError( |
|
2332 | raise error.ProgrammingError( | |
2313 |
b"No such file or directory: %s" % |
|
2333 | b"No such file or directory: %s" % path | |
2314 | ) |
|
2334 | ) | |
2315 | else: |
|
2335 | else: | |
2316 | return self._wrappedctx[path].flags() |
|
2336 | return self._wrappedctx[path].flags() | |
@@ -2427,7 +2447,7 b' class overlayworkingctx(committablectx):' | |||||
2427 | return len(self._cache[path][b'data']) |
|
2447 | return len(self._cache[path][b'data']) | |
2428 | else: |
|
2448 | else: | |
2429 | raise error.ProgrammingError( |
|
2449 | raise error.ProgrammingError( | |
2430 |
b"No such file or directory: %s" % |
|
2450 | b"No such file or directory: %s" % path | |
2431 | ) |
|
2451 | ) | |
2432 | return self._wrappedctx[path].size() |
|
2452 | return self._wrappedctx[path].size() | |
2433 |
|
2453 | |||
@@ -2507,48 +2527,9 b' class overlayworkingctx(committablectx):' | |||||
2507 | def isdirty(self, path): |
|
2527 | def isdirty(self, path): | |
2508 | return path in self._cache |
|
2528 | return path in self._cache | |
2509 |
|
2529 | |||
2510 | def isempty(self): |
|
|||
2511 | # We need to discard any keys that are actually clean before the empty |
|
|||
2512 | # commit check. |
|
|||
2513 | self._compact() |
|
|||
2514 | return len(self._cache) == 0 |
|
|||
2515 |
|
||||
2516 | def clean(self): |
|
2530 | def clean(self): | |
2517 | self._cache = {} |
|
2531 | self._cache = {} | |
2518 |
|
2532 | |||
2519 | def _compact(self): |
|
|||
2520 | """Removes keys from the cache that are actually clean, by comparing |
|
|||
2521 | them with the underlying context. |
|
|||
2522 |
|
||||
2523 | This can occur during the merge process, e.g. by passing --tool :local |
|
|||
2524 | to resolve a conflict. |
|
|||
2525 | """ |
|
|||
2526 | keys = [] |
|
|||
2527 | # This won't be perfect, but can help performance significantly when |
|
|||
2528 | # using things like remotefilelog. |
|
|||
2529 | scmutil.prefetchfiles( |
|
|||
2530 | self.repo(), |
|
|||
2531 | [self.p1().rev()], |
|
|||
2532 | scmutil.matchfiles(self.repo(), self._cache.keys()), |
|
|||
2533 | ) |
|
|||
2534 |
|
||||
2535 | for path in self._cache.keys(): |
|
|||
2536 | cache = self._cache[path] |
|
|||
2537 | try: |
|
|||
2538 | underlying = self._wrappedctx[path] |
|
|||
2539 | if ( |
|
|||
2540 | underlying.data() == cache[b'data'] |
|
|||
2541 | and underlying.flags() == cache[b'flags'] |
|
|||
2542 | ): |
|
|||
2543 | keys.append(path) |
|
|||
2544 | except error.ManifestLookupError: |
|
|||
2545 | # Path not in the underlying manifest (created). |
|
|||
2546 | continue |
|
|||
2547 |
|
||||
2548 | for path in keys: |
|
|||
2549 | del self._cache[path] |
|
|||
2550 | return keys |
|
|||
2551 |
|
||||
2552 | def _markdirty( |
|
2533 | def _markdirty( | |
2553 | self, path, exists, data=None, date=None, flags=b'', copied=None |
|
2534 | self, path, exists, data=None, date=None, flags=b'', copied=None | |
2554 | ): |
|
2535 | ): | |
@@ -2867,6 +2848,11 b' class memctx(committablectx):'
 
         return scmutil.status(modified, added, removed, [], [], [], [])
 
+    def parents(self):
+        if self._parents[1].node() == nullid:
+            return [self._parents[0]]
+        return self._parents
+
 
 class memfilectx(committablefilectx):
     """memfilectx represents an in-memory file to commit.
@@ -8,7 +8,6 b''
 from __future__ import absolute_import
 
 import collections
-import multiprocessing
 import os
 
 from .i18n import _
@@ -17,7 +16,6 b' from .i18n import _'
 from .revlogutils.flagutil import REVIDX_SIDEDATA
 
 from . import (
-    error,
     match as matchmod,
     node,
     pathutil,
@@ -25,7 +23,6 b' from . import ('
     util,
 )
 
-from .revlogutils import sidedata as sidedatamod
 
 from .utils import stringutil
 
@@ -183,10 +180,27 b' def _revinfogetter(repo):'
     * p1copies: mapping of copies from p1
     * p2copies: mapping of copies from p2
     * removed: a list of removed files
+    * ismerged: a callback to know if file was merged in that revision
     """
     cl = repo.changelog
     parents = cl.parentrevs
 
+    def get_ismerged(rev):
+        ctx = repo[rev]
+
+        def ismerged(path):
+            if path not in ctx.files():
+                return False
+            fctx = ctx[path]
+            parents = fctx._filelog.parents(fctx._filenode)
+            nb_parents = 0
+            for n in parents:
+                if n != node.nullid:
+                    nb_parents += 1
+            return nb_parents >= 2
+
+        return ismerged
+
     if repo.filecopiesmode == b'changeset-sidedata':
         changelogrevision = cl.changelogrevision
         flags = cl.flags
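
The `get_ismerged` closure above decides whether a path was merged in a revision by counting the non-null parents of its filelog entry. The same test in isolation, with a plain (p1, p2) pair standing in for `filelog.parents()` (assumption: a null entry means "no parent on that side")::

    NULLID = b'\0' * 20

    def is_merged(filelog_parents):
        """True when a file revision has two real parents."""
        return sum(1 for n in filelog_parents if n != NULLID) >= 2

    assert not is_merged((b'\x01' * 20, NULLID))
    assert is_merged((b'\x01' * 20, b'\x02' * 20))
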
@@ -218,6 +232,7 b' def _revinfogetter(repo):' | |||||
218 |
|
232 | |||
219 | def revinfo(rev): |
|
233 | def revinfo(rev): | |
220 | p1, p2 = parents(rev) |
|
234 | p1, p2 = parents(rev) | |
|
235 | value = None | |||
221 | if flags(rev) & REVIDX_SIDEDATA: |
|
236 | if flags(rev) & REVIDX_SIDEDATA: | |
222 | e = merge_caches.pop(rev, None) |
|
237 | e = merge_caches.pop(rev, None) | |
223 | if e is not None: |
|
238 | if e is not None: | |
@@ -228,12 +243,22 b' def _revinfogetter(repo):' | |||||
228 | removed = c.filesremoved |
|
243 | removed = c.filesremoved | |
229 | if p1 != node.nullrev and p2 != node.nullrev: |
|
244 | if p1 != node.nullrev and p2 != node.nullrev: | |
230 | # XXX some case we over cache, IGNORE |
|
245 | # XXX some case we over cache, IGNORE | |
231 |
merge_caches[rev] = ( |
|
246 | value = merge_caches[rev] = ( | |
|
247 | p1, | |||
|
248 | p2, | |||
|
249 | p1copies, | |||
|
250 | p2copies, | |||
|
251 | removed, | |||
|
252 | get_ismerged(rev), | |||
|
253 | ) | |||
232 | else: |
|
254 | else: | |
233 | p1copies = {} |
|
255 | p1copies = {} | |
234 | p2copies = {} |
|
256 | p2copies = {} | |
235 | removed = [] |
|
257 | removed = [] | |
236 | return p1, p2, p1copies, p2copies, removed |
|
258 | ||
|
259 | if value is None: | |||
|
260 | value = (p1, p2, p1copies, p2copies, removed, get_ismerged(rev)) | |||
|
261 | return value | |||
237 |
|
262 | |||
238 | else: |
|
263 | else: | |
239 |
|
264 | |||
@@ -242,7 +267,7 b' def _revinfogetter(repo):' | |||||
242 | ctx = repo[rev] |
|
267 | ctx = repo[rev] | |
243 | p1copies, p2copies = ctx._copies |
|
268 | p1copies, p2copies = ctx._copies | |
244 | removed = ctx.filesremoved() |
|
269 | removed = ctx.filesremoved() | |
245 | return p1, p2, p1copies, p2copies, removed |
|
270 | return p1, p2, p1copies, p2copies, removed, get_ismerged(rev) | |
246 |
|
271 | |||
247 | return revinfo |
|
272 | return revinfo | |
248 |
|
273 | |||
@@ -256,6 +281,7 b' def _changesetforwardcopies(a, b, match)' | |||||
256 | revinfo = _revinfogetter(repo) |
|
281 | revinfo = _revinfogetter(repo) | |
257 |
|
282 | |||
258 | cl = repo.changelog |
|
283 | cl = repo.changelog | |
|
284 | isancestor = cl.isancestorrev # XXX we should had chaching to this. | |||
259 | missingrevs = cl.findmissingrevs(common=[a.rev()], heads=[b.rev()]) |
|
285 | missingrevs = cl.findmissingrevs(common=[a.rev()], heads=[b.rev()]) | |
260 | mrset = set(missingrevs) |
|
286 | mrset = set(missingrevs) | |
261 | roots = set() |
|
287 | roots = set() | |
@@ -283,10 +309,14 b' def _changesetforwardcopies(a, b, match)' | |||||
283 | iterrevs.update(roots) |
|
309 | iterrevs.update(roots) | |
284 | iterrevs.remove(b.rev()) |
|
310 | iterrevs.remove(b.rev()) | |
285 | revs = sorted(iterrevs) |
|
311 | revs = sorted(iterrevs) | |
286 |
return _combinechangesetcopies( |
|
312 | return _combinechangesetcopies( | |
|
313 | revs, children, b.rev(), revinfo, match, isancestor | |||
|
314 | ) | |||
287 |
|
315 | |||
288 |
|
316 | |||
289 | def _combinechangesetcopies(revs, children, targetrev, revinfo, match): |
|
317 | def _combinechangesetcopies( | |
|
318 | revs, children, targetrev, revinfo, match, isancestor | |||
|
319 | ): | |||
290 | """combine the copies information for each item of iterrevs |
|
320 | """combine the copies information for each item of iterrevs | |
291 |
|
321 | |||
292 | revs: sorted iterable of revision to visit |
|
322 | revs: sorted iterable of revision to visit | |
@@ -305,7 +335,7 b' def _combinechangesetcopies(revs, childr' | |||||
305 | # this is a root |
|
335 | # this is a root | |
306 | copies = {} |
|
336 | copies = {} | |
307 | for i, c in enumerate(children[r]): |
|
337 | for i, c in enumerate(children[r]): | |
308 | p1, p2, p1copies, p2copies, removed = revinfo(c) |
|
338 | p1, p2, p1copies, p2copies, removed, ismerged = revinfo(c) | |
309 | if r == p1: |
|
339 | if r == p1: | |
310 | parent = 1 |
|
340 | parent = 1 | |
311 | childcopies = p1copies |
|
341 | childcopies = p1copies | |
@@ -319,9 +349,12 b' def _combinechangesetcopies(revs, childr' | |||||
319 | } |
|
349 | } | |
320 | newcopies = copies |
|
350 | newcopies = copies | |
321 | if childcopies: |
|
351 | if childcopies: | |
322 |
newcopies = |
|
352 | newcopies = copies.copy() | |
323 | # _chain makes a copies, we can avoid doing so in some |
|
353 | for dest, source in pycompat.iteritems(childcopies): | |
324 | # simple/linear cases. |
|
354 | prev = copies.get(source) | |
|
355 | if prev is not None and prev[1] is not None: | |||
|
356 | source = prev[1] | |||
|
357 | newcopies[dest] = (c, source) | |||
325 | assert newcopies is not copies |
|
358 | assert newcopies is not copies | |
326 | for f in removed: |
|
359 | for f in removed: | |
327 | if f in newcopies: |
|
360 | if f in newcopies: | |
@@ -330,7 +363,7 b' def _combinechangesetcopies(revs, childr' | |||||
330 | # branches. when there are no other branches, this |
|
363 | # branches. when there are no other branches, this | |
331 | # could be avoided. |
|
364 | # could be avoided. | |
332 | newcopies = copies.copy() |
|
365 | newcopies = copies.copy() | |
333 |
|
|
366 | newcopies[f] = (c, None) | |
334 | othercopies = all_copies.get(c) |
|
367 | othercopies = all_copies.get(c) | |
335 | if othercopies is None: |
|
368 | if othercopies is None: | |
336 | all_copies[c] = newcopies |
|
369 | all_copies[c] = newcopies | |
@@ -338,21 +371,55 b' def _combinechangesetcopies(revs, childr' | |||||
338 | # we are the second parent to work on c, we need to merge our |
|
371 | # we are the second parent to work on c, we need to merge our | |
339 | # work with the other. |
|
372 | # work with the other. | |
340 | # |
|
373 | # | |
341 | # Unlike when copies are stored in the filelog, we consider |
|
|||
342 | # it a copy even if the destination already existed on the |
|
|||
343 | # other branch. It's simply too expensive to check if the |
|
|||
344 | # file existed in the manifest. |
|
|||
345 | # |
|
|||
346 | # In case of conflict, parent 1 take precedence over parent 2. |
|
374 | # In case of conflict, parent 1 take precedence over parent 2. | |
347 | # This is an arbitrary choice made anew when implementing |
|
375 | # This is an arbitrary choice made anew when implementing | |
348 | # changeset based copies. It was made without regards with |
|
376 | # changeset based copies. It was made without regards with | |
349 | # potential filelog related behavior. |
|
377 | # potential filelog related behavior. | |
350 | if parent == 1: |
|
378 | if parent == 1: | |
351 |
|
|
379 | _merge_copies_dict( | |
|
380 | othercopies, newcopies, isancestor, ismerged | |||
|
381 | ) | |||
352 | else: |
|
382 | else: | |
353 |
|
|
383 | _merge_copies_dict( | |
|
384 | newcopies, othercopies, isancestor, ismerged | |||
|
385 | ) | |||
354 | all_copies[c] = newcopies |
|
386 | all_copies[c] = newcopies | |
355 | return all_copies[targetrev] |
|
387 | ||
|
388 | final_copies = {} | |||
|
389 | for dest, (tt, source) in all_copies[targetrev].items(): | |||
|
390 | if source is not None: | |||
|
391 | final_copies[dest] = source | |||
|
392 | return final_copies | |||
|
393 | ||||
|
394 | ||||
|
395 | def _merge_copies_dict(minor, major, isancestor, ismerged): | |||
|
396 | """merge two copies-mapping together, minor and major | |||
|
397 | ||||
|
398 | In case of conflict, value from "major" will be picked. | |||
|
399 | ||||
|
400 | - `isancestors(low_rev, high_rev)`: callable return True if `low_rev` is an | |||
|
401 | ancestors of `high_rev`, | |||
|
402 | ||||
|
403 | - `ismerged(path)`: callable return True if `path` have been merged in the | |||
|
404 | current revision, | |||
|
405 | """ | |||
|
406 | for dest, value in major.items(): | |||
|
407 | other = minor.get(dest) | |||
|
408 | if other is None: | |||
|
409 | minor[dest] = value | |||
|
410 | else: | |||
|
411 | new_tt = value[0] | |||
|
412 | other_tt = other[0] | |||
|
413 | if value[1] == other[1]: | |||
|
414 | continue | |||
|
415 | # content from "major" wins, unless it is older | |||
|
416 | # than the branch point or there is a merge | |||
|
417 | if ( | |||
|
418 | new_tt == other_tt | |||
|
419 | or not isancestor(new_tt, other_tt) | |||
|
420 | or ismerged(dest) | |||
|
421 | ): | |||
|
422 | minor[dest] = value | |||
356 |
|
423 | |||
357 |
|
424 | |||
358 | def _forwardcopies(a, b, base=None, match=None): |
|
425 | def _forwardcopies(a, b, base=None, match=None): | |
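
The `_merge_copies_dict` helper introduced above reconciles the copy information coming from the two parents: entries now map dest -> (rev, source), and the "major" side wins unless its entry is strictly older (an ancestor of the other revision) and the destination was not touched by a merge. A self-contained sketch of that rule over plain dicts, with toy `isancestor`/`ismerged` callables standing in for the changelog query and the ismerged callback::

    def merge_copies_dict(minor, major, isancestor, ismerged):
        # merge `major` into `minor`; both map dest -> (rev, source)
        for dest, value in major.items():
            other = minor.get(dest)
            if other is None:
                minor[dest] = value
                continue
            if value[1] == other[1]:
                continue
            new_tt, other_tt = value[0], other[0]
            # content from "major" wins, unless it is older than the branch
            # point or there is a merge
            if new_tt == other_tt or not isancestor(new_tt, other_tt) or ismerged(dest):
                minor[dest] = value

    # toy linear history: lower rev == ancestor; rev 7 is newer, so major wins
    minor = {b'a': (5, b'src-from-p2')}
    major = {b'a': (7, b'src-from-p1')}
    merge_copies_dict(minor, major, isancestor=lambda lo, hi: lo < hi,
                      ismerged=lambda path: False)
    assert minor[b'a'] == (7, b'src-from-p1')
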
@@ -569,6 +636,12 b' class branch_copies(object):'
         self.dirmove = {} if dirmove is None else dirmove
         self.movewithdir = {} if movewithdir is None else movewithdir
 
+    def __repr__(self):
+        return (
+            '<branch_copies\n copy=%r\n renamedelete=%r\n dirmove=%r\n movewithdir=%r\n>'
+            % (self.copy, self.renamedelete, self.dirmove, self.movewithdir,)
+        )
+
 
 def _fullcopytracing(repo, c1, c2, base):
     """ The full copytracing algorithm which finds all the new files that were
@@ -922,250 +995,3 b' def graftcopies(wctx, ctx, base):' | |||||
922 | _filter(wctx.p1(), wctx, new_copies) |
|
995 | _filter(wctx.p1(), wctx, new_copies) | |
923 | for dst, src in pycompat.iteritems(new_copies): |
|
996 | for dst, src in pycompat.iteritems(new_copies): | |
924 | wctx[dst].markcopied(src) |
|
997 | wctx[dst].markcopied(src) | |
925 |
|
||||
926 |
|
||||
927 | def computechangesetfilesadded(ctx): |
|
|||
928 | """return the list of files added in a changeset |
|
|||
929 | """ |
|
|||
930 | added = [] |
|
|||
931 | for f in ctx.files(): |
|
|||
932 | if not any(f in p for p in ctx.parents()): |
|
|||
933 | added.append(f) |
|
|||
934 | return added |
|
|||
935 |
|
||||
936 |
|
||||
937 | def computechangesetfilesremoved(ctx): |
|
|||
938 | """return the list of files removed in a changeset |
|
|||
939 | """ |
|
|||
940 | removed = [] |
|
|||
941 | for f in ctx.files(): |
|
|||
942 | if f not in ctx: |
|
|||
943 | removed.append(f) |
|
|||
944 | return removed |
|
|||
945 |
|
||||
946 |
|
||||
947 | def computechangesetcopies(ctx): |
|
|||
948 | """return the copies data for a changeset |
|
|||
949 |
|
||||
950 | The copies data are returned as a pair of dictionnary (p1copies, p2copies). |
|
|||
951 |
|
||||
952 | Each dictionnary are in the form: `{newname: oldname}` |
|
|||
953 | """ |
|
|||
954 | p1copies = {} |
|
|||
955 | p2copies = {} |
|
|||
956 | p1 = ctx.p1() |
|
|||
957 | p2 = ctx.p2() |
|
|||
958 | narrowmatch = ctx._repo.narrowmatch() |
|
|||
959 | for dst in ctx.files(): |
|
|||
960 | if not narrowmatch(dst) or dst not in ctx: |
|
|||
961 | continue |
|
|||
962 | copied = ctx[dst].renamed() |
|
|||
963 | if not copied: |
|
|||
964 | continue |
|
|||
965 | src, srcnode = copied |
|
|||
966 | if src in p1 and p1[src].filenode() == srcnode: |
|
|||
967 | p1copies[dst] = src |
|
|||
968 | elif src in p2 and p2[src].filenode() == srcnode: |
|
|||
969 | p2copies[dst] = src |
|
|||
970 | return p1copies, p2copies |
|
|||
971 |
|
||||
972 |
|
||||
973 | def encodecopies(files, copies): |
|
|||
974 | items = [] |
|
|||
975 | for i, dst in enumerate(files): |
|
|||
976 | if dst in copies: |
|
|||
977 | items.append(b'%d\0%s' % (i, copies[dst])) |
|
|||
978 | if len(items) != len(copies): |
|
|||
979 | raise error.ProgrammingError( |
|
|||
980 | b'some copy targets missing from file list' |
|
|||
981 | ) |
|
|||
982 | return b"\n".join(items) |
|
|||
983 |
|
||||
984 |
|
||||
985 | def decodecopies(files, data): |
|
|||
986 | try: |
|
|||
987 | copies = {} |
|
|||
988 | if not data: |
|
|||
989 | return copies |
|
|||
990 | for l in data.split(b'\n'): |
|
|||
991 | strindex, src = l.split(b'\0') |
|
|||
992 | i = int(strindex) |
|
|||
993 | dst = files[i] |
|
|||
994 | copies[dst] = src |
|
|||
995 | return copies |
|
|||
996 | except (ValueError, IndexError): |
|
|||
997 | # Perhaps someone had chosen the same key name (e.g. "p1copies") and |
|
|||
998 | # used different syntax for the value. |
|
|||
999 | return None |
|
|||
1000 |
|
||||
1001 |
|
||||
1002 | def encodefileindices(files, subset): |
|
|||
1003 | subset = set(subset) |
|
|||
1004 | indices = [] |
|
|||
1005 | for i, f in enumerate(files): |
|
|||
1006 | if f in subset: |
|
|||
1007 | indices.append(b'%d' % i) |
|
|||
1008 | return b'\n'.join(indices) |
|
|||
1009 |
|
||||
1010 |
|
||||
1011 | def decodefileindices(files, data): |
|
|||
1012 | try: |
|
|||
1013 | subset = [] |
|
|||
1014 | if not data: |
|
|||
1015 | return subset |
|
|||
1016 | for strindex in data.split(b'\n'): |
|
|||
1017 | i = int(strindex) |
|
|||
1018 | if i < 0 or i >= len(files): |
|
|||
1019 | return None |
|
|||
1020 | subset.append(files[i]) |
|
|||
1021 | return subset |
|
|||
1022 | except (ValueError, IndexError): |
|
|||
1023 | # Perhaps someone had chosen the same key name (e.g. "added") and |
|
|||
1024 | # used different syntax for the value. |
|
|||
1025 | return None |
|
|||
1026 |
|
||||
1027 |
|
||||
1028 | def _getsidedata(srcrepo, rev): |
|
|||
1029 | ctx = srcrepo[rev] |
|
|||
1030 | filescopies = computechangesetcopies(ctx) |
|
|||
1031 | filesadded = computechangesetfilesadded(ctx) |
|
|||
1032 | filesremoved = computechangesetfilesremoved(ctx) |
|
|||
1033 | sidedata = {} |
|
|||
1034 | if any([filescopies, filesadded, filesremoved]): |
|
|||
1035 | sortedfiles = sorted(ctx.files()) |
|
|||
1036 | p1copies, p2copies = filescopies |
|
|||
1037 | p1copies = encodecopies(sortedfiles, p1copies) |
|
|||
1038 | p2copies = encodecopies(sortedfiles, p2copies) |
|
|||
1039 | filesadded = encodefileindices(sortedfiles, filesadded) |
|
|||
1040 | filesremoved = encodefileindices(sortedfiles, filesremoved) |
|
|||
1041 | if p1copies: |
|
|||
1042 | sidedata[sidedatamod.SD_P1COPIES] = p1copies |
|
|||
1043 | if p2copies: |
|
|||
1044 | sidedata[sidedatamod.SD_P2COPIES] = p2copies |
|
|||
1045 | if filesadded: |
|
|||
1046 | sidedata[sidedatamod.SD_FILESADDED] = filesadded |
|
|||
1047 | if filesremoved: |
|
|||
1048 | sidedata[sidedatamod.SD_FILESREMOVED] = filesremoved |
|
|||
1049 | return sidedata |
|
|||
1050 |
|
||||
1051 |
|
||||
1052 | def getsidedataadder(srcrepo, destrepo): |
|
|||
1053 | use_w = srcrepo.ui.configbool(b'experimental', b'worker.repository-upgrade') |
|
|||
1054 | if pycompat.iswindows or not use_w: |
|
|||
1055 | return _get_simple_sidedata_adder(srcrepo, destrepo) |
|
|||
1056 | else: |
|
|||
1057 | return _get_worker_sidedata_adder(srcrepo, destrepo) |
|
|||
1058 |
|
||||
1059 |
|
||||
1060 | def _sidedata_worker(srcrepo, revs_queue, sidedata_queue, tokens): |
|
|||
1061 | """The function used by worker precomputing sidedata |
|
|||
1062 |
|
||||
1063 | It read an input queue containing revision numbers |
|
|||
1064 | It write in an output queue containing (rev, <sidedata-map>) |
|
|||
1065 |
|
||||
1066 | The `None` input value is used as a stop signal. |
|
|||
1067 |
|
||||
1068 | The `tokens` semaphore is user to avoid having too many unprocessed |
|
|||
1069 | entries. The workers needs to acquire one token before fetching a task. |
|
|||
1070 | They will be released by the consumer of the produced data. |
|
|||
1071 | """ |
|
|||
1072 | tokens.acquire() |
|
|||
1073 | rev = revs_queue.get() |
|
|||
1074 | while rev is not None: |
|
|||
1075 | data = _getsidedata(srcrepo, rev) |
|
|||
1076 | sidedata_queue.put((rev, data)) |
|
|||
1077 | tokens.acquire() |
|
|||
1078 | rev = revs_queue.get() |
|
|||
1079 | # processing of `None` is completed, release the token. |
|
|||
1080 | tokens.release() |
|
|||
1081 |
|
||||
1082 |
|
||||
1083 | BUFF_PER_WORKER = 50 |
|
|||
1084 |
|
||||
1085 |
|
||||
1086 | def _get_worker_sidedata_adder(srcrepo, destrepo): |
|
|||
1087 | """The parallel version of the sidedata computation |
|
|||
1088 |
|
||||
1089 | This code spawn a pool of worker that precompute a buffer of sidedata |
|
|||
1090 | before we actually need them""" |
|
|||
1091 | # avoid circular import copies -> scmutil -> worker -> copies |
|
|||
1092 | from . import worker |
|
|||
1093 |
|
||||
1094 | nbworkers = worker._numworkers(srcrepo.ui) |
|
|||
1095 |
|
||||
1096 | tokens = multiprocessing.BoundedSemaphore(nbworkers * BUFF_PER_WORKER) |
|
|||
1097 | revsq = multiprocessing.Queue() |
|
|||
1098 | sidedataq = multiprocessing.Queue() |
|
|||
1099 |
|
||||
1100 | assert srcrepo.filtername is None |
|
|||
1101 | # queue all tasks beforehand, revision numbers are small and it make |
|
|||
1102 | # synchronisation simpler |
|
|||
1103 | # |
|
|||
1104 | # Since the computation for each node can be quite expensive, the overhead |
|
|||
1105 | # of using a single queue is not revelant. In practice, most computation |
|
|||
1106 | # are fast but some are very expensive and dominate all the other smaller |
|
|||
1107 | # cost. |
|
|||
1108 | for r in srcrepo.changelog.revs(): |
|
|||
1109 | revsq.put(r) |
|
|||
1110 | # queue the "no more tasks" markers |
|
|||
1111 | for i in range(nbworkers): |
|
|||
1112 | revsq.put(None) |
|
|||
1113 |
|
||||
1114 | allworkers = [] |
|
|||
1115 | for i in range(nbworkers): |
|
|||
1116 | args = (srcrepo, revsq, sidedataq, tokens) |
|
|||
1117 | w = multiprocessing.Process(target=_sidedata_worker, args=args) |
|
|||
1118 | allworkers.append(w) |
|
|||
1119 | w.start() |
|
|||
1120 |
|
||||
1121 | # dictionnary to store results for revision higher than we one we are |
|
|||
1122 | # looking for. For example, if we need the sidedatamap for 42, and 43 is |
|
|||
1123 | # received, when shelve 43 for later use. |
|
|||
1124 | staging = {} |
|
|||
1125 |
|
||||
1126 | def sidedata_companion(revlog, rev): |
|
|||
1127 | sidedata = {} |
|
|||
1128 | if util.safehasattr(revlog, b'filteredrevs'): # this is a changelog |
|
|||
1129 | # Is the data previously shelved ? |
|
|||
1130 | sidedata = staging.pop(rev, None) |
|
|||
1131 | if sidedata is None: |
|
|||
1132 | # look at the queued result until we find the one we are lookig |
|
|||
1133 | # for (shelve the other ones) |
|
|||
1134 | r, sidedata = sidedataq.get() |
|
|||
1135 | while r != rev: |
|
|||
1136 | staging[r] = sidedata |
|
|||
1137 | r, sidedata = sidedataq.get() |
|
|||
1138 | tokens.release() |
|
|||
1139 | return False, (), sidedata |
|
|||
1140 |
|
||||
1141 | return sidedata_companion |
|
|||
1142 |
|
||||
1143 |
|
||||
1144 | def _get_simple_sidedata_adder(srcrepo, destrepo): |
|
|||
1145 | """The simple version of the sidedata computation |
|
|||
1146 |
|
||||
1147 | It just compute it in the same thread on request""" |
|
|||
1148 |
|
||||
1149 | def sidedatacompanion(revlog, rev): |
|
|||
1150 | sidedata = {} |
|
|||
1151 | if util.safehasattr(revlog, 'filteredrevs'): # this is a changelog |
|
|||
1152 | sidedata = _getsidedata(srcrepo, rev) |
|
|||
1153 | return False, (), sidedata |
|
|||
1154 |
|
||||
1155 | return sidedatacompanion |
|
|||
1156 |
|
||||
1157 |
|
||||
1158 | def getsidedataremover(srcrepo, destrepo): |
|
|||
1159 | def sidedatacompanion(revlog, rev): |
|
|||
1160 | f = () |
|
|||
1161 | if util.safehasattr(revlog, 'filteredrevs'): # this is a changelog |
|
|||
1162 | if revlog.flags(rev) & REVIDX_SIDEDATA: |
|
|||
1163 | f = ( |
|
|||
1164 | sidedatamod.SD_P1COPIES, |
|
|||
1165 | sidedatamod.SD_P2COPIES, |
|
|||
1166 | sidedatamod.SD_FILESADDED, |
|
|||
1167 | sidedatamod.SD_FILESREMOVED, |
|
|||
1168 | ) |
|
|||
1169 | return False, f, {} |
|
|||
1170 |
|
||||
1171 | return sidedatacompanion |
|
@@ -20,6 +20,7 b' from .pycompat import ('
     open,
 )
 from . import (
+    diffhelper,
     encoding,
     error,
     patch as patchmod,
@@ -63,15 +64,7 b' try:'
 
     curses.error
 except (ImportError, AttributeError):
-    # I have no idea if wcurses works with crecord...
-    try:
-        import wcurses as curses
-
-        curses.error
-    except (ImportError, AttributeError):
-        # wcurses is not shipped on Windows by default, or python is not
-        # compiled with curses
-        curses = False
+    curses = False
 
 
 class fallbackerror(error.Abort):
@@ -424,7 +417,7 b' class uihunk(patchnode):' | |||||
424 | contextlen = ( |
|
417 | contextlen = ( | |
425 | len(self.before) + len(self.after) + removedconvertedtocontext |
|
418 | len(self.before) + len(self.after) + removedconvertedtocontext | |
426 | ) |
|
419 | ) | |
427 | if self.after and self.after[-1] == b'\\ No newline at end of file\n': |
|
420 | if self.after and self.after[-1] == diffhelper.MISSING_NEWLINE_MARKER: | |
428 | contextlen -= 1 |
|
421 | contextlen -= 1 | |
429 | fromlen = contextlen + self.removed |
|
422 | fromlen = contextlen + self.removed | |
430 | tolen = contextlen + self.added |
|
423 | tolen = contextlen + self.added | |
@@ -508,8 +501,12 b' class uihunk(patchnode):'
         """
         dels = []
         adds = []
+        noeol = False
         for line in self.changedlines:
             text = line.linetext
+            if line.linetext == diffhelper.MISSING_NEWLINE_MARKER:
+                noeol = True
+                break
             if line.applied:
                 if text.startswith(b'+'):
                     dels.append(text[1:])
@@ -519,6 +516,9 b' class uihunk(patchnode):'
                     dels.append(text[1:])
                     adds.append(text[1:])
         hunk = [b'-%s' % l for l in dels] + [b'+%s' % l for l in adds]
+        if noeol and hunk:
+            # Remove the newline from the end of the hunk.
+            hunk[-1] = hunk[-1][:-1]
         h = self._hunk
         return patchmod.recordhunk(
             h.header, h.toline, h.fromline, h.proc, h.before, hunk, h.after
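
The crecord hunks above use `diffhelper.MISSING_NEWLINE_MARKER` (the `\ No newline at end of file` line) both when sizing a hunk and when building the reversed hunk: once the marker is seen, the last emitted line must lose its trailing newline. A small standalone illustration of that trimming step only (plain byte strings, no Mercurial types; the real reversehunk() also swaps the +/- prefixes)::

    MISSING_NEWLINE_MARKER = b'\\ No newline at end of file\n'

    def trim_noeol(changed_lines):
        out = []
        noeol = False
        for text in changed_lines:
            if text == MISSING_NEWLINE_MARKER:
                noeol = True
                break
            out.append(text)
        if noeol and out:
            out[-1] = out[-1][:-1]  # drop the trailing b'\n'
        return out

    assert trim_noeol([b'+new line\n', MISSING_NEWLINE_MARKER]) == [b'+new line']
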
@@ -58,7 +58,7 b' from . import ('
     localrepo,
     lock as lockmod,
     logcmdutil,
-    merge as mergemod,
+    mergestate as mergestatemod,
     obsolete,
     obsutil,
     pathutil,
@@ -127,6 +127,23 b' def debugancestor(ui, repo, *args):'
         ui.write(b'%d:%s\n' % (r.rev(a), hex(a)))
 
 
+@command(b'debugantivirusrunning', [])
+def debugantivirusrunning(ui, repo):
+    """attempt to trigger an antivirus scanner to see if one is active"""
+    with repo.cachevfs.open('eicar-test-file.com', b'wb') as f:
+        f.write(
+            util.b85decode(
+                # This is a base85-armored version of the EICAR test file. See
+                # https://en.wikipedia.org/wiki/EICAR_test_file for details.
+                b'ST#=}P$fV?P+K%yP+C|uG$>GBDK|qyDK~v2MM*<JQY}+dK~6+LQba95P'
+                b'E<)&Nm5l)EmTEQR4qnHOhq9iNGnJx'
+            )
+        )
+    # Give an AV engine time to scan the file.
+    time.sleep(2)
+    util.unlink(repo.cachevfs.join('eicar-test-file.com'))
+
+
 @command(b'debugapplystreamclonebundle', [], b'FILE')
 def debugapplystreamclonebundle(ui, repo, fname):
     """apply a stream clone bundle file"""
@@ -1465,8 +1482,8 b' def debuginstall(ui, **opts):' | |||||
1465 | fm = ui.formatter(b'debuginstall', opts) |
|
1482 | fm = ui.formatter(b'debuginstall', opts) | |
1466 | fm.startitem() |
|
1483 | fm.startitem() | |
1467 |
|
1484 | |||
1468 | # encoding |
|
1485 | # encoding might be unknown or wrong. don't translate these messages. | |
1469 | fm.write(b'encoding', _(b"checking encoding (%s)...\n"), encoding.encoding) |
|
1486 | fm.write(b'encoding', b"checking encoding (%s)...\n", encoding.encoding) | |
1470 | err = None |
|
1487 | err = None | |
1471 | try: |
|
1488 | try: | |
1472 | codecs.lookup(pycompat.sysstr(encoding.encoding)) |
|
1489 | codecs.lookup(pycompat.sysstr(encoding.encoding)) | |
@@ -1476,7 +1493,7 b' def debuginstall(ui, **opts):' | |||||
1476 | fm.condwrite( |
|
1493 | fm.condwrite( | |
1477 | err, |
|
1494 | err, | |
1478 | b'encodingerror', |
|
1495 | b'encodingerror', | |
1479 | _(b" %s\n (check that your locale is properly set)\n"), |
|
1496 | b" %s\n (check that your locale is properly set)\n", | |
1480 | err, |
|
1497 | err, | |
1481 | ) |
|
1498 | ) | |
1482 |
|
1499 | |||
@@ -1650,13 +1667,6 b' def debuginstall(ui, **opts):' | |||||
1650 | fm.plain(_(b'checking "re2" regexp engine (%s)\n') % re2) |
|
1667 | fm.plain(_(b'checking "re2" regexp engine (%s)\n') % re2) | |
1651 | fm.data(re2=bool(util._re2)) |
|
1668 | fm.data(re2=bool(util._re2)) | |
1652 |
|
1669 | |||
1653 | rust_debug_mod = policy.importrust("debug") |
|
|||
1654 | if rust_debug_mod is not None: |
|
|||
1655 | re2_rust = b'installed' if rust_debug_mod.re2_installed else b'missing' |
|
|||
1656 |
|
||||
1657 | msg = b'checking "re2" regexp engine Rust bindings (%s)\n' |
|
|||
1658 | fm.plain(_(msg % re2_rust)) |
|
|||
1659 |
|
||||
1660 | # templates |
|
1670 | # templates | |
1661 | p = templater.templatepaths() |
|
1671 | p = templater.templatepaths() | |
1662 | fm.write(b'templatedirs', b'checking templates (%s)...\n', b' '.join(p)) |
|
1672 | fm.write(b'templatedirs', b'checking templates (%s)...\n', b' '.join(p)) | |
@@ -1974,7 +1984,7 b' def debugmergestate(ui, repo, *args, **o' | |||||
1974 | was chosen.""" |
|
1984 | was chosen.""" | |
1975 |
|
1985 | |||
1976 | if ui.verbose: |
|
1986 | if ui.verbose: | |
1977 | ms = mergemod.mergestate(repo) |
|
1987 | ms = mergestatemod.mergestate(repo) | |
1978 |
|
1988 | |||
1979 | # sort so that reasonable information is on top |
|
1989 | # sort so that reasonable information is on top | |
1980 | v1records = ms._readrecordsv1() |
|
1990 | v1records = ms._readrecordsv1() | |
@@ -2008,7 +2018,7 b' def debugmergestate(ui, repo, *args, **o' | |||||
2008 | b'"}' |
|
2018 | b'"}' | |
2009 | ) |
|
2019 | ) | |
2010 |
|
2020 | |||
2011 | ms = mergemod.mergestate.read(repo) |
|
2021 | ms = mergestatemod.mergestate.read(repo) | |
2012 |
|
2022 | |||
2013 | fm = ui.formatter(b'debugmergestate', opts) |
|
2023 | fm = ui.formatter(b'debugmergestate', opts) | |
2014 | fm.startitem() |
|
2024 | fm.startitem() | |
@@ -2034,8 +2044,8 b' def debugmergestate(ui, repo, *args, **o' | |||||
2034 | state = ms._state[f] |
|
2044 | state = ms._state[f] | |
2035 | fm_files.data(state=state[0]) |
|
2045 | fm_files.data(state=state[0]) | |
2036 | if state[0] in ( |
|
2046 | if state[0] in ( | |
2037 | mergemod.MERGE_RECORD_UNRESOLVED, |
|
2047 | mergestatemod.MERGE_RECORD_UNRESOLVED, | |
2038 | mergemod.MERGE_RECORD_RESOLVED, |
|
2048 | mergestatemod.MERGE_RECORD_RESOLVED, | |
2039 | ): |
|
2049 | ): | |
2040 | fm_files.data(local_key=state[1]) |
|
2050 | fm_files.data(local_key=state[1]) | |
2041 | fm_files.data(local_path=state[2]) |
|
2051 | fm_files.data(local_path=state[2]) | |
@@ -2045,8 +2055,8 b' def debugmergestate(ui, repo, *args, **o' | |||||
2045 | fm_files.data(other_node=state[6]) |
|
2055 | fm_files.data(other_node=state[6]) | |
2046 | fm_files.data(local_flags=state[7]) |
|
2056 | fm_files.data(local_flags=state[7]) | |
2047 | elif state[0] in ( |
|
2057 | elif state[0] in ( | |
2048 | mergemod.MERGE_RECORD_UNRESOLVED_PATH, |
|
2058 | mergestatemod.MERGE_RECORD_UNRESOLVED_PATH, | |
2049 | mergemod.MERGE_RECORD_RESOLVED_PATH, |
|
2059 | mergestatemod.MERGE_RECORD_RESOLVED_PATH, | |
2050 | ): |
|
2060 | ): | |
2051 | fm_files.data(renamed_path=state[1]) |
|
2061 | fm_files.data(renamed_path=state[1]) | |
2052 | fm_files.data(rename_side=state[2]) |
|
2062 | fm_files.data(rename_side=state[2]) | |
@@ -2658,6 +2668,13 b' def debugrename(ui, repo, *pats, **opts)' | |||||
2658 | ui.write(_(b"%s not renamed\n") % rel) |
|
2668 | ui.write(_(b"%s not renamed\n") % rel) | |
2659 |
|
2669 | |||
2660 |
|
2670 | |||
|
2671 | @command(b'debugrequires|debugrequirements', [], b'') | |||
|
2672 | def debugrequirements(ui, repo): | |||
|
2673 | """ print the current repo requirements """ | |||
|
2674 | for r in sorted(repo.requirements): | |||
|
2675 | ui.write(b"%s\n" % r) | |||
|
2676 | ||||
|
2677 | ||||
2661 | @command( |
|
2678 | @command( | |
2662 | b'debugrevlog', |
|
2679 | b'debugrevlog', | |
2663 | cmdutil.debugrevlogopts + [(b'd', b'dump', False, _(b'dump index data'))], |
|
2680 | cmdutil.debugrevlogopts + [(b'd', b'dump', False, _(b'dump index data'))], |
@@ -14,6 +14,8 b' from . import (' | |||||
14 | pycompat, |
|
14 | pycompat, | |
15 | ) |
|
15 | ) | |
16 |
|
16 | |||
|
17 | MISSING_NEWLINE_MARKER = b'\\ No newline at end of file\n' | |||
|
18 | ||||
17 |
|
19 | |||
18 | def addlines(fp, hunk, lena, lenb, a, b): |
|
20 | def addlines(fp, hunk, lena, lenb, a, b): | |
19 | """Read lines from fp into the hunk |
|
21 | """Read lines from fp into the hunk | |
@@ -32,7 +34,7 b' def addlines(fp, hunk, lena, lenb, a, b)' | |||||
32 | s = fp.readline() |
|
34 | s = fp.readline() | |
33 | if not s: |
|
35 | if not s: | |
34 | raise error.ParseError(_(b'incomplete hunk')) |
|
36 | raise error.ParseError(_(b'incomplete hunk')) | |
35 | if s == b"\\ No newline at end of file\n": |
|
37 | if s == MISSING_NEWLINE_MARKER: | |
36 | fixnewline(hunk, a, b) |
|
38 | fixnewline(hunk, a, b) | |
37 | continue |
|
39 | continue | |
38 | if s == b'\n' or s == b'\r\n': |
|
40 | if s == b'\n' or s == b'\r\n': |
@@ -187,7 +187,7 b' class dirstate(object):' | |||||
187 |
|
187 | |||
188 | @propertycache |
|
188 | @propertycache | |
189 | def _checkexec(self): |
|
189 | def _checkexec(self): | |
190 | return util.checkexec(self._root) |
|
190 | return bool(util.checkexec(self._root)) | |
191 |
|
191 | |||
192 | @propertycache |
|
192 | @propertycache | |
193 | def _checkcase(self): |
|
193 | def _checkcase(self): | |
@@ -1114,6 +1114,7 b' class dirstate(object):' | |||||
1114 | unknown, |
|
1114 | unknown, | |
1115 | warnings, |
|
1115 | warnings, | |
1116 | bad, |
|
1116 | bad, | |
|
1117 | traversed, | |||
1117 | ) = rustmod.status( |
|
1118 | ) = rustmod.status( | |
1118 | self._map._rustmap, |
|
1119 | self._map._rustmap, | |
1119 | matcher, |
|
1120 | matcher, | |
@@ -1124,7 +1125,13 b' class dirstate(object):' | |||||
1124 | bool(list_clean), |
|
1125 | bool(list_clean), | |
1125 | bool(list_ignored), |
|
1126 | bool(list_ignored), | |
1126 | bool(list_unknown), |
|
1127 | bool(list_unknown), | |
|
1128 | bool(matcher.traversedir), | |||
1127 | ) |
|
1129 | ) | |
|
1130 | ||||
|
1131 | if matcher.traversedir: | |||
|
1132 | for dir in traversed: | |||
|
1133 | matcher.traversedir(dir) | |||
|
1134 | ||||
1128 | if self._ui.warn: |
|
1135 | if self._ui.warn: | |
1129 | for item in warnings: |
|
1136 | for item in warnings: | |
1130 | if isinstance(item, tuple): |
|
1137 | if isinstance(item, tuple): | |
@@ -1200,10 +1207,8 b' class dirstate(object):' | |||||
1200 | use_rust = False |
|
1207 | use_rust = False | |
1201 | elif sparse.enabled: |
|
1208 | elif sparse.enabled: | |
1202 | use_rust = False |
|
1209 | use_rust = False | |
1203 | elif match.traversedir is not None: |
|
|||
1204 | use_rust = False |
|
|||
1205 | elif not isinstance(match, allowed_matchers): |
|
1210 | elif not isinstance(match, allowed_matchers): | |
1206 |
# |
|
1211 | # Some matchers have yet to be implemented | |
1207 | use_rust = False |
|
1212 | use_rust = False | |
1208 |
|
1213 | |||
1209 | if use_rust: |
|
1214 | if use_rust: |
@@ -41,8 +41,8 b' def findcommonincoming(repo, remote, hea' | |||||
41 | any longer. |
|
41 | any longer. | |
42 | "heads" is either the supplied heads, or else the remote's heads. |
|
42 | "heads" is either the supplied heads, or else the remote's heads. | |
43 | "ancestorsof" if not None, restrict the discovery to a subset defined by |
|
43 | "ancestorsof" if not None, restrict the discovery to a subset defined by | |
44 |
these nodes. Changeset outside of this set won't be considered ( |
|
44 | these nodes. Changeset outside of this set won't be considered (but may | |
45 |
|
|
45 | still appear in "common"). | |
46 |
|
46 | |||
47 | If you pass heads and they are all known locally, the response lists just |
|
47 | If you pass heads and they are all known locally, the response lists just | |
48 | these heads in "common" and in "heads". |
|
48 | these heads in "common" and in "heads". | |
@@ -75,28 +75,35 b' def findcommonincoming(repo, remote, hea' | |||||
75 |
|
75 | |||
76 |
|
76 | |||
77 | class outgoing(object): |
|
77 | class outgoing(object): | |
78 | '''Represents the set of nodes present in a local repo but not in a |
|
78 | '''Represents the result of a findcommonoutgoing() call. | |
79 | (possibly) remote one. |
|
|||
80 |
|
79 | |||
81 | Members: |
|
80 | Members: | |
82 |
|
81 | |||
83 | missing is a list of all nodes present in local but not in remote. |
|
82 | ancestorsof is a list of the nodes whose ancestors are included in the | |
84 | common is a list of all nodes shared between the two repos. |
|
83 | outgoing operation. | |
85 | excluded is the list of missing changeset that shouldn't be sent remotely. |
|
84 | ||
86 | missingheads is the list of heads of missing. |
|
85 | missing is a list of those ancestors of ancestorsof that are present in | |
|
86 | local but not in remote. | |||
|
87 | ||||
|
88 | common is a set containing revs common between the local and the remote | |||
|
89 | repository (at least all of those that are ancestors of ancestorsof). | |||
|
90 | ||||
87 | commonheads is the list of heads of common. |
|
91 | commonheads is the list of heads of common. | |
88 |
|
92 | |||
89 | The sets are computed on demand from the heads, unless provided upfront |
|
93 | excluded is the list of missing changeset that shouldn't be sent | |
|
94 | remotely. | |||
|
95 | ||||
|
96 | Some members are computed on demand from the heads, unless provided upfront | |||
90 | by discovery.''' |
|
97 | by discovery.''' | |
91 |
|
98 | |||
92 | def __init__( |
|
99 | def __init__( | |
93 | self, repo, commonheads=None, missingheads=None, missingroots=None |
|
100 | self, repo, commonheads=None, ancestorsof=None, missingroots=None | |
94 | ): |
|
101 | ): | |
95 | # at least one of them must not be set |
|
102 | # at least one of them must not be set | |
96 | assert None in (commonheads, missingroots) |
|
103 | assert None in (commonheads, missingroots) | |
97 | cl = repo.changelog |
|
104 | cl = repo.changelog | |
98 | if missingheads is None: |
|
105 | if ancestorsof is None: | |
99 | missingheads = cl.heads() |
|
106 | ancestorsof = cl.heads() | |
100 | if missingroots: |
|
107 | if missingroots: | |
101 | discbases = [] |
|
108 | discbases = [] | |
102 | for n in missingroots: |
|
109 | for n in missingroots: | |
@@ -104,14 +111,14 b' class outgoing(object):' | |||||
104 | # TODO remove call to nodesbetween. |
|
111 | # TODO remove call to nodesbetween. | |
105 | # TODO populate attributes on outgoing instance instead of setting |
|
112 | # TODO populate attributes on outgoing instance instead of setting | |
106 | # discbases. |
|
113 | # discbases. | |
107 | csets, roots, heads = cl.nodesbetween(missingroots, missingheads) |
|
114 | csets, roots, heads = cl.nodesbetween(missingroots, ancestorsof) | |
108 | included = set(csets) |
|
115 | included = set(csets) | |
109 | missingheads = heads |
|
116 | ancestorsof = heads | |
110 | commonheads = [n for n in discbases if n not in included] |
|
117 | commonheads = [n for n in discbases if n not in included] | |
111 | elif not commonheads: |
|
118 | elif not commonheads: | |
112 | commonheads = [nullid] |
|
119 | commonheads = [nullid] | |
113 | self.commonheads = commonheads |
|
120 | self.commonheads = commonheads | |
114 | self.missingheads = missingheads |
|
121 | self.ancestorsof = ancestorsof | |
115 | self._revlog = cl |
|
122 | self._revlog = cl | |
116 | self._common = None |
|
123 | self._common = None | |
117 | self._missing = None |
|
124 | self._missing = None | |
@@ -119,7 +126,7 b' class outgoing(object):' | |||||
119 |
|
126 | |||
120 | def _computecommonmissing(self): |
|
127 | def _computecommonmissing(self): | |
121 | sets = self._revlog.findcommonmissing( |
|
128 | sets = self._revlog.findcommonmissing( | |
122 | self.commonheads, self.missingheads |
|
129 | self.commonheads, self.ancestorsof | |
123 | ) |
|
130 | ) | |
124 | self._common, self._missing = sets |
|
131 | self._common, self._missing = sets | |
125 |
|
132 | |||
@@ -135,6 +142,17 b' class outgoing(object):' | |||||
135 | self._computecommonmissing() |
|
142 | self._computecommonmissing() | |
136 | return self._missing |
|
143 | return self._missing | |
137 |
|
144 | |||
|
145 | @property | |||
|
146 | def missingheads(self): | |||
|
147 | util.nouideprecwarn( | |||
|
148 | b'outgoing.missingheads never contained what the name suggests and ' | |||
|
149 | b'was renamed to outgoing.ancestorsof. check your code for ' | |||
|
150 | b'correctness.', | |||
|
151 | b'5.5', | |||
|
152 | stacklevel=2, | |||
|
153 | ) | |||
|
154 | return self.ancestorsof | |||
|
155 | ||||
138 |
|
156 | |||
139 | def findcommonoutgoing( |
|
157 | def findcommonoutgoing( | |
140 | repo, other, onlyheads=None, force=False, commoninc=None, portable=False |
|
158 | repo, other, onlyheads=None, force=False, commoninc=None, portable=False | |
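
The rename of ``outgoing.missingheads`` to ``outgoing.ancestorsof`` keeps the old name alive as a property that forwards to the new attribute after emitting a deprecation warning. A toy model of that compatibility shim, using the standard-library ``warnings`` module instead of ``util.nouideprecwarn`` (not Mercurial's actual class)::

    import warnings

    class Outgoing(object):
        def __init__(self, ancestorsof):
            self.ancestorsof = ancestorsof  # the new, accurate name

        @property
        def missingheads(self):
            # deprecated alias kept so third-party code keeps working
            warnings.warn(
                'missingheads was renamed to ancestorsof',
                DeprecationWarning,
                stacklevel=2,
            )
            return self.ancestorsof

    og = Outgoing([b'node1', b'node2'])
    assert og.missingheads is og.ancestorsof
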
@@ -149,7 +167,7 b' def findcommonoutgoing(' | |||||
149 | If commoninc is given, it must be the result of a prior call to |
|
167 | If commoninc is given, it must be the result of a prior call to | |
150 | findcommonincoming(repo, other, force) to avoid recomputing it here. |
|
168 | findcommonincoming(repo, other, force) to avoid recomputing it here. | |
151 |
|
169 | |||
152 | If portable is given, compute more conservative common and missingheads, |
|
170 | If portable is given, compute more conservative common and ancestorsof, | |
153 | to make bundles created from the instance more portable.''' |
|
171 | to make bundles created from the instance more portable.''' | |
154 | # declare an empty outgoing object to be filled later |
|
172 | # declare an empty outgoing object to be filled later | |
155 | og = outgoing(repo, None, None) |
|
173 | og = outgoing(repo, None, None) | |
@@ -164,10 +182,10 b' def findcommonoutgoing(' | |||||
164 | # compute outgoing |
|
182 | # compute outgoing | |
165 | mayexclude = repo._phasecache.phaseroots[phases.secret] or repo.obsstore |
|
183 | mayexclude = repo._phasecache.phaseroots[phases.secret] or repo.obsstore | |
166 | if not mayexclude: |
|
184 | if not mayexclude: | |
167 | og.missingheads = onlyheads or repo.heads() |
|
185 | og.ancestorsof = onlyheads or repo.heads() | |
168 | elif onlyheads is None: |
|
186 | elif onlyheads is None: | |
169 | # use visible heads as it should be cached |
|
187 | # use visible heads as it should be cached | |
170 | og.missingheads = repo.filtered(b"served").heads() |
|
188 | og.ancestorsof = repo.filtered(b"served").heads() | |
171 | og.excluded = [ctx.node() for ctx in repo.set(b'secret() or extinct()')] |
|
189 | og.excluded = [ctx.node() for ctx in repo.set(b'secret() or extinct()')] | |
172 | else: |
|
190 | else: | |
173 | # compute common, missing and exclude secret stuff |
|
191 | # compute common, missing and exclude secret stuff | |
@@ -182,12 +200,12 b' def findcommonoutgoing(' | |||||
182 | else: |
|
200 | else: | |
183 | missing.append(node) |
|
201 | missing.append(node) | |
184 | if len(missing) == len(allmissing): |
|
202 | if len(missing) == len(allmissing): | |
185 | missingheads = onlyheads |
|
203 | ancestorsof = onlyheads | |
186 | else: # update missing heads |
|
204 | else: # update missing heads | |
187 | missingheads = phases.newheads(repo, onlyheads, excluded) |
|
205 | ancestorsof = phases.newheads(repo, onlyheads, excluded) | |
188 | og.missingheads = missingheads |
|
206 | og.ancestorsof = ancestorsof | |
189 | if portable: |
|
207 | if portable: | |
190 | # recompute common and missingheads as if -r<rev> had been given for |
|
208 | # recompute common and ancestorsof as if -r<rev> had been given for | |
191 | # each head of missing, and --base <rev> for each head of the proper |
|
209 | # each head of missing, and --base <rev> for each head of the proper | |
192 | # ancestors of missing |
|
210 | # ancestors of missing | |
193 | og._computecommonmissing() |
|
211 | og._computecommonmissing() | |
@@ -195,7 +213,7 b' def findcommonoutgoing(' | |||||
195 | missingrevs = {cl.rev(n) for n in og._missing} |
|
213 | missingrevs = {cl.rev(n) for n in og._missing} | |
196 | og._common = set(cl.ancestors(missingrevs)) - missingrevs |
|
214 | og._common = set(cl.ancestors(missingrevs)) - missingrevs | |
197 | commonheads = set(og.commonheads) |
|
215 | commonheads = set(og.commonheads) | |
198 | og.missingheads = [h for h in og.missingheads if h not in commonheads] |
|
216 | og.ancestorsof = [h for h in og.ancestorsof if h not in commonheads] | |
199 |
|
217 | |||
200 | return og |
|
218 | return og | |
201 |
|
219 | |||
@@ -268,7 +286,7 b' def _headssummary(pushop):' | |||||
268 | # If there are no obsstore, no post processing are needed. |
|
286 | # If there are no obsstore, no post processing are needed. | |
269 | if repo.obsstore: |
|
287 | if repo.obsstore: | |
270 | torev = repo.changelog.rev |
|
288 | torev = repo.changelog.rev | |
271 | futureheads = {torev(h) for h in outgoing.missingheads} |
|
289 | futureheads = {torev(h) for h in outgoing.ancestorsof} | |
272 | futureheads |= {torev(h) for h in outgoing.commonheads} |
|
290 | futureheads |= {torev(h) for h in outgoing.commonheads} | |
273 | allfuturecommon = repo.changelog.ancestors(futureheads, inclusive=True) |
|
291 | allfuturecommon = repo.changelog.ancestors(futureheads, inclusive=True) | |
274 | for branch, heads in sorted(pycompat.iteritems(headssum)): |
|
292 | for branch, heads in sorted(pycompat.iteritems(headssum)): |
@@ -104,41 +104,46 b' class request(object):' | |||||
104 |
|
104 | |||
105 | def run(): |
|
105 | def run(): | |
106 | """run the command in sys.argv""" |
|
106 | """run the command in sys.argv""" | |
107 | initstdio() |
|
|||
108 | with tracing.log('parse args into request'): |
|
|||
109 | req = request(pycompat.sysargv[1:]) |
|
|||
110 | err = None |
|
|||
111 | try: |
|
107 | try: | |
112 | status = dispatch(req) |
|
108 | initstdio() | |
113 | except error.StdioError as e: |
|
109 | with tracing.log('parse args into request'): | |
114 | err = e |
|
110 | req = request(pycompat.sysargv[1:]) | |
115 | status = -1 |
|
111 | err = None | |
116 |
|
||||
117 | # In all cases we try to flush stdio streams. |
|
|||
118 | if util.safehasattr(req.ui, b'fout'): |
|
|||
119 | assert req.ui is not None # help pytype |
|
|||
120 | assert req.ui.fout is not None # help pytype |
|
|||
121 | try: |
|
112 | try: | |
122 | req.ui.fout.flush() |
|
113 | status = dispatch(req) | |
123 | except IOError as e: |
|
114 | except error.StdioError as e: | |
124 | err = e |
|
115 | err = e | |
125 | status = -1 |
|
116 | status = -1 | |
126 |
|
117 | |||
127 | if util.safehasattr(req.ui, b'ferr'): |
|
118 | # In all cases we try to flush stdio streams. | |
128 | assert req.ui is not None # help pytype |
|
119 | if util.safehasattr(req.ui, b'fout'): | |
129 | assert req.ui.ferr is not None # help pytype |
|
120 | assert req.ui is not None # help pytype | |
130 | try: |
|
121 | assert req.ui.fout is not None # help pytype | |
131 | if err is not None and err.errno != errno.EPIPE: |
|
122 | try: | |
132 | req.ui.ferr.write( |
|
123 | req.ui.fout.flush() | |
133 | b'abort: %s\n' % encoding.strtolocal(err.strerror) |
|
124 | except IOError as e: | |
134 | ) |
|
125 | err = e | |
135 | req.ui.ferr.flush() |
|
126 | status = -1 | |
136 | # There's not much we can do about an I/O error here. So (possibly) |
|
|||
137 | # change the status code and move on. |
|
|||
138 | except IOError: |
|
|||
139 | status = -1 |
|
|||
140 |
|
127 | |||
141 | _silencestdio() |
|
128 | if util.safehasattr(req.ui, b'ferr'): | |
|
129 | assert req.ui is not None # help pytype | |||
|
130 | assert req.ui.ferr is not None # help pytype | |||
|
131 | try: | |||
|
132 | if err is not None and err.errno != errno.EPIPE: | |||
|
133 | req.ui.ferr.write( | |||
|
134 | b'abort: %s\n' % encoding.strtolocal(err.strerror) | |||
|
135 | ) | |||
|
136 | req.ui.ferr.flush() | |||
|
137 | # There's not much we can do about an I/O error here. So (possibly) | |||
|
138 | # change the status code and move on. | |||
|
139 | except IOError: | |||
|
140 | status = -1 | |||
|
141 | ||||
|
142 | _silencestdio() | |||
|
143 | except KeyboardInterrupt: | |||
|
144 | # Catch early/late KeyboardInterrupt as last ditch. Here nothing will | |||
|
145 | # be printed to console to avoid another IOError/KeyboardInterrupt. | |||
|
146 | status = -1 | |||
142 | sys.exit(status & 255) |
|
147 | sys.exit(status & 255) | |
143 |
|
148 | |||
144 |
|
149 |
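
The restructured ``run()`` above moves everything, including the final stdio flushes, inside one ``try`` block so that a late ``KeyboardInterrupt`` only downgrades the exit status instead of escaping with a traceback. The shape of that control flow, reduced to a sketch with hypothetical ``_dispatch``/``_flush_stdio`` helpers (not the real dispatch module)::

    import sys

    def _dispatch(argv):
        return 0          # stand-in for the real command dispatch

    def _flush_stdio():
        sys.stdout.flush()
        sys.stderr.flush()

    def run(argv):
        try:
            status = _dispatch(argv)
            # flushing still happens inside the try block, so a Ctrl-C
            # arriving this late is swallowed as well
            _flush_stdio()
        except KeyboardInterrupt:
            # last-ditch catch: print nothing, just report failure
            status = -1
        sys.exit(status & 255)
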
@@ -106,6 +106,22 b' class InterventionRequired(Hint, Excepti' | |||||
106 | __bytes__ = _tobytes |
|
106 | __bytes__ = _tobytes | |
107 |
|
107 | |||
108 |
|
108 | |||
|
109 | class ConflictResolutionRequired(InterventionRequired): | |||
|
110 | """Exception raised when a continuable command requires merge conflict resolution.""" | |||
|
111 | ||||
|
112 | def __init__(self, opname): | |||
|
113 | from .i18n import _ | |||
|
114 | ||||
|
115 | self.opname = opname | |||
|
116 | InterventionRequired.__init__( | |||
|
117 | self, | |||
|
118 | _( | |||
|
119 | b"unresolved conflicts (see 'hg resolve', then 'hg %s --continue')" | |||
|
120 | ) | |||
|
121 | % opname, | |||
|
122 | ) | |||
|
123 | ||||
|
124 | ||||
109 | class Abort(Hint, Exception): |
|
125 | class Abort(Hint, Exception): | |
110 | """Raised if a command needs to print an error and exit.""" |
|
126 | """Raised if a command needs to print an error and exit.""" | |
111 |
|
127 |
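
``ConflictResolutionRequired`` gives continuable commands a single exception type for the "fix conflicts, then run --continue" situation. A hedged sketch of raising and rendering it, with toy classes standing in for ``error.InterventionRequired`` (the message format is taken from the hunk above)::

    class InterventionRequired(Exception):
        pass

    class ConflictResolutionRequired(InterventionRequired):
        def __init__(self, opname):
            self.opname = opname
            super(ConflictResolutionRequired, self).__init__(
                "unresolved conflicts "
                "(see 'hg resolve', then 'hg %s --continue')" % opname
            )

    try:
        raise ConflictResolutionRequired('rebase')
    except InterventionRequired as exc:
        assert 'hg rebase --continue' in str(exc)
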
@@ -503,7 +503,7 b' class pushoperation(object):' | |||||
503 | @util.propertycache |
|
503 | @util.propertycache | |
504 | def futureheads(self): |
|
504 | def futureheads(self): | |
505 | """future remote heads if the changeset push succeeds""" |
|
505 | """future remote heads if the changeset push succeeds""" | |
506 | return self.outgoing.missingheads |
|
506 | return self.outgoing.ancestorsof | |
507 |
|
507 | |||
508 | @util.propertycache |
|
508 | @util.propertycache | |
509 | def fallbackheads(self): |
|
509 | def fallbackheads(self): | |
@@ -512,20 +512,20 b' class pushoperation(object):' | |||||
512 | # not target to push, all common are relevant |
|
512 | # not target to push, all common are relevant | |
513 | return self.outgoing.commonheads |
|
513 | return self.outgoing.commonheads | |
514 | unfi = self.repo.unfiltered() |
|
514 | unfi = self.repo.unfiltered() | |
515 | # I want cheads = heads(::missingheads and ::commonheads) |
|
515 | # I want cheads = heads(::ancestorsof and ::commonheads) | |
516 | # (missingheads is revs with secret changeset filtered out) |
|
516 | # (ancestorsof is revs with secret changeset filtered out) | |
517 | # |
|
517 | # | |
518 | # This can be expressed as: |
|
518 | # This can be expressed as: | |
519 | # cheads = ( (missingheads and ::commonheads) |
|
519 | # cheads = ( (ancestorsof and ::commonheads) | |
520 | # + (commonheads and ::missingheads))" |
|
520 | # + (commonheads and ::ancestorsof))" | |
521 | # ) |
|
521 | # ) | |
522 | # |
|
522 | # | |
523 | # while trying to push we already computed the following: |
|
523 | # while trying to push we already computed the following: | |
524 | # common = (::commonheads) |
|
524 | # common = (::commonheads) | |
525 | # missing = ((commonheads::missingheads) - commonheads) |
|
525 | # missing = ((commonheads::ancestorsof) - commonheads) | |
526 | # |
|
526 | # | |
527 | # We can pick: |
|
527 | # We can pick: | |
528 | # * missingheads part of common (::commonheads) |
|
528 | # * ancestorsof part of common (::commonheads) | |
529 | common = self.outgoing.common |
|
529 | common = self.outgoing.common | |
530 | rev = self.repo.changelog.index.rev |
|
530 | rev = self.repo.changelog.index.rev | |
531 | cheads = [node for node in self.revs if rev(node) in common] |
|
531 | cheads = [node for node in self.revs if rev(node) in common] | |
@@ -905,27 +905,32 b' def _pushcheckoutgoing(pushop):' | |||||
905 | # if repo.obsstore == False --> no obsolete |
|
905 | # if repo.obsstore == False --> no obsolete | |
906 | # then, save the iteration |
|
906 | # then, save the iteration | |
907 | if unfi.obsstore: |
|
907 | if unfi.obsstore: | |
908 | # this message are here for 80 char limit reason |
|
908 | obsoletes = [] | |
909 | mso = _(b"push includes obsolete changeset: %s!") |
|
909 | unstables = [] | |
910 | mspd = _(b"push includes phase-divergent changeset: %s!") |
|
910 | for node in outgoing.missing: | |
911 | mscd = _(b"push includes content-divergent changeset: %s!") |
|
|||
912 | mst = { |
|
|||
913 | b"orphan": _(b"push includes orphan changeset: %s!"), |
|
|||
914 | b"phase-divergent": mspd, |
|
|||
915 | b"content-divergent": mscd, |
|
|||
916 | } |
|
|||
917 | # If we are to push if there is at least one |
|
|||
918 | # obsolete or unstable changeset in missing, at |
|
|||
919 | # least one of the missinghead will be obsolete or |
|
|||
920 | # unstable. So checking heads only is ok |
|
|||
921 | for node in outgoing.missingheads: |
|
|||
922 | ctx = unfi[node] |
|
911 | ctx = unfi[node] | |
923 | if ctx.obsolete(): |
|
912 | if ctx.obsolete(): | |
924 | raise error.Abort(mso % ctx) |
|
913 | obsoletes.append(ctx) | |
925 | elif ctx.isunstable(): |
|
914 | elif ctx.isunstable(): | |
926 | # TODO print more than one instability in the abort |
|
915 | unstables.append(ctx) | |
927 | # message |
|
916 | if obsoletes or unstables: | |
928 | raise error.Abort(mst[ctx.instabilities()[0]] % ctx) |
|
917 | msg = b"" | |
|
918 | if obsoletes: | |||
|
919 | msg += _(b"push includes obsolete changesets:\n") | |||
|
920 | msg += b"\n".join(b' %s' % ctx for ctx in obsoletes) | |||
|
921 | if unstables: | |||
|
922 | if msg: | |||
|
923 | msg += b"\n" | |||
|
924 | msg += _(b"push includes unstable changesets:\n") | |||
|
925 | msg += b"\n".join( | |||
|
926 | b' %s (%s)' | |||
|
927 | % ( | |||
|
928 | ctx, | |||
|
929 | b", ".join(_(ins) for ins in ctx.instabilities()), | |||
|
930 | ) | |||
|
931 | for ctx in unstables | |||
|
932 | ) | |||
|
933 | raise error.Abort(msg) | |||
929 |
|
934 | |||
930 | discovery.checkheads(pushop) |
|
935 | discovery.checkheads(pushop) | |
931 | return True |
|
936 | return True | |
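
Instead of aborting on the first offending head, the new ``_pushcheckoutgoing`` walks every missing changeset, buckets the problems, and reports them all at once. A small sketch of the message assembly using plain strings and toy changeset records (not the real context API)::

    def build_push_error(obsoletes, unstables):
        """obsoletes: list of ids; unstables: list of (id, instabilities)."""
        msg = ""
        if obsoletes:
            msg += "push includes obsolete changesets:\n"
            msg += "\n".join("  %s" % node for node in obsoletes)
        if unstables:
            if msg:
                msg += "\n"
            msg += "push includes unstable changesets:\n"
            msg += "\n".join(
                "  %s (%s)" % (node, ", ".join(kinds))
                for node, kinds in unstables
            )
        return msg

    print(build_push_error(['1ab2c3d'], [('4ef5a6b', ['orphan'])]))
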
@@ -969,7 +974,7 b' def _pushb2ctxcheckheads(pushop, bundler' | |||||
969 | """ |
|
974 | """ | |
970 | # * 'force' do not check for push race, |
|
975 | # * 'force' do not check for push race, | |
971 | # * if we don't push anything, there are nothing to check. |
|
976 | # * if we don't push anything, there are nothing to check. | |
972 | if not pushop.force and pushop.outgoing.missingheads: |
|
977 | if not pushop.force and pushop.outgoing.ancestorsof: | |
973 | allowunrelated = b'related' in bundler.capabilities.get( |
|
978 | allowunrelated = b'related' in bundler.capabilities.get( | |
974 | b'checkheads', () |
|
979 | b'checkheads', () | |
975 | ) |
|
980 | ) | |
@@ -1024,12 +1029,12 b' def _pushb2checkphases(pushop, bundler):' | |||||
1024 | hasphaseheads = b'heads' in b2caps.get(b'phases', ()) |
|
1029 | hasphaseheads = b'heads' in b2caps.get(b'phases', ()) | |
1025 | if pushop.remotephases is not None and hasphaseheads: |
|
1030 | if pushop.remotephases is not None and hasphaseheads: | |
1026 | # check that the remote phase has not changed |
|
1031 | # check that the remote phase has not changed | |
1027 | checks = [[] for p in phases.allphases] |
|
1032 | checks = {p: [] for p in phases.allphases} | |
1028 | checks[phases.public].extend(pushop.remotephases.publicheads) |
|
1033 | checks[phases.public].extend(pushop.remotephases.publicheads) | |
1029 | checks[phases.draft].extend(pushop.remotephases.draftroots) |
|
1034 | checks[phases.draft].extend(pushop.remotephases.draftroots) | |
1030 | if any(checks): |
|
1035 | if any(pycompat.itervalues(checks)): | |
1031 | for nodes in checks: |
|
1036 | for phase in checks: | |
1032 | nodes.sort() |
|
1037 | checks[phase].sort() | |
1033 | checkdata = phases.binaryencode(checks) |
|
1038 | checkdata = phases.binaryencode(checks) | |
1034 | bundler.newpart(b'check:phases', data=checkdata) |
|
1039 | bundler.newpart(b'check:phases', data=checkdata) | |
1035 |
|
1040 | |||
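
The switch from a list indexed by phase number to a dict keyed by phase number has a subtle consequence: ``any(checks)`` now iterates the keys, and since phase numbers 1 and 2 are truthy it would always succeed, so the emptiness test has to look at the values instead. A tiny illustration (phase numbers assumed to be 0, 1, 2 as in ``mercurial.phases``)::

    allphases = (0, 1, 2)          # public, draft, secret
    checks = {p: [] for p in allphases}

    # keys alone are "truthy enough": True even with no heads recorded
    assert any(checks)
    # the values are what actually matters
    assert not any(checks.values())

    checks[0].append(b'somehead')
    assert any(checks.values())
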
@@ -1104,7 +1109,7 b' def _pushb2phaseheads(pushop, bundler):' | |||||
1104 | """push phase information through a bundle2 - binary part""" |
|
1109 | """push phase information through a bundle2 - binary part""" | |
1105 | pushop.stepsdone.add(b'phases') |
|
1110 | pushop.stepsdone.add(b'phases') | |
1106 | if pushop.outdatedphases: |
|
1111 | if pushop.outdatedphases: | |
1107 | updates = [[] for p in phases.allphases] |
|
1112 | updates = {p: [] for p in phases.allphases} | |
1108 | updates[0].extend(h.node() for h in pushop.outdatedphases) |
|
1113 | updates[0].extend(h.node() for h in pushop.outdatedphases) | |
1109 | phasedata = phases.binaryencode(updates) |
|
1114 | phasedata = phases.binaryencode(updates) | |
1110 | bundler.newpart(b'phase-heads', data=phasedata) |
|
1115 | bundler.newpart(b'phase-heads', data=phasedata) | |
@@ -2658,9 +2663,9 b' def _getbundlephasespart(' | |||||
2658 | headsbyphase[phases.public].add(node(r)) |
|
2663 | headsbyphase[phases.public].add(node(r)) | |
2659 |
|
2664 | |||
2660 | # transform data in a format used by the encoding function |
|
2665 | # transform data in a format used by the encoding function | |
2661 | phasemapping = [] |
|
2666 | phasemapping = { | |
2662 | for phase in phases.allphases: |
|
2667 | phase: sorted(headsbyphase[phase]) for phase in phases.allphases | |
2663 | phasemapping.append(sorted(headsbyphase[phase])) |
|
2668 | } | |
2664 |
|
2669 | |||
2665 | # generate the actual part |
|
2670 | # generate the actual part | |
2666 | phasedata = phases.binaryencode(phasemapping) |
|
2671 | phasedata = phases.binaryencode(phasemapping) | |
@@ -3025,6 +3030,23 b' def filterclonebundleentries(repo, entri' | |||||
3025 | ) |
|
3030 | ) | |
3026 | continue |
|
3031 | continue | |
3027 |
|
3032 | |||
|
3033 | if b'REQUIREDRAM' in entry: | |||
|
3034 | try: | |||
|
3035 | requiredram = util.sizetoint(entry[b'REQUIREDRAM']) | |||
|
3036 | except error.ParseError: | |||
|
3037 | repo.ui.debug( | |||
|
3038 | b'filtering %s due to a bad REQUIREDRAM attribute\n' | |||
|
3039 | % entry[b'URL'] | |||
|
3040 | ) | |||
|
3041 | continue | |||
|
3042 | actualram = repo.ui.estimatememory() | |||
|
3043 | if actualram is not None and actualram * 0.66 < requiredram: | |||
|
3044 | repo.ui.debug( | |||
|
3045 | b'filtering %s as it needs more than 2/3 of system memory\n' | |||
|
3046 | % entry[b'URL'] | |||
|
3047 | ) | |||
|
3048 | continue | |||
|
3049 | ||||
3028 | newentries.append(entry) |
|
3050 | newentries.append(entry) | |
3029 |
|
3051 | |||
3030 | return newentries |
|
3052 | return newentries |
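
The new ``REQUIREDRAM`` attribute lets a clone bundle advertise how much memory applying it needs; the client drops the entry when its estimated memory falls short of the requirement by the ``actualram * 0.66 < requiredram`` test above. A hedged sketch of that filter with a simple size parser (the ``sizetoint`` below is a toy stand-in, not ``mercurial.util``)::

    _UNITS = {'b': 1, 'kb': 1024, 'mb': 1024 ** 2, 'gb': 1024 ** 3}

    def sizetoint(text):
        """Parse strings such as '7GB' into a byte count (toy parser)."""
        text = text.strip().lower()
        for suffix, factor in sorted(
            _UNITS.items(), key=lambda kv: -len(kv[0])
        ):
            if text.endswith(suffix):
                return int(float(text[: -len(suffix)]) * factor)
        return int(text)

    def keep_entry(entry, actualram):
        required = entry.get('REQUIREDRAM')
        if required is None or actualram is None:
            return True
        # mirror the 2/3 rule: drop when the need exceeds ~2/3 of actual RAM
        return actualram * 0.66 >= sizetoint(required)

    assert keep_entry({'REQUIREDRAM': '7GB'}, 16 * 1024 ** 3)
    assert not keep_entry({'REQUIREDRAM': '12GB'}, 16 * 1024 ** 3)
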
@@ -82,15 +82,12 b' def pull(pullop):' | |||||
82 | phases.registernew(repo, tr, phases.draft, csetres[b'added']) |
|
82 | phases.registernew(repo, tr, phases.draft, csetres[b'added']) | |
83 |
|
83 | |||
84 | # And adjust the phase of all changesets accordingly. |
|
84 | # And adjust the phase of all changesets accordingly. | |
85 | for phase in phases.phasenames: |
|
85 | for phasenumber, phase in phases.phasenames.items(): | |
86 | if phase == b'secret' or not csetres[b'nodesbyphase'][phase]: |
|
86 | if phase == b'secret' or not csetres[b'nodesbyphase'][phase]: | |
87 | continue |
|
87 | continue | |
88 |
|
88 | |||
89 | phases.advanceboundary( |
|
89 | phases.advanceboundary( | |
90 | repo, |
|
90 | repo, tr, phasenumber, csetres[b'nodesbyphase'][phase], | |
91 | tr, |
|
|||
92 | phases.phasenames.index(phase), |
|
|||
93 | csetres[b'nodesbyphase'][phase], |
|
|||
94 | ) |
|
91 | ) | |
95 |
|
92 | |||
96 | # Write bookmark updates. |
|
93 | # Write bookmark updates. | |
@@ -361,7 +358,7 b' def _processchangesetdata(repo, tr, objs' | |||||
361 | # so we can set the linkrev accordingly when manifests are added. |
|
358 | # so we can set the linkrev accordingly when manifests are added. | |
362 | manifestnodes[cl.rev(node)] = revision.manifest |
|
359 | manifestnodes[cl.rev(node)] = revision.manifest | |
363 |
|
360 | |||
364 | nodesbyphase = {phase: set() for phase in phases.phasenames} |
|
361 | nodesbyphase = {phase: set() for phase in phases.phasenames.values()} | |
365 | remotebookmarks = {} |
|
362 | remotebookmarks = {} | |
366 |
|
363 | |||
367 | # addgroup() expects a 7-tuple describing revisions. This normalizes |
|
364 | # addgroup() expects a 7-tuple describing revisions. This normalizes |
@@ -706,12 +706,17 b' def _disabledpaths():' | |||||
706 | '''find paths of disabled extensions. returns a dict of {name: path}''' |
|
706 | '''find paths of disabled extensions. returns a dict of {name: path}''' | |
707 | import hgext |
|
707 | import hgext | |
708 |
|
708 | |||
709 | extpath = os.path.dirname( |
|
709 | # The hgext might not have a __file__ attribute (e.g. in PyOxidizer) and | |
710 | os.path.abspath(pycompat.fsencode(hgext.__file__)) |
|
710 | # it might not be on a filesystem even if it does. | |
711 | ) |
|
711 | if util.safehasattr(hgext, '__file__'): | |
712 | try: # might not be a filesystem path |
|
712 | extpath = os.path.dirname( | |
713 | files = os.listdir(extpath) |
|
713 | os.path.abspath(pycompat.fsencode(hgext.__file__)) | |
714 | except OSError: |
|
714 | ) | |
|
715 | try: | |||
|
716 | files = os.listdir(extpath) | |||
|
717 | except OSError: | |||
|
718 | return {} | |||
|
719 | else: | |||
715 | return {} |
|
720 | return {} | |
716 |
|
721 | |||
717 | exts = {} |
|
722 | exts = {} |
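
The guard added to ``_disabledpaths`` acknowledges that in a frozen build (for example PyOxidizer) the ``hgext`` package may have no usable ``__file__`` at all. The same defensive pattern, reduced to a standalone helper (the ``package_dir`` name is illustrative, not part of the change)::

    import os

    def package_dir(package):
        """Return the on-disk directory of *package*, or None when the
        module is frozen and has no filesystem location."""
        path = getattr(package, '__file__', None)
        if path is None:
            return None
        return os.path.dirname(os.path.abspath(path))

    import email  # any stdlib package works for the demonstration
    location = package_dir(email)
    assert location is None or os.path.isdir(location)
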
@@ -98,6 +98,9 b' class absentfilectx(object):' | |||||
98 | self._ctx = ctx |
|
98 | self._ctx = ctx | |
99 | self._f = f |
|
99 | self._f = f | |
100 |
|
100 | |||
|
101 | def __bytes__(self): | |||
|
102 | return b'absent file %s@%s' % (self._f, self._ctx) | |||
|
103 | ||||
101 | def path(self): |
|
104 | def path(self): | |
102 | return self._f |
|
105 | return self._f | |
103 |
|
106 |
@@ -16,7 +16,7 b' from . import (' | |||||
16 | error, |
|
16 | error, | |
17 | filesetlang, |
|
17 | filesetlang, | |
18 | match as matchmod, |
|
18 | match as matchmod, | |
19 | merge, |
|
19 | mergestate as mergestatemod, | |
20 | pycompat, |
|
20 | pycompat, | |
21 | registrar, |
|
21 | registrar, | |
22 | scmutil, |
|
22 | scmutil, | |
@@ -245,7 +245,7 b' def resolved(mctx, x):' | |||||
245 | getargs(x, 0, 0, _(b"resolved takes no arguments")) |
|
245 | getargs(x, 0, 0, _(b"resolved takes no arguments")) | |
246 | if mctx.ctx.rev() is not None: |
|
246 | if mctx.ctx.rev() is not None: | |
247 | return mctx.never() |
|
247 | return mctx.never() | |
248 | ms = merge.mergestate.read(mctx.ctx.repo()) |
|
248 | ms = mergestatemod.mergestate.read(mctx.ctx.repo()) | |
249 | return mctx.predicate( |
|
249 | return mctx.predicate( | |
250 | lambda f: f in ms and ms[f] == b'r', predrepr=b'resolved' |
|
250 | lambda f: f in ms and ms[f] == b'r', predrepr=b'resolved' | |
251 | ) |
|
251 | ) | |
@@ -259,7 +259,7 b' def unresolved(mctx, x):' | |||||
259 | getargs(x, 0, 0, _(b"unresolved takes no arguments")) |
|
259 | getargs(x, 0, 0, _(b"unresolved takes no arguments")) | |
260 | if mctx.ctx.rev() is not None: |
|
260 | if mctx.ctx.rev() is not None: | |
261 | return mctx.never() |
|
261 | return mctx.never() | |
262 | ms = merge.mergestate.read(mctx.ctx.repo()) |
|
262 | ms = mergestatemod.mergestate.read(mctx.ctx.repo()) | |
263 | return mctx.predicate( |
|
263 | return mctx.predicate( | |
264 | lambda f: f in ms and ms[f] == b'u', predrepr=b'unresolved' |
|
264 | lambda f: f in ms and ms[f] == b'u', predrepr=b'unresolved' | |
265 | ) |
|
265 | ) |
@@ -345,6 +345,11 b' def loaddoc(topic, subdir=None):' | |||||
345 |
|
345 | |||
346 | internalstable = sorted( |
|
346 | internalstable = sorted( | |
347 | [ |
|
347 | [ | |
|
348 | ( | |||
|
349 | [b'bid-merge'], | |||
|
350 | _(b'Bid Merge Algorithm'), | |||
|
351 | loaddoc(b'bid-merge', subdir=b'internals'), | |||
|
352 | ), | |||
348 | ([b'bundle2'], _(b'Bundle2'), loaddoc(b'bundle2', subdir=b'internals')), |
|
353 | ([b'bundle2'], _(b'Bundle2'), loaddoc(b'bundle2', subdir=b'internals')), | |
349 | ([b'bundles'], _(b'Bundles'), loaddoc(b'bundles', subdir=b'internals')), |
|
354 | ([b'bundles'], _(b'Bundles'), loaddoc(b'bundles', subdir=b'internals')), | |
350 | ([b'cbor'], _(b'CBOR'), loaddoc(b'cbor', subdir=b'internals')), |
|
355 | ([b'cbor'], _(b'CBOR'), loaddoc(b'cbor', subdir=b'internals')), |
@@ -408,6 +408,24 b' Supported arguments:' | |||||
408 | If no suitable authentication entry is found, the user is prompted |
|
408 | If no suitable authentication entry is found, the user is prompted | |
409 | for credentials as usual if required by the remote. |
|
409 | for credentials as usual if required by the remote. | |
410 |
|
410 | |||
|
411 | ``cmdserver`` | |||
|
412 | ------------- | |||
|
413 | ||||
|
414 | Controls command server settings. (ADVANCED) | |||
|
415 | ||||
|
416 | ``message-encodings`` | |||
|
417 | List of encodings for the ``m`` (message) channel. The first encoding | |||
|
418 | supported by the server will be selected and advertised in the hello | |||
|
419 | message. This is useful only when ``ui.message-output`` is set to | |||
|
420 | ``channel``. Supported encodings are ``cbor``. | |||
|
421 | ||||
|
422 | ``shutdown-on-interrupt`` | |||
|
423 | If set to false, the server's main loop will continue running after | |||
|
424 | SIGINT received. ``runcommand`` requests can still be interrupted by | |||
|
425 | SIGINT. Close the write end of the pipe to shut down the server | |||
|
426 | process gracefully. | |||
|
427 | (default: True) | |||
|
428 | ||||
411 | ``color`` |
|
429 | ``color`` | |
412 | --------- |
|
430 | --------- | |
413 |
|
431 | |||
@@ -1872,6 +1890,15 b' Alias definitions for revsets. See :hg:`' | |||||
1872 | applicable for `hg amend`, `hg commit --amend` and `hg uncommit` in the |
|
1890 | applicable for `hg amend`, `hg commit --amend` and `hg uncommit` in the | |
1873 | current version. |
|
1891 | current version. | |
1874 |
|
1892 | |||
|
1893 | ``empty-successor`` | |||
|
1894 | ||||
|
1895 | Control what happens with empty successors that are the result of rewrite | |||
|
1896 | operations. If set to ``skip``, the successor is not created. If set to | |||
|
1897 | ``keep``, the empty successor is created and kept. | |||
|
1898 | ||||
|
1899 | Currently, only the rebase and absorb commands consider this configuration. | |||
|
1900 | (EXPERIMENTAL) | |||
|
1901 | ||||
1875 | ``storage`` |
|
1902 | ``storage`` | |
1876 | ----------- |
|
1903 | ----------- | |
1877 |
|
1904 | |||
@@ -2371,6 +2398,8 b' User interface controls.' | |||||
2371 | ``message-output`` |
|
2398 | ``message-output`` | |
2372 | Where to write status and error messages. (default: ``stdio``) |
|
2399 | Where to write status and error messages. (default: ``stdio``) | |
2373 |
|
2400 | |||
|
2401 | ``channel`` | |||
|
2402 | Use separate channel for structured output. (Command-server only) | |||
2374 | ``stderr`` |
|
2403 | ``stderr`` | |
2375 | Everything to stderr. |
|
2404 | Everything to stderr. | |
2376 | ``stdio`` |
|
2405 | ``stdio`` |
@@ -10,7 +10,9 b' after the command.' | |||||
10 |
|
10 | |||
11 | Every flag has at least a long name, such as --repository. Some flags may also |
|
11 | Every flag has at least a long name, such as --repository. Some flags may also | |
12 | have a short one-letter name, such as the equivalent -R. Using the short or long |
|
12 | have a short one-letter name, such as the equivalent -R. Using the short or long | |
13 | name is equivalent and has the same effect. |
|
13 | name is equivalent and has the same effect. The long name may be abbreviated to | |
|
14 | any unambiguous prefix. For example, :hg:`commit --amend` can be abbreviated | |||
|
15 | to :hg:`commit --am`. | |||
14 |
|
16 | |||
15 | Flags that have a short name can also be bundled together - for instance, to |
|
17 | Flags that have a short name can also be bundled together - for instance, to | |
16 | specify both --edit (short -e) and --interactive (short -i), one could use:: |
|
18 | specify both --edit (short -e) and --interactive (short -i), one could use:: |
@@ -142,3 +142,16 b' Support for this requirement was added i' | |||||
142 | August 2019). The requirement will only be present on repositories |
|
142 | August 2019). The requirement will only be present on repositories | |
143 | that have opted in to this format (by having |
|
143 | that have opted in to this format (by having | |
144 | ``format.bookmarks-in-store=true`` set when they were created). |
|
144 | ``format.bookmarks-in-store=true`` set when they were created). | |
|
145 | ||||
|
146 | persistent-nodemap | |||
|
147 | ================== | |||
|
148 | ||||
|
149 | The `nodemap` index (mapping nodeid to local revision number) is persisted on | |||
|
150 | disk. This provides speed benefit (if the associated native code is used). The | |||
|
151 | persistent nodemap is only used for two revlogs: the changelog and the | |||
|
152 | manifestlog. | |||
|
153 | ||||
|
154 | Support for this requirement was added in Mercurial 5.5 (released August 2020). | |||
|
155 | Note that as of 5.5, only installations compiled with the Rust extension will | |||
|
156 | benefit from a speedup. The other installations will do the necessary work to | |||
|
157 | keep the index up to date, but will suffer a slowdown. |
@@ -161,7 +161,7 b' Version 2 Format' | |||||
161 |
|
161 | |||
162 | (In development. Format not finalized or stable.) |
|
162 | (In development. Format not finalized or stable.) | |
163 |
|
163 | |||
164 |
Version 2 is identical to version |
|
164 | Version 2 is identical to version 1 with the following differences. | |
165 |
|
165 | |||
166 | There is no dedicated *generaldelta* revlog format flag. Instead, |
|
166 | There is no dedicated *generaldelta* revlog format flag. Instead, | |
167 | the feature is implied enabled by default. |
|
167 | the feature is implied enabled by default. |
@@ -33,6 +33,7 b' from . import (' | |||||
33 | logcmdutil, |
|
33 | logcmdutil, | |
34 | logexchange, |
|
34 | logexchange, | |
35 | merge as mergemod, |
|
35 | merge as mergemod, | |
|
36 | mergestate as mergestatemod, | |||
36 | narrowspec, |
|
37 | narrowspec, | |
37 | node, |
|
38 | node, | |
38 | phases, |
|
39 | phases, | |
@@ -355,7 +356,7 b' def unshare(ui, repo):' | |||||
355 |
|
356 | |||
356 | repo.requirements.discard(b'shared') |
|
357 | repo.requirements.discard(b'shared') | |
357 | repo.requirements.discard(b'relshared') |
|
358 | repo.requirements.discard(b'relshared') | |
358 | repo._writerequirements() |
|
359 | scmutil.writereporequirements(repo) | |
359 |
|
360 | |||
360 | # Removing share changes some fundamental properties of the repo instance. |
|
361 | # Removing share changes some fundamental properties of the repo instance. | |
361 | # So we instantiate a new repo object and operate on it rather than |
|
362 | # So we instantiate a new repo object and operate on it rather than | |
@@ -1164,7 +1165,7 b' def merge(' | |||||
1164 |
|
1165 | |||
1165 |
|
1166 | |||
1166 | def abortmerge(ui, repo): |
|
1167 | def abortmerge(ui, repo): | |
1167 | ms = mergemod.mergestate.read(repo) |
|
1168 | ms = mergestatemod.mergestate.read(repo) | |
1168 | if ms.active(): |
|
1169 | if ms.active(): | |
1169 | # there were conflicts |
|
1170 | # there were conflicts | |
1170 | node = ms.localctx.hex() |
|
1171 | node = ms.localctx.hex() |
@@ -313,7 +313,7 b' class _httprequesthandlerssl(_httpreques' | |||||
313 | try: |
|
313 | try: | |
314 | from .. import sslutil |
|
314 | from .. import sslutil | |
315 |
|
315 | |||
316 |
sslutil. |
|
316 | sslutil.wrapserversocket | |
317 | except ImportError: |
|
317 | except ImportError: | |
318 | raise error.Abort(_(b"SSL support is unavailable")) |
|
318 | raise error.Abort(_(b"SSL support is unavailable")) | |
319 |
|
319 |
@@ -158,6 +158,10 b' def _exthook(ui, repo, htype, name, cmd,' | |||||
158 | env[b'HG_HOOKNAME'] = name |
|
158 | env[b'HG_HOOKNAME'] = name | |
159 |
|
159 | |||
160 | for k, v in pycompat.iteritems(args): |
|
160 | for k, v in pycompat.iteritems(args): | |
|
161 | # transaction changes can accumulate MBs of data, so skip it | |||
|
162 | # for external hooks | |||
|
163 | if k == b'changes': | |||
|
164 | continue | |||
161 | if callable(v): |
|
165 | if callable(v): | |
162 | v = v() |
|
166 | v = v() | |
163 | if isinstance(v, (dict, list)): |
|
167 | if isinstance(v, (dict, list)): |
@@ -1395,6 +1395,9 b' class imanifestlog(interfaceutil.Interfa' | |||||
1395 | Raises ``error.LookupError`` if the node is not known. |
|
1395 | Raises ``error.LookupError`` if the node is not known. | |
1396 | """ |
|
1396 | """ | |
1397 |
|
1397 | |||
|
1398 | def update_caches(transaction): | |||
|
1399 | """update whatever cache are relevant for the used storage.""" | |||
|
1400 | ||||
1398 |
|
1401 | |||
1399 | class ilocalrepositoryfilestorage(interfaceutil.Interface): |
|
1402 | class ilocalrepositoryfilestorage(interfaceutil.Interface): | |
1400 | """Local repository sub-interface providing access to tracked file storage. |
|
1403 | """Local repository sub-interface providing access to tracked file storage. |
@@ -44,8 +44,9 b' from . import (' | |||||
44 | hook, |
|
44 | hook, | |
45 | lock as lockmod, |
|
45 | lock as lockmod, | |
46 | match as matchmod, |
|
46 | match as matchmod, | |
47 | merge as mergemod, |
|
47 | mergestate as mergestatemod, | |
48 | mergeutil, |
|
48 | mergeutil, | |
|
49 | metadata, | |||
49 | namespaces, |
|
50 | namespaces, | |
50 | narrowspec, |
|
51 | narrowspec, | |
51 | obsolete, |
|
52 | obsolete, | |
@@ -411,13 +412,13 b' class locallegacypeer(localpeer):' | |||||
411 |
|
412 | |||
412 | def changegroup(self, nodes, source): |
|
413 | def changegroup(self, nodes, source): | |
413 | outgoing = discovery.outgoing( |
|
414 | outgoing = discovery.outgoing( | |
414 | self._repo, missingroots=nodes, missingheads=self._repo.heads() |
|
415 | self._repo, missingroots=nodes, ancestorsof=self._repo.heads() | |
415 | ) |
|
416 | ) | |
416 | return changegroup.makechangegroup(self._repo, outgoing, b'01', source) |
|
417 | return changegroup.makechangegroup(self._repo, outgoing, b'01', source) | |
417 |
|
418 | |||
418 | def changegroupsubset(self, bases, heads, source): |
|
419 | def changegroupsubset(self, bases, heads, source): | |
419 | outgoing = discovery.outgoing( |
|
420 | outgoing = discovery.outgoing( | |
420 | self._repo, missingroots=bases, missingheads=heads |
|
421 | self._repo, missingroots=bases, ancestorsof=heads | |
421 | ) |
|
422 | ) | |
422 | return changegroup.makechangegroup(self._repo, outgoing, b'01', source) |
|
423 | return changegroup.makechangegroup(self._repo, outgoing, b'01', source) | |
423 |
|
424 | |||
@@ -445,6 +446,9 b" SIDEDATA_REQUIREMENT = b'exp-sidedata-fl" | |||||
445 | # copies related information in changeset's sidedata. |
|
446 | # copies related information in changeset's sidedata. | |
446 | COPIESSDC_REQUIREMENT = b'exp-copies-sidedata-changeset' |
|
447 | COPIESSDC_REQUIREMENT = b'exp-copies-sidedata-changeset' | |
447 |
|
448 | |||
|
449 | # The repository use persistent nodemap for the changelog and the manifest. | |||
|
450 | NODEMAP_REQUIREMENT = b'persistent-nodemap' | |||
|
451 | ||||
448 | # Functions receiving (ui, features) that extensions can register to impact |
|
452 | # Functions receiving (ui, features) that extensions can register to impact | |
449 | # the ability to load repositories with custom requirements. Only |
|
453 | # the ability to load repositories with custom requirements. Only | |
450 | # functions defined in loaded extensions are called. |
|
454 | # functions defined in loaded extensions are called. | |
@@ -505,6 +509,11 b' def makelocalrepository(baseui, path, in' | |||||
505 | except OSError as e: |
|
509 | except OSError as e: | |
506 | if e.errno != errno.ENOENT: |
|
510 | if e.errno != errno.ENOENT: | |
507 | raise |
|
511 | raise | |
|
512 | except ValueError as e: | |||
|
513 | # Can be raised on Python 3.8 when path is invalid. | |||
|
514 | raise error.Abort( | |||
|
515 | _(b'invalid path %s: %s') % (path, pycompat.bytestr(e)) | |||
|
516 | ) | |||
508 |
|
517 | |||
509 | raise error.RepoError(_(b'repository %s not found') % path) |
|
518 | raise error.RepoError(_(b'repository %s not found') % path) | |
510 |
|
519 | |||
@@ -933,10 +942,12 b' def resolverevlogstorevfsoptions(ui, req' | |||||
933 |
|
942 | |||
934 | if ui.configbool(b'experimental', b'rust.index'): |
|
943 | if ui.configbool(b'experimental', b'rust.index'): | |
935 | options[b'rust.index'] = True |
|
944 | options[b'rust.index'] = True | |
936 | if ui.configbool(b'experimental', b'exp-persistent-nodemap'): |
|
945 | if NODEMAP_REQUIREMENT in requirements: | |
937 | options[b'exp-persistent-nodemap'] = True |
|
946 | options[b'persistent-nodemap'] = True | |
938 | if ui.configbool(b'experimental', b'exp-persistent-nodemap.mmap'): |
|
947 | if ui.configbool(b'storage', b'revlog.nodemap.mmap'): | |
939 | options[b'exp-persistent-nodemap.mmap'] = True |
|
948 | options[b'persistent-nodemap.mmap'] = True | |
|
949 | epnm = ui.config(b'storage', b'revlog.nodemap.mode') | |||
|
950 | options[b'persistent-nodemap.mode'] = epnm | |||
940 | if ui.configbool(b'devel', b'persistent-nodemap'): |
|
951 | if ui.configbool(b'devel', b'persistent-nodemap'): | |
941 | options[b'devel-force-nodemap'] = True |
|
952 | options[b'devel-force-nodemap'] = True | |
942 |
|
953 | |||
@@ -1021,6 +1032,7 b' class localrepository(object):' | |||||
1021 | REVLOGV2_REQUIREMENT, |
|
1032 | REVLOGV2_REQUIREMENT, | |
1022 | SIDEDATA_REQUIREMENT, |
|
1033 | SIDEDATA_REQUIREMENT, | |
1023 | SPARSEREVLOG_REQUIREMENT, |
|
1034 | SPARSEREVLOG_REQUIREMENT, | |
|
1035 | NODEMAP_REQUIREMENT, | |||
1024 | bookmarks.BOOKMARKS_IN_STORE_REQUIREMENT, |
|
1036 | bookmarks.BOOKMARKS_IN_STORE_REQUIREMENT, | |
1025 | } |
|
1037 | } | |
1026 | _basesupported = supportedformats | { |
|
1038 | _basesupported = supportedformats | { | |
@@ -1223,8 +1235,9 b' class localrepository(object):' | |||||
1223 | if path.startswith(b'cache/'): |
|
1235 | if path.startswith(b'cache/'): | |
1224 | msg = b'accessing cache with vfs instead of cachevfs: "%s"' |
|
1236 | msg = b'accessing cache with vfs instead of cachevfs: "%s"' | |
1225 | repo.ui.develwarn(msg % path, stacklevel=3, config=b"cache-vfs") |
|
1237 | repo.ui.develwarn(msg % path, stacklevel=3, config=b"cache-vfs") | |
1226 | if path.startswith(b'journal.') or path.startswith(b'undo.'): |
|
1238 | # path prefixes covered by 'lock' | |
1227 | # journal is covered by 'lock' |
|
1239 | vfs_path_prefixes = (b'journal.', b'undo.', b'strip-backup/') | |
|
1240 | if any(path.startswith(prefix) for prefix in vfs_path_prefixes): | |||
1228 | if repo._currentlock(repo._lockref) is None: |
|
1241 | if repo._currentlock(repo._lockref) is None: | |
1229 | repo.ui.develwarn( |
|
1242 | repo.ui.develwarn( | |
1230 | b'write with no lock: "%s"' % path, |
|
1243 | b'write with no lock: "%s"' % path, | |
@@ -1285,9 +1298,6 b' class localrepository(object):' | |||||
1285 | caps.add(b'bundle2=' + urlreq.quote(capsblob)) |
|
1298 | caps.add(b'bundle2=' + urlreq.quote(capsblob)) | |
1286 | return caps |
|
1299 | return caps | |
1287 |
|
1300 | |||
1288 | def _writerequirements(self): |
|
|||
1289 | scmutil.writerequires(self.vfs, self.requirements) |
|
|||
1290 |
|
||||
1291 | # Don't cache auditor/nofsauditor, or you'll end up with reference cycle: |
|
1301 | # Don't cache auditor/nofsauditor, or you'll end up with reference cycle: | |
1292 | # self -> auditor -> self._checknested -> self |
|
1302 | # self -> auditor -> self._checknested -> self | |
1293 |
|
1303 | |||
@@ -2239,6 +2249,7 b' class localrepository(object):' | |||||
2239 |
|
2249 | |||
2240 | tr.hookargs[b'txnid'] = txnid |
|
2250 | tr.hookargs[b'txnid'] = txnid | |
2241 | tr.hookargs[b'txnname'] = desc |
|
2251 | tr.hookargs[b'txnname'] = desc | |
|
2252 | tr.hookargs[b'changes'] = tr.changes | |||
2242 | # note: writing the fncache only during finalize mean that the file is |
|
2253 | # note: writing the fncache only during finalize mean that the file is | |
2243 | # outdated when running hooks. As fncache is used for streaming clone, |
|
2254 | # outdated when running hooks. As fncache is used for streaming clone, | |
2244 | # this is not expected to break anything that happen during the hooks. |
|
2255 | # this is not expected to break anything that happen during the hooks. | |
@@ -2461,7 +2472,7 b' class localrepository(object):' | |||||
2461 | ui.status( |
|
2472 | ui.status( | |
2462 | _(b'working directory now based on revision %d\n') % parents |
|
2473 | _(b'working directory now based on revision %d\n') % parents | |
2463 | ) |
|
2474 | ) | |
2464 | mergemod.mergestate.clean(self, self[b'.'].node()) |
|
2475 | mergestatemod.mergestate.clean(self, self[b'.'].node()) | |
2465 |
|
2476 | |||
2466 | # TODO: if we know which new heads may result from this rollback, pass |
|
2477 | # TODO: if we know which new heads may result from this rollback, pass | |
2467 | # them to destroy(), which will prevent the branchhead cache from being |
|
2478 | # them to destroy(), which will prevent the branchhead cache from being | |
@@ -2511,6 +2522,7 b' class localrepository(object):' | |||||
2511 | unfi = self.unfiltered() |
|
2522 | unfi = self.unfiltered() | |
2512 |
|
2523 | |||
2513 | self.changelog.update_caches(transaction=tr) |
|
2524 | self.changelog.update_caches(transaction=tr) | |
|
2525 | self.manifestlog.update_caches(transaction=tr) | |||
2514 |
|
2526 | |||
2515 | rbc = unfi.revbranchcache() |
|
2527 | rbc = unfi.revbranchcache() | |
2516 | for r in unfi.changelog: |
|
2528 | for r in unfi.changelog: | |
@@ -2771,6 +2783,22 b' class localrepository(object):' | |||||
2771 | ): |
|
2783 | ): | |
2772 | """ |
|
2784 | """ | |
2773 | commit an individual file as part of a larger transaction |
|
2785 | commit an individual file as part of a larger transaction | |
|
2786 | ||||
|
2787 | input: | |||
|
2788 | ||||
|
2789 | fctx: a file context with the content we are trying to commit | |||
|
2790 | manifest1: manifest of changeset first parent | |||
|
2791 | manifest2: manifest of changeset second parent | |||
|
2792 | linkrev: revision number of the changeset being created | |||
|
2793 | tr: current transaction | |||
|
2794 | changelist: list of files being changed (modified in place) | |||
|
2795 | individual: boolean, set to False to skip storing the copy data | |||
|
2796 | (only used by the Google specific feature of using | |||
|
2797 | changeset extra as copy source of truth). | |||
|
2798 | ||||
|
2799 | output: | |||
|
2800 | ||||
|
2801 | The resulting filenode | |||
2774 | """ |
|
2802 | """ | |
2775 |
|
2803 | |||
2776 | fname = fctx.path() |
|
2804 | fname = fctx.path() | |
@@ -2859,16 +2887,16 b' class localrepository(object):' | |||||
2859 | fparent2 = nullid |
|
2887 | fparent2 = nullid | |
2860 | elif not fparentancestors: |
|
2888 | elif not fparentancestors: | |
2861 | # TODO: this whole if-else might be simplified much more |
|
2889 | # TODO: this whole if-else might be simplified much more | |
2862 | ms = mergemod.mergestate.read(self) |
|
2890 | ms = mergestatemod.mergestate.read(self) | |
2863 | if ( |
|
2891 | if ( | |
2864 | fname in ms |
|
2892 | fname in ms | |
2865 | and ms[fname] == mergemod.MERGE_RECORD_MERGED_OTHER |
|
2893 | and ms[fname] == mergestatemod.MERGE_RECORD_MERGED_OTHER | |
2866 | ): |
|
2894 | ): | |
2867 | fparent1, fparent2 = fparent2, nullid |
|
2895 | fparent1, fparent2 = fparent2, nullid | |
2868 |
|
2896 | |||
2869 | # is the file changed? |
|
2897 | # is the file changed? | |
2870 | text = fctx.data() |
|
2898 | text = fctx.data() | |
2871 |
if fparent2 != nullid or flog.cmp(fparent1, text) |
|
2899 | if fparent2 != nullid or meta or flog.cmp(fparent1, text): | |
2872 | changelist.append(fname) |
|
2900 | changelist.append(fname) | |
2873 | return flog.add(text, meta, tr, linkrev, fparent1, fparent2) |
|
2901 | return flog.add(text, meta, tr, linkrev, fparent1, fparent2) | |
2874 | # are just the flags changed during merge? |
|
2902 | # are just the flags changed during merge? | |
@@ -2960,18 +2988,13 b' class localrepository(object):' | |||||
2960 | self, status, text, user, date, extra |
|
2988 | self, status, text, user, date, extra | |
2961 | ) |
|
2989 | ) | |
2962 |
|
2990 | |||
2963 | ms = mergemod.mergestate.read(self) |
|
2991 | ms = mergestatemod.mergestate.read(self) | |
2964 | mergeutil.checkunresolved(ms) |
|
2992 | mergeutil.checkunresolved(ms) | |
2965 |
|
2993 | |||
2966 | # internal config: ui.allowemptycommit |
|
2994 | # internal config: ui.allowemptycommit | |
2967 | allowemptycommit = ( |
|
2995 | if cctx.isempty() and not self.ui.configbool( | |
2968 | wctx.branch() != wctx.p1().branch() |
|
2996 | b'ui', b'allowemptycommit' | |
2969 | or extra.get(b'close') |
|
2997 | ): | |
2970 | or merge |
|
|||
2971 | or cctx.files() |
|
|||
2972 | or self.ui.configbool(b'ui', b'allowemptycommit') |
|
|||
2973 | ) |
|
|||
2974 | if not allowemptycommit: |
|
|||
2975 | self.ui.debug(b'nothing to commit, clearing merge state\n') |
|
2998 | self.ui.debug(b'nothing to commit, clearing merge state\n') | |
2976 | ms.reset() |
|
2999 | ms.reset() | |
2977 | return None |
|
3000 | return None | |
@@ -3018,6 +3041,12 b' class localrepository(object):' | |||||
3018 | self.ui.write( |
|
3041 | self.ui.write( | |
3019 | _(b'note: commit message saved in %s\n') % msgfn |
|
3042 | _(b'note: commit message saved in %s\n') % msgfn | |
3020 | ) |
|
3043 | ) | |
|
3044 | self.ui.write( | |||
|
3045 | _( | |||
|
3046 | b"note: use 'hg commit --logfile " | |||
|
3047 | b".hg/last-message.txt --edit' to reuse it\n" | |||
|
3048 | ) | |||
|
3049 | ) | |||
3021 | raise |
|
3050 | raise | |
3022 |
|
3051 | |||
3023 | def commithook(unused_success): |
|
3052 | def commithook(unused_success): | |
@@ -3131,51 +3160,8 b' class localrepository(object):' | |||||
3131 | for f in drop: |
|
3160 | for f in drop: | |
3132 | del m[f] |
|
3161 | del m[f] | |
3133 | if p2.rev() != nullrev: |
|
3162 | if p2.rev() != nullrev: | |
3134 |
|
3163 | rf = metadata.get_removal_filter(ctx, (p1, p2, m1, m2)) | ||
3135 | @util.cachefunc |
|
3164 | removed = [f for f in removed if not rf(f)] | |
3136 | def mas(): |
|
|||
3137 | p1n = p1.node() |
|
|||
3138 | p2n = p2.node() |
|
|||
3139 | cahs = self.changelog.commonancestorsheads(p1n, p2n) |
|
|||
3140 | if not cahs: |
|
|||
3141 | cahs = [nullrev] |
|
|||
3142 | return [self[r].manifest() for r in cahs] |
|
|||
3143 |
|
||||
3144 | def deletionfromparent(f): |
|
|||
3145 | # When a file is removed relative to p1 in a merge, this |
|
|||
3146 | # function determines whether the absence is due to a |
|
|||
3147 | # deletion from a parent, or whether the merge commit |
|
|||
3148 | # itself deletes the file. We decide this by doing a |
|
|||
3149 | # simplified three way merge of the manifest entry for |
|
|||
3150 | # the file. There are two ways we decide the merge |
|
|||
3151 | # itself didn't delete a file: |
|
|||
3152 | # - neither parent (nor the merge) contain the file |
|
|||
3153 | # - exactly one parent contains the file, and that |
|
|||
3154 | # parent has the same filelog entry as the merge |
|
|||
3155 | # ancestor (or all of them if there two). In other |
|
|||
3156 | # words, that parent left the file unchanged while the |
|
|||
3157 | # other one deleted it. |
|
|||
3158 | # One way to think about this is that deleting a file is |
|
|||
3159 | # similar to emptying it, so the list of changed files |
|
|||
3160 | # should be similar either way. The computation |
|
|||
3161 | # described above is not done directly in _filecommit |
|
|||
3162 | # when creating the list of changed files, however |
|
|||
3163 | # it does something very similar by comparing filelog |
|
|||
3164 | # nodes. |
|
|||
3165 | if f in m1: |
|
|||
3166 | return f not in m2 and all( |
|
|||
3167 | f in ma and ma.find(f) == m1.find(f) |
|
|||
3168 | for ma in mas() |
|
|||
3169 | ) |
|
|||
3170 | elif f in m2: |
|
|||
3171 | return all( |
|
|||
3172 | f in ma and ma.find(f) == m2.find(f) |
|
|||
3173 | for ma in mas() |
|
|||
3174 | ) |
|
|||
3175 | else: |
|
|||
3176 | return True |
|
|||
3177 |
|
||||
3178 | removed = [f for f in removed if not deletionfromparent(f)] |
|
|||
3179 |
|
3165 | |||
3180 | files = changed + removed |
|
3166 | files = changed + removed | |
3181 | md = None |
|
3167 | md = None | |
@@ -3653,6 +3639,9 b' def newreporequirements(ui, createopts):' | |||||
3653 | if ui.configbool(b'format', b'bookmarks-in-store'): |
|
3639 | if ui.configbool(b'format', b'bookmarks-in-store'): | |
3654 | requirements.add(bookmarks.BOOKMARKS_IN_STORE_REQUIREMENT) |
|
3640 | requirements.add(bookmarks.BOOKMARKS_IN_STORE_REQUIREMENT) | |
3655 |
|
3641 | |||
|
3642 | if ui.configbool(b'format', b'use-persistent-nodemap'): | |||
|
3643 | requirements.add(NODEMAP_REQUIREMENT) | |||
|
3644 | ||||
3656 | return requirements |
|
3645 | return requirements | |
3657 |
|
3646 | |||
3658 |
|
3647 |
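The new ``format.use-persistent-nodemap`` option is what gates this extra requirement. A minimal sketch of enabling it programmatically before creating a repository; the path and the exact requirement string are assumptions made for the example::

    from mercurial import hg, ui as uimod

    ui = uimod.ui.load()
    ui.setconfig(b'format', b'use-persistent-nodemap', b'yes', b'example')
    repo = hg.repository(ui, b'/tmp/nodemap-demo', create=True)
    # the fresh repo should now carry the nodemap requirement (assumed to be
    # spelled b'persistent-nodemap' here)
    assert b'persistent-nodemap' in repo.requirements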
@@ -72,8 +72,8 b' def diffordiffstat(' | |||||
72 | ui, |
|
72 | ui, | |
73 | repo, |
|
73 | repo, | |
74 | diffopts, |
|
74 | diffopts, | |
75 |
|
|
75 | ctx1, | |
76 |
|
|
76 | ctx2, | |
77 | match, |
|
77 | match, | |
78 | changes=None, |
|
78 | changes=None, | |
79 | stat=False, |
|
79 | stat=False, | |
@@ -85,8 +85,6 b' def diffordiffstat(' | |||||
85 | hunksfilterfn=None, |
|
85 | hunksfilterfn=None, | |
86 | ): |
|
86 | ): | |
87 | '''show diff or diffstat.''' |
|
87 | '''show diff or diffstat.''' | |
88 | ctx1 = repo[node1] |
|
|||
89 | ctx2 = repo[node2] |
|
|||
90 | if root: |
|
88 | if root: | |
91 | relroot = pathutil.canonpath(repo.root, repo.getcwd(), root) |
|
89 | relroot = pathutil.canonpath(repo.root, repo.getcwd(), root) | |
92 | else: |
|
90 | else: | |
@@ -173,6 +171,7 b' def diffordiffstat(' | |||||
173 | for chunk, label in chunks: |
|
171 | for chunk, label in chunks: | |
174 | ui.write(chunk, label=label) |
|
172 | ui.write(chunk, label=label) | |
175 |
|
173 | |||
|
174 | node2 = ctx2.node() | |||
176 | for subpath, sub in scmutil.itersubrepos(ctx1, ctx2): |
|
175 | for subpath, sub in scmutil.itersubrepos(ctx1, ctx2): | |
177 | tempnode2 = node2 |
|
176 | tempnode2 = node2 | |
178 | try: |
|
177 | try: | |
@@ -208,15 +207,12 b' class changesetdiffer(object):' | |||||
208 | return None |
|
207 | return None | |
209 |
|
208 | |||
210 | def showdiff(self, ui, ctx, diffopts, graphwidth=0, stat=False): |
|
209 | def showdiff(self, ui, ctx, diffopts, graphwidth=0, stat=False): | |
211 | repo = ctx.repo() |
|
|||
212 | node = ctx.node() |
|
|||
213 | prev = ctx.p1().node() |
|
|||
214 | diffordiffstat( |
|
210 | diffordiffstat( | |
215 | ui, |
|
211 | ui, | |
216 | repo, |
|
212 | ctx.repo(), | |
217 | diffopts, |
|
213 | diffopts, | |
218 |
|
|
214 | ctx.p1(), | |
219 |
|
|
215 | ctx, | |
220 | match=self._makefilematcher(ctx), |
|
216 | match=self._makefilematcher(ctx), | |
221 | stat=stat, |
|
217 | stat=stat, | |
222 | graphwidth=graphwidth, |
|
218 | graphwidth=graphwidth, |
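After this hunk ``diffordiffstat()`` takes changectx objects instead of nodes, and ``showdiff()`` hands it ``ctx.p1()`` and ``ctx`` directly. A hypothetical caller, assuming the function lives in ``logcmdutil`` and that ``ui``, ``diffopts`` and ``match`` are already in scope, would now look like::

    from mercurial import logcmdutil

    ctx2 = repo[b'tip']
    ctx1 = ctx2.p1()
    logcmdutil.diffordiffstat(ui, repo, diffopts, ctx1, ctx2, match)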
@@ -58,14 +58,16 b' def _parse(data):' | |||||
58 | prev = l |
|
58 | prev = l | |
59 | f, n = l.split(b'\0') |
|
59 | f, n = l.split(b'\0') | |
60 | nl = len(n) |
|
60 | nl = len(n) | |
61 |
|
|
61 | flags = n[-1:] | |
62 | # modern hash, full width |
|
62 | if flags in _manifestflags: | |
63 | yield f, bin(n[:64]), n[64:] |
|
63 | n = n[:-1] | |
64 |
|
|
64 | nl -= 1 | |
65 | # legacy hash, always sha1 |
|
|||
66 | yield f, bin(n[:40]), n[40:] |
|
|||
67 | else: |
|
65 | else: | |
68 |
|
|
66 | flags = b'' | |
|
67 | if nl not in (40, 64): | |||
|
68 | raise ValueError(b'Invalid manifest line') | |||
|
69 | ||||
|
70 | yield f, bin(n), flags | |||
69 |
|
71 | |||
70 |
|
72 | |||
71 | def _text(it): |
|
73 | def _text(it): | |
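The rule enforced here is that a manifest line is ``<path>\0<hex node>[flag]``, with a 40- or 64-digit hex node and an optional trailing flag drawn from ``_manifestflags``; packing is the inverse, ``path + b'\0' + hex(node) + flags + b'\n'``. A standalone sketch of the same parsing rule (not the in-tree code)::

    _manifestflags = {b'', b'l', b't', b'x'}

    def parse_manifest_line(line):
        # line is b"<path>\0<hex node>[flag]" with the newline already stripped
        path, rest = line.split(b'\0')
        flags = rest[-1:]
        if flags in _manifestflags:
            rest = rest[:-1]
        else:
            flags = b''
        if len(rest) not in (40, 64):
            raise ValueError('invalid manifest line')
        return path, bytes.fromhex(rest.decode('ascii')), flags

    assert parse_manifest_line(b'foo.py\x00' + b'a' * 40 + b'x')[2] == b'x'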
@@ -121,8 +123,20 b' class lazymanifestiterentries(object):' | |||||
121 | self.pos += 1 |
|
123 | self.pos += 1 | |
122 | return data |
|
124 | return data | |
123 | zeropos = data.find(b'\x00', pos) |
|
125 | zeropos = data.find(b'\x00', pos) | |
124 | hashval = unhexlify(data, self.lm.extrainfo[self.pos], zeropos + 1, 40) |
|
126 | nlpos = data.find(b'\n', pos) | |
125 | flags = self.lm._getflags(data, self.pos, zeropos) |
|
127 | if zeropos == -1 or nlpos == -1 or nlpos < zeropos: | |
|
128 | raise error.StorageError(b'Invalid manifest line') | |||
|
129 | flags = data[nlpos - 1 : nlpos] | |||
|
130 | if flags in _manifestflags: | |||
|
131 | hlen = nlpos - zeropos - 2 | |||
|
132 | else: | |||
|
133 | hlen = nlpos - zeropos - 1 | |||
|
134 | flags = b'' | |||
|
135 | if hlen not in (40, 64): | |||
|
136 | raise error.StorageError(b'Invalid manifest line') | |||
|
137 | hashval = unhexlify( | |||
|
138 | data, self.lm.extrainfo[self.pos], zeropos + 1, hlen | |||
|
139 | ) | |||
126 | self.pos += 1 |
|
140 | self.pos += 1 | |
127 | return (data[pos:zeropos], hashval, flags) |
|
141 | return (data[pos:zeropos], hashval, flags) | |
128 |
|
142 | |||
@@ -140,6 +154,9 b' def _cmp(a, b):' | |||||
140 | return (a > b) - (a < b) |
|
154 | return (a > b) - (a < b) | |
141 |
|
155 | |||
142 |
|
156 | |||
|
157 | _manifestflags = {b'', b'l', b't', b'x'} | |||
|
158 | ||||
|
159 | ||||
143 | class _lazymanifest(object): |
|
160 | class _lazymanifest(object): | |
144 | """A pure python manifest backed by a byte string. It is supplimented with |
|
161 | """A pure python manifest backed by a byte string. It is supplimented with | |
145 | internal lists as it is modified, until it is compacted back to a pure byte |
|
162 | internal lists as it is modified, until it is compacted back to a pure byte | |
@@ -251,15 +268,6 b' class _lazymanifest(object):' | |||||
251 | def __contains__(self, key): |
|
268 | def __contains__(self, key): | |
252 | return self.bsearch(key) != -1 |
|
269 | return self.bsearch(key) != -1 | |
253 |
|
270 | |||
254 | def _getflags(self, data, needle, pos): |
|
|||
255 | start = pos + 41 |
|
|||
256 | end = data.find(b"\n", start) |
|
|||
257 | if end == -1: |
|
|||
258 | end = len(data) - 1 |
|
|||
259 | if start == end: |
|
|||
260 | return b'' |
|
|||
261 | return self.data[start:end] |
|
|||
262 |
|
||||
263 | def __getitem__(self, key): |
|
271 | def __getitem__(self, key): | |
264 | if not isinstance(key, bytes): |
|
272 | if not isinstance(key, bytes): | |
265 | raise TypeError(b"getitem: manifest keys must be a bytes.") |
|
273 | raise TypeError(b"getitem: manifest keys must be a bytes.") | |
@@ -273,13 +281,17 b' class _lazymanifest(object):' | |||||
273 | nlpos = data.find(b'\n', zeropos) |
|
281 | nlpos = data.find(b'\n', zeropos) | |
274 | assert 0 <= needle <= len(self.positions) |
|
282 | assert 0 <= needle <= len(self.positions) | |
275 | assert len(self.extrainfo) == len(self.positions) |
|
283 | assert len(self.extrainfo) == len(self.positions) | |
|
284 | if zeropos == -1 or nlpos == -1 or nlpos < zeropos: | |||
|
285 | raise error.StorageError(b'Invalid manifest line') | |||
276 | hlen = nlpos - zeropos - 1 |
|
286 | hlen = nlpos - zeropos - 1 | |
277 | # Hashes sometimes have an extra byte tucked on the end, so |
|
287 | flags = data[nlpos - 1 : nlpos] | |
278 | # detect that. |
|
288 | if flags in _manifestflags: | |
279 | if hlen % 2: |
|
|||
280 | hlen -= 1 |
|
289 | hlen -= 1 | |
|
290 | else: | |||
|
291 | flags = b'' | |||
|
292 | if hlen not in (40, 64): | |||
|
293 | raise error.StorageError(b'Invalid manifest line') | |||
281 | hashval = unhexlify(data, self.extrainfo[needle], zeropos + 1, hlen) |
|
294 | hashval = unhexlify(data, self.extrainfo[needle], zeropos + 1, hlen) | |
282 | flags = self._getflags(data, needle, zeropos) |
|
|||
283 | return (hashval, flags) |
|
295 | return (hashval, flags) | |
284 |
|
296 | |||
285 | def __delitem__(self, key): |
|
297 | def __delitem__(self, key): | |
@@ -408,9 +420,7 b' class _lazymanifest(object):' | |||||
408 |
|
420 | |||
409 | def _pack(self, d): |
|
421 | def _pack(self, d): | |
410 | n = d[1] |
|
422 | n = d[1] | |
411 | if len(n) == 21 or len(n) == 33: |
|
423 | assert len(n) in (20, 32) | |
412 | n = n[:-1] |
|
|||
413 | assert len(n) == 20 or len(n) == 32 |
|
|||
414 | return d[0] + b'\x00' + hex(n) + d[2] + b'\n' |
|
424 | return d[0] + b'\x00' + hex(n) + d[2] + b'\n' | |
415 |
|
425 | |||
416 | def text(self): |
|
426 | def text(self): | |
@@ -609,6 +619,8 b' class manifestdict(object):' | |||||
609 | return self._lm.diff(m2._lm, clean) |
|
619 | return self._lm.diff(m2._lm, clean) | |
610 |
|
620 | |||
611 | def setflag(self, key, flag): |
|
621 | def setflag(self, key, flag): | |
|
622 | if flag not in _manifestflags: | |||
|
623 | raise TypeError(b"Invalid manifest flag set.") | |||
612 | self._lm[key] = self[key], flag |
|
624 | self._lm[key] = self[key], flag | |
613 |
|
625 | |||
614 | def get(self, key, default=None): |
|
626 | def get(self, key, default=None): | |
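With the added check, ``setflag()`` only accepts flags from ``_manifestflags``, and node values must be exactly 20 or 32 bytes. A small, hypothetical usage sketch::

    from mercurial import manifest

    m = manifest.manifestdict()
    m[b'foo.py'] = b'\x11' * 20     # nodes are now exactly 20 or 32 bytes
    m.setflag(b'foo.py', b'x')      # executable bit: accepted
    try:
        m.setflag(b'foo.py', b'q')  # not a manifest flag: rejected
    except TypeError:
        pass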
@@ -1049,11 +1061,10 b' class treemanifest(object):' | |||||
1049 | self._dirs[dir].__setitem__(subpath, n) |
|
1061 | self._dirs[dir].__setitem__(subpath, n) | |
1050 | else: |
|
1062 | else: | |
1051 | # manifest nodes are either 20 bytes or 32 bytes, |
|
1063 | # manifest nodes are either 20 bytes or 32 bytes, | |
1052 | # depending on the hash in use. An extra byte is |
|
1064 | # depending on the hash in use. Assert this as historically | |
1053 | # occasionally used by hg, but won't ever be |
|
1065 | # sometimes extra bytes were added. | |
1054 | # persisted. Trim to 21 or 33 bytes as appropriate. |
|
1066 | assert len(n) in (20, 32) | |
1055 | trim = 21 if len(n) < 25 else 33 |
|
1067 | self._files[f] = n | |
1056 | self._files[f] = n[:trim] # to match manifestdict's behavior |
|
|||
1057 | self._dirty = True |
|
1068 | self._dirty = True | |
1058 |
|
1069 | |||
1059 | def _load(self): |
|
1070 | def _load(self): | |
@@ -1066,6 +1077,8 b' class treemanifest(object):' | |||||
1066 |
|
1077 | |||
1067 | def setflag(self, f, flags): |
|
1078 | def setflag(self, f, flags): | |
1068 | """Set the flags (symlink, executable) for path f.""" |
|
1079 | """Set the flags (symlink, executable) for path f.""" | |
|
1080 | if flags not in _manifestflags: | |||
|
1081 | raise TypeError(b"Invalid manifest flag set.") | |||
1069 | self._load() |
|
1082 | self._load() | |
1070 | dir, subpath = _splittopdir(f) |
|
1083 | dir, subpath = _splittopdir(f) | |
1071 | if dir: |
|
1084 | if dir: | |
@@ -1599,6 +1612,7 b' class manifestrevlog(object):' | |||||
1599 | checkambig=not bool(tree), |
|
1612 | checkambig=not bool(tree), | |
1600 | mmaplargeindex=True, |
|
1613 | mmaplargeindex=True, | |
1601 | upperboundcomp=MAXCOMPRESSION, |
|
1614 | upperboundcomp=MAXCOMPRESSION, | |
|
1615 | persistentnodemap=opener.options.get(b'persistent-nodemap', False), | |||
1602 | ) |
|
1616 | ) | |
1603 |
|
1617 | |||
1604 | self.index = self._revlog.index |
|
1618 | self.index = self._revlog.index | |
@@ -1664,6 +1678,22 b' class manifestrevlog(object):' | |||||
1664 | readtree=None, |
|
1678 | readtree=None, | |
1665 | match=None, |
|
1679 | match=None, | |
1666 | ): |
|
1680 | ): | |
|
1681 | """add some manifest entry in to the manifest log | |||
|
1682 | ||||
|
1683 | input: | |||
|
1684 | ||||
|
1685 | m: the manifest dict we want to store | |||
|
1686 | transaction: the open transaction | |||
|
1687 | p1: manifest-node of p1 | |||
|
1688 | p2: manifest-node of p2 | |||
|
1689 | added: files added/changed compared to the parents | |||
|
1690 | removed: files removed compared to the parents | |||
|
1691 | ||||
|
1692 | tree manifest input: | |||
|
1693 | ||||
|
1694 | readtree: a function to read a subtree | |||
|
1695 | match: a filematcher for the subpart of the tree manifest | |||
|
1696 | """ | |||
1667 | try: |
|
1697 | try: | |
1668 | if p1 not in self.fulltextcache: |
|
1698 | if p1 not in self.fulltextcache: | |
1669 | raise FastdeltaUnavailable() |
|
1699 | raise FastdeltaUnavailable() | |
@@ -1959,6 +1989,9 b' class manifestlog(object):' | |||||
1959 | def rev(self, node): |
|
1989 | def rev(self, node): | |
1960 | return self._rootstore.rev(node) |
|
1990 | return self._rootstore.rev(node) | |
1961 |
|
1991 | |||
|
1992 | def update_caches(self, transaction): | |||
|
1993 | return self._rootstore._revlog.update_caches(transaction=transaction) | |||
|
1994 | ||||
1962 |
|
1995 | |||
1963 | @interfaceutil.implementer(repository.imanifestrevisionwritable) |
|
1996 | @interfaceutil.implementer(repository.imanifestrevisionwritable) | |
1964 | class memmanifestctx(object): |
|
1997 | class memmanifestctx(object): |
@@ -17,6 +17,7 b' from .pycompat import (' | |||||
17 | setattr, |
|
17 | setattr, | |
18 | ) |
|
18 | ) | |
19 | from . import ( |
|
19 | from . import ( | |
|
20 | diffhelper, | |||
20 | encoding, |
|
21 | encoding, | |
21 | error, |
|
22 | error, | |
22 | policy, |
|
23 | policy, | |
@@ -25,8 +26,6 b' from . import (' | |||||
25 | ) |
|
26 | ) | |
26 | from .utils import dateutil |
|
27 | from .utils import dateutil | |
27 |
|
28 | |||
28 | _missing_newline_marker = b"\\ No newline at end of file\n" |
|
|||
29 |
|
||||
30 | bdiff = policy.importmod('bdiff') |
|
29 | bdiff = policy.importmod('bdiff') | |
31 | mpatch = policy.importmod('mpatch') |
|
30 | mpatch = policy.importmod('mpatch') | |
32 |
|
31 | |||
@@ -309,7 +308,7 b' def unidiff(a, ad, b, bd, fn1, fn2, bina' | |||||
309 | hunklines = [b"@@ -0,0 +1,%d @@\n" % size] + [b"+" + e for e in b] |
|
308 | hunklines = [b"@@ -0,0 +1,%d @@\n" % size] + [b"+" + e for e in b] | |
310 | if without_newline: |
|
309 | if without_newline: | |
311 | hunklines[-1] += b'\n' |
|
310 | hunklines[-1] += b'\n' | |
312 | hunklines.append(_missing_newline_marker) |
|
311 | hunklines.append(diffhelper.MISSING_NEWLINE_MARKER) | |
313 | hunks = ((hunkrange, hunklines),) |
|
312 | hunks = ((hunkrange, hunklines),) | |
314 | elif not b: |
|
313 | elif not b: | |
315 | without_newline = not a.endswith(b'\n') |
|
314 | without_newline = not a.endswith(b'\n') | |
@@ -325,7 +324,7 b' def unidiff(a, ad, b, bd, fn1, fn2, bina' | |||||
325 | hunklines = [b"@@ -1,%d +0,0 @@\n" % size] + [b"-" + e for e in a] |
|
324 | hunklines = [b"@@ -1,%d +0,0 @@\n" % size] + [b"-" + e for e in a] | |
326 | if without_newline: |
|
325 | if without_newline: | |
327 | hunklines[-1] += b'\n' |
|
326 | hunklines[-1] += b'\n' | |
328 | hunklines.append(_missing_newline_marker) |
|
327 | hunklines.append(diffhelper.MISSING_NEWLINE_MARKER) | |
329 | hunks = ((hunkrange, hunklines),) |
|
328 | hunks = ((hunkrange, hunklines),) | |
330 | else: |
|
329 | else: | |
331 | hunks = _unidiff(a, b, opts=opts) |
|
330 | hunks = _unidiff(a, b, opts=opts) | |
@@ -418,13 +417,13 b' def _unidiff(t1, t2, opts=defaultopts):' | |||||
418 | if hunklines[i].startswith(b' '): |
|
417 | if hunklines[i].startswith(b' '): | |
419 | skip = True |
|
418 | skip = True | |
420 | hunklines[i] += b'\n' |
|
419 | hunklines[i] += b'\n' | |
421 | hunklines.insert(i + 1, _missing_newline_marker) |
|
420 | hunklines.insert(i + 1, diffhelper.MISSING_NEWLINE_MARKER) | |
422 | break |
|
421 | break | |
423 | if not skip and not t2.endswith(b'\n') and bstart + blen == len(l2) + 1: |
|
422 | if not skip and not t2.endswith(b'\n') and bstart + blen == len(l2) + 1: | |
424 | for i in pycompat.xrange(len(hunklines) - 1, -1, -1): |
|
423 | for i in pycompat.xrange(len(hunklines) - 1, -1, -1): | |
425 | if hunklines[i].startswith(b'+'): |
|
424 | if hunklines[i].startswith(b'+'): | |
426 | hunklines[i] += b'\n' |
|
425 | hunklines[i] += b'\n' | |
427 | hunklines.insert(i + 1, _missing_newline_marker) |
|
426 | hunklines.insert(i + 1, diffhelper.MISSING_NEWLINE_MARKER) | |
428 | break |
|
427 | break | |
429 | yield hunkrange, hunklines |
|
428 | yield hunkrange, hunklines | |
430 |
|
429 |
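The module-private ``_missing_newline_marker`` constant is replaced by ``diffhelper.MISSING_NEWLINE_MARKER``. Assuming the byte value is carried over unchanged, the relationship is simply::

    from mercurial import diffhelper

    # same byte string the old module-private constant held
    assert diffhelper.MISSING_NEWLINE_MARKER == b'\\ No newline at end of file\n'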
@@ -8,21 +8,16 b'' | |||||
8 | from __future__ import absolute_import |
|
8 | from __future__ import absolute_import | |
9 |
|
9 | |||
10 | import errno |
|
10 | import errno | |
11 | import shutil |
|
|||
12 | import stat |
|
11 | import stat | |
13 | import struct |
|
12 | import struct | |
14 |
|
13 | |||
15 | from .i18n import _ |
|
14 | from .i18n import _ | |
16 | from .node import ( |
|
15 | from .node import ( | |
17 | addednodeid, |
|
16 | addednodeid, | |
18 | bin, |
|
|||
19 | hex, |
|
|||
20 | modifiednodeid, |
|
17 | modifiednodeid, | |
21 | nullhex, |
|
|||
22 | nullid, |
|
18 | nullid, | |
23 | nullrev, |
|
19 | nullrev, | |
24 | ) |
|
20 | ) | |
25 | from .pycompat import delattr |
|
|||
26 | from .thirdparty import attr |
|
21 | from .thirdparty import attr | |
27 | from . import ( |
|
22 | from . import ( | |
28 | copies, |
|
23 | copies, | |
@@ -30,6 +25,7 b' from . import (' | |||||
30 | error, |
|
25 | error, | |
31 | filemerge, |
|
26 | filemerge, | |
32 | match as matchmod, |
|
27 | match as matchmod, | |
|
28 | mergestate as mergestatemod, | |||
33 | obsutil, |
|
29 | obsutil, | |
34 | pathutil, |
|
30 | pathutil, | |
35 | pycompat, |
|
31 | pycompat, | |
@@ -38,741 +34,11 b' from . import (' | |||||
38 | util, |
|
34 | util, | |
39 | worker, |
|
35 | worker, | |
40 | ) |
|
36 | ) | |
41 | from .utils import hashutil |
|
|||
42 |
|
37 | |||
43 | _pack = struct.pack |
|
38 | _pack = struct.pack | |
44 | _unpack = struct.unpack |
|
39 | _unpack = struct.unpack | |
45 |
|
40 | |||
46 |
|
41 | |||
47 | def _droponode(data): |
|
|||
48 | # used for compatibility for v1 |
|
|||
49 | bits = data.split(b'\0') |
|
|||
50 | bits = bits[:-2] + bits[-1:] |
|
|||
51 | return b'\0'.join(bits) |
|
|||
52 |
|
||||
53 |
|
||||
54 | # Merge state record types. See ``mergestate`` docs for more. |
|
|||
55 | RECORD_LOCAL = b'L' |
|
|||
56 | RECORD_OTHER = b'O' |
|
|||
57 | RECORD_MERGED = b'F' |
|
|||
58 | RECORD_CHANGEDELETE_CONFLICT = b'C' |
|
|||
59 | RECORD_MERGE_DRIVER_MERGE = b'D' |
|
|||
60 | RECORD_PATH_CONFLICT = b'P' |
|
|||
61 | RECORD_MERGE_DRIVER_STATE = b'm' |
|
|||
62 | RECORD_FILE_VALUES = b'f' |
|
|||
63 | RECORD_LABELS = b'l' |
|
|||
64 | RECORD_OVERRIDE = b't' |
|
|||
65 | RECORD_UNSUPPORTED_MANDATORY = b'X' |
|
|||
66 | RECORD_UNSUPPORTED_ADVISORY = b'x' |
|
|||
67 | RECORD_RESOLVED_OTHER = b'R' |
|
|||
68 |
|
||||
69 | MERGE_DRIVER_STATE_UNMARKED = b'u' |
|
|||
70 | MERGE_DRIVER_STATE_MARKED = b'm' |
|
|||
71 | MERGE_DRIVER_STATE_SUCCESS = b's' |
|
|||
72 |
|
||||
73 | MERGE_RECORD_UNRESOLVED = b'u' |
|
|||
74 | MERGE_RECORD_RESOLVED = b'r' |
|
|||
75 | MERGE_RECORD_UNRESOLVED_PATH = b'pu' |
|
|||
76 | MERGE_RECORD_RESOLVED_PATH = b'pr' |
|
|||
77 | MERGE_RECORD_DRIVER_RESOLVED = b'd' |
|
|||
78 | # represents that the file was automatically merged in favor |
|
|||
79 | # of other version. This info is used on commit. |
|
|||
80 | MERGE_RECORD_MERGED_OTHER = b'o' |
|
|||
81 |
|
||||
82 | ACTION_FORGET = b'f' |
|
|||
83 | ACTION_REMOVE = b'r' |
|
|||
84 | ACTION_ADD = b'a' |
|
|||
85 | ACTION_GET = b'g' |
|
|||
86 | ACTION_PATH_CONFLICT = b'p' |
|
|||
87 | ACTION_PATH_CONFLICT_RESOLVE = b'pr' |
|
|||
88 | ACTION_ADD_MODIFIED = b'am' |
|
|||
89 | ACTION_CREATED = b'c' |
|
|||
90 | ACTION_DELETED_CHANGED = b'dc' |
|
|||
91 | ACTION_CHANGED_DELETED = b'cd' |
|
|||
92 | ACTION_MERGE = b'm' |
|
|||
93 | ACTION_LOCAL_DIR_RENAME_GET = b'dg' |
|
|||
94 | ACTION_DIR_RENAME_MOVE_LOCAL = b'dm' |
|
|||
95 | ACTION_KEEP = b'k' |
|
|||
96 | ACTION_EXEC = b'e' |
|
|||
97 | ACTION_CREATED_MERGE = b'cm' |
|
|||
98 | # GET the other/remote side and store this info in mergestate |
|
|||
99 | ACTION_GET_OTHER_AND_STORE = b'gs' |
|
|||
100 |
|
||||
101 |
|
||||
102 | class mergestate(object): |
|
|||
103 | '''track 3-way merge state of individual files |
|
|||
104 |
|
||||
105 | The merge state is stored on disk when needed. Two files are used: one with |
|
|||
106 | an old format (version 1), and one with a new format (version 2). Version 2 |
|
|||
107 | stores a superset of the data in version 1, including new kinds of records |
|
|||
108 | in the future. For more about the new format, see the documentation for |
|
|||
109 | `_readrecordsv2`. |
|
|||
110 |
|
||||
111 | Each record can contain arbitrary content, and has an associated type. This |
|
|||
112 | `type` should be a letter. If `type` is uppercase, the record is mandatory: |
|
|||
113 | versions of Mercurial that don't support it should abort. If `type` is |
|
|||
114 | lowercase, the record can be safely ignored. |
|
|||
115 |
|
||||
116 | Currently known records: |
|
|||
117 |
|
||||
118 | L: the node of the "local" part of the merge (hexified version) |
|
|||
119 | O: the node of the "other" part of the merge (hexified version) |
|
|||
120 | F: a file to be merged entry |
|
|||
121 | C: a change/delete or delete/change conflict |
|
|||
122 | D: a file that the external merge driver will merge internally |
|
|||
123 | (experimental) |
|
|||
124 | P: a path conflict (file vs directory) |
|
|||
125 | m: the external merge driver defined for this merge plus its run state |
|
|||
126 | (experimental) |
|
|||
127 | f: a (filename, dictionary) tuple of optional values for a given file |
|
|||
128 | X: unsupported mandatory record type (used in tests) |
|
|||
129 | x: unsupported advisory record type (used in tests) |
|
|||
130 | l: the labels for the parts of the merge. |
|
|||
131 |
|
||||
132 | Merge driver run states (experimental): |
|
|||
133 | u: driver-resolved files unmarked -- needs to be run next time we're about |
|
|||
134 | to resolve or commit |
|
|||
135 | m: driver-resolved files marked -- only needs to be run before commit |
|
|||
136 | s: success/skipped -- does not need to be run any more |
|
|||
137 |
|
||||
138 | Merge record states (stored in self._state, indexed by filename): |
|
|||
139 | u: unresolved conflict |
|
|||
140 | r: resolved conflict |
|
|||
141 | pu: unresolved path conflict (file conflicts with directory) |
|
|||
142 | pr: resolved path conflict |
|
|||
143 | d: driver-resolved conflict |
|
|||
144 |
|
||||
145 | The resolve command transitions between 'u' and 'r' for conflicts and |
|
|||
146 | 'pu' and 'pr' for path conflicts. |
|
|||
147 | ''' |
|
|||
148 |
|
||||
149 | statepathv1 = b'merge/state' |
|
|||
150 | statepathv2 = b'merge/state2' |
|
|||
151 |
|
||||
152 | @staticmethod |
|
|||
153 | def clean(repo, node=None, other=None, labels=None): |
|
|||
154 | """Initialize a brand new merge state, removing any existing state on |
|
|||
155 | disk.""" |
|
|||
156 | ms = mergestate(repo) |
|
|||
157 | ms.reset(node, other, labels) |
|
|||
158 | return ms |
|
|||
159 |
|
||||
160 | @staticmethod |
|
|||
161 | def read(repo): |
|
|||
162 | """Initialize the merge state, reading it from disk.""" |
|
|||
163 | ms = mergestate(repo) |
|
|||
164 | ms._read() |
|
|||
165 | return ms |
|
|||
166 |
|
||||
167 | def __init__(self, repo): |
|
|||
168 | """Initialize the merge state. |
|
|||
169 |
|
||||
170 | Do not use this directly! Instead call read() or clean().""" |
|
|||
171 | self._repo = repo |
|
|||
172 | self._dirty = False |
|
|||
173 | self._labels = None |
|
|||
174 |
|
||||
175 | def reset(self, node=None, other=None, labels=None): |
|
|||
176 | self._state = {} |
|
|||
177 | self._stateextras = {} |
|
|||
178 | self._local = None |
|
|||
179 | self._other = None |
|
|||
180 | self._labels = labels |
|
|||
181 | for var in ('localctx', 'otherctx'): |
|
|||
182 | if var in vars(self): |
|
|||
183 | delattr(self, var) |
|
|||
184 | if node: |
|
|||
185 | self._local = node |
|
|||
186 | self._other = other |
|
|||
187 | self._readmergedriver = None |
|
|||
188 | if self.mergedriver: |
|
|||
189 | self._mdstate = MERGE_DRIVER_STATE_SUCCESS |
|
|||
190 | else: |
|
|||
191 | self._mdstate = MERGE_DRIVER_STATE_UNMARKED |
|
|||
192 | shutil.rmtree(self._repo.vfs.join(b'merge'), True) |
|
|||
193 | self._results = {} |
|
|||
194 | self._dirty = False |
|
|||
195 |
|
||||
196 | def _read(self): |
|
|||
197 | """Analyse each record content to restore a serialized state from disk |
|
|||
198 |
|
||||
199 | This function process "record" entry produced by the de-serialization |
|
|||
200 | of on disk file. |
|
|||
201 | """ |
|
|||
202 | self._state = {} |
|
|||
203 | self._stateextras = {} |
|
|||
204 | self._local = None |
|
|||
205 | self._other = None |
|
|||
206 | for var in ('localctx', 'otherctx'): |
|
|||
207 | if var in vars(self): |
|
|||
208 | delattr(self, var) |
|
|||
209 | self._readmergedriver = None |
|
|||
210 | self._mdstate = MERGE_DRIVER_STATE_SUCCESS |
|
|||
211 | unsupported = set() |
|
|||
212 | records = self._readrecords() |
|
|||
213 | for rtype, record in records: |
|
|||
214 | if rtype == RECORD_LOCAL: |
|
|||
215 | self._local = bin(record) |
|
|||
216 | elif rtype == RECORD_OTHER: |
|
|||
217 | self._other = bin(record) |
|
|||
218 | elif rtype == RECORD_MERGE_DRIVER_STATE: |
|
|||
219 | bits = record.split(b'\0', 1) |
|
|||
220 | mdstate = bits[1] |
|
|||
221 | if len(mdstate) != 1 or mdstate not in ( |
|
|||
222 | MERGE_DRIVER_STATE_UNMARKED, |
|
|||
223 | MERGE_DRIVER_STATE_MARKED, |
|
|||
224 | MERGE_DRIVER_STATE_SUCCESS, |
|
|||
225 | ): |
|
|||
226 | # the merge driver should be idempotent, so just rerun it |
|
|||
227 | mdstate = MERGE_DRIVER_STATE_UNMARKED |
|
|||
228 |
|
||||
229 | self._readmergedriver = bits[0] |
|
|||
230 | self._mdstate = mdstate |
|
|||
231 | elif rtype in ( |
|
|||
232 | RECORD_MERGED, |
|
|||
233 | RECORD_CHANGEDELETE_CONFLICT, |
|
|||
234 | RECORD_PATH_CONFLICT, |
|
|||
235 | RECORD_MERGE_DRIVER_MERGE, |
|
|||
236 | RECORD_RESOLVED_OTHER, |
|
|||
237 | ): |
|
|||
238 | bits = record.split(b'\0') |
|
|||
239 | self._state[bits[0]] = bits[1:] |
|
|||
240 | elif rtype == RECORD_FILE_VALUES: |
|
|||
241 | filename, rawextras = record.split(b'\0', 1) |
|
|||
242 | extraparts = rawextras.split(b'\0') |
|
|||
243 | extras = {} |
|
|||
244 | i = 0 |
|
|||
245 | while i < len(extraparts): |
|
|||
246 | extras[extraparts[i]] = extraparts[i + 1] |
|
|||
247 | i += 2 |
|
|||
248 |
|
||||
249 | self._stateextras[filename] = extras |
|
|||
250 | elif rtype == RECORD_LABELS: |
|
|||
251 | labels = record.split(b'\0', 2) |
|
|||
252 | self._labels = [l for l in labels if len(l) > 0] |
|
|||
253 | elif not rtype.islower(): |
|
|||
254 | unsupported.add(rtype) |
|
|||
255 | self._results = {} |
|
|||
256 | self._dirty = False |
|
|||
257 |
|
||||
258 | if unsupported: |
|
|||
259 | raise error.UnsupportedMergeRecords(unsupported) |
|
|||
260 |
|
||||
261 | def _readrecords(self): |
|
|||
262 | """Read merge state from disk and return a list of record (TYPE, data) |
|
|||
263 |
|
||||
264 | We read data from both v1 and v2 files and decide which one to use. |
|
|||
265 |
|
||||
266 | V1 has been used by version prior to 2.9.1 and contains less data than |
|
|||
267 | v2. We read both versions and check if no data in v2 contradicts |
|
|||
268 | v1. If there is not contradiction we can safely assume that both v1 |
|
|||
269 | and v2 were written at the same time and use the extract data in v2. If |
|
|||
270 | there is contradiction we ignore v2 content as we assume an old version |
|
|||
271 | of Mercurial has overwritten the mergestate file and left an old v2 |
|
|||
272 | file around. |
|
|||
273 |
|
||||
274 | returns list of record [(TYPE, data), ...]""" |
|
|||
275 | v1records = self._readrecordsv1() |
|
|||
276 | v2records = self._readrecordsv2() |
|
|||
277 | if self._v1v2match(v1records, v2records): |
|
|||
278 | return v2records |
|
|||
279 | else: |
|
|||
280 | # v1 file is newer than v2 file, use it |
|
|||
281 | # we have to infer the "other" changeset of the merge |
|
|||
282 | # we cannot do better than that with v1 of the format |
|
|||
283 | mctx = self._repo[None].parents()[-1] |
|
|||
284 | v1records.append((RECORD_OTHER, mctx.hex())) |
|
|||
285 | # add place holder "other" file node information |
|
|||
286 | # nobody is using it yet so we do no need to fetch the data |
|
|||
287 | # if mctx was wrong `mctx[bits[-2]]` may fails. |
|
|||
288 | for idx, r in enumerate(v1records): |
|
|||
289 | if r[0] == RECORD_MERGED: |
|
|||
290 | bits = r[1].split(b'\0') |
|
|||
291 | bits.insert(-2, b'') |
|
|||
292 | v1records[idx] = (r[0], b'\0'.join(bits)) |
|
|||
293 | return v1records |
|
|||
294 |
|
||||
295 | def _v1v2match(self, v1records, v2records): |
|
|||
296 | oldv2 = set() # old format version of v2 record |
|
|||
297 | for rec in v2records: |
|
|||
298 | if rec[0] == RECORD_LOCAL: |
|
|||
299 | oldv2.add(rec) |
|
|||
300 | elif rec[0] == RECORD_MERGED: |
|
|||
301 | # drop the onode data (not contained in v1) |
|
|||
302 | oldv2.add((RECORD_MERGED, _droponode(rec[1]))) |
|
|||
303 | for rec in v1records: |
|
|||
304 | if rec not in oldv2: |
|
|||
305 | return False |
|
|||
306 | else: |
|
|||
307 | return True |
|
|||
308 |
|
||||
309 | def _readrecordsv1(self): |
|
|||
310 | """read on disk merge state for version 1 file |
|
|||
311 |
|
||||
312 | returns list of record [(TYPE, data), ...] |
|
|||
313 |
|
||||
314 | Note: the "F" data from this file are one entry short |
|
|||
315 | (no "other file node" entry) |
|
|||
316 | """ |
|
|||
317 | records = [] |
|
|||
318 | try: |
|
|||
319 | f = self._repo.vfs(self.statepathv1) |
|
|||
320 | for i, l in enumerate(f): |
|
|||
321 | if i == 0: |
|
|||
322 | records.append((RECORD_LOCAL, l[:-1])) |
|
|||
323 | else: |
|
|||
324 | records.append((RECORD_MERGED, l[:-1])) |
|
|||
325 | f.close() |
|
|||
326 | except IOError as err: |
|
|||
327 | if err.errno != errno.ENOENT: |
|
|||
328 | raise |
|
|||
329 | return records |
|
|||
330 |
|
||||
331 | def _readrecordsv2(self): |
|
|||
332 | """read on disk merge state for version 2 file |
|
|||
333 |
|
||||
334 | This format is a list of arbitrary records of the form: |
|
|||
335 |
|
||||
336 | [type][length][content] |
|
|||
337 |
|
||||
338 | `type` is a single character, `length` is a 4 byte integer, and |
|
|||
339 | `content` is an arbitrary byte sequence of length `length`. |
|
|||
340 |
|
||||
341 | Mercurial versions prior to 3.7 have a bug where if there are |
|
|||
342 | unsupported mandatory merge records, attempting to clear out the merge |
|
|||
343 | state with hg update --clean or similar aborts. The 't' record type |
|
|||
344 | works around that by writing out what those versions treat as an |
|
|||
345 | advisory record, but later versions interpret as special: the first |
|
|||
346 | character is the 'real' record type and everything onwards is the data. |
|
|||
347 |
|
||||
348 | Returns list of records [(TYPE, data), ...].""" |
|
|||
349 | records = [] |
|
|||
350 | try: |
|
|||
351 | f = self._repo.vfs(self.statepathv2) |
|
|||
352 | data = f.read() |
|
|||
353 | off = 0 |
|
|||
354 | end = len(data) |
|
|||
355 | while off < end: |
|
|||
356 | rtype = data[off : off + 1] |
|
|||
357 | off += 1 |
|
|||
358 | length = _unpack(b'>I', data[off : (off + 4)])[0] |
|
|||
359 | off += 4 |
|
|||
360 | record = data[off : (off + length)] |
|
|||
361 | off += length |
|
|||
362 | if rtype == RECORD_OVERRIDE: |
|
|||
363 | rtype, record = record[0:1], record[1:] |
|
|||
364 | records.append((rtype, record)) |
|
|||
365 | f.close() |
|
|||
366 | except IOError as err: |
|
|||
367 | if err.errno != errno.ENOENT: |
|
|||
368 | raise |
|
|||
369 | return records |
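A standalone sketch of the v2 record framing documented above (one-byte type, 4-byte big-endian length, then the payload), including the ``t`` override unwrapping; this is an illustration, not the in-tree reader::

    import struct

    def iter_records(data):
        off, end = 0, len(data)
        while off < end:
            rtype = data[off:off + 1]
            (length,) = struct.unpack(b'>I', data[off + 1:off + 5])
            payload = data[off + 5:off + 5 + length]
            off += 5 + length
            if rtype == b't':
                # override record: the real type is the first payload byte
                rtype, payload = payload[0:1], payload[1:]
            yield rtype, payload

    demo = b'L' + struct.pack(b'>I', 2) + b'ab'
    assert list(iter_records(demo)) == [(b'L', b'ab')]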
|
|||
370 |
|
||||
371 | @util.propertycache |
|
|||
372 | def mergedriver(self): |
|
|||
373 | # protect against the following: |
|
|||
374 | # - A configures a malicious merge driver in their hgrc, then |
|
|||
375 | # pauses the merge |
|
|||
376 | # - A edits their hgrc to remove references to the merge driver |
|
|||
377 | # - A gives a copy of their entire repo, including .hg, to B |
|
|||
378 | # - B inspects .hgrc and finds it to be clean |
|
|||
379 | # - B then continues the merge and the malicious merge driver |
|
|||
380 | # gets invoked |
|
|||
381 | configmergedriver = self._repo.ui.config( |
|
|||
382 | b'experimental', b'mergedriver' |
|
|||
383 | ) |
|
|||
384 | if ( |
|
|||
385 | self._readmergedriver is not None |
|
|||
386 | and self._readmergedriver != configmergedriver |
|
|||
387 | ): |
|
|||
388 | raise error.ConfigError( |
|
|||
389 | _(b"merge driver changed since merge started"), |
|
|||
390 | hint=_(b"revert merge driver change or abort merge"), |
|
|||
391 | ) |
|
|||
392 |
|
||||
393 | return configmergedriver |
|
|||
394 |
|
||||
395 | @util.propertycache |
|
|||
396 | def local(self): |
|
|||
397 | if self._local is None: |
|
|||
398 | msg = b"local accessed but self._local isn't set" |
|
|||
399 | raise error.ProgrammingError(msg) |
|
|||
400 | return self._local |
|
|||
401 |
|
||||
402 | @util.propertycache |
|
|||
403 | def localctx(self): |
|
|||
404 | return self._repo[self.local] |
|
|||
405 |
|
||||
406 | @util.propertycache |
|
|||
407 | def other(self): |
|
|||
408 | if self._other is None: |
|
|||
409 | msg = b"other accessed but self._other isn't set" |
|
|||
410 | raise error.ProgrammingError(msg) |
|
|||
411 | return self._other |
|
|||
412 |
|
||||
413 | @util.propertycache |
|
|||
414 | def otherctx(self): |
|
|||
415 | return self._repo[self.other] |
|
|||
416 |
|
||||
417 | def active(self): |
|
|||
418 | """Whether mergestate is active. |
|
|||
419 |
|
||||
420 | Returns True if there appears to be mergestate. This is a rough proxy |
|
|||
421 | for "is a merge in progress." |
|
|||
422 | """ |
|
|||
423 | return bool(self._local) or bool(self._state) |
|
|||
424 |
|
||||
425 | def commit(self): |
|
|||
426 | """Write current state on disk (if necessary)""" |
|
|||
427 | if self._dirty: |
|
|||
428 | records = self._makerecords() |
|
|||
429 | self._writerecords(records) |
|
|||
430 | self._dirty = False |
|
|||
431 |
|
||||
432 | def _makerecords(self): |
|
|||
433 | records = [] |
|
|||
434 | records.append((RECORD_LOCAL, hex(self._local))) |
|
|||
435 | records.append((RECORD_OTHER, hex(self._other))) |
|
|||
436 | if self.mergedriver: |
|
|||
437 | records.append( |
|
|||
438 | ( |
|
|||
439 | RECORD_MERGE_DRIVER_STATE, |
|
|||
440 | b'\0'.join([self.mergedriver, self._mdstate]), |
|
|||
441 | ) |
|
|||
442 | ) |
|
|||
443 | # Write out state items. In all cases, the value of the state map entry |
|
|||
444 | # is written as the contents of the record. The record type depends on |
|
|||
445 | # the type of state that is stored, and capital-letter records are used |
|
|||
446 | # to prevent older versions of Mercurial that do not support the feature |
|
|||
447 | # from loading them. |
|
|||
448 | for filename, v in pycompat.iteritems(self._state): |
|
|||
449 | if v[0] == MERGE_RECORD_DRIVER_RESOLVED: |
|
|||
450 | # Driver-resolved merge. These are stored in 'D' records. |
|
|||
451 | records.append( |
|
|||
452 | (RECORD_MERGE_DRIVER_MERGE, b'\0'.join([filename] + v)) |
|
|||
453 | ) |
|
|||
454 | elif v[0] in ( |
|
|||
455 | MERGE_RECORD_UNRESOLVED_PATH, |
|
|||
456 | MERGE_RECORD_RESOLVED_PATH, |
|
|||
457 | ): |
|
|||
458 | # Path conflicts. These are stored in 'P' records. The current |
|
|||
459 | # resolution state ('pu' or 'pr') is stored within the record. |
|
|||
460 | records.append( |
|
|||
461 | (RECORD_PATH_CONFLICT, b'\0'.join([filename] + v)) |
|
|||
462 | ) |
|
|||
463 | elif v[0] == MERGE_RECORD_MERGED_OTHER: |
|
|||
464 | records.append( |
|
|||
465 | (RECORD_RESOLVED_OTHER, b'\0'.join([filename] + v)) |
|
|||
466 | ) |
|
|||
467 | elif v[1] == nullhex or v[6] == nullhex: |
|
|||
468 | # Change/Delete or Delete/Change conflicts. These are stored in |
|
|||
469 | # 'C' records. v[1] is the local file, and is nullhex when the |
|
|||
470 | # file is deleted locally ('dc'). v[6] is the remote file, and |
|
|||
471 | # is nullhex when the file is deleted remotely ('cd'). |
|
|||
472 | records.append( |
|
|||
473 | (RECORD_CHANGEDELETE_CONFLICT, b'\0'.join([filename] + v)) |
|
|||
474 | ) |
|
|||
475 | else: |
|
|||
476 | # Normal files. These are stored in 'F' records. |
|
|||
477 | records.append((RECORD_MERGED, b'\0'.join([filename] + v))) |
|
|||
478 | for filename, extras in sorted(pycompat.iteritems(self._stateextras)): |
|
|||
479 | rawextras = b'\0'.join( |
|
|||
480 | b'%s\0%s' % (k, v) for k, v in pycompat.iteritems(extras) |
|
|||
481 | ) |
|
|||
482 | records.append( |
|
|||
483 | (RECORD_FILE_VALUES, b'%s\0%s' % (filename, rawextras)) |
|
|||
484 | ) |
|
|||
485 | if self._labels is not None: |
|
|||
486 | labels = b'\0'.join(self._labels) |
|
|||
487 | records.append((RECORD_LABELS, labels)) |
|
|||
488 | return records |
|
|||
489 |
|
||||
490 | def _writerecords(self, records): |
|
|||
491 | """Write current state on disk (both v1 and v2)""" |
|
|||
492 | self._writerecordsv1(records) |
|
|||
493 | self._writerecordsv2(records) |
|
|||
494 |
|
||||
495 | def _writerecordsv1(self, records): |
|
|||
496 | """Write current state on disk in a version 1 file""" |
|
|||
497 | f = self._repo.vfs(self.statepathv1, b'wb') |
|
|||
498 | irecords = iter(records) |
|
|||
499 | lrecords = next(irecords) |
|
|||
500 | assert lrecords[0] == RECORD_LOCAL |
|
|||
501 | f.write(hex(self._local) + b'\n') |
|
|||
502 | for rtype, data in irecords: |
|
|||
503 | if rtype == RECORD_MERGED: |
|
|||
504 | f.write(b'%s\n' % _droponode(data)) |
|
|||
505 | f.close() |
|
|||
506 |
|
||||
507 | def _writerecordsv2(self, records): |
|
|||
508 | """Write current state on disk in a version 2 file |
|
|||
509 |
|
||||
510 | See the docstring for _readrecordsv2 for why we use 't'.""" |
|
|||
511 | # these are the records that all version 2 clients can read |
|
|||
512 | allowlist = (RECORD_LOCAL, RECORD_OTHER, RECORD_MERGED) |
|
|||
513 | f = self._repo.vfs(self.statepathv2, b'wb') |
|
|||
514 | for key, data in records: |
|
|||
515 | assert len(key) == 1 |
|
|||
516 | if key not in allowlist: |
|
|||
517 | key, data = RECORD_OVERRIDE, b'%s%s' % (key, data) |
|
|||
518 | format = b'>sI%is' % len(data) |
|
|||
519 | f.write(_pack(format, key, len(data), data)) |
|
|||
520 | f.close() |
|
|||
521 |
|
||||
522 | @staticmethod |
|
|||
523 | def getlocalkey(path): |
|
|||
524 | """hash the path of a local file context for storage in the .hg/merge |
|
|||
525 | directory.""" |
|
|||
526 |
|
||||
527 | return hex(hashutil.sha1(path).digest()) |
|
|||
528 |
|
||||
529 | def add(self, fcl, fco, fca, fd): |
|
|||
530 | """add a new (potentially?) conflicting file the merge state |
|
|||
531 | fcl: file context for local, |
|
|||
532 | fco: file context for remote, |
|
|||
533 | fca: file context for ancestors, |
|
|||
534 | fd: file path of the resulting merge. |
|
|||
535 |
|
||||
536 | note: also write the local version to the `.hg/merge` directory. |
|
|||
537 | """ |
|
|||
538 | if fcl.isabsent(): |
|
|||
539 | localkey = nullhex |
|
|||
540 | else: |
|
|||
541 | localkey = mergestate.getlocalkey(fcl.path()) |
|
|||
542 | self._repo.vfs.write(b'merge/' + localkey, fcl.data()) |
|
|||
543 | self._state[fd] = [ |
|
|||
544 | MERGE_RECORD_UNRESOLVED, |
|
|||
545 | localkey, |
|
|||
546 | fcl.path(), |
|
|||
547 | fca.path(), |
|
|||
548 | hex(fca.filenode()), |
|
|||
549 | fco.path(), |
|
|||
550 | hex(fco.filenode()), |
|
|||
551 | fcl.flags(), |
|
|||
552 | ] |
|
|||
553 | self._stateextras[fd] = {b'ancestorlinknode': hex(fca.node())} |
|
|||
554 | self._dirty = True |
|
|||
555 |
|
||||
556 | def addpath(self, path, frename, forigin): |
|
|||
557 | """add a new conflicting path to the merge state |
|
|||
558 | path: the path that conflicts |
|
|||
559 | frename: the filename the conflicting file was renamed to |
|
|||
560 | forigin: origin of the file ('l' or 'r' for local/remote) |
|
|||
561 | """ |
|
|||
562 | self._state[path] = [MERGE_RECORD_UNRESOLVED_PATH, frename, forigin] |
|
|||
563 | self._dirty = True |
|
|||
564 |
|
||||
565 | def addmergedother(self, path): |
|
|||
566 | self._state[path] = [MERGE_RECORD_MERGED_OTHER, nullhex, nullhex] |
|
|||
567 | self._dirty = True |
|
|||
568 |
|
||||
569 | def __contains__(self, dfile): |
|
|||
570 | return dfile in self._state |
|
|||
571 |
|
||||
572 | def __getitem__(self, dfile): |
|
|||
573 | return self._state[dfile][0] |
|
|||
574 |
|
||||
575 | def __iter__(self): |
|
|||
576 | return iter(sorted(self._state)) |
|
|||
577 |
|
||||
578 | def files(self): |
|
|||
579 | return self._state.keys() |
|
|||
580 |
|
||||
581 | def mark(self, dfile, state): |
|
|||
582 | self._state[dfile][0] = state |
|
|||
583 | self._dirty = True |
|
|||
584 |
|
||||
585 | def mdstate(self): |
|
|||
586 | return self._mdstate |
|
|||
587 |
|
||||
588 | def unresolved(self): |
|
|||
589 | """Obtain the paths of unresolved files.""" |
|
|||
590 |
|
||||
591 | for f, entry in pycompat.iteritems(self._state): |
|
|||
592 | if entry[0] in ( |
|
|||
593 | MERGE_RECORD_UNRESOLVED, |
|
|||
594 | MERGE_RECORD_UNRESOLVED_PATH, |
|
|||
595 | ): |
|
|||
596 | yield f |
|
|||
597 |
|
||||
598 | def driverresolved(self): |
|
|||
599 | """Obtain the paths of driver-resolved files.""" |
|
|||
600 |
|
||||
601 | for f, entry in self._state.items(): |
|
|||
602 | if entry[0] == MERGE_RECORD_DRIVER_RESOLVED: |
|
|||
603 | yield f |
|
|||
604 |
|
||||
605 | def extras(self, filename): |
|
|||
606 | return self._stateextras.setdefault(filename, {}) |
|
|||
607 |
|
||||
608 | def _resolve(self, preresolve, dfile, wctx): |
|
|||
609 | """rerun merge process for file path `dfile`""" |
|
|||
610 | if self[dfile] in (MERGE_RECORD_RESOLVED, MERGE_RECORD_DRIVER_RESOLVED): |
|
|||
611 | return True, 0 |
|
|||
612 | if self._state[dfile][0] == MERGE_RECORD_MERGED_OTHER: |
|
|||
613 | return True, 0 |
|
|||
614 | stateentry = self._state[dfile] |
|
|||
615 | state, localkey, lfile, afile, anode, ofile, onode, flags = stateentry |
|
|||
616 | octx = self._repo[self._other] |
|
|||
617 | extras = self.extras(dfile) |
|
|||
618 | anccommitnode = extras.get(b'ancestorlinknode') |
|
|||
619 | if anccommitnode: |
|
|||
620 | actx = self._repo[anccommitnode] |
|
|||
621 | else: |
|
|||
622 | actx = None |
|
|||
623 | fcd = self._filectxorabsent(localkey, wctx, dfile) |
|
|||
624 | fco = self._filectxorabsent(onode, octx, ofile) |
|
|||
625 | # TODO: move this to filectxorabsent |
|
|||
626 | fca = self._repo.filectx(afile, fileid=anode, changectx=actx) |
|
|||
627 | # "premerge" x flags |
|
|||
628 | flo = fco.flags() |
|
|||
629 | fla = fca.flags() |
|
|||
630 | if b'x' in flags + flo + fla and b'l' not in flags + flo + fla: |
|
|||
631 | if fca.node() == nullid and flags != flo: |
|
|||
632 | if preresolve: |
|
|||
633 | self._repo.ui.warn( |
|
|||
634 | _( |
|
|||
635 | b'warning: cannot merge flags for %s ' |
|
|||
636 | b'without common ancestor - keeping local flags\n' |
|
|||
637 | ) |
|
|||
638 | % afile |
|
|||
639 | ) |
|
|||
640 | elif flags == fla: |
|
|||
641 | flags = flo |
|
|||
642 | if preresolve: |
|
|||
643 | # restore local |
|
|||
644 | if localkey != nullhex: |
|
|||
645 | f = self._repo.vfs(b'merge/' + localkey) |
|
|||
646 | wctx[dfile].write(f.read(), flags) |
|
|||
647 | f.close() |
|
|||
648 | else: |
|
|||
649 | wctx[dfile].remove(ignoremissing=True) |
|
|||
650 | complete, r, deleted = filemerge.premerge( |
|
|||
651 | self._repo, |
|
|||
652 | wctx, |
|
|||
653 | self._local, |
|
|||
654 | lfile, |
|
|||
655 | fcd, |
|
|||
656 | fco, |
|
|||
657 | fca, |
|
|||
658 | labels=self._labels, |
|
|||
659 | ) |
|
|||
660 | else: |
|
|||
661 | complete, r, deleted = filemerge.filemerge( |
|
|||
662 | self._repo, |
|
|||
663 | wctx, |
|
|||
664 | self._local, |
|
|||
665 | lfile, |
|
|||
666 | fcd, |
|
|||
667 | fco, |
|
|||
668 | fca, |
|
|||
669 | labels=self._labels, |
|
|||
670 | ) |
|
|||
671 | if r is None: |
|
|||
672 | # no real conflict |
|
|||
673 | del self._state[dfile] |
|
|||
674 | self._stateextras.pop(dfile, None) |
|
|||
675 | self._dirty = True |
|
|||
676 | elif not r: |
|
|||
677 | self.mark(dfile, MERGE_RECORD_RESOLVED) |
|
|||
678 |
|
||||
679 | if complete: |
|
|||
680 | action = None |
|
|||
681 | if deleted: |
|
|||
682 | if fcd.isabsent(): |
|
|||
683 | # dc: local picked. Need to drop if present, which may |
|
|||
684 | # happen on re-resolves. |
|
|||
685 | action = ACTION_FORGET |
|
|||
686 | else: |
|
|||
687 | # cd: remote picked (or otherwise deleted) |
|
|||
688 | action = ACTION_REMOVE |
|
|||
689 | else: |
|
|||
690 | if fcd.isabsent(): # dc: remote picked |
|
|||
691 | action = ACTION_GET |
|
|||
692 | elif fco.isabsent(): # cd: local picked |
|
|||
693 | if dfile in self.localctx: |
|
|||
694 | action = ACTION_ADD_MODIFIED |
|
|||
695 | else: |
|
|||
696 | action = ACTION_ADD |
|
|||
697 | # else: regular merges (no action necessary) |
|
|||
698 | self._results[dfile] = r, action |
|
|||
699 |
|
||||
700 | return complete, r |
|
|||
701 |
|
||||
702 | def _filectxorabsent(self, hexnode, ctx, f): |
|
|||
703 | if hexnode == nullhex: |
|
|||
704 | return filemerge.absentfilectx(ctx, f) |
|
|||
705 | else: |
|
|||
706 | return ctx[f] |
|
|||
707 |
|
||||
708 | def preresolve(self, dfile, wctx): |
|
|||
709 | """run premerge process for dfile |
|
|||
710 |
|
||||
711 | Returns whether the merge is complete, and the exit code.""" |
|
|||
712 | return self._resolve(True, dfile, wctx) |
|
|||
713 |
|
||||
714 | def resolve(self, dfile, wctx): |
|
|||
715 | """run merge process (assuming premerge was run) for dfile |
|
|||
716 |
|
||||
717 | Returns the exit code of the merge.""" |
|
|||
718 | return self._resolve(False, dfile, wctx)[1] |
|
|||
719 |
|
||||
720 | def counts(self): |
|
|||
721 | """return counts for updated, merged and removed files in this |
|
|||
722 | session""" |
|
|||
723 | updated, merged, removed = 0, 0, 0 |
|
|||
724 | for r, action in pycompat.itervalues(self._results): |
|
|||
725 | if r is None: |
|
|||
726 | updated += 1 |
|
|||
727 | elif r == 0: |
|
|||
728 | if action == ACTION_REMOVE: |
|
|||
729 | removed += 1 |
|
|||
730 | else: |
|
|||
731 | merged += 1 |
|
|||
732 | return updated, merged, removed |
|
|||
733 |
|
||||
734 | def unresolvedcount(self): |
|
|||
735 | """get unresolved count for this merge (persistent)""" |
|
|||
736 | return len(list(self.unresolved())) |
|
|||
737 |
|
||||
738 | def actions(self): |
|
|||
739 | """return lists of actions to perform on the dirstate""" |
|
|||
740 | actions = { |
|
|||
741 | ACTION_REMOVE: [], |
|
|||
742 | ACTION_FORGET: [], |
|
|||
743 | ACTION_ADD: [], |
|
|||
744 | ACTION_ADD_MODIFIED: [], |
|
|||
745 | ACTION_GET: [], |
|
|||
746 | } |
|
|||
747 | for f, (r, action) in pycompat.iteritems(self._results): |
|
|||
748 | if action is not None: |
|
|||
749 | actions[action].append((f, None, b"merge result")) |
|
|||
750 | return actions |
|
|||
751 |
|
||||
752 | def recordactions(self): |
|
|||
753 | """record remove/add/get actions in the dirstate""" |
|
|||
754 | branchmerge = self._repo.dirstate.p2() != nullid |
|
|||
755 | recordupdates(self._repo, self.actions(), branchmerge, None) |
|
|||
756 |
|
||||
757 | def queueremove(self, f): |
|
|||
758 | """queues a file to be removed from the dirstate |
|
|||
759 |
|
||||
760 | Meant for use by custom merge drivers.""" |
|
|||
761 | self._results[f] = 0, ACTION_REMOVE |
|
|||
762 |
|
||||
763 | def queueadd(self, f): |
|
|||
764 | """queues a file to be added to the dirstate |
|
|||
765 |
|
||||
766 | Meant for use by custom merge drivers.""" |
|
|||
767 | self._results[f] = 0, ACTION_ADD |
|
|||
768 |
|
||||
769 | def queueget(self, f): |
|
|||
770 | """queues a file to be marked modified in the dirstate |
|
|||
771 |
|
||||
772 | Meant for use by custom merge drivers.""" |
|
|||
773 | self._results[f] = 0, ACTION_GET |
|
|||
774 |
|
||||
775 |
|
||||
776 | def _getcheckunknownconfig(repo, section, name): |
|
42 | def _getcheckunknownconfig(repo, section, name): | |
777 | config = repo.ui.config(section, name) |
|
43 | config = repo.ui.config(section, name) | |
778 | valid = [b'abort', b'ignore', b'warn'] |
|
44 | valid = [b'abort', b'ignore', b'warn'] | |
@@ -885,14 +151,17 b' def _checkunknownfiles(repo, wctx, mctx,' | |||||
885 |
|
151 | |||
886 | checkunknowndirs = _unknowndirschecker() |
|
152 | checkunknowndirs = _unknowndirschecker() | |
887 | for f, (m, args, msg) in pycompat.iteritems(actions): |
|
153 | for f, (m, args, msg) in pycompat.iteritems(actions): | |
888 | if m in (ACTION_CREATED, ACTION_DELETED_CHANGED): |
|
154 | if m in ( | |
|
155 | mergestatemod.ACTION_CREATED, | |||
|
156 | mergestatemod.ACTION_DELETED_CHANGED, | |||
|
157 | ): | |||
889 | if _checkunknownfile(repo, wctx, mctx, f): |
|
158 | if _checkunknownfile(repo, wctx, mctx, f): | |
890 | fileconflicts.add(f) |
|
159 | fileconflicts.add(f) | |
891 | elif pathconfig and f not in wctx: |
|
160 | elif pathconfig and f not in wctx: | |
892 | path = checkunknowndirs(repo, wctx, f) |
|
161 | path = checkunknowndirs(repo, wctx, f) | |
893 | if path is not None: |
|
162 | if path is not None: | |
894 | pathconflicts.add(path) |
|
163 | pathconflicts.add(path) | |
895 | elif m == ACTION_LOCAL_DIR_RENAME_GET: |
|
164 | elif m == mergestatemod.ACTION_LOCAL_DIR_RENAME_GET: | |
896 | if _checkunknownfile(repo, wctx, mctx, f, args[0]): |
|
165 | if _checkunknownfile(repo, wctx, mctx, f, args[0]): | |
897 | fileconflicts.add(f) |
|
166 | fileconflicts.add(f) | |
898 |
|
167 | |||
@@ -903,7 +172,7 b' def _checkunknownfiles(repo, wctx, mctx,' | |||||
903 | collectconflicts(unknownconflicts, unknownconfig) |
|
172 | collectconflicts(unknownconflicts, unknownconfig) | |
904 | else: |
|
173 | else: | |
905 | for f, (m, args, msg) in pycompat.iteritems(actions): |
|
174 | for f, (m, args, msg) in pycompat.iteritems(actions): | |
906 | if m == ACTION_CREATED_MERGE: |
|
175 | if m == mergestatemod.ACTION_CREATED_MERGE: | |
907 | fl2, anc = args |
|
176 | fl2, anc = args | |
908 | different = _checkunknownfile(repo, wctx, mctx, f) |
|
177 | different = _checkunknownfile(repo, wctx, mctx, f) | |
909 | if repo.dirstate._ignore(f): |
|
178 | if repo.dirstate._ignore(f): | |
@@ -924,10 +193,14 b' def _checkunknownfiles(repo, wctx, mctx,' | |||||
924 | # don't like an abort happening in the middle of |
|
193 | # don't like an abort happening in the middle of | |
925 | # merge.update. |
|
194 | # merge.update. | |
926 | if not different: |
|
195 | if not different: | |
927 | actions[f] = (ACTION_GET, (fl2, False), b'remote created') |
|
196 | actions[f] = ( | |
|
197 | mergestatemod.ACTION_GET, | |||
|
198 | (fl2, False), | |||
|
199 | b'remote created', | |||
|
200 | ) | |||
928 | elif mergeforce or config == b'abort': |
|
201 | elif mergeforce or config == b'abort': | |
929 | actions[f] = ( |
|
202 | actions[f] = ( | |
930 | ACTION_MERGE, |
|
203 | mergestatemod.ACTION_MERGE, | |
931 | (f, f, None, False, anc), |
|
204 | (f, f, None, False, anc), | |
932 | b'remote differs from untracked local', |
|
205 | b'remote differs from untracked local', | |
933 | ) |
|
206 | ) | |
@@ -936,7 +209,11 b' def _checkunknownfiles(repo, wctx, mctx,' | |||||
936 | else: |
|
209 | else: | |
937 | if config == b'warn': |
|
210 | if config == b'warn': | |
938 | warnconflicts.add(f) |
|
211 | warnconflicts.add(f) | |
939 | actions[f] = (ACTION_GET, (fl2, True), b'remote created') |
|
212 | actions[f] = ( | |
|
213 | mergestatemod.ACTION_GET, | |||
|
214 | (fl2, True), | |||
|
215 | b'remote created', | |||
|
216 | ) | |||
940 |
|
217 | |||
941 | for f in sorted(abortconflicts): |
|
218 | for f in sorted(abortconflicts): | |
942 | warn = repo.ui.warn |
|
219 | warn = repo.ui.warn | |
@@ -962,14 +239,14 b' def _checkunknownfiles(repo, wctx, mctx,' | |||||
962 | repo.ui.warn(_(b"%s: replacing untracked files in directory\n") % f) |
|
239 | repo.ui.warn(_(b"%s: replacing untracked files in directory\n") % f) | |
963 |
|
240 | |||
964 | for f, (m, args, msg) in pycompat.iteritems(actions): |
|
241 | for f, (m, args, msg) in pycompat.iteritems(actions): | |
965 | if m == ACTION_CREATED: |
|
242 | if m == mergestatemod.ACTION_CREATED: | |
966 | backup = ( |
|
243 | backup = ( | |
967 | f in fileconflicts |
|
244 | f in fileconflicts | |
968 | or f in pathconflicts |
|
245 | or f in pathconflicts | |
969 | or any(p in pathconflicts for p in pathutil.finddirs(f)) |
|
246 | or any(p in pathconflicts for p in pathutil.finddirs(f)) | |
970 | ) |
|
247 | ) | |
971 | (flags,) = args |
|
248 | (flags,) = args | |
972 | actions[f] = (ACTION_GET, (flags, backup), msg) |
|
249 | actions[f] = (mergestatemod.ACTION_GET, (flags, backup), msg) | |
973 |
|
250 | |||
974 |
|
251 | |||
975 | def _forgetremoved(wctx, mctx, branchmerge): |
|
252 | def _forgetremoved(wctx, mctx, branchmerge): | |
@@ -988,9 +265,9 b' def _forgetremoved(wctx, mctx, branchmer' | |||||
988 | """ |
|
265 | """ | |
989 |
|
266 | |||
990 | actions = {} |
|
267 | actions = {} | |
991 | m = ACTION_FORGET |
|
268 | m = mergestatemod.ACTION_FORGET | |
992 | if branchmerge: |
|
269 | if branchmerge: | |
993 | m = ACTION_REMOVE |
|
270 | m = mergestatemod.ACTION_REMOVE | |
994 | for f in wctx.deleted(): |
|
271 | for f in wctx.deleted(): | |
995 | if f not in mctx: |
|
272 | if f not in mctx: | |
996 | actions[f] = m, None, b"forget deleted" |
|
273 | actions[f] = m, None, b"forget deleted" | |
@@ -998,7 +275,11 b' def _forgetremoved(wctx, mctx, branchmer' | |||||
998 | if not branchmerge: |
|
275 | if not branchmerge: | |
999 | for f in wctx.removed(): |
|
276 | for f in wctx.removed(): | |
1000 | if f not in mctx: |
|
277 | if f not in mctx: | |
1001 | actions[f] = ACTION_FORGET, None, b"forget removed" |
|
278 | actions[f] = ( | |
|
279 | mergestatemod.ACTION_FORGET, | |||
|
280 | None, | |||
|
281 | b"forget removed", | |||
|
282 | ) | |||
1002 |
|
283 | |||
1003 | return actions |
|
284 | return actions | |
1004 |
|
285 | |||
@@ -1026,24 +307,24 b' def _checkcollision(repo, wmf, actions):'
     if actions:
         # KEEP and EXEC are no-op
         for m in (
-            ACTION_ADD,
-            ACTION_ADD_MODIFIED,
-            ACTION_FORGET,
-            ACTION_GET,
-            ACTION_CHANGED_DELETED,
-            ACTION_DELETED_CHANGED,
+            mergestatemod.ACTION_ADD,
+            mergestatemod.ACTION_ADD_MODIFIED,
+            mergestatemod.ACTION_FORGET,
+            mergestatemod.ACTION_GET,
+            mergestatemod.ACTION_CHANGED_DELETED,
+            mergestatemod.ACTION_DELETED_CHANGED,
         ):
             for f, args, msg in actions[m]:
                 pmmf.add(f)
-        for f, args, msg in actions[ACTION_REMOVE]:
+        for f, args, msg in actions[mergestatemod.ACTION_REMOVE]:
             pmmf.discard(f)
-        for f, args, msg in actions[ACTION_DIR_RENAME_MOVE_LOCAL]:
+        for f, args, msg in actions[mergestatemod.ACTION_DIR_RENAME_MOVE_LOCAL]:
             f2, flags = args
             pmmf.discard(f2)
             pmmf.add(f)
-        for f, args, msg in actions[ACTION_LOCAL_DIR_RENAME_GET]:
+        for f, args, msg in actions[mergestatemod.ACTION_LOCAL_DIR_RENAME_GET]:
             pmmf.add(f)
-        for f, args, msg in actions[ACTION_MERGE]:
+        for f, args, msg in actions[mergestatemod.ACTION_MERGE]:
             f1, f2, fa, move, anc = args
             if move:
                 pmmf.discard(f1)
@@ -1128,10 +409,10 b' def checkpathconflicts(repo, wctx, mctx,'

     for f, (m, args, msg) in actions.items():
         if m in (
-            ACTION_CREATED,
-            ACTION_DELETED_CHANGED,
-            ACTION_MERGE,
-            ACTION_CREATED_MERGE,
+            mergestatemod.ACTION_CREATED,
+            mergestatemod.ACTION_DELETED_CHANGED,
+            mergestatemod.ACTION_MERGE,
+            mergestatemod.ACTION_CREATED_MERGE,
         ):
             # This action may create a new local file.
             createdfiledirs.update(pathutil.finddirs(f))
@@ -1141,13 +422,13 b' def checkpathconflicts(repo, wctx, mctx,'
                 # will be checked once we know what all the deleted files are.
                 remoteconflicts.add(f)
         # Track the names of all deleted files.
-        if m == ACTION_REMOVE:
+        if m == mergestatemod.ACTION_REMOVE:
             deletedfiles.add(f)
-        if m == ACTION_MERGE:
+        if m == mergestatemod.ACTION_MERGE:
             f1, f2, fa, move, anc = args
             if move:
                 deletedfiles.add(f1)
-        if m == ACTION_DIR_RENAME_MOVE_LOCAL:
+        if m == mergestatemod.ACTION_DIR_RENAME_MOVE_LOCAL:
             f2, flags = args
             deletedfiles.add(f2)

@@ -1164,10 +445,10 b' def checkpathconflicts(repo, wctx, mctx,'
                 # We will need to rename the local file.
                 localconflicts.add(p)
         if p in actions and actions[p][0] in (
-            ACTION_CREATED,
-            ACTION_DELETED_CHANGED,
-            ACTION_MERGE,
-            ACTION_CREATED_MERGE,
+            mergestatemod.ACTION_CREATED,
+            mergestatemod.ACTION_DELETED_CHANGED,
+            mergestatemod.ACTION_MERGE,
+            mergestatemod.ACTION_CREATED_MERGE,
         ):
             # The file is in a directory which aliases a remote file.
             # This is an internal inconsistency within the remote
@@ -1179,12 +460,17 b' def checkpathconflicts(repo, wctx, mctx,'
         if p not in deletedfiles:
             ctxname = bytes(wctx).rstrip(b'+')
             pnew = util.safename(p, ctxname, wctx, set(actions.keys()))
+            porig = wctx[p].copysource() or p
             actions[pnew] = (
-                ACTION_PATH_CONFLICT_RESOLVE,
-                (p,),
+                mergestatemod.ACTION_PATH_CONFLICT_RESOLVE,
+                (p, porig),
                 b'local path conflict',
             )
-            actions[p] = (ACTION_PATH_CONFLICT, (pnew, b'l'), b'path conflict')
+            actions[p] = (
+                mergestatemod.ACTION_PATH_CONFLICT,
+                (pnew, b'l'),
+                b'path conflict',
+            )

     if remoteconflicts:
         # Check if all files in the conflicting directories have been removed.
@@ -1193,20 +479,23 b' def checkpathconflicts(repo, wctx, mctx,'
             if f not in deletedfiles:
                 m, args, msg = actions[p]
                 pnew = util.safename(p, ctxname, wctx, set(actions.keys()))
-                if m in (ACTION_DELETED_CHANGED, ACTION_MERGE):
+                if m in (
+                    mergestatemod.ACTION_DELETED_CHANGED,
+                    mergestatemod.ACTION_MERGE,
+                ):
                     # Action was merge, just update target.
                     actions[pnew] = (m, args, msg)
                 else:
                     # Action was create, change to renamed get action.
                     fl = args[0]
                     actions[pnew] = (
-                        ACTION_LOCAL_DIR_RENAME_GET,
+                        mergestatemod.ACTION_LOCAL_DIR_RENAME_GET,
                         (p, fl),
                         b'remote path conflict',
                     )
                     actions[p] = (
-                        ACTION_PATH_CONFLICT,
-                        (pnew, ACTION_REMOVE),
+                        mergestatemod.ACTION_PATH_CONFLICT,
+                        (pnew, mergestatemod.ACTION_REMOVE),
                         b'path conflict',
                     )
                 remoteconflicts.remove(p)
@@ -1269,6 +558,13 b' def manifestmerge('
     branchmerge and force are as passed in to update
     matcher = matcher to filter file lists
     acceptremote = accept the incoming changes without prompting
+
+    Returns:
+
+    actions: dict of filename as keys and action related info as values
+    diverge: mapping of source name -> list of dest name for divergent renames
+    renamedelete: mapping of source name -> list of destinations for files
+                  deleted on one side and renamed on other.
     """
     if matcher is not None and matcher.always():
         matcher = None
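To make the new ``Returns:`` section concrete, here is a hedged sketch of what the three values might look like for a tiny merge. The filenames and ancestor node are invented, and ``b'm'``/``b'g'`` stand in for ``ACTION_MERGE``/``ACTION_GET``::

    # actions: filename -> (action, action-specific args, message)
    actions = {
        # ACTION_MERGE args are (f1, f2, fa, move, ancestor node), as in the
        # manifestmerge hunks of this patch.
        b'a.txt': (b'm', (b'a.txt', b'a.txt', b'a.txt', False, b'<ancestor-node>'),
                   b'versions differ'),
        # ACTION_GET args are (flags, backup).
        b'b.txt': (b'g', (b'', False), b'remote created'),
    }

    # diverge: one source renamed to several destinations on one side.
    diverge = {b'old.txt': [b'copy1.txt', b'copy2.txt']}

    # renamedelete: deleted on one side, renamed on the other.
    renamedelete = {b'gone.txt': [b'moved.txt']}

    print(len(actions), diverge, renamedelete)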
@@ -1340,13 +636,13 b' def manifestmerge(' | |||||
1340 | ) or branch_copies2.copy.get(f, None) |
|
636 | ) or branch_copies2.copy.get(f, None) | |
1341 | if fa is not None: |
|
637 | if fa is not None: | |
1342 | actions[f] = ( |
|
638 | actions[f] = ( | |
1343 | ACTION_MERGE, |
|
639 | mergestatemod.ACTION_MERGE, | |
1344 | (f, f, fa, False, pa.node()), |
|
640 | (f, f, fa, False, pa.node()), | |
1345 | b'both renamed from %s' % fa, |
|
641 | b'both renamed from %s' % fa, | |
1346 | ) |
|
642 | ) | |
1347 | else: |
|
643 | else: | |
1348 | actions[f] = ( |
|
644 | actions[f] = ( | |
1349 | ACTION_MERGE, |
|
645 | mergestatemod.ACTION_MERGE, | |
1350 | (f, f, None, False, pa.node()), |
|
646 | (f, f, None, False, pa.node()), | |
1351 | b'both created', |
|
647 | b'both created', | |
1352 | ) |
|
648 | ) | |
@@ -1355,35 +651,43 b' def manifestmerge(' | |||||
1355 | fla = ma.flags(f) |
|
651 | fla = ma.flags(f) | |
1356 | nol = b'l' not in fl1 + fl2 + fla |
|
652 | nol = b'l' not in fl1 + fl2 + fla | |
1357 | if n2 == a and fl2 == fla: |
|
653 | if n2 == a and fl2 == fla: | |
1358 |
actions[f] = ( |
|
654 | actions[f] = ( | |
|
655 | mergestatemod.ACTION_KEEP, | |||
|
656 | (), | |||
|
657 | b'remote unchanged', | |||
|
658 | ) | |||
1359 | elif n1 == a and fl1 == fla: # local unchanged - use remote |
|
659 | elif n1 == a and fl1 == fla: # local unchanged - use remote | |
1360 | if n1 == n2: # optimization: keep local content |
|
660 | if n1 == n2: # optimization: keep local content | |
1361 | actions[f] = ( |
|
661 | actions[f] = ( | |
1362 | ACTION_EXEC, |
|
662 | mergestatemod.ACTION_EXEC, | |
1363 | (fl2,), |
|
663 | (fl2,), | |
1364 | b'update permissions', |
|
664 | b'update permissions', | |
1365 | ) |
|
665 | ) | |
1366 | else: |
|
666 | else: | |
1367 | actions[f] = ( |
|
667 | actions[f] = ( | |
1368 | ACTION_GET_OTHER_AND_STORE |
|
668 | mergestatemod.ACTION_GET_OTHER_AND_STORE | |
1369 | if branchmerge |
|
669 | if branchmerge | |
1370 | else ACTION_GET, |
|
670 | else mergestatemod.ACTION_GET, | |
1371 | (fl2, False), |
|
671 | (fl2, False), | |
1372 | b'remote is newer', |
|
672 | b'remote is newer', | |
1373 | ) |
|
673 | ) | |
1374 | elif nol and n2 == a: # remote only changed 'x' |
|
674 | elif nol and n2 == a: # remote only changed 'x' | |
1375 |
actions[f] = ( |
|
675 | actions[f] = ( | |
|
676 | mergestatemod.ACTION_EXEC, | |||
|
677 | (fl2,), | |||
|
678 | b'update permissions', | |||
|
679 | ) | |||
1376 | elif nol and n1 == a: # local only changed 'x' |
|
680 | elif nol and n1 == a: # local only changed 'x' | |
1377 | actions[f] = ( |
|
681 | actions[f] = ( | |
1378 | ACTION_GET_OTHER_AND_STORE |
|
682 | mergestatemod.ACTION_GET_OTHER_AND_STORE | |
1379 | if branchmerge |
|
683 | if branchmerge | |
1380 | else ACTION_GET, |
|
684 | else mergestatemod.ACTION_GET, | |
1381 | (fl1, False), |
|
685 | (fl1, False), | |
1382 | b'remote is newer', |
|
686 | b'remote is newer', | |
1383 | ) |
|
687 | ) | |
1384 | else: # both changed something |
|
688 | else: # both changed something | |
1385 | actions[f] = ( |
|
689 | actions[f] = ( | |
1386 | ACTION_MERGE, |
|
690 | mergestatemod.ACTION_MERGE, | |
1387 | (f, f, f, False, pa.node()), |
|
691 | (f, f, f, False, pa.node()), | |
1388 | b'versions differ', |
|
692 | b'versions differ', | |
1389 | ) |
|
693 | ) | |
@@ -1396,40 +700,51 b' def manifestmerge(' | |||||
1396 | f2 = branch_copies1.movewithdir[f] |
|
700 | f2 = branch_copies1.movewithdir[f] | |
1397 | if f2 in m2: |
|
701 | if f2 in m2: | |
1398 | actions[f2] = ( |
|
702 | actions[f2] = ( | |
1399 | ACTION_MERGE, |
|
703 | mergestatemod.ACTION_MERGE, | |
1400 | (f, f2, None, True, pa.node()), |
|
704 | (f, f2, None, True, pa.node()), | |
1401 | b'remote directory rename, both created', |
|
705 | b'remote directory rename, both created', | |
1402 | ) |
|
706 | ) | |
1403 | else: |
|
707 | else: | |
1404 | actions[f2] = ( |
|
708 | actions[f2] = ( | |
1405 | ACTION_DIR_RENAME_MOVE_LOCAL, |
|
709 | mergestatemod.ACTION_DIR_RENAME_MOVE_LOCAL, | |
1406 | (f, fl1), |
|
710 | (f, fl1), | |
1407 | b'remote directory rename - move from %s' % f, |
|
711 | b'remote directory rename - move from %s' % f, | |
1408 | ) |
|
712 | ) | |
1409 | elif f in branch_copies1.copy: |
|
713 | elif f in branch_copies1.copy: | |
1410 | f2 = branch_copies1.copy[f] |
|
714 | f2 = branch_copies1.copy[f] | |
1411 | actions[f] = ( |
|
715 | actions[f] = ( | |
1412 | ACTION_MERGE, |
|
716 | mergestatemod.ACTION_MERGE, | |
1413 | (f, f2, f2, False, pa.node()), |
|
717 | (f, f2, f2, False, pa.node()), | |
1414 | b'local copied/moved from %s' % f2, |
|
718 | b'local copied/moved from %s' % f2, | |
1415 | ) |
|
719 | ) | |
1416 | elif f in ma: # clean, a different, no remote |
|
720 | elif f in ma: # clean, a different, no remote | |
1417 | if n1 != ma[f]: |
|
721 | if n1 != ma[f]: | |
1418 | if acceptremote: |
|
722 | if acceptremote: | |
1419 |
actions[f] = ( |
|
723 | actions[f] = ( | |
|
724 | mergestatemod.ACTION_REMOVE, | |||
|
725 | None, | |||
|
726 | b'remote delete', | |||
|
727 | ) | |||
1420 | else: |
|
728 | else: | |
1421 | actions[f] = ( |
|
729 | actions[f] = ( | |
1422 | ACTION_CHANGED_DELETED, |
|
730 | mergestatemod.ACTION_CHANGED_DELETED, | |
1423 | (f, None, f, False, pa.node()), |
|
731 | (f, None, f, False, pa.node()), | |
1424 | b'prompt changed/deleted', |
|
732 | b'prompt changed/deleted', | |
1425 | ) |
|
733 | ) | |
1426 | elif n1 == addednodeid: |
|
734 | elif n1 == addednodeid: | |
1427 | # This extra 'a' is added by working copy manifest to mark |
|
735 | # This file was locally added. We should forget it instead of | |
1428 | # the file as locally added. We should forget it instead of |
|
|||
1429 | # deleting it. |
|
736 | # deleting it. | |
1430 |
actions[f] = ( |
|
737 | actions[f] = ( | |
|
738 | mergestatemod.ACTION_FORGET, | |||
|
739 | None, | |||
|
740 | b'remote deleted', | |||
|
741 | ) | |||
1431 | else: |
|
742 | else: | |
1432 |
actions[f] = ( |
|
743 | actions[f] = ( | |
|
744 | mergestatemod.ACTION_REMOVE, | |||
|
745 | None, | |||
|
746 | b'other deleted', | |||
|
747 | ) | |||
1433 | elif n2: # file exists only on remote side |
|
748 | elif n2: # file exists only on remote side | |
1434 | if f in copied1: |
|
749 | if f in copied1: | |
1435 | pass # we'll deal with it on m1 side |
|
750 | pass # we'll deal with it on m1 side | |
@@ -1437,13 +752,13 b' def manifestmerge(' | |||||
1437 | f2 = branch_copies2.movewithdir[f] |
|
752 | f2 = branch_copies2.movewithdir[f] | |
1438 | if f2 in m1: |
|
753 | if f2 in m1: | |
1439 | actions[f2] = ( |
|
754 | actions[f2] = ( | |
1440 | ACTION_MERGE, |
|
755 | mergestatemod.ACTION_MERGE, | |
1441 | (f2, f, None, False, pa.node()), |
|
756 | (f2, f, None, False, pa.node()), | |
1442 | b'local directory rename, both created', |
|
757 | b'local directory rename, both created', | |
1443 | ) |
|
758 | ) | |
1444 | else: |
|
759 | else: | |
1445 | actions[f2] = ( |
|
760 | actions[f2] = ( | |
1446 | ACTION_LOCAL_DIR_RENAME_GET, |
|
761 | mergestatemod.ACTION_LOCAL_DIR_RENAME_GET, | |
1447 | (f, fl2), |
|
762 | (f, fl2), | |
1448 | b'local directory rename - get from %s' % f, |
|
763 | b'local directory rename - get from %s' % f, | |
1449 | ) |
|
764 | ) | |
@@ -1451,13 +766,13 b' def manifestmerge(' | |||||
1451 | f2 = branch_copies2.copy[f] |
|
766 | f2 = branch_copies2.copy[f] | |
1452 | if f2 in m2: |
|
767 | if f2 in m2: | |
1453 | actions[f] = ( |
|
768 | actions[f] = ( | |
1454 | ACTION_MERGE, |
|
769 | mergestatemod.ACTION_MERGE, | |
1455 | (f2, f, f2, False, pa.node()), |
|
770 | (f2, f, f2, False, pa.node()), | |
1456 | b'remote copied from %s' % f2, |
|
771 | b'remote copied from %s' % f2, | |
1457 | ) |
|
772 | ) | |
1458 | else: |
|
773 | else: | |
1459 | actions[f] = ( |
|
774 | actions[f] = ( | |
1460 | ACTION_MERGE, |
|
775 | mergestatemod.ACTION_MERGE, | |
1461 | (f2, f, f2, True, pa.node()), |
|
776 | (f2, f, f2, True, pa.node()), | |
1462 | b'remote moved from %s' % f2, |
|
777 | b'remote moved from %s' % f2, | |
1463 | ) |
|
778 | ) | |
@@ -1474,12 +789,20 b' def manifestmerge(' | |||||
1474 | # Checking whether the files are different is expensive, so we |
|
789 | # Checking whether the files are different is expensive, so we | |
1475 | # don't do that when we can avoid it. |
|
790 | # don't do that when we can avoid it. | |
1476 | if not force: |
|
791 | if not force: | |
1477 |
actions[f] = ( |
|
792 | actions[f] = ( | |
|
793 | mergestatemod.ACTION_CREATED, | |||
|
794 | (fl2,), | |||
|
795 | b'remote created', | |||
|
796 | ) | |||
1478 | elif not branchmerge: |
|
797 | elif not branchmerge: | |
1479 |
actions[f] = ( |
|
798 | actions[f] = ( | |
|
799 | mergestatemod.ACTION_CREATED, | |||
|
800 | (fl2,), | |||
|
801 | b'remote created', | |||
|
802 | ) | |||
1480 | else: |
|
803 | else: | |
1481 | actions[f] = ( |
|
804 | actions[f] = ( | |
1482 | ACTION_CREATED_MERGE, |
|
805 | mergestatemod.ACTION_CREATED_MERGE, | |
1483 | (fl2, pa.node()), |
|
806 | (fl2, pa.node()), | |
1484 | b'remote created, get or merge', |
|
807 | b'remote created, get or merge', | |
1485 | ) |
|
808 | ) | |
@@ -1492,16 +815,20 b' def manifestmerge(' | |||||
1492 | break |
|
815 | break | |
1493 | if df is not None and df in m1: |
|
816 | if df is not None and df in m1: | |
1494 | actions[df] = ( |
|
817 | actions[df] = ( | |
1495 | ACTION_MERGE, |
|
818 | mergestatemod.ACTION_MERGE, | |
1496 | (df, f, f, False, pa.node()), |
|
819 | (df, f, f, False, pa.node()), | |
1497 | b'local directory rename - respect move ' |
|
820 | b'local directory rename - respect move ' | |
1498 | b'from %s' % f, |
|
821 | b'from %s' % f, | |
1499 | ) |
|
822 | ) | |
1500 | elif acceptremote: |
|
823 | elif acceptremote: | |
1501 |
actions[f] = ( |
|
824 | actions[f] = ( | |
|
825 | mergestatemod.ACTION_CREATED, | |||
|
826 | (fl2,), | |||
|
827 | b'remote recreating', | |||
|
828 | ) | |||
1502 | else: |
|
829 | else: | |
1503 | actions[f] = ( |
|
830 | actions[f] = ( | |
1504 | ACTION_DELETED_CHANGED, |
|
831 | mergestatemod.ACTION_DELETED_CHANGED, | |
1505 | (None, f, f, False, pa.node()), |
|
832 | (None, f, f, False, pa.node()), | |
1506 | b'prompt deleted/changed', |
|
833 | b'prompt deleted/changed', | |
1507 | ) |
|
834 | ) | |
@@ -1528,14 +855,14 b' def _resolvetrivial(repo, wctx, mctx, an' | |||||
1528 | # actions as we resolve trivial conflicts. |
|
855 | # actions as we resolve trivial conflicts. | |
1529 | for f, (m, args, msg) in list(actions.items()): |
|
856 | for f, (m, args, msg) in list(actions.items()): | |
1530 | if ( |
|
857 | if ( | |
1531 | m == ACTION_CHANGED_DELETED |
|
858 | m == mergestatemod.ACTION_CHANGED_DELETED | |
1532 | and f in ancestor |
|
859 | and f in ancestor | |
1533 | and not wctx[f].cmp(ancestor[f]) |
|
860 | and not wctx[f].cmp(ancestor[f]) | |
1534 | ): |
|
861 | ): | |
1535 | # local did change but ended up with same content |
|
862 | # local did change but ended up with same content | |
1536 | actions[f] = ACTION_REMOVE, None, b'prompt same' |
|
863 | actions[f] = mergestatemod.ACTION_REMOVE, None, b'prompt same' | |
1537 | elif ( |
|
864 | elif ( | |
1538 | m == ACTION_DELETED_CHANGED |
|
865 | m == mergestatemod.ACTION_DELETED_CHANGED | |
1539 | and f in ancestor |
|
866 | and f in ancestor | |
1540 | and not mctx[f].cmp(ancestor[f]) |
|
867 | and not mctx[f].cmp(ancestor[f]) | |
1541 | ): |
|
868 | ): | |
@@ -1555,7 +882,17 b' def calculateupdates('
     matcher=None,
     mergeforce=False,
 ):
-    """Calculate the actions needed to merge mctx into wctx using ancestors"""
+    """
+    Calculate the actions needed to merge mctx into wctx using ancestors
+
+    Uses manifestmerge() to merge manifest and get list of actions required to
+    perform for merging two manifests. If there are multiple ancestors, uses bid
+    merge if enabled.
+
+    Also filters out actions which are unrequired if repository is sparse.
+
+    Returns same 3 element tuple as manifestmerge().
+    """
     # Avoid cycle.
     from . import sparse

@@ -1613,8 +950,8 b' def calculateupdates('

             for f, a in sorted(pycompat.iteritems(actions)):
                 m, args, msg = a
-                if m == ACTION_GET_OTHER_AND_STORE:
-                    m = ACTION_GET
+                if m == mergestatemod.ACTION_GET_OTHER_AND_STORE:
+                    m = mergestatemod.ACTION_GET
                 repo.ui.debug(b' %s: %s -> %s\n' % (f, msg, m))
                 if f in fbids:
                     d = fbids[f]
@@ -1638,14 +975,14 b' def calculateupdates('
                     actions[f] = l[0]
                     continue
             # If keep is an option, just do it.
-            if ACTION_KEEP in bids:
+            if mergestatemod.ACTION_KEEP in bids:
                 repo.ui.note(_(b" %s: picking 'keep' action\n") % f)
-                actions[f] = bids[ACTION_KEEP][0]
+                actions[f] = bids[mergestatemod.ACTION_KEEP][0]
                 continue
             # If there are gets and they all agree [how could they not?], do it.
-            if ACTION_GET in bids:
-                ga0 = bids[ACTION_GET][0]
-                if all(a == ga0 for a in bids[ACTION_GET][1:]):
+            if mergestatemod.ACTION_GET in bids:
+                ga0 = bids[mergestatemod.ACTION_GET][0]
+                if all(a == ga0 for a in bids[mergestatemod.ACTION_GET][1:]):
                     repo.ui.note(_(b" %s: picking 'get' action\n") % f)
                     actions[f] = ga0
                     continue
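This is where the per-file "auction" over the per-ancestor bids is decided: a 'keep' bid always wins, and a unanimous 'get' is taken next. A toy re-statement of that preference order, not the real implementation (``b'k'``/``b'g'`` stand in for ``ACTION_KEEP``/``ACTION_GET``; the sample bid is made up)::

    ACTION_KEEP = b'k'   # stand-in for mergestatemod.ACTION_KEEP
    ACTION_GET = b'g'    # stand-in for mergestatemod.ACTION_GET

    def pick_bid(bids):
        """bids maps an action code to the list of (action, args, msg)
        proposals that the per-ancestor manifest merges made for one file."""
        if ACTION_KEEP in bids:
            # Keeping the working-copy file is always safe, so take it.
            return bids[ACTION_KEEP][0]
        if ACTION_GET in bids:
            gets = bids[ACTION_GET]
            if all(g == gets[0] for g in gets[1:]):
                # Every ancestor proposed the same 'get'; do it.
                return gets[0]
        return None  # ambiguous; the real code applies further tie-breaking

    print(pick_bid({ACTION_GET: [(ACTION_GET, (b'', False), b'remote created')]}))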
@@ -1790,18 +1127,24 b' def _prefetchfiles(repo, ctx, actions):'
     oplist = [
         actions[a]
         for a in (
-            ACTION_GET,
-            ACTION_DELETED_CHANGED,
-            ACTION_LOCAL_DIR_RENAME_GET,
-            ACTION_MERGE,
+            mergestatemod.ACTION_GET,
+            mergestatemod.ACTION_DELETED_CHANGED,
+            mergestatemod.ACTION_LOCAL_DIR_RENAME_GET,
+            mergestatemod.ACTION_MERGE,
         )
     ]
     prefetch = scmutil.prefetchfiles
     matchfiles = scmutil.matchfiles
     prefetch(
         repo,
-        [ctx.rev()],
-        matchfiles(repo, [f for sublist in oplist for f, args, msg in sublist]),
+        [
+            (
+                ctx.rev(),
+                matchfiles(
+                    repo, [f for sublist in oplist for f, args, msg in sublist]
+                ),
+            )
+        ],
     )


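The hunk above also adapts ``_prefetchfiles`` to a prefetch entry point that takes a list of ``(revision, matcher)`` pairs rather than separate revision and matcher arguments. A standalone mock that only demonstrates the shape of that argument (everything here is invented for illustration; a plain list stands in for a real matcher)::

    def prefetchfiles(repo, revmatches):
        # Same calling convention as the call above: one (rev, matcher) pair
        # per revision whose files should be prefetched.
        for rev, match in revmatches:
            print('prefetching %d paths from revision %d' % (len(match), rev))

    prefetchfiles(repo=None, revmatches=[(42, [b'a.txt', b'b.txt'])])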
@@ -1826,21 +1169,21 b' def emptyactions():'
     return {
         m: []
         for m in (
-            ACTION_ADD,
-            ACTION_ADD_MODIFIED,
-            ACTION_FORGET,
-            ACTION_GET,
-            ACTION_CHANGED_DELETED,
-            ACTION_DELETED_CHANGED,
-            ACTION_REMOVE,
-            ACTION_DIR_RENAME_MOVE_LOCAL,
-            ACTION_LOCAL_DIR_RENAME_GET,
-            ACTION_MERGE,
-            ACTION_EXEC,
-            ACTION_KEEP,
-            ACTION_PATH_CONFLICT,
-            ACTION_PATH_CONFLICT_RESOLVE,
-            ACTION_GET_OTHER_AND_STORE,
+            mergestatemod.ACTION_ADD,
+            mergestatemod.ACTION_ADD_MODIFIED,
+            mergestatemod.ACTION_FORGET,
+            mergestatemod.ACTION_GET,
+            mergestatemod.ACTION_CHANGED_DELETED,
+            mergestatemod.ACTION_DELETED_CHANGED,
+            mergestatemod.ACTION_REMOVE,
+            mergestatemod.ACTION_DIR_RENAME_MOVE_LOCAL,
+            mergestatemod.ACTION_LOCAL_DIR_RENAME_GET,
+            mergestatemod.ACTION_MERGE,
+            mergestatemod.ACTION_EXEC,
+            mergestatemod.ACTION_KEEP,
+            mergestatemod.ACTION_PATH_CONFLICT,
+            mergestatemod.ACTION_PATH_CONFLICT_RESOLVE,
+            mergestatemod.ACTION_GET_OTHER_AND_STORE,
         )
     }

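``emptyactions()`` simply pre-creates one empty list per action code so that later code can group ``(f, args, msg)`` tuples by type without checking for missing keys. A minimal stand-alone equivalent (the byte codes are examples, not the full set)::

    def emptyactions(codes=(b'g', b'r', b'm')):
        # One empty bucket per action code.
        return {m: [] for m in codes}

    actions = emptyactions()
    actions[b'g'].append((b'file.txt', (b'', False), b'remote created'))
    print(actions)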
@@ -1862,10 +1205,12 b' def applyupdates(' | |||||
1862 | _prefetchfiles(repo, mctx, actions) |
|
1205 | _prefetchfiles(repo, mctx, actions) | |
1863 |
|
1206 | |||
1864 | updated, merged, removed = 0, 0, 0 |
|
1207 | updated, merged, removed = 0, 0, 0 | |
1865 | ms = mergestate.clean(repo, wctx.p1().node(), mctx.node(), labels) |
|
1208 | ms = mergestatemod.mergestate.clean( | |
|
1209 | repo, wctx.p1().node(), mctx.node(), labels | |||
|
1210 | ) | |||
1866 |
|
1211 | |||
1867 | # add ACTION_GET_OTHER_AND_STORE to mergestate |
|
1212 | # add ACTION_GET_OTHER_AND_STORE to mergestate | |
1868 | for e in actions[ACTION_GET_OTHER_AND_STORE]: |
|
1213 | for e in actions[mergestatemod.ACTION_GET_OTHER_AND_STORE]: | |
1869 | ms.addmergedother(e[0]) |
|
1214 | ms.addmergedother(e[0]) | |
1870 |
|
1215 | |||
1871 | moves = [] |
|
1216 | moves = [] | |
@@ -1873,9 +1218,9 b' def applyupdates(' | |||||
1873 | l.sort() |
|
1218 | l.sort() | |
1874 |
|
1219 | |||
1875 | # 'cd' and 'dc' actions are treated like other merge conflicts |
|
1220 | # 'cd' and 'dc' actions are treated like other merge conflicts | |
1876 | mergeactions = sorted(actions[ACTION_CHANGED_DELETED]) |
|
1221 | mergeactions = sorted(actions[mergestatemod.ACTION_CHANGED_DELETED]) | |
1877 | mergeactions.extend(sorted(actions[ACTION_DELETED_CHANGED])) |
|
1222 | mergeactions.extend(sorted(actions[mergestatemod.ACTION_DELETED_CHANGED])) | |
1878 | mergeactions.extend(actions[ACTION_MERGE]) |
|
1223 | mergeactions.extend(actions[mergestatemod.ACTION_MERGE]) | |
1879 | for f, args, msg in mergeactions: |
|
1224 | for f, args, msg in mergeactions: | |
1880 | f1, f2, fa, move, anc = args |
|
1225 | f1, f2, fa, move, anc = args | |
1881 | if f == b'.hgsubstate': # merged internally |
|
1226 | if f == b'.hgsubstate': # merged internally | |
@@ -1906,16 +1251,22 b' def applyupdates(' | |||||
1906 | wctx[f].audit() |
|
1251 | wctx[f].audit() | |
1907 | wctx[f].remove() |
|
1252 | wctx[f].remove() | |
1908 |
|
1253 | |||
1909 | numupdates = sum(len(l) for m, l in actions.items() if m != ACTION_KEEP) |
|
1254 | numupdates = sum( | |
|
1255 | len(l) for m, l in actions.items() if m != mergestatemod.ACTION_KEEP | |||
|
1256 | ) | |||
1910 | progress = repo.ui.makeprogress( |
|
1257 | progress = repo.ui.makeprogress( | |
1911 | _(b'updating'), unit=_(b'files'), total=numupdates |
|
1258 | _(b'updating'), unit=_(b'files'), total=numupdates | |
1912 | ) |
|
1259 | ) | |
1913 |
|
1260 | |||
1914 | if [a for a in actions[ACTION_REMOVE] if a[0] == b'.hgsubstate']: |
|
1261 | if [ | |
|
1262 | a | |||
|
1263 | for a in actions[mergestatemod.ACTION_REMOVE] | |||
|
1264 | if a[0] == b'.hgsubstate' | |||
|
1265 | ]: | |||
1915 | subrepoutil.submerge(repo, wctx, mctx, wctx, overwrite, labels) |
|
1266 | subrepoutil.submerge(repo, wctx, mctx, wctx, overwrite, labels) | |
1916 |
|
1267 | |||
1917 | # record path conflicts |
|
1268 | # record path conflicts | |
1918 | for f, args, msg in actions[ACTION_PATH_CONFLICT]: |
|
1269 | for f, args, msg in actions[mergestatemod.ACTION_PATH_CONFLICT]: | |
1919 | f1, fo = args |
|
1270 | f1, fo = args | |
1920 | s = repo.ui.status |
|
1271 | s = repo.ui.status | |
1921 | s( |
|
1272 | s( | |
@@ -1930,7 +1281,7 b' def applyupdates(' | |||||
1930 | else: |
|
1281 | else: | |
1931 | s(_(b"the remote file has been renamed to %s\n") % f1) |
|
1282 | s(_(b"the remote file has been renamed to %s\n") % f1) | |
1932 | s(_(b"resolve manually then use 'hg resolve --mark %s'\n") % f) |
|
1283 | s(_(b"resolve manually then use 'hg resolve --mark %s'\n") % f) | |
1933 | ms.addpath(f, f1, fo) |
|
1284 | ms.addpathconflict(f, f1, fo) | |
1934 | progress.increment(item=f) |
|
1285 | progress.increment(item=f) | |
1935 |
|
1286 | |||
1936 | # When merging in-memory, we can't support worker processes, so set the |
|
1287 | # When merging in-memory, we can't support worker processes, so set the | |
@@ -1939,16 +1290,20 b' def applyupdates(' | |||||
1939 |
|
1290 | |||
1940 | # remove in parallel (must come before resolving path conflicts and getting) |
|
1291 | # remove in parallel (must come before resolving path conflicts and getting) | |
1941 | prog = worker.worker( |
|
1292 | prog = worker.worker( | |
1942 | repo.ui, cost, batchremove, (repo, wctx), actions[ACTION_REMOVE] |
|
1293 | repo.ui, | |
|
1294 | cost, | |||
|
1295 | batchremove, | |||
|
1296 | (repo, wctx), | |||
|
1297 | actions[mergestatemod.ACTION_REMOVE], | |||
1943 | ) |
|
1298 | ) | |
1944 | for i, item in prog: |
|
1299 | for i, item in prog: | |
1945 | progress.increment(step=i, item=item) |
|
1300 | progress.increment(step=i, item=item) | |
1946 | removed = len(actions[ACTION_REMOVE]) |
|
1301 | removed = len(actions[mergestatemod.ACTION_REMOVE]) | |
1947 |
|
1302 | |||
1948 | # resolve path conflicts (must come before getting) |
|
1303 | # resolve path conflicts (must come before getting) | |
1949 | for f, args, msg in actions[ACTION_PATH_CONFLICT_RESOLVE]: |
|
1304 | for f, args, msg in actions[mergestatemod.ACTION_PATH_CONFLICT_RESOLVE]: | |
1950 | repo.ui.debug(b" %s: %s -> pr\n" % (f, msg)) |
|
1305 | repo.ui.debug(b" %s: %s -> pr\n" % (f, msg)) | |
1951 | (f0,) = args |
|
1306 | (f0, origf0) = args | |
1952 | if wctx[f0].lexists(): |
|
1307 | if wctx[f0].lexists(): | |
1953 | repo.ui.note(_(b"moving %s to %s\n") % (f0, f)) |
|
1308 | repo.ui.note(_(b"moving %s to %s\n") % (f0, f)) | |
1954 | wctx[f].audit() |
|
1309 | wctx[f].audit() | |
@@ -1965,7 +1320,7 b' def applyupdates(' | |||||
1965 | cost, |
|
1320 | cost, | |
1966 | batchget, |
|
1321 | batchget, | |
1967 | (repo, mctx, wctx, wantfiledata), |
|
1322 | (repo, mctx, wctx, wantfiledata), | |
1968 | actions[ACTION_GET], |
|
1323 | actions[mergestatemod.ACTION_GET], | |
1969 | threadsafe=threadsafe, |
|
1324 | threadsafe=threadsafe, | |
1970 | hasretval=True, |
|
1325 | hasretval=True, | |
1971 | ) |
|
1326 | ) | |
@@ -1976,33 +1331,33 b' def applyupdates(' | |||||
1976 | else: |
|
1331 | else: | |
1977 | i, item = res |
|
1332 | i, item = res | |
1978 | progress.increment(step=i, item=item) |
|
1333 | progress.increment(step=i, item=item) | |
1979 | updated = len(actions[ACTION_GET]) |
|
1334 | updated = len(actions[mergestatemod.ACTION_GET]) | |
1980 |
|
1335 | |||
1981 | if [a for a in actions[ACTION_GET] if a[0] == b'.hgsubstate']: |
|
1336 | if [a for a in actions[mergestatemod.ACTION_GET] if a[0] == b'.hgsubstate']: | |
1982 | subrepoutil.submerge(repo, wctx, mctx, wctx, overwrite, labels) |
|
1337 | subrepoutil.submerge(repo, wctx, mctx, wctx, overwrite, labels) | |
1983 |
|
1338 | |||
1984 | # forget (manifest only, just log it) (must come first) |
|
1339 | # forget (manifest only, just log it) (must come first) | |
1985 | for f, args, msg in actions[ACTION_FORGET]: |
|
1340 | for f, args, msg in actions[mergestatemod.ACTION_FORGET]: | |
1986 | repo.ui.debug(b" %s: %s -> f\n" % (f, msg)) |
|
1341 | repo.ui.debug(b" %s: %s -> f\n" % (f, msg)) | |
1987 | progress.increment(item=f) |
|
1342 | progress.increment(item=f) | |
1988 |
|
1343 | |||
1989 | # re-add (manifest only, just log it) |
|
1344 | # re-add (manifest only, just log it) | |
1990 | for f, args, msg in actions[ACTION_ADD]: |
|
1345 | for f, args, msg in actions[mergestatemod.ACTION_ADD]: | |
1991 | repo.ui.debug(b" %s: %s -> a\n" % (f, msg)) |
|
1346 | repo.ui.debug(b" %s: %s -> a\n" % (f, msg)) | |
1992 | progress.increment(item=f) |
|
1347 | progress.increment(item=f) | |
1993 |
|
1348 | |||
1994 | # re-add/mark as modified (manifest only, just log it) |
|
1349 | # re-add/mark as modified (manifest only, just log it) | |
1995 | for f, args, msg in actions[ACTION_ADD_MODIFIED]: |
|
1350 | for f, args, msg in actions[mergestatemod.ACTION_ADD_MODIFIED]: | |
1996 | repo.ui.debug(b" %s: %s -> am\n" % (f, msg)) |
|
1351 | repo.ui.debug(b" %s: %s -> am\n" % (f, msg)) | |
1997 | progress.increment(item=f) |
|
1352 | progress.increment(item=f) | |
1998 |
|
1353 | |||
1999 | # keep (noop, just log it) |
|
1354 | # keep (noop, just log it) | |
2000 | for f, args, msg in actions[ACTION_KEEP]: |
|
1355 | for f, args, msg in actions[mergestatemod.ACTION_KEEP]: | |
2001 | repo.ui.debug(b" %s: %s -> k\n" % (f, msg)) |
|
1356 | repo.ui.debug(b" %s: %s -> k\n" % (f, msg)) | |
2002 | # no progress |
|
1357 | # no progress | |
2003 |
|
1358 | |||
2004 | # directory rename, move local |
|
1359 | # directory rename, move local | |
2005 | for f, args, msg in actions[ACTION_DIR_RENAME_MOVE_LOCAL]: |
|
1360 | for f, args, msg in actions[mergestatemod.ACTION_DIR_RENAME_MOVE_LOCAL]: | |
2006 | repo.ui.debug(b" %s: %s -> dm\n" % (f, msg)) |
|
1361 | repo.ui.debug(b" %s: %s -> dm\n" % (f, msg)) | |
2007 | progress.increment(item=f) |
|
1362 | progress.increment(item=f) | |
2008 | f0, flags = args |
|
1363 | f0, flags = args | |
@@ -2013,7 +1368,7 b' def applyupdates(' | |||||
2013 | updated += 1 |
|
1368 | updated += 1 | |
2014 |
|
1369 | |||
2015 | # local directory rename, get |
|
1370 | # local directory rename, get | |
2016 | for f, args, msg in actions[ACTION_LOCAL_DIR_RENAME_GET]: |
|
1371 | for f, args, msg in actions[mergestatemod.ACTION_LOCAL_DIR_RENAME_GET]: | |
2017 | repo.ui.debug(b" %s: %s -> dg\n" % (f, msg)) |
|
1372 | repo.ui.debug(b" %s: %s -> dg\n" % (f, msg)) | |
2018 | progress.increment(item=f) |
|
1373 | progress.increment(item=f) | |
2019 | f0, flags = args |
|
1374 | f0, flags = args | |
@@ -2022,7 +1377,7 b' def applyupdates(' | |||||
2022 | updated += 1 |
|
1377 | updated += 1 | |
2023 |
|
1378 | |||
2024 | # exec |
|
1379 | # exec | |
2025 | for f, args, msg in actions[ACTION_EXEC]: |
|
1380 | for f, args, msg in actions[mergestatemod.ACTION_EXEC]: | |
2026 | repo.ui.debug(b" %s: %s -> e\n" % (f, msg)) |
|
1381 | repo.ui.debug(b" %s: %s -> e\n" % (f, msg)) | |
2027 | progress.increment(item=f) |
|
1382 | progress.increment(item=f) | |
2028 | (flags,) = args |
|
1383 | (flags,) = args | |
@@ -2087,7 +1442,7 b' def applyupdates(' | |||||
2087 | if ( |
|
1442 | if ( | |
2088 | usemergedriver |
|
1443 | usemergedriver | |
2089 | and not unresolved |
|
1444 | and not unresolved | |
2090 | and ms.mdstate() != MERGE_DRIVER_STATE_SUCCESS |
|
1445 | and ms.mdstate() != mergestatemod.MERGE_DRIVER_STATE_SUCCESS | |
2091 | ): |
|
1446 | ): | |
2092 | if not driverconclude(repo, ms, wctx, labels=labels): |
|
1447 | if not driverconclude(repo, ms, wctx, labels=labels): | |
2093 | # XXX setting unresolved to at least 1 is a hack to make sure we |
|
1448 | # XXX setting unresolved to at least 1 is a hack to make sure we | |
@@ -2103,10 +1458,10 b' def applyupdates(' | |||||
2103 |
|
1458 | |||
2104 | extraactions = ms.actions() |
|
1459 | extraactions = ms.actions() | |
2105 | if extraactions: |
|
1460 | if extraactions: | |
2106 | mfiles = {a[0] for a in actions[ACTION_MERGE]} |
|
1461 | mfiles = {a[0] for a in actions[mergestatemod.ACTION_MERGE]} | |
2107 | for k, acts in pycompat.iteritems(extraactions): |
|
1462 | for k, acts in pycompat.iteritems(extraactions): | |
2108 | actions[k].extend(acts) |
|
1463 | actions[k].extend(acts) | |
2109 | if k == ACTION_GET and wantfiledata: |
|
1464 | if k == mergestatemod.ACTION_GET and wantfiledata: | |
2110 | # no filedata until mergestate is updated to provide it |
|
1465 | # no filedata until mergestate is updated to provide it | |
2111 | for a in acts: |
|
1466 | for a in acts: | |
2112 | getfiledata[a[0]] = None |
|
1467 | getfiledata[a[0]] = None | |
@@ -2128,110 +1483,58 b' def applyupdates(' | |||||
2128 | # those lists aren't consulted again. |
|
1483 | # those lists aren't consulted again. | |
2129 | mfiles.difference_update(a[0] for a in acts) |
|
1484 | mfiles.difference_update(a[0] for a in acts) | |
2130 |
|
1485 | |||
2131 | actions[ACTION_MERGE] = [ |
|
1486 | actions[mergestatemod.ACTION_MERGE] = [ | |
2132 | a for a in actions[ACTION_MERGE] if a[0] in mfiles |
|
1487 | a for a in actions[mergestatemod.ACTION_MERGE] if a[0] in mfiles | |
2133 | ] |
|
1488 | ] | |
2134 |
|
1489 | |||
2135 | progress.complete() |
|
1490 | progress.complete() | |
2136 | assert len(getfiledata) == (len(actions[ACTION_GET]) if wantfiledata else 0) |
|
1491 | assert len(getfiledata) == ( | |
|
1492 | len(actions[mergestatemod.ACTION_GET]) if wantfiledata else 0 | |||
|
1493 | ) | |||
2137 | return updateresult(updated, merged, removed, unresolved), getfiledata |
|
1494 | return updateresult(updated, merged, removed, unresolved), getfiledata | |
2138 |
|
1495 | |||
2139 |
|
1496 | |||
2140 | def recordupdates(repo, actions, branchmerge, getfiledata): |
|
1497 | def _advertisefsmonitor(repo, num_gets, p1node): | |
2141 | """record merge actions to the dirstate""" |
|
1498 | # Advertise fsmonitor when its presence could be useful. | |
2142 | # remove (must come first) |
|
1499 | # | |
2143 | for f, args, msg in actions.get(ACTION_REMOVE, []): |
|
1500 | # We only advertise when performing an update from an empty working | |
2144 | if branchmerge: |
|
1501 | # directory. This typically only occurs during initial clone. | |
2145 | repo.dirstate.remove(f) |
|
1502 | # | |
2146 | else: |
|
1503 | # We give users a mechanism to disable the warning in case it is | |
2147 | repo.dirstate.drop(f) |
|
1504 | # annoying. | |
2148 |
|
1505 | # | ||
2149 | # forget (must come first) |
|
1506 | # We only allow on Linux and MacOS because that's where fsmonitor is | |
2150 | for f, args, msg in actions.get(ACTION_FORGET, []): |
|
1507 | # considered stable. | |
2151 | repo.dirstate.drop(f) |
|
1508 | fsmonitorwarning = repo.ui.configbool(b'fsmonitor', b'warn_when_unused') | |
2152 |
|
1509 | fsmonitorthreshold = repo.ui.configint( | ||
2153 | # resolve path conflicts |
|
1510 | b'fsmonitor', b'warn_update_file_count' | |
2154 | for f, args, msg in actions.get(ACTION_PATH_CONFLICT_RESOLVE, []): |
|
1511 | ) | |
2155 | (f0,) = args |
|
1512 | try: | |
2156 | origf0 = repo.dirstate.copied(f0) or f0 |
|
1513 | # avoid cycle: extensions -> cmdutil -> merge | |
2157 | repo.dirstate.add(f) |
|
1514 | from . import extensions | |
2158 | repo.dirstate.copy(origf0, f) |
|
|||
2159 | if f0 == origf0: |
|
|||
2160 | repo.dirstate.remove(f0) |
|
|||
2161 | else: |
|
|||
2162 | repo.dirstate.drop(f0) |
|
|||
2163 |
|
||||
2164 | # re-add |
|
|||
2165 | for f, args, msg in actions.get(ACTION_ADD, []): |
|
|||
2166 | repo.dirstate.add(f) |
|
|||
2167 |
|
||||
2168 | # re-add/mark as modified |
|
|||
2169 | for f, args, msg in actions.get(ACTION_ADD_MODIFIED, []): |
|
|||
2170 | if branchmerge: |
|
|||
2171 | repo.dirstate.normallookup(f) |
|
|||
2172 | else: |
|
|||
2173 | repo.dirstate.add(f) |
|
|||
2174 |
|
||||
2175 | # exec change |
|
|||
2176 | for f, args, msg in actions.get(ACTION_EXEC, []): |
|
|||
2177 | repo.dirstate.normallookup(f) |
|
|||
2178 |
|
||||
2179 | # keep |
|
|||
2180 | for f, args, msg in actions.get(ACTION_KEEP, []): |
|
|||
2181 | pass |
|
|||
2182 |
|
1515 | |||
2183 | # get |
|
1516 | extensions.find(b'fsmonitor') | |
2184 | for f, args, msg in actions.get(ACTION_GET, []): |
|
1517 | fsmonitorenabled = repo.ui.config(b'fsmonitor', b'mode') != b'off' | |
2185 | if branchmerge: |
|
1518 | # We intentionally don't look at whether fsmonitor has disabled | |
2186 | repo.dirstate.otherparent(f) |
|
1519 | # itself because a) fsmonitor may have already printed a warning | |
2187 | else: |
|
1520 | # b) we only care about the config state here. | |
2188 | parentfiledata = getfiledata[f] if getfiledata else None |
|
1521 | except KeyError: | |
2189 | repo.dirstate.normal(f, parentfiledata=parentfiledata) |
|
1522 | fsmonitorenabled = False | |
2190 |
|
1523 | |||
2191 | # merge |
|
1524 | if ( | |
2192 | for f, args, msg in actions.get(ACTION_MERGE, []): |
|
1525 | fsmonitorwarning | |
2193 | f1, f2, fa, move, anc = args |
|
1526 | and not fsmonitorenabled | |
2194 | if branchmerge: |
|
1527 | and p1node == nullid | |
2195 | # We've done a branch merge, mark this file as merged |
|
1528 | and num_gets >= fsmonitorthreshold | |
2196 | # so that we properly record the merger later |
|
1529 | and pycompat.sysplatform.startswith((b'linux', b'darwin')) | |
2197 | repo.dirstate.merge(f) |
|
1530 | ): | |
2198 | if f1 != f2: # copy/rename |
|
1531 | repo.ui.warn( | |
2199 |
|
|
1532 | _( | |
2200 | repo.dirstate.remove(f1) |
|
1533 | b'(warning: large working directory being used without ' | |
2201 | if f1 != f: |
|
1534 | b'fsmonitor enabled; enable fsmonitor to improve performance; ' | |
2202 | repo.dirstate.copy(f1, f) |
|
1535 | b'see "hg help -e fsmonitor")\n' | |
2203 |
|
|
1536 | ) | |
2204 | repo.dirstate.copy(f2, f) |
|
1537 | ) | |
2205 | else: |
|
|||
2206 | # We've update-merged a locally modified file, so |
|
|||
2207 | # we set the dirstate to emulate a normal checkout |
|
|||
2208 | # of that file some time in the past. Thus our |
|
|||
2209 | # merge will appear as a normal local file |
|
|||
2210 | # modification. |
|
|||
2211 | if f2 == f: # file not locally copied/moved |
|
|||
2212 | repo.dirstate.normallookup(f) |
|
|||
2213 | if move: |
|
|||
2214 | repo.dirstate.drop(f1) |
|
|||
2215 |
|
||||
2216 | # directory rename, move local |
|
|||
2217 | for f, args, msg in actions.get(ACTION_DIR_RENAME_MOVE_LOCAL, []): |
|
|||
2218 | f0, flag = args |
|
|||
2219 | if branchmerge: |
|
|||
2220 | repo.dirstate.add(f) |
|
|||
2221 | repo.dirstate.remove(f0) |
|
|||
2222 | repo.dirstate.copy(f0, f) |
|
|||
2223 | else: |
|
|||
2224 | repo.dirstate.normal(f) |
|
|||
2225 | repo.dirstate.drop(f0) |
|
|||
2226 |
|
||||
2227 | # directory rename, get |
|
|||
2228 | for f, args, msg in actions.get(ACTION_LOCAL_DIR_RENAME_GET, []): |
|
|||
2229 | f0, flag = args |
|
|||
2230 | if branchmerge: |
|
|||
2231 | repo.dirstate.add(f) |
|
|||
2232 | repo.dirstate.copy(f0, f) |
|
|||
2233 | else: |
|
|||
2234 | repo.dirstate.normal(f) |
|
|||
2235 |
|
1538 | |||
2236 |
|
1539 | |||
2237 | UPDATECHECK_ABORT = b'abort' # handled at higher layers |
|
1540 | UPDATECHECK_ABORT = b'abort' # handled at higher layers | |
@@ -2334,7 +1637,11 b' def update('
                 ),
             )
         )
-    with repo.wlock():
+    if wc is not None and wc.isinmemory():
+        maybe_wlock = util.nullcontextmanager()
+    else:
+        maybe_wlock = repo.wlock()
+    with maybe_wlock:
         if wc is None:
             wc = repo[None]
         pl = wc.parents()
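The new ``maybe_wlock`` logic is the usual conditional-context-manager pattern: take the working-copy lock only when the update will actually touch the on-disk working directory, and substitute a no-op context manager for in-memory merges. A generic sketch of the same pattern using the standard library (``contextlib.nullcontext`` plays the role of ``util.nullcontextmanager()`` here; the lock and function are invented)::

    import contextlib
    import threading

    _wlock = threading.Lock()

    def update(in_memory):
        # Skip the lock for in-memory work, mirroring the wc.isinmemory() check.
        maybe_wlock = contextlib.nullcontext() if in_memory else _wlock
        with maybe_wlock:
            return 'no lock taken' if in_memory else 'lock held during update'

    print(update(in_memory=True))
    print(update(in_memory=False))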
@@ -2356,7 +1663,7 b' def update(' | |||||
2356 | if not overwrite: |
|
1663 | if not overwrite: | |
2357 | if len(pl) > 1: |
|
1664 | if len(pl) > 1: | |
2358 | raise error.Abort(_(b"outstanding uncommitted merge")) |
|
1665 | raise error.Abort(_(b"outstanding uncommitted merge")) | |
2359 | ms = mergestate.read(repo) |
|
1666 | ms = mergestatemod.mergestate.read(repo) | |
2360 | if list(ms.unresolved()): |
|
1667 | if list(ms.unresolved()): | |
2361 | raise error.Abort( |
|
1668 | raise error.Abort( | |
2362 | _(b"outstanding merge conflicts"), |
|
1669 | _(b"outstanding merge conflicts"), | |
@@ -2443,12 +1750,12 b' def update(' | |||||
2443 | if updatecheck == UPDATECHECK_NO_CONFLICT: |
|
1750 | if updatecheck == UPDATECHECK_NO_CONFLICT: | |
2444 | for f, (m, args, msg) in pycompat.iteritems(actionbyfile): |
|
1751 | for f, (m, args, msg) in pycompat.iteritems(actionbyfile): | |
2445 | if m not in ( |
|
1752 | if m not in ( | |
2446 | ACTION_GET, |
|
1753 | mergestatemod.ACTION_GET, | |
2447 | ACTION_KEEP, |
|
1754 | mergestatemod.ACTION_KEEP, | |
2448 | ACTION_EXEC, |
|
1755 | mergestatemod.ACTION_EXEC, | |
2449 | ACTION_REMOVE, |
|
1756 | mergestatemod.ACTION_REMOVE, | |
2450 | ACTION_PATH_CONFLICT_RESOLVE, |
|
1757 | mergestatemod.ACTION_PATH_CONFLICT_RESOLVE, | |
2451 | ACTION_GET_OTHER_AND_STORE, |
|
1758 | mergestatemod.ACTION_GET_OTHER_AND_STORE, | |
2452 | ): |
|
1759 | ): | |
2453 | msg = _(b"conflicting changes") |
|
1760 | msg = _(b"conflicting changes") | |
2454 | hint = _(b"commit or update --clean to discard changes") |
|
1761 | hint = _(b"commit or update --clean to discard changes") | |
@@ -2462,7 +1769,7 b' def update(' | |||||
2462 | m, args, msg = actionbyfile[f] |
|
1769 | m, args, msg = actionbyfile[f] | |
2463 | prompts = filemerge.partextras(labels) |
|
1770 | prompts = filemerge.partextras(labels) | |
2464 | prompts[b'f'] = f |
|
1771 | prompts[b'f'] = f | |
2465 | if m == ACTION_CHANGED_DELETED: |
|
1772 | if m == mergestatemod.ACTION_CHANGED_DELETED: | |
2466 | if repo.ui.promptchoice( |
|
1773 | if repo.ui.promptchoice( | |
2467 | _( |
|
1774 | _( | |
2468 | b"local%(l)s changed %(f)s which other%(o)s deleted\n" |
|
1775 | b"local%(l)s changed %(f)s which other%(o)s deleted\n" | |
@@ -2472,16 +1779,24 b' def update(' | |||||
2472 | % prompts, |
|
1779 | % prompts, | |
2473 | 0, |
|
1780 | 0, | |
2474 | ): |
|
1781 | ): | |
2475 |
actionbyfile[f] = ( |
|
1782 | actionbyfile[f] = ( | |
|
1783 | mergestatemod.ACTION_REMOVE, | |||
|
1784 | None, | |||
|
1785 | b'prompt delete', | |||
|
1786 | ) | |||
2476 | elif f in p1: |
|
1787 | elif f in p1: | |
2477 | actionbyfile[f] = ( |
|
1788 | actionbyfile[f] = ( | |
2478 | ACTION_ADD_MODIFIED, |
|
1789 | mergestatemod.ACTION_ADD_MODIFIED, | |
2479 | None, |
|
1790 | None, | |
2480 | b'prompt keep', |
|
1791 | b'prompt keep', | |
2481 | ) |
|
1792 | ) | |
2482 | else: |
|
1793 | else: | |
2483 |
actionbyfile[f] = ( |
|
1794 | actionbyfile[f] = ( | |
2484 | elif m == ACTION_DELETED_CHANGED: |
|
1795 | mergestatemod.ACTION_ADD, | |
|
1796 | None, | |||
|
1797 | b'prompt keep', | |||
|
1798 | ) | |||
|
1799 | elif m == mergestatemod.ACTION_DELETED_CHANGED: | |||
2485 | f1, f2, fa, move, anc = args |
|
1800 | f1, f2, fa, move, anc = args | |
2486 | flags = p2[f2].flags() |
|
1801 | flags = p2[f2].flags() | |
2487 | if ( |
|
1802 | if ( | |
@@ -2497,7 +1812,7 b' def update(' | |||||
2497 | == 0 |
|
1812 | == 0 | |
2498 | ): |
|
1813 | ): | |
2499 | actionbyfile[f] = ( |
|
1814 | actionbyfile[f] = ( | |
2500 | ACTION_GET, |
|
1815 | mergestatemod.ACTION_GET, | |
2501 | (flags, False), |
|
1816 | (flags, False), | |
2502 | b'prompt recreating', |
|
1817 | b'prompt recreating', | |
2503 | ) |
|
1818 | ) | |
@@ -2511,9 +1826,9 b' def update(' | |||||
2511 | actions[m] = [] |
|
1826 | actions[m] = [] | |
2512 | actions[m].append((f, args, msg)) |
|
1827 | actions[m].append((f, args, msg)) | |
2513 |
|
1828 | |||
2514 | # ACTION_GET_OTHER_AND_STORE is a ACTION_GET + store in mergestate |
|
1829 | # ACTION_GET_OTHER_AND_STORE is a mergestatemod.ACTION_GET + store in mergestate | |
2515 | for e in actions[ACTION_GET_OTHER_AND_STORE]: |
|
1830 | for e in actions[mergestatemod.ACTION_GET_OTHER_AND_STORE]: | |
2516 | actions[ACTION_GET].append(e) |
|
1831 | actions[mergestatemod.ACTION_GET].append(e) | |
2517 |
|
1832 | |||
2518 | if not util.fscasesensitive(repo.path): |
|
1833 | if not util.fscasesensitive(repo.path): | |
2519 | # check collision between files only in p2 for clean update |
|
1834 | # check collision between files only in p2 for clean update | |
@@ -2560,46 +1875,9 b' def update(' | |||||
2560 | # note that we're in the middle of an update |
|
1875 | # note that we're in the middle of an update | |
2561 | repo.vfs.write(b'updatestate', p2.hex()) |
|
1876 | repo.vfs.write(b'updatestate', p2.hex()) | |
2562 |
|
1877 | |||
2563 | # Advertise fsmonitor when its presence could be useful. |
|
1878 | _advertisefsmonitor( | |
2564 | # |
|
1879 | repo, len(actions[mergestatemod.ACTION_GET]), p1.node() | |
2565 | # We only advertise when performing an update from an empty working |
|
|||
2566 | # directory. This typically only occurs during initial clone. |
|
|||
2567 | # |
|
|||
2568 | # We give users a mechanism to disable the warning in case it is |
|
|||
2569 | # annoying. |
|
|||
2570 | # |
|
|||
2571 | # We only allow on Linux and MacOS because that's where fsmonitor is |
|
|||
2572 | # considered stable. |
|
|||
2573 | fsmonitorwarning = repo.ui.configbool(b'fsmonitor', b'warn_when_unused') |
|
|||
2574 | fsmonitorthreshold = repo.ui.configint( |
|
|||
2575 | b'fsmonitor', b'warn_update_file_count' |
|
|||
2576 | ) |
|
1880 | ) | |
2577 | try: |
|
|||
2578 | # avoid cycle: extensions -> cmdutil -> merge |
|
|||
2579 | from . import extensions |
|
|||
2580 |
|
||||
2581 | extensions.find(b'fsmonitor') |
|
|||
2582 | fsmonitorenabled = repo.ui.config(b'fsmonitor', b'mode') != b'off' |
|
|||
2583 | # We intentionally don't look at whether fsmonitor has disabled |
|
|||
2584 | # itself because a) fsmonitor may have already printed a warning |
|
|||
2585 | # b) we only care about the config state here. |
|
|||
2586 | except KeyError: |
|
|||
2587 | fsmonitorenabled = False |
|
|||
2588 |
|
||||
2589 | if ( |
|
|||
2590 | fsmonitorwarning |
|
|||
2591 | and not fsmonitorenabled |
|
|||
2592 | and p1.node() == nullid |
|
|||
2593 | and len(actions[ACTION_GET]) >= fsmonitorthreshold |
|
|||
2594 | and pycompat.sysplatform.startswith((b'linux', b'darwin')) |
|
|||
2595 | ): |
|
|||
2596 | repo.ui.warn( |
|
|||
2597 | _( |
|
|||
2598 | b'(warning: large working directory being used without ' |
|
|||
2599 | b'fsmonitor enabled; enable fsmonitor to improve performance; ' |
|
|||
2600 | b'see "hg help -e fsmonitor")\n' |
|
|||
2601 | ) |
|
|||
2602 | ) |
|
|||
2603 |
|
1881 | |||
2604 | wantfiledata = updatedirstate and not branchmerge |
|
1882 | wantfiledata = updatedirstate and not branchmerge | |
2605 | stats, getfiledata = applyupdates( |
|
1883 | stats, getfiledata = applyupdates( | |
@@ -2609,7 +1887,9 b' def update(' | |||||
2609 | if updatedirstate: |
|
1887 | if updatedirstate: | |
2610 | with repo.dirstate.parentchange(): |
|
1888 | with repo.dirstate.parentchange(): | |
2611 | repo.setparents(fp1, fp2) |
|
1889 | repo.setparents(fp1, fp2) | |
2612 | recordupdates(repo, actions, branchmerge, getfiledata) |
|
1890 | mergestatemod.recordupdates( | |
|
1891 | repo, actions, branchmerge, getfiledata | |||
|
1892 | ) | |||
2613 | # update completed, clear state |
|
1893 | # update completed, clear state | |
2614 | util.unlink(repo.vfs.join(b'updatestate')) |
|
1894 | util.unlink(repo.vfs.join(b'updatestate')) | |
2615 |
|
1895 |
This diff has been collapsed as it changes many lines (2086 lines changed).
@@ -1,42 +1,22 b'' | |||||
1 | # merge.py - directory-level update/merge handling for Mercurial |
|
|||
2 | # |
|
|||
3 | # Copyright 2006, 2007 Matt Mackall <mpm@selenic.com> |
|
|||
4 | # |
|
|||
5 | # This software may be used and distributed according to the terms of the |
|
|||
6 | # GNU General Public License version 2 or any later version. |
|
|||
7 |
|
||||
8 |
|
|
1 | from __future__ import absolute_import | |
9 |
|
2 | |||
10 | import errno |
|
3 | import errno | |
11 | import shutil |
|
4 | import shutil | |
12 | import stat |
|
|||
13 | import struct |
|
5 | import struct | |
14 |
|
6 | |||
15 | from .i18n import _ |
|
7 | from .i18n import _ | |
16 | from .node import ( |
|
8 | from .node import ( | |
17 | addednodeid, |
|
|||
18 | bin, |
|
9 | bin, | |
19 | hex, |
|
10 | hex, | |
20 | modifiednodeid, |
|
|||
21 | nullhex, |
|
11 | nullhex, | |
22 | nullid, |
|
12 | nullid, | |
23 | nullrev, |
|
|||
24 | ) |
|
13 | ) | |
25 | from .pycompat import delattr |
|
14 | from .pycompat import delattr | |
26 | from .thirdparty import attr |
|
|||
27 | from . import ( |
|
15 | from . import ( | |
28 | copies, |
|
|||
29 | encoding, |
|
|||
30 | error, |
|
16 | error, | |
31 | filemerge, |
|
17 | filemerge, | |
32 | match as matchmod, |
|
|||
33 | obsutil, |
|
|||
34 | pathutil, |
|
|||
35 | pycompat, |
|
18 | pycompat, | |
36 | scmutil, |
|
|||
37 | subrepoutil, |
|
|||
38 | util, |
|
19 | util, | |
39 | worker, |
|
|||
40 | ) |
|
20 | ) | |
41 | from .utils import hashutil |
|
21 | from .utils import hashutil | |
42 |
|
22 | |||
@@ -51,25 +31,48 b' def _droponode(data):' | |||||
51 | return b'\0'.join(bits) |
|
31 | return b'\0'.join(bits) | |
52 |
|
32 | |||
53 |
|
33 | |||
|
34 | def _filectxorabsent(hexnode, ctx, f): | |||
|
35 | if hexnode == nullhex: | |||
|
36 | return filemerge.absentfilectx(ctx, f) | |||
|
37 | else: | |||
|
38 | return ctx[f] | |||
|
39 | ||||
|
40 | ||||
54 | # Merge state record types. See ``mergestate`` docs for more. |
|
41 | # Merge state record types. See ``mergestate`` docs for more. | |
|
42 | ||||
|
43 | #### | |||
|
44 | # merge records which records metadata about a current merge | |||
|
45 | # exists only once in a mergestate | |||
|
46 | ##### | |||
55 | RECORD_LOCAL = b'L' |
|
47 | RECORD_LOCAL = b'L' | |
56 | RECORD_OTHER = b'O' |
|
48 | RECORD_OTHER = b'O' | |
|
49 | # record merge labels | |||
|
50 | RECORD_LABELS = b'l' | |||
|
51 | # store info about merge driver used and it's state | |||
|
52 | RECORD_MERGE_DRIVER_STATE = b'm' | |||
|
53 | ||||
|
54 | ##### | |||
|
55 | # record extra information about files, with one entry containing info about one | |||
|
56 | # file. Hence, multiple of them can exists | |||
|
57 | ##### | |||
|
58 | RECORD_FILE_VALUES = b'f' | |||
|
59 | ||||
|
60 | ##### | |||
|
61 | # merge records which represents state of individual merges of files/folders | |||
|
62 | # These are top level records for each entry containing merge related info. | |||
|
63 | # Each record of these has info about one file. Hence multiple of them can | |||
|
64 | # exists | |||
|
65 | ##### | |||
57 | RECORD_MERGED = b'F' |
|
66 | RECORD_MERGED = b'F' | |
58 | RECORD_CHANGEDELETE_CONFLICT = b'C' |
|
67 | RECORD_CHANGEDELETE_CONFLICT = b'C' | |
59 | RECORD_MERGE_DRIVER_MERGE = b'D' |
|
68 | RECORD_MERGE_DRIVER_MERGE = b'D' | |
|
69 | # the path was dir on one side of merge and file on another | |||
60 | RECORD_PATH_CONFLICT = b'P' |
|
70 | RECORD_PATH_CONFLICT = b'P' | |
61 | RECORD_MERGE_DRIVER_STATE = b'm' |
|
|||
62 | RECORD_FILE_VALUES = b'f' |
|
|||
63 | RECORD_LABELS = b'l' |
|
|||
64 | RECORD_OVERRIDE = b't' |
|
|||
65 | RECORD_UNSUPPORTED_MANDATORY = b'X' |
|
|||
66 | RECORD_UNSUPPORTED_ADVISORY = b'x' |
|
|||
67 | RECORD_RESOLVED_OTHER = b'R' |
|
|||
68 |
|
71 | |||
69 | MERGE_DRIVER_STATE_UNMARKED = b'u' |
|
72 | ##### | |
70 | MERGE_DRIVER_STATE_MARKED = b'm' |
|
73 | # possible state which a merge entry can have. These are stored inside top-level | |
71 | MERGE_DRIVER_STATE_SUCCESS = b's' |
|
74 | # merge records mentioned just above. | |
72 |
|
75 | ##### | ||
73 | MERGE_RECORD_UNRESOLVED = b'u' |
|
76 | MERGE_RECORD_UNRESOLVED = b'u' | |
74 | MERGE_RECORD_RESOLVED = b'r' |
|
77 | MERGE_RECORD_RESOLVED = b'r' | |
75 | MERGE_RECORD_UNRESOLVED_PATH = b'pu' |
|
78 | MERGE_RECORD_UNRESOLVED_PATH = b'pu' | |
@@ -79,6 +82,21 b" MERGE_RECORD_DRIVER_RESOLVED = b'd'" | |||||
79 | # of other version. This info is used on commit. |
|
82 | # of other version. This info is used on commit. | |
80 | MERGE_RECORD_MERGED_OTHER = b'o' |
|
83 | MERGE_RECORD_MERGED_OTHER = b'o' | |
81 |
|
84 | |||
|
85 | ##### | |||
|
86 | # top level record which stores other unknown records. Multiple of these can | |||
|
87 | # exists | |||
|
88 | ##### | |||
|
89 | RECORD_OVERRIDE = b't' | |||
|
90 | ||||
|
91 | ##### | |||
|
92 | # possible states which a merge driver can have. These are stored inside a | |||
|
93 | # RECORD_MERGE_DRIVER_STATE entry | |||
|
94 | ##### | |||
|
95 | MERGE_DRIVER_STATE_UNMARKED = b'u' | |||
|
96 | MERGE_DRIVER_STATE_MARKED = b'm' | |||
|
97 | MERGE_DRIVER_STATE_SUCCESS = b's' | |||
|
98 | ||||
|
99 | ||||
82 | ACTION_FORGET = b'f' |
|
100 | ACTION_FORGET = b'f' | |
83 | ACTION_REMOVE = b'r' |
|
101 | ACTION_REMOVE = b'r' | |
84 | ACTION_ADD = b'a' |
|
102 | ACTION_ADD = b'a' | |
@@ -125,8 +143,6 b' class mergestate(object):'
     m: the external merge driver defined for this merge plus its run state
        (experimental)
     f: a (filename, dictionary) tuple of optional values for a given file
-    X: unsupported mandatory record type (used in tests)
-    x: unsupported advisory record type (used in tests)
     l: the labels for the parts of the merge.

     Merge driver run states (experimental):
@@ -233,7 +249,6 b' class mergestate(object):'
                 RECORD_CHANGEDELETE_CONFLICT,
                 RECORD_PATH_CONFLICT,
                 RECORD_MERGE_DRIVER_MERGE,
-                RECORD_RESOLVED_OTHER,
             ):
                 bits = record.split(b'\0')
                 self._state[bits[0]] = bits[1:]
@@ -252,6 +267,11 b' class mergestate(object):'
                 self._labels = [l for l in labels if len(l) > 0]
             elif not rtype.islower():
                 unsupported.add(rtype)
+        # contains a mapping of form:
+        # {filename : (merge_return_value, action_to_be_performed)}
+        # these are results of re-running merge process
+        # this dict is used to perform actions on dirstate caused by re-running
+        # the merge
         self._results = {}
         self._dirty = False
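
The comment added here describes ``self._results`` as a mapping from filename to the merge's return value plus the dirstate action to apply afterwards. A small sketch of how such a mapping gets regrouped into per-action lists, mirroring the ``actions()`` loop further down in this diff (the line appending ``b"merge result"``), with made-up filenames and return values::

    # Hypothetical contents; the action letters match the ACTION_* constants above.
    ACTION_GET, ACTION_REMOVE, ACTION_ADD = b'g', b'r', b'a'
    results = {
        b'foo.c': (0, ACTION_GET),   # merge done, dirstate still needs a 'get'
        b'bar.c': (1, ACTION_ADD),   # merge left conflicts, file is re-added
        b'baz.c': (0, None),         # regular merge, no dirstate action needed
    }

    actions = {ACTION_GET: [], ACTION_REMOVE: [], ACTION_ADD: []}
    for f, (ret, action) in sorted(results.items()):
        if action is not None:
            actions[action].append((f, None, b"merge result"))
    # actions[ACTION_GET] == [(b'foo.c', None, b'merge result')]
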
@@ -461,9 +481,7 b' class mergestate(object):'
                     (RECORD_PATH_CONFLICT, b'\0'.join([filename] + v))
                 )
             elif v[0] == MERGE_RECORD_MERGED_OTHER:
-                records.append(
-                    (RECORD_RESOLVED_OTHER, b'\0'.join([filename] + v))
-                )
+                records.append((RECORD_MERGED, b'\0'.join([filename] + v)))
             elif v[1] == nullhex or v[6] == nullhex:
                 # Change/Delete or Delete/Change conflicts. These are stored in
                 # 'C' records. v[1] is the local file, and is nullhex when the
@@ -553,7 +571,7 b' class mergestate(object):'
         self._stateextras[fd] = {b'ancestorlinknode': hex(fca.node())}
         self._dirty = True

-    def addpath(self, path, frename, forigin):
+    def addpathconflict(self, path, frename, forigin):
         """add a new conflicting path to the merge state
         path: the path that conflicts
         frename: the filename the conflicting file was renamed to
@@ -606,7 +624,10 b' class mergestate(object):'
         return self._stateextras.setdefault(filename, {})

     def _resolve(self, preresolve, dfile, wctx):
-        """rerun merge process for file path `dfile`"""
+        """rerun merge process for file path `dfile`.
+        Returns whether the merge was completed and the return value of merge
+        obtained from filemerge._filemerge().
+        """
         if self[dfile] in (MERGE_RECORD_RESOLVED, MERGE_RECORD_DRIVER_RESOLVED):
             return True, 0
         if self._state[dfile][0] == MERGE_RECORD_MERGED_OTHER:
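
With the expanded docstring, ``_resolve()`` reports both whether the merge ran to completion and the file merge's return value, and ``preresolve()``/``resolve()`` hand that pair on to callers. A simplified, hypothetical caller loop built on it (assuming ``ms`` is a mergestate and ``wctx`` a working context, neither constructed here) could read::

    def resolve_unresolved(ms, files, wctx):
        """Sketch only: premerge each file, then finish the merge where needed."""
        unresolved = 0
        for f in files:
            complete, merge_ret = ms.preresolve(f, wctx)  # premerge step
            if not complete:
                merge_ret = ms.resolve(f, wctx)           # full file merge
            if merge_ret:
                unresolved += 1  # nonzero return value: conflicts remain
        return unresolved
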
@@ -620,8 +641,8 b' class mergestate(object):'
             actx = self._repo[anccommitnode]
         else:
             actx = None
-        fcd = self._filectxorabsent(localkey, wctx, dfile)
-        fco = self._filectxorabsent(onode, octx, ofile)
+        fcd = _filectxorabsent(localkey, wctx, dfile)
+        fco = _filectxorabsent(onode, octx, ofile)
         # TODO: move this to filectxorabsent
         fca = self._repo.filectx(afile, fileid=anode, changectx=actx)
         # "premerge" x flags
@@ -647,7 +668,7 b' class mergestate(object):'
                 f.close()
             else:
                 wctx[dfile].remove(ignoremissing=True)
-            complete, r, deleted = filemerge.premerge(
+            complete, merge_ret, deleted = filemerge.premerge(
                 self._repo,
                 wctx,
                 self._local,
@@ -658,7 +679,7 b' class mergestate(object):'
                 labels=self._labels,
             )
         else:
-            complete, r, deleted = filemerge.filemerge(
+            complete, merge_ret, deleted = filemerge.filemerge(
                 self._repo,
                 wctx,
                 self._local,
@@ -668,12 +689,12 b' class mergestate(object):'
                 fca,
                 labels=self._labels,
             )
-        if r is None:
-            # no real conflict
+        if merge_ret is None:
+            # If the return value of merge is None, then there is no real conflict
             del self._state[dfile]
             self._stateextras.pop(dfile, None)
             self._dirty = True
-        elif not r:
+        elif not merge_ret:
             self.mark(dfile, MERGE_RECORD_RESOLVED)

         if complete:
@@ -695,15 +716,9 b' class mergestate(object):'
             else:
                 action = ACTION_ADD
             # else: regular merges (no action necessary)
-        self._results[dfile] = r, action
+        self._results[dfile] = merge_ret, action

-        return complete, r
-
-    def _filectxorabsent(self, hexnode, ctx, f):
-        if hexnode == nullhex:
-            return filemerge.absentfilectx(ctx, f)
-        else:
-            return ctx[f]
+        return complete, merge_ret

     def preresolve(self, dfile, wctx):
         """run premerge process for dfile
@@ -749,11 +764,6 b' class mergestate(object):'
             actions[action].append((f, None, b"merge result"))
         return actions

-    def recordactions(self):
-        """record remove/add/get actions in the dirstate"""
-        branchmerge = self._repo.dirstate.p2() != nullid
-        recordupdates(self._repo, self.actions(), branchmerge, None)
-
     def queueremove(self, f):
         """queues a file to be removed from the dirstate

@@ -773,1370 +783,6 b' class mergestate(object):'
         self._results[f] = 0, ACTION_GET


776 | def _getcheckunknownconfig(repo, section, name): |
|
|||
777 | config = repo.ui.config(section, name) |
|
|||
778 | valid = [b'abort', b'ignore', b'warn'] |
|
|||
779 | if config not in valid: |
|
|||
780 | validstr = b', '.join([b"'" + v + b"'" for v in valid]) |
|
|||
781 | raise error.ConfigError( |
|
|||
782 | _(b"%s.%s not valid ('%s' is none of %s)") |
|
|||
783 | % (section, name, config, validstr) |
|
|||
784 | ) |
|
|||
785 | return config |
|
|||
786 |
|
||||
787 |
|
||||
788 | def _checkunknownfile(repo, wctx, mctx, f, f2=None): |
|
|||
789 | if wctx.isinmemory(): |
|
|||
790 | # Nothing to do in IMM because nothing in the "working copy" can be an |
|
|||
791 | # unknown file. |
|
|||
792 | # |
|
|||
793 | # Note that we should bail out here, not in ``_checkunknownfiles()``, |
|
|||
794 | # because that function does other useful work. |
|
|||
795 | return False |
|
|||
796 |
|
||||
797 | if f2 is None: |
|
|||
798 | f2 = f |
|
|||
799 | return ( |
|
|||
800 | repo.wvfs.audit.check(f) |
|
|||
801 | and repo.wvfs.isfileorlink(f) |
|
|||
802 | and repo.dirstate.normalize(f) not in repo.dirstate |
|
|||
803 | and mctx[f2].cmp(wctx[f]) |
|
|||
804 | ) |
|
|||
805 |
|
||||
806 |
|
||||
807 | class _unknowndirschecker(object): |
|
|||
808 | """ |
|
|||
809 | Look for any unknown files or directories that may have a path conflict |
|
|||
810 | with a file. If any path prefix of the file exists as a file or link, |
|
|||
811 | then it conflicts. If the file itself is a directory that contains any |
|
|||
812 | file that is not tracked, then it conflicts. |
|
|||
813 |
|
||||
814 | Returns the shortest path at which a conflict occurs, or None if there is |
|
|||
815 | no conflict. |
|
|||
816 | """ |
|
|||
817 |
|
||||
818 | def __init__(self): |
|
|||
819 | # A set of paths known to be good. This prevents repeated checking of |
|
|||
820 | # dirs. It will be updated with any new dirs that are checked and found |
|
|||
821 | # to be safe. |
|
|||
822 | self._unknowndircache = set() |
|
|||
823 |
|
||||
824 | # A set of paths that are known to be absent. This prevents repeated |
|
|||
825 | # checking of subdirectories that are known not to exist. It will be |
|
|||
826 | # updated with any new dirs that are checked and found to be absent. |
|
|||
827 | self._missingdircache = set() |
|
|||
828 |
|
||||
829 | def __call__(self, repo, wctx, f): |
|
|||
830 | if wctx.isinmemory(): |
|
|||
831 | # Nothing to do in IMM for the same reason as ``_checkunknownfile``. |
|
|||
832 | return False |
|
|||
833 |
|
||||
834 | # Check for path prefixes that exist as unknown files. |
|
|||
835 | for p in reversed(list(pathutil.finddirs(f))): |
|
|||
836 | if p in self._missingdircache: |
|
|||
837 | return |
|
|||
838 | if p in self._unknowndircache: |
|
|||
839 | continue |
|
|||
840 | if repo.wvfs.audit.check(p): |
|
|||
841 | if ( |
|
|||
842 | repo.wvfs.isfileorlink(p) |
|
|||
843 | and repo.dirstate.normalize(p) not in repo.dirstate |
|
|||
844 | ): |
|
|||
845 | return p |
|
|||
846 | if not repo.wvfs.lexists(p): |
|
|||
847 | self._missingdircache.add(p) |
|
|||
848 | return |
|
|||
849 | self._unknowndircache.add(p) |
|
|||
850 |
|
||||
851 | # Check if the file conflicts with a directory containing unknown files. |
|
|||
852 | if repo.wvfs.audit.check(f) and repo.wvfs.isdir(f): |
|
|||
853 | # Does the directory contain any files that are not in the dirstate? |
|
|||
854 | for p, dirs, files in repo.wvfs.walk(f): |
|
|||
855 | for fn in files: |
|
|||
856 | relf = util.pconvert(repo.wvfs.reljoin(p, fn)) |
|
|||
857 | relf = repo.dirstate.normalize(relf, isknown=True) |
|
|||
858 | if relf not in repo.dirstate: |
|
|||
859 | return f |
|
|||
860 | return None |
|
|||
861 |
|
||||
862 |
|
||||
863 | def _checkunknownfiles(repo, wctx, mctx, force, actions, mergeforce): |
|
|||
864 | """ |
|
|||
865 | Considers any actions that care about the presence of conflicting unknown |
|
|||
866 | files. For some actions, the result is to abort; for others, it is to |
|
|||
867 | choose a different action. |
|
|||
868 | """ |
|
|||
869 | fileconflicts = set() |
|
|||
870 | pathconflicts = set() |
|
|||
871 | warnconflicts = set() |
|
|||
872 | abortconflicts = set() |
|
|||
873 | unknownconfig = _getcheckunknownconfig(repo, b'merge', b'checkunknown') |
|
|||
874 | ignoredconfig = _getcheckunknownconfig(repo, b'merge', b'checkignored') |
|
|||
875 | pathconfig = repo.ui.configbool( |
|
|||
876 | b'experimental', b'merge.checkpathconflicts' |
|
|||
877 | ) |
|
|||
878 | if not force: |
|
|||
879 |
|
||||
880 | def collectconflicts(conflicts, config): |
|
|||
881 | if config == b'abort': |
|
|||
882 | abortconflicts.update(conflicts) |
|
|||
883 | elif config == b'warn': |
|
|||
884 | warnconflicts.update(conflicts) |
|
|||
885 |
|
||||
886 | checkunknowndirs = _unknowndirschecker() |
|
|||
887 | for f, (m, args, msg) in pycompat.iteritems(actions): |
|
|||
888 | if m in (ACTION_CREATED, ACTION_DELETED_CHANGED): |
|
|||
889 | if _checkunknownfile(repo, wctx, mctx, f): |
|
|||
890 | fileconflicts.add(f) |
|
|||
891 | elif pathconfig and f not in wctx: |
|
|||
892 | path = checkunknowndirs(repo, wctx, f) |
|
|||
893 | if path is not None: |
|
|||
894 | pathconflicts.add(path) |
|
|||
895 | elif m == ACTION_LOCAL_DIR_RENAME_GET: |
|
|||
896 | if _checkunknownfile(repo, wctx, mctx, f, args[0]): |
|
|||
897 | fileconflicts.add(f) |
|
|||
898 |
|
||||
899 | allconflicts = fileconflicts | pathconflicts |
|
|||
900 | ignoredconflicts = {c for c in allconflicts if repo.dirstate._ignore(c)} |
|
|||
901 | unknownconflicts = allconflicts - ignoredconflicts |
|
|||
902 | collectconflicts(ignoredconflicts, ignoredconfig) |
|
|||
903 | collectconflicts(unknownconflicts, unknownconfig) |
|
|||
904 | else: |
|
|||
905 | for f, (m, args, msg) in pycompat.iteritems(actions): |
|
|||
906 | if m == ACTION_CREATED_MERGE: |
|
|||
907 | fl2, anc = args |
|
|||
908 | different = _checkunknownfile(repo, wctx, mctx, f) |
|
|||
909 | if repo.dirstate._ignore(f): |
|
|||
910 | config = ignoredconfig |
|
|||
911 | else: |
|
|||
912 | config = unknownconfig |
|
|||
913 |
|
||||
914 | # The behavior when force is True is described by this table: |
|
|||
915 | # config different mergeforce | action backup |
|
|||
916 | # * n * | get n |
|
|||
917 | # * y y | merge - |
|
|||
918 | # abort y n | merge - (1) |
|
|||
919 | # warn y n | warn + get y |
|
|||
920 | # ignore y n | get y |
|
|||
921 | # |
|
|||
922 | # (1) this is probably the wrong behavior here -- we should |
|
|||
923 | # probably abort, but some actions like rebases currently |
|
|||
924 | # don't like an abort happening in the middle of |
|
|||
925 | # merge.update. |
|
|||
926 | if not different: |
|
|||
927 | actions[f] = (ACTION_GET, (fl2, False), b'remote created') |
|
|||
928 | elif mergeforce or config == b'abort': |
|
|||
929 | actions[f] = ( |
|
|||
930 | ACTION_MERGE, |
|
|||
931 | (f, f, None, False, anc), |
|
|||
932 | b'remote differs from untracked local', |
|
|||
933 | ) |
|
|||
934 | elif config == b'abort': |
|
|||
935 | abortconflicts.add(f) |
|
|||
936 | else: |
|
|||
937 | if config == b'warn': |
|
|||
938 | warnconflicts.add(f) |
|
|||
939 | actions[f] = (ACTION_GET, (fl2, True), b'remote created') |
|
|||
940 |
|
||||
941 | for f in sorted(abortconflicts): |
|
|||
942 | warn = repo.ui.warn |
|
|||
943 | if f in pathconflicts: |
|
|||
944 | if repo.wvfs.isfileorlink(f): |
|
|||
945 | warn(_(b"%s: untracked file conflicts with directory\n") % f) |
|
|||
946 | else: |
|
|||
947 | warn(_(b"%s: untracked directory conflicts with file\n") % f) |
|
|||
948 | else: |
|
|||
949 | warn(_(b"%s: untracked file differs\n") % f) |
|
|||
950 | if abortconflicts: |
|
|||
951 | raise error.Abort( |
|
|||
952 | _( |
|
|||
953 | b"untracked files in working directory " |
|
|||
954 | b"differ from files in requested revision" |
|
|||
955 | ) |
|
|||
956 | ) |
|
|||
957 |
|
||||
958 | for f in sorted(warnconflicts): |
|
|||
959 | if repo.wvfs.isfileorlink(f): |
|
|||
960 | repo.ui.warn(_(b"%s: replacing untracked file\n") % f) |
|
|||
961 | else: |
|
|||
962 | repo.ui.warn(_(b"%s: replacing untracked files in directory\n") % f) |
|
|||
963 |
|
||||
964 | for f, (m, args, msg) in pycompat.iteritems(actions): |
|
|||
965 | if m == ACTION_CREATED: |
|
|||
966 | backup = ( |
|
|||
967 | f in fileconflicts |
|
|||
968 | or f in pathconflicts |
|
|||
969 | or any(p in pathconflicts for p in pathutil.finddirs(f)) |
|
|||
970 | ) |
|
|||
971 | (flags,) = args |
|
|||
972 | actions[f] = (ACTION_GET, (flags, backup), msg) |
|
|||
973 |
|
||||
974 |
|
||||
975 | def _forgetremoved(wctx, mctx, branchmerge): |
|
|||
976 | """ |
|
|||
977 | Forget removed files |
|
|||
978 |
|
||||
979 | If we're jumping between revisions (as opposed to merging), and if |
|
|||
980 | neither the working directory nor the target rev has the file, |
|
|||
981 | then we need to remove it from the dirstate, to prevent the |
|
|||
982 | dirstate from listing the file when it is no longer in the |
|
|||
983 | manifest. |
|
|||
984 |
|
||||
985 | If we're merging, and the other revision has removed a file |
|
|||
986 | that is not present in the working directory, we need to mark it |
|
|||
987 | as removed. |
|
|||
988 | """ |
|
|||
989 |
|
||||
990 | actions = {} |
|
|||
991 | m = ACTION_FORGET |
|
|||
992 | if branchmerge: |
|
|||
993 | m = ACTION_REMOVE |
|
|||
994 | for f in wctx.deleted(): |
|
|||
995 | if f not in mctx: |
|
|||
996 | actions[f] = m, None, b"forget deleted" |
|
|||
997 |
|
||||
998 | if not branchmerge: |
|
|||
999 | for f in wctx.removed(): |
|
|||
1000 | if f not in mctx: |
|
|||
1001 | actions[f] = ACTION_FORGET, None, b"forget removed" |
|
|||
1002 |
|
||||
1003 | return actions |
|
|||
1004 |
|
||||
1005 |
|
||||
1006 | def _checkcollision(repo, wmf, actions): |
|
|||
1007 | """ |
|
|||
1008 | Check for case-folding collisions. |
|
|||
1009 | """ |
|
|||
1010 | # If the repo is narrowed, filter out files outside the narrowspec. |
|
|||
1011 | narrowmatch = repo.narrowmatch() |
|
|||
1012 | if not narrowmatch.always(): |
|
|||
1013 | pmmf = set(wmf.walk(narrowmatch)) |
|
|||
1014 | if actions: |
|
|||
1015 | narrowactions = {} |
|
|||
1016 | for m, actionsfortype in pycompat.iteritems(actions): |
|
|||
1017 | narrowactions[m] = [] |
|
|||
1018 | for (f, args, msg) in actionsfortype: |
|
|||
1019 | if narrowmatch(f): |
|
|||
1020 | narrowactions[m].append((f, args, msg)) |
|
|||
1021 | actions = narrowactions |
|
|||
1022 | else: |
|
|||
1023 | # build provisional merged manifest up |
|
|||
1024 | pmmf = set(wmf) |
|
|||
1025 |
|
||||
1026 | if actions: |
|
|||
1027 | # KEEP and EXEC are no-op |
|
|||
1028 | for m in ( |
|
|||
1029 | ACTION_ADD, |
|
|||
1030 | ACTION_ADD_MODIFIED, |
|
|||
1031 | ACTION_FORGET, |
|
|||
1032 | ACTION_GET, |
|
|||
1033 | ACTION_CHANGED_DELETED, |
|
|||
1034 | ACTION_DELETED_CHANGED, |
|
|||
1035 | ): |
|
|||
1036 | for f, args, msg in actions[m]: |
|
|||
1037 | pmmf.add(f) |
|
|||
1038 | for f, args, msg in actions[ACTION_REMOVE]: |
|
|||
1039 | pmmf.discard(f) |
|
|||
1040 | for f, args, msg in actions[ACTION_DIR_RENAME_MOVE_LOCAL]: |
|
|||
1041 | f2, flags = args |
|
|||
1042 | pmmf.discard(f2) |
|
|||
1043 | pmmf.add(f) |
|
|||
1044 | for f, args, msg in actions[ACTION_LOCAL_DIR_RENAME_GET]: |
|
|||
1045 | pmmf.add(f) |
|
|||
1046 | for f, args, msg in actions[ACTION_MERGE]: |
|
|||
1047 | f1, f2, fa, move, anc = args |
|
|||
1048 | if move: |
|
|||
1049 | pmmf.discard(f1) |
|
|||
1050 | pmmf.add(f) |
|
|||
1051 |
|
||||
1052 | # check case-folding collision in provisional merged manifest |
|
|||
1053 | foldmap = {} |
|
|||
1054 | for f in pmmf: |
|
|||
1055 | fold = util.normcase(f) |
|
|||
1056 | if fold in foldmap: |
|
|||
1057 | raise error.Abort( |
|
|||
1058 | _(b"case-folding collision between %s and %s") |
|
|||
1059 | % (f, foldmap[fold]) |
|
|||
1060 | ) |
|
|||
1061 | foldmap[fold] = f |
|
|||
1062 |
|
||||
1063 | # check case-folding of directories |
|
|||
1064 | foldprefix = unfoldprefix = lastfull = b'' |
|
|||
1065 | for fold, f in sorted(foldmap.items()): |
|
|||
1066 | if fold.startswith(foldprefix) and not f.startswith(unfoldprefix): |
|
|||
1067 | # the folded prefix matches but actual casing is different |
|
|||
1068 | raise error.Abort( |
|
|||
1069 | _(b"case-folding collision between %s and directory of %s") |
|
|||
1070 | % (lastfull, f) |
|
|||
1071 | ) |
|
|||
1072 | foldprefix = fold + b'/' |
|
|||
1073 | unfoldprefix = f + b'/' |
|
|||
1074 | lastfull = f |
|
|||
1075 |
|
||||
1076 |
|
||||
1077 | def driverpreprocess(repo, ms, wctx, labels=None): |
|
|||
1078 | """run the preprocess step of the merge driver, if any |
|
|||
1079 |
|
||||
1080 | This is currently not implemented -- it's an extension point.""" |
|
|||
1081 | return True |
|
|||
1082 |
|
||||
1083 |
|
||||
1084 | def driverconclude(repo, ms, wctx, labels=None): |
|
|||
1085 | """run the conclude step of the merge driver, if any |
|
|||
1086 |
|
||||
1087 | This is currently not implemented -- it's an extension point.""" |
|
|||
1088 | return True |
|
|||
1089 |
|
||||
1090 |
|
||||
1091 | def _filesindirs(repo, manifest, dirs): |
|
|||
1092 | """ |
|
|||
1093 | Generator that yields pairs of all the files in the manifest that are found |
|
|||
1094 | inside the directories listed in dirs, and which directory they are found |
|
|||
1095 | in. |
|
|||
1096 | """ |
|
|||
1097 | for f in manifest: |
|
|||
1098 | for p in pathutil.finddirs(f): |
|
|||
1099 | if p in dirs: |
|
|||
1100 | yield f, p |
|
|||
1101 | break |
|
|||
1102 |
|
||||
1103 |
|
||||
1104 | def checkpathconflicts(repo, wctx, mctx, actions): |
|
|||
1105 | """ |
|
|||
1106 | Check if any actions introduce path conflicts in the repository, updating |
|
|||
1107 | actions to record or handle the path conflict accordingly. |
|
|||
1108 | """ |
|
|||
1109 | mf = wctx.manifest() |
|
|||
1110 |
|
||||
1111 | # The set of local files that conflict with a remote directory. |
|
|||
1112 | localconflicts = set() |
|
|||
1113 |
|
||||
1114 | # The set of directories that conflict with a remote file, and so may cause |
|
|||
1115 | # conflicts if they still contain any files after the merge. |
|
|||
1116 | remoteconflicts = set() |
|
|||
1117 |
|
||||
1118 | # The set of directories that appear as both a file and a directory in the |
|
|||
1119 | # remote manifest. These indicate an invalid remote manifest, which |
|
|||
1120 | # can't be updated to cleanly. |
|
|||
1121 | invalidconflicts = set() |
|
|||
1122 |
|
||||
1123 | # The set of directories that contain files that are being created. |
|
|||
1124 | createdfiledirs = set() |
|
|||
1125 |
|
||||
1126 | # The set of files deleted by all the actions. |
|
|||
1127 | deletedfiles = set() |
|
|||
1128 |
|
||||
1129 | for f, (m, args, msg) in actions.items(): |
|
|||
1130 | if m in ( |
|
|||
1131 | ACTION_CREATED, |
|
|||
1132 | ACTION_DELETED_CHANGED, |
|
|||
1133 | ACTION_MERGE, |
|
|||
1134 | ACTION_CREATED_MERGE, |
|
|||
1135 | ): |
|
|||
1136 | # This action may create a new local file. |
|
|||
1137 | createdfiledirs.update(pathutil.finddirs(f)) |
|
|||
1138 | if mf.hasdir(f): |
|
|||
1139 | # The file aliases a local directory. This might be ok if all |
|
|||
1140 | # the files in the local directory are being deleted. This |
|
|||
1141 | # will be checked once we know what all the deleted files are. |
|
|||
1142 | remoteconflicts.add(f) |
|
|||
1143 | # Track the names of all deleted files. |
|
|||
1144 | if m == ACTION_REMOVE: |
|
|||
1145 | deletedfiles.add(f) |
|
|||
1146 | if m == ACTION_MERGE: |
|
|||
1147 | f1, f2, fa, move, anc = args |
|
|||
1148 | if move: |
|
|||
1149 | deletedfiles.add(f1) |
|
|||
1150 | if m == ACTION_DIR_RENAME_MOVE_LOCAL: |
|
|||
1151 | f2, flags = args |
|
|||
1152 | deletedfiles.add(f2) |
|
|||
1153 |
|
||||
1154 | # Check all directories that contain created files for path conflicts. |
|
|||
1155 | for p in createdfiledirs: |
|
|||
1156 | if p in mf: |
|
|||
1157 | if p in mctx: |
|
|||
1158 | # A file is in a directory which aliases both a local |
|
|||
1159 | # and a remote file. This is an internal inconsistency |
|
|||
1160 | # within the remote manifest. |
|
|||
1161 | invalidconflicts.add(p) |
|
|||
1162 | else: |
|
|||
1163 | # A file is in a directory which aliases a local file. |
|
|||
1164 | # We will need to rename the local file. |
|
|||
1165 | localconflicts.add(p) |
|
|||
1166 | if p in actions and actions[p][0] in ( |
|
|||
1167 | ACTION_CREATED, |
|
|||
1168 | ACTION_DELETED_CHANGED, |
|
|||
1169 | ACTION_MERGE, |
|
|||
1170 | ACTION_CREATED_MERGE, |
|
|||
1171 | ): |
|
|||
1172 | # The file is in a directory which aliases a remote file. |
|
|||
1173 | # This is an internal inconsistency within the remote |
|
|||
1174 | # manifest. |
|
|||
1175 | invalidconflicts.add(p) |
|
|||
1176 |
|
||||
1177 | # Rename all local conflicting files that have not been deleted. |
|
|||
1178 | for p in localconflicts: |
|
|||
1179 | if p not in deletedfiles: |
|
|||
1180 | ctxname = bytes(wctx).rstrip(b'+') |
|
|||
1181 | pnew = util.safename(p, ctxname, wctx, set(actions.keys())) |
|
|||
1182 | actions[pnew] = ( |
|
|||
1183 | ACTION_PATH_CONFLICT_RESOLVE, |
|
|||
1184 | (p,), |
|
|||
1185 | b'local path conflict', |
|
|||
1186 | ) |
|
|||
1187 | actions[p] = (ACTION_PATH_CONFLICT, (pnew, b'l'), b'path conflict') |
|
|||
1188 |
|
||||
1189 | if remoteconflicts: |
|
|||
1190 | # Check if all files in the conflicting directories have been removed. |
|
|||
1191 | ctxname = bytes(mctx).rstrip(b'+') |
|
|||
1192 | for f, p in _filesindirs(repo, mf, remoteconflicts): |
|
|||
1193 | if f not in deletedfiles: |
|
|||
1194 | m, args, msg = actions[p] |
|
|||
1195 | pnew = util.safename(p, ctxname, wctx, set(actions.keys())) |
|
|||
1196 | if m in (ACTION_DELETED_CHANGED, ACTION_MERGE): |
|
|||
1197 | # Action was merge, just update target. |
|
|||
1198 | actions[pnew] = (m, args, msg) |
|
|||
1199 | else: |
|
|||
1200 | # Action was create, change to renamed get action. |
|
|||
1201 | fl = args[0] |
|
|||
1202 | actions[pnew] = ( |
|
|||
1203 | ACTION_LOCAL_DIR_RENAME_GET, |
|
|||
1204 | (p, fl), |
|
|||
1205 | b'remote path conflict', |
|
|||
1206 | ) |
|
|||
1207 | actions[p] = ( |
|
|||
1208 | ACTION_PATH_CONFLICT, |
|
|||
1209 | (pnew, ACTION_REMOVE), |
|
|||
1210 | b'path conflict', |
|
|||
1211 | ) |
|
|||
1212 | remoteconflicts.remove(p) |
|
|||
1213 | break |
|
|||
1214 |
|
||||
1215 | if invalidconflicts: |
|
|||
1216 | for p in invalidconflicts: |
|
|||
1217 | repo.ui.warn(_(b"%s: is both a file and a directory\n") % p) |
|
|||
1218 | raise error.Abort(_(b"destination manifest contains path conflicts")) |
|
|||
1219 |
|
||||
1220 |
|
||||
1221 | def _filternarrowactions(narrowmatch, branchmerge, actions): |
|
|||
1222 | """ |
|
|||
1223 | Filters out actions that can be ignored because the repo is narrowed. |
|
|||
1224 |
|
||||
1225 | Raise an exception if the merge cannot be completed because the repo is |
|
|||
1226 | narrowed. |
|
|||
1227 | """ |
|
|||
1228 | nooptypes = {b'k'} # TODO: handle with nonconflicttypes |
|
|||
1229 | nonconflicttypes = set(b'a am c cm f g gs r e'.split()) |
|
|||
1230 | # We mutate the items in the dict during iteration, so iterate |
|
|||
1231 | # over a copy. |
|
|||
1232 | for f, action in list(actions.items()): |
|
|||
1233 | if narrowmatch(f): |
|
|||
1234 | pass |
|
|||
1235 | elif not branchmerge: |
|
|||
1236 | del actions[f] # just updating, ignore changes outside clone |
|
|||
1237 | elif action[0] in nooptypes: |
|
|||
1238 | del actions[f] # merge does not affect file |
|
|||
1239 | elif action[0] in nonconflicttypes: |
|
|||
1240 | raise error.Abort( |
|
|||
1241 | _( |
|
|||
1242 | b'merge affects file \'%s\' outside narrow, ' |
|
|||
1243 | b'which is not yet supported' |
|
|||
1244 | ) |
|
|||
1245 | % f, |
|
|||
1246 | hint=_(b'merging in the other direction may work'), |
|
|||
1247 | ) |
|
|||
1248 | else: |
|
|||
1249 | raise error.Abort( |
|
|||
1250 | _(b'conflict in file \'%s\' is outside narrow clone') % f |
|
|||
1251 | ) |
|
|||
1252 |
|
||||
1253 |
|
||||
1254 | def manifestmerge( |
|
|||
1255 | repo, |
|
|||
1256 | wctx, |
|
|||
1257 | p2, |
|
|||
1258 | pa, |
|
|||
1259 | branchmerge, |
|
|||
1260 | force, |
|
|||
1261 | matcher, |
|
|||
1262 | acceptremote, |
|
|||
1263 | followcopies, |
|
|||
1264 | forcefulldiff=False, |
|
|||
1265 | ): |
|
|||
1266 | """ |
|
|||
1267 | Merge wctx and p2 with ancestor pa and generate merge action list |
|
|||
1268 |
|
||||
1269 | branchmerge and force are as passed in to update |
|
|||
1270 | matcher = matcher to filter file lists |
|
|||
1271 | acceptremote = accept the incoming changes without prompting |
|
|||
1272 | """ |
|
|||
1273 | if matcher is not None and matcher.always(): |
|
|||
1274 | matcher = None |
|
|||
1275 |
|
||||
1276 | # manifests fetched in order are going to be faster, so prime the caches |
|
|||
1277 | [ |
|
|||
1278 | x.manifest() |
|
|||
1279 | for x in sorted(wctx.parents() + [p2, pa], key=scmutil.intrev) |
|
|||
1280 | ] |
|
|||
1281 |
|
||||
1282 | branch_copies1 = copies.branch_copies() |
|
|||
1283 | branch_copies2 = copies.branch_copies() |
|
|||
1284 | diverge = {} |
|
|||
1285 | if followcopies: |
|
|||
1286 | branch_copies1, branch_copies2, diverge = copies.mergecopies( |
|
|||
1287 | repo, wctx, p2, pa |
|
|||
1288 | ) |
|
|||
1289 |
|
||||
1290 | boolbm = pycompat.bytestr(bool(branchmerge)) |
|
|||
1291 | boolf = pycompat.bytestr(bool(force)) |
|
|||
1292 | boolm = pycompat.bytestr(bool(matcher)) |
|
|||
1293 | repo.ui.note(_(b"resolving manifests\n")) |
|
|||
1294 | repo.ui.debug( |
|
|||
1295 | b" branchmerge: %s, force: %s, partial: %s\n" % (boolbm, boolf, boolm) |
|
|||
1296 | ) |
|
|||
1297 | repo.ui.debug(b" ancestor: %s, local: %s, remote: %s\n" % (pa, wctx, p2)) |
|
|||
1298 |
|
||||
1299 | m1, m2, ma = wctx.manifest(), p2.manifest(), pa.manifest() |
|
|||
1300 | copied1 = set(branch_copies1.copy.values()) |
|
|||
1301 | copied1.update(branch_copies1.movewithdir.values()) |
|
|||
1302 | copied2 = set(branch_copies2.copy.values()) |
|
|||
1303 | copied2.update(branch_copies2.movewithdir.values()) |
|
|||
1304 |
|
||||
1305 | if b'.hgsubstate' in m1 and wctx.rev() is None: |
|
|||
1306 | # Check whether sub state is modified, and overwrite the manifest |
|
|||
1307 | # to flag the change. If wctx is a committed revision, we shouldn't |
|
|||
1308 | # care for the dirty state of the working directory. |
|
|||
1309 | if any(wctx.sub(s).dirty() for s in wctx.substate): |
|
|||
1310 | m1[b'.hgsubstate'] = modifiednodeid |
|
|||
1311 |
|
||||
1312 | # Don't use m2-vs-ma optimization if: |
|
|||
1313 | # - ma is the same as m1 or m2, which we're just going to diff again later |
|
|||
1314 | # - The caller specifically asks for a full diff, which is useful during bid |
|
|||
1315 | # merge. |
|
|||
1316 | if pa not in ([wctx, p2] + wctx.parents()) and not forcefulldiff: |
|
|||
1317 | # Identify which files are relevant to the merge, so we can limit the |
|
|||
1318 | # total m1-vs-m2 diff to just those files. This has significant |
|
|||
1319 | # performance benefits in large repositories. |
|
|||
1320 | relevantfiles = set(ma.diff(m2).keys()) |
|
|||
1321 |
|
||||
1322 | # For copied and moved files, we need to add the source file too. |
|
|||
1323 | for copykey, copyvalue in pycompat.iteritems(branch_copies1.copy): |
|
|||
1324 | if copyvalue in relevantfiles: |
|
|||
1325 | relevantfiles.add(copykey) |
|
|||
1326 | for movedirkey in branch_copies1.movewithdir: |
|
|||
1327 | relevantfiles.add(movedirkey) |
|
|||
1328 | filesmatcher = scmutil.matchfiles(repo, relevantfiles) |
|
|||
1329 | matcher = matchmod.intersectmatchers(matcher, filesmatcher) |
|
|||
1330 |
|
||||
1331 | diff = m1.diff(m2, match=matcher) |
|
|||
1332 |
|
||||
1333 | actions = {} |
|
|||
1334 | for f, ((n1, fl1), (n2, fl2)) in pycompat.iteritems(diff): |
|
|||
1335 | if n1 and n2: # file exists on both local and remote side |
|
|||
1336 | if f not in ma: |
|
|||
1337 | # TODO: what if they're renamed from different sources? |
|
|||
1338 | fa = branch_copies1.copy.get( |
|
|||
1339 | f, None |
|
|||
1340 | ) or branch_copies2.copy.get(f, None) |
|
|||
1341 | if fa is not None: |
|
|||
1342 | actions[f] = ( |
|
|||
1343 | ACTION_MERGE, |
|
|||
1344 | (f, f, fa, False, pa.node()), |
|
|||
1345 | b'both renamed from %s' % fa, |
|
|||
1346 | ) |
|
|||
1347 | else: |
|
|||
1348 | actions[f] = ( |
|
|||
1349 | ACTION_MERGE, |
|
|||
1350 | (f, f, None, False, pa.node()), |
|
|||
1351 | b'both created', |
|
|||
1352 | ) |
|
|||
1353 | else: |
|
|||
1354 | a = ma[f] |
|
|||
1355 | fla = ma.flags(f) |
|
|||
1356 | nol = b'l' not in fl1 + fl2 + fla |
|
|||
1357 | if n2 == a and fl2 == fla: |
|
|||
1358 | actions[f] = (ACTION_KEEP, (), b'remote unchanged') |
|
|||
1359 | elif n1 == a and fl1 == fla: # local unchanged - use remote |
|
|||
1360 | if n1 == n2: # optimization: keep local content |
|
|||
1361 | actions[f] = ( |
|
|||
1362 | ACTION_EXEC, |
|
|||
1363 | (fl2,), |
|
|||
1364 | b'update permissions', |
|
|||
1365 | ) |
|
|||
1366 | else: |
|
|||
1367 | actions[f] = ( |
|
|||
1368 | ACTION_GET_OTHER_AND_STORE |
|
|||
1369 | if branchmerge |
|
|||
1370 | else ACTION_GET, |
|
|||
1371 | (fl2, False), |
|
|||
1372 | b'remote is newer', |
|
|||
1373 | ) |
|
|||
1374 | elif nol and n2 == a: # remote only changed 'x' |
|
|||
1375 | actions[f] = (ACTION_EXEC, (fl2,), b'update permissions') |
|
|||
1376 | elif nol and n1 == a: # local only changed 'x' |
|
|||
1377 | actions[f] = ( |
|
|||
1378 | ACTION_GET_OTHER_AND_STORE |
|
|||
1379 | if branchmerge |
|
|||
1380 | else ACTION_GET, |
|
|||
1381 | (fl1, False), |
|
|||
1382 | b'remote is newer', |
|
|||
1383 | ) |
|
|||
1384 | else: # both changed something |
|
|||
1385 | actions[f] = ( |
|
|||
1386 | ACTION_MERGE, |
|
|||
1387 | (f, f, f, False, pa.node()), |
|
|||
1388 | b'versions differ', |
|
|||
1389 | ) |
|
|||
1390 | elif n1: # file exists only on local side |
|
|||
1391 | if f in copied2: |
|
|||
1392 | pass # we'll deal with it on m2 side |
|
|||
1393 | elif ( |
|
|||
1394 | f in branch_copies1.movewithdir |
|
|||
1395 | ): # directory rename, move local |
|
|||
1396 | f2 = branch_copies1.movewithdir[f] |
|
|||
1397 | if f2 in m2: |
|
|||
1398 | actions[f2] = ( |
|
|||
1399 | ACTION_MERGE, |
|
|||
1400 | (f, f2, None, True, pa.node()), |
|
|||
1401 | b'remote directory rename, both created', |
|
|||
1402 | ) |
|
|||
1403 | else: |
|
|||
1404 | actions[f2] = ( |
|
|||
1405 | ACTION_DIR_RENAME_MOVE_LOCAL, |
|
|||
1406 | (f, fl1), |
|
|||
1407 | b'remote directory rename - move from %s' % f, |
|
|||
1408 | ) |
|
|||
1409 | elif f in branch_copies1.copy: |
|
|||
1410 | f2 = branch_copies1.copy[f] |
|
|||
1411 | actions[f] = ( |
|
|||
1412 | ACTION_MERGE, |
|
|||
1413 | (f, f2, f2, False, pa.node()), |
|
|||
1414 | b'local copied/moved from %s' % f2, |
|
|||
1415 | ) |
|
|||
1416 | elif f in ma: # clean, a different, no remote |
|
|||
1417 | if n1 != ma[f]: |
|
|||
1418 | if acceptremote: |
|
|||
1419 | actions[f] = (ACTION_REMOVE, None, b'remote delete') |
|
|||
1420 | else: |
|
|||
1421 | actions[f] = ( |
|
|||
1422 | ACTION_CHANGED_DELETED, |
|
|||
1423 | (f, None, f, False, pa.node()), |
|
|||
1424 | b'prompt changed/deleted', |
|
|||
1425 | ) |
|
|||
1426 | elif n1 == addednodeid: |
|
|||
1427 | # This extra 'a' is added by working copy manifest to mark |
|
|||
1428 | # the file as locally added. We should forget it instead of |
|
|||
1429 | # deleting it. |
|
|||
1430 | actions[f] = (ACTION_FORGET, None, b'remote deleted') |
|
|||
1431 | else: |
|
|||
1432 | actions[f] = (ACTION_REMOVE, None, b'other deleted') |
|
|||
1433 | elif n2: # file exists only on remote side |
|
|||
1434 | if f in copied1: |
|
|||
1435 | pass # we'll deal with it on m1 side |
|
|||
1436 | elif f in branch_copies2.movewithdir: |
|
|||
1437 | f2 = branch_copies2.movewithdir[f] |
|
|||
1438 | if f2 in m1: |
|
|||
1439 | actions[f2] = ( |
|
|||
1440 | ACTION_MERGE, |
|
|||
1441 | (f2, f, None, False, pa.node()), |
|
|||
1442 | b'local directory rename, both created', |
|
|||
1443 | ) |
|
|||
1444 | else: |
|
|||
1445 | actions[f2] = ( |
|
|||
1446 | ACTION_LOCAL_DIR_RENAME_GET, |
|
|||
1447 | (f, fl2), |
|
|||
1448 | b'local directory rename - get from %s' % f, |
|
|||
1449 | ) |
|
|||
1450 | elif f in branch_copies2.copy: |
|
|||
1451 | f2 = branch_copies2.copy[f] |
|
|||
1452 | if f2 in m2: |
|
|||
1453 | actions[f] = ( |
|
|||
1454 | ACTION_MERGE, |
|
|||
1455 | (f2, f, f2, False, pa.node()), |
|
|||
1456 | b'remote copied from %s' % f2, |
|
|||
1457 | ) |
|
|||
1458 | else: |
|
|||
1459 | actions[f] = ( |
|
|||
1460 | ACTION_MERGE, |
|
|||
1461 | (f2, f, f2, True, pa.node()), |
|
|||
1462 | b'remote moved from %s' % f2, |
|
|||
1463 | ) |
|
|||
1464 | elif f not in ma: |
|
|||
1465 | # local unknown, remote created: the logic is described by the |
|
|||
1466 | # following table: |
|
|||
1467 | # |
|
|||
1468 | # force branchmerge different | action |
|
|||
1469 | # n * * | create |
|
|||
1470 | # y n * | create |
|
|||
1471 | # y y n | create |
|
|||
1472 | # y y y | merge |
|
|||
1473 | # |
|
|||
1474 | # Checking whether the files are different is expensive, so we |
|
|||
1475 | # don't do that when we can avoid it. |
|
|||
1476 | if not force: |
|
|||
1477 | actions[f] = (ACTION_CREATED, (fl2,), b'remote created') |
|
|||
1478 | elif not branchmerge: |
|
|||
1479 | actions[f] = (ACTION_CREATED, (fl2,), b'remote created') |
|
|||
1480 | else: |
|
|||
1481 | actions[f] = ( |
|
|||
1482 | ACTION_CREATED_MERGE, |
|
|||
1483 | (fl2, pa.node()), |
|
|||
1484 | b'remote created, get or merge', |
|
|||
1485 | ) |
|
|||
1486 | elif n2 != ma[f]: |
|
|||
1487 | df = None |
|
|||
1488 | for d in branch_copies1.dirmove: |
|
|||
1489 | if f.startswith(d): |
|
|||
1490 | # new file added in a directory that was moved |
|
|||
1491 | df = branch_copies1.dirmove[d] + f[len(d) :] |
|
|||
1492 | break |
|
|||
1493 | if df is not None and df in m1: |
|
|||
1494 | actions[df] = ( |
|
|||
1495 | ACTION_MERGE, |
|
|||
1496 | (df, f, f, False, pa.node()), |
|
|||
1497 | b'local directory rename - respect move ' |
|
|||
1498 | b'from %s' % f, |
|
|||
1499 | ) |
|
|||
1500 | elif acceptremote: |
|
|||
1501 | actions[f] = (ACTION_CREATED, (fl2,), b'remote recreating') |
|
|||
1502 | else: |
|
|||
1503 | actions[f] = ( |
|
|||
1504 | ACTION_DELETED_CHANGED, |
|
|||
1505 | (None, f, f, False, pa.node()), |
|
|||
1506 | b'prompt deleted/changed', |
|
|||
1507 | ) |
|
|||
1508 |
|
||||
1509 | if repo.ui.configbool(b'experimental', b'merge.checkpathconflicts'): |
|
|||
1510 | # If we are merging, look for path conflicts. |
|
|||
1511 | checkpathconflicts(repo, wctx, p2, actions) |
|
|||
1512 |
|
||||
1513 | narrowmatch = repo.narrowmatch() |
|
|||
1514 | if not narrowmatch.always(): |
|
|||
1515 | # Updates "actions" in place |
|
|||
1516 | _filternarrowactions(narrowmatch, branchmerge, actions) |
|
|||
1517 |
|
||||
1518 | renamedelete = branch_copies1.renamedelete |
|
|||
1519 | renamedelete.update(branch_copies2.renamedelete) |
|
|||
1520 |
|
||||
1521 | return actions, diverge, renamedelete |
|
|||
1522 |
|
||||
1523 |
|
||||
1524 | def _resolvetrivial(repo, wctx, mctx, ancestor, actions): |
|
|||
1525 | """Resolves false conflicts where the nodeid changed but the content |
|
|||
1526 | remained the same.""" |
|
|||
1527 | # We force a copy of actions.items() because we're going to mutate |
|
|||
1528 | # actions as we resolve trivial conflicts. |
|
|||
1529 | for f, (m, args, msg) in list(actions.items()): |
|
|||
1530 | if ( |
|
|||
1531 | m == ACTION_CHANGED_DELETED |
|
|||
1532 | and f in ancestor |
|
|||
1533 | and not wctx[f].cmp(ancestor[f]) |
|
|||
1534 | ): |
|
|||
1535 | # local did change but ended up with same content |
|
|||
1536 | actions[f] = ACTION_REMOVE, None, b'prompt same' |
|
|||
1537 | elif ( |
|
|||
1538 | m == ACTION_DELETED_CHANGED |
|
|||
1539 | and f in ancestor |
|
|||
1540 | and not mctx[f].cmp(ancestor[f]) |
|
|||
1541 | ): |
|
|||
1542 | # remote did change but ended up with same content |
|
|||
1543 | del actions[f] # don't get = keep local deleted |
|
|||
1544 |
|
||||
1545 |
|
||||
1546 | def calculateupdates( |
|
|||
1547 | repo, |
|
|||
1548 | wctx, |
|
|||
1549 | mctx, |
|
|||
1550 | ancestors, |
|
|||
1551 | branchmerge, |
|
|||
1552 | force, |
|
|||
1553 | acceptremote, |
|
|||
1554 | followcopies, |
|
|||
1555 | matcher=None, |
|
|||
1556 | mergeforce=False, |
|
|||
1557 | ): |
|
|||
1558 | """Calculate the actions needed to merge mctx into wctx using ancestors""" |
|
|||
1559 | # Avoid cycle. |
|
|||
1560 | from . import sparse |
|
|||
1561 |
|
||||
1562 | if len(ancestors) == 1: # default |
|
|||
1563 | actions, diverge, renamedelete = manifestmerge( |
|
|||
1564 | repo, |
|
|||
1565 | wctx, |
|
|||
1566 | mctx, |
|
|||
1567 | ancestors[0], |
|
|||
1568 | branchmerge, |
|
|||
1569 | force, |
|
|||
1570 | matcher, |
|
|||
1571 | acceptremote, |
|
|||
1572 | followcopies, |
|
|||
1573 | ) |
|
|||
1574 | _checkunknownfiles(repo, wctx, mctx, force, actions, mergeforce) |
|
|||
1575 |
|
||||
1576 | else: # only when merge.preferancestor=* - the default |
|
|||
1577 | repo.ui.note( |
|
|||
1578 | _(b"note: merging %s and %s using bids from ancestors %s\n") |
|
|||
1579 | % ( |
|
|||
1580 | wctx, |
|
|||
1581 | mctx, |
|
|||
1582 | _(b' and ').join(pycompat.bytestr(anc) for anc in ancestors), |
|
|||
1583 | ) |
|
|||
1584 | ) |
|
|||
1585 |
|
||||
1586 | # Call for bids |
|
|||
1587 | fbids = ( |
|
|||
1588 | {} |
|
|||
1589 | ) # mapping filename to bids (action method to list of actions) |
|
|||
1590 | diverge, renamedelete = None, None |
|
|||
1591 | for ancestor in ancestors: |
|
|||
1592 | repo.ui.note(_(b'\ncalculating bids for ancestor %s\n') % ancestor) |
|
|||
1593 | actions, diverge1, renamedelete1 = manifestmerge( |
|
|||
1594 | repo, |
|
|||
1595 | wctx, |
|
|||
1596 | mctx, |
|
|||
1597 | ancestor, |
|
|||
1598 | branchmerge, |
|
|||
1599 | force, |
|
|||
1600 | matcher, |
|
|||
1601 | acceptremote, |
|
|||
1602 | followcopies, |
|
|||
1603 | forcefulldiff=True, |
|
|||
1604 | ) |
|
|||
1605 | _checkunknownfiles(repo, wctx, mctx, force, actions, mergeforce) |
|
|||
1606 |
|
||||
1607 | # Track the shortest set of warnings on the theory that bid |
|
|||
1608 | # merge will correctly incorporate more information |
|
|||
1609 | if diverge is None or len(diverge1) < len(diverge): |
|
|||
1610 | diverge = diverge1 |
|
|||
1611 | if renamedelete is None or len(renamedelete) < len(renamedelete1): |
|
|||
1612 | renamedelete = renamedelete1 |
|
|||
1613 |
|
||||
1614 | for f, a in sorted(pycompat.iteritems(actions)): |
|
|||
1615 | m, args, msg = a |
|
|||
1616 | if m == ACTION_GET_OTHER_AND_STORE: |
|
|||
1617 | m = ACTION_GET |
|
|||
1618 | repo.ui.debug(b' %s: %s -> %s\n' % (f, msg, m)) |
|
|||
1619 | if f in fbids: |
|
|||
1620 | d = fbids[f] |
|
|||
1621 | if m in d: |
|
|||
1622 | d[m].append(a) |
|
|||
1623 | else: |
|
|||
1624 | d[m] = [a] |
|
|||
1625 | else: |
|
|||
1626 | fbids[f] = {m: [a]} |
|
|||
1627 |
|
||||
1628 | # Pick the best bid for each file |
|
|||
1629 | repo.ui.note(_(b'\nauction for merging merge bids\n')) |
|
|||
1630 | actions = {} |
|
|||
1631 | for f, bids in sorted(fbids.items()): |
|
|||
1632 | # bids is a mapping from action method to list of actions |
|
|||
1633 | # Consensus? |
|
|||
1634 | if len(bids) == 1: # all bids are the same kind of method |
|
|||
1635 | m, l = list(bids.items())[0] |
|
|||
1636 | if all(a == l[0] for a in l[1:]): # len(bids) is > 1 |
|
|||
1637 | repo.ui.note(_(b" %s: consensus for %s\n") % (f, m)) |
|
|||
1638 | actions[f] = l[0] |
|
|||
1639 | continue |
|
|||
1640 | # If keep is an option, just do it. |
|
|||
1641 | if ACTION_KEEP in bids: |
|
|||
1642 | repo.ui.note(_(b" %s: picking 'keep' action\n") % f) |
|
|||
1643 | actions[f] = bids[ACTION_KEEP][0] |
|
|||
1644 | continue |
|
|||
1645 | # If there are gets and they all agree [how could they not?], do it. |
|
|||
1646 | if ACTION_GET in bids: |
|
|||
1647 | ga0 = bids[ACTION_GET][0] |
|
|||
1648 | if all(a == ga0 for a in bids[ACTION_GET][1:]): |
|
|||
1649 | repo.ui.note(_(b" %s: picking 'get' action\n") % f) |
|
|||
1650 | actions[f] = ga0 |
|
|||
1651 | continue |
|
|||
1652 | # TODO: Consider other simple actions such as mode changes |
|
|||
1653 | # Handle inefficient democrazy. |
|
|||
1654 | repo.ui.note(_(b' %s: multiple bids for merge action:\n') % f) |
|
|||
1655 | for m, l in sorted(bids.items()): |
|
|||
1656 | for _f, args, msg in l: |
|
|||
1657 | repo.ui.note(b' %s -> %s\n' % (msg, m)) |
|
|||
1658 | # Pick random action. TODO: Instead, prompt user when resolving |
|
|||
1659 | m, l = list(bids.items())[0] |
|
|||
1660 | repo.ui.warn( |
|
|||
1661 | _(b' %s: ambiguous merge - picked %s action\n') % (f, m) |
|
|||
1662 | ) |
|
|||
1663 | actions[f] = l[0] |
|
|||
1664 | continue |
|
|||
1665 | repo.ui.note(_(b'end of auction\n\n')) |
|
|||
1666 |
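
The auction above settles each file on a single action: unanimous bids win outright, a 'keep' bid wins whenever it is offered, identical 'get' bids win next, and anything else is treated as ambiguous. A standalone sketch of that priority order, with invented bids keyed by the same one-letter action codes ('k' keep, 'g' get), might read::

    # Invented example bids for two ancestors; not Mercurial code.
    fbids = {
        b'a.txt': {b'k': [(b'k', (), b'remote unchanged')],
                   b'g': [(b'g', (b'', False), b'remote is newer')]},
        b'b.txt': {b'g': [(b'g', (b'', False), b'remote is newer'),
                          (b'g', (b'', False), b'remote is newer')]},
    }

    chosen = {}
    for f, bids in sorted(fbids.items()):
        actions = next(iter(bids.values()))
        if len(bids) == 1 and all(a == actions[0] for a in actions):
            chosen[f] = actions[0]                 # consensus among ancestors
        elif b'k' in bids:
            chosen[f] = bids[b'k'][0]              # keeping is always safe
        elif b'g' in bids and all(a == bids[b'g'][0] for a in bids[b'g']):
            chosen[f] = bids[b'g'][0]              # identical 'get' bids
        else:
            chosen[f] = actions[0]                 # ambiguous: pick one bid

    # a.txt keeps the local version, b.txt takes the remote one
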
|
||||
1667 | if wctx.rev() is None: |
|
|||
1668 | fractions = _forgetremoved(wctx, mctx, branchmerge) |
|
|||
1669 | actions.update(fractions) |
|
|||
1670 |
|
||||
1671 | prunedactions = sparse.filterupdatesactions( |
|
|||
1672 | repo, wctx, mctx, branchmerge, actions |
|
|||
1673 | ) |
|
|||
1674 | _resolvetrivial(repo, wctx, mctx, ancestors[0], actions) |
|
|||
1675 |
|
||||
1676 | return prunedactions, diverge, renamedelete |
|
|||
1677 |
|
||||
1678 |
|
||||
1679 | def _getcwd(): |
|
|||
1680 | try: |
|
|||
1681 | return encoding.getcwd() |
|
|||
1682 | except OSError as err: |
|
|||
1683 | if err.errno == errno.ENOENT: |
|
|||
1684 | return None |
|
|||
1685 | raise |
|
|||
1686 |
|
||||
1687 |
|
||||
1688 | def batchremove(repo, wctx, actions): |
|
|||
1689 | """apply removes to the working directory |
|
|||
1690 |
|
||||
1691 | yields tuples for progress updates |
|
|||
1692 | """ |
|
|||
1693 | verbose = repo.ui.verbose |
|
|||
1694 | cwd = _getcwd() |
|
|||
1695 | i = 0 |
|
|||
1696 | for f, args, msg in actions: |
|
|||
1697 | repo.ui.debug(b" %s: %s -> r\n" % (f, msg)) |
|
|||
1698 | if verbose: |
|
|||
1699 | repo.ui.note(_(b"removing %s\n") % f) |
|
|||
1700 | wctx[f].audit() |
|
|||
1701 | try: |
|
|||
1702 | wctx[f].remove(ignoremissing=True) |
|
|||
1703 | except OSError as inst: |
|
|||
1704 | repo.ui.warn( |
|
|||
1705 | _(b"update failed to remove %s: %s!\n") % (f, inst.strerror) |
|
|||
1706 | ) |
|
|||
1707 | if i == 100: |
|
|||
1708 | yield i, f |
|
|||
1709 | i = 0 |
|
|||
1710 | i += 1 |
|
|||
1711 | if i > 0: |
|
|||
1712 | yield i, f |
|
|||
1713 |
|
||||
1714 | if cwd and not _getcwd(): |
|
|||
1715 | # cwd was removed in the course of removing files; print a helpful |
|
|||
1716 | # warning. |
|
|||
1717 | repo.ui.warn( |
|
|||
1718 | _( |
|
|||
1719 | b"current directory was removed\n" |
|
|||
1720 | b"(consider changing to repo root: %s)\n" |
|
|||
1721 | ) |
|
|||
1722 | % repo.root |
|
|||
1723 | ) |
|
|||
1724 |
|
||||
1725 |
|
||||
1726 | def batchget(repo, mctx, wctx, wantfiledata, actions): |
|
|||
1727 | """apply gets to the working directory |
|
|||
1728 |
|
||||
1729 | mctx is the context to get from |
|
|||
1730 |
|
||||
1731 | Yields arbitrarily many (False, tuple) for progress updates, followed by |
|
|||
1732 | exactly one (True, filedata). When wantfiledata is false, filedata is an |
|
|||
1733 | empty dict. When wantfiledata is true, filedata[f] is a triple (mode, size, |
|
|||
1734 | mtime) of the file f written for each action. |
|
|||
1735 | """ |
|
|||
1736 | filedata = {} |
|
|||
1737 | verbose = repo.ui.verbose |
|
|||
1738 | fctx = mctx.filectx |
|
|||
1739 | ui = repo.ui |
|
|||
1740 | i = 0 |
|
|||
1741 | with repo.wvfs.backgroundclosing(ui, expectedcount=len(actions)): |
|
|||
1742 | for f, (flags, backup), msg in actions: |
|
|||
1743 | repo.ui.debug(b" %s: %s -> g\n" % (f, msg)) |
|
|||
1744 | if verbose: |
|
|||
1745 | repo.ui.note(_(b"getting %s\n") % f) |
|
|||
1746 |
|
||||
1747 | if backup: |
|
|||
1748 | # If a file or directory exists with the same name, back that |
|
|||
1749 | # up. Otherwise, look to see if there is a file that conflicts |
|
|||
1750 | # with a directory this file is in, and if so, back that up. |
|
|||
1751 | conflicting = f |
|
|||
1752 | if not repo.wvfs.lexists(f): |
|
|||
1753 | for p in pathutil.finddirs(f): |
|
|||
1754 | if repo.wvfs.isfileorlink(p): |
|
|||
1755 | conflicting = p |
|
|||
1756 | break |
|
|||
1757 | if repo.wvfs.lexists(conflicting): |
|
|||
1758 | orig = scmutil.backuppath(ui, repo, conflicting) |
|
|||
1759 | util.rename(repo.wjoin(conflicting), orig) |
|
|||
1760 | wfctx = wctx[f] |
|
|||
1761 | wfctx.clearunknown() |
|
|||
1762 | atomictemp = ui.configbool(b"experimental", b"update.atomic-file") |
|
|||
1763 | size = wfctx.write( |
|
|||
1764 | fctx(f).data(), |
|
|||
1765 | flags, |
|
|||
1766 | backgroundclose=True, |
|
|||
1767 | atomictemp=atomictemp, |
|
|||
1768 | ) |
|
|||
1769 | if wantfiledata: |
|
|||
1770 | s = wfctx.lstat() |
|
|||
1771 | mode = s.st_mode |
|
|||
1772 | mtime = s[stat.ST_MTIME] |
|
|||
1773 | filedata[f] = (mode, size, mtime) # for dirstate.normal |
|
|||
1774 | if i == 100: |
|
|||
1775 | yield False, (i, f) |
|
|||
1776 | i = 0 |
|
|||
1777 | i += 1 |
|
|||
1778 | if i > 0: |
|
|||
1779 | yield False, (i, f) |
|
|||
1780 | yield True, filedata |
|
|||
1781 |
|
||||
1782 |
|
||||
1783 | def _prefetchfiles(repo, ctx, actions): |
|
|||
1784 | """Invoke ``scmutil.prefetchfiles()`` for the files relevant to the dict |
|
|||
1785 | of merge actions. ``ctx`` is the context being merged in.""" |
|
|||
1786 |
|
||||
1787 | # Skipping 'a', 'am', 'f', 'r', 'dm', 'e', 'k', 'p' and 'pr', because they |
|
|||
1788 | # don't touch the context to be merged in. 'cd' is skipped, because |
|
|||
1789 | # changed/deleted never resolves to something from the remote side. |
|
|||
1790 | oplist = [ |
|
|||
1791 | actions[a] |
|
|||
1792 | for a in ( |
|
|||
1793 | ACTION_GET, |
|
|||
1794 | ACTION_DELETED_CHANGED, |
|
|||
1795 | ACTION_LOCAL_DIR_RENAME_GET, |
|
|||
1796 | ACTION_MERGE, |
|
|||
1797 | ) |
|
|||
1798 | ] |
|
|||
1799 | prefetch = scmutil.prefetchfiles |
|
|||
1800 | matchfiles = scmutil.matchfiles |
|
|||
1801 | prefetch( |
|
|||
1802 | repo, |
|
|||
1803 | [ctx.rev()], |
|
|||
1804 | matchfiles(repo, [f for sublist in oplist for f, args, msg in sublist]), |
|
|||
1805 | ) |
|
|||
1806 |
|
||||
1807 |
|
||||
1808 | @attr.s(frozen=True) |
|
|||
1809 | class updateresult(object): |
|
|||
1810 | updatedcount = attr.ib() |
|
|||
1811 | mergedcount = attr.ib() |
|
|||
1812 | removedcount = attr.ib() |
|
|||
1813 | unresolvedcount = attr.ib() |
|
|||
1814 |
|
||||
1815 | def isempty(self): |
|
|||
1816 | return not ( |
|
|||
1817 | self.updatedcount |
|
|||
1818 | or self.mergedcount |
|
|||
1819 | or self.removedcount |
|
|||
1820 | or self.unresolvedcount |
|
|||
1821 | ) |
|
|||
1822 |
|
||||
1823 |
|
||||
1824 | def emptyactions(): |
|
|||
1825 | """create an actions dict, to be populated and passed to applyupdates()""" |
|
|||
1826 | return { |
|
|||
1827 | m: [] |
|
|||
1828 | for m in ( |
|
|||
1829 | ACTION_ADD, |
|
|||
1830 | ACTION_ADD_MODIFIED, |
|
|||
1831 | ACTION_FORGET, |
|
|||
1832 | ACTION_GET, |
|
|||
1833 | ACTION_CHANGED_DELETED, |
|
|||
1834 | ACTION_DELETED_CHANGED, |
|
|||
1835 | ACTION_REMOVE, |
|
|||
1836 | ACTION_DIR_RENAME_MOVE_LOCAL, |
|
|||
1837 | ACTION_LOCAL_DIR_RENAME_GET, |
|
|||
1838 | ACTION_MERGE, |
|
|||
1839 | ACTION_EXEC, |
|
|||
1840 | ACTION_KEEP, |
|
|||
1841 | ACTION_PATH_CONFLICT, |
|
|||
1842 | ACTION_PATH_CONFLICT_RESOLVE, |
|
|||
1843 | ACTION_GET_OTHER_AND_STORE, |
|
|||
1844 | ) |
|
|||
1845 | } |
|
|||
1846 |
|
||||
1847 |
|
||||
1848 | def applyupdates( |
|
|||
1849 | repo, actions, wctx, mctx, overwrite, wantfiledata, labels=None |
|
|||
1850 | ): |
|
|||
1851 | """apply the merge action list to the working directory |
|
|||
1852 |
|
||||
1853 | wctx is the working copy context |
|
|||
1854 | mctx is the context to be merged into the working copy |
|
|||
1855 |
|
||||
1856 | Return a tuple of (counts, filedata), where counts is a tuple |
|
|||
1857 | (updated, merged, removed, unresolved) that describes how many |
|
|||
1858 | files were affected by the update, and filedata is as described in |
|
|||
1859 | batchget. |
|
|||
1860 | """ |
|
|||
1861 |
|
||||
1862 | _prefetchfiles(repo, mctx, actions) |
|
|||
1863 |
|
||||
1864 | updated, merged, removed = 0, 0, 0 |
|
|||
1865 | ms = mergestate.clean(repo, wctx.p1().node(), mctx.node(), labels) |
|
|||
1866 |
|
||||
1867 | # add ACTION_GET_OTHER_AND_STORE to mergestate |
|
|||
1868 | for e in actions[ACTION_GET_OTHER_AND_STORE]: |
|
|||
1869 | ms.addmergedother(e[0]) |
|
|||
1870 |
|
||||
1871 | moves = [] |
|
|||
1872 | for m, l in actions.items(): |
|
|||
1873 | l.sort() |
|
|||
1874 |
|
||||
1875 | # 'cd' and 'dc' actions are treated like other merge conflicts |
|
|||
1876 | mergeactions = sorted(actions[ACTION_CHANGED_DELETED]) |
|
|||
1877 | mergeactions.extend(sorted(actions[ACTION_DELETED_CHANGED])) |
|
|||
1878 | mergeactions.extend(actions[ACTION_MERGE]) |
|
|||
1879 | for f, args, msg in mergeactions: |
|
|||
1880 | f1, f2, fa, move, anc = args |
|
|||
1881 | if f == b'.hgsubstate': # merged internally |
|
|||
1882 | continue |
|
|||
1883 | if f1 is None: |
|
|||
1884 | fcl = filemerge.absentfilectx(wctx, fa) |
|
|||
1885 | else: |
|
|||
1886 | repo.ui.debug(b" preserving %s for resolve of %s\n" % (f1, f)) |
|
|||
1887 | fcl = wctx[f1] |
|
|||
1888 | if f2 is None: |
|
|||
1889 | fco = filemerge.absentfilectx(mctx, fa) |
|
|||
1890 | else: |
|
|||
1891 | fco = mctx[f2] |
|
|||
1892 | actx = repo[anc] |
|
|||
1893 | if fa in actx: |
|
|||
1894 | fca = actx[fa] |
|
|||
1895 | else: |
|
|||
1896 | # TODO: move to absentfilectx |
|
|||
1897 | fca = repo.filectx(f1, fileid=nullrev) |
|
|||
1898 | ms.add(fcl, fco, fca, f) |
|
|||
1899 | if f1 != f and move: |
|
|||
1900 | moves.append(f1) |
|
|||
1901 |
|
||||
1902 | # remove renamed files after safely stored |
|
|||
1903 | for f in moves: |
|
|||
1904 | if wctx[f].lexists(): |
|
|||
1905 | repo.ui.debug(b"removing %s\n" % f) |
|
|||
1906 | wctx[f].audit() |
|
|||
1907 | wctx[f].remove() |
|
|||
1908 |
|
||||
1909 | numupdates = sum(len(l) for m, l in actions.items() if m != ACTION_KEEP) |
|
|||
1910 | progress = repo.ui.makeprogress( |
|
|||
1911 | _(b'updating'), unit=_(b'files'), total=numupdates |
|
|||
1912 | ) |
|
|||
1913 |
|
||||
1914 | if [a for a in actions[ACTION_REMOVE] if a[0] == b'.hgsubstate']: |
|
|||
1915 | subrepoutil.submerge(repo, wctx, mctx, wctx, overwrite, labels) |
|
|||
1916 |
|
||||
1917 | # record path conflicts |
|
|||
1918 | for f, args, msg in actions[ACTION_PATH_CONFLICT]: |
|
|||
1919 | f1, fo = args |
|
|||
1920 | s = repo.ui.status |
|
|||
1921 | s( |
|
|||
1922 | _( |
|
|||
1923 | b"%s: path conflict - a file or link has the same name as a " |
|
|||
1924 | b"directory\n" |
|
|||
1925 | ) |
|
|||
1926 | % f |
|
|||
1927 | ) |
|
|||
1928 | if fo == b'l': |
|
|||
1929 | s(_(b"the local file has been renamed to %s\n") % f1) |
|
|||
1930 | else: |
|
|||
1931 | s(_(b"the remote file has been renamed to %s\n") % f1) |
|
|||
1932 | s(_(b"resolve manually then use 'hg resolve --mark %s'\n") % f) |
|
|||
1933 | ms.addpath(f, f1, fo) |
|
|||
1934 | progress.increment(item=f) |
|
|||
1935 |
|
||||
1936 | # When merging in-memory, we can't support worker processes, so set the |
|
|||
1937 | # per-item cost at 0 in that case. |
|
|||
1938 | cost = 0 if wctx.isinmemory() else 0.001 |
|
|||
1939 |
|
||||
1940 | # remove in parallel (must come before resolving path conflicts and getting) |
|
|||
1941 | prog = worker.worker( |
|
|||
1942 | repo.ui, cost, batchremove, (repo, wctx), actions[ACTION_REMOVE] |
|
|||
1943 | ) |
|
|||
1944 | for i, item in prog: |
|
|||
1945 | progress.increment(step=i, item=item) |
|
|||
1946 | removed = len(actions[ACTION_REMOVE]) |
|
|||
1947 |
|
||||
1948 | # resolve path conflicts (must come before getting) |
|
|||
1949 | for f, args, msg in actions[ACTION_PATH_CONFLICT_RESOLVE]: |
|
|||
1950 | repo.ui.debug(b" %s: %s -> pr\n" % (f, msg)) |
|
|||
1951 | (f0,) = args |
|
|||
1952 | if wctx[f0].lexists(): |
|
|||
1953 | repo.ui.note(_(b"moving %s to %s\n") % (f0, f)) |
|
|||
1954 | wctx[f].audit() |
|
|||
1955 | wctx[f].write(wctx.filectx(f0).data(), wctx.filectx(f0).flags()) |
|
|||
1956 | wctx[f0].remove() |
|
|||
1957 | progress.increment(item=f) |
|
|||
1958 |
|
||||
1959 | # get in parallel. |
|
|||
1960 | threadsafe = repo.ui.configbool( |
|
|||
1961 | b'experimental', b'worker.wdir-get-thread-safe' |
|
|||
1962 | ) |
|
|||
1963 | prog = worker.worker( |
|
|||
1964 | repo.ui, |
|
|||
1965 | cost, |
|
|||
1966 | batchget, |
|
|||
1967 | (repo, mctx, wctx, wantfiledata), |
|
|||
1968 | actions[ACTION_GET], |
|
|||
1969 | threadsafe=threadsafe, |
|
|||
1970 | hasretval=True, |
|
|||
1971 | ) |
|
|||
1972 | getfiledata = {} |
|
|||
1973 | for final, res in prog: |
|
|||
1974 | if final: |
|
|||
1975 | getfiledata = res |
|
|||
1976 | else: |
|
|||
1977 | i, item = res |
|
|||
1978 | progress.increment(step=i, item=item) |
|
|||
1979 | updated = len(actions[ACTION_GET]) |
|
|||
1980 |
|
||||
1981 | if [a for a in actions[ACTION_GET] if a[0] == b'.hgsubstate']: |
|
|||
1982 | subrepoutil.submerge(repo, wctx, mctx, wctx, overwrite, labels) |
|
|||
1983 |
|
||||
1984 | # forget (manifest only, just log it) (must come first) |
|
|||
1985 | for f, args, msg in actions[ACTION_FORGET]: |
|
|||
1986 | repo.ui.debug(b" %s: %s -> f\n" % (f, msg)) |
|
|||
1987 | progress.increment(item=f) |
|
|||
1988 |
|
||||
1989 | # re-add (manifest only, just log it) |
|
|||
1990 | for f, args, msg in actions[ACTION_ADD]: |
|
|||
1991 | repo.ui.debug(b" %s: %s -> a\n" % (f, msg)) |
|
|||
1992 | progress.increment(item=f) |
|
|||
1993 |
|
||||
1994 | # re-add/mark as modified (manifest only, just log it) |
|
|||
1995 | for f, args, msg in actions[ACTION_ADD_MODIFIED]: |
|
|||
1996 | repo.ui.debug(b" %s: %s -> am\n" % (f, msg)) |
|
|||
1997 | progress.increment(item=f) |
|
|||
1998 |
|
||||
1999 | # keep (noop, just log it) |
|
|||
2000 | for f, args, msg in actions[ACTION_KEEP]: |
|
|||
2001 | repo.ui.debug(b" %s: %s -> k\n" % (f, msg)) |
|
|||
2002 | # no progress |
|
|||
2003 |
|
||||
2004 | # directory rename, move local |
|
|||
2005 | for f, args, msg in actions[ACTION_DIR_RENAME_MOVE_LOCAL]: |
|
|||
2006 | repo.ui.debug(b" %s: %s -> dm\n" % (f, msg)) |
|
|||
2007 | progress.increment(item=f) |
|
|||
2008 | f0, flags = args |
|
|||
2009 | repo.ui.note(_(b"moving %s to %s\n") % (f0, f)) |
|
|||
2010 | wctx[f].audit() |
|
|||
2011 | wctx[f].write(wctx.filectx(f0).data(), flags) |
|
|||
2012 | wctx[f0].remove() |
|
|||
2013 | updated += 1 |
|
|||
2014 |
|
||||
2015 | # local directory rename, get |
|
|||
2016 | for f, args, msg in actions[ACTION_LOCAL_DIR_RENAME_GET]: |
|
|||
2017 | repo.ui.debug(b" %s: %s -> dg\n" % (f, msg)) |
|
|||
2018 | progress.increment(item=f) |
|
|||
2019 | f0, flags = args |
|
|||
2020 | repo.ui.note(_(b"getting %s to %s\n") % (f0, f)) |
|
|||
2021 | wctx[f].write(mctx.filectx(f0).data(), flags) |
|
|||
2022 | updated += 1 |
|
|||
2023 |
|
||||
2024 | # exec |
|
|||
2025 | for f, args, msg in actions[ACTION_EXEC]: |
|
|||
2026 | repo.ui.debug(b" %s: %s -> e\n" % (f, msg)) |
|
|||
2027 | progress.increment(item=f) |
|
|||
2028 | (flags,) = args |
|
|||
2029 | wctx[f].audit() |
|
|||
2030 | wctx[f].setflags(b'l' in flags, b'x' in flags) |
|
|||
2031 | updated += 1 |
|
|||
2032 |
|
||||
2033 | # the ordering is important here -- ms.mergedriver will raise if the merge |
|
|||
2034 | # driver has changed, and we want to be able to bypass it when overwrite is |
|
|||
2035 | # True |
|
|||
2036 | usemergedriver = not overwrite and mergeactions and ms.mergedriver |
|
|||
2037 |
|
||||
2038 | if usemergedriver: |
|
|||
2039 | if wctx.isinmemory(): |
|
|||
2040 | raise error.InMemoryMergeConflictsError( |
|
|||
2041 | b"in-memory merge does not support mergedriver" |
|
|||
2042 | ) |
|
|||
2043 | ms.commit() |
|
|||
2044 | proceed = driverpreprocess(repo, ms, wctx, labels=labels) |
|
|||
2045 | # the driver might leave some files unresolved |
|
|||
2046 | unresolvedf = set(ms.unresolved()) |
|
|||
2047 | if not proceed: |
|
|||
2048 | # XXX setting unresolved to at least 1 is a hack to make sure we |
|
|||
2049 | # error out |
|
|||
2050 | return updateresult( |
|
|||
2051 | updated, merged, removed, max(len(unresolvedf), 1) |
|
|||
2052 | ) |
|
|||
2053 | newactions = [] |
|
|||
2054 | for f, args, msg in mergeactions: |
|
|||
2055 | if f in unresolvedf: |
|
|||
2056 | newactions.append((f, args, msg)) |
|
|||
2057 | mergeactions = newactions |
|
|||
2058 |
|
||||
2059 | try: |
|
|||
2060 | # premerge |
|
|||
2061 | tocomplete = [] |
|
|||
2062 | for f, args, msg in mergeactions: |
|
|||
2063 | repo.ui.debug(b" %s: %s -> m (premerge)\n" % (f, msg)) |
|
|||
2064 | progress.increment(item=f) |
|
|||
2065 | if f == b'.hgsubstate': # subrepo states need updating |
|
|||
2066 | subrepoutil.submerge( |
|
|||
2067 | repo, wctx, mctx, wctx.ancestor(mctx), overwrite, labels |
|
|||
2068 | ) |
|
|||
2069 | continue |
|
|||
2070 | wctx[f].audit() |
|
|||
2071 | complete, r = ms.preresolve(f, wctx) |
|
|||
2072 | if not complete: |
|
|||
2073 | numupdates += 1 |
|
|||
2074 | tocomplete.append((f, args, msg)) |
|
|||
2075 |
|
||||
2076 | # merge |
|
|||
2077 | for f, args, msg in tocomplete: |
|
|||
2078 | repo.ui.debug(b" %s: %s -> m (merge)\n" % (f, msg)) |
|
|||
2079 | progress.increment(item=f, total=numupdates) |
|
|||
2080 | ms.resolve(f, wctx) |
|
|||
2081 |
|
||||
2082 | finally: |
|
|||
2083 | ms.commit() |
|
|||
2084 |
|
||||
2085 | unresolved = ms.unresolvedcount() |
|
|||
2086 |
|
||||
2087 | if ( |
|
|||
2088 | usemergedriver |
|
|||
2089 | and not unresolved |
|
|||
2090 | and ms.mdstate() != MERGE_DRIVER_STATE_SUCCESS |
|
|||
2091 | ): |
|
|||
2092 | if not driverconclude(repo, ms, wctx, labels=labels): |
|
|||
2093 | # XXX setting unresolved to at least 1 is a hack to make sure we |
|
|||
2094 | # error out |
|
|||
2095 | unresolved = max(unresolved, 1) |
|
|||
2096 |
|
||||
2097 | ms.commit() |
|
|||
2098 |
|
||||
2099 | msupdated, msmerged, msremoved = ms.counts() |
|
|||
2100 | updated += msupdated |
|
|||
2101 | merged += msmerged |
|
|||
2102 | removed += msremoved |
|
|||
2103 |
|
||||
2104 | extraactions = ms.actions() |
|
|||
2105 | if extraactions: |
|
|||
2106 | mfiles = {a[0] for a in actions[ACTION_MERGE]} |
|
|||
2107 | for k, acts in pycompat.iteritems(extraactions): |
|
|||
2108 | actions[k].extend(acts) |
|
|||
2109 | if k == ACTION_GET and wantfiledata: |
|
|||
2110 | # no filedata until mergestate is updated to provide it |
|
|||
2111 | for a in acts: |
|
|||
2112 | getfiledata[a[0]] = None |
|
|||
2113 | # Remove these files from actions[ACTION_MERGE] as well. This is |
|
|||
2114 | # important because in recordupdates, files in actions[ACTION_MERGE] |
|
|||
2115 | # are processed after files in other actions, and the merge driver |
|
|||
2116 | # might add files to those actions via extraactions above. This can |
|
|||
2117 | # lead to a file being recorded twice, with poor results. This is |
|
|||
2118 | # especially problematic for actions[ACTION_REMOVE] (currently only |
|
|||
2119 | # possible with the merge driver in the initial merge process; |
|
|||
2120 | # interrupted merges don't go through this flow). |
|
|||
2121 | # |
|
|||
2122 | # The real fix here is to have indexes by both file and action so |
|
|||
2123 | # that when the action for a file is changed it is automatically |
|
|||
2124 | # reflected in the other action lists. But that involves a more |
|
|||
2125 | # complex data structure, so this will do for now. |
|
|||
2126 | # |
|
|||
2127 | # We don't need to do the same operation for 'dc' and 'cd' because |
|
|||
2128 | # those lists aren't consulted again. |
|
|||
2129 | mfiles.difference_update(a[0] for a in acts) |
|
|||
2130 |
|
||||
2131 | actions[ACTION_MERGE] = [ |
|
|||
2132 | a for a in actions[ACTION_MERGE] if a[0] in mfiles |
|
|||
2133 | ] |
|
|||
2134 |
|
||||
2135 | progress.complete() |
|
|||
2136 | assert len(getfiledata) == (len(actions[ACTION_GET]) if wantfiledata else 0) |
|
|||
2137 | return updateresult(updated, merged, removed, unresolved), getfiledata |
|
|||
2138 |
|
||||
2139 |
|
||||
2140 | def recordupdates(repo, actions, branchmerge, getfiledata): |
|
786 | def recordupdates(repo, actions, branchmerge, getfiledata): | |
2141 | """record merge actions to the dirstate""" |
|
787 | """record merge actions to the dirstate""" | |
2142 | # remove (must come first) |
|
788 | # remove (must come first) | |
@@ -2152,8 +798,7 b' def recordupdates(repo, actions, branchm' | |||||
2152 |
|
798 | |||
2153 | # resolve path conflicts |
|
799 | # resolve path conflicts | |
2154 | for f, args, msg in actions.get(ACTION_PATH_CONFLICT_RESOLVE, []): |
|
800 | for f, args, msg in actions.get(ACTION_PATH_CONFLICT_RESOLVE, []): | |
2155 | (f0,) = args |
|
801 | (f0, origf0) = args | |
2156 | origf0 = repo.dirstate.copied(f0) or f0 |
|
|||
2157 | repo.dirstate.add(f) |
|
802 | repo.dirstate.add(f) | |
2158 | repo.dirstate.copy(origf0, f) |
|
803 | repo.dirstate.copy(origf0, f) | |
2159 | if f0 == origf0: |
|
804 | if f0 == origf0: | |
@@ -2232,594 +877,3 b' def recordupdates(repo, actions, branchm' | |||||
2232 | repo.dirstate.copy(f0, f) |
|
877 | repo.dirstate.copy(f0, f) | |
2233 | else: |
|
878 | else: | |
2234 | repo.dirstate.normal(f) |
|
879 | repo.dirstate.normal(f) | |
2235 |
|
||||
2236 |
|
||||
2237 | UPDATECHECK_ABORT = b'abort' # handled at higher layers |
|
|||
2238 | UPDATECHECK_NONE = b'none' |
|
|||
2239 | UPDATECHECK_LINEAR = b'linear' |
|
|||
2240 | UPDATECHECK_NO_CONFLICT = b'noconflict' |
|
|||
2241 |
|
||||
2242 |
|
||||
2243 | def update( |
|
|||
2244 | repo, |
|
|||
2245 | node, |
|
|||
2246 | branchmerge, |
|
|||
2247 | force, |
|
|||
2248 | ancestor=None, |
|
|||
2249 | mergeancestor=False, |
|
|||
2250 | labels=None, |
|
|||
2251 | matcher=None, |
|
|||
2252 | mergeforce=False, |
|
|||
2253 | updatedirstate=True, |
|
|||
2254 | updatecheck=None, |
|
|||
2255 | wc=None, |
|
|||
2256 | ): |
|
|||
2257 | """ |
|
|||
2258 | Perform a merge between the working directory and the given node |
|
|||
2259 |
|
||||
2260 | node = the node to update to |
|
|||
2261 | branchmerge = whether to merge between branches |
|
|||
2262 | force = whether to force branch merging or file overwriting |
|
|||
2263 | matcher = a matcher to filter file lists (dirstate not updated) |
|
|||
2264 | mergeancestor = whether it is merging with an ancestor. If true, |
|
|||
2265 | we should accept the incoming changes for any prompts that occur. |
|
|||
2266 | If false, merging with an ancestor (fast-forward) is only allowed |
|
|||
2267 | between different named branches. This flag is used by rebase extension |
|
|||
2268 | as a temporary fix and should be avoided in general. |
|
|||
2269 | labels = labels to use for base, local and other |
|
|||
2270 | mergeforce = whether the merge was run with 'merge --force' (deprecated): if |
|
|||
2271 | this is True, then 'force' should be True as well. |
|
|||
2272 |
|
||||
2273 | The table below shows all the behaviors of the update command given the |
|
|||
2274 | -c/--check and -C/--clean or no options, whether the working directory is |
|
|||
2275 | dirty, whether a revision is specified, and the relationship of the parent |
|
|||
2276 | rev to the target rev (linear or not). Match from top first. The -n |
|
|||
2277 | option doesn't exist on the command line, but represents the |
|
|||
2278 | experimental.updatecheck=noconflict option. |
|
|||
2279 |
|
||||
2280 | This logic is tested by test-update-branches.t. |
|
|||
2281 |
|
||||
2282 | -c -C -n -m dirty rev linear | result |
|
|||
2283 | y y * * * * * | (1) |
|
|||
2284 | y * y * * * * | (1) |
|
|||
2285 | y * * y * * * | (1) |
|
|||
2286 | * y y * * * * | (1) |
|
|||
2287 | * y * y * * * | (1) |
|
|||
2288 | * * y y * * * | (1) |
|
|||
2289 | * * * * * n n | x |
|
|||
2290 | * * * * n * * | ok |
|
|||
2291 | n n n n y * y | merge |
|
|||
2292 | n n n n y y n | (2) |
|
|||
2293 | n n n y y * * | merge |
|
|||
2294 | n n y n y * * | merge if no conflict |
|
|||
2295 | n y n n y * * | discard |
|
|||
2296 | y n n n y * * | (3) |
|
|||
2297 |
|
||||
2298 | x = can't happen |
|
|||
2299 | * = don't-care |
|
|||
2300 | 1 = incompatible options (checked in commands.py) |
|
|||
2301 | 2 = abort: uncommitted changes (commit or update --clean to discard changes) |
|
|||
2302 | 3 = abort: uncommitted changes (checked in commands.py) |
|
|||
2303 |
|
||||
2304 | The merge is performed inside ``wc``, a workingctx-like object. It defaults |
|
|||
2305 | to repo[None] if None is passed. |
|
|||
2306 |
|
||||
2307 | Return the same tuple as applyupdates(). |
|
|||
2308 | """ |
|
|||
2309 | # Avoid cycle. |
|
|||
2310 | from . import sparse |
|
|||
2311 |
|
||||
2312 | # This function used to find the default destination if node was None, but |
|
|||
2313 | # that's now in destutil.py. |
|
|||
2314 | assert node is not None |
|
|||
2315 | if not branchmerge and not force: |
|
|||
2316 | # TODO: remove the default once all callers that pass branchmerge=False |
|
|||
2317 | # and force=False pass a value for updatecheck. We may want to allow |
|
|||
2318 | # updatecheck='abort' to better support some of these callers. |
|
|||
2319 | if updatecheck is None: |
|
|||
2320 | updatecheck = UPDATECHECK_LINEAR |
|
|||
2321 | if updatecheck not in ( |
|
|||
2322 | UPDATECHECK_NONE, |
|
|||
2323 | UPDATECHECK_LINEAR, |
|
|||
2324 | UPDATECHECK_NO_CONFLICT, |
|
|||
2325 | ): |
|
|||
2326 | raise ValueError( |
|
|||
2327 | r'Invalid updatecheck %r (can accept %r)' |
|
|||
2328 | % ( |
|
|||
2329 | updatecheck, |
|
|||
2330 | ( |
|
|||
2331 | UPDATECHECK_NONE, |
|
|||
2332 | UPDATECHECK_LINEAR, |
|
|||
2333 | UPDATECHECK_NO_CONFLICT, |
|
|||
2334 | ), |
|
|||
2335 | ) |
|
|||
2336 | ) |
|
|||
2337 | with repo.wlock(): |
|
|||
2338 | if wc is None: |
|
|||
2339 | wc = repo[None] |
|
|||
2340 | pl = wc.parents() |
|
|||
2341 | p1 = pl[0] |
|
|||
2342 | p2 = repo[node] |
|
|||
2343 | if ancestor is not None: |
|
|||
2344 | pas = [repo[ancestor]] |
|
|||
2345 | else: |
|
|||
2346 | if repo.ui.configlist(b'merge', b'preferancestor') == [b'*']: |
|
|||
2347 | cahs = repo.changelog.commonancestorsheads(p1.node(), p2.node()) |
|
|||
2348 | pas = [repo[anc] for anc in (sorted(cahs) or [nullid])] |
|
|||
2349 | else: |
|
|||
2350 | pas = [p1.ancestor(p2, warn=branchmerge)] |
|
|||
2351 |
|
||||
2352 | fp1, fp2, xp1, xp2 = p1.node(), p2.node(), bytes(p1), bytes(p2) |
|
|||
2353 |
|
||||
2354 | overwrite = force and not branchmerge |
|
|||
2355 | ### check phase |
|
|||
2356 | if not overwrite: |
|
|||
2357 | if len(pl) > 1: |
|
|||
2358 | raise error.Abort(_(b"outstanding uncommitted merge")) |
|
|||
2359 | ms = mergestate.read(repo) |
|
|||
2360 | if list(ms.unresolved()): |
|
|||
2361 | raise error.Abort( |
|
|||
2362 | _(b"outstanding merge conflicts"), |
|
|||
2363 | hint=_(b"use 'hg resolve' to resolve"), |
|
|||
2364 | ) |
|
|||
2365 | if branchmerge: |
|
|||
2366 | if pas == [p2]: |
|
|||
2367 | raise error.Abort( |
|
|||
2368 | _( |
|
|||
2369 | b"merging with a working directory ancestor" |
|
|||
2370 | b" has no effect" |
|
|||
2371 | ) |
|
|||
2372 | ) |
|
|||
2373 | elif pas == [p1]: |
|
|||
2374 | if not mergeancestor and wc.branch() == p2.branch(): |
|
|||
2375 | raise error.Abort( |
|
|||
2376 | _(b"nothing to merge"), |
|
|||
2377 | hint=_(b"use 'hg update' or check 'hg heads'"), |
|
|||
2378 | ) |
|
|||
2379 | if not force and (wc.files() or wc.deleted()): |
|
|||
2380 | raise error.Abort( |
|
|||
2381 | _(b"uncommitted changes"), |
|
|||
2382 | hint=_(b"use 'hg status' to list changes"), |
|
|||
2383 | ) |
|
|||
2384 | if not wc.isinmemory(): |
|
|||
2385 | for s in sorted(wc.substate): |
|
|||
2386 | wc.sub(s).bailifchanged() |
|
|||
2387 |
|
||||
2388 | elif not overwrite: |
|
|||
2389 | if p1 == p2: # no-op update |
|
|||
2390 | # call the hooks and exit early |
|
|||
2391 | repo.hook(b'preupdate', throw=True, parent1=xp2, parent2=b'') |
|
|||
2392 | repo.hook(b'update', parent1=xp2, parent2=b'', error=0) |
|
|||
2393 | return updateresult(0, 0, 0, 0) |
|
|||
2394 |
|
||||
2395 | if updatecheck == UPDATECHECK_LINEAR and pas not in ( |
|
|||
2396 | [p1], |
|
|||
2397 | [p2], |
|
|||
2398 | ): # nonlinear |
|
|||
2399 | dirty = wc.dirty(missing=True) |
|
|||
2400 | if dirty: |
|
|||
2401 | # Branching is a bit strange to ensure we do the minimal |
|
|||
2402 | # amount of call to obsutil.foreground. |
|
|||
2403 | foreground = obsutil.foreground(repo, [p1.node()]) |
|
|||
2404 | # note: the <node> variable contains a random identifier |
|
|||
2405 | if repo[node].node() in foreground: |
|
|||
2406 | pass # allow updating to successors |
|
|||
2407 | else: |
|
|||
2408 | msg = _(b"uncommitted changes") |
|
|||
2409 | hint = _(b"commit or update --clean to discard changes") |
|
|||
2410 | raise error.UpdateAbort(msg, hint=hint) |
|
|||
2411 | else: |
|
|||
2412 | # Allow jumping branches if clean and specific rev given |
|
|||
2413 | pass |
|
|||
2414 |
|
||||
2415 | if overwrite: |
|
|||
2416 | pas = [wc] |
|
|||
2417 | elif not branchmerge: |
|
|||
2418 | pas = [p1] |
|
|||
2419 |
|
||||
2420 | # deprecated config: merge.followcopies |
|
|||
2421 | followcopies = repo.ui.configbool(b'merge', b'followcopies') |
|
|||
2422 | if overwrite: |
|
|||
2423 | followcopies = False |
|
|||
2424 | elif not pas[0]: |
|
|||
2425 | followcopies = False |
|
|||
2426 | if not branchmerge and not wc.dirty(missing=True): |
|
|||
2427 | followcopies = False |
|
|||
2428 |
|
||||
2429 | ### calculate phase |
|
|||
2430 | actionbyfile, diverge, renamedelete = calculateupdates( |
|
|||
2431 | repo, |
|
|||
2432 | wc, |
|
|||
2433 | p2, |
|
|||
2434 | pas, |
|
|||
2435 | branchmerge, |
|
|||
2436 | force, |
|
|||
2437 | mergeancestor, |
|
|||
2438 | followcopies, |
|
|||
2439 | matcher=matcher, |
|
|||
2440 | mergeforce=mergeforce, |
|
|||
2441 | ) |
|
|||
2442 |
|
||||
2443 | if updatecheck == UPDATECHECK_NO_CONFLICT: |
|
|||
2444 | for f, (m, args, msg) in pycompat.iteritems(actionbyfile): |
|
|||
2445 | if m not in ( |
|
|||
2446 | ACTION_GET, |
|
|||
2447 | ACTION_KEEP, |
|
|||
2448 | ACTION_EXEC, |
|
|||
2449 | ACTION_REMOVE, |
|
|||
2450 | ACTION_PATH_CONFLICT_RESOLVE, |
|
|||
2451 | ACTION_GET_OTHER_AND_STORE, |
|
|||
2452 | ): |
|
|||
2453 | msg = _(b"conflicting changes") |
|
|||
2454 | hint = _(b"commit or update --clean to discard changes") |
|
|||
2455 | raise error.Abort(msg, hint=hint) |
|
|||
2456 |
|
||||
2457 | # Prompt and create actions. Most of this is in the resolve phase |
|
|||
2458 | # already, but we can't handle .hgsubstate in filemerge or |
|
|||
2459 | # subrepoutil.submerge yet so we have to keep prompting for it. |
|
|||
2460 | if b'.hgsubstate' in actionbyfile: |
|
|||
2461 | f = b'.hgsubstate' |
|
|||
2462 | m, args, msg = actionbyfile[f] |
|
|||
2463 | prompts = filemerge.partextras(labels) |
|
|||
2464 | prompts[b'f'] = f |
|
|||
2465 | if m == ACTION_CHANGED_DELETED: |
|
|||
2466 | if repo.ui.promptchoice( |
|
|||
2467 | _( |
|
|||
2468 | b"local%(l)s changed %(f)s which other%(o)s deleted\n" |
|
|||
2469 | b"use (c)hanged version or (d)elete?" |
|
|||
2470 | b"$$ &Changed $$ &Delete" |
|
|||
2471 | ) |
|
|||
2472 | % prompts, |
|
|||
2473 | 0, |
|
|||
2474 | ): |
|
|||
2475 | actionbyfile[f] = (ACTION_REMOVE, None, b'prompt delete') |
|
|||
2476 | elif f in p1: |
|
|||
2477 | actionbyfile[f] = ( |
|
|||
2478 | ACTION_ADD_MODIFIED, |
|
|||
2479 | None, |
|
|||
2480 | b'prompt keep', |
|
|||
2481 | ) |
|
|||
2482 | else: |
|
|||
2483 | actionbyfile[f] = (ACTION_ADD, None, b'prompt keep') |
|
|||
2484 | elif m == ACTION_DELETED_CHANGED: |
|
|||
2485 | f1, f2, fa, move, anc = args |
|
|||
2486 | flags = p2[f2].flags() |
|
|||
2487 | if ( |
|
|||
2488 | repo.ui.promptchoice( |
|
|||
2489 | _( |
|
|||
2490 | b"other%(o)s changed %(f)s which local%(l)s deleted\n" |
|
|||
2491 | b"use (c)hanged version or leave (d)eleted?" |
|
|||
2492 | b"$$ &Changed $$ &Deleted" |
|
|||
2493 | ) |
|
|||
2494 | % prompts, |
|
|||
2495 | 0, |
|
|||
2496 | ) |
|
|||
2497 | == 0 |
|
|||
2498 | ): |
|
|||
2499 | actionbyfile[f] = ( |
|
|||
2500 | ACTION_GET, |
|
|||
2501 | (flags, False), |
|
|||
2502 | b'prompt recreating', |
|
|||
2503 | ) |
|
|||
2504 | else: |
|
|||
2505 | del actionbyfile[f] |
|
|||
2506 |
|
||||
2507 | # Convert to dictionary-of-lists format |
|
|||
2508 | actions = emptyactions() |
|
|||
2509 | for f, (m, args, msg) in pycompat.iteritems(actionbyfile): |
|
|||
2510 | if m not in actions: |
|
|||
2511 | actions[m] = [] |
|
|||
2512 | actions[m].append((f, args, msg)) |
|
|||
2513 |
|
||||
2514 | # ACTION_GET_OTHER_AND_STORE is a ACTION_GET + store in mergestate |
|
|||
2515 | for e in actions[ACTION_GET_OTHER_AND_STORE]: |
|
|||
2516 | actions[ACTION_GET].append(e) |
|
|||
2517 |
|
||||
2518 | if not util.fscasesensitive(repo.path): |
|
|||
2519 | # check collision between files only in p2 for clean update |
|
|||
2520 | if not branchmerge and ( |
|
|||
2521 | force or not wc.dirty(missing=True, branch=False) |
|
|||
2522 | ): |
|
|||
2523 | _checkcollision(repo, p2.manifest(), None) |
|
|||
2524 | else: |
|
|||
2525 | _checkcollision(repo, wc.manifest(), actions) |
|
|||
2526 |
|
||||
2527 | # divergent renames |
|
|||
2528 | for f, fl in sorted(pycompat.iteritems(diverge)): |
|
|||
2529 | repo.ui.warn( |
|
|||
2530 | _( |
|
|||
2531 | b"note: possible conflict - %s was renamed " |
|
|||
2532 | b"multiple times to:\n" |
|
|||
2533 | ) |
|
|||
2534 | % f |
|
|||
2535 | ) |
|
|||
2536 | for nf in sorted(fl): |
|
|||
2537 | repo.ui.warn(b" %s\n" % nf) |
|
|||
2538 |
|
||||
2539 | # rename and delete |
|
|||
2540 | for f, fl in sorted(pycompat.iteritems(renamedelete)): |
|
|||
2541 | repo.ui.warn( |
|
|||
2542 | _( |
|
|||
2543 | b"note: possible conflict - %s was deleted " |
|
|||
2544 | b"and renamed to:\n" |
|
|||
2545 | ) |
|
|||
2546 | % f |
|
|||
2547 | ) |
|
|||
2548 | for nf in sorted(fl): |
|
|||
2549 | repo.ui.warn(b" %s\n" % nf) |
|
|||
2550 |
|
||||
2551 | ### apply phase |
|
|||
2552 | if not branchmerge: # just jump to the new rev |
|
|||
2553 | fp1, fp2, xp1, xp2 = fp2, nullid, xp2, b'' |
|
|||
2554 | # If we're doing a partial update, we need to skip updating |
|
|||
2555 | # the dirstate. |
|
|||
2556 | always = matcher is None or matcher.always() |
|
|||
2557 | updatedirstate = updatedirstate and always and not wc.isinmemory() |
|
|||
2558 | if updatedirstate: |
|
|||
2559 | repo.hook(b'preupdate', throw=True, parent1=xp1, parent2=xp2) |
|
|||
2560 | # note that we're in the middle of an update |
|
|||
2561 | repo.vfs.write(b'updatestate', p2.hex()) |
|
|||
2562 |
|
||||
2563 | # Advertise fsmonitor when its presence could be useful. |
|
|||
2564 | # |
|
|||
2565 | # We only advertise when performing an update from an empty working |
|
|||
2566 | # directory. This typically only occurs during initial clone. |
|
|||
2567 | # |
|
|||
2568 | # We give users a mechanism to disable the warning in case it is |
|
|||
2569 | # annoying. |
|
|||
2570 | # |
|
|||
2571 | # We only allow on Linux and MacOS because that's where fsmonitor is |
|
|||
2572 | # considered stable. |
|
|||
2573 | fsmonitorwarning = repo.ui.configbool(b'fsmonitor', b'warn_when_unused') |
|
|||
2574 | fsmonitorthreshold = repo.ui.configint( |
|
|||
2575 | b'fsmonitor', b'warn_update_file_count' |
|
|||
2576 | ) |
|
|||
2577 | try: |
|
|||
2578 | # avoid cycle: extensions -> cmdutil -> merge |
|
|||
2579 | from . import extensions |
|
|||
2580 |
|
||||
2581 | extensions.find(b'fsmonitor') |
|
|||
2582 | fsmonitorenabled = repo.ui.config(b'fsmonitor', b'mode') != b'off' |
|
|||
2583 | # We intentionally don't look at whether fsmonitor has disabled |
|
|||
2584 | # itself because a) fsmonitor may have already printed a warning |
|
|||
2585 | # b) we only care about the config state here. |
|
|||
2586 | except KeyError: |
|
|||
2587 | fsmonitorenabled = False |
|
|||
2588 |
|
||||
2589 | if ( |
|
|||
2590 | fsmonitorwarning |
|
|||
2591 | and not fsmonitorenabled |
|
|||
2592 | and p1.node() == nullid |
|
|||
2593 | and len(actions[ACTION_GET]) >= fsmonitorthreshold |
|
|||
2594 | and pycompat.sysplatform.startswith((b'linux', b'darwin')) |
|
|||
2595 | ): |
|
|||
2596 | repo.ui.warn( |
|
|||
2597 | _( |
|
|||
2598 | b'(warning: large working directory being used without ' |
|
|||
2599 | b'fsmonitor enabled; enable fsmonitor to improve performance; ' |
|
|||
2600 | b'see "hg help -e fsmonitor")\n' |
|
|||
2601 | ) |
|
|||
2602 | ) |
|
|||
2603 |
|
||||
2604 | wantfiledata = updatedirstate and not branchmerge |
|
|||
2605 | stats, getfiledata = applyupdates( |
|
|||
2606 | repo, actions, wc, p2, overwrite, wantfiledata, labels=labels |
|
|||
2607 | ) |
|
|||
2608 |
|
||||
2609 | if updatedirstate: |
|
|||
2610 | with repo.dirstate.parentchange(): |
|
|||
2611 | repo.setparents(fp1, fp2) |
|
|||
2612 | recordupdates(repo, actions, branchmerge, getfiledata) |
|
|||
2613 | # update completed, clear state |
|
|||
2614 | util.unlink(repo.vfs.join(b'updatestate')) |
|
|||
2615 |
|
||||
2616 | if not branchmerge: |
|
|||
2617 | repo.dirstate.setbranch(p2.branch()) |
|
|||
2618 |
|
||||
2619 | # If we're updating to a location, clean up any stale temporary includes |
|
|||
2620 | # (ex: this happens during hg rebase --abort). |
|
|||
2621 | if not branchmerge: |
|
|||
2622 | sparse.prunetemporaryincludes(repo) |
|
|||
2623 |
|
||||
2624 | if updatedirstate: |
|
|||
2625 | repo.hook( |
|
|||
2626 | b'update', parent1=xp1, parent2=xp2, error=stats.unresolvedcount |
|
|||
2627 | ) |
|
|||
2628 | return stats |
|
|||
2629 |
|
||||
2630 |
|
||||
2631 | def merge(ctx, labels=None, force=False, wc=None): |
|
|||
2632 | """Merge another topological branch into the working copy. |
|
|||
2633 |
|
||||
2634 | force = whether the merge was run with 'merge --force' (deprecated) |
|
|||
2635 | """ |
|
|||
2636 |
|
||||
2637 | return update( |
|
|||
2638 | ctx.repo(), |
|
|||
2639 | ctx.rev(), |
|
|||
2640 | labels=labels, |
|
|||
2641 | branchmerge=True, |
|
|||
2642 | force=force, |
|
|||
2643 | mergeforce=force, |
|
|||
2644 | wc=wc, |
|
|||
2645 | ) |
|
|||
2646 |
|
||||
2647 |
|
||||
2648 | def clean_update(ctx, wc=None): |
|
|||
2649 | """Do a clean update to the given commit. |
|
|||
2650 |
|
||||
2651 | This involves updating to the commit and discarding any changes in the |
|
|||
2652 | working copy. |
|
|||
2653 | """ |
|
|||
2654 | return update(ctx.repo(), ctx.rev(), branchmerge=False, force=True, wc=wc) |
|
|||
2655 |
|
||||
2656 |
|
||||
2657 | def revert_to(ctx, matcher=None, wc=None): |
|
|||
2658 | """Revert the working copy to the given commit. |
|
|||
2659 |
|
||||
2660 | The working copy will keep its current parent(s) but its content will |
|
|||
2661 | be the same as in the given commit. |
|
|||
2662 | """ |
|
|||
2663 |
|
||||
2664 | return update( |
|
|||
2665 | ctx.repo(), |
|
|||
2666 | ctx.rev(), |
|
|||
2667 | branchmerge=False, |
|
|||
2668 | force=True, |
|
|||
2669 | updatedirstate=False, |
|
|||
2670 | matcher=matcher, |
|
|||
2671 | wc=wc, |
|
|||
2672 | ) |
|
|||
2673 |
|
||||
2674 |
|
||||
2675 | def graft( |
|
|||
2676 | repo, |
|
|||
2677 | ctx, |
|
|||
2678 | base=None, |
|
|||
2679 | labels=None, |
|
|||
2680 | keepparent=False, |
|
|||
2681 | keepconflictparent=False, |
|
|||
2682 | wctx=None, |
|
|||
2683 | ): |
|
|||
2684 | """Do a graft-like merge. |
|
|||
2685 |
|
||||
2686 | This is a merge where the merge ancestor is chosen such that one |
|
|||
2687 | or more changesets are grafted onto the current changeset. In |
|
|||
2688 | addition to the merge, this fixes up the dirstate to include only |
|
|||
2689 | a single parent (if keepparent is False) and tries to duplicate any |
|
|||
2690 | renames/copies appropriately. |
|
|||
2691 |
|
||||
2692 | ctx - changeset to rebase |
|
|||
2693 | base - merge base, or ctx.p1() if not specified |
|
|||
2694 | labels - merge labels eg ['local', 'graft'] |
|
|||
2695 | keepparent - keep second parent if any |
|
|||
2696 | keepconflictparent - if unresolved, keep parent used for the merge |
|
|||
2697 |
|
||||
2698 | """ |
|
|||
2699 | # If we're grafting a descendant onto an ancestor, be sure to pass |
|
|||
2700 | # mergeancestor=True to update. This does two things: 1) allows the merge if |
|
|||
2701 | # the destination is the same as the parent of the ctx (so we can use graft |
|
|||
2702 | # to copy commits), and 2) informs update that the incoming changes are |
|
|||
2703 | # newer than the destination so it doesn't prompt about "remote changed foo |
|
|||
2704 | # which local deleted". |
|
|||
2705 | # We also pass mergeancestor=True when base is the same revision as p1. 2) |
|
|||
2706 | # doesn't matter as there can't possibly be conflicts, but 1) is necessary. |
|
|||
2707 | wctx = wctx or repo[None] |
|
|||
2708 | pctx = wctx.p1() |
|
|||
2709 | base = base or ctx.p1() |
|
|||
2710 | mergeancestor = ( |
|
|||
2711 | repo.changelog.isancestor(pctx.node(), ctx.node()) |
|
|||
2712 | or pctx.rev() == base.rev() |
|
|||
2713 | ) |
|
|||
2714 |
|
||||
2715 | stats = update( |
|
|||
2716 | repo, |
|
|||
2717 | ctx.node(), |
|
|||
2718 | True, |
|
|||
2719 | True, |
|
|||
2720 | base.node(), |
|
|||
2721 | mergeancestor=mergeancestor, |
|
|||
2722 | labels=labels, |
|
|||
2723 | wc=wctx, |
|
|||
2724 | ) |
|
|||
2725 |
|
||||
2726 | if keepconflictparent and stats.unresolvedcount: |
|
|||
2727 | pother = ctx.node() |
|
|||
2728 | else: |
|
|||
2729 | pother = nullid |
|
|||
2730 | parents = ctx.parents() |
|
|||
2731 | if keepparent and len(parents) == 2 and base in parents: |
|
|||
2732 | parents.remove(base) |
|
|||
2733 | pother = parents[0].node() |
|
|||
2734 | # Never set both parents equal to each other |
|
|||
2735 | if pother == pctx.node(): |
|
|||
2736 | pother = nullid |
|
|||
2737 |
|
||||
2738 | if wctx.isinmemory(): |
|
|||
2739 | wctx.setparents(pctx.node(), pother) |
|
|||
2740 | # fix up dirstate for copies and renames |
|
|||
2741 | copies.graftcopies(wctx, ctx, base) |
|
|||
2742 | else: |
|
|||
2743 | with repo.dirstate.parentchange(): |
|
|||
2744 | repo.setparents(pctx.node(), pother) |
|
|||
2745 | repo.dirstate.write(repo.currenttransaction()) |
|
|||
2746 | # fix up dirstate for copies and renames |
|
|||
2747 | copies.graftcopies(wctx, ctx, base) |
|
|||
2748 | return stats |
|
|||
2749 |
|
||||
2750 |
|
||||
2751 | def purge( |
|
|||
2752 | repo, |
|
|||
2753 | matcher, |
|
|||
2754 | unknown=True, |
|
|||
2755 | ignored=False, |
|
|||
2756 | removeemptydirs=True, |
|
|||
2757 | removefiles=True, |
|
|||
2758 | abortonerror=False, |
|
|||
2759 | noop=False, |
|
|||
2760 | ): |
|
|||
2761 | """Purge the working directory of untracked files. |
|
|||
2762 |
|
||||
2763 | ``matcher`` is a matcher configured to scan the working directory - |
|
|||
2764 | potentially a subset. |
|
|||
2765 |
|
||||
2766 | ``unknown`` controls whether unknown files should be purged. |
|
|||
2767 |
|
||||
2768 | ``ignored`` controls whether ignored files should be purged. |
|
|||
2769 |
|
||||
2770 | ``removeemptydirs`` controls whether empty directories should be removed. |
|
|||
2771 |
|
||||
2772 | ``removefiles`` controls whether files are removed. |
|
|||
2773 |
|
||||
2774 | ``abortonerror`` causes an exception to be raised if an error occurs |
|
|||
2775 | deleting a file or directory. |
|
|||
2776 |
|
||||
2777 | ``noop`` controls whether to actually remove files. If not defined, actions |
|
|||
2778 | will be taken. |
|
|||
2779 |
|
||||
2780 | Returns an iterable of relative paths in the working directory that were |
|
|||
2781 | or would be removed. |
|
|||
2782 | """ |
|
|||
2783 |
|
||||
2784 | def remove(removefn, path): |
|
|||
2785 | try: |
|
|||
2786 | removefn(path) |
|
|||
2787 | except OSError: |
|
|||
2788 | m = _(b'%s cannot be removed') % path |
|
|||
2789 | if abortonerror: |
|
|||
2790 | raise error.Abort(m) |
|
|||
2791 | else: |
|
|||
2792 | repo.ui.warn(_(b'warning: %s\n') % m) |
|
|||
2793 |
|
||||
2794 | # There's no API to copy a matcher. So mutate the passed matcher and |
|
|||
2795 | # restore it when we're done. |
|
|||
2796 | oldtraversedir = matcher.traversedir |
|
|||
2797 |
|
||||
2798 | res = [] |
|
|||
2799 |
|
||||
2800 | try: |
|
|||
2801 | if removeemptydirs: |
|
|||
2802 | directories = [] |
|
|||
2803 | matcher.traversedir = directories.append |
|
|||
2804 |
|
||||
2805 | status = repo.status(match=matcher, ignored=ignored, unknown=unknown) |
|
|||
2806 |
|
||||
2807 | if removefiles: |
|
|||
2808 | for f in sorted(status.unknown + status.ignored): |
|
|||
2809 | if not noop: |
|
|||
2810 | repo.ui.note(_(b'removing file %s\n') % f) |
|
|||
2811 | remove(repo.wvfs.unlink, f) |
|
|||
2812 | res.append(f) |
|
|||
2813 |
|
||||
2814 | if removeemptydirs: |
|
|||
2815 | for f in sorted(directories, reverse=True): |
|
|||
2816 | if matcher(f) and not repo.wvfs.listdir(f): |
|
|||
2817 | if not noop: |
|
|||
2818 | repo.ui.note(_(b'removing directory %s\n') % f) |
|
|||
2819 | remove(repo.wvfs.rmdir, f) |
|
|||
2820 | res.append(f) |
|
|||
2821 |
|
||||
2822 | return res |
|
|||
2823 |
|
||||
2824 | finally: |
|
|||
2825 | matcher.traversedir = oldtraversedir |
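
The decision table in the ``update()`` docstring above can be read as a lookup over the -c/-C/-n/-m flags and the dirty/rev/linear state. The following standalone sketch re-encodes that table in plain Python; it is illustrative only and not part of the patch::

    def update_outcome(check, clean, noconflict, merge, dirty, rev, linear):
        """Re-encoding of the -c/-C/-n/-m decision table, matched top first."""
        if sum([check, clean, noconflict, merge]) > 1:
            return 'incompatible options'            # (1)
        if not rev and not linear:
            return "can't happen"                    # x
        if not dirty:
            return 'ok'
        if clean:
            return 'discard'
        if check:
            return 'abort: uncommitted changes'      # (3)
        if noconflict:
            return 'merge if no conflict'
        if merge:
            return 'merge'
        # no flags given and the working directory is dirty
        return 'merge' if linear else 'abort: uncommitted changes'   # (2) when non-linear

    assert update_outcome(False, False, False, False,
                          dirty=True, rev=True, linear=False) == 'abort: uncommitted changes'
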
|
@@ -14,6 +14,7 b' from . import (' | |||||
14 | error, |
|
14 | error, | |
15 | match as matchmod, |
|
15 | match as matchmod, | |
16 | merge, |
|
16 | merge, | |
|
17 | mergestate as mergestatemod, | |||
17 | scmutil, |
|
18 | scmutil, | |
18 | sparse, |
|
19 | sparse, | |
19 | util, |
|
20 | util, | |
@@ -272,7 +273,7 b' def _deletecleanfiles(repo, files):' | |||||
272 |
|
273 | |||
273 | def _writeaddedfiles(repo, pctx, files): |
|
274 | def _writeaddedfiles(repo, pctx, files): | |
274 | actions = merge.emptyactions() |
|
275 | actions = merge.emptyactions() | |
275 | addgaction = actions[merge.ACTION_GET].append |
|
276 | addgaction = actions[mergestatemod.ACTION_GET].append | |
276 | mf = repo[b'.'].manifest() |
|
277 | mf = repo[b'.'].manifest() | |
277 | for f in files: |
|
278 | for f in files: | |
278 | if not repo.wvfs.exists(f): |
|
279 | if not repo.wvfs.exists(f): |
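
The hunk above only repoints sparse at the ``ACTION_GET`` constant's new home; the actions structure it fills is a plain mapping from action key to a list of ``(file, args, message)`` tuples, as the dictionary-of-lists conversion earlier in this diff shows. A minimal standalone illustration, with placeholder constants rather than the real module's values::

    ACTION_GET = b'g'       # placeholder constant, for illustration only
    ACTION_REMOVE = b'r'    # placeholder constant, for illustration only

    def emptyactions():
        # one empty list per action kind
        return {m: [] for m in (ACTION_GET, ACTION_REMOVE)}

    actions = emptyactions()
    # each entry records (filename, action-specific args, human-readable reason)
    actions[ACTION_GET].append((b'foo.txt', (b'', False), b'remote created'))
    assert [f for f, args, msg in actions[ACTION_GET]] == [b'foo.txt']
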
@@ -13,6 +13,7 b' from .i18n import _' | |||||
13 | from . import ( |
|
13 | from . import ( | |
14 | diffutil, |
|
14 | diffutil, | |
15 | encoding, |
|
15 | encoding, | |
|
16 | error, | |||
16 | node as nodemod, |
|
17 | node as nodemod, | |
17 | phases, |
|
18 | phases, | |
18 | pycompat, |
|
19 | pycompat, | |
@@ -481,14 +482,23 b' def geteffectflag(source, successors):' | |||||
481 | return effects |
|
482 | return effects | |
482 |
|
483 | |||
483 |
|
484 | |||
484 | def getobsoleted(repo, tr): |
|
485 | def getobsoleted(repo, tr=None, changes=None): | |
485 | """return the set of pre-existing revisions obsoleted by a transaction |
|
486 | """return the set of pre-existing revisions obsoleted by a transaction | |
|
487 | ||||
|
488 | Either the transaction or changes item of the transaction (for hooks) | |||
|
489 | must be provided, but not both. | |||
|
490 | """ | |||
|
491 | if (tr is None) == (changes is None): | |||
|
492 | e = b"exactly one of tr and changes must be provided" | |||
|
493 | raise error.ProgrammingError(e) | |||
486 | torev = repo.unfiltered().changelog.index.get_rev |
|
494 | torev = repo.unfiltered().changelog.index.get_rev | |
487 | phase = repo._phasecache.phase |
|
495 | phase = repo._phasecache.phase | |
488 | succsmarkers = repo.obsstore.successors.get |
|
496 | succsmarkers = repo.obsstore.successors.get | |
489 | public = phases.public |
|
497 | public = phases.public | |
490 | addedmarkers = tr.changes[b'obsmarkers'] |
|
498 | if changes is None: | |
491 | origrepolen = tr.changes[b'origrepolen'] |
|
499 | changes = tr.changes | |
|
500 | addedmarkers = changes[b'obsmarkers'] | |||
|
501 | origrepolen = changes[b'origrepolen'] | |||
492 | seenrevs = set() |
|
502 | seenrevs = set() | |
493 | obsoleted = set() |
|
503 | obsoleted = set() | |
494 | for mark in addedmarkers: |
|
504 | for mark in addedmarkers: |
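
As the new docstring says, ``getobsoleted()`` now takes either the transaction or its ``changes`` mapping, never both. The argument pattern, sketched with generic Python objects rather than a real transaction::

    class ProgrammingError(Exception):
        pass

    def pick_changes(tr=None, changes=None):
        # exactly one of tr and changes must be provided
        if (tr is None) == (changes is None):
            raise ProgrammingError("exactly one of tr and changes must be provided")
        return tr.changes if changes is None else changes

    class FakeTransaction(object):
        changes = {b'obsmarkers': [], b'origrepolen': 0}

    assert pick_changes(tr=FakeTransaction())[b'origrepolen'] == 0
    assert pick_changes(changes={b'origrepolen': 5})[b'origrepolen'] == 5
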
@@ -785,7 +785,7 b' class patchfile(object):' | |||||
785 | for l in x.hunk: |
|
785 | for l in x.hunk: | |
786 | lines.append(l) |
|
786 | lines.append(l) | |
787 | if l[-1:] != b'\n': |
|
787 | if l[-1:] != b'\n': | |
788 | lines.append(b"\n\\ No newline at end of file\n") |
|
788 | lines.append(b'\n' + diffhelper.MISSING_NEWLINE_MARKER) | |
789 | self.backend.writerej(self.fname, len(self.rej), self.hunks, lines) |
|
789 | self.backend.writerej(self.fname, len(self.rej), self.hunks, lines) | |
790 |
|
790 | |||
791 | def apply(self, h): |
|
791 | def apply(self, h): | |
@@ -1069,7 +1069,7 b' class recordhunk(object):' | |||||
1069 |
|
1069 | |||
1070 | def write(self, fp): |
|
1070 | def write(self, fp): | |
1071 | delta = len(self.before) + len(self.after) |
|
1071 | delta = len(self.before) + len(self.after) | |
1072 | if self.after and self.after[-1] == b'\\ No newline at end of file\n': |
|
1072 | if self.after and self.after[-1] == diffhelper.MISSING_NEWLINE_MARKER: | |
1073 | delta -= 1 |
|
1073 | delta -= 1 | |
1074 | fromlen = delta + self.removed |
|
1074 | fromlen = delta + self.removed | |
1075 | tolen = delta + self.added |
|
1075 | tolen = delta + self.added | |
@@ -2666,7 +2666,11 b' def diffhunks(' | |||||
2666 | prefetchmatch = scmutil.matchfiles( |
|
2666 | prefetchmatch = scmutil.matchfiles( | |
2667 | repo, list(modifiedset | addedset | removedset) |
|
2667 | repo, list(modifiedset | addedset | removedset) | |
2668 | ) |
|
2668 | ) | |
2669 | scmutil.prefetchfiles(repo, [ctx1.rev(), ctx2.rev()], prefetchmatch) |
|
2669 | revmatches = [ | |
|
2670 | (ctx1.rev(), prefetchmatch), | |||
|
2671 | (ctx2.rev(), prefetchmatch), | |||
|
2672 | ] | |||
|
2673 | scmutil.prefetchfiles(repo, revmatches) | |||
2670 |
|
2674 | |||
2671 | def difffn(opts, losedata): |
|
2675 | def difffn(opts, losedata): | |
2672 | return trydiff( |
|
2676 | return trydiff( | |
@@ -2918,6 +2922,18 b' def _filepairs(modified, added, removed,' | |||||
2918 | yield f1, f2, copyop |
|
2922 | yield f1, f2, copyop | |
2919 |
|
2923 | |||
2920 |
|
2924 | |||
|
2925 | def _gitindex(text): | |||
|
2926 | if not text: | |||
|
2927 | text = b"" | |||
|
2928 | l = len(text) | |||
|
2929 | s = hashutil.sha1(b'blob %d\0' % l) | |||
|
2930 | s.update(text) | |||
|
2931 | return hex(s.digest()) | |||
|
2932 | ||||
|
2933 | ||||
|
2934 | _gitmode = {b'l': b'120000', b'x': b'100755', b'': b'100644'} | |||
|
2935 | ||||
|
2936 | ||||
2921 | def trydiff( |
|
2937 | def trydiff( | |
2922 | repo, |
|
2938 | repo, | |
2923 | revs, |
|
2939 | revs, | |
@@ -2940,14 +2956,6 b' def trydiff(' | |||||
2940 | pathfn is applied to every path in the diff output. |
|
2956 | pathfn is applied to every path in the diff output. | |
2941 | ''' |
|
2957 | ''' | |
2942 |
|
2958 | |||
2943 | def gitindex(text): |
|
|||
2944 | if not text: |
|
|||
2945 | text = b"" |
|
|||
2946 | l = len(text) |
|
|||
2947 | s = hashutil.sha1(b'blob %d\0' % l) |
|
|||
2948 | s.update(text) |
|
|||
2949 | return hex(s.digest()) |
|
|||
2950 |
|
||||
2951 | if opts.noprefix: |
|
2959 | if opts.noprefix: | |
2952 | aprefix = bprefix = b'' |
|
2960 | aprefix = bprefix = b'' | |
2953 | else: |
|
2961 | else: | |
@@ -2964,8 +2972,6 b' def trydiff(' | |||||
2964 | date1 = dateutil.datestr(ctx1.date()) |
|
2972 | date1 = dateutil.datestr(ctx1.date()) | |
2965 | date2 = dateutil.datestr(ctx2.date()) |
|
2973 | date2 = dateutil.datestr(ctx2.date()) | |
2966 |
|
2974 | |||
2967 | gitmode = {b'l': b'120000', b'x': b'100755', b'': b'100644'} |
|
|||
2968 |
|
||||
2969 | if not pathfn: |
|
2975 | if not pathfn: | |
2970 | pathfn = lambda f: f |
|
2976 | pathfn = lambda f: f | |
2971 |
|
2977 | |||
@@ -3019,11 +3025,11 b' def trydiff(' | |||||
3019 | b'diff --git %s%s %s%s' % (aprefix, path1, bprefix, path2) |
|
3025 | b'diff --git %s%s %s%s' % (aprefix, path1, bprefix, path2) | |
3020 | ) |
|
3026 | ) | |
3021 | if not f1: # added |
|
3027 | if not f1: # added | |
3022 | header.append(b'new file mode %s' % gitmode[flag2]) |
|
3028 | header.append(b'new file mode %s' % _gitmode[flag2]) | |
3023 | elif not f2: # removed |
|
3029 | elif not f2: # removed | |
3024 | header.append(b'deleted file mode %s' % gitmode[flag1]) |
|
3030 | header.append(b'deleted file mode %s' % _gitmode[flag1]) | |
3025 | else: # modified/copied/renamed |
|
3031 | else: # modified/copied/renamed | |
3026 | mode1, mode2 = gitmode[flag1], gitmode[flag2] |
|
3032 | mode1, mode2 = _gitmode[flag1], _gitmode[flag2] | |
3027 | if mode1 != mode2: |
|
3033 | if mode1 != mode2: | |
3028 | header.append(b'old mode %s' % mode1) |
|
3034 | header.append(b'old mode %s' % mode1) | |
3029 | header.append(b'new mode %s' % mode2) |
|
3035 | header.append(b'new mode %s' % mode2) | |
@@ -3067,39 +3073,66 b' def trydiff(' | |||||
3067 | if fctx2 is not None: |
|
3073 | if fctx2 is not None: | |
3068 | content2 = fctx2.data() |
|
3074 | content2 = fctx2.data() | |
3069 |
|
3075 | |||
3070 | if binary and opts.git and not opts.nobinary: |
|
3076 | data1 = (ctx1, fctx1, path1, flag1, content1, date1) | |
3071 | text = mdiff.b85diff(content1, content2) |
|
3077 | data2 = (ctx2, fctx2, path2, flag2, content2, date2) | |
3072 | if text: |
|
3078 | yield diffcontent(data1, data2, header, binary, opts) | |
3073 | header.append( |
|
3079 | ||
3074 | b'index %s..%s' % (gitindex(content1), gitindex(content2)) |
|
3080 | ||
|
3081 | def diffcontent(data1, data2, header, binary, opts): | |||
|
3082 | """ diffs two versions of a file. | |||
|
3083 | ||||
|
3084 | data1 and data2 are tuples containing: | |||
|
3085 | ||||
|
3086 | * ctx: changeset for the file | |||
|
3087 | * fctx: file context for that file | |||
|
3088 | * path1: name of the file | |||
|
3089 | * flag: flags of the file | |||
|
3090 | * content: full content of the file (can be null in case of binary) | |||
|
3091 | * date: date of the changeset | |||
|
3092 | ||||
|
3093 | header: the patch header | |||
|
3094 | binary: whether any of the versions of the file is binary or not | |||
|
3095 | opts: user passed options | |||
|
3096 | ||||
|
3097 | It exists as a separate function so that extensions like extdiff can wrap | |||
|
3098 | it and use the file content directly. | |||
|
3099 | """ | |||
|
3100 | ||||
|
3101 | ctx1, fctx1, path1, flag1, content1, date1 = data1 | |||
|
3102 | ctx2, fctx2, path2, flag2, content2, date2 = data2 | |||
|
3103 | if binary and opts.git and not opts.nobinary: | |||
|
3104 | text = mdiff.b85diff(content1, content2) | |||
|
3105 | if text: | |||
|
3106 | header.append( | |||
|
3107 | b'index %s..%s' % (_gitindex(content1), _gitindex(content2)) | |||
|
3108 | ) | |||
|
3109 | hunks = ((None, [text]),) | |||
|
3110 | else: | |||
|
3111 | if opts.git and opts.index > 0: | |||
|
3112 | flag = flag1 | |||
|
3113 | if flag is None: | |||
|
3114 | flag = flag2 | |||
|
3115 | header.append( | |||
|
3116 | b'index %s..%s %s' | |||
|
3117 | % ( | |||
|
3118 | _gitindex(content1)[0 : opts.index], | |||
|
3119 | _gitindex(content2)[0 : opts.index], | |||
|
3120 | _gitmode[flag], | |||
3075 | ) |
|
3121 | ) | |
3076 | hunks = ((None, [text]),) |
|
|||
3077 | else: |
|
|||
3078 | if opts.git and opts.index > 0: |
|
|||
3079 | flag = flag1 |
|
|||
3080 | if flag is None: |
|
|||
3081 | flag = flag2 |
|
|||
3082 | header.append( |
|
|||
3083 | b'index %s..%s %s' |
|
|||
3084 | % ( |
|
|||
3085 | gitindex(content1)[0 : opts.index], |
|
|||
3086 | gitindex(content2)[0 : opts.index], |
|
|||
3087 | gitmode[flag], |
|
|||
3088 | ) |
|
|||
3089 | ) |
|
|||
3090 |
|
||||
3091 | uheaders, hunks = mdiff.unidiff( |
|
|||
3092 | content1, |
|
|||
3093 | date1, |
|
|||
3094 | content2, |
|
|||
3095 | date2, |
|
|||
3096 | path1, |
|
|||
3097 | path2, |
|
|||
3098 | binary=binary, |
|
|||
3099 | opts=opts, |
|
|||
3100 | ) |
|
3122 | ) | |
3101 | header.extend(uheaders) |
|
3123 | ||
3102 | yield fctx1, fctx2, header, hunks |
|
3124 | uheaders, hunks = mdiff.unidiff( | |
|
3125 | content1, | |||
|
3126 | date1, | |||
|
3127 | content2, | |||
|
3128 | date2, | |||
|
3129 | path1, | |||
|
3130 | path2, | |||
|
3131 | binary=binary, | |||
|
3132 | opts=opts, | |||
|
3133 | ) | |||
|
3134 | header.extend(uheaders) | |||
|
3135 | return fctx1, fctx2, header, hunks | |||
3103 |
|
3136 | |||
3104 |
|
3137 | |||
3105 | def diffstatsum(stats): |
|
3138 | def diffstatsum(stats): |
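
The ``_gitindex()`` helper hoisted out of ``trydiff()`` above hashes content the way git hashes a blob object, which is what the ``index ...`` lines of git-style diffs carry. A standard-library-only equivalent (a sketch, not the Mercurial helper itself)::

    import hashlib

    def git_blob_sha1(data):
        # a git blob is hashed as the header 'blob <size>\0' followed by the content
        h = hashlib.sha1(b'blob %d\0' % len(data))
        h.update(data)
        return h.hexdigest()

    # the empty blob has the well-known hash seen for added/removed files
    assert git_blob_sha1(b'') == 'e69de29bb2d1d6434b8b29ae775ad8c2e48c5391'
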
@@ -1,5 +1,6 b'' | |||||
1 | from __future__ import absolute_import |
|
1 | from __future__ import absolute_import | |
2 |
|
2 | |||
|
3 | import contextlib | |||
3 | import errno |
|
4 | import errno | |
4 | import os |
|
5 | import os | |
5 | import posixpath |
|
6 | import posixpath | |
@@ -148,6 +149,19 b' class pathauditor(object):' | |||||
148 | except (OSError, error.Abort): |
|
149 | except (OSError, error.Abort): | |
149 | return False |
|
150 | return False | |
150 |
|
151 | |||
|
152 | @contextlib.contextmanager | |||
|
153 | def cached(self): | |||
|
154 | if self._cached: | |||
|
155 | yield | |||
|
156 | else: | |||
|
157 | try: | |||
|
158 | self._cached = True | |||
|
159 | yield | |||
|
160 | finally: | |||
|
161 | self.audited.clear() | |||
|
162 | self.auditeddir.clear() | |||
|
163 | self._cached = False | |||
|
164 | ||||
151 |
|
165 | |||
152 | def canonpath(root, cwd, myname, auditor=None): |
|
166 | def canonpath(root, cwd, myname, auditor=None): | |
153 | '''return the canonical path of myname, given cwd and root |
|
167 | '''return the canonical path of myname, given cwd and root |
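
The ``cached()`` context manager added to ``pathauditor`` above keeps audit results for the duration of a block, clears them on exit, and turns nested uses into no-ops so an inner block cannot wipe the outer cache. The same pattern as a standalone sketch::

    import contextlib

    class Auditor(object):
        def __init__(self):
            self.audited = set()
            self._cached = False

        @contextlib.contextmanager
        def cached(self):
            if self._cached:
                yield                 # nested use: keep the outer cache alive
            else:
                try:
                    self._cached = True
                    yield
                finally:
                    self.audited.clear()
                    self._cached = False

    a = Auditor()
    with a.cached():
        a.audited.add('some/path')
        with a.cached():
            assert 'some/path' in a.audited   # still cached inside the nested block
    assert not a.audited                      # dropped when the outer block exits
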
@@ -128,25 +128,28 b' from . import (' | |||||
128 |
|
128 | |||
129 | _fphasesentry = struct.Struct(b'>i20s') |
|
129 | _fphasesentry = struct.Struct(b'>i20s') | |
130 |
|
130 | |||
131 | INTERNAL_FLAG = 64 # Phases for mercurial internal usage only |
|
|||
132 | HIDEABLE_FLAG = 32 # Phases that are hideable |
|
|||
133 |
|
||||
134 | # record phase index |
|
131 | # record phase index | |
135 | public, draft, secret = range(3) |
|
132 | public, draft, secret = range(3) | |
136 | internal = INTERNAL_FLAG | HIDEABLE_FLAG |
|
133 | archived = 32 # non-continuous for compatibility | |
137 | archived = HIDEABLE_FLAG |
|
134 | internal = 96 # non-continuous for compatibility | |
138 | allphases = list(range(internal + 1)) |
|
135 | allphases = (public, draft, secret, archived, internal) | |
139 | trackedphases = allphases[1:] |
|
136 | trackedphases = (draft, secret, archived, internal) | |
140 | # record phase names |
|
137 | # record phase names | |
141 | cmdphasenames = [b'public', b'draft', b'secret'] # known to `hg phase` command |
|
138 | cmdphasenames = [b'public', b'draft', b'secret'] # known to `hg phase` command | |
142 | phasenames = [None] * len(allphases) |
|
139 | phasenames = dict(enumerate(cmdphasenames)) | |
143 | phasenames[: len(cmdphasenames)] = cmdphasenames |
|
|||
144 | phasenames[archived] = b'archived' |
|
140 | phasenames[archived] = b'archived' | |
145 | phasenames[internal] = b'internal' |
|
141 | phasenames[internal] = b'internal' | |
|
142 | # map phase name to phase number | |||
|
143 | phasenumber = {name: phase for phase, name in phasenames.items()} | |||
|
144 | # like phasenumber, but also include maps for the numeric and binary | |||
|
145 | # phase number to the phase number | |||
|
146 | phasenumber2 = phasenumber.copy() | |||
|
147 | phasenumber2.update({phase: phase for phase in phasenames}) | |||
|
148 | phasenumber2.update({b'%i' % phase: phase for phase in phasenames}) | |||
146 | # record phase property |
|
149 | # record phase property | |
147 | mutablephases = tuple(allphases[1:]) |
|
150 | mutablephases = (draft, secret, archived, internal) | |
148 | remotehiddenphases = tuple(allphases[2:]) |
|
151 | remotehiddenphases = (secret, archived, internal) | |
149 | localhiddenphases = tuple(p for p in allphases if p & HIDEABLE_FLAG) |
|
152 | localhiddenphases = (internal, archived) | |
150 |
|
153 | |||
151 |
|
154 | |||
152 | def supportinternal(repo): |
|
155 | def supportinternal(repo): | |
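
With the old flag-based list representation gone, the phase constants above are plain integers and the name tables are ordinary dictionaries. The resulting lookups, reproduced standalone from the values shown in the hunk::

    public, draft, secret = range(3)
    archived = 32    # non-continuous for compatibility
    internal = 96    # non-continuous for compatibility
    allphases = (public, draft, secret, archived, internal)

    phasenames = dict(enumerate([b'public', b'draft', b'secret']))
    phasenames[archived] = b'archived'
    phasenames[internal] = b'internal'
    # reverse map: phase name -> phase number
    phasenumber = {name: phase for phase, name in phasenames.items()}

    assert phasenames[internal] == b'internal'
    assert phasenumber[b'secret'] == secret == 2
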
@@ -167,7 +170,7 b' def _readroots(repo, phasedefaults=None)' | |||||
167 | """ |
|
170 | """ | |
168 | repo = repo.unfiltered() |
|
171 | repo = repo.unfiltered() | |
169 | dirty = False |
|
172 | dirty = False | |
170 | roots =
|
173 | roots = {i: set() for i in allphases} | |
171 | try: |
|
174 | try: | |
172 | f, pending = txnutil.trypending(repo.root, repo.svfs, b'phaseroots') |
|
175 | f, pending = txnutil.trypending(repo.root, repo.svfs, b'phaseroots') | |
173 | try: |
|
176 | try: | |
@@ -189,11 +192,10 b' def _readroots(repo, phasedefaults=None)' | |||||
189 | def binaryencode(phasemapping): |
|
192 | def binaryencode(phasemapping): | |
190 | """encode a 'phase -> nodes' mapping into a binary stream |
|
193 | """encode a 'phase -> nodes' mapping into a binary stream | |
191 |
|
194 | |||
192 | Since phases are integer the mapping is actually a python list: |
|
195 | The revision lists are encoded as (phase, root) pairs. | |
193 | [[PUBLIC_HEADS], [DRAFTS_HEADS], [SECRET_HEADS]] |
|
|||
194 | """ |
|
196 | """ | |
195 | binarydata = [] |
|
197 | binarydata = [] | |
196 | for phase, nodes in
|
198 | for phase, nodes in pycompat.iteritems(phasemapping): | |
197 | for head in nodes: |
|
199 | for head in nodes: | |
198 | binarydata.append(_fphasesentry.pack(phase, head)) |
|
200 | binarydata.append(_fphasesentry.pack(phase, head)) | |
199 | return b''.join(binarydata) |
|
201 | return b''.join(binarydata) | |
@@ -202,8 +204,9 b' def binaryencode(phasemapping):' | |||||
202 | def binarydecode(stream): |
|
204 | def binarydecode(stream): | |
203 | """decode a binary stream into a 'phase -> nodes' mapping |
|
205 | """decode a binary stream into a 'phase -> nodes' mapping | |
204 |
|
206 | |||
205 | Since phases are integer the mapping is actually a python list.""" |
|
207 | The (phase, root) pairs are turned back into a dictionary with | |
206 | headsbyphase = [[] for i in allphases] |
|
208 | the phase as index and the aggregated roots of that phase as value.""" | |
|
209 | headsbyphase = {i: [] for i in allphases} | |||
207 | entrysize = _fphasesentry.size |
|
210 | entrysize = _fphasesentry.size | |
208 | while True: |
|
211 | while True: | |
209 | entry = stream.read(entrysize) |
|
212 | entry = stream.read(entrysize) | |
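
``binaryencode()`` and ``binarydecode()`` above exchange phase heads as fixed-size records packed with ``_fphasesentry``, i.e. a big-endian 32-bit phase number followed by a 20-byte node hash. A round-trip with the same struct format, using a placeholder node and illustrative helper names::

    import io
    import struct

    _fphasesentry = struct.Struct('>i20s')

    def encode(phasemapping):
        # phasemapping: phase number -> list of 20-byte node hashes
        return b''.join(
            _fphasesentry.pack(phase, node)
            for phase, nodes in sorted(phasemapping.items())
            for node in nodes
        )

    def decode(data):
        headsbyphase = {}
        stream = io.BytesIO(data)
        while True:
            entry = stream.read(_fphasesentry.size)
            if not entry:
                break
            phase, node = _fphasesentry.unpack(entry)
            headsbyphase.setdefault(phase, []).append(node)
        return headsbyphase

    node = b'\x11' * 20            # placeholder node, not a real hash
    assert decode(encode({1: [node]})) == {1: [node]}
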
@@ -323,6 +326,38 b' class phasecache(object):' | |||||
323 | self.filterunknown(repo) |
|
326 | self.filterunknown(repo) | |
324 | self.opener = repo.svfs |
|
327 | self.opener = repo.svfs | |
325 |
|
328 | |||
|
329 | def hasnonpublicphases(self, repo): | |||
|
330 | """detect if there are revisions with non-public phase""" | |||
|
331 | repo = repo.unfiltered() | |||
|
332 | cl = repo.changelog | |||
|
333 | if len(cl) >= self._loadedrevslen: | |||
|
334 | self.invalidate() | |||
|
335 | self.loadphaserevs(repo) | |||
|
336 | return any( | |||
|
337 | revs | |||
|
338 | for phase, revs in pycompat.iteritems(self.phaseroots) | |||
|
339 | if phase != public | |||
|
340 | ) | |||
|
341 | ||||
|
342 | def nonpublicphaseroots(self, repo): | |||
|
343 | """returns the roots of all non-public phases | |||
|
344 | ||||
|
345 | The roots are not minimized, so if the secret revisions are | |||
|
346 | descendants of draft revisions, their roots will still be present. | |||
|
347 | """ | |||
|
348 | repo = repo.unfiltered() | |||
|
349 | cl = repo.changelog | |||
|
350 | if len(cl) >= self._loadedrevslen: | |||
|
351 | self.invalidate() | |||
|
352 | self.loadphaserevs(repo) | |||
|
353 | return set().union( | |||
|
354 | *[ | |||
|
355 | revs | |||
|
356 | for phase, revs in pycompat.iteritems(self.phaseroots) | |||
|
357 | if phase != public | |||
|
358 | ] | |||
|
359 | ) | |||
|
360 | ||||
326 | def getrevset(self, repo, phases, subset=None): |
|
361 | def getrevset(self, repo, phases, subset=None): | |
327 | """return a smartset for the given phases""" |
|
362 | """return a smartset for the given phases""" | |
328 | self.loadphaserevs(repo) # ensure phase's sets are loaded |
|
363 | self.loadphaserevs(repo) # ensure phase's sets are loaded | |
@@ -380,7 +415,7 b' class phasecache(object):' | |||||
380 | # Shallow copy meant to ensure isolation in |
|
415 | # Shallow copy meant to ensure isolation in | |
381 | # advance/retractboundary(), nothing more. |
|
416 | # advance/retractboundary(), nothing more. | |
382 | ph = self.__class__(None, None, _load=False) |
|
417 | ph = self.__class__(None, None, _load=False) | |
383 | ph.phaseroots = self.phaseroots |
|
418 | ph.phaseroots = self.phaseroots.copy() | |
384 | ph.dirty = self.dirty |
|
419 | ph.dirty = self.dirty | |
385 | ph.opener = self.opener |
|
420 | ph.opener = self.opener | |
386 | ph._loadedrevslen = self._loadedrevslen |
|
421 | ph._loadedrevslen = self._loadedrevslen | |
@@ -400,17 +435,12 b' class phasecache(object):' | |||||
400 |
|
435 | |||
401 | def _getphaserevsnative(self, repo): |
|
436 | def _getphaserevsnative(self, repo): | |
402 | repo = repo.unfiltered() |
|
437 | repo = repo.unfiltered() | |
403 | nativeroots = [] |
|
438 | return repo.changelog.computephases(self.phaseroots) | |
404 | for phase in trackedphases: |
|
|||
405 | nativeroots.append( |
|
|||
406 | pycompat.maplist(repo.changelog.rev, self.phaseroots[phase]) |
|
|||
407 | ) |
|
|||
408 | return repo.changelog.computephases(nativeroots) |
|
|||
409 |
|
439 | |||
410 | def _computephaserevspure(self, repo): |
|
440 | def _computephaserevspure(self, repo): | |
411 | repo = repo.unfiltered() |
|
441 | repo = repo.unfiltered() | |
412 | cl = repo.changelog |
|
442 | cl = repo.changelog | |
413 | self._phasesets =
|
443 | self._phasesets = {phase: set() for phase in allphases} | |
414 | lowerroots = set() |
|
444 | lowerroots = set() | |
415 | for phase in reversed(trackedphases): |
|
445 | for phase in reversed(trackedphases): | |
416 | roots = pycompat.maplist(cl.rev, self.phaseroots[phase]) |
|
446 | roots = pycompat.maplist(cl.rev, self.phaseroots[phase]) | |
@@ -464,7 +494,7 b' class phasecache(object):' | |||||
464 | f.close() |
|
494 | f.close() | |
465 |
|
495 | |||
466 | def _write(self, fp): |
|
496 | def _write(self, fp): | |
467 | for phase, roots in
|
497 | for phase, roots in pycompat.iteritems(self.phaseroots): | |
468 | for h in sorted(roots): |
|
498 | for h in sorted(roots): | |
469 | fp.write(b'%i %s\n' % (phase, hex(h))) |
|
499 | fp.write(b'%i %s\n' % (phase, hex(h))) | |
470 | self.dirty = False |
|
500 | self.dirty = False | |
@@ -511,7 +541,7 b' class phasecache(object):' | |||||
511 |
|
541 | |||
512 | changes = set() # set of revisions to be changed |
|
542 | changes = set() # set of revisions to be changed | |
513 | delroots = [] # set of root deleted by this path |
|
543 | delroots = [] # set of root deleted by this path | |
514 | for phase in pycompat.xrange(targetphase + 1, len(allphases)): |
|
544 | for phase in (phase for phase in allphases if phase > targetphase): | |
515 | # filter nodes that are not in a compatible phase already |
|
545 | # filter nodes that are not in a compatible phase already | |
516 | nodes = [ |
|
546 | nodes = [ | |
517 | n for n in nodes if self.phase(repo, repo[n].rev()) >= phase |
|
547 | n for n in nodes if self.phase(repo, repo[n].rev()) >= phase | |
@@ -546,7 +576,11 b' class phasecache(object):' | |||||
546 | return changes |
|
576 | return changes | |
547 |
|
577 | |||
548 | def retractboundary(self, repo, tr, targetphase, nodes): |
|
578 | def retractboundary(self, repo, tr, targetphase, nodes): | |
549 | oldroots = self.phaseroots[: targetphase + 1] |
|
579 | oldroots = { | |
|
580 | phase: revs | |||
|
581 | for phase, revs in pycompat.iteritems(self.phaseroots) | |||
|
582 | if phase <= targetphase | |||
|
583 | } | |||
550 | if tr is None: |
|
584 | if tr is None: | |
551 | phasetracking = None |
|
585 | phasetracking = None | |
552 | else: |
|
586 | else: | |
@@ -565,7 +599,7 b' class phasecache(object):' | |||||
565 | # find the phase of the affected revision |
|
599 | # find the phase of the affected revision | |
566 | for phase in pycompat.xrange(targetphase, -1, -1): |
|
600 | for phase in pycompat.xrange(targetphase, -1, -1): | |
567 | if phase: |
|
601 | if phase: | |
568 | roots = oldroots[
|
602 | roots = oldroots.get(phase, []) | |
569 | revs = set(repo.revs(b'%ln::%ld', roots, affected)) |
|
603 | revs = set(repo.revs(b'%ln::%ld', roots, affected)) | |
570 | affected -= revs |
|
604 | affected -= revs | |
571 | else: # public phase |
|
605 | else: # public phase | |
@@ -583,30 +617,32 b' class phasecache(object):' | |||||
583 | raise error.ProgrammingError(msg) |
|
617 | raise error.ProgrammingError(msg) | |
584 |
|
618 | |||
585 | repo = repo.unfiltered() |
|
619 | repo = repo.unfiltered() | |
586 | currentroots = self.phaseroots[targetphase] |
|
620 | torev = repo.changelog.rev | |
|
621 | tonode = repo.changelog.node | |||
|
622 | currentroots = {torev(node) for node in self.phaseroots[targetphase]} | |||
587 | finalroots = oldroots = set(currentroots) |
|
623 | finalroots = oldroots = set(currentroots) | |
|
624 | newroots = [torev(node) for node in nodes] | |||
588 | newroots = [ |
|
625 | newroots = [ | |
589 |
|
|
626 | rev for rev in newroots if self.phase(repo, rev) < targetphase | |
590 | ] |
|
627 | ] | |
|
628 | ||||
591 | if newroots: |
|
629 | if newroots: | |
592 |
|
630 | if nullrev in newroots: | ||
593 | if nullid in newroots: |
|
|||
594 | raise error.Abort(_(b'cannot change null revision phase')) |
|
631 | raise error.Abort(_(b'cannot change null revision phase')) | |
595 | currentroots = currentroots.copy() |
|
|||
596 | currentroots.update(newroots) |
|
632 | currentroots.update(newroots) | |
597 |
|
633 | |||
598 | # Only compute new roots for revs above the roots that are being |
|
634 | # Only compute new roots for revs above the roots that are being | |
599 | # retracted. |
|
635 | # retracted. | |
600 |
minnewroot = min( |
|
636 | minnewroot = min(newroots) | |
601 | aboveroots = [ |
|
637 | aboveroots = [rev for rev in currentroots if rev >= minnewroot] | |
602 | n for n in currentroots if repo[n].rev() >= minnewroot |
|
638 | updatedroots = repo.revs(b'roots(%ld::)', aboveroots) | |
603 | ] |
|
|||
604 | updatedroots = repo.set(b'roots(%ln::)', aboveroots) |
|
|||
605 |
|
639 | |||
606 |
finalroots = { |
|
640 | finalroots = {rev for rev in currentroots if rev < minnewroot} | |
607 |
finalroots.update( |
|
641 | finalroots.update(updatedroots) | |
608 | if finalroots != oldroots: |
|
642 | if finalroots != oldroots: | |
609 |
self._updateroots( |
|
643 | self._updateroots( | |
|
644 | targetphase, {tonode(rev) for rev in finalroots}, tr | |||
|
645 | ) | |||
610 | return True |
|
646 | return True | |
611 | return False |
|
647 | return False | |
612 |
|
648 | |||
@@ -617,7 +653,7 b' class phasecache(object):' | |||||
617 | """ |
|
653 | """ | |
618 | filtered = False |
|
654 | filtered = False | |
619 | has_node = repo.changelog.index.has_node # to filter unknown nodes |
|
655 | has_node = repo.changelog.index.has_node # to filter unknown nodes | |
620 |
for phase, nodes in |
|
656 | for phase, nodes in pycompat.iteritems(self.phaseroots): | |
621 | missing = sorted(node for node in nodes if not has_node(node)) |
|
657 | missing = sorted(node for node in nodes if not has_node(node)) | |
622 | if missing: |
|
658 | if missing: | |
623 | for mnode in missing: |
|
659 | for mnode in missing: | |
@@ -742,7 +778,7 b' def subsetphaseheads(repo, subset):' | |||||
742 | """ |
|
778 | """ | |
743 | cl = repo.changelog |
|
779 | cl = repo.changelog | |
744 |
|
780 | |||
745 |
headsbyphase = |
|
781 | headsbyphase = {i: [] for i in allphases} | |
746 | # No need to keep track of secret phase; any heads in the subset that |
|
782 | # No need to keep track of secret phase; any heads in the subset that | |
747 | # are not mentioned are implicitly secret. |
|
783 | # are not mentioned are implicitly secret. | |
748 | for phase in allphases[:secret]: |
|
784 | for phase in allphases[:secret]: | |
@@ -753,12 +789,12 b' def subsetphaseheads(repo, subset):' | |||||
753 |
|
789 | |||
754 | def updatephases(repo, trgetter, headsbyphase): |
|
790 | def updatephases(repo, trgetter, headsbyphase): | |
755 | """Updates the repo with the given phase heads""" |
|
791 | """Updates the repo with the given phase heads""" | |
756 |
# Now advance phase boundaries of all |
|
792 | # Now advance phase boundaries of all phases | |
757 | # |
|
793 | # | |
758 | # run the update (and fetch transaction) only if there are actually things |
|
794 | # run the update (and fetch transaction) only if there are actually things | |
759 | # to update. This avoid creating empty transaction during no-op operation. |
|
795 | # to update. This avoid creating empty transaction during no-op operation. | |
760 |
|
796 | |||
761 |
for phase in allphases |
|
797 | for phase in allphases: | |
762 | revset = b'%ln - _phase(%s)' |
|
798 | revset = b'%ln - _phase(%s)' | |
763 | heads = [c.node() for c in repo.set(revset, headsbyphase[phase], phase)] |
|
799 | heads = [c.node() for c in repo.set(revset, headsbyphase[phase], phase)] | |
764 | if heads: |
|
800 | if heads: | |
@@ -873,18 +909,16 b' def newcommitphase(ui):' | |||||
873 | """ |
|
909 | """ | |
874 | v = ui.config(b'phases', b'new-commit') |
|
910 | v = ui.config(b'phases', b'new-commit') | |
875 | try: |
|
911 | try: | |
876 |
return phasen |
|
912 | return phasenumber2[v] | |
877 |
except |
|
913 | except KeyError: | |
878 | try: |
|
914 | raise error.ConfigError( | |
879 | return int(v) |
|
915 | _(b"phases.new-commit: not a valid phase name ('%s')") % v | |
880 | except ValueError: |
|
916 | ) | |
881 | msg = _(b"phases.new-commit: not a valid phase name ('%s')") |
|
|||
882 | raise error.ConfigError(msg % v) |
|
|||
883 |
|
917 | |||
884 |
|
918 | |||
885 | def hassecret(repo): |
|
919 | def hassecret(repo): | |
886 | """utility function that check if a repo have any secret changeset.""" |
|
920 | """utility function that check if a repo have any secret changeset.""" | |
887 |
return bool(repo._phasecache.phaseroots[ |
|
921 | return bool(repo._phasecache.phaseroots[secret]) | |
888 |
|
922 | |||
889 |
|
923 | |||
890 | def preparehookargs(node, old, new): |
|
924 | def preparehookargs(node, old, new): |
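The hunks above all follow from one storage change: phaseroots and _phasesets stop being
position-indexed lists and become dicts keyed by the phase number, and retractboundary now
tracks roots as revision numbers rather than nodes. A standalone sketch of the data-shape
change (illustrative names only, not Mercurial's actual classes)::

  # Illustrative sketch only: phase storage keyed by phase number.
  public, draft, secret = 0, 1, 2
  allphases = (public, draft, secret)

  # Before (conceptually): a list with one slot per registered phase.
  phasesets_list = [set() for _ in allphases]

  # After: a dict keyed by phase number; lookups and iteration no longer
  # depend on the list having exactly one slot per phase.
  phasesets_dict = {phase: set() for phase in allphases}

  phasesets_dict[draft].add(42)          # record rev 42 as a draft root
  roots_at_or_below = {
      phase: revs
      for phase, revs in phasesets_dict.items()
      if phase <= draft                  # mirrors the retractboundary hunk
  }
  print(roots_at_or_below)               # {0: set(), 1: {42}}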
@@ -80,7 +80,7 @@ def _importfrom(pkgname, modname):
 ('cext', 'bdiff'): 3,
 ('cext', 'mpatch'): 1,
 ('cext', 'osutil'): 4,
-('cext', 'parsers'): 1
+('cext', 'parsers'): 17,
 }

 # map import request to other package or module
@@ -538,10 +538,6 @@ def shellsplit(s):
 return pycompat.shlexsplit(s, posix=True)


-def quotecommand(cmd):
-    return cmd
-
-
 def testpid(pid):
     '''return False if pid dead, True if running or not sure'''
     if pycompat.sysplatform == b'OpenVMS':
@@ -98,7 +98,6 @@ if ispy3:
 import codecs
 import functools
 import io
-import locale
 import struct

 if os.name == r'nt' and sys.version_info >= (3, 6):

@@ -143,29 +142,12 @@ if ispy3:

 long = int

-# Warning: sys.stdout.buffer and sys.stderr.buffer do not necessarily have
-# the same buffering behavior as sys.stdout and sys.stderr. The interpreter
-# initializes them with block-buffered streams or unbuffered streams (when
-# the -u option or the PYTHONUNBUFFERED environment variable is set), never
-# with a line-buffered stream.
-# TODO: .buffer might not exist if std streams were replaced; we'll need
-# a silly wrapper to make a bytes stream backed by a unicode one.
-stdin = sys.stdin.buffer
-stdout = sys.stdout.buffer
-stderr = sys.stderr.buffer
-
 if getattr(sys, 'argv', None) is not None:
     # On POSIX, the char** argv array is converted to Python str using
     # Py_DecodeLocale(). The inverse of this is Py_EncodeLocale(), which
-    # directly callable from Python code.
-    # Py_DecodeLocale() calls mbstowcs() and falls back to mbrtowc() with
-    # surrogateescape error handling on failure. These functions take the
-    # current system locale into account. So, the inverse operation is to
-    # .encode() using the system locale's encoding and using the
-    # surrogateescape error handler. The only tricky part here is getting
-    # the system encoding correct, since `locale.getlocale()` can return
-    # None. We fall back to the filesystem encoding if lookups via `locale`
-    # fail, as this seems like a reasonable thing to do.
+    # isn't directly callable from Python code. In practice, os.fsencode()
+    # can be used instead (this is recommended by Python's documentation
+    # for sys.argv).
     #
     # On Windows, the wchar_t **argv is passed into the interpreter as-is.
     # Like POSIX, we need to emulate what Py_EncodeLocale() would do. But

@@ -178,19 +160,7 @@ if ispy3:
 if os.name == r'nt':
     sysargv = [a.encode("mbcs", "ignore") for a in sys.argv]
 else:
+    sysargv = [fsencode(a) for a in sys.argv]
-    def getdefaultlocale_if_known():
-        try:
-            return locale.getdefaultlocale()
-        except ValueError:
-            return None, None
-
-    encoding = (
-        locale.getlocale()[1]
-        or getdefaultlocale_if_known()[1]
-        or sys.getfilesystemencoding()
-    )
-    sysargv = [a.encode(encoding, "surrogateescape") for a in sys.argv]

 bytechr = struct.Struct('>B').pack
 byterepr = b'%r'.__mod__

@@ -495,9 +465,6 @@ else:
 osaltsep = os.altsep
 osdevnull = os.devnull
 long = long
-stdin = sys.stdin
-stdout = sys.stdout
-stderr = sys.stderr
 if getattr(sys, 'argv', None) is not None:
     sysargv = sys.argv
 sysplatform = sys.platform
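The argv hunks drop the hand-rolled locale lookup in favour of os.fsencode(), which encodes
with the filesystem encoding and the surrogateescape handler; Python's documentation
recommends exactly this for round-tripping sys.argv. A small standalone illustration of that
round trip::

  import os
  import sys

  # fsencode()/fsdecode() round-trip argv values, including ones that are
  # not valid UTF-8, thanks to the surrogateescape error handler.
  sysargv = [os.fsencode(a) for a in sys.argv]
  assert [os.fsdecode(a) for a in sysargv] == sys.argv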
@@ -66,7 +66,7 @@ def backupbundle(
 else:
     bundletype = b"HG10UN"

-outgoing = discovery.outgoing(repo, missingroots=bases,
+outgoing = discovery.outgoing(repo, missingroots=bases, ancestorsof=heads)
 contentopts = {
     b'cg.version': cgversion,
     b'obsolescence': obsolescence,
@@ -129,10 +129,8 @@ def computeunserved(repo, visibilityexce
 def computemutable(repo, visibilityexceptions=None):
     assert not repo.changelog.filteredrevs
     # fast check to avoid revset call on huge repo
-    if
-    getphase = repo._phasecache.phase
-    maymutable = filterrevs(repo, b'base')
-    return frozenset(r for r in maymutable if getphase(repo, r))
+    if repo._phasecache.hasnonpublicphases(repo):
+        return frozenset(repo._phasecache.getrevset(repo, phases.mutablephases))
     return frozenset()


@@ -154,9 +152,9 @@ def computeimpactable(repo, visibilityex
 assert not repo.changelog.filteredrevs
 cl = repo.changelog
 firstmutable = len(cl)
+roots = repo._phasecache.nonpublicphaseroots(repo)
+if roots:
+    firstmutable = min(firstmutable, min(cl.rev(r) for r in roots))
 # protect from nullrev root
 firstmutable = max(0, firstmutable)
 return frozenset(pycompat.xrange(firstmutable, len(cl)))
@@ -1523,7 +1523,7 @@ class revlog(object):

 def disambiguate(hexnode, minlength):
     """Disambiguate against wdirid."""
-    for length in range(minlength,
+    for length in range(minlength, len(hexnode) + 1):
         prefix = hexnode[:length]
         if not maybewdir(prefix):
             return prefix

@@ -1540,12 +1540,12 @@ class revlog(object):
 pass

 if node == wdirid:
-    for length in range(minlength,
+    for length in range(minlength, len(hexnode) + 1):
         prefix = hexnode[:length]
         if isvalid(prefix):
             return prefix

-for length in range(minlength,
+for length in range(minlength, len(hexnode) + 1):
     prefix = hexnode[:length]
     if isvalid(prefix):
         return disambiguate(hexnode, length)
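Both loops now probe prefix lengths up to len(hexnode) + 1, so even the full hash can be
returned as its own shortest unambiguous form. A standalone illustration of
shortest-unique-prefix selection (the isvalid check here is hypothetical, not revlog's)::

  known = ["1a2b3c9", "1a2f007", "9e8d7c6"]   # pretend node hashes (hex)

  def isvalid(prefix):
      # A prefix is usable when it matches exactly one known hash.
      return sum(h.startswith(prefix) for h in known) == 1

  def shortest(hexnode, minlength=1):
      for length in range(minlength, len(hexnode) + 1):
          prefix = hexnode[:length]
          if isvalid(prefix):
              return prefix

  print(shortest("1a2b3c9"))   # '1a2b' -- shortest prefix unique among `known`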
@@ -13,6 +13,8 @@ import os
 import re
 import struct

+from ..i18n import _
+
 from .. import (
     error,
     node as nodemod,

@@ -48,7 +50,7 @@ def persisted_data(revlog):
 docket.data_unused = data_unused

 filename = _rawdata_filepath(revlog, docket)
-use_mmap = revlog.opener.options.get(b"
+use_mmap = revlog.opener.options.get(b"persistent-nodemap.mmap")
 try:
     with revlog.opener(filename) as fd:
         if use_mmap:

@@ -105,6 +107,9 @@ class _NoTransaction(object):
 def addabort(self, *args, **kwargs):
     pass

+def _report(self, *args):
+    pass
+

 def update_persistent_nodemap(revlog):
     """update the persistent nodemap right now

@@ -137,7 +142,14 @@ def _persist_nodemap(tr, revlog, pending
 can_incremental = util.safehasattr(revlog.index, "nodemap_data_incremental")
 ondisk_docket = revlog._nodemap_docket
 feed_data = util.safehasattr(revlog.index, "update_nodemap_data")
-use_mmap = revlog.opener.options.get(b"
+use_mmap = revlog.opener.options.get(b"persistent-nodemap.mmap")
+mode = revlog.opener.options.get(b"persistent-nodemap.mode")
+if not can_incremental:
+    msg = _(b"persistent nodemap in strict mode without efficient method")
+    if mode == b'warn':
+        tr._report(b"%s\n" % msg)
+    elif mode == b'strict':
+        raise error.Abort(msg)

 data = None
 # first attemp an incremental update of the data

@@ -255,8 +267,7 @@ def _persist_nodemap(tr, revlog, pending
 # data. Its content is currently very light, but it will expand as the on disk
 # nodemap gains the necessary features to be used in production.

-# version 0 is experimental, no BC garantee, do no use outside of tests.
-ONDISK_VERSION = 0
+ONDISK_VERSION = 1
 S_VERSION = struct.Struct(">B")
 S_HEADER = struct.Struct(">BQQQQ")
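The _persist_nodemap hunk wires in a persistent-nodemap.mode option alongside the existing
mmap toggle: when the index cannot update the nodemap incrementally, warn mode reports
through the transaction and strict mode aborts. A reduced sketch of that dispatch
(standalone; the helper name is hypothetical)::

  class Abort(Exception):
      """Stand-in for Mercurial's error.Abort in this sketch."""

  def check_nodemap_mode(mode, can_incremental, report):
      # Mirrors the added branch in _persist_nodemap: only complain when no
      # efficient (incremental) update method is available.
      if can_incremental:
          return
      msg = "persistent nodemap in strict mode without efficient method"
      if mode == 'warn':
          report(msg + "\n")   # the real code goes through tr._report()
      elif mode == 'strict':
          raise Abort(msg)

  check_nodemap_mode('warn', can_incremental=False, report=print)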
@@ -789,9 +789,9 @@ def conflictlocal(repo, subset, x):
 "merge" here includes merge conflicts from e.g. 'hg rebase' or 'hg graft'.
 """
 getargs(x, 0, 0, _(b"conflictlocal takes no arguments"))
-from . import merge
+from . import mergestate as mergestatemod

-mergestate = merge.mergestate.read(repo)
+mergestate = mergestatemod.mergestate.read(repo)
 if mergestate.active() and repo.changelog.hasnode(mergestate.local):
     return subset & {repo.changelog.rev(mergestate.local)}


@@ -805,9 +805,9 @@ def conflictother(repo, subset, x):
 "merge" here includes merge conflicts from e.g. 'hg rebase' or 'hg graft'.
 """
 getargs(x, 0, 0, _(b"conflictother takes no arguments"))
-from . import merge
+from . import mergestate as mergestatemod

-mergestate = merge.mergestate.read(repo)
+mergestate = mergestatemod.mergestate.read(repo)
 if mergestate.active() and repo.changelog.hasnode(mergestate.other):
     return subset & {repo.changelog.rev(mergestate.other)}

@@ -53,3 +53,20 @@ def disallowednewunstable(repo, revs):
 if allowunstable:
     return revset.baseset()
 return repo.revs(b"(%ld::) - %ld", revs, revs)
+
+
+def skip_empty_successor(ui, command):
+    empty_successor = ui.config(b'rewrite', b'empty-successor')
+    if empty_successor == b'skip':
+        return True
+    elif empty_successor == b'keep':
+        return False
+    else:
+        raise error.ConfigError(
+            _(
+                b"%s doesn't know how to handle config "
+                b"rewrite.empty-successor=%s (only 'skip' and 'keep' are "
+                b"supported)"
+            )
+            % (command, empty_successor)
+        )
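The new skip_empty_successor helper reads rewrite.empty-successor and only accepts skip or
keep, so history-rewriting commands can either drop changesets that would become empty or
keep them as empty successors. Assuming the option is set from user configuration as the
helper suggests, the corresponding hgrc entry would look like::

  [rewrite]
  # drop changesets that would become empty after the rewrite ...
  empty-successor = skip
  # ... or keep them as (empty) successors instead:
  # empty-successor = keep

Any other value makes the helper raise a ConfigError, as shown in the hunk above.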
@@ -456,9 +456,7 @@ def formatrevnode(ui, rev, node):


 def resolvehexnodeidprefix(repo, prefix):
-    if prefix.startswith(b'x')
-        b'experimental', b'revisions.prefixhexnode'
-    ):
+    if prefix.startswith(b'x'):
         prefix = prefix[1:]
     try:
         # Uses unfiltered repo because it's faster when prefix is ambiguous/

@@ -805,9 +803,12 @@ def getuipathfn(repo, legacyrelativevalu

 if relative:
     cwd = repo.getcwd()
-    pathto = repo.pathto
-    return lambda f: pathto(f, cwd)
-elif repo.ui.configbool(b'ui', b'slash'):
+    if cwd != b'':
+        # this branch would work even if cwd == b'' (ie cwd = repo
+        # root), but its generality makes the returned function slower
+        pathto = repo.pathto
+        return lambda f: pathto(f, cwd)
+if repo.ui.configbool(b'ui', b'slash'):
     return lambda f: f
 else:
     return util.localpath

@@ -1469,6 +1470,13 @@ def movedirstate(repo, newctx, match=Non
 repo._quick_access_changeid_invalidate()


+def writereporequirements(repo, requirements=None):
+    """ writes requirements for the repo to .hg/requires """
+    if requirements:
+        repo.requirements = requirements
+    writerequires(repo.vfs, repo.requirements)
+
+
 def writerequires(opener, requirements):
     with opener(b'requires', b'w', atomictemp=True) as fp:
         for r in sorted(requirements):

@@ -1879,18 +1887,29 @@ class simplekeyvaluefile(object):
 ]


-def prefetchfiles(repo, rev
+def prefetchfiles(repo, revmatches):
     """Invokes the registered file prefetch functions, allowing extensions to
     ensure the corresponding files are available locally, before the command
     uses them.
-    if match:
-        # The command itself will complain about files that don't exist, so
-        # don't duplicate the message.
-        match = matchmod.badmatch(match, lambda fn, msg: None)
-    else:
-        match = matchall(repo)
-
-    fileprefetchhooks(repo, revs, match)
+
+    Args:
+      revmatches: a list of (revision, match) tuples to indicate the files to
+      fetch at each revision. If any of the match elements is None, it matches
+      all files.
+    """
+
+    def _matcher(m):
+        if m:
+            assert isinstance(m, matchmod.basematcher)
+            # The command itself will complain about files that don't exist, so
+            # don't duplicate the message.
+            return matchmod.badmatch(m, lambda fn, msg: None)
+        else:
+            return matchall(repo)
+
+    revbadmatches = [(rev, _matcher(match)) for (rev, match) in revmatches]
+
+    fileprefetchhooks(repo, revbadmatches)


 # a list of (repo, revs, match) prefetch functions
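The prefetchfiles hunk replaces the (repo, revs, match) calling convention with a single
list of (revision, match) pairs, so one call can request different files at different
revisions; a None matcher still means "all files". A hedged sketch of how a caller builds
the new argument (names illustrative)::

  # Illustrative only: one (rev, match) pair per revision to prefetch.
  # A None matcher means "all files"; prefetchfiles() normalizes it to
  # matchall(repo) internally, as the new _matcher() helper shows.
  revs = [10, 11]
  revmatches = [(rev, None) for rev in revs]
  # scmutil.prefetchfiles(repo, revmatches)   # how a caller inside Mercurial would use it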
@@ -42,6 +42,7 @@ from . import (
 lock as lockmod,
 mdiff,
 merge,
+mergestate as mergestatemod,
 node as nodemod,
 patch,
 phases,

@@ -161,7 +162,7 @@ class shelvedfile(object):
 repo = self.repo.unfiltered()

 outgoing = discovery.outgoing(
-    repo, missingroots=bases,
+    repo, missingroots=bases, ancestorsof=[node]
 )
 cg = changegroup.makechangegroup(repo, outgoing, cgversion, b'shelve')

@@ -801,7 +802,7 @@ def unshelvecontinue(ui, repo, state, op
 basename = state.name
 with repo.lock():
     checkparents(repo, state)
-    ms = merge.mergestate.read(repo)
+    ms = mergestatemod.mergestate.read(repo)
     if list(ms.unresolved()):
         raise error.Abort(
             _(b"unresolved conflicts, can't continue"),

@@ -1013,12 +1014,7 @@ def _rebaserestoredcommit(
 activebookmark,
 interactive,
 )
-raise error.
+raise error.ConflictResolutionRequired(b'unshelve')
-    _(
-        b"unresolved conflicts (see 'hg resolve', then "
-        b"'hg unshelve --continue')"
-    )
-)

 with repo.dirstate.parentchange():
     repo.setparents(tmpwctx.node(), nodemod.nullid)
@@ -451,12 +451,7 @@ def _picklabels(defaults, overrides):
 return result


-def _bytes_to_set(b):
-    """turns a multiple bytes (usually flags) into a set of individual byte"""
-    return set(b[x : x + 1] for x in range(len(b)))
-
-
-def is_null(ctx):
+def is_not_null(ctx):
     if not util.safehasattr(ctx, "node"):
         return False
     return ctx.node() != nodemod.nullid

@@ -518,15 +513,13 @@ def simplemerge(ui, localctx, basectx, o

 # merge flags if necessary
 flags = localctx.flags()
-localflags =
+localflags = set(pycompat.iterbytestr(flags))
-otherflags =
+otherflags = set(pycompat.iterbytestr(otherctx.flags()))
-if is_null(basectx) and localflags != otherflags:
+if is_not_null(basectx) and localflags != otherflags:
-    baseflags =
+    baseflags = set(pycompat.iterbytestr(basectx.flags()))
-    flags = localflags & otherflags
+    commonflags = localflags & otherflags
-    for f in localflags.symmetric_difference(otherflags):
+    addedflags = (localflags ^ otherflags) - baseflags
-        if f not in baseflags:
+    flags = b''.join(sorted(commonflags | addedflags))
-            flags.add(f)
-    flags = b''.join(sorted(flags))

 if not opts.get(b'print'):
     localctx.write(mergedtext, flags)
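The rewritten flag merge is pure set arithmetic: keep the flags both sides agree on, plus
any flag that exactly one side added relative to the base. A standalone example using
Mercurial-style single-byte flags such as x (executable)::

  # Standalone illustration of the new flag-merge rule.
  def merge_flags(local, other, base):
      # per-byte sets, like pycompat.iterbytestr() produces
      localflags = {local[i:i + 1] for i in range(len(local))}
      otherflags = {other[i:i + 1] for i in range(len(other))}
      baseflags = {base[i:i + 1] for i in range(len(base))}
      commonflags = localflags & otherflags
      addedflags = (localflags ^ otherflags) - baseflags
      return b''.join(sorted(commonflags | addedflags))

  # local made the file executable, other left it alone: 'x' survives.
  print(merge_flags(b'x', b'', b''))   # b'x'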
@@ -18,6 +18,7 @@ from . import (
 error,
 match as matchmod,
 merge as mergemod,
+mergestate as mergestatemod,
 pathutil,
 pycompat,
 scmutil,

@@ -406,7 +407,7 @@ def filterupdatesactions(repo, wctx, mct
 elif file in wctx:
     prunedactions[file] = (b'r', args, msg)

-if branchmerge and type == mergemod.ACTION_MERGE:
+if branchmerge and type == mergestatemod.ACTION_MERGE:
     f1, f2, fa, move, anc = args
     if not sparsematch(f1):
         temporaryfiles.append(f1)

@@ -600,10 +601,10 @@ def _updateconfigandrefreshwdir(

 if b'exp-sparse' in oldrequires and removing:
     repo.requirements.discard(b'exp-sparse')
-    scmutil.writerequires(repo
+    scmutil.writereporequirements(repo)
 elif b'exp-sparse' not in oldrequires:
     repo.requirements.add(b'exp-sparse')
-    scmutil.writerequires(repo
+    scmutil.writereporequirements(repo)

 try:
     writeconfig(repo, includes, excludes, profiles)

@@ -612,7 +613,7 @@ def _updateconfigandrefreshwdir(
 if repo.requirements != oldrequires:
     repo.requirements.clear()
     repo.requirements |= oldrequires
-    scmutil.writerequires(repo
+    scmutil.writereporequirements(repo)
     writeconfig(repo, oldincludes, oldexcludes, oldprofiles)
     raise

@@ -36,15 +36,16 @@ def _serverquote(s):
 return b"'%s'" % s.replace(b"'", b"'\\''")


-def _forwardoutput(ui, pipe):
+def _forwardoutput(ui, pipe, warn=False):
     """display all data currently available on pipe as remote output.

     This is non blocking."""
     if pipe:
         s = procutil.readpipe(pipe)
         if s:
+            display = ui.warn if warn else ui.status
             for l in s.splitlines():
+                display(_(b"remote: "), l, b'\n')


 class doublepipe(object):

@@ -178,7 +179,6 @@ def _makeconnection(ui, sshcmd, args, re
 )

 ui.debug(b'running %s\n' % cmd)
-cmd = procutil.quotecommand(cmd)

 # no buffer allow the use of 'select'
 # feel free to remove buffering and select usage when we ultimately

@@ -204,8 +204,12 @@ def _clientcapabilities():

 def _performhandshake(ui, stdin, stdout, stderr):
     def badresponse():
-        # Flush any output on stderr.
-        _forwardoutput(ui, stderr)
+        # Flush any output on stderr. In general, the stderr contains errors
+        # from the remote (ssh errors, some hg errors), and status indications
+        # (like "adding changes"), with no current way to tell them apart.
+        # Here we failed so early that it's almost certainly only errors, so
+        # use warn=True so -q doesn't hide them.
+        _forwardoutput(ui, stderr, warn=True)

         msg = _(b'no suitable response from remote hg')
         hint = ui.config(b'ui', b'ssherrorhint')

@@ -307,7 +311,7 @@ def _performhandshake(ui, stdin, stdout,
 while lines[-1] and max_noise:
     try:
         l = stdout.readline()
-        _forwardoutput(ui, stderr)
+        _forwardoutput(ui, stderr, warn=True)

         # Look for reply to protocol upgrade request. It has a token
         # in it, so there should be no false positives.

@@ -374,7 +378,7 @@ def _performhandshake(ui, stdin, stdout,
 badresponse()

 # Flush any output on stderr before proceeding.
-_forwardoutput(ui, stderr)
+_forwardoutput(ui, stderr, warn=True)

 return protoname, caps

@@ -33,9 +33,8 @@ from .utils import (
 # support for TLS 1.1, TLS 1.2, SNI, system CA stores, etc. These features are
 # all exposed via the "ssl" module.
 #
-# Depending on the version of Python being used, SSL/TLS support is either
-# modern/secure or legacy/insecure. Many operations in this module have
-# separate code paths depending on support in Python.
+# We require in setup.py the presence of ssl.SSLContext, which indicates modern
+# SSL/TLS support.

 configprotocols = {
     b'tls1.0',

@@ -45,76 +44,19 @@ configprotocols = {

 hassni = getattr(ssl, 'HAS_SNI', False)

-# TLS 1.1 and 1.2 may not be supported if the OpenSSL Python is compiled
-# against doesn't support them.
-supportedprotocols = {b'tls1.0'}
-if util.safehasattr(ssl, b'PROTOCOL_TLSv1_1'):
+# ssl.HAS_TLSv1* are preferred to check support but they were added in Python
+# 3.7. Prior to CPython commit 6e8cda91d92da72800d891b2fc2073ecbc134d98
+# (backported to the 3.7 branch), ssl.PROTOCOL_TLSv1_1 / ssl.PROTOCOL_TLSv1_2
+# were defined only if compiled against a OpenSSL version with TLS 1.1 / 1.2
+# support. At the mentioned commit, they were unconditionally defined.
+supportedprotocols = set()
+if getattr(ssl, 'HAS_TLSv1', util.safehasattr(ssl, 'PROTOCOL_TLSv1')):
+    supportedprotocols.add(b'tls1.0')
+if getattr(ssl, 'HAS_TLSv1_1', util.safehasattr(ssl, 'PROTOCOL_TLSv1_1')):
     supportedprotocols.add(b'tls1.1')
-if util.safehasattr(ssl,
+if getattr(ssl, 'HAS_TLSv1_2', util.safehasattr(ssl, 'PROTOCOL_TLSv1_2')):
     supportedprotocols.add(b'tls1.2')

-try:
-    # ssl.SSLContext was added in 2.7.9 and presence indicates modern
-    # SSL/TLS features are available.
-    SSLContext = ssl.SSLContext
-    modernssl = True
-    _canloaddefaultcerts = util.safehasattr(SSLContext, b'load_default_certs')
-except AttributeError:
-    modernssl = False
-    _canloaddefaultcerts = False
-
-    # We implement SSLContext using the interface from the standard library.
-    class SSLContext(object):
-        def __init__(self, protocol):
-            # From the public interface of SSLContext
-            self.protocol = protocol
-            self.check_hostname = False
-            self.options = 0
-            self.verify_mode = ssl.CERT_NONE
-
-            # Used by our implementation.
-            self._certfile = None
-            self._keyfile = None
-            self._certpassword = None
-            self._cacerts = None
-            self._ciphers = None
-
-        def load_cert_chain(self, certfile, keyfile=None, password=None):
-            self._certfile = certfile
-            self._keyfile = keyfile
-            self._certpassword = password
-
-        def load_default_certs(self, purpose=None):
-            pass
-
-        def load_verify_locations(self, cafile=None, capath=None, cadata=None):
-            if capath:
-                raise error.Abort(_(b'capath not supported'))
-            if cadata:
-                raise error.Abort(_(b'cadata not supported'))
-
-            self._cacerts = cafile
-
-        def set_ciphers(self, ciphers):
-            self._ciphers = ciphers
-
-        def wrap_socket(self, socket, server_hostname=None, server_side=False):
-            # server_hostname is unique to SSLContext.wrap_socket and is used
-            # for SNI in that context. So there's nothing for us to do with it
-            # in this legacy code since we don't support SNI.
-
-            args = {
-                'keyfile': self._keyfile,
-                'certfile': self._certfile,
-                'server_side': server_side,
-                'cert_reqs': self.verify_mode,
-                'ssl_version': self.protocol,
-                'ca_certs': self._cacerts,
-                'ciphers': self._ciphers,
-            }
-
-            return ssl.wrap_socket(socket, **args)
-

 def _hostsettings(ui, hostname):
     """Obtain security settings for a hostname.

@@ -135,15 +77,11 @@ def _hostsettings(ui, hostname):
 b'disablecertverification': False,
 # Whether the legacy [hostfingerprints] section has data for this host.
 b'legacyfingerprint': False,
-# PROTOCOL_* constant to use for SSLContext.__init__.
-b'protocol': None,
 # String representation of minimum protocol to be used for UI
 # presentation.
-b'protocol
+b'minimumprotocol': None,
 # ssl.CERT_* constant used by SSLContext.verify_mode.
 b'verifymode': None,
-# Defines extra ssl.OP* bitwise options to set.
-b'ctxoptions': None,
 # OpenSSL Cipher List to use (instead of default).
 b'ciphers': None,
 }

@@ -158,45 +96,30 @@ def _hostsettings(ui, hostname):
 % b' '.join(sorted(configprotocols)),
 )

-# We default to TLS 1.1+
-#
-#
-if b'tls1.1' in supportedprotocols:
-    defaultprotocol = b'tls1.1'
-else:
-    # Let people know they are borderline secure.
-    # We don't document this config option because we want people to see
-    # the bold warnings on the web site.
-    # internal config: hostsecurity.disabletls10warning
-    if not ui.configbool(b'hostsecurity', b'disabletls10warning'):
-        ui.warn(
-            _(
-                b'warning: connecting to %s using legacy security '
-                b'technology (TLS 1.0); see '
-                b'https://mercurial-scm.org/wiki/SecureConnections for '
-                b'more info\n'
-            )
-            % bhostname
-        )
-    defaultprotocol = b'tls1.0'
+# We default to TLS 1.1+ because TLS 1.0 has known vulnerabilities (like
+# BEAST and POODLE). We allow users to downgrade to TLS 1.0+ via config
+# options in case a legacy server is encountered.
+
+# setup.py checks that TLS 1.1 or TLS 1.2 is present, so the following
+# assert should not fail.
+assert supportedprotocols - {b'tls1.0'}
+defaultminimumprotocol = b'tls1.1'

 key = b'minimumprotocol'
-protocol = ui.config(b'hostsecurity', key, defaultprotocol)
-validateprotocol(protocol, key)
+minimumprotocol = ui.config(b'hostsecurity', key, defaultminimumprotocol)
+validateprotocol(minimumprotocol, key)

 key = b'%s:minimumprotocol' % bhostname
-protocol = ui.config(b'hostsecurity', key, protocol)
-validateprotocol(protocol, key)
+minimumprotocol = ui.config(b'hostsecurity', key, minimumprotocol)
+validateprotocol(minimumprotocol, key)

 # If --insecure is used, we allow the use of TLS 1.0 despite config options.
 # We always print a "connection security to %s is disabled..." message when
 # --insecure is used. So no need to print anything more here.
 if ui.insecureconnections:
-    protocol = b'tls1.0'
+    minimumprotocol = b'tls1.0'

-s[b'protocol'], s[b'ctxoptions'], s[b'protocolui'] = protocolsettings(
-    protocol
-)
+s[b'minimumprotocol'] = minimumprotocol

 ciphers = ui.config(b'hostsecurity', b'ciphers')
 ciphers = ui.config(b'hostsecurity', b'%s:ciphers' % bhostname, ciphers)

@@ -288,7 +211,7 @@ def _hostsettings(ui, hostname):

 # Require certificate validation if CA certs are being loaded and
 # verification hasn't been disabled above.
-if cafile or
+if cafile or s[b'allowloaddefaultcerts']:
     s[b'verifymode'] = ssl.CERT_REQUIRED
 else:
     # At this point we don't have a fingerprint, aren't being

@@ -298,59 +221,26 @@ def _hostsettings(ui, hostname):
 # user).
 s[b'verifymode'] = ssl.CERT_NONE

-assert s[b'protocol'] is not None
-assert s[b'ctxoptions'] is not None
 assert s[b'verifymode'] is not None

 return s


-def protocolsettings(protocol):
+def commonssloptions(minimumprotocol):
-    """Resolve the protocol for a config value.
+    """Return SSLContext options common to servers and clients.
-
-    Returns a 3-tuple of (protocol, options, ui value) where the first
-    2 items are values used by SSLContext and the last is a string value
-    of the ``minimumprotocol`` config option equivalent.
     """
-    if protocol not in configprotocols:
+    if minimumprotocol not in configprotocols:
-        raise ValueError(b'protocol value not supported: %s' % protocol)
+        raise ValueError(b'protocol value not supported: %s' % minimumprotocol)
-
-    # Despite its name, PROTOCOL_SSLv23 selects the highest protocol
-    # that both ends support, including TLS protocols. On legacy stacks,
-    # the highest it likely goes is TLS 1.0. On modern stacks, it can
-    # support TLS 1.2.
-    #
-    # The PROTOCOL_TLSv* constants select a specific TLS version
-    # only (as opposed to multiple versions). So the method for
-    # supporting multiple TLS versions is to use PROTOCOL_SSLv23 and
-    # disable protocols via SSLContext.options and OP_NO_* constants.
-    # However, SSLContext.options doesn't work unless we have the
-    # full/real SSLContext available to us.
-    if supportedprotocols == {b'tls1.0'}:
-        if protocol != b'tls1.0':
-            raise error.Abort(
-                _(b'current Python does not support protocol setting %s')
-                % protocol,
-                hint=_(
-                    b'upgrade Python or disable setting since '
-                    b'only TLS 1.0 is supported'
-                ),
-            )
-
-        return ssl.PROTOCOL_TLSv1, 0, b'tls1.0'
-
-    # WARNING: returned options don't work unless the modern ssl module
-    # is available. Be careful when adding options here.

     # SSLv2 and SSLv3 are broken. We ban them outright.
     options = ssl.OP_NO_SSLv2 | ssl.OP_NO_SSLv3

-    if protocol == b'tls1.0':
+    if minimumprotocol == b'tls1.0':
         # Defaults above are to use TLS 1.0+
         pass
-    elif protocol == b'tls1.1':
+    elif minimumprotocol == b'tls1.1':
         options |= ssl.OP_NO_TLSv1
-    elif protocol == b'tls1.2':
+    elif minimumprotocol == b'tls1.2':
         options |= ssl.OP_NO_TLSv1 | ssl.OP_NO_TLSv1_1
     else:
         raise error.Abort(_(b'this should not happen'))

@@ -359,7 +249,7 @@ def protocolsettings(protocol):
 # There is no guarantee this attribute is defined on the module.
 options |= getattr(ssl, 'OP_NO_COMPRESSION', 0)

-return ssl.PROTOCOL_SSLv23, options, protocol
+return options


 def wrapsocket(sock, keyfile, certfile, ui, serverhostname=None):

@@ -414,12 +304,12 @@ def wrapsocket(sock, keyfile, certfile,
 # bundle with a specific CA cert removed. If the system/default CA bundle
 # is loaded and contains that removed CA, you've just undone the user's
 # choice.
-sslcontext = SSLContext(settings[b'protocol'])
-
-# This is a no-op unless using modern ssl.
-sslcontext.options |= settings[b'ctxoptions']
-
-# This still works on our fake SSLContext.
+#
+# Despite its name, PROTOCOL_SSLv23 selects the highest protocol that both
+# ends support, including TLS protocols. commonssloptions() restricts the
+# set of allowed protocols.
+sslcontext = ssl.SSLContext(ssl.PROTOCOL_SSLv23)
+sslcontext.options |= commonssloptions(settings[b'minimumprotocol'])
 sslcontext.verify_mode = settings[b'verifymode']

 if settings[b'ciphers']:

@@ -468,8 +358,6 @@ def wrapsocket(sock, keyfile, certfile,
 # If we're doing certificate verification and no CA certs are loaded,
 # that is almost certainly the reason why verification failed. Provide
 # a hint to the user.
-# Only modern ssl module exposes SSLContext.get_ca_certs() so we can
-# only show this warning if modern ssl is available.
 # The exception handler is here to handle bugs around cert attributes:
 # https://bugs.python.org/issue20916#msg213479. (See issues5313.)
 # When the main 20916 bug occurs, 'sslcontext.get_ca_certs()' is a

@@ -478,7 +366,6 @@ def wrapsocket(sock, keyfile, certfile,
 if (
     caloaded
     and settings[b'verifymode'] == ssl.CERT_REQUIRED
-    and modernssl
     and not sslcontext.get_ca_certs()
 ):
     ui.warn(

@@ -502,7 +389,7 @@ def wrapsocket(sock, keyfile, certfile,
 # reason, try to emit an actionable warning.
 if e.reason == 'UNSUPPORTED_PROTOCOL':
     # We attempted TLS 1.0+.
-    if settings[b'protocol
+    if settings[b'minimumprotocol'] == b'tls1.0':
         # We support more than just TLS 1.0+. If this happens,
         # the likely scenario is either the client or the server
         # is really old. (e.g. server doesn't support TLS 1.0+ or

@@ -547,7 +434,7 @@ def wrapsocket(sock, keyfile, certfile,
 b'to be more secure than the server can support)\n'
 )
 % (
-    settings[b'protocol
+    settings[b'minimumprotocol'],
     pycompat.bytesurl(serverhostname),
 )
 )

@@ -618,12 +505,18 @@ def wrapserversocket(
 _(b'referenced certificate file (%s) does not exist') % f
 )

-protocol, options, _protocolui = protocolsettings(b'tls1.0')
+# Despite its name, PROTOCOL_SSLv23 selects the highest protocol that both
+# ends support, including TLS protocols. commonssloptions() restricts the
+# set of allowed protocols.
+protocol = ssl.PROTOCOL_SSLv23
+options = commonssloptions(b'tls1.0')

 # This config option is intended for use in tests only. It is a giant
 # footgun to kill security. Don't define it.
 exactprotocol = ui.config(b'devel', b'serverexactprotocol')
 if exactprotocol == b'tls1.0':
+    if b'tls1.0' not in supportedprotocols:
+        raise error.Abort(_(b'TLS 1.0 not supported by this Python'))
     protocol = ssl.PROTOCOL_TLSv1
 elif exactprotocol == b'tls1.1':
     if b'tls1.1' not in supportedprotocols:

@@ -638,23 +531,20 @@ def wrapserversocket(
 _(b'invalid value for serverexactprotocol: %s') % exactprotocol
 )

-if modernssl:
-    # We /could/ use create_default_context() here since it doesn't load
-    # CAs when configured for client auth. However, it is hard-coded to
-    # use ssl.PROTOCOL_SSLv23 which may not be appropriate here.
-    sslcontext = SSLContext(protocol)
-    sslcontext.options |= options
+# We /could/ use create_default_context() here since it doesn't load
+# CAs when configured for client auth. However, it is hard-coded to
+# use ssl.PROTOCOL_SSLv23 which may not be appropriate here.
+sslcontext = ssl.SSLContext(protocol)
+sslcontext.options |= options

+# Improve forward secrecy.
+sslcontext.options |= getattr(ssl, 'OP_SINGLE_DH_USE', 0)
+sslcontext.options |= getattr(ssl, 'OP_SINGLE_ECDH_USE', 0)

+# Use the list of more secure ciphers if found in the ssl module.
+if util.safehasattr(ssl, b'_RESTRICTED_SERVER_CIPHERS'):
+    sslcontext.options |= getattr(ssl, 'OP_CIPHER_SERVER_PREFERENCE', 0)
+    sslcontext.set_ciphers(ssl._RESTRICTED_SERVER_CIPHERS)
-else:
-    sslcontext = SSLContext(ssl.PROTOCOL_TLSv1)

 if requireclientcert:
     sslcontext.verify_mode = ssl.CERT_REQUIRED

@@ -797,14 +687,6 @@ def _plainapplepython():
 )


-_systemcacertpaths = [
-    # RHEL, CentOS, and Fedora
-    b'/etc/pki/tls/certs/ca-bundle.trust.crt',
-    # Debian, Ubuntu, Gentoo
-    b'/etc/ssl/certs/ca-certificates.crt',
-]
-
-
 def _defaultcacerts(ui):
     """return path to default CA certificates or None.


@@ -827,23 +709,6 @@ def _defaultcacerts(ui):
 except (ImportError, AttributeError):
     pass

-# On Windows, only the modern ssl module is capable of loading the system
-# CA certificates. If we're not capable of doing that, emit a warning
-# because we'll get a certificate verification error later and the lack
-# of loaded CA certificates will be the reason why.
-# Assertion: this code is only called if certificates are being verified.
-if pycompat.iswindows:
-    if not _canloaddefaultcerts:
-        ui.warn(
-            _(
-                b'(unable to load Windows CA certificates; see '
-                b'https://mercurial-scm.org/wiki/SecureConnections for '
-                b'how to configure Mercurial to avoid this message)\n'
-            )
-        )
-
-    return None
-
 # Apple's OpenSSL has patches that allow a specially constructed certificate
 # to load the system CA store. If we're running on Apple Python, use this
 # trick.

@@ -854,58 +719,6 @@ def _defaultcacerts(ui):
 if os.path.exists(dummycert):
     return dummycert

-# The Apple OpenSSL trick isn't available to us. If Python isn't able to
-# load system certs, we're out of luck.
-if pycompat.isdarwin:
-    # FUTURE Consider looking for Homebrew or MacPorts installed certs
-    # files. Also consider exporting the keychain certs to a file during
-    # Mercurial install.
-    if not _canloaddefaultcerts:
-        ui.warn(
-            _(
-                b'(unable to load CA certificates; see '
-                b'https://mercurial-scm.org/wiki/SecureConnections for '
-                b'how to configure Mercurial to avoid this message)\n'
-            )
-        )
-    return None
-
-# / is writable on Windows. Out of an abundance of caution make sure
-# we're not on Windows because paths from _systemcacerts could be installed
-# by non-admin users.
-assert not pycompat.iswindows
-
-# Try to find CA certificates in well-known locations. We print a warning
-# when using a found file because we don't want too much silent magic
-# for security settings. The expectation is that proper Mercurial
-# installs will have the CA certs path defined at install time and the
-# installer/packager will make an appropriate decision on the user's
-# behalf. We only get here and perform this setting as a feature of
-# last resort.
-if not _canloaddefaultcerts:
-    for path in _systemcacertpaths:
-        if os.path.isfile(path):
-            ui.warn(
-                _(
-                    b'(using CA certificates from %s; if you see this '
-                    b'message, your Mercurial install is not properly '
-                    b'configured; see '
-                    b'https://mercurial-scm.org/wiki/SecureConnections '
-                    b'for how to configure Mercurial to avoid this '
-                    b'message)\n'
-                )
-                % path
-            )
-            return path
-
-ui.warn(
-    _(
-        b'(unable to load CA certificates; see '
-        b'https://mercurial-scm.org/wiki/SecureConnections for '
-        b'how to configure Mercurial to avoid this message)\n'
-    )
-)
-
-return None
 return None

@@ -19,6 +19,8 b' the data.'

 from __future__ import absolute_import

+import contextlib
+
 from .i18n import _

 from . import (
@@ -119,6 +121,7 b' class _statecheck(object):'
         reportonly,
         continueflag,
         stopflag,
+        childopnames,
         cmdmsg,
         cmdhint,
         statushint,
@@ -132,6 +135,8 b' class _statecheck(object):'
         self._reportonly = reportonly
         self._continueflag = continueflag
         self._stopflag = stopflag
+        self._childopnames = childopnames
+        self._delegating = False
         self._cmdmsg = cmdmsg
         self._cmdhint = cmdhint
         self._statushint = statushint
@@ -181,12 +186,15 b' class _statecheck(object):'
         """
         if self._opname == b'merge':
             return len(repo[None].parents()) > 1
+        elif self._delegating:
+            return False
         else:
             return repo.vfs.exists(self._fname)


 # A list of statecheck objects for multistep operations like graft.
 _unfinishedstates = []
+_unfinishedstatesbyname = {}


 def addunfinished(
@@ -197,6 +205,7 b' def addunfinished('
     reportonly=False,
     continueflag=False,
     stopflag=False,
+    childopnames=None,
     cmdmsg=b"",
     cmdhint=b"",
     statushint=b"",
@@ -218,6 +227,8 b' def addunfinished('
     `--continue` option or not.
     stopflag is a boolean that determines whether or not a command supports
     --stop flag
+    childopnames is a list of other opnames this op uses as sub-steps of its
+    own execution. They must already be added.
     cmdmsg is used to pass a different status message in case standard
     message of the format "abort: cmdname in progress" is not desired.
     cmdhint is used to pass a different hint message in case standard
@@ -230,6 +241,7 b' def addunfinished('
     continuefunc stores the function required to finish an interrupted
     operation.
     """
+    childopnames = childopnames or []
     statecheckobj = _statecheck(
         opname,
         fname,
@@ -238,17 +250,98 b' def addunfinished('
         reportonly,
         continueflag,
         stopflag,
+        childopnames,
         cmdmsg,
         cmdhint,
         statushint,
         abortfunc,
         continuefunc,
     )
+
     if opname == b'merge':
         _unfinishedstates.append(statecheckobj)
     else:
+        # This check enforces that for any op 'foo' which depends on op 'bar',
+        # 'foo' comes before 'bar' in _unfinishedstates. This ensures that
+        # getrepostate() always returns the most specific applicable answer.
+        for childopname in childopnames:
+            if childopname not in _unfinishedstatesbyname:
+                raise error.ProgrammingError(
+                    _(b'op %s depends on unknown op %s') % (opname, childopname)
+                )
+
         _unfinishedstates.insert(0, statecheckobj)

+    if opname in _unfinishedstatesbyname:
+        raise error.ProgrammingError(_(b'op %s registered twice') % opname)
+    _unfinishedstatesbyname[opname] = statecheckobj
+
+
+def _getparentandchild(opname, childopname):
+    p = _unfinishedstatesbyname.get(opname, None)
+    if not p:
+        raise error.ProgrammingError(_(b'unknown op %s') % opname)
+    if childopname not in p._childopnames:
+        raise error.ProgrammingError(
+            _(b'op %s does not delegate to %s') % (opname, childopname)
+        )
+    c = _unfinishedstatesbyname[childopname]
+    return p, c
+
+
+@contextlib.contextmanager
+def delegating(repo, opname, childopname):
+    """context wrapper for delegations from opname to childopname.
+
+    requires that childopname was specified when opname was registered.
+
+    Usage:
+      def my_command_foo_that_uses_rebase(...):
+          ...
+          with state.delegating(repo, 'foo', 'rebase'):
+              _run_rebase(...)
+          ...
+    """
+
+    p, c = _getparentandchild(opname, childopname)
+    if p._delegating:
+        raise error.ProgrammingError(
+            _(b'cannot delegate from op %s recursively') % opname
+        )
+    p._delegating = True
+    try:
+        yield
+    except error.ConflictResolutionRequired as e:
+        # Rewrite conflict resolution advice for the parent opname.
+        if e.opname == childopname:
+            raise error.ConflictResolutionRequired(opname)
+        raise e
+    finally:
+        p._delegating = False
+
+
+def ischildunfinished(repo, opname, childopname):
+    """Returns true if both opname and childopname are unfinished."""
+
+    p, c = _getparentandchild(opname, childopname)
+    return (p._delegating or p.isunfinished(repo)) and c.isunfinished(repo)
+
+
+def continuechild(ui, repo, opname, childopname):
+    """Checks that childopname is in progress, and continues it."""
+
+    p, c = _getparentandchild(opname, childopname)
+    if not ischildunfinished(repo, opname, childopname):
+        raise error.ProgrammingError(
+            _(b'child op %s of parent %s is not unfinished')
+            % (childopname, opname)
+        )
+    if not c.continuefunc:
+        raise error.ProgrammingError(
+            _(b'op %s has no continue function') % childopname
+        )
+    return c.continuefunc(ui, repo)
+

 addunfinished(
     b'update',
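The delegation API above is easiest to see end to end. A minimal, hypothetical sketch follows; the op name `histsync` and the helper `_run_rebase_somehow` are invented for illustration, and `rebase` is assumed to have been registered already (normally by the rebase extension)::

    from mercurial import state

    # Parent ops must be registered after the child ops they delegate to.
    state.addunfinished(
        b'histsync',
        fname=b'histsync-state',
        continueflag=True,
        childopnames=[b'rebase'],
    )

    def histsync(ui, repo, **opts):
        if opts.get('continue') and state.ischildunfinished(
            repo, b'histsync', b'rebase'
        ):
            # Resume the rebase that was started on our behalf.
            return state.continuechild(ui, repo, b'histsync', b'rebase')
        # While delegating, conflict errors raised by the child are
        # rewritten so the hint points at 'histsync' instead of 'rebase'.
        with state.delegating(repo, b'histsync', b'rebase'):
            _run_rebase_somehow(ui, repo)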
@@ -20,6 +20,7 b' from . import ('
     narrowspec,
     phases,
     pycompat,
+    scmutil,
     store,
     util,
 )
@@ -187,7 +188,7 b' def maybeperformlegacystreamclone(pullop'
     repo.svfs.options = localrepo.resolvestorevfsoptions(
         repo.ui, repo.requirements, repo.features
     )
-    repo._writerequirements()
+    scmutil.writereporequirements(repo)

     if rbranchmap:
         repo._branchcaches.replace(repo, rbranchmap)
@@ -730,4 +731,4 b' def applybundlev2(repo, fp, filecount, f'
     repo.svfs.options = localrepo.resolvestorevfsoptions(
         repo.ui, repo.requirements, repo.features
     )
-    repo._writerequirements()
+    scmutil.writereporequirements(repo)
@@ -617,8 +617,8 b' class hgsubrepo(abstractsubrepo):'
             ui,
             self._repo,
             diffopts,
-            node1,
-            node2,
+            self._repo[node1],
+            self._repo[node2],
             match,
             prefix=prefix,
             listsubrepos=True,
@@ -639,7 +639,7 b' class hgsubrepo(abstractsubrepo):'
         rev = self._state[1]
         ctx = self._repo[rev]
         scmutil.prefetchfiles(
-            self._repo, [ctx.rev()], scmutil.matchfiles(self._repo, files)
+            self._repo, [(ctx.rev(), scmutil.matchfiles(self._repo, files))]
         )
         total = abstractsubrepo.archive(self, archiver, prefix, match)
         for subpath in ctx.substate:
@@ -419,9 +419,9 b' def getgraphnodecurrent(repo, ctx, cache'
     else:
         merge_nodes = cache.get(b'merge_nodes')
         if merge_nodes is None:
-            from . import merge
+            from . import mergestate as mergestatemod

-            mergestate = merge.mergestate.read(repo)
+            mergestate = mergestatemod.mergestate.read(repo)
             if mergestate.active():
                 merge_nodes = (mergestate.local, mergestate.other)
             else:
@@ -9,6 +9,7 b' from __future__ import absolute_import'

 import collections
 import contextlib
+import datetime
 import errno
 import getpass
 import inspect
@@ -242,6 +243,7 b' class ui(object):'
         self._terminfoparams = {}
         self._styles = {}
         self._uninterruptible = False
+        self.showtimestamp = False

         if src:
             self._fout = src._fout
@@ -561,6 +563,7 b' class ui(object):'
         self._reportuntrusted = self.debugflag or self.configbool(
             b"ui", b"report_untrusted"
         )
+        self.showtimestamp = self.configbool(b'ui', b'timestamp-output')
         self.tracebackflag = self.configbool(b'ui', b'traceback')
         self.logblockedtimes = self.configbool(b'ui', b'logblockedtimes')

@@ -1200,7 +1203,7 b' class ui(object):'
                 dest.write(msg)
                 # stderr may be buffered under win32 when redirected to files,
                 # including stdout.
-                if dest is self._ferr and not getattr(self._ferr, 'closed', False):
+                if dest is self._ferr and not getattr(dest, 'closed', False):
                     dest.flush()
         except IOError as err:
             if dest is self._ferr and err.errno in (
@@ -1217,7 +1220,21 b' class ui(object):'
         ) * 1000

     def _writemsg(self, dest, *args, **opts):
+        timestamp = self.showtimestamp and opts.get('type') in {
+            b'debug',
+            b'error',
+            b'note',
+            b'status',
+            b'warning',
+        }
+        if timestamp:
+            args = (
+                b'[%s] '
+                % pycompat.bytestr(datetime.datetime.now().isoformat()),
+            ) + args
         _writemsgwith(self._write, dest, *args, **opts)
+        if timestamp:
+            dest.flush()

     def _writemsgnobuf(self, dest, *args, **opts):
         _writemsgwith(self._writenobuf, dest, *args, **opts)
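The new `ui.timestamp-output` handling above prefixes status, note, warning, error and debug messages with an ISO-8601 timestamp and flushes after each one. A rough sketch of the effect; the timestamp shown is made up, and this assumes `setconfig()` re-applies the `ui` section via `fixconfig()`, as it normally does::

    from mercurial import ui as uimod

    u = uimod.ui.load()
    u.setconfig(b'ui', b'timestamp-output', b'yes')
    u.status(b'building file list\n')
    # emits something along the lines of:
    # [2020-07-15T08:20:11.843373] building file list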
@@ -2102,6 +2119,22 b' class ui(object):'
         if (b'ui', b'quiet') in overrides:
             self.fixconfig(section=b'ui')

+    def estimatememory(self):
+        """Provide an estimate for the available system memory in Bytes.
+
+        This can be overriden via ui.available-memory. It returns None, if
+        no estimate can be computed.
+        """
+        value = self.config(b'ui', b'available-memory')
+        if value is not None:
+            try:
+                return util.sizetoint(value)
+            except error.ParseError:
+                raise error.ConfigError(
+                    _(b"ui.available-memory value is invalid ('%s')") % value
+                )
+        return util._estimatememory()
+

 class paths(dict):
     """Represents a collection of paths and their configs.
@@ -13,12 +13,12 b' from .i18n import _'
 from .pycompat import getattr
 from . import (
     changelog,
-    copies,
     error,
     filelog,
     hg,
     localrepo,
     manifest,
+    metadata,
     pycompat,
     revlog,
     scmutil,
@@ -78,6 +78,7 b' def supportremovedrequirements(repo):'
         localrepo.SPARSEREVLOG_REQUIREMENT,
         localrepo.SIDEDATA_REQUIREMENT,
         localrepo.COPIESSDC_REQUIREMENT,
+        localrepo.NODEMAP_REQUIREMENT,
     }
     for name in compression.compengines:
         engine = compression.compengines[name]
@@ -105,6 +106,7 b' def supporteddestrequirements(repo):'
         localrepo.SPARSEREVLOG_REQUIREMENT,
         localrepo.SIDEDATA_REQUIREMENT,
         localrepo.COPIESSDC_REQUIREMENT,
+        localrepo.NODEMAP_REQUIREMENT,
     }
     for name in compression.compengines:
         engine = compression.compengines[name]
@@ -132,6 +134,7 b' def allowednewrequirements(repo):'
         localrepo.SPARSEREVLOG_REQUIREMENT,
         localrepo.SIDEDATA_REQUIREMENT,
         localrepo.COPIESSDC_REQUIREMENT,
+        localrepo.NODEMAP_REQUIREMENT,
     }
     for name in compression.compengines:
         engine = compression.compengines[name]
@@ -374,6 +377,21 b' class sidedata(requirementformatvariant)'


 @registerformatvariant
+class persistentnodemap(requirementformatvariant):
+    name = b'persistent-nodemap'
+
+    _requirement = localrepo.NODEMAP_REQUIREMENT
+
+    default = False
+
+    description = _(
+        b'persist the node -> rev mapping on disk to speedup lookup'
+    )
+
+    upgrademessage = _(b'Speedup revision lookup by node id.')
+
+
+@registerformatvariant
 class copiessdc(requirementformatvariant):
     name = b'copies-sdc'

@@ -716,9 +734,9 b' def getsidedatacompanion(srcrepo, dstrep'
             return False, (), {}

     elif localrepo.COPIESSDC_REQUIREMENT in addedreqs:
-        sidedatacompanion = copies.getsidedataadder(srcrepo, dstrepo)
+        sidedatacompanion = metadata.getsidedataadder(srcrepo, dstrepo)
     elif localrepo.COPIESSDC_REQUIREMENT in removedreqs:
-        sidedatacompanion = copies.getsidedataremover(srcrepo, dstrepo)
+        sidedatacompanion = metadata.getsidedataremover(srcrepo, dstrepo)
     return sidedatacompanion


@@ -807,14 +825,14 b' def _clonerevlogs('
     if not revcount:
         return

-    ui.write(
+    ui.status(
         _(
             b'migrating %d total revisions (%d in filelogs, %d in manifests, '
             b'%d in changelog)\n'
         )
         % (revcount, frevcount, mrevcount, crevcount)
     )
-    ui.write(
+    ui.status(
         _(b'migrating %s in store; %s tracked data\n')
         % ((util.bytecount(srcsize), util.bytecount(srcrawsize)))
     )
@@ -837,7 +855,7 b' def _clonerevlogs('
         oldrl = _revlogfrompath(srcrepo, unencoded)

         if isinstance(oldrl, changelog.changelog) and b'c' not in seen:
-            ui.write(
+            ui.status(
                 _(
                     b'finished migrating %d manifest revisions across %d '
                     b'manifests; change in size: %s\n'
@@ -845,7 +863,7 b' def _clonerevlogs('
                 % (mrevcount, mcount, util.bytecount(mdstsize - msrcsize))
             )

-            ui.write(
+            ui.status(
                 _(
                     b'migrating changelog containing %d revisions '
                     b'(%s in store; %s tracked data)\n'
@@ -861,7 +879,7 b' def _clonerevlogs('
                 _(b'changelog revisions'), total=crevcount
             )
         elif isinstance(oldrl, manifest.manifestrevlog) and b'm' not in seen:
-            ui.write(
+            ui.status(
                 _(
                     b'finished migrating %d filelog revisions across %d '
                     b'filelogs; change in size: %s\n'
@@ -869,7 +887,7 b' def _clonerevlogs('
                 % (frevcount, fcount, util.bytecount(fdstsize - fsrcsize))
             )

-            ui.write(
+            ui.status(
                 _(
                     b'migrating %d manifests containing %d revisions '
                     b'(%s in store; %s tracked data)\n'
@@ -888,7 +906,7 b' def _clonerevlogs('
                 _(b'manifest revisions'), total=mrevcount
             )
         elif b'f' not in seen:
-            ui.write(
+            ui.status(
                 _(
                     b'migrating %d filelogs containing %d revisions '
                     b'(%s in store; %s tracked data)\n'
@@ -941,7 +959,7 b' def _clonerevlogs('

     progress.complete()

-    ui.write(
+    ui.status(
         _(
             b'finished migrating %d changelog revisions; change in size: '
             b'%s\n'
@@ -949,7 +967,7 b' def _clonerevlogs('
         % (crevcount, util.bytecount(cdstsize - csrcsize))
     )

-    ui.write(
+    ui.status(
         _(
             b'finished migrating %d total revisions; total change in store '
             b'size: %s\n'
@@ -975,7 +993,7 b' def _filterstorefile(srcrepo, dstrepo, r'
     Function should return ``True`` if the file is to be copied.
     """
     # Skip revlogs.
-    if path.endswith((b'.i', b'.d')):
+    if path.endswith((b'.i', b'.d', b'.n', b'.nd')):
         return False
     # Skip transaction related files.
     if path.startswith(b'undo'):
@@ -1013,7 +1031,7 b' def _upgraderepo('
     assert srcrepo.currentwlock()
     assert dstrepo.currentwlock()

-    ui.write(
+    ui.status(
         _(
             b'(it is safe to interrupt this process any time before '
             b'data migration completes)\n'
@@ -1048,14 +1066,14 b' def _upgraderepo('
         if not _filterstorefile(srcrepo, dstrepo, requirements, p, kind, st):
             continue

-        srcrepo.ui.write(_(b'copying %s\n') % p)
+        srcrepo.ui.status(_(b'copying %s\n') % p)
         src = srcrepo.store.rawvfs.join(p)
         dst = dstrepo.store.rawvfs.join(p)
         util.copyfile(src, dst, copystat=True)

     _finishdatamigration(ui, srcrepo, dstrepo, requirements)

-    ui.write(_(b'data fully migrated to temporary repository\n'))
+    ui.status(_(b'data fully migrated to temporary repository\n'))

     backuppath = pycompat.mkdtemp(prefix=b'upgradebackup.', dir=srcrepo.path)
     backupvfs = vfsmod.vfs(backuppath)
@@ -1067,28 +1085,28 b' def _upgraderepo('
     # as a mechanism to lock out new clients during the data swap. This is
     # better than allowing a client to continue while the repository is in
     # an inconsistent state.
-    ui.write(
+    ui.status(
         _(
             b'marking source repository as being upgraded; clients will be '
             b'unable to read from repository\n'
         )
     )
-    scmutil.writerequires(
-        srcrepo.vfs, srcrepo.requirements | {b'upgradeinprogress'}
-    )
+    scmutil.writereporequirements(
+        srcrepo, srcrepo.requirements | {b'upgradeinprogress'}
+    )

-    ui.write(_(b'starting in-place swap of repository data\n'))
-    ui.write(_(b'replaced files will be backed up at %s\n') % backuppath)
+    ui.status(_(b'starting in-place swap of repository data\n'))
+    ui.status(_(b'replaced files will be backed up at %s\n') % backuppath)

     # Now swap in the new store directory. Doing it as a rename should make
     # the operation nearly instantaneous and atomic (at least in well-behaved
     # environments).
-    ui.write(_(b'replacing store...\n'))
+    ui.status(_(b'replacing store...\n'))
     tstart = util.timer()
     util.rename(srcrepo.spath, backupvfs.join(b'store'))
     util.rename(dstrepo.spath, srcrepo.spath)
     elapsed = util.timer() - tstart
-    ui.write(
+    ui.status(
         _(
             b'store replacement complete; repository was inconsistent for '
             b'%0.1fs\n'
@@ -1098,13 +1116,13 b' def _upgraderepo('

     # We first write the requirements file. Any new requirements will lock
     # out legacy clients.
-    ui.write(
+    ui.status(
         _(
             b'finalizing requirements file and making repository readable '
             b'again\n'
         )
     )
-    scmutil.writerequires(srcrepo.vfs, requirements)
+    scmutil.writereporequirements(srcrepo, requirements)

     # The lock file from the old store won't be removed because nothing has a
     # reference to its new location. So clean it up manually. Alternatively, we
@@ -1274,9 +1292,20 b' def upgraderepo('
         ui.write((b'\n'))
         ui.write(b'\n')

+    def printoptimisations():
+        optimisations = [a for a in actions if a.type == optimisation]
+        optimisations.sort(key=lambda a: a.name)
+        if optimisations:
+            ui.write(_(b'optimisations: '))
+            write_labeled(
+                [a.name for a in optimisations],
+                "upgrade-repo.optimisation.performed",
+            )
+            ui.write(b'\n\n')
+
     def printupgradeactions():
         for a in actions:
-            ui.write(b'%s\n %s\n\n' % (a.name, a.upgrademessage))
+            ui.status(b'%s\n %s\n\n' % (a.name, a.upgrademessage))

     if not run:
         fromconfig = []
@@ -1291,35 +1320,35 b' def upgraderepo('
         if fromconfig or onlydefault:

             if fromconfig:
-                ui.write(
+                ui.status(
                     _(
                         b'repository lacks features recommended by '
                         b'current config options:\n\n'
                     )
                 )
                 for i in fromconfig:
-                    ui.write(b'%s\n %s\n\n' % (i.name, i.description))
+                    ui.status(b'%s\n %s\n\n' % (i.name, i.description))

             if onlydefault:
-                ui.write(
+                ui.status(
                     _(
                         b'repository lacks features used by the default '
                         b'config options:\n\n'
                     )
                 )
                 for i in onlydefault:
-                    ui.write(b'%s\n %s\n\n' % (i.name, i.description))
+                    ui.status(b'%s\n %s\n\n' % (i.name, i.description))

-            ui.write(b'\n')
+            ui.status(b'\n')
         else:
-            ui.write(
+            ui.status(
                 _(
                     b'(no feature deficiencies found in existing '
                     b'repository)\n'
                 )
             )

-        ui.write(
+        ui.status(
             _(
                 b'performing an upgrade with "--run" will make the following '
                 b'changes:\n\n'
@@ -1327,31 +1356,33 b' def upgraderepo('
             )
         )

         printrequirements()
+        printoptimisations()
         printupgradeactions()

         unusedoptimize = [i for i in alloptimizations if i not in actions]

         if unusedoptimize:
-            ui.write(
+            ui.status(
                 _(
                     b'additional optimizations are available by specifying '
                     b'"--optimize <name>":\n\n'
                 )
             )
             for i in unusedoptimize:
-                ui.write(_(b'%s\n  %s\n\n') % (i.name, i.description))
+                ui.status(_(b'%s\n %s\n\n') % (i.name, i.description))
         return

     # Else we're in the run=true case.
     ui.write(_(b'upgrade will perform the following actions:\n\n'))
     printrequirements()
+    printoptimisations()
     printupgradeactions()

     upgradeactions = [a.name for a in actions]

-    ui.write(_(b'beginning upgrade...\n'))
+    ui.status(_(b'beginning upgrade...\n'))
     with repo.wlock(), repo.lock():
-        ui.write(_(b'repository locked and read-only\n'))
+        ui.status(_(b'repository locked and read-only\n'))
         # Our strategy for upgrading the repository is to create a new,
         # temporary repository, write data to it, then do a swap of the
         # data. There are less heavyweight ways to do this, but it is easier
@@ -1360,7 +1391,7 b' def upgraderepo('
         tmppath = pycompat.mkdtemp(prefix=b'upgrade.', dir=repo.path)
         backuppath = None
         try:
-            ui.write(
+            ui.status(
                 _(
                     b'creating temporary repository to stage migrated '
                     b'data: %s\n'
@@ -1377,15 +1408,17 b' def upgraderepo('
                 ui, repo, dstrepo, newreqs, upgradeactions, revlogs=revlogs
             )
             if not (backup or backuppath is None):
-                ui.write(_(b'removing old repository content%s\n') % backuppath)
+                ui.status(
+                    _(b'removing old repository content%s\n') % backuppath
+                )
                 repo.vfs.rmtree(backuppath, forcibly=True)
                 backuppath = None

         finally:
-            ui.write(_(b'removing temporary repository %s\n') % tmppath)
+            ui.status(_(b'removing temporary repository %s\n') % tmppath)
             repo.vfs.rmtree(tmppath, forcibly=True)

-        if backuppath:
+        if backuppath and not ui.quiet:
             ui.warn(
                 _(b'copy of old repository backed up at %s\n') % backuppath
@@ -205,6 +205,8 b' def nouideprecwarn(msg, version, stackle'
         b" update your code.)"
     ) % version
     warnings.warn(pycompat.sysstr(msg), DeprecationWarning, stacklevel + 1)
+    # on python 3 with chg, we will need to explicitly flush the output
+    sys.stderr.flush()


 DIGESTS = {
@@ -1379,8 +1381,8 b' def acceptintervention(tr=None):'


 @contextlib.contextmanager
-def nullcontextmanager():
-    yield
+def nullcontextmanager(enter_result=None):
+    yield enter_result


 class _lrucachenode(object):
@@ -2845,7 +2847,7 b" if pyplatform.python_implementation() =="
     # [1]: fixed by changeset 67dc99a989cd in the cpython hg repo.
     #
     # Here we workaround the EINTR issue for fileobj.__iter__. Other methods
-    # like "read*"
+    # like "read*" work fine, as we do not support Python < 2.7.4.
     #
     # Although we can workaround the EINTR issue for fp.__iter__, it is slower:
     # "for x in fp" is 4x faster than "for x in iter(fp.readline, '')" in
@@ -2857,39 +2859,6 b" if pyplatform.python_implementation() =="
     # affects things like pipes, sockets, ttys etc. We treat "normal" (S_ISREG)
     # files approximately as "fast" files and use the fast (unsafe) code path,
     # to minimize the performance impact.
-    if sys.version_info >= (2, 7, 4):
-        # fp.readline deals with EINTR correctly, use it as a workaround.
-        def _safeiterfile(fp):
-            return iter(fp.readline, b'')
-
-    else:
-        # fp.read* are broken too, manually deal with EINTR in a stupid way.
-        # note: this may block longer than necessary because of bufsize.
-        def _safeiterfile(fp, bufsize=4096):
-            fd = fp.fileno()
-            line = b''
-            while True:
-                try:
-                    buf = os.read(fd, bufsize)
-                except OSError as ex:
-                    # os.read only raises EINTR before any data is read
-                    if ex.errno == errno.EINTR:
-                        continue
-                    else:
-                        raise
-                line += buf
-                if b'\n' in buf:
-                    splitted = line.splitlines(True)
-                    line = b''
-                    for l in splitted:
-                        if l[-1] == b'\n':
-                            yield l
-                        else:
-                            line = l
-                if not buf:
-                    break
-            if line:
-                yield line
-
     def iterfile(fp):
         fastpath = True
@@ -2898,7 +2867,8 b" if pyplatform.python_implementation() =="
         if fastpath:
             return fp
         else:
-            return _safeiterfile(fp)
+            # fp.readline deals with EINTR correctly, use it as a workaround.
+            return iter(fp.readline, b'')


 else:
@@ -3656,3 +3626,44 b' def with_lc_ctype():'
         locale.setlocale(locale.LC_CTYPE, oldloc)
     else:
         yield
+
+
+def _estimatememory():
+    """Provide an estimate for the available system memory in Bytes.
+
+    If no estimate can be provided on the platform, returns None.
+    """
+    if pycompat.sysplatform.startswith(b'win'):
+        # On Windows, use the GlobalMemoryStatusEx kernel function directly.
+        from ctypes import c_long as DWORD, c_ulonglong as DWORDLONG
+        from ctypes.wintypes import Structure, byref, sizeof, windll
+
+        class MEMORYSTATUSEX(Structure):
+            _fields_ = [
+                ('dwLength', DWORD),
+                ('dwMemoryLoad', DWORD),
+                ('ullTotalPhys', DWORDLONG),
+                ('ullAvailPhys', DWORDLONG),
+                ('ullTotalPageFile', DWORDLONG),
+                ('ullAvailPageFile', DWORDLONG),
+                ('ullTotalVirtual', DWORDLONG),
+                ('ullAvailVirtual', DWORDLONG),
+                ('ullExtendedVirtual', DWORDLONG),
+            ]
+
+        x = MEMORYSTATUSEX()
+        x.dwLength = sizeof(x)
+        windll.kernel32.GlobalMemoryStatusEx(byref(x))
+        return x.ullAvailPhys
+
+    # On newer Unix-like systems and Mac OSX, the sysconf interface
+    # can be used. _SC_PAGE_SIZE is part of POSIX; _SC_PHYS_PAGES
+    # seems to be implemented on most systems.
+    try:
+        pagesize = os.sysconf(os.sysconf_names['SC_PAGE_SIZE'])
+        pages = os.sysconf(os.sysconf_names['SC_PHYS_PAGES'])
+        return pagesize * pages
+    except OSError:  # sysconf can fail
+        pass
+    except KeyError:  # unknown parameter
+        pass
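The `nullcontextmanager(enter_result=None)` change above mirrors `contextlib.nullcontext(enter_result)` from newer Python versions: a caller can pass an existing object straight through a `with` statement. A small hypothetical sketch; the file name is a placeholder::

    from mercurial import util

    def read_data(fp=None):
        # reuse the caller's file object, or open (and then close) our own
        cm = util.nullcontextmanager(fp) if fp is not None else open('data.bin', 'rb')
        with cm as f:
            return f.read()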
@@ -37,9 +37,10 b' from ..utils import resourceutil'

 osutil = policy.importmod('osutil')

-stderr = pycompat.stderr
-stdin = pycompat.stdin
-stdout = pycompat.stdout
+if pycompat.iswindows:
+    from .. import windows as platform
+else:
+    from .. import posix as platform


 def isatty(fp):
@@ -49,33 +50,108 b' def isatty(fp):'
         return False


-# Python 2 uses the C library's standard I/O streams. Glibc determines
-# buffering on first write to stdout - if we replace a TTY destined stdout with
-# a pipe destined stdout (e.g. pager), we want line buffering (or unbuffered,
-# on Windows).
-# Python 3 rolls its own standard I/O streams.
-if isatty(stdout):
-    if pycompat.iswindows:
-        # Windows doesn't support line buffering
-        stdout = os.fdopen(stdout.fileno(), 'wb', 0)
-    elif not pycompat.ispy3:
-        # on Python 3, stdout (sys.stdout.buffer) is already line buffered and
-        # buffering=1 is not handled in binary mode
-        stdout = os.fdopen(stdout.fileno(), 'wb', 1)
+class LineBufferedWrapper(object):
+    def __init__(self, orig):
+        self.orig = orig
+
+    def __getattr__(self, attr):
+        return getattr(self.orig, attr)
+
+    def write(self, s):
+        orig = self.orig
+        res = orig.write(s)
+        if s.endswith(b'\n'):
+            orig.flush()
+        return res
+
+
+io.BufferedIOBase.register(LineBufferedWrapper)
+
+
+def make_line_buffered(stream):
+    if pycompat.ispy3 and not isinstance(stream, io.BufferedIOBase):
+        # On Python 3, buffered streams can be expected to subclass
+        # BufferedIOBase. This is definitively the case for the streams
+        # initialized by the interpreter. For unbuffered streams, we don't need
+        # to emulate line buffering.
+        return stream
+    if isinstance(stream, LineBufferedWrapper):
+        return stream
+    return LineBufferedWrapper(stream)
+
+
+class WriteAllWrapper(object):
+    def __init__(self, orig):
+        self.orig = orig
+
+    def __getattr__(self, attr):
+        return getattr(self.orig, attr)
+
+    def write(self, s):
+        write1 = self.orig.write
+        m = memoryview(s)
+        total_to_write = len(s)
+        total_written = 0
+        while total_written < total_to_write:
+            total_written += write1(m[total_written:])
+        return total_written
+
+
+io.IOBase.register(WriteAllWrapper)
+
+
+def _make_write_all(stream):
+    assert pycompat.ispy3
+    if isinstance(stream, WriteAllWrapper):
+        return stream
+    if isinstance(stream, io.BufferedIOBase):
+        # The io.BufferedIOBase.write() contract guarantees that all data is
+        # written.
+        return stream
+    # In general, the write() method of streams is free to write only part of
+    # the data.
+    return WriteAllWrapper(stream)
+
+
+if pycompat.ispy3:
+    # Python 3 implements its own I/O streams.
+    # TODO: .buffer might not exist if std streams were replaced; we'll need
+    # a silly wrapper to make a bytes stream backed by a unicode one.
+    stdin = sys.stdin.buffer
+    stdout = _make_write_all(sys.stdout.buffer)
+    stderr = _make_write_all(sys.stderr.buffer)
+    if pycompat.iswindows:
+        # Work around Windows bugs.
+        stdout = platform.winstdout(stdout)
+        stderr = platform.winstdout(stderr)
+    if isatty(stdout):
+        # The standard library doesn't offer line-buffered binary streams.
+        stdout = make_line_buffered(stdout)
+else:
+    # Python 2 uses the I/O streams provided by the C library.
+    stdin = sys.stdin
+    stdout = sys.stdout
+    stderr = sys.stderr
+    if pycompat.iswindows:
+        # Work around Windows bugs.
+        stdout = platform.winstdout(stdout)
+        stderr = platform.winstdout(stderr)
+    if isatty(stdout):
+        if pycompat.iswindows:
+            # The Windows C runtime library doesn't support line buffering.
+            stdout = make_line_buffered(stdout)
+        else:
+            # glibc determines buffering on first write to stdout - if we
+            # replace a TTY destined stdout with a pipe destined stdout (e.g.
+            # pager), we want line buffering.
+            stdout = os.fdopen(stdout.fileno(), 'wb', 1)

-if pycompat.iswindows:
-    from .. import windows as platform
-
-    stdout = platform.winstdout(stdout)
-else:
-    from .. import posix as platform

 findexe = platform.findexe
 _gethgcmd = platform.gethgcmd
 getuser = platform.getuser
 getpid = os.getpid
 hidewindow = platform.hidewindow
-quotecommand = platform.quotecommand
 readpipe = platform.readpipe
 setbinary = platform.setbinary
 setsignalhandler = platform.setsignalhandler
@@ -140,7 +216,7 b" def popen(cmd, mode=b'rb', bufsize=-1):"

 def _popenreader(cmd, bufsize):
     p = subprocess.Popen(
-        tonativestr(quotecommand(cmd)),
+        tonativestr(cmd),
         shell=True,
         bufsize=bufsize,
         close_fds=closefds,
@@ -151,7 +227,7 b' def _popenreader(cmd, bufsize):'

 def _popenwriter(cmd, bufsize):
     p = subprocess.Popen(
-        tonativestr(quotecommand(cmd)),
+        tonativestr(cmd),
         shell=True,
         bufsize=bufsize,
         close_fds=closefds,
@@ -397,7 +473,6 b' def system(cmd, environ=None, cwd=None, '
             stdout.flush()
         except Exception:
             pass
-    cmd = quotecommand(cmd)
     env = shellenviron(environ)
     if out is None or isstdout(out):
         rc = subprocess.call(
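To see why `WriteAllWrapper` above has to loop: a raw stream's `write()` is allowed to accept only part of the buffer. A toy stream (not part of the patch) makes the difference visible, assuming the patched `procutil` module is importable::

    import io

    from mercurial.utils.procutil import WriteAllWrapper

    class ChunkyStream(io.RawIOBase):
        """toy raw stream that accepts at most 4 bytes per write() call"""

        def __init__(self):
            self.received = b''

        def writable(self):
            return True

        def write(self, b):
            chunk = bytes(b[:4])
            self.received += chunk
            return len(chunk)

    ChunkyStream().write(b'hello world')       # short write: only b'hell' lands
    wrapped = WriteAllWrapper(ChunkyStream())
    wrapped.write(b'hello world')              # loops until all 11 bytes are written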
@@ -186,11 +186,26 b" def posixfile(name, mode=b'r', buffering"
 listdir = osutil.listdir


+# copied from .utils.procutil, remove after Python 2 support was dropped
+def _isatty(fp):
+    try:
+        return fp.isatty()
+    except AttributeError:
+        return False
+
+
 class winstdout(object):
-    '''stdout on windows misbehaves if sent through a pipe'''
+    '''Some files on Windows misbehave.
+
+    When writing to a broken pipe, EINVAL instead of EPIPE may be raised.
+
+    When writing too many bytes to a console at the same, a "Not enough space"
+    error may happen. Python 3 already works around that.
+    '''

     def __init__(self, fp):
         self.fp = fp
+        self.throttle = not pycompat.ispy3 and _isatty(fp)

     def __getattr__(self, key):
         return getattr(self.fp, key)
@@ -203,12 +218,13 b' class winstdout(object):'

     def write(self, s):
         try:
+            if not self.throttle:
+                return self.fp.write(s)
             # This is workaround for "Not enough space" error on
             # writing large size of data to console.
             limit = 16000
             l = len(s)
             start = 0
-            self.softspace = 0
             while start < l:
                 end = start + limit
                 self.fp.write(s[start:end])
@@ -474,14 +490,6 b' def shellsplit(s):'
     return pycompat.maplist(_unquote, pycompat.shlexsplit(s, posix=False))


-def quotecommand(cmd):
-    """Build a command string suitable for os.popen* calls."""
-    if sys.version_info < (2, 7, 1):
-        # Python versions since 2.7.1 do this extra quoting themselves
-        return b'"' + cmd + b'"'
-    return cmd
-
-
 # if you change this stub into a real check, please try to implement the
 # username and groupname functions above, too.
 def isowner(st):
@@ -339,7 +339,7 @@ def capabilities(repo, proto):
 def changegroup(repo, proto, roots):
     nodes = wireprototypes.decodelist(roots)
     outgoing = discovery.outgoing(
-        repo, missingroots=nodes,
+        repo, missingroots=nodes, ancestorsof=repo.heads()
     )
     cg = changegroupmod.makechangegroup(repo, outgoing, b'01', b'serve')
     gen = iter(lambda: cg.read(32768), b'')
@@ -350,7 +350,7 @@ def changegroup(repo, proto, roots):
 def changegroupsubset(repo, proto, bases, heads):
     bases = wireprototypes.decodelist(bases)
     heads = wireprototypes.decodelist(heads)
-    outgoing = discovery.outgoing(repo, missingroots=bases,
+    outgoing = discovery.outgoing(repo, missingroots=bases, ancestorsof=heads)
     cg = changegroupmod.makechangegroup(repo, outgoing, b'01', b'serve')
     gen = iter(lambda: cg.read(32768), b'')
     return wireprototypes.streamres(gen=gen)
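
Both wire-protocol commands above stream the changegroup with
``iter(lambda: cg.read(32768), b'')``. For readers unfamiliar with the
two-argument form of ``iter()``, here is a small standalone demonstration of
that read-in-fixed-chunks idiom (generic Python, unrelated to Mercurial's own
objects)::

  import io

  src = io.BytesIO(b"abcdefghij" * 100)
  # iter(callable, sentinel) keeps calling read(32) until it returns b"".
  chunks = list(iter(lambda: src.read(32), b""))
  assert b"".join(chunks) == b"abcdefghij" * 100
  assert all(len(c) <= 32 for c in chunks)
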
@@ -1,11 +1,45 @@
 == New Features ==
 
+* clonebundles can be annotated with the expected memory requirements
+  using the `REQUIREDRAM` option. This allows clients to skip
+  bundles created with large zstd windows and fallback to larger, but
+  less demanding bundles.
+
+* The `phabricator` extension now provides more functionality of the
+  arcanist CLI like changing the status of a differential.
+
+* Phases processing is much faster, especially for repositories with
+  old non-public changesets.
 
 == New Experimental Features ==
 
+* The core of some hg operations have been (and are being)
+  implemented in rust, for speed. `hg status` on a repository with
+  300k tracked files goes from 1.8s to 0.6s for instance.
+  This has currently been tested only on linux, and does not build on
+  windows. See rust/README.rst in the mercurial repository for
+  instructions to opt into this.
 
 == Backwards Compatibility Changes ==
 
+* Mercurial now requires at least Python 2.7.9 or a Python version that
+  backported modern SSL/TLS features (as defined in PEP 466), and that Python
+  was compiled against an OpenSSL version supporting TLS 1.1 or TLS 1.2
+  (likely this requires the OpenSSL version to be at least 1.0.1).
+
+* The `hg perfwrite` command from contrib/perf.py was made more flexible and
+  changed its default behavior. To get the previous behavior, run `hg perfwrite
+  --nlines=100000 --nitems=1 --item='Testing write performance' --batch-line`.
+
 
 == Internal API Changes ==
 
+* logcmdutil.diffordiffstat() now takes contexts instead of nodes.
+
+* The `mergestate` class along with some related methods and constants have
+  moved from `mercurial.merge` to a new `mercurial.mergestate` module.
+
+* The `phasecache` class now uses sparse dictionaries for the phase data.
+  New accessors are provided to detect if any non-public changeset exists
+  (`hasnonpublicphases`) and get the corresponding root set
+  (`nonpublicphaseroots`).
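
The last bullet describes the sparse phase-root storage only in prose. A
conceptual sketch of what "sparse" storage plus the two new accessors could
look like (illustrative Python only; this is not Mercurial's actual
`phasecache` implementation, and the class and method signatures here are
assumptions for illustration)::

  public, draft, secret = 0, 1, 2  # phase numbers, public being the default

  class sparsephaseroots(object):
      """Toy model: only phases that actually have roots get a dict entry."""

      def __init__(self):
          self._roots = {}  # phase number -> set of root nodes

      def addroot(self, phase, node):
          if phase != public:  # public roots stay implicit, never stored
              self._roots.setdefault(phase, set()).add(node)

      def hasnonpublicphases(self):
          return any(self._roots.values())

      def nonpublicphaseroots(self):
          roots = set()
          for nodes in self._roots.values():
              roots.update(nodes)
          return roots

  cache = sparsephaseroots()
  cache.addroot(public, b"rootA")   # ignored: public entries are implicit
  cache.addroot(draft, b"rootB")
  assert cache.hasnonpublicphases()
  assert cache.nonpublicphaseroots() == {b"rootB"}
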
@@ -42,11 +42,6 @@ version = "1.3.4"
 source = "registry+https://github.com/rust-lang/crates.io-index"
 
 [[package]]
-name = "cc"
-version = "1.0.50"
-source = "registry+https://github.com/rust-lang/crates.io-index"
-
-[[package]]
 name = "cfg-if"
 version = "0.1.10"
 source = "registry+https://github.com/rust-lang/crates.io-index"
@@ -63,7 +58,7 @@ dependencies = [
 
 [[package]]
 name = "clap"
-version = "2.33.0"
+version = "2.33.1"
 source = "registry+https://github.com/rust-lang/crates.io-index"
 dependencies = [
  "ansi_term 0.11.0 (registry+https://github.com/rust-lang/crates.io-index)",
@@ -208,22 +203,20 @@ name = "hg-core"
 version = "0.1.0"
 dependencies = [
  "byteorder 1.3.4 (registry+https://github.com/rust-lang/crates.io-index)",
- "cc 1.0.50 (registry+https://github.com/rust-lang/crates.io-index)",
- "clap 2.33.0 (registry+https://github.com/rust-lang/crates.io-index)",
+ "clap 2.33.1 (registry+https://github.com/rust-lang/crates.io-index)",
  "crossbeam 0.7.3 (registry+https://github.com/rust-lang/crates.io-index)",
  "hex 0.4.2 (registry+https://github.com/rust-lang/crates.io-index)",
  "lazy_static 1.4.0 (registry+https://github.com/rust-lang/crates.io-index)",
- "libc 0.2.67 (registry+https://github.com/rust-lang/crates.io-index)",
  "log 0.4.8 (registry+https://github.com/rust-lang/crates.io-index)",
  "memchr 2.3.3 (registry+https://github.com/rust-lang/crates.io-index)",
  "memmap 0.7.0 (registry+https://github.com/rust-lang/crates.io-index)",
- "micro-timer 0.
+ "micro-timer 0.3.0 (registry+https://github.com/rust-lang/crates.io-index)",
  "pretty_assertions 0.6.1 (registry+https://github.com/rust-lang/crates.io-index)",
  "rand 0.7.3 (registry+https://github.com/rust-lang/crates.io-index)",
  "rand_distr 0.2.2 (registry+https://github.com/rust-lang/crates.io-index)",
  "rand_pcg 0.2.1 (registry+https://github.com/rust-lang/crates.io-index)",
  "rayon 1.3.0 (registry+https://github.com/rust-lang/crates.io-index)",
- "regex 1.3.
+ "regex 1.3.9 (registry+https://github.com/rust-lang/crates.io-index)",
  "same-file 1.0.6 (registry+https://github.com/rust-lang/crates.io-index)",
  "tempfile 3.1.0 (registry+https://github.com/rust-lang/crates.io-index)",
  "twox-hash 1.5.0 (registry+https://github.com/rust-lang/crates.io-index)",
@@ -287,16 +280,16 @@ dependencies = [
 
 [[package]]
 name = "micro-timer"
-version = "0.
+version = "0.3.0"
 source = "registry+https://github.com/rust-lang/crates.io-index"
 dependencies = [
- "micro-timer-macros 0.
+ "micro-timer-macros 0.3.0 (registry+https://github.com/rust-lang/crates.io-index)",
  "scopeguard 1.1.0 (registry+https://github.com/rust-lang/crates.io-index)",
 ]
 
 [[package]]
 name = "micro-timer-macros"
-version = "0.
+version = "0.3.0"
 source = "registry+https://github.com/rust-lang/crates.io-index"
 dependencies = [
  "proc-macro2 1.0.9 (registry+https://github.com/rust-lang/crates.io-index)",
@@ -369,7 +362,7 @@ version = "0.4.1"
 source = "registry+https://github.com/rust-lang/crates.io-index"
 dependencies = [
  "libc 0.2.67 (registry+https://github.com/rust-lang/crates.io-index)",
- "regex 1.3.
+ "regex 1.3.9 (registry+https://github.com/rust-lang/crates.io-index)",
 ]
 
 [[package]]
@@ -378,7 +371,7 @@ version = "0.4.1"
 source = "registry+https://github.com/rust-lang/crates.io-index"
 dependencies = [
  "libc 0.2.67 (registry+https://github.com/rust-lang/crates.io-index)",
- "regex 1.3.
+ "regex 1.3.9 (registry+https://github.com/rust-lang/crates.io-index)",
 ]
 
 [[package]]
@@ -471,18 +464,18 @@ source = "registry+https://github.com/ru
 
 [[package]]
 name = "regex"
-version = "1.3.
+version = "1.3.9"
 source = "registry+https://github.com/rust-lang/crates.io-index"
 dependencies = [
  "aho-corasick 0.7.10 (registry+https://github.com/rust-lang/crates.io-index)",
  "memchr 2.3.3 (registry+https://github.com/rust-lang/crates.io-index)",
- "regex-syntax 0.6.1
+ "regex-syntax 0.6.18 (registry+https://github.com/rust-lang/crates.io-index)",
  "thread_local 1.0.1 (registry+https://github.com/rust-lang/crates.io-index)",
 ]
 
 [[package]]
 name = "regex-syntax"
-version = "0.6.1
+version = "0.6.18"
 source = "registry+https://github.com/rust-lang/crates.io-index"
 
 [[package]]
@@ -494,6 +487,14 @@ dependencies = [
 ]
 
 [[package]]
+name = "rhg"
+version = "0.1.0"
+dependencies = [
+ "clap 2.33.1 (registry+https://github.com/rust-lang/crates.io-index)",
+ "hg-core 0.1.0",
+]
+
+[[package]]
 name = "rustc_version"
 version = "0.2.3"
 source = "registry+https://github.com/rust-lang/crates.io-index"
@@ -655,10 +656,9 @@ source = "registry+https://github.com/ru
 "checksum autocfg 1.0.0 (registry+https://github.com/rust-lang/crates.io-index)" = "f8aac770f1885fd7e387acedd76065302551364496e46b3dd00860b2f8359b9d"
 "checksum bitflags 1.2.1 (registry+https://github.com/rust-lang/crates.io-index)" = "cf1de2fe8c75bc145a2f577add951f8134889b4795d47466a54a5c846d691693"
 "checksum byteorder 1.3.4 (registry+https://github.com/rust-lang/crates.io-index)" = "08c48aae112d48ed9f069b33538ea9e3e90aa263cfa3d1c24309612b1f7472de"
-"checksum cc 1.0.50 (registry+https://github.com/rust-lang/crates.io-index)" = "95e28fa049fda1c330bcf9d723be7663a899c4679724b34c81e9f5a326aab8cd"
 "checksum cfg-if 0.1.10 (registry+https://github.com/rust-lang/crates.io-index)" = "4785bdd1c96b2a846b2bd7cc02e86b6b3dbf14e7e53446c4f54c92a361040822"
 "checksum chrono 0.4.11 (registry+https://github.com/rust-lang/crates.io-index)" = "80094f509cf8b5ae86a4966a39b3ff66cd7e2a3e594accec3743ff3fabeab5b2"
-"checksum clap 2.33.
+"checksum clap 2.33.1 (registry+https://github.com/rust-lang/crates.io-index)" = "bdfa80d47f954d53a35a64987ca1422f495b8d6483c0fe9f7117b36c2a792129"
 "checksum colored 1.9.3 (registry+https://github.com/rust-lang/crates.io-index)" = "f4ffc801dacf156c5854b9df4f425a626539c3a6ef7893cc0c5084a23f0b6c59"
 "checksum cpython 0.4.1 (registry+https://github.com/rust-lang/crates.io-index)" = "bfaf3847ab963e40c4f6dd8d6be279bdf74007ae2413786a0dcbb28c52139a95"
 "checksum crossbeam 0.7.3 (registry+https://github.com/rust-lang/crates.io-index)" = "69323bff1fb41c635347b8ead484a5ca6c3f11914d784170b158d8449ab07f8e"
@@ -680,8 +680,8 @@ source = "registry+https://github.com/ru
 "checksum memchr 2.3.3 (registry+https://github.com/rust-lang/crates.io-index)" = "3728d817d99e5ac407411fa471ff9800a778d88a24685968b36824eaf4bee400"
 "checksum memmap 0.7.0 (registry+https://github.com/rust-lang/crates.io-index)" = "6585fd95e7bb50d6cc31e20d4cf9afb4e2ba16c5846fc76793f11218da9c475b"
 "checksum memoffset 0.5.3 (registry+https://github.com/rust-lang/crates.io-index)" = "75189eb85871ea5c2e2c15abbdd541185f63b408415e5051f5cac122d8c774b9"
-"checksum micro-timer 0.
-"checksum micro-timer-macros 0.
+"checksum micro-timer 0.3.0 (registry+https://github.com/rust-lang/crates.io-index)" = "25b31d6cb9112984323d05d7a353f272ae5d7a307074f9ab9b25c00121b8c947"
+"checksum micro-timer-macros 0.3.0 (registry+https://github.com/rust-lang/crates.io-index)" = "5694085dd384bb9e824207facc040c248d9df653f55e28c3ad0686958b448504"
 "checksum num-integer 0.1.42 (registry+https://github.com/rust-lang/crates.io-index)" = "3f6ea62e9d81a77cd3ee9a2a5b9b609447857f3d358704331e4ef39eb247fcba"
 "checksum num-traits 0.2.11 (registry+https://github.com/rust-lang/crates.io-index)" = "c62be47e61d1842b9170f0fdeec8eba98e60e90e5446449a0545e5152acd7096"
 "checksum num_cpus 1.12.0 (registry+https://github.com/rust-lang/crates.io-index)" = "46203554f085ff89c235cd12f7075f3233af9b11ed7c9e16dfe2560d03313ce6"
@@ -701,8 +701,8 @@ source = "registry+https://github.com/ru
 "checksum rayon 1.3.0 (registry+https://github.com/rust-lang/crates.io-index)" = "db6ce3297f9c85e16621bb8cca38a06779ffc31bb8184e1be4bed2be4678a098"
 "checksum rayon-core 1.7.0 (registry+https://github.com/rust-lang/crates.io-index)" = "08a89b46efaf957e52b18062fb2f4660f8b8a4dde1807ca002690868ef2c85a9"
 "checksum redox_syscall 0.1.56 (registry+https://github.com/rust-lang/crates.io-index)" = "2439c63f3f6139d1b57529d16bc3b8bb855230c8efcc5d3a896c8bea7c3b1e84"
-"checksum regex 1.3.
-"checksum regex-syntax 0.6.1
+"checksum regex 1.3.9 (registry+https://github.com/rust-lang/crates.io-index)" = "9c3780fcf44b193bc4d09f36d2a3c87b251da4a046c87795a0d35f4f927ad8e6"
+"checksum regex-syntax 0.6.18 (registry+https://github.com/rust-lang/crates.io-index)" = "26412eb97c6b088a6997e05f69403a802a92d520de2f8e63c2b65f9e0f47c4e8"
 "checksum remove_dir_all 0.5.2 (registry+https://github.com/rust-lang/crates.io-index)" = "4a83fa3702a688b9359eccba92d153ac33fd2e8462f9e0e3fdf155239ea7792e"
 "checksum rustc_version 0.2.3 (registry+https://github.com/rust-lang/crates.io-index)" = "138e3e0acb6c9fb258b19b67cb8abd63c00679d2851805ea151465464fe9030a"
 "checksum same-file 1.0.6 (registry+https://github.com/rust-lang/crates.io-index)" = "93fc1dc3aaa9bfed95e02e6eadabb4baf7e3078b0bd1b4d7b6b0b68378900502"

@@ -1,3 +1,3 @@
 [workspace]
-members = ["hg-core", "hg-cpython"]
+members = ["hg-core", "hg-cpython", "rhg"]
 exclude = ["chg", "hgcli"]

@@ -8,9 +8,9 @@ improves performance in some areas.
 
 There are currently three independent rust projects:
 - chg. An implementation of chg, in rust instead of C.
-- hgcli. A experiment for starting hg in rust rather than in python,
-  by linking with the python runtime. Probably meant to be replaced by
-  PyOxidizer at some point.
+- hgcli. A project that provide a (mostly) self-contained "hg" binary,
+  for ease of deployment and a bit of speed, using PyOxidizer. See
+  hgcli/README.md.
 - hg-core (and hg-cpython): implementation of some
   functionality of mercurial in rust, e.g. ancestry computations in
   revision graphs, status or pull discovery. The top-level ``Cargo.toml`` file
@@ -27,8 +27,6 @@ built without rust previously)::
   $ ./hg debuginstall | grep -i rust # to validate rust is in use
   checking Rust extensions (installed)
   checking module policy (rust+c-allow)
-  checking "re2" regexp engine Rust bindings (installed)
-
 
 If the environment variable ``HGWITHRUSTEXT=cpython`` is set, the Rust
 extension will be used by default unless ``--no-rust``.
@@ -36,35 +34,20 @@ extension will be used by default unless
 One day we may use this environment variable to switch to new experimental
 binding crates like a hypothetical ``HGWITHRUSTEXT=hpy``.
 
-Using the fastest ``hg status``
--------------------------------
-
-The code for ``hg status`` needs to conform to ``.hgignore`` rules, which are
-all translated into regex.
-
-In the first version, for compatibility and ease of development reasons, the
-Re2 regex engine was chosen until we figured out if the ``regex`` crate had
-similar enough behavior.
-
-Now that that work has been done, the default behavior is to use the ``regex``
-crate, that provides a significant performance boost compared to the standard
-Python + C path in many commands such as ``status``, ``diff`` and ``commit``,
+Profiling
+=========
 
-However, the ``Re2`` path remains slightly faster for our use cases and remains
-a better option for getting the most speed out of your Mercurial.
+Setting the environment variable ``RUST_LOG=trace`` will make hg print
+a few high level rust-related performance numbers. It can also
+indicate why the rust code cannot be used (say, using lookarounds in
+hgignore).
 
-If you want to use ``Re2``, you need to install ``Re2`` following Google's
-guidelines: https://github.com/google/re2/wiki/Install.
-Then, use ``HG_RUST_FEATURES=with-re2`` and
-``HG_RE2_PATH=system|<path to your re2 install>`` when building ``hg`` to
-signal the use of Re2. Using the local path instead of the "system" RE2 links
-it statically.
-
-For example::
-
-  $ HG_RUST_FEATURES=with-re2 HG_RE2_PATH=system make PURE=--rust
-  $ # OR
-  $ HG_RUST_FEATURES=with-re2 HG_RE2_PATH=/path/to/re2 make PURE=--rust
+``py-spy`` (https://github.com/benfred/py-spy) can be used to
+construct a single profile with rust functions and python functions
+(as opposed to ``hg --profile``, which attributes time spent in rust
+to some unlucky python code running shortly after the rust code, and
+as opposed to tools for native code like ``perf``, which attribute
+time to the python interpreter instead of python functions).
 
 Developing Rust
 ===============
@@ -114,14 +97,3 @@ To format the entire Rust workspace::
   $ cargo +nightly fmt
 
 This requires you to have the nightly toolchain installed.
-
-Additional features
--------------------
-
-As mentioned in the section about ``hg status``, code paths using ``re2`` are
-opt-in.
-
-For example::
-
-  $ cargo check --features with-re2
-