@@ -0,0 +1,13 b''
Our full contribution guidelines are in our wiki; please see:

https://www.mercurial-scm.org/wiki/ContributingChanges

If you just want a checklist to follow, you can go straight to

https://www.mercurial-scm.org/wiki/ContributingChanges#Submission_checklist

If you can't run the entire testsuite for some reason (it can be
difficult on Windows), please at least run `contrib/check-code.py` on
any files you've modified and run `python contrib/check-commit` on any
commits you've made (for example, `python contrib/check-commit
273ce12ad8f1` will report some style violations on a very old commit).
@@ -0,0 +1,773 b''
The Mercurial wire protocol is a request-response based protocol
with multiple wire representations.

Each request is modeled as a command name, a dictionary of arguments, and
optional raw input. Command arguments and their types are intrinsic
properties of commands. So is the response type of the command. This means
clients can't always send arbitrary arguments to servers and servers can't
return multiple response types.

The protocol is synchronous and does not support multiplexing (concurrent
commands).

Transport Protocols
===================

HTTP Transport
--------------

Commands are issued as HTTP/1.0 or HTTP/1.1 requests. Commands are
sent to the base URL of the repository with the command name sent in
the ``cmd`` query string parameter. e.g.
``https://example.com/repo?cmd=capabilities``. The HTTP method is ``GET``
or ``POST`` depending on the command and whether there is a request
body.

Command arguments can be sent multiple ways.

The simplest is as part of the URL query string using ``x-www-form-urlencoded``
encoding (see Python's ``urllib.urlencode()``). However, many servers impose
length limitations on the URL. So this mechanism is typically only used if
the server doesn't support other mechanisms.

If the server supports the ``httpheader`` capability, command arguments can
be sent in HTTP request headers named ``X-HgArg-<N>`` where ``<N>`` is an
integer starting at 1. An ``x-www-form-urlencoded`` representation of the
arguments is obtained. This full string is then split into chunks and sent
in numbered ``X-HgArg-<N>`` headers. The maximum length of each HTTP header
is defined by the server in the ``httpheader`` capability value, which defaults
to ``1024``. The server reassembles the encoded arguments string by
concatenating the ``X-HgArg-<N>`` headers then URL decodes them into a
dictionary.
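
As a rough illustration (this is not Mercurial's implementation; the helper
name and the exact accounting for the header name's own length are
assumptions), the client-side chunking could be sketched in Python as::

    import urllib

    def encodeargsinheaders(args, headersize=1024):
        # Sketch only: obtain the x-www-form-urlencoded representation,
        # then slice it into numbered X-HgArg-<N> headers.
        encoded = urllib.urlencode(args)
        # Leave room for the "X-HgArg-<N>: " prefix itself; a real
        # client would measure the actual prefix length.
        chunksize = headersize - len('X-HgArg-999: ')
        headers = {}
        for n, i in enumerate(range(0, len(encoded), chunksize), 1):
            headers['X-HgArg-%d' % n] = encoded[i:i + chunksize]
        return headers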

The list of ``X-HgArg-<N>`` headers should be added to the ``Vary`` request
header to instruct caches to take these headers into consideration when caching
requests.

If the server supports the ``httppostargs`` capability, the client
may send command arguments in the HTTP request body as part of an
HTTP POST request. The command arguments will be URL encoded just like
they would be when sent via HTTP headers. However, no splitting is
performed: the raw arguments are included in the HTTP request body.

The client sends an ``X-HgArgs-Post`` header with the string length of the
encoded arguments data. Additional data may be included in the HTTP
request body immediately following the argument data. The offset of the
non-argument data is defined by the ``X-HgArgs-Post`` header. The
``X-HgArgs-Post`` header is not required if there is no argument data.

Additional command data can be sent as part of the HTTP request body. The
default ``Content-Type`` when sending data is ``application/mercurial-0.1``.
A ``Content-Length`` header is currently always sent.

Example HTTP requests::

    GET /repo?cmd=capabilities
    X-HgArg-1: foo=bar&baz=hello%20world

The ``Content-Type`` HTTP response header identifies the response as coming
from Mercurial and can also be used to signal an error has occurred.

The ``application/mercurial-0.1`` media type indicates a generic Mercurial
response. It matches the media type sent by the client.

The ``application/hg-error`` media type indicates a generic error occurred.
The content of the HTTP response body typically holds text describing the
error.

The ``application/hg-changegroup`` media type indicates a changegroup response
type.

Clients also accept the ``text/plain`` media type. All other media
types should cause the client to error.

Clients should issue a ``User-Agent`` request header that identifies the client.
The server should not use the ``User-Agent`` for feature detection.

A command returning a ``string`` response issues the
``application/mercurial-0.1`` media type and the HTTP response body contains
the raw string value. A ``Content-Length`` header is typically issued.

A command returning a ``stream`` response issues the
``application/mercurial-0.1`` media type and the HTTP response typically
uses *chunked transfer* (``Transfer-Encoding: chunked``).

SSH Transport
-------------

The SSH transport is a custom text-based protocol suitable for use over any
bi-directional stream transport. It is most commonly used with SSH.

An SSH transport server can be started with ``hg serve --stdio``. The stdin,
stderr, and stdout file descriptors of the started process are used to exchange
data. When Mercurial connects to a remote server over SSH, it actually starts
a ``hg serve --stdio`` process on the remote server.

Commands are issued by sending the command name followed by a trailing newline
``\n`` to the server. e.g. ``capabilities\n``.

Command arguments are sent in the following format::

    <argument> <length>\n<value>

That is, the argument name followed by a space followed by the
integer length of the value (expressed as a string) followed by a newline
(``\n``) followed by the raw argument value.

Dictionary arguments are encoded differently::

    <argument> <# elements>\n
    <key1> <length1>\n<value1>
    <key2> <length2>\n<value2>
    ...

Non-argument data is sent immediately after the final argument value. It is
encoded in chunks::

    <length>\n<data>
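
For illustration only (the helper name is hypothetical, and dictionary
arguments and trailing data chunks are omitted), a command with plain string
arguments could be framed like so::

    def encodesshcommand(command, args):
        # Frame the command name, then each argument as
        # '<name> <length>\n<value>'.
        parts = [command + '\n']
        for name, value in sorted(args.items()):
            parts.append('%s %d\n%s' % (name, len(value), value))
        return ''.join(parts)

For example, ``encodesshcommand('lookup', {'key': 'tip'})`` produces
``lookup\nkey 3\ntip``.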

Each command declares a list of supported arguments and their types. If a
client sends an unknown argument to the server, the server should abort
immediately. The special argument ``*`` in a command's definition indicates
that all argument names are allowed.

The definition of supported arguments and types is initially made when a
new command is implemented. The client and server must initially independently
agree on the arguments and their types. This initial set of arguments can be
supplemented through the presence of *capabilities* advertised by the server.

Each command has a defined expected response type.

A ``string`` response type is a length-framed value. The response consists of
the string-encoded integer length of a value followed by a newline (``\n``)
followed by the value. Empty values are allowed (and are represented as
``0\n``).

A ``stream`` response type consists of raw bytes of data. There is no framing.

A generic error response type is also supported. It consists of an error
message written to ``stderr`` followed by ``\n-\n``. In addition, ``\n`` is
written to ``stdout``.

If the server receives an unknown command, it will send an empty ``string``
response.

The server terminates if it receives an empty command (a ``\n`` character).

Capabilities
============

Servers advertise supported wire protocol features. This allows clients to
probe for server features before blindly calling a command or passing a
specific argument.

The server's features are exposed via a *capabilities* string. This is a
space-delimited string of tokens/features. Some features are single words
like ``lookup`` or ``batch``. Others are complicated key-value pairs
advertising sub-features. e.g. ``httpheader=2048``. When complex, non-word
values are used, each feature name can define its own encoding of sub-values.
Comma-delimited and ``x-www-form-urlencoded`` values are common.
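
A minimal client-side parsing sketch (the function name is invented; real
clients also keep the raw value around for feature-specific decoding)::

    def parsecapabilities(caps):
        # Map each space-delimited token to its optional '=value' part.
        result = {}
        for token in caps.split():
            if '=' in token:
                name, value = token.split('=', 1)
            else:
                name, value = token, None
            result[name] = value
        return result

    # parsecapabilities('lookup batch httpheader=1024')
    # -> {'lookup': None, 'batch': None, 'httpheader': '1024'}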

The following sections document capabilities defined by the canonical
Mercurial server implementation.

batch
-----

Whether the server supports the ``batch`` command.

This capability/command was introduced in Mercurial 1.9 (released July 2011).

branchmap
---------

Whether the server supports the ``branchmap`` command.

This capability/command was introduced in Mercurial 1.3 (released July 2009).

bundle2-exp
-----------

Precursor to the ``bundle2`` capability that was used before bundle2 was a
stable feature.

This capability was introduced in Mercurial 3.0 behind an experimental
flag. This capability should not be observed in the wild.

bundle2
-------

Indicates whether the server supports the ``bundle2`` data exchange format.

The value of the capability is a URL quoted, newline (``\n``) delimited
list of keys or key-value pairs.

A key is simply a URL encoded string.

A key-value pair is a URL encoded key separated from a URL encoded value by
an ``=``. If the value is a list, elements are delimited by a ``,`` after
URL encoding.

For example, say we have the values::

    {'HG20': [], 'changegroup': ['01', '02'], 'digests': ['sha1', 'sha512']}

We would first construct a string::

    HG20\nchangegroup=01,02\ndigests=sha1,sha512

We would then URL quote this string::

    HG20%0Achangegroup%3D01%2C02%0Adigests%3Dsha1%2Csha512

This capability was introduced in Mercurial 3.4 (released May 2015).
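
The encoding is small enough to sketch directly (assuming Python 2's
``urllib``; the helper name is invented)::

    import urllib

    def encodebundle2caps(caps):
        # Join keys and key=value1,value2 pairs with newlines, then URL
        # quote the whole string ('=', ',' and '\n' become %3D, %2C, %0A).
        chunks = []
        for key, values in sorted(caps.items()):
            if values:
                chunks.append('%s=%s' % (key, ','.join(values)))
            else:
                chunks.append(key)
        return urllib.quote('\n'.join(chunks), safe='')

Feeding it the example values above reproduces the quoted string shown.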

changegroupsubset
-----------------

Whether the server supports the ``changegroupsubset`` command.

This capability was introduced in Mercurial 0.9.2 (released December
2006).

This capability was introduced at the same time as the ``lookup``
capability/command.

getbundle
---------

Whether the server supports the ``getbundle`` command.

This capability was introduced in Mercurial 1.9 (released July 2011).

httpheader
----------

Whether the server supports receiving command arguments via HTTP request
headers.

The value of the capability is an integer describing the max header
length that clients should send. Clients should ignore any content after a
comma in the value, as this is reserved for future use.

This capability was introduced in Mercurial 1.9 (released July 2011).

httppostargs
------------

**Experimental**

Indicates that the server supports and prefers that clients send command
arguments via an HTTP POST request as part of the request body.

This capability was introduced in Mercurial 3.8 (released May 2016).

known
-----

Whether the server supports the ``known`` command.

This capability/command was introduced in Mercurial 1.9 (released July 2011).

lookup
------

Whether the server supports the ``lookup`` command.

This capability was introduced in Mercurial 0.9.2 (released December
2006).

This capability was introduced at the same time as the ``changegroupsubset``
capability/command.

pushkey
-------

Whether the server supports the ``pushkey`` and ``listkeys`` commands.

This capability was introduced in Mercurial 1.6 (released July 2010).

standardbundle
--------------

**Unsupported**

This capability was introduced during the Mercurial 0.9.2 development cycle in
2006. It was never present in a release, as it was replaced by the ``unbundle``
capability. This capability should not be encountered in the wild.

stream-preferred
----------------

If present, the server prefers that clients clone using the streaming clone
protocol (``hg clone --uncompressed``) rather than the standard
changegroup/bundle based protocol.

This capability was introduced in Mercurial 2.2 (released May 2012).

streamreqs
----------

Indicates whether the server supports *streaming clones* and the *requirements*
that clients must support to receive one.

If present, the server supports the ``stream_out`` command, which transmits
raw revlogs from the repository instead of changegroups. This provides a faster
cloning mechanism at the expense of more bandwidth.

The value of this capability is a comma-delimited list of repo format
*requirements*. These are requirements that impact the reading of data in
the ``.hg/store`` directory. An example value is
``streamreqs=generaldelta,revlogv1`` indicating the server repo requires
the ``revlogv1`` and ``generaldelta`` requirements.

If the only format requirement is ``revlogv1``, the server may expose the
``stream`` capability instead of the ``streamreqs`` capability.

This capability was introduced in Mercurial 1.7 (released November 2010).

stream
------

Whether the server supports *streaming clones* from ``revlogv1`` repos.

If present, the server supports the ``stream_out`` command, which transmits
raw revlogs from the repository instead of changegroups. This provides a faster
cloning mechanism at the expense of more bandwidth.

This capability was introduced in Mercurial 0.9.1 (released July 2006).

When initially introduced, the value of the capability was the numeric
revlog revision. e.g. ``stream=1``. This indicated that the repository's
revlogs use version 1 (``revlogv1``). This simple integer value wasn't
powerful enough, so the ``streamreqs`` capability was invented to handle
cases where the repo requirements have more than just ``revlogv1``. Newer
servers omit the ``=1`` since it was the only value supported and the value
of ``1`` can be implied by clients.

unbundlehash
------------

Whether the ``unbundle`` command supports receiving a hash of all the
heads instead of a list.

For more, see the documentation for the ``unbundle`` command.

This capability was introduced in Mercurial 1.9 (released July 2011).

unbundle
--------

Whether the server supports pushing via the ``unbundle`` command.

This capability/command has been present since Mercurial 0.9.1 (released
July 2006).

Mercurial 0.9.2 (released December 2006) added values to the capability
indicating which bundle types the server supports receiving. This value is a
comma-delimited list. e.g. ``HG10GZ,HG10BZ,HG10UN``. The order of values
reflects the priority/preference of that type, where the first value is the
most preferred type.

Handshake Protocol
==================

While not explicitly required, it is common for clients to perform a
*handshake* when connecting to a server. The handshake accomplishes 2 things:

* Obtaining capabilities and other server features
* Flushing extra server output (e.g. SSH servers may print extra text
  when connecting that may confuse the wire protocol)

This isn't a traditional *handshake* as far as network protocols go because
there is no persistent state as a result of the handshake: the handshake is
simply the issuing of commands and commands are stateless.

The canonical clients perform a capabilities lookup at connection establishment
time. This is because clients must assume a server only supports the features
of the original Mercurial server implementation until proven otherwise (from
advertised capabilities). Nearly every server running today supports features
that weren't present in the original Mercurial server implementation. Rather
than waiting until functionality that needs to consult capabilities is
invoked, clients issue the lookup at connection start to avoid any delay later.

For HTTP servers, the client sends a ``capabilities`` command request as
soon as the connection is established. The server responds with a capabilities
string, which the client parses.

For SSH servers, the client sends the ``hello`` command (no arguments)
and a ``between`` command with the ``pairs`` argument having the value
``0000000000000000000000000000000000000000-0000000000000000000000000000000000000000``.

The ``between`` command has been supported since the original Mercurial
server. Requesting the empty range will return a ``\n`` string response,
which will be encoded as ``1\n\n`` (value length of ``1`` followed by a newline
followed by the value, which happens to be a newline).

The ``hello`` command was later introduced. Servers supporting it will issue
a response to that command before sending the ``1\n\n`` response to the
``between`` command. Servers not supporting ``hello`` will send an empty
response (``0\n``).

In addition to the expected output from the ``hello`` and ``between`` commands,
servers may also send other output, such as *message of the day (MOTD)*
announcements. Clients assume servers will send this output before the
Mercurial server replies to the client-issued commands. So any server output
not conforming to the expected command responses is assumed to be unrelated
to Mercurial and can be ignored.
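
A rough sketch of the client side of this exchange (the helper names are
hypothetical, and timeout and ``stderr`` handling are omitted)::

    def readstring(fh):
        # Read a length-framed ``string`` response: '<length>\n<value>'.
        return fh.read(int(fh.readline()))

    def handshake(stdin, stdout):
        nullid = '0' * 40
        pairs = '%s-%s' % (nullid, nullid)
        stdin.write('hello\n')
        stdin.write('between\n')
        stdin.write('pairs %d\n%s' % (len(pairs), pairs))
        stdin.flush()
        # Skip banner/MOTD lines until one parses as an integer length,
        # which begins the first real command response.
        while True:
            line = stdout.readline()
            try:
                length = int(line)
                break
            except ValueError:
                continue
        hello = stdout.read(length)   # '' from pre-``hello`` servers
        between = readstring(stdout)  # '\n' for the empty range
        return hello, between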

Commands
========

This section contains a list of all wire protocol commands implemented by
the canonical Mercurial server.

batch
-----

Issue multiple commands while sending a single command request. The purpose
of this command is to allow a client to issue multiple commands while avoiding
multiple round trips to the server, thereby enabling commands to complete
more quickly.

The command accepts a ``cmds`` argument that contains a list of commands to
execute.

The value of ``cmds`` is a ``;`` delimited list of strings. Each string has the
form ``<command> <arguments>``. That is, the command name followed by a space
followed by an argument string.

The argument string is a ``,`` delimited list of ``<key>=<value>`` values
corresponding to command arguments. Both the argument name and value are
escaped using a special substitution map::

    : -> :c
    , -> :o
    ; -> :s
    = -> :e

The response type for this command is ``string``. The value contains a
``;`` delimited list of responses for each requested command. Each value
in this list is escaped using the same substitution map used for arguments.

If an error occurs, the generic error response may be sent.
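
Both directions of the escaping can be sketched as follows (the names are
made up; note that ``:`` must be substituted first when escaping and last
when unescaping)::

    BATCHSUBST = [(':', ':c'), (',', ':o'), (';', ':s'), ('=', ':e')]

    def escapebatcharg(value):
        for char, replacement in BATCHSUBST:
            value = value.replace(char, replacement)
        return value

    def unescapebatcharg(value):
        for char, replacement in reversed(BATCHSUBST):
            value = value.replace(replacement, char)
        return value

For example, ``escapebatcharg('key=a,b')`` yields ``key:ea:ob``.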

between
-------

(Legacy command used for discovery in old clients)

Obtain nodes between pairs of nodes.

The ``pairs`` argument contains a space-delimited list of ``-`` delimited
hex node pairs. e.g.::

    a072279d3f7fd3a4aa7ffa1a5af8efc573e1c896-6dc58916e7c070f678682bfe404d2e2d68291a18

Return type is a ``string``. Value consists of lines corresponding to each
requested range. Each line contains a space-delimited list of hex nodes.
A newline ``\n`` terminates each line, including the last one.

branchmap
---------

Obtain heads in named branches.

Accepts no arguments. Return type is a ``string``.

Return value contains lines with URL encoded branch names followed by a space
followed by a space-delimited list of hex nodes of heads on that branch.
e.g.::

    default a072279d3f7fd3a4aa7ffa1a5af8efc573e1c896 6dc58916e7c070f678682bfe404d2e2d68291a18
    stable baae3bf31522f41dd5e6d7377d0edd8d1cf3fccc

There is no trailing newline.

branches
--------

Obtain ancestor changesets of specific nodes back to a branch point.

Despite the name, this command has nothing to do with Mercurial named branches.
Instead, it is related to DAG branches.

The command accepts a ``nodes`` argument, which is a string of space-delimited
hex nodes.

For each node requested, the server will find the first ancestor node that is
a DAG root or is a merge.

Return type is a ``string``. Return value contains lines with result data for
each requested node. Each line contains space-delimited nodes followed by a
newline (``\n``). The 4 nodes reported on each line correspond to the requested
node, the ancestor node found, and its 2 parent nodes (which may be the null
node).

capabilities
------------

Obtain the capabilities string for the repo.

Unlike the ``hello`` command, the capabilities string is not prefixed.
There is no trailing newline.

This command does not accept any arguments. Return type is a ``string``.

changegroup
-----------

(Legacy command: use ``getbundle`` instead)

Obtain a changegroup version 1 with data for changesets that are
descendants of client-specified changesets.

The ``roots`` argument contains a list of space-delimited hex nodes.

The server responds with a changegroup version 1 containing all
changesets between the requested root/base nodes and the repo's head nodes
at the time of the request.

The return type is a ``stream``.

changegroupsubset
-----------------

(Legacy command: use ``getbundle`` instead)

Obtain a changegroup version 1 with data for changesets between
client-specified base and head nodes.

The ``bases`` argument contains a list of space-delimited hex nodes.
The ``heads`` argument contains a list of space-delimited hex nodes.

The server responds with a changegroup version 1 containing all
changesets between the requested base and head nodes at the time of the
request.

The return type is a ``stream``.

clonebundles
------------

Obtains a manifest of bundle URLs available to seed clones.

Each returned line contains a URL followed by metadata. See the
documentation in the ``clonebundles`` extension for more.

The return type is a ``string``.

getbundle
---------

Obtain a bundle containing repository data.

This command accepts the following arguments:

heads
    List of space-delimited hex nodes of heads to retrieve.
common
    List of space-delimited hex nodes that the client has in common with the
    server.
obsmarkers
    Boolean indicating whether to include obsolescence markers as part
    of the response. Only works with bundle2.
bundlecaps
    Comma-delimited set of strings defining client bundle capabilities.
listkeys
    Comma-delimited list of strings of ``pushkey`` namespaces. For each
    namespace listed, a bundle2 part will be included with the content of
    that namespace.
cg
    Boolean indicating whether changegroup data is requested.
cbattempted
    Boolean indicating whether the client attempted to use the *clone bundles*
    feature before performing this request.

The return type on success is a ``stream`` where the value is a bundle.
On the HTTP transport, the response is zlib compressed.

If an error occurs, a generic error response can be sent.

Unless the client sends a false value for the ``cg`` argument, the returned
bundle contains a changegroup with the nodes between the specified ``common``
and ``heads`` nodes. Depending on the command arguments, the type and content
of the returned bundle can vary significantly.

The default behavior is for the server to send a raw changegroup version
``01`` response.

If the ``bundlecaps`` provided by the client contain a value beginning
with ``HG2``, a bundle2 will be returned. The bundle2 data may contain
additional repository data, such as ``pushkey`` namespace values.

heads
-----

Returns a list of space-delimited hex nodes of repository heads followed
by a newline. e.g.
``a9eeb3adc7ddb5006c088e9eda61791c777cbf7c 31f91a3da534dc849f0d6bfc00a395a97cf218a1\n``

This command does not accept any arguments. The return type is a ``string``.

hello
-----

Returns lines describing interesting things about the server in an RFC 822-like
format.

Currently, the only line defines the server capabilities. It has the form::

    capabilities: <value>

See above for more about the capabilities string.

SSH clients typically issue this command as soon as a connection is
established.

This command does not accept any arguments. The return type is a ``string``.

listkeys
--------

List values in a specified ``pushkey`` namespace.

The ``namespace`` argument defines the pushkey namespace to operate on.

The return type is a ``string``. The value is an encoded dictionary of keys.

Key-value pairs are delimited by newlines (``\n``). Within each line, keys and
values are separated by a tab (``\t``). Keys and values are both strings.
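
Decoding the value is a few lines of Python (sketch; the function name is
invented)::

    def decodelistkeys(data):
        result = {}
        for line in data.splitlines():
            key, value = line.split('\t', 1)
            result[key] = value
        return result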

lookup
------

Try to resolve a value to a known repository revision.

The ``key`` argument is converted from bytes to an
``encoding.localstr`` instance then passed into
``localrepository.__getitem__`` in an attempt to resolve it.

The return type is a ``string``.

Upon successful resolution, returns ``1 <hex node>\n``. On failure,
returns ``0 <error string>\n``. e.g.::

    1 273ce12ad8f155317b2c078ec75a4eba507f1fba\n

    0 unknown revision 'foo'\n

known
-----

Determine whether multiple nodes are known.

The ``nodes`` argument is a list of space-delimited hex nodes to check
for existence.

The return type is ``string``.

Returns a string consisting of ``0``s and ``1``s indicating whether nodes
are known. If the Nth node specified in the ``nodes`` argument is known,
a ``1`` will be returned at byte offset N. If the node isn't known, ``0``
will be present at byte offset N.

There is no trailing newline.
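
For example, a response of ``101`` for three requested nodes means the first
and third nodes are known. A decoding sketch::

    def decodeknown(nodes, response):
        # Pair the Nth requested node with the byte at offset N.
        return dict(zip(nodes, [c == '1' for c in response]))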

pushkey
-------

Set a value using the ``pushkey`` protocol.

Accepts arguments ``namespace``, ``key``, ``old``, and ``new``, which
correspond to the pushkey namespace to operate on, the key within that
namespace to change, the old value (which may be empty), and the new value.
All arguments are string types.

The return type is a ``string``. The value depends on the transport protocol.

The SSH transport sends a string encoded integer followed by a newline
(``\n``) which indicates the operation result. The server may send additional
output on the ``stderr`` stream that should be displayed to the user.

The HTTP transport sends a string encoded integer followed by a newline
followed by additional server output that should be displayed to the user.
This may include output from hooks, etc.

The integer result varies by namespace. ``0`` means an error has occurred
and there should be additional output to display to the user.

stream_out
----------

Obtain *streaming clone* data.

The return type is either a ``string`` or a ``stream``, depending on
whether the request was fulfilled properly.

A return value of ``1\n`` indicates the server is not configured to serve
this data. If a client sees this, it may not have verified that the
``stream`` capability is set before making the request.

A return value of ``2\n`` indicates the server was unable to lock the
repository to generate data.

All other responses are a ``stream`` of bytes. The first line of this data
contains 2 space-delimited integers corresponding to the path count and
payload size, respectively::

    <path count> <payload size>\n

The ``<payload size>`` is the total size of path data: it does not include
the size of the per-path header lines.

Following that header are ``<path count>`` entries. Each entry consists of a
line with metadata followed by raw revlog data. The line consists of::

    <store path>\0<size>\n

The ``<store path>`` is the encoded store path of the data that follows.
``<size>`` is the amount of data for this store path/revlog that follows the
newline.

There is no trailer to indicate end of data. Instead, the client should stop
reading after ``<path count>`` entries are consumed.
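
A reader for this framing might be sketched as follows (illustrative only;
the ``1\n``/``2\n`` error responses and short-read handling are omitted)::

    def readstreamclone(fh):
        # First line: '<path count> <payload size>\n'.
        filecount, payloadsize = map(int, fh.readline().split())
        for _ in range(filecount):
            # Per-entry header: '<store path>\0<size>\n'.
            storepath, size = fh.readline().rstrip('\n').split('\0', 1)
            yield storepath, fh.read(int(size))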

unbundle
--------

Send a bundle containing data (usually changegroup data) to the server.

Accepts the argument ``heads``, which is a space-delimited list of hex nodes
corresponding to server repository heads observed by the client. This is used
to detect race conditions and abort push operations before a server performs
too much work or a client transfers too much data.

The request payload consists of a bundle to be applied to the repository,
as if :hg:`unbundle` were called.

In most scenarios, a special ``push response`` type is returned. This type
contains an integer describing the change in heads as a result of the
operation. A value of ``0`` indicates nothing changed. ``1`` means the number
of heads remained the same. Values ``2`` and larger indicate the number of
added heads minus 1. e.g. ``3`` means 2 heads were added. Negative values
indicate the number of fewer heads, also off by 1. e.g. ``-2`` means there
is 1 fewer head.
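
Put differently, any non-zero result ``r`` encodes a head-count delta that is
off by one toward zero (sketch)::

    def headsdelta(r):
        # 1 -> head count unchanged; 3 -> 2 heads added; -2 -> 1 fewer head
        return r - 1 if r > 0 else r + 1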

The encoding of the ``push response`` type varies by transport.

For the SSH transport, this type is composed of 2 ``string`` responses: an
empty response (``0\n``) followed by the integer result value. e.g.
``1\n2``. So the full response might be ``0\n1\n2``.

For the HTTP transport, the response is a ``string`` type composed of an
integer result value followed by a newline (``\n``) followed by string
content holding server output that should be displayed on the client (output
from hooks, etc.).

In some cases, the server may respond with a ``bundle2`` bundle. In this
case, the response type is ``stream``. For the HTTP transport, the response
is zlib compressed.

The server may also respond with a generic error type, which contains a string
indicating the failure.
@@ -0,0 +1,26 b''
#ifndef _HG_MPATCH_H_
#define _HG_MPATCH_H_

#define MPATCH_ERR_NO_MEM -3
#define MPATCH_ERR_CANNOT_BE_DECODED -2
#define MPATCH_ERR_INVALID_PATCH -1

/* A single patch fragment: replace bytes [start, end) of the original
 * with the len bytes at data. */
struct mpatch_frag {
	int start, end, len;
	const char *data;
};

/* A list of patch fragments. */
struct mpatch_flist {
	struct mpatch_frag *base, *head, *tail;
};

int mpatch_decode(const char *bin, ssize_t len, struct mpatch_flist **res);
ssize_t mpatch_calcsize(ssize_t len, struct mpatch_flist *l);
void mpatch_lfree(struct mpatch_flist *a);
int mpatch_apply(char *buf, const char *orig, ssize_t len,
                 struct mpatch_flist *l);
struct mpatch_flist *mpatch_fold(void *bins,
                                 struct mpatch_flist *(*get_next_item)(void *, ssize_t),
                                 ssize_t start, ssize_t end);

#endif
@@ -0,0 +1,164 b''
# profiling.py - profiling functions
#
# Copyright 2016 Gregory Szorc <gregory.szorc@gmail.com>
#
# This software may be used and distributed according to the terms of the
# GNU General Public License version 2 or any later version.

from __future__ import absolute_import, print_function

import contextlib
import os
import sys
import time

from .i18n import _
from . import (
    error,
    util,
)

@contextlib.contextmanager
def lsprofile(ui, fp):
    format = ui.config('profiling', 'format', default='text')
    field = ui.config('profiling', 'sort', default='inlinetime')
    limit = ui.configint('profiling', 'limit', default=30)
    climit = ui.configint('profiling', 'nested', default=0)

    if format not in ['text', 'kcachegrind']:
        ui.warn(_("unrecognized profiling format '%s'"
                  " - Ignored\n") % format)
        format = 'text'

    try:
        from . import lsprof
    except ImportError:
        raise error.Abort(_(
            'lsprof not available - install from '
            'http://codespeak.net/svn/user/arigo/hack/misc/lsprof/'))
    p = lsprof.Profiler()
    p.enable(subcalls=True)
    try:
        yield
    finally:
        p.disable()

        if format == 'kcachegrind':
            from . import lsprofcalltree
            calltree = lsprofcalltree.KCacheGrind(p)
            calltree.output(fp)
        else:
            # format == 'text'
            stats = lsprof.Stats(p.getstats())
            stats.sort(field)
            stats.pprint(limit=limit, file=fp, climit=climit)

@contextlib.contextmanager
def flameprofile(ui, fp):
    try:
        from flamegraph import flamegraph
    except ImportError:
        raise error.Abort(_(
            'flamegraph not available - install from '
            'https://github.com/evanhempel/python-flamegraph'))
    # developer config: profiling.freq
    freq = ui.configint('profiling', 'freq', default=1000)
    filter_ = None
    collapse_recursion = True
    thread = flamegraph.ProfileThread(fp, 1.0 / freq,
                                      filter_, collapse_recursion)
    start_time = time.clock()
    try:
        thread.start()
        yield
    finally:
        thread.stop()
        thread.join()
        print('Collected %d stack frames (%d unique) in %2.2f seconds.' % (
            thread.num_frames(), thread.num_frames(unique=True),
            time.clock() - start_time))

@contextlib.contextmanager
def statprofile(ui, fp):
    try:
        import statprof
    except ImportError:
        raise error.Abort(_(
            'statprof not available - install using "easy_install statprof"'))

    freq = ui.configint('profiling', 'freq', default=1000)
    if freq > 0:
        # Cannot reset when profiler is already active. So silently no-op.
        if statprof.state.profile_level == 0:
            statprof.reset(freq)
    else:
        ui.warn(_("invalid sampling frequency '%s' - ignoring\n") % freq)

    statprof.start()
    try:
        yield
    finally:
        statprof.stop()
        statprof.display(fp)

@contextlib.contextmanager
def profile(ui):
    """Start profiling.

    Profiling is active when the context manager is active. When the context
    manager exits, profiling results will be written to the configured output.
    """
    profiler = os.getenv('HGPROF')
    if profiler is None:
        profiler = ui.config('profiling', 'type', default='ls')
    if profiler not in ('ls', 'stat', 'flame'):
        ui.warn(_("unrecognized profiler '%s' - ignored\n") % profiler)
        profiler = 'ls'

    output = ui.config('profiling', 'output')

    if output == 'blackbox':
        fp = util.stringio()
    elif output:
        path = ui.expandpath(output)
        fp = open(path, 'wb')
    else:
        fp = sys.stderr

    try:
        if profiler == 'ls':
            proffn = lsprofile
        elif profiler == 'flame':
            proffn = flameprofile
        else:
            proffn = statprofile

        with proffn(ui, fp):
            yield

    finally:
        if output:
            if output == 'blackbox':
                val = 'Profile:\n%s' % fp.getvalue()
                # ui.log treats the input as a format string,
                # so we need to escape any % signs.
                val = val.replace('%', '%%')
                ui.log('profile', val)
            fp.close()

@contextlib.contextmanager
def maybeprofile(ui):
    """Profile if enabled, else do nothing.

    This context manager can be used to optionally profile if profiling
    is enabled. Otherwise, it does nothing.

    The purpose of this context manager is to make calling code simpler:
    just use a single code path for calling into code you may want to profile
    and this function determines whether to start profiling.
    """
    if ui.configbool('profiling', 'enabled'):
        with profile(ui):
            yield
    else:
        yield
@@ -164,10 +164,12 b' osx:'
 	--install-lib=/Library/Python/2.7/site-packages/
 	make -C doc all install DESTDIR="$(PWD)/build/mercurial/"
 	mkdir -p $${OUTPUTDIR:-dist}
-	pkgbuild --root build/mercurial/ --identifier org.mercurial-scm.mercurial \
-		build/mercurial.pkg
 	HGVER=$$((cat build/mercurial/Library/Python/2.7/site-packages/mercurial/__version__.py; echo 'print(version)') | python) && \
 	OSXVER=$$(sw_vers -productVersion | cut -d. -f1,2) && \
+	pkgbuild --root build/mercurial/ \
+		--identifier org.mercurial-scm.mercurial \
+		--version "$${HGVER}" \
+		build/mercurial.pkg && \
 	productbuild --distribution contrib/macosx/distribution.xml \
 		--package-path build/ \
 		--version "$${HGVER}" \
@@ -89,9 +89,11 b' shopt -s extglob'
 
 _hg_status()
 {
-    local files="$(_hg_cmd status -n$1 "glob:$cur**")"
-    local IFS=$'\n'
-    COMPREPLY=(${COMPREPLY[@]:-} $(compgen -W '$files' -- "$cur"))
+    if [ -z "$HGCOMPLETE_NOSTATUS" ]; then
+        local files="$(_hg_cmd status -n$1 "glob:$cur**")"
+        local IFS=$'\n'
+        COMPREPLY=(${COMPREPLY[@]:-} $(compgen -W '$files' -- "$cur"))
+    fi
 }
 
 _hg_branches()
@@ -237,7 +237,7 b' pypats = ['
      "tuple parameter unpacking not available in Python 3+"),
     (r'(?<!def)\s+(cmp)\(', "cmp is not available in Python 3+"),
     (r'\breduce\s*\(.*', "reduce is not available in Python 3+"),
-    (r'dict\(.*=', 'dict() is different in Py2 and 3 and is slower than {}',
+    (r'\bdict\(.*=', 'dict() is different in Py2 and 3 and is slower than {}',
      'dict-from-generator'),
     (r'\.has_key\b', "dict.has_key is not available in Python 3+"),
     (r'\s<>\s', '<> operator is not available in Python 3+, use !='),
@@ -295,7 +295,7 b' pypats = ['
      "comparison with singleton, use 'is' or 'is not' instead"),
     (r'^\s*(while|if) [01]:',
      "use True/False for constant Boolean expression"),
-    (r'(?:(?<!def)\s+|\()hasattr',
+    (r'(?:(?<!def)\s+|\()hasattr\(',
      'hasattr(foo, bar) is broken, use util.safehasattr(foo, bar) instead'),
     (r'opener\([^)]*\).read\(',
      "use opener.read() instead"),
@@ -35,13 +35,18 b' errors = ['
      "summary line doesn't start with 'topic: '"),
     (afterheader + r"[A-Z][a-z]\S+", "don't capitalize summary lines"),
     (afterheader + r"[^\n]*: *[A-Z][a-z]\S+", "don't capitalize summary lines"),
-    (afterheader + r"\S*[^A-Za-z0-9-]\S*: ",
+    (afterheader + r"\S*[^A-Za-z0-9-_]\S*: ",
      "summary keyword should be most user-relevant one-word command or topic"),
     (afterheader + r".*\.\s*\n", "don't add trailing period on summary line"),
     (afterheader + r".{79,}", "summary line too long (limit is 78)"),
     (r"\n\+\n( |\+)\n", "adds double empty line"),
     (r"\n \n\+\n", "adds double empty line"),
-    (r"\n\+[ \t]+def [a-z]+_[a-z]", "adds a function with foo_bar naming"),
+    # Forbid "_" in function name.
+    #
+    # We skip the check for cffi related functions. They use names mapping the
+    # name of the C function. C function names may contain "_".
+    (r"\n\+[ \t]+def (?!cffi)[a-z]+_[a-z]",
+     "adds a function with foo_bar naming"),
 ]
 
 word = re.compile('\S')
@@ -10,7 +10,6 b''
 from __future__ import absolute_import, print_function
 
 import ast
-import imp
 import os
 import sys
 import traceback
@@ -41,6 +40,7 b' def check_compat_py2(f):'
 
 def check_compat_py3(f):
     """Check Python 3 compatibility of a file with Python 3."""
+    import importlib  # not available on Python 2.6
     with open(f, 'rb') as fh:
         content = fh.read()
 
@@ -55,34 +55,33 b' def check_compat_py3(f):'
     # out module paths for things not in a package can be confusing.
     if f.startswith(('hgext/', 'mercurial/')) and not f.endswith('__init__.py'):
         assert f.endswith('.py')
-        name = f.replace('/', '.')[:-3]
-        with open(f, 'r') as fh:
-            try:
-                imp.load_module(name, fh, '', ('py', 'r', imp.PY_SOURCE))
-            except Exception as e:
-                exc_type, exc_value, tb = sys.exc_info()
-                # We walk the stack and ignore frames from our custom importer,
-                # import mechanisms, and stdlib modules. This kinda/sorta
-                # emulates CPython behavior in import.c while also attempting
-                # to pin blame on a Mercurial file.
-                for frame in reversed(traceback.extract_tb(tb)):
-                    if frame.name == '_call_with_frames_removed':
-                        continue
-                    if 'importlib' in frame.filename:
-                        continue
-                    if 'mercurial/__init__.py' in frame.filename:
-                        continue
-                    if frame.filename.startswith(sys.prefix):
-                        continue
-                    break
-
-                if frame.filename:
-                    filename = os.path.basename(frame.filename)
-                    print('%s: error importing: <%s> %s (error at %s:%d)' % (
-                        f, type(e).__name__, e, filename, frame.lineno))
-                else:
-                    print('%s: error importing module: <%s> %s (line %d)' % (
-                        f, type(e).__name__, e, frame.lineno))
+        name = f.replace('/', '.')[:-3].replace('.pure.', '.')
+        try:
+            importlib.import_module(name)
+        except Exception as e:
+            exc_type, exc_value, tb = sys.exc_info()
+            # We walk the stack and ignore frames from our custom importer,
+            # import mechanisms, and stdlib modules. This kinda/sorta
+            # emulates CPython behavior in import.c while also attempting
+            # to pin blame on a Mercurial file.
+            for frame in reversed(traceback.extract_tb(tb)):
+                if frame.name == '_call_with_frames_removed':
+                    continue
+                if 'importlib' in frame.filename:
+                    continue
+                if 'mercurial/__init__.py' in frame.filename:
+                    continue
+                if frame.filename.startswith(sys.prefix):
+                    continue
+                break
+
+            if frame.filename:
+                filename = os.path.basename(frame.filename)
+                print('%s: error importing: <%s> %s (error at %s:%d)' % (
+                    f, type(e).__name__, e, filename, frame.lineno))
+            else:
+                print('%s: error importing module: <%s> %s (line %d)' % (
+                    f, type(e).__name__, e, frame.lineno))
 
 if __name__ == '__main__':
     if sys.version_info[0] == 2:
@@ -126,15 +126,10 b' static void readchannel(hgclient_t *hgc)'
 		return;  /* assumes input request */
 
 	size_t cursize = 0;
-	int emptycount = 0;
 	while (cursize < hgc->ctx.datasize) {
 		rsize = recv(hgc->sockfd, hgc->ctx.data + cursize,
 			     hgc->ctx.datasize - cursize, 0);
-		/* rsize == 0 normally indicates EOF, while it's also a valid
-		 * packet size for unix socket. treat it as EOF and abort if
-		 * we get many empty responses in a row. */
-		emptycount = (rsize == 0 ? emptycount + 1 : 0);
-		if (rsize < 0 || emptycount > 20)
+		if (rsize < 1)
 			abortmsg("failed to read data block");
 		cursize += rsize;
 	}
@@ -25,6 +25,7 b' import random'
 import sys
 import time
 from mercurial import (
+    changegroup,
     cmdutil,
     commands,
     copies,
@@ -130,15 +131,53 b' def gettimer(ui, opts=None):'
 
     # enforce an idle period before execution to counteract power management
     # experimental config: perf.presleep
-    time.sleep(ui.configint("perf", "presleep", 1))
+    time.sleep(getint(ui, "perf", "presleep", 1))
 
     if opts is None:
         opts = {}
     # redirect all to stderr
     ui = ui.copy()
-    ui.fout = ui.ferr
+    uifout = safeattrsetter(ui, 'fout', ignoremissing=True)
+    if uifout:
+        # for "historical portability":
+        # ui.fout/ferr have been available since 1.9 (or 4e1ccd4c2b6d)
+        uifout.set(ui.ferr)
+
     # get a formatter
-    fm = ui.formatter('perf', opts)
+    uiformatter = getattr(ui, 'formatter', None)
+    if uiformatter:
+        fm = uiformatter('perf', opts)
+    else:
+        # for "historical portability":
+        # define formatter locally, because ui.formatter has been
+        # available since 2.2 (or ae5f92e154d3)
+        from mercurial import node
+        class defaultformatter(object):
+            """Minimized composition of baseformatter and plainformatter
+            """
+            def __init__(self, ui, topic, opts):
+                self._ui = ui
+                if ui.debugflag:
+                    self.hexfunc = node.hex
+                else:
+                    self.hexfunc = node.short
+            def __nonzero__(self):
+                return False
+            def startitem(self):
+                pass
+            def data(self, **data):
+                pass
+            def write(self, fields, deftext, *fielddata, **opts):
+                self._ui.write(deftext % fielddata, **opts)
+            def condwrite(self, cond, fields, deftext, *fielddata, **opts):
+                if cond:
+                    self._ui.write(deftext % fielddata, **opts)
+            def plain(self, text, **opts):
+                self._ui.write(text, **opts)
+            def end(self):
+                pass
+        fm = defaultformatter(ui, 'perf', opts)
+
     # stub function, runs code only once instead of in a loop
     # experimental config: perf.stub
     if ui.configbool("perf", "stub"):
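The pattern here is feature detection rather than version comparison: probe for ui.formatter with getattr and fall back to a local shim that answers the same calls. Stripped of the perf-specific details, the shape is roughly this (the names below are illustrative, not from perf.py):

    class nullformatter(object):
        # Bare-bones stand-in exposing only the calls the benchmark makes.
        def __init__(self, ui):
            self._ui = ui
        def startitem(self):
            pass
        def write(self, fields, deftext, *fielddata, **opts):
            self._ui.write(deftext % fielddata, **opts)
        def end(self):
            pass

    def getformatter(ui, topic, opts):
        if getattr(ui, 'formatter', None) is not None:
            return ui.formatter(topic, opts)   # modern API available
        return nullformatter(ui)               # old hg: use the shim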
@@ -181,6 +220,121 b' def _timer(fm, func, title=None):'
     fm.write('count', ' (best of %d)', count)
     fm.plain('\n')
 
+# utilities for historical portability
+
+def getint(ui, section, name, default):
+    # for "historical portability":
+    # ui.configint has been available since 1.9 (or fa2b596db182)
+    v = ui.config(section, name, None)
+    if v is None:
+        return default
+    try:
+        return int(v)
+    except ValueError:
+        raise error.ConfigError(("%s.%s is not an integer ('%s')")
+                                % (section, name, v))
+
+def safeattrsetter(obj, name, ignoremissing=False):
+    """Ensure that 'obj' has 'name' attribute before subsequent setattr
+
+    This function is aborted, if 'obj' doesn't have 'name' attribute
+    at runtime. This avoids overlooking removal of an attribute, which
+    breaks assumption of performance measurement, in the future.
+
+    This function returns the object to (1) assign a new value, and
+    (2) restore an original value to the attribute.
+
+    If 'ignoremissing' is true, missing 'name' attribute doesn't cause
+    abortion, and this function returns None. This is useful to
+    examine an attribute, which isn't ensured in all Mercurial
+    versions.
+    """
+    if not util.safehasattr(obj, name):
+        if ignoremissing:
+            return None
+        raise error.Abort(("missing attribute %s of %s might break assumption"
+                           " of performance measurement") % (name, obj))
+
+    origvalue = getattr(obj, name)
+    class attrutil(object):
+        def set(self, newvalue):
+            setattr(obj, name, newvalue)
+        def restore(self):
+            setattr(obj, name, origvalue)
+
+    return attrutil()
+
+# utilities to examine each internal API changes
+
+def getbranchmapsubsettable():
+    # for "historical portability":
+    # subsettable is defined in:
+    # - branchmap since 2.9 (or 175c6fd8cacc)
+    # - repoview since 2.5 (or 59a9f18d4587)
+    for mod in (branchmap, repoview):
+        subsettable = getattr(mod, 'subsettable', None)
+        if subsettable:
+            return subsettable
+
+    # bisecting in bcee63733aad::59a9f18d4587 can reach here (both
+    # branchmap and repoview modules exist, but subsettable attribute
+    # doesn't)
+    raise error.Abort(("perfbranchmap not available with this Mercurial"),
+                      hint="use 2.5 or later")
+
+def getsvfs(repo):
+    """Return appropriate object to access files under .hg/store
+    """
+    # for "historical portability":
+    # repo.svfs has been available since 2.3 (or 7034365089bf)
+    svfs = getattr(repo, 'svfs', None)
+    if svfs:
+        return svfs
+    else:
+        return getattr(repo, 'sopener')
+
+def getvfs(repo):
+    """Return appropriate object to access files under .hg
+    """
+    # for "historical portability":
+    # repo.vfs has been available since 2.3 (or 7034365089bf)
+    vfs = getattr(repo, 'vfs', None)
+    if vfs:
+        return vfs
+    else:
+        return getattr(repo, 'opener')
+
+def repocleartagscachefunc(repo):
+    """Return the function to clear tags cache according to repo internal API
+    """
+    if util.safehasattr(repo, '_tagscache'): # since 2.0 (or 9dca7653b525)
+        # in this case, setattr(repo, '_tagscache', None) or so isn't
+        # correct way to clear tags cache, because existing code paths
+        # expect _tagscache to be a structured object.
+        def clearcache():
+            # _tagscache has been filteredpropertycache since 2.5 (or
+            # 98c867ac1330), and delattr() can't work in such case
+            if '_tagscache' in vars(repo):
+                del repo.__dict__['_tagscache']
+        return clearcache
+
+    repotags = safeattrsetter(repo, '_tags', ignoremissing=True)
+    if repotags: # since 1.4 (or 5614a628d173)
+        return lambda : repotags.set(None)
+
+    repotagscache = safeattrsetter(repo, 'tagscache', ignoremissing=True)
+    if repotagscache: # since 0.6 (or d7df759d0e97)
+        return lambda : repotagscache.set(None)
+
+    # Mercurial earlier than 0.6 (or d7df759d0e97) logically reaches
+    # this point, but it isn't so problematic, because:
+    # - repo.tags of such Mercurial isn't "callable", and repo.tags()
+    #   in perftags() causes failure soon
+    # - perf.py itself has been available since 1.1 (or eb240755386d)
+    raise error.Abort(("tags API of this hg command is unknown"))
+
+# perf commands
+
 @command('perfwalk', formatteropts)
 def perfwalk(ui, repo, *pats, **opts):
     timer, fm = gettimer(ui, opts)
249 | import mercurial.changelog |
|
403 | import mercurial.changelog | |
250 | import mercurial.manifest |
|
404 | import mercurial.manifest | |
251 | timer, fm = gettimer(ui, opts) |
|
405 | timer, fm = gettimer(ui, opts) | |
|
406 | svfs = getsvfs(repo) | |||
|
407 | repocleartagscache = repocleartagscachefunc(repo) | |||
252 | def t(): |
|
408 | def t(): | |
253 |
repo.changelog = mercurial.changelog.changelog( |
|
409 | repo.changelog = mercurial.changelog.changelog(svfs) | |
254 |
repo.manifest = mercurial.manifest.manifest( |
|
410 | repo.manifest = mercurial.manifest.manifest(svfs) | |
255 |
repo |
|
411 | repocleartagscache() | |
256 | return len(repo.tags()) |
|
412 | return len(repo.tags()) | |
257 | timer(t) |
|
413 | timer(t) | |
258 | fm.end() |
|
414 | fm.end() | |
@@ -279,6 +435,37 b' def perfancestorset(ui, repo, revset, **'
     timer(d)
     fm.end()
 
+@command('perfchangegroupchangelog', formatteropts +
+         [('', 'version', '02', 'changegroup version'),
+          ('r', 'rev', '', 'revisions to add to changegroup')])
+def perfchangegroupchangelog(ui, repo, version='02', rev=None, **opts):
+    """Benchmark producing a changelog group for a changegroup.
+
+    This measures the time spent processing the changelog during a
+    bundle operation. This occurs during `hg bundle` and on a server
+    processing a `getbundle` wire protocol request (handles clones
+    and pull requests).
+
+    By default, all revisions are added to the changegroup.
+    """
+    cl = repo.changelog
+    revs = [cl.lookup(r) for r in repo.revs(rev or 'all()')]
+    bundler = changegroup.getbundler(version, repo)
+
+    def lookup(node):
+        # The real bundler reads the revision in order to access the
+        # manifest node and files list. Do that here.
+        cl.read(node)
+        return node
+
+    def d():
+        for chunk in bundler.group(revs, cl, lookup):
+            pass
+
+    timer, fm = gettimer(ui, opts)
+    timer(d)
+    fm.end()
+
 @command('perfdirs', formatteropts)
 def perfdirs(ui, repo, **opts):
     timer, fm = gettimer(ui, opts)
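One detail worth noting in d() above: bundler.group() is a generator, so the benchmark has to iterate it to completion; merely calling group() would time almost nothing. The same pitfall in miniature:

    import timeit

    def producer():
        for i in range(100000):
            yield i

    # Creating the generator is effectively free...
    print(timeit.timeit(lambda: producer(), number=100))
    # ...the cost only appears once every item is consumed, as in
    # `for chunk in bundler.group(...): pass` above.
    print(timeit.timeit(lambda: sum(1 for _ in producer()), number=100))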
@@ -399,8 +586,9 b' def perfindex(ui, repo, **opts):'
     timer, fm = gettimer(ui, opts)
     mercurial.revlog._prereadsize = 2**24 # disable lazy parser in old hg
     n = repo["tip"].node()
+    svfs = getsvfs(repo)
     def d():
-        cl = mercurial.revlog.revlog(repo.svfs, "00changelog.i")
+        cl = mercurial.revlog.revlog(svfs, "00changelog.i")
         cl.rev(n)
     timer(d)
     fm.end()
@@ -423,7 +611,7 b' def perfparents(ui, repo, **opts):'
     timer, fm = gettimer(ui, opts)
     # control the number of commits perfparents iterates over
     # experimental config: perf.parentscount
-    count = ui.configint("perf", "parentscount", 1000)
+    count = getint(ui, "perf", "parentscount", 1000)
     if len(repo.changelog) < count:
         raise error.Abort("repo needs %d commits for this test" % count)
     repo = repo.unfiltered()
@@ -472,7 +660,7 b' def perfnodelookup(ui, repo, rev, **opts'
     import mercurial.revlog
     mercurial.revlog._prereadsize = 2**24 # disable lazy parser in old hg
     n = repo[rev].node()
-    cl = mercurial.revlog.revlog(repo.svfs, "00changelog.i")
+    cl = mercurial.revlog.revlog(getsvfs(repo), "00changelog.i")
     def d():
         cl.rev(n)
         clearcaches(cl)
@@ -543,8 +731,8 b' def perffncachewrite(ui, repo, **opts):'
         s.fncache._dirty = True
         s.fncache.write(tr)
     timer(d)
+    tr.close()
     lock.release()
-    tr.close()
     fm.end()
 
 @command('perffncacheencode', formatteropts)
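The swap of tr.close() and lock.release() is a correctness fix, not a cleanup: the transaction should be committed while the store lock is still held, so no other process can observe a partially written fncache. In outline, given a repo object (a sketch of the nesting, not the benchmark itself):

    lock = repo.lock()                 # take the store lock first
    try:
        tr = repo.transaction('bench')
        try:
            pass                       # ... mutate store files here
            tr.close()                 # commit inside the locked region
        finally:
            tr.release()               # rolls back if close() never ran
    finally:
        lock.release()                 # unlock only after the commit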
@@ -580,9 +768,10 b' def perfdiffwd(ui, repo, **opts):'
 
 @command('perfrevlog', revlogopts + formatteropts +
          [('d', 'dist', 100, 'distance between the revisions'),
-          ('s', 'startrev', 0, 'revision to start reading at')],
+          ('s', 'startrev', 0, 'revision to start reading at'),
+          ('', 'reverse', False, 'read in reverse')],
          '-c|-m|FILE')
-def perfrevlog(ui, repo, file_=None, startrev=0, **opts):
+def perfrevlog(ui, repo, file_=None, startrev=0, reverse=False, **opts):
     """Benchmark reading a series of revisions from a revlog.
 
     By default, we read every ``-d/--dist`` revision from 0 to tip of
@@ -591,11 +780,20 b' def perfrevlog(ui, repo, file_=None, sta'
     The start revision can be defined via ``-s/--startrev``.
     """
     timer, fm = gettimer(ui, opts)
-    dist = opts['dist']
     _len = getlen(ui)
+
     def d():
         r = cmdutil.openrevlog(repo, 'perfrevlog', file_, opts)
-        for x in xrange(startrev, _len(r), dist):
+
+        startrev = 0
+        endrev = _len(r)
+        dist = opts['dist']
+
+        if reverse:
+            startrev, endrev = endrev, startrev
+            dist = -1 * dist
+
+        for x in xrange(startrev, endrev, dist):
             r.revision(r.node(x))
 
     timer(d)
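The --reverse handling relies on plain xrange semantics: start is inclusive, stop is exclusive, and a negative step walks downward, so swapping the endpoints and negating dist reverses the scan. (Note that the rewritten d() also assigns startrev = 0 locally, shadowing the function argument.) The endpoint behaviour in isolation:

    dist = 2
    print(list(range(0, 7, dist)))    # forward:  [0, 2, 4, 6]  (7 excluded)
    print(list(range(7, 0, -dist)))   # reversed: [7, 5, 3, 1]  (0 excluded)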
@@ -772,10 +970,11 b' def perfbranchmap(ui, repo, full=False, '
         return d
     # add filter in smaller subset to bigger subset
     possiblefilters = set(repoview.filtertable)
+    subsettable = getbranchmapsubsettable()
     allfilters = []
     while possiblefilters:
         for name in possiblefilters:
-            subset = branchmap.subsettable.get(name)
+            subset = subsettable.get(name)
             if subset not in possiblefilters:
                 break
         else:
@@ -789,16 +988,17 b' def perfbranchmap(ui, repo, full=False, '
             repo.filtered(name).branchmap()
     # add unfiltered
     allfilters.append(None)
-    oldread = branchmap.read
-    oldwrite = branchmap.branchcache.write
+
+    branchcacheread = safeattrsetter(branchmap, 'read')
+    branchcachewrite = safeattrsetter(branchmap.branchcache, 'write')
+    branchcacheread.set(lambda repo: None)
+    branchcachewrite.set(lambda bc, repo: None)
     try:
-        branchmap.read = lambda repo: None
-        branchmap.write = lambda repo: None
         for name in allfilters:
             timer(getbranchmap(name), title=str(name))
     finally:
-        branchmap.read = oldread
-        branchmap.branchcache.write = oldwrite
+        branchcacheread.restore()
+        branchcachewrite.restore()
     fm.end()
 
 @command('perfloadmarkers')
@@ -807,7 +1007,8 b' def perfloadmarkers(ui, repo):'
 
     Result is the number of markers in the repo."""
     timer, fm = gettimer(ui)
-    timer(lambda: len(obsolete.obsstore(repo.svfs)))
+    svfs = getsvfs(repo)
+    timer(lambda: len(obsolete.obsstore(svfs)))
     fm.end()
 
 @command('perflrucachedict', formatteropts +
@@ -62,11 +62,11 b' from mercurial import ('
     util,
 )
 
-# Note for extension authors: ONLY specify testedwith = 'internal' for
+# Note for extension authors: ONLY specify testedwith = 'ships-with-hg-core' for
 # extensions which SHIP WITH MERCURIAL. Non-mainline extensions should
 # be specifying the version(s) of Mercurial they are tested with, or
 # leave the attribute unspecified.
-testedwith = 'internal'
+testedwith = 'ships-with-hg-core'
 
 cmdtable = {}
 command = cmdutil.command(cmdtable)
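The sentinel rename applies only to extensions bundled with Mercurial; the comment's advice for everyone else is to list concrete versions. A hypothetical out-of-tree extension would instead carry something like:

    # Out-of-tree extension: declare the hg releases actually tested,
    # not the sentinel reserved for extensions shipped with Mercurial.
    testedwith = '3.8.4 3.9'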
@@ -42,6 +42,7 b''
       <File Id="internals.changegroups.txt" Name="changegroups.txt" />
       <File Id="internals.requirements.txt" Name="requirements.txt" />
       <File Id="internals.revlogs.txt" Name="revlogs.txt" />
+      <File Id="internals.wireprotocol.txt" Name="wireprotocol.txt" />
     </Component>
   </Directory>
 
@@ -371,7 +371,7 b' typeset -A _hg_cmd_globals'
 
 # Common options
 _hg_global_opts=(
-  '(--repository -R)'{-R+,--repository}'[repository root directory]:repository:_files -/'
+  '(--repository -R)'{-R+,--repository=}'[repository root directory]:repository:_files -/'
   '--cwd[change working directory]:new working directory:_files -/'
   '(--noninteractive -y)'{-y,--noninteractive}'[do not prompt, assume yes for any required answers]'
   '(--verbose -v)'{-v,--verbose}'[enable additional output]'
@@ -390,8 +390,8 b' typeset -A _hg_cmd_globals'
 )
 
 _hg_pat_opts=(
-  '*'{-I+,--include}'[include names matching the given patterns]:dir:_files -W $(_hg_cmd root) -/'
-  '*'{-X+,--exclude}'[exclude names matching the given patterns]:dir:_files -W $(_hg_cmd root) -/')
+  '*'{-I+,--include=}'[include names matching the given patterns]:dir:_files -W $(_hg_cmd root) -/'
+  '*'{-X+,--exclude=}'[exclude names matching the given patterns]:dir:_files -W $(_hg_cmd root) -/')
 
 _hg_clone_opts=(
   $_hg_remote_opts
@@ -402,8 +402,8 b' typeset -A _hg_cmd_globals'
 _hg_date_user_opts=(
   '(--currentdate -D)'{-D,--currentdate}'[record the current date as commit date]'
   '(--currentuser -U)'{-U,--currentuser}'[record the current user as committer]'
-  '(--date -d)'{-d+,--date}'[record the specified date as commit date]:date:'
-  '(--user -u)'{-u+,--user}'[record the specified user as committer]:user:')
+  '(--date -d)'{-d+,--date=}'[record the specified date as commit date]:date:'
+  '(--user -u)'{-u+,--user=}'[record the specified user as committer]:user:')
 
 _hg_gitlike_opts=(
   '(--git -g)'{-g,--git}'[use git extended diff format]')
@@ -414,7 +414,7 b' typeset -A _hg_cmd_globals'
   '--nodates[omit dates from diff headers]')
 
 _hg_mergetool_opts=(
-  '(--tool -t)'{-t+,--tool}'[specify merge tool]:tool:')
+  '(--tool -t)'{-t+,--tool=}'[specify merge tool]:tool:')
 
 _hg_dryrun_opts=(
   '(--dry-run -n)'{-n,--dry-run}'[do not perform actions, just print output]')
@@ -430,7 +430,7 b' typeset -A _hg_cmd_globals'
 
 _hg_log_opts=(
   $_hg_global_opts $_hg_style_opts $_hg_gitlike_opts
-  '(--limit -l)'{-l+,--limit}'[limit number of changes displayed]:'
+  '(--limit -l)'{-l+,--limit=}'[limit number of changes displayed]:'
   '(--no-merges -M)'{-M,--no-merges}'[do not show merges]'
   '(--patch -p)'{-p,--patch}'[show patch]'
   '--stat[output diffstat-style summary of changes]'
@@ -438,16 +438,16 b' typeset -A _hg_cmd_globals'
 
 _hg_commit_opts=(
   '(-m --message -l --logfile --edit -e)'{-e,--edit}'[edit commit message]'
-  '(-e --edit -l --logfile --message -m)'{-m+,--message}'[use <text> as commit message]:message:'
-  '(-e --edit -m --message --logfile -l)'{-l+,--logfile}'[read the commit message from <file>]:log file:_files')
+  '(-e --edit -l --logfile --message -m)'{-m+,--message=}'[use <text> as commit message]:message:'
+  '(-e --edit -m --message --logfile -l)'{-l+,--logfile=}'[read the commit message from <file>]:log file:_files')
 
 _hg_remote_opts=(
-  '(--ssh -e)'{-e+,--ssh}'[specify ssh command to use]:'
+  '(--ssh -e)'{-e+,--ssh=}'[specify ssh command to use]:'
   '--remotecmd[specify hg command to run on the remote side]:')
 
 _hg_branch_bmark_opts=(
-  '(--bookmark -B)'{-B+,--bookmark}'[specify bookmark(s)]:bookmark:_hg_bookmarks'
-  '(--branch -b)'{-b+,--branch}'[specify branch(es)]:branch:_hg_branches'
+  '(--bookmark -B)'{-B+,--bookmark=}'[specify bookmark(s)]:bookmark:_hg_bookmarks'
+  '(--branch -b)'{-b+,--branch=}'[specify branch(es)]:branch:_hg_branches'
 )
 
 _hg_subrepos_opts=(
@@ -464,13 +464,13 b' typeset -A _hg_cmd_globals'
 
 _hg_cmd_addremove() {
   _arguments -s -w : $_hg_global_opts $_hg_pat_opts $_hg_dryrun_opts \
-  '(--similarity -s)'{-s+,--similarity}'[guess renamed files by similarity (0<=s<=100)]:' \
+  '(--similarity -s)'{-s+,--similarity=}'[guess renamed files by similarity (0<=s<=100)]:' \
   '*:unknown or missing files:_hg_addremove'
 }
 
 _hg_cmd_annotate() {
   _arguments -s -w : $_hg_global_opts $_hg_pat_opts \
-  '(--rev -r)'{-r+,--rev}'[annotate the specified revision]:revision:_hg_labels' \
+  '(--rev -r)'{-r+,--rev=}'[annotate the specified revision]:revision:_hg_labels' \
   '(--follow -f)'{-f,--follow}'[follow file copies and renames]' \
   '(--text -a)'{-a,--text}'[treat all files as text]' \
   '(--user -u)'{-u,--user}'[list the author]' \
@@ -483,21 +483,21 b' typeset -A _hg_cmd_globals'
 _hg_cmd_archive() {
   _arguments -s -w : $_hg_global_opts $_hg_pat_opts $_hg_subrepos_opts \
   '--no-decode[do not pass files through decoders]' \
-  '(--prefix -p)'{-p+,--prefix}'[directory prefix for files in archive]:' \
-  '(--rev -r)'{-r+,--rev}'[revision to distribute]:revision:_hg_labels' \
-  '(--type -t)'{-t+,--type}'[type of distribution to create]:archive type:(files tar tbz2 tgz uzip zip)' \
+  '(--prefix -p)'{-p+,--prefix=}'[directory prefix for files in archive]:' \
+  '(--rev -r)'{-r+,--rev=}'[revision to distribute]:revision:_hg_labels' \
+  '(--type -t)'{-t+,--type=}'[type of distribution to create]:archive type:(files tar tbz2 tgz uzip zip)' \
   '*:destination:_files'
 }
 
 _hg_cmd_backout() {
   _arguments -s -w : $_hg_global_opts $_hg_mergetool_opts $_hg_pat_opts \
   '--merge[merge with old dirstate parent after backout]' \
-  '(--date -d)'{-d+,--date}'[record datecode as commit date]:date code:' \
+  '(--date -d)'{-d+,--date=}'[record datecode as commit date]:date code:' \
   '--parent[parent to choose when backing out merge]' \
-  '(--user -u)'{-u+,--user}'[record user as commiter]:user:' \
-  '(--rev -r)'{-r+,--rev}'[revision]:revision:_hg_labels' \
-  '(--message -m)'{-m+,--message}'[use <text> as commit message]:text:' \
-  '(--logfile -l)'{-l+,--logfile}'[read commit message from <file>]:log file:_files'
+  '(--user -u)'{-u+,--user=}'[record user as commiter]:user:' \
+  '(--rev -r)'{-r+,--rev=}'[revision]:revision:_hg_labels' \
+  '(--message -m)'{-m+,--message=}'[use <text> as commit message]:text:' \
+  '(--logfile -l)'{-l+,--logfile=}'[read commit message from <file>]:log file:_files'
 }
 
 _hg_cmd_bisect() {
@@ -507,7 +507,7 b' typeset -A _hg_cmd_globals'
   '(--good -g --bad -b --skip -s --reset -r)'{-g,--good}'[mark changeset good]'::revision:_hg_labels \
   '(--good -g --bad -b --skip -s --reset -r)'{-b,--bad}'[mark changeset bad]'::revision:_hg_labels \
   '(--good -g --bad -b --skip -s --reset -r)'{-s,--skip}'[skip testing changeset]' \
-  '(--command -c --noupdate -U)'{-c+,--command}'[use command to check changeset state]':commands:_command_names \
+  '(--command -c --noupdate -U)'{-c+,--command=}'[use command to check changeset state]':commands:_command_names \
   '(--command -c --noupdate -U)'{-U,--noupdate}'[do not update to target]'
 }
 
@@ -515,9 +515,9 b' typeset -A _hg_cmd_globals'
   _arguments -s -w : $_hg_global_opts \
   '(--force -f)'{-f,--force}'[force]' \
   '(--inactive -i)'{-i,--inactive}'[mark a bookmark inactive]' \
-  '(--rev -r --delete -d --rename -m)'{-r+,--rev}'[revision]:revision:_hg_labels' \
+  '(--rev -r --delete -d --rename -m)'{-r+,--rev=}'[revision]:revision:_hg_labels' \
   '(--rev -r --delete -d --rename -m)'{-d,--delete}'[delete a given bookmark]' \
-  '(--rev -r --delete -d --rename -m)'{-m+,--rename}'[rename a given bookmark]:bookmark:_hg_bookmarks' \
+  '(--rev -r --delete -d --rename -m)'{-m+,--rename=}'[rename a given bookmark]:bookmark:_hg_bookmarks' \
   ':bookmark:_hg_bookmarks'
 }
 
@@ -537,8 +537,8 b' typeset -A _hg_cmd_globals'
   _arguments -s -w : $_hg_global_opts $_hg_remote_opts \
   '(--force -f)'{-f,--force}'[run even when remote repository is unrelated]' \
   '(2)*--base[a base changeset to specify instead of a destination]:revision:_hg_labels' \
-  '(--branch -b)'{-b+,--branch}'[a specific branch to bundle]' \
-  '(--rev -r)'{-r+,--rev}'[changeset(s) to bundle]:' \
+  '(--branch -b)'{-b+,--branch=}'[a specific branch to bundle]:' \
+  '(--rev -r)'{-r+,--rev=}'[changeset(s) to bundle]:' \
   '--all[bundle all changesets in the repository]' \
   ':output file:_files' \
   ':destination repository:_files -/'
@@ -546,17 +546,17 b' typeset -A _hg_cmd_globals'
 
 _hg_cmd_cat() {
   _arguments -s -w : $_hg_global_opts $_hg_pat_opts \
-  '(--output -o)'{-o+,--output}'[print output to file with formatted name]:filespec:' \
-  '(--rev -r)'{-r+,--rev}'[revision]:revision:_hg_labels' \
+  '(--output -o)'{-o+,--output=}'[print output to file with formatted name]:filespec:' \
+  '(--rev -r)'{-r+,--rev=}'[revision]:revision:_hg_labels' \
   '--decode[apply any matching decode filter]' \
   '*:file:_hg_files'
 }
 
 _hg_cmd_clone() {
   _arguments -s -w : $_hg_global_opts $_hg_clone_opts \
-  '(--rev -r)'{-r+,--rev}'[a changeset you would like to have after cloning]:' \
-  '(--updaterev -u)'{-u+,--updaterev}'[revision, tag or branch to check out]' \
-  '(--branch -b)'{-b+,--branch}'[clone only the specified branch]' \
+  '(--rev -r)'{-r+,--rev=}'[a changeset you would like to have after cloning]:' \
+  '(--updaterev -u)'{-u+,--updaterev=}'[revision, tag or branch to check out]:' \
+  '(--branch -b)'{-b+,--branch=}'[clone only the specified branch]:' \
   ':source repository:_hg_remote' \
   ':destination:_hg_clone_dest'
 }
@@ -564,10 +564,10 b' typeset -A _hg_cmd_globals'
 _hg_cmd_commit() {
   _arguments -s -w : $_hg_global_opts $_hg_pat_opts $_hg_subrepos_opts \
   '(--addremove -A)'{-A,--addremove}'[mark new/missing files as added/removed before committing]' \
-  '(--message -m)'{-m+,--message}'[use <text> as commit message]:text:' \
-  '(--logfile -l)'{-l+,--logfile}'[read commit message from <file>]:log file:_files' \
-  '(--date -d)'{-d+,--date}'[record datecode as commit date]:date code:' \
-  '(--user -u)'{-u+,--user}'[record user as commiter]:user:' \
+  '(--message -m)'{-m+,--message=}'[use <text> as commit message]:text:' \
+  '(--logfile -l)'{-l+,--logfile=}'[read commit message from <file>]:log file:_files' \
+  '(--date -d)'{-d+,--date=}'[record datecode as commit date]:date code:' \
+  '(--user -u)'{-u+,--user=}'[record user as commiter]:user:' \
   '--amend[amend the parent of the working dir]' \
   '--close-branch[mark a branch as closed]' \
   '*:file:_hg_files'
@@ -584,12 +584,12 b' typeset -A _hg_cmd_globals'
   typeset -A opt_args
   _arguments -s -w : $_hg_global_opts $_hg_diff_opts $_hg_ignore_space_opts \
   $_hg_pat_opts $_hg_subrepos_opts \
-  '*'{-r,--rev}'[revision]:revision:_hg_revrange' \
+  '*'{-r+,--rev=}'[revision]:revision:_hg_revrange' \
   '(--show-function -p)'{-p,--show-function}'[show which function each change is in]' \
-  '(--change -c)'{-c,--change}'[change made by revision]' \
+  '(--change -c)'{-c+,--change=}'[change made by revision]:' \
   '(--text -a)'{-a,--text}'[treat all files as text]' \
   '--reverse[produce a diff that undoes the changes]' \
-  '(--unified -U)'{-U,--unified}'[number of lines of context to show]' \
+  '(--unified -U)'{-U+,--unified=}'[number of lines of context to show]:' \
   '--stat[output diffstat-style summary of changes]' \
   '*:file:->diff_files'
 
@@ -606,9 +606,9 b' typeset -A _hg_cmd_globals'
 
 _hg_cmd_export() {
   _arguments -s -w : $_hg_global_opts $_hg_diff_opts \
-  '(--outout -o)'{-o+,--output}'[print output to file with formatted name]:filespec:' \
+  '(--outout -o)'{-o+,--output=}'[print output to file with formatted name]:filespec:' \
   '--switch-parent[diff against the second parent]' \
-  '(--rev -r)'{-r+,--rev}'[revision]:revision:_hg_labels' \
+  '(--rev -r)'{-r+,--rev=}'[revision]:revision:_hg_labels' \
   '*:revision:_hg_labels'
 }
 
@@ -634,7 +634,7 b' typeset -A _hg_cmd_globals'
   '(--ignore-case -i)'{-i,--ignore-case}'[ignore case when matching]' \
   '(--files-with-matches -l)'{-l,--files-with-matches}'[print only filenames and revs that match]' \
   '(--line-number -n)'{-n,--line-number}'[print matching line numbers]' \
-  '*'{-r+,--rev}'[search in given revision range]:revision:_hg_revrange' \
+  '*'{-r+,--rev=}'[search in given revision range]:revision:_hg_revrange' \
   '(--user -u)'{-u,--user}'[print user who committed change]' \
   '(--date -d)'{-d,--date}'[print date of a changeset]' \
   '1:search pattern:' \
@@ -645,7 +645,7 b' typeset -A _hg_cmd_globals'
   _arguments -s -w : $_hg_global_opts $_hg_style_opts \
   '(--topo -t)'{-t,--topo}'[show topological heads only]' \
   '(--closed -c)'{-c,--closed}'[show normal and closed branch heads]' \
-  '(--rev -r)'{-r+,--rev}'[show only heads which are descendants of rev]:revision:_hg_labels'
+  '(--rev -r)'{-r+,--rev=}'[show only heads which are descendants of rev]:revision:_hg_labels'
 }
 
 _hg_cmd_help() {
@@ -658,25 +658,25 b' typeset -A _hg_cmd_globals'
 
 _hg_cmd_identify() {
   _arguments -s -w : $_hg_global_opts $_hg_remote_opts \
-  '(--rev -r)'{-r+,--rev}'[identify the specified rev]:revision:_hg_labels' \
-  '(--num -n)'{-n+,--num}'[show local revision number]:' \
-  '(--id -i)'{-i+,--id}'[show global revision id]:' \
-  '(--branch -b)'{-b+,--branch}'[show branch]:' \
-  '(--bookmark -B)'{-B+,--bookmark}'[show bookmarks]:' \
-  '(--tags -t)'{-t+,--tags}'[show tags]:'
+  '(--rev -r)'{-r+,--rev=}'[identify the specified rev]:revision:_hg_labels' \
+  '(--num -n)'{-n,--num}'[show local revision number]' \
+  '(--id -i)'{-i,--id}'[show global revision id]' \
+  '(--branch -b)'{-b,--branch}'[show branch]' \
+  '(--bookmark -B)'{-B,--bookmark}'[show bookmarks]' \
+  '(--tags -t)'{-t,--tags}'[show tags]'
 }
 
 _hg_cmd_import() {
   _arguments -s -w : $_hg_global_opts $_hg_commit_opts \
-  '(--strip -p)'{-p+,--strip}'[directory strip option for patch (default: 1)]:count:' \
+  '(--strip -p)'{-p+,--strip=}'[directory strip option for patch (default: 1)]:count:' \
   '(--force -f)'{-f,--force}'[skip check for outstanding uncommitted changes]' \
   '--bypass[apply patch without touching the working directory]' \
   '--no-commit[do not commit, just update the working directory]' \
   '--exact[apply patch to the nodes from which it was generated]' \
   '--import-branch[use any branch information in patch (implied by --exact)]' \
-  '(--date -d)'{-d+,--date}'[record datecode as commit date]:date code:' \
-  '(--user -u)'{-u+,--user}'[record user as commiter]:user:' \
-  '(--similarity -s)'{-s+,--similarity}'[guess renamed files by similarity (0<=s<=100)]:' \
+  '(--date -d)'{-d+,--date=}'[record datecode as commit date]:date code:' \
+  '(--user -u)'{-u+,--user=}'[record user as commiter]:user:' \
+  '(--similarity -s)'{-s+,--similarity=}'[guess renamed files by similarity (0<=s<=100)]:' \
   '*:patch:_files'
 }
 
@@ -684,7 +684,7 b' typeset -A _hg_cmd_globals'
   _arguments -s -w : $_hg_log_opts $_hg_branch_bmark_opts $_hg_remote_opts \
   $_hg_subrepos_opts \
   '(--force -f)'{-f,--force}'[run even when the remote repository is unrelated]' \
-  '(--rev -r)'{-r+,--rev}'[a specific revision up to which you would like to pull]:revision:_hg_labels' \
+  '(--rev -r)'{-r+,--rev=}'[a specific revision up to which you would like to pull]:revision:_hg_labels' \
   '(--newest-first -n)'{-n,--newest-first}'[show newest record first]' \
   '--bundle[file to store the bundles into]:bundle file:_files' \
   ':source:_hg_remote'
@@ -697,7 +697,7 b' typeset -A _hg_cmd_globals'
 
 _hg_cmd_locate() {
   _arguments -s -w : $_hg_global_opts $_hg_pat_opts \
-  '(--rev -r)'{-r+,--rev}'[search repository as it stood at revision]:revision:_hg_labels' \
+  '(--rev -r)'{-r+,--rev=}'[search repository as it stood at revision]:revision:_hg_labels' \
   '(--print0 -0)'{-0,--print0}'[end filenames with NUL, for use with xargs]' \
   '(--fullpath -f)'{-f,--fullpath}'[print complete paths]' \
   '*:search pattern:_hg_files'
@@ -709,27 +709,27 b' typeset -A _hg_cmd_globals'
   '(-f --follow)--follow-first[only follow the first parent of merge changesets]' \
   '(--copies -C)'{-C,--copies}'[show copied files]' \
   '(--keyword -k)'{-k+,--keyword}'[search for a keyword]:' \
-  '*'{-r,--rev}'[show the specified revision or revset]:revision:_hg_revrange' \
+  '*'{-r+,--rev=}'[show the specified revision or revset]:revision:_hg_revrange' \
   '(--only-merges -m)'{-m,--only-merges}'[show only merges]' \
-  '(--prune -P)'{-P+,--prune}'[do not display revision or any of its ancestors]:revision:_hg_labels' \
-  '(--graph -G)'{-G+,--graph}'[show the revision DAG]:' \
-  '(--branch -b)'{-b+,--branch}'[show changesets within the given named branch]:branch:_hg_branches' \
-  '(--user -u)'{-u+,--user}'[revisions committed by user]:user:' \
-  '(--date -d)'{-d+,--date}'[show revisions matching date spec]:date:' \
+  '(--prune -P)'{-P+,--prune=}'[do not display revision or any of its ancestors]:revision:_hg_labels' \
+  '(--graph -G)'{-G,--graph}'[show the revision DAG]' \
+  '(--branch -b)'{-b+,--branch=}'[show changesets within the given named branch]:branch:_hg_branches' \
+  '(--user -u)'{-u+,--user=}'[revisions committed by user]:user:' \
+  '(--date -d)'{-d+,--date=}'[show revisions matching date spec]:date:' \
   '*:files:_hg_files'
 }
 
 _hg_cmd_manifest() {
   _arguments -s -w : $_hg_global_opts \
   '--all[list files from all revisions]' \
-  '(--rev -r)'{-r+,--rev}'[revision to display]:revision:_hg_labels' \
+  '(--rev -r)'{-r+,--rev=}'[revision to display]:revision:_hg_labels' \
   ':revision:_hg_labels'
 }
 
 _hg_cmd_merge() {
   _arguments -s -w : $_hg_global_opts $_hg_mergetool_opts \
   '(--force -f)'{-f,--force}'[force a merge with outstanding changes]' \
-  '(--rev -r 1)'{-r,--rev}'[revision to merge]:revision:_hg_mergerevs' \
+  '(--rev -r 1)'{-r+,--rev=}'[revision to merge]:revision:_hg_mergerevs' \
   '(--preview -P)'{-P,--preview}'[review revisions to merge (no merge is performed)]' \
   ':revision:_hg_mergerevs'
 }
@@ -738,14 +738,14 b' typeset -A _hg_cmd_globals'
   _arguments -s -w : $_hg_log_opts $_hg_branch_bmark_opts $_hg_remote_opts \
   $_hg_subrepos_opts \
   '(--force -f)'{-f,--force}'[run even when the remote repository is unrelated]' \
-  '*'{-r,--rev}'[a specific revision you would like to push]:revision:_hg_revrange' \
+  '*'{-r+,--rev=}'[a specific revision you would like to push]:revision:_hg_revrange' \
   '(--newest-first -n)'{-n,--newest-first}'[show newest record first]' \
   ':destination:_hg_remote'
 }
 
 _hg_cmd_parents() {
   _arguments -s -w : $_hg_global_opts $_hg_style_opts \
-  '(--rev -r)'{-r+,--rev}'[show parents of the specified rev]:revision:_hg_labels' \
+  '(--rev -r)'{-r+,--rev=}'[show parents of the specified rev]:revision:_hg_labels' \
   ':last modified file:_hg_files'
 }
 
@@ -760,7 +760,7 b' typeset -A _hg_cmd_globals'
   '(--draft -d)'{-d,--draft}'[set changeset phase to draft]' \
   '(--secret -s)'{-s,--secret}'[set changeset phase to secret]' \
   '(--force -f)'{-f,--force}'[allow to move boundary backward]' \
-  '(--rev -r)'{-r+,--rev}'[target revision]:revision:_hg_labels' \
+  '(--rev -r)'{-r+,--rev=}'[target revision]:revision:_hg_labels' \
   ':revision:_hg_labels'
 }
 
@@ -775,7 +775,7 b' typeset -A _hg_cmd_globals'
 _hg_cmd_push() {
   _arguments -s -w : $_hg_global_opts $_hg_branch_bmark_opts $_hg_remote_opts \
   '(--force -f)'{-f,--force}'[force push]' \
-  '(--rev -r)'{-r+,--rev}'[a specific revision you would like to push]:revision:_hg_labels' \
+  '(--rev -r)'{-r+,--rev=}'[a specific revision you would like to push]:revision:_hg_labels' \
   '--new-branch[allow pushing a new branch]' \
   ':destination:_hg_remote'
 }
@@ -819,9 +819,9 b' typeset -A _hg_cmd_globals'
 
   _arguments -s -w : $_hg_global_opts $_hg_pat_opts $_hg_dryrun_opts \
   '(--all -a :)'{-a,--all}'[revert all changes when no arguments given]' \
-  '(--rev -r)'{-r+,--rev}'[revision to revert to]:revision:_hg_labels' \
+  '(--rev -r)'{-r+,--rev=}'[revision to revert to]:revision:_hg_labels' \
   '(--no-backup -C)'{-C,--no-backup}'[do not save backup copies of files]' \
-  '(--date -d)'{-d+,--date}'[tipmost revision matching date]:date code:' \
+  '(--date -d)'{-d+,--date=}'[tipmost revision matching date]:date code:' \
   '*:file:->diff_files'
 
   if [[ $state == 'diff_files' ]]
@@ -844,13 +844,13 b' typeset -A _hg_cmd_globals'
 
 _hg_cmd_serve() {
   _arguments -s -w : $_hg_global_opts \
-  '(--accesslog -A)'{-A+,--accesslog}'[name of access log file]:log file:_files' \
-  '(--errorlog -E)'{-E+,--errorlog}'[name of error log file]:log file:_files' \
+  '(--accesslog -A)'{-A+,--accesslog=}'[name of access log file]:log file:_files' \
+  '(--errorlog -E)'{-E+,--errorlog=}'[name of error log file]:log file:_files' \
   '(--daemon -d)'{-d,--daemon}'[run server in background]' \
-  '(--port -p)'{-p+,--port}'[listen port]:listen port:' \
-  '(--address -a)'{-a+,--address}'[interface address]:interface address:' \
+  '(--port -p)'{-p+,--port=}'[listen port]:listen port:' \
+  '(--address -a)'{-a+,--address=}'[interface address]:interface address:' \
   '--prefix[prefix path to serve from]:directory:_files' \
-  '(--name -n)'{-n+,--name}'[name to show in web pages]:repository name:' \
+  '(--name -n)'{-n+,--name=}'[name to show in web pages]:repository name:' \
   '--web-conf[name of the hgweb config file]:webconf_file:_files' \
   '--pid-file[name of file to write process ID to]:pid_file:_files' \
   '--cmdserver[cmdserver mode]:mode:' \
@@ -863,7 +863,7 b' typeset -A _hg_cmd_globals'
 
 _hg_cmd_showconfig() {
   _arguments -s -w : $_hg_global_opts \
-  '(--untrusted -u)'{-u+,--untrusted}'[show untrusted configuration options]:' \
+  '(--untrusted -u)'{-u,--untrusted}'[show untrusted configuration options]' \
   ':config item:_hg_config'
 }
 
@@ -893,10 +893,10 b' typeset -A _hg_cmd_globals'
 _hg_cmd_tag() {
   _arguments -s -w : $_hg_global_opts \
   '(--local -l)'{-l,--local}'[make the tag local]' \
-  '(--message -m)'{-m+,--message}'[message for tag commit log entry]:message:' \
-  '(--date -d)'{-d+,--date}'[record datecode as commit date]:date code:' \
-  '(--user -u)'{-u+,--user}'[record user as commiter]:user:' \
-  '(--rev -r)'{-r+,--rev}'[revision to tag]:revision:_hg_labels' \
+  '(--message -m)'{-m+,--message=}'[message for tag commit log entry]:message:' \
+  '(--date -d)'{-d+,--date=}'[record datecode as commit date]:date code:' \
+  '(--user -u)'{-u+,--user=}'[record user as commiter]:user:' \
+  '(--rev -r)'{-r+,--rev=}'[revision to tag]:revision:_hg_labels' \
   '(--force -f)'{-f,--force}'[force tag]' \
   '--remove[remove a tag]' \
   '(--edit -e)'{-e,--edit}'[edit commit message]' \
@@ -917,9 +917,9 b' typeset -A _hg_cmd_globals'
 _hg_cmd_update() {
   _arguments -s -w : $_hg_global_opts \
   '(--clean -C)'{-C,--clean}'[overwrite locally modified files]' \
-  '(--rev -r)'{-r+,--rev}'[revision]:revision:_hg_labels' \
+  '(--rev -r)'{-r+,--rev=}'[revision]:revision:_hg_labels' \
   '(--check -c)'{-c,--check}'[update across branches if no uncommitted changes]' \
-  '(--date -d)'{-d+,--date}'[tipmost revision matching date]' \
+  '(--date -d)'{-d+,--date=}'[tipmost revision matching date]:' \
   ':revision:_hg_labels'
 }
 
@@ -928,7 +928,7 b' typeset -A _hg_cmd_globals'
 # HGK
 _hg_cmd_view() {
   _arguments -s -w : $_hg_global_opts \
-  '(--limit -l)'{-l+,--limit}'[limit number of changes displayed]:' \
+  '(--limit -l)'{-l+,--limit=}'[limit number of changes displayed]:' \
   ':revision range:_hg_labels'
 }
 
@@ -989,7 +989,7 b' typeset -A _hg_cmd_globals'
 
 _hg_cmd_qclone() {
   _arguments -s -w : $_hg_global_opts $_hg_remote_opts $_hg_clone_opts \
-  '(--patches -p)'{-p+,--patches}'[location of source patch repository]' \
+  '(--patches -p)'{-p+,--patches=}'[location of source patch repository]:' \
   ':source repository:_hg_remote' \
   ':destination:_hg_clone_dest'
 }
@@ -997,7 +997,7 b' typeset -A _hg_cmd_globals'
 _hg_cmd_qdelete() {
   _arguments -s -w : $_hg_global_opts \
   '(--keep -k)'{-k,--keep}'[keep patch file]' \
-  '*'{-r+,--rev}'[stop managing a revision]:applied patch:_hg_revrange' \
+  '*'{-r+,--rev=}'[stop managing a revision]:applied patch:_hg_revrange' \
   '*:unapplied patch:_hg_qdeletable'
 }
 
@@ -1046,7 +1046,7 b' typeset -A _hg_cmd_globals'
   '(--existing -e)'{-e,--existing}'[import file in patch dir]' \
   '(--name -n 2)'{-n+,--name}'[patch file name]:name:' \
   '(--force -f)'{-f,--force}'[overwrite existing files]' \
-  '*'{-r+,--rev}'[place existing revisions under mq control]:revision:_hg_revrange' \
+  '*'{-r+,--rev=}'[place existing revisions under mq control]:revision:_hg_revrange' \
   '(--push -P)'{-P,--push}'[qpush after importing]' \
   '*:patch:_files'
 }
@@ -1125,8 +1125,8 b' typeset -A _hg_cmd_globals'
   '(--force -f)'{-f,--force}'[force removal, discard uncommitted changes, no backup]' \
   '(--no-backup -n)'{-n,--no-backup}'[no backups]' \
   '(--keep -k)'{-k,--keep}'[do not modify working copy during strip]' \
-  '(--bookmark -B)'{-B+,--bookmark}'[remove revs only reachable from given bookmark]:bookmark:_hg_bookmarks' \
-  '(--rev -r)'{-r+,--rev}'[revision]:revision:_hg_labels' \
+  '(--bookmark -B)'{-B+,--bookmark=}'[remove revs only reachable from given bookmark]:bookmark:_hg_bookmarks' \
+  '(--rev -r)'{-r+,--rev=}'[revision]:revision:_hg_labels' \
   ':revision:_hg_labels'
 }
 
@@ -1138,7 +1138,7 b' typeset -A _hg_cmd_globals'
   '(--outgoing -o)'{-o,--outgoing}'[send changes not found in the target repository]' \
   '(--bundle -b)'{-b,--bundle}'[send changes not in target as a binary bundle]' \
   '--bundlename[name of the bundle attachment file (default: bundle)]:' \
|
1140 | '--bundlename[name of the bundle attachment file (default: bundle)]:' \ | |
1141 | '*'{-r+,--rev}'[search in given revision range]:revision:_hg_revrange' \ |
|
1141 | '*'{-r+,--rev=}'[search in given revision range]:revision:_hg_revrange' \ | |
1142 | '--force[run even when remote repository is unrelated (with -b/--bundle)]' \ |
|
1142 | '--force[run even when remote repository is unrelated (with -b/--bundle)]' \ | |
1143 | '*--base[a base changeset to specify instead of a destination (with -b/--bundle)]:revision:_hg_labels' \ |
|
1143 | '*--base[a base changeset to specify instead of a destination (with -b/--bundle)]:revision:_hg_labels' \ | |
1144 | '--intro[send an introduction email for a single patch]' \ |
|
1144 | '--intro[send an introduction email for a single patch]' \ | |
@@ -1163,10 +1163,10 b' typeset -A _hg_cmd_globals' | |||||
1163 | # Rebase |
|
1163 | # Rebase | |
1164 | _hg_cmd_rebase() { |
|
1164 | _hg_cmd_rebase() { | |
1165 | _arguments -s -w : $_hg_global_opts $_hg_commit_opts $_hg_mergetool_opts \ |
|
1165 | _arguments -s -w : $_hg_global_opts $_hg_commit_opts $_hg_mergetool_opts \ | |
1166 | '*'{-r,--rev}'[rebase these revisions]:revision:_hg_revrange' \ |
|
1166 | '*'{-r+,--rev=}'[rebase these revisions]:revision:_hg_revrange' \ | |
1167 | '(--source -s)'{-s+,--source}'[rebase from the specified changeset]:revision:_hg_labels' \ |
|
1167 | '(--source -s)'{-s+,--source=}'[rebase from the specified changeset]:revision:_hg_labels' \ | |
1168 | '(--base -b)'{-b+,--base}'[rebase from the base of the specified changeset]:revision:_hg_labels' \ |
|
1168 | '(--base -b)'{-b+,--base=}'[rebase from the base of the specified changeset]:revision:_hg_labels' \ | |
1169 | '(--dest -d)'{-d+,--dest}'[rebase onto the specified changeset]:revision:_hg_labels' \ |
|
1169 | '(--dest -d)'{-d+,--dest=}'[rebase onto the specified changeset]:revision:_hg_labels' \ | |
1170 | '--collapse[collapse the rebased changeset]' \ |
|
1170 | '--collapse[collapse the rebased changeset]' \ | |
1171 | '--keep[keep original changeset]' \ |
|
1171 | '--keep[keep original changeset]' \ | |
1172 | '--keepbranches[keep original branch name]' \ |
|
1172 | '--keepbranches[keep original branch name]' \ | |
@@ -1181,8 +1181,8 b' typeset -A _hg_cmd_globals' | |||||
1181 | '(--addremove -A)'{-A,--addremove}'[mark new/missing files as added/removed before committing]' \ |
|
1181 | '(--addremove -A)'{-A,--addremove}'[mark new/missing files as added/removed before committing]' \ | |
1182 | '--close-branch[mark a branch as closed, hiding it from the branch list]' \ |
|
1182 | '--close-branch[mark a branch as closed, hiding it from the branch list]' \ | |
1183 | '--amend[amend the parent of the working dir]' \ |
|
1183 | '--amend[amend the parent of the working dir]' \ | |
1184 | '(--date -d)'{-d+,--date}'[record the specified date as commit date]:date:' \ |
|
1184 | '(--date -d)'{-d+,--date=}'[record the specified date as commit date]:date:' \ | |
1185 | '(--user -u)'{-u+,--user}'[record the specified user as committer]:user:' |
|
1185 | '(--user -u)'{-u+,--user=}'[record the specified user as committer]:user:' | |
1186 | } |
|
1186 | } | |
1187 |
|
1187 | |||
1188 | _hg_cmd_qrecord() { |
|
1188 | _hg_cmd_qrecord() { | |
@@ -1195,8 +1195,8 b' typeset -A _hg_cmd_globals' | |||||
1195 | _arguments -s -w : $_hg_global_opts \ |
|
1195 | _arguments -s -w : $_hg_global_opts \ | |
1196 | '(--source-type -s)'{-s,--source-type}'[source repository type]' \ |
|
1196 | '(--source-type -s)'{-s,--source-type}'[source repository type]' \ | |
1197 | '(--dest-type -d)'{-d,--dest-type}'[destination repository type]' \ |
|
1197 | '(--dest-type -d)'{-d,--dest-type}'[destination repository type]' \ | |
1198 | '(--rev -r)'{-r+,--rev}'[import up to target revision]:revision:' \ |
|
1198 | '(--rev -r)'{-r+,--rev=}'[import up to target revision]:revision:' \ | |
1199 | '(--authormap -A)'{-A+,--authormap}'[remap usernames using this file]:file:_files' \ |
|
1199 | '(--authormap -A)'{-A+,--authormap=}'[remap usernames using this file]:file:_files' \ | |
1200 | '--filemap[remap file names using contents of file]:file:_files' \ |
|
1200 | '--filemap[remap file names using contents of file]:file:_files' \ | |
1201 | '--splicemap[splice synthesized history into place]:file:_files' \ |
|
1201 | '--splicemap[splice synthesized history into place]:file:_files' \ | |
1202 | '--branchmap[change branch names while converting]:file:_files' \ |
|
1202 | '--branchmap[change branch names while converting]:file:_files' \ |
@@ -57,7 +57,6 @@ try:
     import roman
 except ImportError:
     from docutils.utils import roman
-import inspect
 
 FIELD_LIST_INDENT = 7
 DEFINITION_LIST_INDENT = 7
@@ -289,10 +288,10 @@ class Translator(nodes.NodeVisitor):
         text = node.astext()
         text = text.replace('\\','\\e')
         replace_pairs = [
-            (u'-', ur'\-'),
-            (u'\'', ur'\(aq'),
-            (u'´', ur'\''),
-            (u'`', ur'\(ga'),
+            (u'-', u'\\-'),
+            (u"'", u'\\(aq'),
+            (u'´', u"\\'"),
+            (u'`', u'\\(ga'),
         ]
         for (in_char, out_markup) in replace_pairs:
             text = text.replace(in_char, out_markup)
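
The rewritten ``replace_pairs`` table escapes characters that roff would
otherwise treat as markup. A standalone sketch of the escaping loop, for
illustration only (not part of the patch)::

    replace_pairs = [
        (u'-', u'\\-'),
        (u"'", u'\\(aq'),
        (u'´', u"\\'"),
        (u'`', u'\\(ga'),
    ]

    def escape(text):
        # escape backslashes first so the markup added below survives
        text = text.replace('\\', '\\e')
        for in_char, out_markup in replace_pairs:
            text = text.replace(in_char, out_markup)
        return text

    print(escape(u"don't use --force"))  # don\(aqt use \-\-force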
@@ -204,11 +204,11 @@ from mercurial import (
 
 urlreq = util.urlreq
 
-# Note for extension authors: ONLY specify testedwith = 'internal' for
+# Note for extension authors: ONLY specify testedwith = 'ships-with-hg-core' for
 # extensions which SHIP WITH MERCURIAL. Non-mainline extensions should
 # be specifying the version(s) of Mercurial they are tested with, or
 # leave the attribute unspecified.
-testedwith = 'internal'
+testedwith = 'ships-with-hg-core'
 
 def _getusers(ui, group):
 
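The comment updated throughout this series is addressed at third-party
extension authors: only extensions bundled with Mercurial use the new
'ships-with-hg-core' sentinel, while external extensions should list the
releases they were actually tested against, for example (version numbers
purely illustrative)::

    # at the top level of a third-party extension module
    testedwith = '3.8 3.8.4 3.9'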
@@ -51,11 +51,11 @@ from mercurial import (
 
 cmdtable = {}
 command = cmdutil.command(cmdtable)
-# Note for extension authors: ONLY specify testedwith = 'internal' for
+# Note for extension authors: ONLY specify testedwith = 'ships-with-hg-core' for
 # extensions which SHIP WITH MERCURIAL. Non-mainline extensions should
 # be specifying the version(s) of Mercurial they are tested with, or
 # leave the attribute unspecified.
-testedwith = 'internal'
+testedwith = 'ships-with-hg-core'
 lastui = None
 
 filehandles = {}
@@ -294,11 +294,11 @@ from mercurial import (
 urlparse = util.urlparse
 xmlrpclib = util.xmlrpclib
 
-# Note for extension authors: ONLY specify testedwith = 'internal' for
+# Note for extension authors: ONLY specify testedwith = 'ships-with-hg-core' for
 # extensions which SHIP WITH MERCURIAL. Non-mainline extensions should
 # be specifying the version(s) of Mercurial they are tested with, or
 # leave the attribute unspecified.
-testedwith = 'internal'
+testedwith = 'ships-with-hg-core'
 
 class bzaccess(object):
     '''Base class for access to Bugzilla.'''
@@ -42,11 +42,11 @@ from mercurial import (
 
 cmdtable = {}
 command = cmdutil.command(cmdtable)
-# Note for extension authors: ONLY specify testedwith = 'internal' for
+# Note for extension authors: ONLY specify testedwith = 'ships-with-hg-core' for
 # extensions which SHIP WITH MERCURIAL. Non-mainline extensions should
 # be specifying the version(s) of Mercurial they are tested with, or
 # leave the attribute unspecified.
-testedwith = 'internal'
+testedwith = 'ships-with-hg-core'
 
 @command('censor',
     [('r', 'rev', '', _('censor file from specified revision'), _('REV')),
@@ -63,11 +63,11 @@ from mercurial import (
     util,
 )
 
-# Note for extension authors: ONLY specify testedwith = 'internal' for
+# Note for extension authors: ONLY specify testedwith = 'ships-with-hg-core' for
 # extensions which SHIP WITH MERCURIAL. Non-mainline extensions should
 # be specifying the version(s) of Mercurial they are tested with, or
 # leave the attribute unspecified.
-testedwith = 'internal'
+testedwith = 'ships-with-hg-core'
 
 _log = commandserver.log
 
@@ -26,11 +26,11 @@ templateopts = commands.templateopts
 
 cmdtable = {}
 command = cmdutil.command(cmdtable)
-# Note for extension authors: ONLY specify testedwith = 'internal' for
+# Note for extension authors: ONLY specify testedwith = 'ships-with-hg-core' for
 # extensions which SHIP WITH MERCURIAL. Non-mainline extensions should
 # be specifying the version(s) of Mercurial they are tested with, or
 # leave the attribute unspecified.
-testedwith = 'internal'
+testedwith = 'ships-with-hg-core'
 
 @command('children',
     [('r', 'rev', '',
@@ -26,11 +26,11 @@ from mercurial import (
 
 cmdtable = {}
 command = cmdutil.command(cmdtable)
-# Note for extension authors: ONLY specify testedwith = 'internal' for
+# Note for extension authors: ONLY specify testedwith = 'ships-with-hg-core' for
 # extensions which SHIP WITH MERCURIAL. Non-mainline extensions should
 # be specifying the version(s) of Mercurial they are tested with, or
 # leave the attribute unspecified.
-testedwith = 'internal'
+testedwith = 'ships-with-hg-core'
 
 def maketemplater(ui, repo, tmpl):
     return cmdutil.changeset_templater(ui, repo, False, None, tmpl, None, False)
@@ -169,7 +169,7 @@ from mercurial import (
     wireproto,
 )
 
-testedwith = 'internal'
+testedwith = 'ships-with-hg-core'
 
 def capabilities(orig, repo, proto):
     caps = orig(repo, proto)
@@ -29,6 +29,15 @@ ECMA-48 mode, the options are 'bold', 'i
 Some may not be available for a given terminal type, and will be
 silently ignored.
 
+If the terminfo entry for your terminal is missing codes for an effect
+or has the wrong codes, you can add or override those codes in your
+configuration::
+
+  [color]
+  terminfo.dim = \E[2m
+
+where '\E' is substituted with an escape character.
+
 Labels
 ------
 
@@ -170,11 +179,11 @@ from mercurial import (
 
 cmdtable = {}
 command = cmdutil.command(cmdtable)
-# Note for extension authors: ONLY specify testedwith = 'internal' for
+# Note for extension authors: ONLY specify testedwith = 'ships-with-hg-core' for
 # extensions which SHIP WITH MERCURIAL. Non-mainline extensions should
 # be specifying the version(s) of Mercurial they are tested with, or
 # leave the attribute unspecified.
-testedwith = 'internal'
+testedwith = 'ships-with-hg-core'
 
 # start and stop parameters for effects
 _effects = {'none': 0, 'black': 30, 'red': 31, 'green': 32, 'yellow': 33,
@@ -196,9 +205,12 @@ def _terminfosetup(ui, mode):
     if mode not in ('auto', 'terminfo'):
         return
 
-    _terminfo_params.update((key[6:], (False, int(val)))
+    _terminfo_params.update((key[6:], (False, int(val), ''))
                             for key, val in ui.configitems('color')
                             if key.startswith('color.'))
+    _terminfo_params.update((key[9:], (True, '', val.replace('\\E', '\x1b')))
+                            for key, val in ui.configitems('color')
+                            if key.startswith('terminfo.'))
 
     try:
         curses.setupterm()
@@ -206,10 +218,10 @@ def _terminfosetup(ui, mode):
         _terminfo_params = {}
         return
 
-    for key, (b, e) in _terminfo_params.items():
+    for key, (b, e, c) in _terminfo_params.items():
         if not b:
             continue
-        if not curses.tigetstr(e):
+        if not c and not curses.tigetstr(e):
             # Most terminals don't support dim, invis, etc, so don't be
             # noisy and use ui.debug().
             ui.debug("no terminfo entry for %s\n" % e)
@@ -290,26 +302,26 @@ def _modesetup(ui, coloropt):
 
 try:
     import curses
-    # Mapping from effect name to terminfo attribute name
-    # This will also force-load the curses module.
-    _terminfo_params = {'none': (True, 'sgr0'),
-                        'standout': (True, 'smso'),
-                        'underline': (True, 'smul'),
-                        'reverse': (True, 'rev'),
-                        'inverse': (True, 'rev'),
-                        'blink': (True, 'blink'),
-                        'dim': (True, 'dim'),
-                        'bold': (True, 'bold'),
-                        'invisible': (True, 'invis'),
-                        'italic': (True, 'sitm'),
-                        'black': (False, curses.COLOR_BLACK),
-                        'red': (False, curses.COLOR_RED),
-                        'green': (False, curses.COLOR_GREEN),
-                        'yellow': (False, curses.COLOR_YELLOW),
-                        'blue': (False, curses.COLOR_BLUE),
-                        'magenta': (False, curses.COLOR_MAGENTA),
-                        'cyan': (False, curses.COLOR_CYAN),
-                        'white': (False, curses.COLOR_WHITE)}
+    # Mapping from effect name to terminfo attribute name (or raw code) or
+    # color number. This will also force-load the curses module.
+    _terminfo_params = {'none': (True, 'sgr0', ''),
+                        'standout': (True, 'smso', ''),
+                        'underline': (True, 'smul', ''),
+                        'reverse': (True, 'rev', ''),
+                        'inverse': (True, 'rev', ''),
+                        'blink': (True, 'blink', ''),
+                        'dim': (True, 'dim', ''),
+                        'bold': (True, 'bold', ''),
+                        'invisible': (True, 'invis', ''),
+                        'italic': (True, 'sitm', ''),
+                        'black': (False, curses.COLOR_BLACK, ''),
+                        'red': (False, curses.COLOR_RED, ''),
+                        'green': (False, curses.COLOR_GREEN, ''),
+                        'yellow': (False, curses.COLOR_YELLOW, ''),
+                        'blue': (False, curses.COLOR_BLUE, ''),
+                        'magenta': (False, curses.COLOR_MAGENTA, ''),
+                        'cyan': (False, curses.COLOR_CYAN, ''),
+                        'white': (False, curses.COLOR_WHITE, '')}
 except ImportError:
     _terminfo_params = {}
 
@@ -375,9 +387,15 @@ def _effect_str(effect):
     if effect.endswith('_background'):
         bg = True
         effect = effect[:-11]
-    attr, val = _terminfo_params[effect]
+    try:
+        attr, val, termcode = _terminfo_params[effect]
+    except KeyError:
+        return ''
     if attr:
-        return curses.tigetstr(val)
+        if termcode:
+            return termcode
+        else:
+            return curses.tigetstr(val)
     elif bg:
         return curses.tparm(curses.tigetstr('setab'), val)
     else:
@@ -412,7 +430,7 @@ def valideffect(effect):
 
 def configstyles(ui):
     for status, cfgeffects in ui.configitems('color'):
-        if '.' not in status or status.startswith('color.'):
+        if '.' not in status or status.startswith(('color.', 'terminfo.')):
             continue
         cfgeffects = ui.configlist('color', status)
         if cfgeffects:
@@ -524,10 +542,16 @@ def debugcolor(ui, repo, **opts):
     _styles = {}
     for effect in _effects.keys():
         _styles[effect] = effect
+    if _terminfo_params:
+        for k, v in ui.configitems('color'):
+            if k.startswith('color.'):
+                _styles[k] = k[6:]
+            elif k.startswith('terminfo.'):
+                _styles[k] = k[9:]
     ui.write(('color mode: %s\n') % ui._colormode)
     ui.write(_('available colors:\n'))
-    for label, colors in _styles.items():
-        ui.write(('%s\n') % colors, label=label)
+    for colorname, label in _styles.items():
+        ui.write(('%s\n') % colorname, label=label)
 
 if os.name != 'nt':
     w32effects = None
@@ -558,8 +582,8 @@ else:
                 ('srWindow', _SMALL_RECT),
                 ('dwMaximumWindowSize', _COORD)]
 
-    _STD_OUTPUT_HANDLE = 0xfffffff5L # (DWORD)-11
-    _STD_ERROR_HANDLE = 0xfffffff4L # (DWORD)-12
+    _STD_OUTPUT_HANDLE = 0xfffffff5 # (DWORD)-11
+    _STD_ERROR_HANDLE = 0xfffffff4 # (DWORD)-12
 
     _FOREGROUND_BLUE = 0x0001
    _FOREGROUND_GREEN = 0x0002
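
Each ``_terminfo_params`` entry is now a three-tuple, and a raw code
configured via ``terminfo.<effect>`` takes precedence over what
``curses.tigetstr()`` reports. A minimal standalone sketch of that lookup
order, assuming a mapping shaped like the one above::

    import curses

    _terminfo_params = {'bold': (True, 'bold', ''),       # terminfo capability
                        'dim': (True, 'dim', '\x1b[2m'),  # user-supplied raw code
                        'red': (False, curses.COLOR_RED, '')}

    def effect_str(effect):
        attr, val, termcode = _terminfo_params[effect]
        if attr:
            # a configured raw code wins over the terminfo database
            return termcode if termcode else curses.tigetstr(val)
        return curses.tparm(curses.tigetstr('setaf'), val)

    curses.setupterm()
    print(repr(effect_str('dim')))  # '\x1b[2m' even if terminfo lacks 'dim'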
@@ -23,11 +23,11 @@ from . import (
 
 cmdtable = {}
 command = cmdutil.command(cmdtable)
-# Note for extension authors: ONLY specify testedwith = 'internal' for
+# Note for extension authors: ONLY specify testedwith = 'ships-with-hg-core' for
 # extensions which SHIP WITH MERCURIAL. Non-mainline extensions should
 # be specifying the version(s) of Mercurial they are tested with, or
 # leave the attribute unspecified.
-testedwith = 'internal'
+testedwith = 'ships-with-hg-core'
 
 # Commands definition was moved elsewhere to ease demandload job.
 
@@ -531,8 +531,8 @@ class svn_source(converter_source):
     def checkrevformat(self, revstr, mapname='splicemap'):
         """ fails if revision format does not match the correct format"""
         if not re.match(r'svn:[0-9a-f]{8,8}-[0-9a-f]{4,4}-'
-                        '[0-9a-f]{4,4}-[0-9a-f]{4,4}-[0-9a-f]'
-                        '{12,12}(.*)\@[0-9]+$',revstr):
+                        r'[0-9a-f]{4,4}-[0-9a-f]{4,4}-[0-9a-f]'
+                        r'{12,12}(.*)\@[0-9]+$',revstr):
             raise error.Abort(_('%s entry %s is not a valid revision'
                                 ' identifier') % (mapname, revstr))
 
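Mixing one raw-string literal with plain continuation strings left ``\@``
as an unrecognized escape that newer Python versions warn about, so the
continuation lines gain the ``r`` prefix as well. A quick check that the
stitched pattern still matches (revision value invented for the example)::

    import re

    pat = (r'svn:[0-9a-f]{8,8}-[0-9a-f]{4,4}-'
           r'[0-9a-f]{4,4}-[0-9a-f]{4,4}-[0-9a-f]'
           r'{12,12}(.*)\@[0-9]+$')
    rev = 'svn:12345678-1234-1234-1234-123456789abc/trunk@42'
    print(bool(re.match(pat, rev)))  # True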
@@ -104,11 +104,11 @@ from mercurial import (
     util,
 )
 
-# Note for extension authors: ONLY specify testedwith = 'internal' for
+# Note for extension authors: ONLY specify testedwith = 'ships-with-hg-core' for
 # extensions which SHIP WITH MERCURIAL. Non-mainline extensions should
 # be specifying the version(s) of Mercurial they are tested with, or
 # leave the attribute unspecified.
-testedwith = 'internal'
+testedwith = 'ships-with-hg-core'
 
 # Matches a lone LF, i.e., one that is not part of CRLF.
 singlelf = re.compile('(^|[^\r])\n')
@@ -175,25 +175,27 @@ class eolfile(object):
 
         include = []
         exclude = []
+        self.patterns = []
         for pattern, style in self.cfg.items('patterns'):
             key = style.upper()
             if key == 'BIN':
                 exclude.append(pattern)
             else:
                 include.append(pattern)
+            m = match.match(root, '', [pattern])
+            self.patterns.append((pattern, key, m))
         # This will match the files for which we need to care
         # about inconsistent newlines.
         self.match = match.match(root, '', [], include, exclude)
 
     def copytoui(self, ui):
-        for pattern, style in self.cfg.items('patterns'):
-            key = style.upper()
+        for pattern, key, m in self.patterns:
             try:
                 ui.setconfig('decode', pattern, self._decode[key], 'eol')
                 ui.setconfig('encode', pattern, self._encode[key], 'eol')
             except KeyError:
                 ui.warn(_("ignoring unknown EOL style '%s' from %s\n")
-                        % (style, self.cfg.source('patterns', pattern)))
+                        % (key, self.cfg.source('patterns', pattern)))
         # eol.only-consistent can be specified in ~/.hgrc or .hgeol
         for k, v in self.cfg.items('eol'):
             ui.setconfig('eol', k, v, 'eol')
@@ -203,10 +205,10 @@ class eolfile(object):
         for f in (files or ctx.files()):
             if f not in ctx:
                 continue
-            for pattern, style in self.cfg.items('patterns'):
-                if not match.match(self.root, '', [pattern])(f):
+            for pattern, key, m in self.patterns:
+                if not m(f):
                     continue
-                target = self._encode[style.upper()]
+                target = self._encode[key]
                 data = ctx[f].data()
                 if (target == "to-lf" and "\r\n" in data
                     or target == "to-crlf" and singlelf.search(data)):
@@ -305,15 +307,20 @@ def reposetup(ui, repo):
             return eol.match
 
         def _hgcleardirstate(self):
-            self._eolfile = self.loadeol([None, 'tip'])
-            if not self._eolfile:
-                self._eolfile = util.never
+            self._eolmatch = self.loadeol([None, 'tip'])
+            if not self._eolmatch:
+                self._eolmatch = util.never
                 return
 
+            oldeol = None
             try:
                 cachemtime = os.path.getmtime(self.join("eol.cache"))
             except OSError:
                 cachemtime = 0
+            else:
+                olddata = self.vfs.read("eol.cache")
+                if olddata:
+                    oldeol = eolfile(self.ui, self.root, olddata)
 
             try:
                 eolmtime = os.path.getmtime(self.wjoin(".hgeol"))
@@ -322,18 +329,37 @@ def reposetup(ui, repo):
 
             if eolmtime > cachemtime:
                 self.ui.debug("eol: detected change in .hgeol\n")
+
+                hgeoldata = self.wvfs.read('.hgeol')
+                neweol = eolfile(self.ui, self.root, hgeoldata)
+
                 wlock = None
                 try:
                     wlock = self.wlock()
                     for f in self.dirstate:
-                        if self.dirstate[f] == 'n':
-                            # all normal files need to be looked at
-                            # again since the new .hgeol file might no
-                            # longer match a file it matched before
-                            self.dirstate.normallookup(f)
-                    # Create or touch the cache to update mtime
-                    self.vfs("eol.cache", "w").close()
-                    wlock.release()
+                        if self.dirstate[f] != 'n':
+                            continue
+                        if oldeol is not None:
+                            if not oldeol.match(f) and not neweol.match(f):
+                                continue
+                            oldkey = None
+                            for pattern, key, m in oldeol.patterns:
+                                if m(f):
+                                    oldkey = key
+                                    break
+                            newkey = None
+                            for pattern, key, m in neweol.patterns:
+                                if m(f):
+                                    newkey = key
+                                    break
+                            if oldkey == newkey:
+                                continue
+                        # all normal files need to be looked at again since
+                        # the new .hgeol file specify a different filter
+                        self.dirstate.normallookup(f)
+                    # Write the cache to update mtime and cache .hgeol
+                    with self.vfs("eol.cache", "w") as f:
+                        f.write(hgeoldata)
                 except error.LockUnavailable:
                     # If we cannot lock the repository and clear the
                     # dirstate, then a commit might not see all files
@@ -341,10 +367,13 @@ def reposetup(ui, repo):
                     # repository, then we can also not make a commit,
                     # so ignore the error.
                     pass
+                finally:
+                    if wlock is not None:
+                        wlock.release()
 
     def commitctx(self, ctx, haserror=False):
         for f in sorted(ctx.added() + ctx.modified()):
-            if not self._eolfile(f):
+            if not self._eolmatch(f):
                 continue
             fctx = ctx[f]
             if fctx is None:
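
The ``self.patterns`` list built above memoizes one matcher per
``[patterns]`` entry, so ``copytoui`` and ``checkrev`` no longer rebuild a
matcher and re-derive the style for every file they inspect. A rough
standalone model of the idea, with ``fnmatch`` standing in for Mercurial's
real ``match`` module::

    import fnmatch

    class rules(object):
        def __init__(self, cfgpatterns):
            # cfgpatterns: (pattern, style) pairs as read from .hgeol
            self.patterns = []
            for pattern, style in cfgpatterns:
                m = lambda f, p=pattern: fnmatch.fnmatch(f, p)
                self.patterns.append((pattern, style.upper(), m))

        def styleof(self, f):
            for pattern, key, m in self.patterns:
                if m(f):
                    return key

    r = rules([('**.txt', 'native'), ('**.jpg', 'bin')])
    print(r.styleof('doc/readme.txt'))  # NATIVE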
@@ -84,11 +84,11 @@ from mercurial import (
 
 cmdtable = {}
 command = cmdutil.command(cmdtable)
-# Note for extension authors: ONLY specify testedwith = 'internal' for
+# Note for extension authors: ONLY specify testedwith = 'ships-with-hg-core' for
 # extensions which SHIP WITH MERCURIAL. Non-mainline extensions should
 # be specifying the version(s) of Mercurial they are tested with, or
 # leave the attribute unspecified.
-testedwith = 'internal'
+testedwith = 'ships-with-hg-core'
 
 def snapshot(ui, repo, files, node, tmproot, listsubrepos):
     '''snapshot files as of some revision
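
The next hunk lifts extdiff's per-command closure into a module-level
``savedcmd`` class whose instances are callable, so each generated command
carries a real docstring that the new ``i18nfunctions`` hook can hand to
hggettext for translation. A condensed model of the pattern (simplified,
not the extension's actual code)::

    class savedcmd(object):
        """use external program to diff repository (or selected files)

        Runs the following program::

            %(path)s
        """

        def __init__(self, path, cmdline):
            # specialize the class-level docstring per instance
            self.__doc__ = self.__doc__ % {'path': path}
            self._cmdline = cmdline

        def __call__(self, ui, repo, *pats, **opts):
            return (self._cmdline, pats, opts)

    print(savedcmd('/usr/bin/kdiff3', 'kdiff3').__doc__)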
@@ -324,6 +324,34 @@ def extdiff(ui, repo, *pats, **opts):
     cmdline = ' '.join(map(util.shellquote, [program] + option))
     return dodiff(ui, repo, cmdline, pats, opts)
 
+class savedcmd(object):
+    """use external program to diff repository (or selected files)
+
+    Show differences between revisions for the specified files, using
+    the following program::
+
+        %(path)s
+
+    When two revision arguments are given, then changes are shown
+    between those revisions. If only one revision is specified then
+    that revision is compared to the working directory, and, when no
+    revisions are specified, the working directory files are compared
+    to its parent.
+    """
+
+    def __init__(self, path, cmdline):
+        # We can't pass non-ASCII through docstrings (and path is
+        # in an unknown encoding anyway)
+        docpath = path.encode("string-escape")
+        self.__doc__ = self.__doc__ % {'path': util.uirepr(docpath)}
+        self._cmdline = cmdline
+
+    def __call__(self, ui, repo, *pats, **opts):
+        options = ' '.join(map(util.shellquote, opts['option']))
+        if options:
+            options = ' ' + options
+        return dodiff(ui, repo, self._cmdline + options, pats, opts)
+
 def uisetup(ui):
     for cmd, path in ui.configitems('extdiff'):
         path = util.expandpath(path)
@@ -357,28 +385,8 @@ def uisetup(ui):
             ui.config('merge-tools', cmd+'.diffargs')
         if args:
             cmdline += ' ' + args
-        def save(cmdline):
-            '''use closure to save diff command to use'''
-            def mydiff(ui, repo, *pats, **opts):
-                options = ' '.join(map(util.shellquote, opts['option']))
-                if options:
-                    options = ' ' + options
-                return dodiff(ui, repo, cmdline + options, pats, opts)
-            # We can't pass non-ASCII through docstrings (and path is
-            # in an unknown encoding anyway)
-            docpath = path.encode("string-escape")
-            mydiff.__doc__ = '''\
-use %(path)s to diff repository (or selected files)
-
-    Show differences between revisions for the specified files, using
-    the %(path)s program.
-
-    When two revision arguments are given, then changes are shown
-    between those revisions. If only one revision is specified then
-    that revision is compared to the working directory, and, when no
-    revisions are specified, the working directory files are compared
-    to its parent.\
-''' % {'path': util.uirepr(docpath)}
-            return mydiff
-        command(cmd, extdiffopts[:], _('hg %s [OPTION]... [FILE]...') % cmd,
-                inferrepo=True)(save(cmdline))
+        command(cmd, extdiffopts[:], _('hg %s [OPTION]... [FILE]...') % cmd,
+                inferrepo=True)(savedcmd(path, cmdline))
+
+# tell hggettext to extract docstrings from these functions:
+i18nfunctions = [savedcmd]
@@ -26,11 +26,11 @@ from mercurial import (
 release = lock.release
 cmdtable = {}
 command = cmdutil.command(cmdtable)
-# Note for extension authors: ONLY specify testedwith = 'internal' for
+# Note for extension authors: ONLY specify testedwith = 'ships-with-hg-core' for
 # extensions which SHIP WITH MERCURIAL. Non-mainline extensions should
 # be specifying the version(s) of Mercurial they are tested with, or
 # leave the attribute unspecified.
-testedwith = 'internal'
+testedwith = 'ships-with-hg-core'
 
 @command('fetch',
     [('r', 'rev', [],
@@ -113,11 +113,11 @@ from . import (
     watchmanclient,
 )
 
-# Note for extension authors: ONLY specify testedwith = 'internal' for
+# Note for extension authors: ONLY specify testedwith = 'ships-with-hg-core' for
 # extensions which SHIP WITH MERCURIAL. Non-mainline extensions should
 # be specifying the version(s) of Mercurial they are tested with, or
 # leave the attribute unspecified.
-testedwith = 'internal'
+testedwith = 'ships-with-hg-core'
 
 # This extension is incompatible with the following blacklisted extensions
 # and will disable itself when encountering one of these:
@@ -23,11 +23,11 @@ from mercurial import (
 
 cmdtable = {}
 command = cmdutil.command(cmdtable)
-# Note for extension authors: ONLY specify testedwith = 'internal' for
+# Note for extension authors: ONLY specify testedwith = 'ships-with-hg-core' for
 # extensions which SHIP WITH MERCURIAL. Non-mainline extensions should
 # be specifying the version(s) of Mercurial they are tested with, or
 # leave the attribute unspecified.
-testedwith = 'internal'
+testedwith = 'ships-with-hg-core'
 
 class gpg(object):
     def __init__(self, path, key=None):
@@ -25,11 +25,11 @@ from mercurial import (
 
 cmdtable = {}
 command = cmdutil.command(cmdtable)
-# Note for extension authors: ONLY specify testedwith = 'internal' for
+# Note for extension authors: ONLY specify testedwith = 'ships-with-hg-core' for
 # extensions which SHIP WITH MERCURIAL. Non-mainline extensions should
 # be specifying the version(s) of Mercurial they are tested with, or
 # leave the attribute unspecified.
-testedwith = 'internal'
+testedwith = 'ships-with-hg-core'
 
 @command('glog',
     [('f', 'follow', None,
@@ -54,11 +54,11 @@ from mercurial import (
 
 cmdtable = {}
 command = cmdutil.command(cmdtable)
-# Note for extension authors: ONLY specify testedwith = 'internal' for
+# Note for extension authors: ONLY specify testedwith = 'ships-with-hg-core' for
 # extensions which SHIP WITH MERCURIAL. Non-mainline extensions should
 # be specifying the version(s) of Mercurial they are tested with, or
 # leave the attribute unspecified.
-testedwith = 'internal'
+testedwith = 'ships-with-hg-core'
 
 @command('debug-diff-tree',
     [('p', 'patch', None, _('generate patch')),
@@ -41,11 +41,11 @@ from mercurial import (
     fileset,
 )
 
-# Note for extension authors: ONLY specify testedwith = 'internal' for
+# Note for extension authors: ONLY specify testedwith = 'ships-with-hg-core' for
 # extensions which SHIP WITH MERCURIAL. Non-mainline extensions should
 # be specifying the version(s) of Mercurial they are tested with, or
 # leave the attribute unspecified.
-testedwith = 'internal'
+testedwith = 'ships-with-hg-core'
 
 def pygmentize(web, field, fctx, tmpl):
     style = web.config('web', 'pygments_style', 'colorful')
@@ -201,23 +201,11 b' release = lock.release' | |||||
201 | cmdtable = {} |
|
201 | cmdtable = {} | |
202 | command = cmdutil.command(cmdtable) |
|
202 | command = cmdutil.command(cmdtable) | |
203 |
|
203 | |||
204 | class _constraints(object): |
|
204 | # Note for extension authors: ONLY specify testedwith = 'ships-with-hg-core' for | |
205 | # aborts if there are multiple rules for one node |
|
|||
206 | noduplicates = 'noduplicates' |
|
|||
207 | # abort if the node does belong to edited stack |
|
|||
208 | forceother = 'forceother' |
|
|||
209 | # abort if the node doesn't belong to edited stack |
|
|||
210 | noother = 'noother' |
|
|||
211 |
|
||||
212 | @classmethod |
|
|||
213 | def known(cls): |
|
|||
214 | return set([v for k, v in cls.__dict__.items() if k[0] != '_']) |
|
|||
215 |
|
||||
216 | # Note for extension authors: ONLY specify testedwith = 'internal' for |
|
|||
217 | # extensions which SHIP WITH MERCURIAL. Non-mainline extensions should |
|
205 | # extensions which SHIP WITH MERCURIAL. Non-mainline extensions should | |
218 | # be specifying the version(s) of Mercurial they are tested with, or |
|
206 | # be specifying the version(s) of Mercurial they are tested with, or | |
219 | # leave the attribute unspecified. |
|
207 | # leave the attribute unspecified. | |
220 |
testedwith = ' |
|
208 | testedwith = 'ships-with-hg-core' | |
221 |
|
209 | |||
222 | actiontable = {} |
|
210 | actiontable = {} | |
223 | primaryactions = set() |
|
211 | primaryactions = set() | |
@@ -403,7 +391,7 b' class histeditaction(object):' | |||||
403 | raise error.ParseError("invalid changeset %s" % rulehash) |
|
391 | raise error.ParseError("invalid changeset %s" % rulehash) | |
404 | return cls(state, rev) |
|
392 | return cls(state, rev) | |
405 |
|
393 | |||
406 | def verify(self, prev): |
|
394 | def verify(self, prev, expected, seen): | |
407 | """ Verifies semantic correctness of the rule""" |
|
395 | """ Verifies semantic correctness of the rule""" | |
408 | repo = self.repo |
|
396 | repo = self.repo | |
409 | ha = node.hex(self.node) |
|
397 | ha = node.hex(self.node) | |
@@ -412,6 +400,19 b' class histeditaction(object):' | |||||
412 | except error.RepoError: |
|
400 | except error.RepoError: | |
413 | raise error.ParseError(_('unknown changeset %s listed') |
|
401 | raise error.ParseError(_('unknown changeset %s listed') | |
414 | % ha[:12]) |
|
402 | % ha[:12]) | |
|
403 | if self.node is not None: | |||
|
404 | self._verifynodeconstraints(prev, expected, seen) | |||
|
405 | ||||
|
406 | def _verifynodeconstraints(self, prev, expected, seen): | |||
|
407 | # by default command need a node in the edited list | |||
|
408 | if self.node not in expected: | |||
|
409 | raise error.ParseError(_('%s "%s" changeset was not a candidate') | |||
|
410 | % (self.verb, node.short(self.node)), | |||
|
411 | hint=_('only use listed changesets')) | |||
|
412 | # and only one command per node | |||
|
413 | if self.node in seen: | |||
|
414 | raise error.ParseError(_('duplicated command for changeset %s') % | |||
|
415 | node.short(self.node)) | |||
415 |
|
416 | |||
416 | def torule(self): |
|
417 | def torule(self): | |
417 | """build a histedit rule line for an action |
|
418 | """build a histedit rule line for an action | |
@@ -434,19 +435,6 b' class histeditaction(object):' | |||||
434 | """ |
|
435 | """ | |
435 | return "%s\n%s" % (self.verb, node.hex(self.node)) |
|
436 | return "%s\n%s" % (self.verb, node.hex(self.node)) | |
436 |
|
437 | |||
437 | def constraints(self): |
|
|||
438 | """Return a set of constrains that this action should be verified for |
|
|||
439 | """ |
|
|||
440 | return set([_constraints.noduplicates, _constraints.noother]) |
|
|||
441 |
|
||||
442 | def nodetoverify(self): |
|
|||
443 | """Returns a node associated with the action that will be used for |
|
|||
444 | verification purposes. |
|
|||
445 |
|
||||
446 | If the action doesn't correspond to node it should return None |
|
|||
447 | """ |
|
|||
448 | return self.node |
|
|||
449 |
|
||||
450 | def run(self): |
|
438 | def run(self): | |
451 | """Runs the action. The default behavior is simply apply the action's |
|
439 | """Runs the action. The default behavior is simply apply the action's | |
452 | rulectx onto the current parentctx.""" |
|
440 | rulectx onto the current parentctx.""" | |
@@ -573,18 +561,7 b' def collapse(repo, first, last, commitop' | |||||
573 | copied = copies.pathcopies(base, last) |
|
561 | copied = copies.pathcopies(base, last) | |
574 |
|
562 | |||
575 | # prune files which were reverted by the updates |
|
563 | # prune files which were reverted by the updates | |
576 | def samefile(f): |
|
564 | files = [f for f in files if not cmdutil.samefile(f, last, base)] | |
577 | if f in last.manifest(): |
|
|||
578 | a = last.filectx(f) |
|
|||
579 | if f in base.manifest(): |
|
|||
580 | b = base.filectx(f) |
|
|||
581 | return (a.data() == b.data() |
|
|||
582 | and a.flags() == b.flags()) |
|
|||
583 | else: |
|
|||
584 | return False |
|
|||
585 | else: |
|
|||
586 | return f not in base.manifest() |
|
|||
587 | files = [f for f in files if not samefile(f)] |
|
|||
588 | # commit version of these files as defined by head |
|
565 | # commit version of these files as defined by head | |
589 | headmf = last.manifest() |
|
566 | headmf = last.manifest() | |
590 | def filectxfn(repo, ctx, path): |
|
567 | def filectxfn(repo, ctx, path): | |
@@ -683,9 +660,9 b' class edit(histeditaction):' | |||||
683 | @action(['fold', 'f'], |
|
660 | @action(['fold', 'f'], | |
684 | _('use commit, but combine it with the one above')) |
|
661 | _('use commit, but combine it with the one above')) | |
685 | class fold(histeditaction): |
|
662 | class fold(histeditaction): | |
686 | def verify(self, prev): |
|
663 | def verify(self, prev, expected, seen): | |
687 | """ Verifies semantic correctness of the fold rule""" |
|
664 | """ Verifies semantic correctness of the fold rule""" | |
688 | super(fold, self).verify(prev) |
|
665 | super(fold, self).verify(prev, expected, seen) | |
689 | repo = self.repo |
|
666 | repo = self.repo | |
690 | if not prev: |
|
667 | if not prev: | |
691 | c = repo[self.node].parents()[0] |
|
668 | c = repo[self.node].parents()[0] | |
@@ -795,8 +772,6 b' class fold(histeditaction):' | |||||
795 | return repo[n], replacements |
|
772 | return repo[n], replacements | |
796 |
|
773 | |||
797 | class base(histeditaction): |
|
774 | class base(histeditaction): | |
798 | def constraints(self): |
|
|||
799 | return set([_constraints.forceother]) |
|
|||
800 |
|
775 | |||
801 | def run(self): |
|
776 | def run(self): | |
802 | if self.repo['.'].node() != self.node: |
|
777 | if self.repo['.'].node() != self.node: | |
@@ -811,6 +786,14 b' class base(histeditaction):' | |||||
811 | basectx = self.repo['.'] |
|
786 | basectx = self.repo['.'] | |
812 | return basectx, [] |
|
787 | return basectx, [] | |
813 |
|
788 | |||
|
789 | def _verifynodeconstraints(self, prev, expected, seen): | |||
|
790 | # base can only be use with a node not in the edited set | |||
|
791 | if self.node in expected: | |||
|
792 | msg = _('%s "%s" changeset was an edited list candidate') | |||
|
793 | raise error.ParseError( | |||
|
794 | msg % (self.verb, node.short(self.node)), | |||
|
795 | hint=_('base must only use unlisted changesets')) | |||
|
796 | ||||
814 | @action(['_multifold'], |
|
797 | @action(['_multifold'], | |
815 | _( |
|
798 | _( | |
816 | """fold subclass used for when multiple folds happen in a row |
|
799 | """fold subclass used for when multiple folds happen in a row | |
@@ -871,7 +854,7 b' def findoutgoing(ui, repo, remote=None, ' | |||||
871 | roots = list(repo.revs("roots(%ln)", outgoing.missing)) |
|
854 | roots = list(repo.revs("roots(%ln)", outgoing.missing)) | |
872 | if 1 < len(roots): |
|
855 | if 1 < len(roots): | |
873 | msg = _('there are ambiguous outgoing revisions') |
|
856 | msg = _('there are ambiguous outgoing revisions') | |
874 |
hint = _('see " |
|
857 | hint = _("see 'hg help histedit' for more detail") | |
875 | raise error.Abort(msg, hint=hint) |
|
858 | raise error.Abort(msg, hint=hint) | |
876 | return repo.lookup(roots[0]) |
|
859 | return repo.lookup(roots[0]) | |
877 |
|
860 | |||
@@ -1210,8 +1193,8 b' def _edithisteditplan(ui, repo, state, r' | |||||
1210 | else: |
|
1193 | else: | |
1211 | rules = _readfile(rules) |
|
1194 | rules = _readfile(rules) | |
1212 | actions = parserules(rules, state) |
|
1195 | actions = parserules(rules, state) | |
1213 |
ctxs = [repo[act.node |
|
1196 | ctxs = [repo[act.node] \ | |
1214 |
for act in state.actions if act.node |
|
1197 | for act in state.actions if act.node] | |
1215 | warnverifyactions(ui, repo, actions, state, ctxs) |
|
1198 | warnverifyactions(ui, repo, actions, state, ctxs) | |
1216 | state.actions = actions |
|
1199 | state.actions = actions | |
1217 | state.write() |
|
1200 | state.write() | |
@@ -1307,7 +1290,7 b' def between(repo, old, new, keep):' | |||||
1307 | root = ctxs[0] # list is already sorted by repo.set |
|
1290 | root = ctxs[0] # list is already sorted by repo.set | |
1308 | if not root.mutable(): |
|
1291 | if not root.mutable(): | |
1309 | raise error.Abort(_('cannot edit public changeset: %s') % root, |
|
1292 | raise error.Abort(_('cannot edit public changeset: %s') % root, | |
1310 |
hint=_(' |
|
1293 | hint=_("see 'hg help phases' for details")) | |
1311 | return [c.node() for c in ctxs] |
|
1294 | return [c.node() for c in ctxs] | |
1312 |
|
1295 | |||
1313 | def ruleeditor(repo, ui, actions, editcomment=""): |
|
1296 | def ruleeditor(repo, ui, actions, editcomment=""): | |
@@ -1396,36 +1379,14 @@ def verifyactions(actions, state, ctxs):
     Will abort if there are to many or too few rules, a malformed rule,
     or a rule on a changeset outside of the user-given range.
     """
-    expected = set(c.hex() for c in ctxs)
+    expected = set(c.node() for c in ctxs)
     seen = set()
     prev = None
     for action in actions:
-        action.verify(prev)
+        action.verify(prev, expected, seen)
         prev = action
-        constraints = action.constraints()
-        for constraint in constraints:
-            if constraint not in _constraints.known():
-                raise error.ParseError(_('unknown constraint "%s"') %
-                                       constraint)
-
-        nodetoverify = action.nodetoverify()
-        if nodetoverify is not None:
-            ha = node.hex(nodetoverify)
-            if _constraints.noother in constraints and ha not in expected:
-                raise error.ParseError(
-                    _('%s "%s" changeset was not a candidate')
-                    % (action.verb, ha[:12]),
-                    hint=_('only use listed changesets'))
-            if _constraints.forceother in constraints and ha in expected:
-                raise error.ParseError(
-                    _('%s "%s" changeset was not an edited list candidate')
-                    % (action.verb, ha[:12]),
-                    hint=_('only use listed changesets'))
-            if _constraints.noduplicates in constraints and ha in seen:
-                raise error.ParseError(_(
-                        'duplicated command for changeset %s') %
-                        ha[:12])
-            seen.add(ha)
+        if action.node is not None:
+            seen.add(action.node)
     missing = sorted(expected - seen)  # sort to stabilize output

     if state.repo.ui.configbool('histedit', 'dropmissing'):
@@ -1433,15 +1394,16 @@ def verifyactions(actions, state, ctxs):
            raise error.ParseError(_('no rules provided'),
                    hint=_('use strip extension to remove commits'))

-        drops = [drop(state, node.bin(n)) for n in missing]
+        drops = [drop(state, n) for n in missing]
        # put the in the beginning so they execute immediately and
        # don't show in the edit-plan in the future
        actions[:0] = drops
    elif missing:
        raise error.ParseError(_('missing rules for changeset %s') %
-                missing[0][:12],
+                node.short(missing[0]),
                hint=_('use "drop %s" to discard, see also: '
-                       '"hg help -e histedit.config"') % missing[0][:12])
+                       "'hg help -e histedit.config'")
+                       % node.short(missing[0]))

 def adjustreplacementsfrommarkers(repo, oldreplacements):
     """Adjust replacements from obsolescense markers
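
The two hunks above fold histedit's constraint machinery into the actions
themselves: instead of a central loop interrogating ``constraints()`` and
``nodetoverify()``, each action now validates itself through
``verify(prev, expected, seen)`` and exposes a plain ``node`` attribute.
A minimal sketch of the resulting shape (toy classes and exceptions, not
histedit's real ones)::

    class Action(object):
        def __init__(self, verb, node=None):
            self.verb, self.node = verb, node

        def verify(self, prev, expected, seen):
            # each action knows its own rules; the caller only carries sets
            if self.node is not None:
                if self.node not in expected:
                    raise ValueError('%s: changeset was not a candidate'
                                     % self.verb)
                if self.node in seen:
                    raise ValueError('%s: duplicated command' % self.verb)

    def verifyactions(actions, expected):
        seen, prev = set(), None
        for action in actions:
            action.verify(prev, expected, seen)
            prev = action
            if action.node is not None:
                seen.add(action.node)
        return sorted(expected - seen)  # missing nodes, stable order
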
@@ -1608,10 +1570,9 @@ def stripwrapper(orig, ui, repo, nodelis
     if os.path.exists(os.path.join(repo.path, 'histedit-state')):
         state = histeditstate(repo)
         state.read()
-        histedit_nodes = set([action.nodetoverify() for action
-                          in state.actions if action.nodetoverify()])
-        strip_nodes = set([repo[n].node() for n in nodelist])
-        common_nodes = histedit_nodes & strip_nodes
+        histedit_nodes = set([action.node for action
+                          in state.actions if action.node])
+        common_nodes = histedit_nodes & set(nodelist)
         if common_nodes:
             raise error.Abort(_("histedit in progress, can't strip %s")
                               % ', '.join(node.short(x) for x in common_nodes))
@@ -24,7 +24,6 @@ from mercurial import (
     bookmarks,
     cmdutil,
     commands,
-    dirstate,
     dispatch,
     error,
     extensions,
@@ -40,11 +39,11 @@ from . import share
 cmdtable = {}
 command = cmdutil.command(cmdtable)

-# Note for extension authors: ONLY specify testedwith = 'internal' for
+# Note for extension authors: ONLY specify testedwith = 'ships-with-hg-core' for
 # extensions which SHIP WITH MERCURIAL. Non-mainline extensions should
 # be specifying the version(s) of Mercurial they are tested with, or
 # leave the attribute unspecified.
-testedwith = 'internal'
+testedwith = 'ships-with-hg-core'

 # storage format version; increment when the format changes
 storageversion = 0
@@ -63,8 +62,6 @@ def extsetup(ui):
     extensions.wrapfunction(dispatch, 'runcommand', runcommand)
     extensions.wrapfunction(bookmarks.bmstore, '_write', recordbookmarks)
     extensions.wrapfunction(
-        dirstate.dirstate, '_writedirstate', recorddirstateparents)
-    extensions.wrapfunction(
         localrepo.localrepository.dirstate, 'func', wrapdirstate)
     extensions.wrapfunction(hg, 'postshare', wrappostshare)
     extensions.wrapfunction(hg, 'copystore', unsharejournal)
@@ -84,34 +81,19 @@ def wrapdirstate(orig, repo):
     dirstate = orig(repo)
     if util.safehasattr(repo, 'journal'):
         dirstate.journalstorage = repo.journal
+        dirstate.addparentchangecallback('journal', recorddirstateparents)
     return dirstate

-def recorddirstateparents(orig, dirstate, dirstatefp):
+def recorddirstateparents(dirstate, old, new):
     """Records all dirstate parent changes in the journal."""
+    old = list(old)
+    new = list(new)
     if util.safehasattr(dirstate, 'journalstorage'):
-        old = [node.nullid, node.nullid]
-        nodesize = len(node.nullid)
-        try:
-            # The only source for the old state is in the dirstate file still
-            # on disk; the in-memory dirstate object only contains the new
-            # state. dirstate._opendirstatefile() switches beteen .hg/dirstate
-            # and .hg/dirstate.pending depending on the transaction state.
-            with dirstate._opendirstatefile() as fp:
-                state = fp.read(2 * nodesize)
-            if len(state) == 2 * nodesize:
-                old = [state[:nodesize], state[nodesize:]]
-        except IOError:
-            pass
-
-        new = dirstate.parents()
-        if old != new:
-            # only record two hashes if there was a merge
-            oldhashes = old[:1] if old[1] == node.nullid else old
-            newhashes = new[:1] if new[1] == node.nullid else new
-            dirstate.journalstorage.record(
-                wdirparenttype, '.', oldhashes, newhashes)
-
-    return orig(dirstate, dirstatefp)
+        # only record two hashes if there was a merge
+        oldhashes = old[:1] if old[1] == node.nullid else old
+        newhashes = new[:1] if new[1] == node.nullid else new
+        dirstate.journalstorage.record(
+            wdirparenttype, '.', oldhashes, newhashes)

 # hooks to record bookmark changes (both local and remote)
 def recordbookmarks(orig, store, fp):
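
The journal extension no longer wraps ``_writedirstate`` and re-reads the
dirstate file from disk to discover the previous parents; it registers a
named parent-change callback and is handed the old and new parents
directly. A toy model of that callback pattern (not Mercurial's real
dirstate class)::

    class DirstateSketch(object):
        def __init__(self):
            self._parents = ('null', 'null')
            self._callbacks = {}  # name -> callable(dirstate, old, new)

        def addparentchangecallback(self, name, callback):
            # keyed by name, so registering twice stays idempotent
            self._callbacks[name] = callback

        def setparents(self, p1, p2):
            old, new = self._parents, (p1, p2)
            self._parents = new
            if old != new:
                for cb in self._callbacks.values():
                    cb(self, old, new)
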
@@ -165,9 +147,10 @@ def _mergeentriesiter(*iterables, **kwar

 def wrappostshare(orig, sourcerepo, destrepo, **kwargs):
     """Mark this shared working copy as sharing journal information"""
-    orig(sourcerepo, destrepo, **kwargs)
-    with destrepo.vfs('shared', 'a') as fp:
-        fp.write('journal\n')
+    with destrepo.wlock():
+        orig(sourcerepo, destrepo, **kwargs)
+        with destrepo.vfs('shared', 'a') as fp:
+            fp.write('journal\n')

 def unsharejournal(orig, ui, repo, repopath):
     """Copy shared journal entries into this repo when unsharing"""
@@ -179,12 +162,12 @@ def unsharejournal(orig, ui, repo, repop
     # there is a shared repository and there are shared journal entries
     # to copy. move shared date over from source to destination but
     # move the local file first
-    if repo.vfs.exists('journal'):
-        journalpath = repo.join('journal')
+    if repo.vfs.exists('namejournal'):
+        journalpath = repo.join('namejournal')
         util.rename(journalpath, journalpath + '.bak')
     storage = repo.journal
     local = storage._open(
-        repo.vfs, filename='journal.bak', _newestfirst=False)
+        repo.vfs, filename='namejournal.bak', _newestfirst=False)
     shared = (
         e for e in storage._open(sharedrepo.vfs, _newestfirst=False)
         if sharednamespaces.get(e.namespace) in sharedfeatures)
@@ -194,8 +177,8 @@ def unsharejournal(orig, ui, repo, repop
     return orig(ui, repo, repopath)

 class journalentry(collections.namedtuple(
-        'journalentry',
-        'timestamp user command namespace name oldhashes newhashes')):
+        u'journalentry',
+        u'timestamp user command namespace name oldhashes newhashes')):
     """Individual journal entry

     * timestamp: a mercurial (time, timezone) tuple
@@ -284,19 +267,31 @@ class journalstorage(object):
         # with a non-local repo (cloning for example).
         cls._currentcommand = fullargs

+    def _currentlock(self, lockref):
+        """Returns the lock if it's held, or None if it's not.
+
+        (This is copied from the localrepo class)
+        """
+        if lockref is None:
+            return None
+        l = lockref()
+        if l is None or not l.held:
+            return None
+        return l
+
     def jlock(self, vfs):
         """Create a lock for the journal file"""
-        if self._lockref and self._lockref():
+        if self._currentlock(self._lockref) is not None:
             raise error.Abort(_('journal lock does not support nesting'))
         desc = _('journal of %s') % vfs.base
         try:
-            l = lock.lock(vfs, 'journal.lock', 0, desc=desc)
+            l = lock.lock(vfs, 'namejournal.lock', 0, desc=desc)
         except error.LockHeld as inst:
             self.ui.warn(
                 _("waiting for lock on %s held by %r\n") % (desc, inst.locker))
             # default to 600 seconds timeout
             l = lock.lock(
-                vfs, 'journal.lock',
+                vfs, 'namejournal.lock',
                 int(self.ui.config("ui", "timeout", "600")), desc=desc)
             self.ui.warn(_("got lock after %s seconds\n") % l.delay)
         self._lockref = weakref.ref(l)
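
``jlock`` keeps only a weak reference to the lock it hands out, so holding
the storage object alive never keeps the journal locked by itself;
``_currentlock`` is the helper that dereferences that weakref safely. The
idea in isolation, with a toy lock class standing in for Mercurial's::

    import weakref

    class ToyLock(object):
        def __init__(self):
            self.held = True
        def release(self):
            self.held = False

    class Storage(object):
        _lockref = None

        def _currentlock(self, lockref):
            # a dead weakref or a released lock both count as "not held"
            if lockref is None:
                return None
            l = lockref()
            if l is None or not l.held:
                return None
            return l

        def acquire(self):
            if self._currentlock(self._lockref) is not None:
                raise RuntimeError('lock does not support nesting')
            l = ToyLock()
            self._lockref = weakref.ref(l)  # weak: never keeps it alive
            return l

    s = Storage()
    l = s.acquire()
    l.release()
    s.acquire()  # allowed again: the previous lock is no longer held
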
@@ -336,7 +331,7 @@ class journalstorage(object):
         with self.jlock(vfs):
             version = None
             # open file in amend mode to ensure it is created if missing
-            with vfs('journal', mode='a+b', atomictemp=True) as f:
+            with vfs('namejournal', mode='a+b', atomictemp=True) as f:
                 f.seek(0, os.SEEK_SET)
                 # Read just enough bytes to get a version number (up to 2
                 # digits plus separator)
@@ -394,7 +389,7 @@ class journalstorage(object):
             if sharednamespaces.get(e.namespace) in self.sharedfeatures)
         return _mergeentriesiter(local, shared)

-    def _open(self, vfs, filename='journal', _newestfirst=True):
+    def _open(self, vfs, filename='namejournal', _newestfirst=True):
         if not vfs.exists(filename):
             return

@@ -475,8 +470,10 @@ def journal(ui, repo, *args, **opts):
     for count, entry in enumerate(repo.journal.filtered(name=name)):
         if count == limit:
             break
-        newhashesstr = ','.join([node.short(hash) for hash in entry.newhashes])
-        oldhashesstr = ','.join([node.short(hash) for hash in entry.oldhashes])
+        newhashesstr = fm.formatlist(map(fm.hexfunc, entry.newhashes),
+                                     name='node', sep=',')
+        oldhashesstr = fm.formatlist(map(fm.hexfunc, entry.oldhashes),
+                                     name='node', sep=',')

         fm.startitem()
         fm.condwrite(ui.verbose, 'oldhashes', '%s -> ', oldhashesstr)
@@ -486,7 +483,7 @@ def journal(ui, repo, *args, **opts):
             opts.get('all') or name.startswith('re:'),
             'name', ' %-8s', entry.name)

-        timestring = util.datestr(entry.timestamp, '%Y-%m-%d %H:%M %1%2')
+        timestring = fm.formatdate(entry.timestamp, '%Y-%m-%d %H:%M %1%2')
         fm.condwrite(ui.verbose, 'date', ' %s', timestring)
         fm.write('command', ' %s\n', entry.command)

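
The switch from ``','.join(...)`` to ``fm.formatlist()`` matters for
non-text output: a pre-joined string is opaque to the JSON and template
formatters, while a formatter-built list keeps its items addressable. A
toy illustration of the principle (not the real formatter API)::

    def formatlist(values, sep=','):
        class joinable(list):
            def __str__(self):
                return sep.join(str(v) for v in self)
        return joinable(values)

    hashes = ['a1b2c3d4e5f6', '0f1e2d3c4b5a']
    print(formatlist(hashes))        # text output: one joined string
    print(list(formatlist(hashes)))  # structured output keeps the items
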
@@ -24,7 +24,7 @@
 # Files to act upon/ignore are specified in the [keyword] section.
 # Customized keyword template mappings in the [keywordmaps] section.
 #
-# Run "hg help keyword" and "hg kwdemo" to get info on configuration.
+# Run 'hg help keyword' and 'hg kwdemo' to get info on configuration.

 '''expand keywords in tracked files

@@ -112,11 +112,11 @@ from mercurial import (

 cmdtable = {}
 command = cmdutil.command(cmdtable)
-# Note for extension authors: ONLY specify testedwith = 'internal' for
+# Note for extension authors: ONLY specify testedwith = 'ships-with-hg-core' for
 # extensions which SHIP WITH MERCURIAL. Non-mainline extensions should
 # be specifying the version(s) of Mercurial they are tested with, or
 # leave the attribute unspecified.
-testedwith = 'internal'
+testedwith = 'ships-with-hg-core'

 # hg commands that do not act on keywords
 nokwcommands = ('add addremove annotate bundle export grep incoming init log'
@@ -119,11 +119,11 @@ from . import (
     uisetup as uisetupmod,
 )

-# Note for extension authors: ONLY specify testedwith = 'internal' for
+# Note for extension authors: ONLY specify testedwith = 'ships-with-hg-core' for
 # extensions which SHIP WITH MERCURIAL. Non-mainline extensions should
 # be specifying the version(s) of Mercurial they are tested with, or
 # leave the attribute unspecified.
-testedwith = 'internal'
+testedwith = 'ships-with-hg-core'

 reposetup = reposetup.reposetup

@@ -91,15 +91,13 @@ class basestore(object):
         storefilename = lfutil.storepath(self.repo, hash)

         tmpname = storefilename + '.tmp'
-        tmpfile = util.atomictempfile(tmpname,
-                                      createmode=self.repo.store.createmode)
-
-        try:
-            gothash = self._getfile(tmpfile, filename, hash)
-        except StoreError as err:
-            self.ui.warn(err.longmessage())
-            gothash = ""
-        tmpfile.close()
+        with util.atomictempfile(tmpname,
+                createmode=self.repo.store.createmode) as tmpfile:
+            try:
+                gothash = self._getfile(tmpfile, filename, hash)
+            except StoreError as err:
+                self.ui.warn(err.longmessage())
+                gothash = ""

         if gothash != hash:
             if gothash != "":
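
``util.atomictempfile`` is used as a context manager here; the point of
the pattern is that the store file either appears complete or not at all.
A self-contained sketch of such a context manager (POSIX rename semantics
assumed; this is not Mercurial's implementation)::

    import os
    import tempfile

    class atomicwrite(object):
        def __init__(self, name):
            self._name = name
            fd, self._tmp = tempfile.mkstemp(dir=os.path.dirname(name) or '.')
            self._fp = os.fdopen(fd, 'wb')

        def __enter__(self):
            return self._fp

        def __exit__(self, exctype, excvalue, tb):
            self._fp.close()
            if exctype is None:
                os.rename(self._tmp, self._name)  # atomic on POSIX
            else:
                os.unlink(self._tmp)  # discard the partial file on error

    with atomicwrite('store.bin') as fp:
        fp.write(b'payload')
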
@@ -515,9 +515,13 @@ def updatelfiles(ui, repo, filelist=None
                 rellfile = lfile
                 relstandin = lfutil.standin(lfile)
                 if wvfs.exists(relstandin):
-                    mode = wvfs.stat(relstandin).st_mode
-                    if mode != wvfs.stat(rellfile).st_mode:
-                        wvfs.chmod(rellfile, mode)
+                    standinexec = wvfs.stat(relstandin).st_mode & 0o100
+                    st = wvfs.stat(rellfile).st_mode
+                    if standinexec != st & 0o100:
+                        st &= ~0o111
+                        if standinexec:
+                            st |= (st >> 2) & 0o111 & ~util.umask
+                        wvfs.chmod(rellfile, st)
                         update1 = 1

             updated += update1
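
The permission arithmetic packs a lot into three lines: ``st &= ~0o111``
clears every execute bit, ``(st >> 2) & 0o111`` maps each read bit onto
the execute bit of the same rwx triplet (x sits two bits below r), and
``& ~util.umask`` withholds execute where the umask forbids it. The same
computation stated on its own::

    def propagate_exec(st_mode, want_exec, umask):
        st = st_mode & 0o777
        st &= ~0o111                          # drop all execute bits
        if want_exec:
            st |= (st >> 2) & 0o111 & ~umask  # x wherever r is permitted
        return st

    assert propagate_exec(0o644, True, 0o022) == 0o755
    assert propagate_exec(0o640, True, 0o022) == 0o750
    assert propagate_exec(0o755, False, 0o022) == 0o644
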
@@ -54,10 +54,10 @@ def link(src, dest):
         util.oslink(src, dest)
     except OSError:
         # if hardlinks fail, fallback on atomic copy
-        dst = util.atomictempfile(dest)
-        for chunk in util.filechunkiter(open(src, 'rb')):
-            dst.write(chunk)
-        dst.close()
+        with open(src, 'rb') as srcf:
+            with util.atomictempfile(dest) as dstf:
+                for chunk in util.filechunkiter(srcf):
+                    dstf.write(chunk)
         os.chmod(dest, os.stat(src).st_mode)

 def usercachepath(ui, hash):
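
The rewrite keeps the existing strategy: try a hardlink first (cheap and
storage-sharing), fall back to a chunked copy when linking is impossible,
e.g. across filesystems. A generic version of the same fallback::

    import os
    import shutil

    def link_or_copy(src, dest):
        try:
            os.link(src, dest)              # hardlink when possible
        except OSError:
            with open(src, 'rb') as srcf, open(dest, 'wb') as dstf:
                shutil.copyfileobj(srcf, dstf, 128 * 1024)  # chunked copy
            os.chmod(dest, os.stat(src).st_mode)
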
@@ -231,7 +231,8 @@ def copyfromcache(repo, hash, filename):
     # don't use atomic writes in the working copy.
     with open(path, 'rb') as srcfd:
         with wvfs(filename, 'wb') as destfd:
-            gothash = copyandhash(util.filechunkiter(srcfd, 128 * 1024), destfd)
+            gothash = copyandhash(
+                util.filechunkiter(srcfd), destfd)
     if gothash != hash:
         repo.ui.warn(_('%s: data corruption in %s with hash %s\n')
                      % (filename, path, gothash))
@@ -264,11 +265,11 @@ def copytostoreabsolute(repo, file, hash
         link(usercachepath(repo.ui, hash), storepath(repo, hash))
     else:
         util.makedirs(os.path.dirname(storepath(repo, hash)))
-        dst = util.atomictempfile(storepath(repo, hash),
-                                  createmode=repo.store.createmode)
-        for chunk in util.filechunkiter(open(file, 'rb')):
-            dst.write(chunk)
-        dst.close()
+        with open(file, 'rb') as srcf:
+            with util.atomictempfile(storepath(repo, hash),
+                                     createmode=repo.store.createmode) as dstf:
+                for chunk in util.filechunkiter(srcf):
+                    dstf.write(chunk)
     linktousercache(repo, hash)

 def linktousercache(repo, hash):
274 | def linktousercache(repo, hash): |
|
275 | def linktousercache(repo, hash): | |
@@ -370,10 +371,9 b' def hashfile(file):' | |||||
370 | if not os.path.exists(file): |
|
371 | if not os.path.exists(file): | |
371 | return '' |
|
372 | return '' | |
372 | hasher = hashlib.sha1('') |
|
373 | hasher = hashlib.sha1('') | |
373 |
|
|
374 | with open(file, 'rb') as fd: | |
374 |
for data in util.filechunkiter(fd |
|
375 | for data in util.filechunkiter(fd): | |
375 | hasher.update(data) |
|
376 | hasher.update(data) | |
376 | fd.close() |
|
|||
377 | return hasher.hexdigest() |
|
377 | return hasher.hexdigest() | |
378 |
|
378 | |||
379 | def getexecutable(filename): |
|
379 | def getexecutable(filename): |
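
``hashfile`` now closes its descriptor via ``with`` and relies on
``filechunkiter``'s default chunk size. The underlying pattern, chunked
hashing in constant memory, written out with only the standard library::

    import hashlib

    def hashfile_sketch(path, chunksize=128 * 1024):
        hasher = hashlib.sha1()
        with open(path, 'rb') as fd:
            while True:
                data = fd.read(chunksize)
                if not data:
                    break                 # EOF
                hasher.update(data)
        return hasher.hexdigest()
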
@@ -10,6 +10,7 @@
 from __future__ import absolute_import

 from mercurial.i18n import _
+from mercurial import util

 from . import (
     basestore,
@@ -42,7 +43,8 @@ class localstore(basestore.basestore):
             raise basestore.StoreError(filename, hash, self.url,
                                        _("can't get file locally"))
         with open(path, 'rb') as fd:
-            return lfutil.copyandhash(fd, tmpfile)
+            return lfutil.copyandhash(
+                util.filechunkiter(fd), tmpfile)

     def _verifyfiles(self, contents, filestocheck):
         failed = False
@@ -883,11 +883,8 @@ def hgclone(orig, ui, opts, *args, **kwa

     # If largefiles is required for this repo, permanently enable it locally
     if 'largefiles' in repo.requirements:
-        fp = repo.vfs('hgrc', 'a', text=True)
-        try:
+        with repo.vfs('hgrc', 'a', text=True) as fp:
             fp.write('\n[extensions]\nlargefiles=\n')
-        finally:
-            fp.close()

     # Caching is implicitly limited to 'rev' option, since the dest repo was
     # truncated at that point. The user may expect a download count with
@@ -1339,30 +1336,28 @@ def overridecat(orig, ui, repo, file1, *
         m.visitdir = lfvisitdirfn

     for f in ctx.walk(m):
-        fp = cmdutil.makefileobj(repo, opts.get('output'), ctx.node(),
-                                 pathname=f)
-        lf = lfutil.splitstandin(f)
-        if lf is None or origmatchfn(f):
-            # duplicating unreachable code from commands.cat
-            data = ctx[f].data()
-            if opts.get('decode'):
-                data = repo.wwritedata(f, data)
-            fp.write(data)
-        else:
-            hash = lfutil.readstandin(repo, lf, ctx.rev())
-            if not lfutil.inusercache(repo.ui, hash):
-                store = storefactory.openstore(repo)
-                success, missing = store.get([(lf, hash)])
-                if len(success) != 1:
-                    raise error.Abort(
-                        _('largefile %s is not in cache and could not be '
-                          'downloaded') % lf)
-            path = lfutil.usercachepath(repo.ui, hash)
-            fpin = open(path, "rb")
-            for chunk in util.filechunkiter(fpin, 128 * 1024):
-                fp.write(chunk)
-            fpin.close()
-        fp.close()
+        with cmdutil.makefileobj(repo, opts.get('output'), ctx.node(),
+                                 pathname=f) as fp:
+            lf = lfutil.splitstandin(f)
+            if lf is None or origmatchfn(f):
+                # duplicating unreachable code from commands.cat
+                data = ctx[f].data()
+                if opts.get('decode'):
+                    data = repo.wwritedata(f, data)
+                fp.write(data)
+            else:
+                hash = lfutil.readstandin(repo, lf, ctx.rev())
+                if not lfutil.inusercache(repo.ui, hash):
+                    store = storefactory.openstore(repo)
+                    success, missing = store.get([(lf, hash)])
+                    if len(success) != 1:
+                        raise error.Abort(
+                            _('largefile %s is not in cache and could not be '
+                              'downloaded') % lf)
+                path = lfutil.usercachepath(repo.ui, hash)
+                with open(path, "rb") as fpin:
+                    for chunk in util.filechunkiter(fpin):
+                        fp.write(chunk)
     err = 0
     return err

@@ -134,7 +134,7 @@ def wirereposetup(ui, repo):
                     length))

             # SSH streams will block if reading more than length
-            for chunk in util.filechunkiter(stream, 128 * 1024, length):
+            for chunk in util.filechunkiter(stream, limit=length):
                 yield chunk
             # HTTP streams must hit the end to process the last empty
             # chunk of Chunked-Encoding so the connection can be reused.
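
The ``limit`` keyword is what keeps SSH transfers safe: the iterator must
never read past the announced length or the stream blocks. A toy
equivalent of ``util.filechunkiter`` showing how the limit interacts with
the chunk size::

    def filechunkiter_sketch(f, size=128 * 1024, limit=None):
        while True:
            nbytes = size if limit is None else min(limit, size)
            s = nbytes and f.read(nbytes)
            if not s:
                break                     # EOF or limit exhausted
            if limit:
                limit -= len(s)
            yield s
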
@@ -45,17 +45,13 @@ class remotestore(basestore.basestore):

     def sendfile(self, filename, hash):
         self.ui.debug('remotestore: sendfile(%s, %s)\n' % (filename, hash))
-        fd = None
         try:
-            fd = lfutil.httpsendfile(self.ui, filename)
-            return self._put(hash, fd)
+            with lfutil.httpsendfile(self.ui, filename) as fd:
+                return self._put(hash, fd)
         except IOError as e:
             raise error.Abort(
                 _('remotestore: could not open file %s: %s')
                 % (filename, str(e)))
-        finally:
-            if fd:
-                fd.close()

     def _getfile(self, tmpfile, filename, hash):
         try:
@@ -122,7 +118,7 @@ class remotestore(basestore.basestore):
         raise NotImplementedError('abstract method')

     def _get(self, hash):
-        '''Get
+        '''Get a iterator for content with the given hash.'''
         raise NotImplementedError('abstract method')

     def _stat(self, hashes):
124 | def _stat(self, hashes): |
@@ -40,11 +40,11 @@ import platform
 import subprocess
 import sys

-# Note for extension authors: ONLY specify testedwith = 'internal' for
+# Note for extension authors: ONLY specify testedwith = 'ships-with-hg-core' for
 # extensions which SHIP WITH MERCURIAL. Non-mainline extensions should
 # be specifying the version(s) of Mercurial they are tested with, or
 # leave the attribute unspecified.
-testedwith = 'internal'
+testedwith = 'ships-with-hg-core'

 def uisetup(ui):
     if platform.system() == 'Windows':
@@ -99,11 +99,11 @@ seriesopts = [('s', 'summary', None, _('

 cmdtable = {}
 command = cmdutil.command(cmdtable)
-# Note for extension authors: ONLY specify testedwith = 'internal' for
+# Note for extension authors: ONLY specify testedwith = 'ships-with-hg-core' for
 # extensions which SHIP WITH MERCURIAL. Non-mainline extensions should
 # be specifying the version(s) of Mercurial they are tested with, or
 # leave the attribute unspecified.
-testedwith = 'internal'
+testedwith = 'ships-with-hg-core'

 # force load strip extension formerly included in mq and import some utility
 try:
@@ -1562,7 +1562,7 @@ class queue(object):
             if not repo[self.applied[-1].node].mutable():
                 raise error.Abort(
                     _("popping would remove a public revision"),
-                    hint=_('see "hg help phases" for details'))
+                    hint=_("see 'hg help phases' for details"))

         # we know there are no local changes, so we can make a simplified
         # form of hg.update.
@@ -1631,7 +1631,7 @@ class queue(object):
             raise error.Abort(_("cannot qrefresh a revision with children"))
         if not repo[top].mutable():
             raise error.Abort(_("cannot qrefresh public revision"),
-                              hint=_('see "hg help phases" for details'))
+                              hint=_("see 'hg help phases' for details"))

         cparents = repo.changelog.parents(top)
         patchparent = self.qparents(repo, top)
@@ -1840,7 +1840,7 @@ class queue(object):

                     self.applied.append(statusentry(n, patchfn))
                 finally:
-                    lockmod.release(lock, tr)
+                    lockmod.release(tr, lock)
             except: # re-raises
                 ctx = repo[cparents[0]]
                 repo.dirstate.rebuild(ctx.node(), ctx.manifest())
2117 | for r in rev: |
|
2117 | for r in rev: | |
2118 | if not repo[r].mutable(): |
|
2118 | if not repo[r].mutable(): | |
2119 | raise error.Abort(_('revision %d is not mutable') % r, |
|
2119 | raise error.Abort(_('revision %d is not mutable') % r, | |
2120 |
hint=_(' |
|
2120 | hint=_("see 'hg help phases' " | |
2121 | 'for details')) |
|
2121 | 'for details')) | |
2122 | p1, p2 = repo.changelog.parentrevs(r) |
|
2122 | p1, p2 = repo.changelog.parentrevs(r) | |
2123 | n = repo.changelog.node(r) |
|
2123 | n = repo.changelog.node(r) | |
@@ -3354,53 +3354,54 @@ def qqueue(ui, repo, name=None, **opts):
         raise error.Abort(
                 _('invalid queue name, may not contain the characters ":\\/."'))

-    existing = _getqueues()
+    with repo.wlock():
+        existing = _getqueues()

-    if opts.get('create'):
-        if name in existing:
-            raise error.Abort(_('queue "%s" already exists') % name)
-        if _noqueues():
-            _addqueue(_defaultqueue)
-        _addqueue(name)
-        _setactive(name)
-    elif opts.get('rename'):
-        current = _getcurrent()
-        if name == current:
-            raise error.Abort(_('can\'t rename "%s" to its current name')
-                              % name)
-        if name in existing:
-            raise error.Abort(_('queue "%s" already exists') % name)
-
-        olddir = _queuedir(current)
-        newdir = _queuedir(name)
-
-        if os.path.exists(newdir):
-            raise error.Abort(_('non-queue directory "%s" already exists') %
-                    newdir)
-
-        fh = repo.vfs('patches.queues.new', 'w')
-        for queue in existing:
-            if queue == current:
-                fh.write('%s\n' % (name,))
-                if os.path.exists(olddir):
-                    util.rename(olddir, newdir)
-            else:
-                fh.write('%s\n' % (queue,))
-        fh.close()
-        util.rename(repo.join('patches.queues.new'), repo.join(_allqueues))
-        _setactivenocheck(name)
-    elif opts.get('delete'):
-        _delete(name)
-    elif opts.get('purge'):
-        if name in existing:
-            _delete(name)
-        qdir = _queuedir(name)
-        if os.path.exists(qdir):
-            shutil.rmtree(qdir)
-    else:
-        if name not in existing:
-            raise error.Abort(_('use --create to create a new queue'))
-        _setactive(name)
+        if opts.get('create'):
+            if name in existing:
+                raise error.Abort(_('queue "%s" already exists') % name)
+            if _noqueues():
+                _addqueue(_defaultqueue)
+            _addqueue(name)
+            _setactive(name)
+        elif opts.get('rename'):
+            current = _getcurrent()
+            if name == current:
+                raise error.Abort(_('can\'t rename "%s" to its current name')
+                                  % name)
+            if name in existing:
+                raise error.Abort(_('queue "%s" already exists') % name)
+
+            olddir = _queuedir(current)
+            newdir = _queuedir(name)
+
+            if os.path.exists(newdir):
+                raise error.Abort(_('non-queue directory "%s" already exists') %
+                        newdir)
+
+            fh = repo.vfs('patches.queues.new', 'w')
+            for queue in existing:
+                if queue == current:
+                    fh.write('%s\n' % (name,))
+                    if os.path.exists(olddir):
+                        util.rename(olddir, newdir)
+                else:
+                    fh.write('%s\n' % (queue,))
+            fh.close()
+            util.rename(repo.join('patches.queues.new'), repo.join(_allqueues))
+            _setactivenocheck(name)
+        elif opts.get('delete'):
+            _delete(name)
+        elif opts.get('purge'):
+            if name in existing:
+                _delete(name)
+            qdir = _queuedir(name)
+            if os.path.exists(qdir):
+                shutil.rmtree(qdir)
+        else:
+            if name not in existing:
+                raise error.Abort(_('use --create to create a new queue'))
+            _setactive(name)

 def mqphasedefaults(repo, roots):
     """callback used to set mq changeset as secret when no phase data exists"""
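
The point of the reindentation above is that ``_getqueues()`` is now read
inside the same ``repo.wlock()`` that protects the writes, making the
whole read-modify-write of the queue registry atomic with respect to
concurrent processes. The shape of the fix in miniature, with a thread
lock standing in for the repository lock::

    import threading

    _lock = threading.Lock()
    _queues = {'patches': True}

    def addqueue(name):
        with _lock:
            existing = dict(_queues)      # read under the lock...
            if name in existing:
                raise ValueError('queue "%s" already exists' % name)
            existing[name] = True
            _queues.clear()
            _queues.update(existing)      # ...and write before releasing
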
@@ -148,11 +148,11 @@ from mercurial import (
     util,
 )

-# Note for extension authors: ONLY specify testedwith = 'internal' for
+# Note for extension authors: ONLY specify testedwith = 'ships-with-hg-core' for
 # extensions which SHIP WITH MERCURIAL. Non-mainline extensions should
 # be specifying the version(s) of Mercurial they are tested with, or
 # leave the attribute unspecified.
-testedwith = 'internal'
+testedwith = 'ships-with-hg-core'

 # template for single changeset can include email headers.
 single_template = '''
@@ -10,7 +10,7 @@
 # [extension]
 # pager =
 #
-# Run "hg help pager" to get info on configuration.
+# Run 'hg help pager' to get info on configuration.

 '''browse command output with an external pager

@@ -75,11 +75,11 @@ from mercurial import (
     util,
 )

-# Note for extension authors: ONLY specify testedwith = 'internal' for
+# Note for extension authors: ONLY specify testedwith = 'ships-with-hg-core' for
 # extensions which SHIP WITH MERCURIAL. Non-mainline extensions should
 # be specifying the version(s) of Mercurial they are tested with, or
 # leave the attribute unspecified.
-testedwith = 'internal'
+testedwith = 'ships-with-hg-core'

 def _runpager(ui, p):
     pager = subprocess.Popen(p, shell=True, bufsize=-1,
@@ -87,11 +87,11 @@ stringio = util.stringio

 cmdtable = {}
 command = cmdutil.command(cmdtable)
-# Note for extension authors: ONLY specify testedwith = 'internal' for
+# Note for extension authors: ONLY specify testedwith = 'ships-with-hg-core' for
 # extensions which SHIP WITH MERCURIAL. Non-mainline extensions should
 # be specifying the version(s) of Mercurial they are tested with, or
 # leave the attribute unspecified.
-testedwith = 'internal'
+testedwith = 'ships-with-hg-core'

 def _addpullheader(seq, ctx):
     """Add a header pointing to a public URL where the changeset is available
@@ -38,11 +38,11 @@ from mercurial import (

 cmdtable = {}
 command = cmdutil.command(cmdtable)
-# Note for extension authors: ONLY specify testedwith = 'internal' for
+# Note for extension authors: ONLY specify testedwith = 'ships-with-hg-core' for
 # extensions which SHIP WITH MERCURIAL. Non-mainline extensions should
 # be specifying the version(s) of Mercurial they are tested with, or
 # leave the attribute unspecified.
-testedwith = 'internal'
+testedwith = 'ships-with-hg-core'

 @command('purge|clean',
     [('a', 'abort-on-err', None, _('abort if an error occurs')),
@@ -66,11 +66,11 @@ revskipped = (revignored, revprecursor,

 cmdtable = {}
 command = cmdutil.command(cmdtable)
-# Note for extension authors: ONLY specify testedwith = 'internal' for
+# Note for extension authors: ONLY specify testedwith = 'ships-with-hg-core' for
 # extensions which SHIP WITH MERCURIAL. Non-mainline extensions should
 # be specifying the version(s) of Mercurial they are tested with, or
 # leave the attribute unspecified.
-testedwith = 'internal'
+testedwith = 'ships-with-hg-core'

 def _nothingtorebase():
     return 1
@@ -296,7 +296,7 @@ class rebaseruntime(object):
         if not self.keepf and not self.repo[root].mutable():
             raise error.Abort(_("can't rebase public changeset %s")
                               % self.repo[root],
-                              hint=_('see "hg help phases" for details'))
+                              hint=_("see 'hg help phases' for details"))

         (self.originalwd, self.target, self.state) = result
         if self.collapsef:
@@ -335,8 +335,9 @@ class rebaseruntime(object):
         if self.activebookmark:
             bookmarks.deactivate(repo)

-        sortedrevs = sorted(self.state)
-        total = len(self.state)
+        sortedrevs = repo.revs('sort(%ld, -topo)', self.state)
+        cands = [k for k, v in self.state.iteritems() if v == revtodo]
+        total = len(cands)
         pos = 0
         for rev in sortedrevs:
             ctx = repo[rev]
@@ -345,8 +346,8 @@ class rebaseruntime(object):
             names = repo.nodetags(ctx.node()) + repo.nodebookmarks(ctx.node())
             if names:
                 desc += ' (%s)' % ' '.join(names)
-            pos += 1
             if self.state[rev] == revtodo:
+                pos += 1
                 ui.status(_('rebasing %s\n') % desc)
                 ui.progress(_("rebasing"), pos, ("%d:%s" % (rev, ctx)),
                             _('changesets'), total)
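
Both hunks fix the progress accounting: the total and the increments must
count the same population (revisions still to do), otherwise entries that
were already finished inflate the denominator. The corrected bookkeeping,
reduced to its essentials::

    state = {1: 'done', 2: 'todo', 3: 'todo'}
    todo = [r for r, v in state.items() if v == 'todo']
    total, pos = len(todo), 0
    for rev in sorted(state):
        if state[rev] == 'todo':
            pos += 1
            print('rebasing %d (%d/%d)' % (rev, pos, total))
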
@@ -1127,7 +1128,7 @@ def abort(repo, originalwd, target, stat
     if immutable:
         repo.ui.warn(_("warning: can't clean up public changesets %s\n")
                      % ', '.join(str(repo[r]) for r in immutable),
-                     hint=_('see "hg help phases" for details'))
+                     hint=_("see 'hg help phases' for details"))
         cleanup = False

     descendants = set()
@@ -1197,7 +1198,7 @@ def buildstate(repo, dest, rebaseset, co
         repo.ui.debug('source is a child of destination\n')
         return None

-    repo.ui.debug('rebase onto %d starting from %s\n' % (dest, root))
+    repo.ui.debug('rebase onto %s starting from %s\n' % (dest, root))
     state.update(dict.fromkeys(rebaseset, revtodo))
     # Rebase tries to turn <dest> into a parent of <root> while
     # preserving the number of parents of rebased changesets:
@@ -22,11 +22,11 @@ from mercurial import (

 cmdtable = {}
 command = cmdutil.command(cmdtable)
-# Note for extension authors: ONLY specify testedwith = 'internal' for
+# Note for extension authors: ONLY specify testedwith = 'ships-with-hg-core' for
 # extensions which SHIP WITH MERCURIAL. Non-mainline extensions should
 # be specifying the version(s) of Mercurial they are tested with, or
 # leave the attribute unspecified.
-testedwith = 'internal'
+testedwith = 'ships-with-hg-core'


 @command("record",
@@ -70,7 +70,7 @@ def record(ui, repo, *pats, **opts):
     backup = ui.backupconfig('experimental', 'crecord')
     try:
         ui.setconfig('experimental', 'crecord', False, 'record')
-        commands.commit(ui, repo, *pats, **opts)
+        return commands.commit(ui, repo, *pats, **opts)
     finally:
         ui.restoreconfig(backup)

@@ -21,11 +21,11 @@ from mercurial import (

 cmdtable = {}
 command = cmdutil.command(cmdtable)
-# Note for extension authors: ONLY specify testedwith = 'internal' for
+# Note for extension authors: ONLY specify testedwith = 'ships-with-hg-core' for
 # extensions which SHIP WITH MERCURIAL. Non-mainline extensions should
 # be specifying the version(s) of Mercurial they are tested with, or
 # leave the attribute unspecified.
-testedwith = 'internal'
+testedwith = 'ships-with-hg-core'

 @command('relink', [], _('[ORIGIN]'))
 def relink(ui, repo, origin=None, **opts):
@@ -56,11 +56,11 @@ from mercurial import (

 cmdtable = {}
 command = cmdutil.command(cmdtable)
-# Note for extension authors: ONLY specify testedwith = 'internal' for
+# Note for extension authors: ONLY specify testedwith = 'ships-with-hg-core' for
 # extensions which SHIP WITH MERCURIAL. Non-mainline extensions should
 # be specifying the version(s) of Mercurial they are tested with, or
 # leave the attribute unspecified.
-testedwith = 'internal'
+testedwith = 'ships-with-hg-core'


 class ShortRepository(object):
@@ -56,11 +56,11 @@ parseurl = hg.parseurl

 cmdtable = {}
 command = cmdutil.command(cmdtable)
-# Note for extension authors: ONLY specify testedwith = 'internal' for
+# Note for extension authors: ONLY specify testedwith = 'ships-with-hg-core' for
 # extensions which SHIP WITH MERCURIAL. Non-mainline extensions should
 # be specifying the version(s) of Mercurial they are tested with, or
 # leave the attribute unspecified.
-testedwith = 'internal'
+testedwith = 'ships-with-hg-core'

 @command('share',
     [('U', 'noupdate', None, _('do not create a working directory')),
@@ -54,11 +54,11 @@ from . import (

 cmdtable = {}
 command = cmdutil.command(cmdtable)
-# Note for extension authors: ONLY specify testedwith = 'internal' for
+# Note for extension authors: ONLY specify testedwith = 'ships-with-hg-core' for
 # extensions which SHIP WITH MERCURIAL. Non-mainline extensions should
 # be specifying the version(s) of Mercurial they are tested with, or
 # leave the attribute unspecified.
-testedwith = 'internal'
+testedwith = 'ships-with-hg-core'

 backupdir = 'shelve-backup'
 shelvedir = 'shelved'
@@ -23,11 +23,11 @@ release = lockmod.release

 cmdtable = {}
 command = cmdutil.command(cmdtable)
-# Note for extension authors: ONLY specify testedwith = 'internal' for
+# Note for extension authors: ONLY specify testedwith = 'ships-with-hg-core' for
 # extensions which SHIP WITH MERCURIAL. Non-mainline extensions should
 # be specifying the version(s) of Mercurial they are tested with, or
 # leave the attribute unspecified.
-testedwith = 'internal'
+testedwith = 'ships-with-hg-core'

 def checksubstate(repo, baserev=None):
     '''return list of subrepos at a different revision than substate.
@@ -40,11 +40,11 @@ class TransplantError(error.Abort):

 cmdtable = {}
 command = cmdutil.command(cmdtable)
-# Note for extension authors: ONLY specify testedwith = 'internal' for
+# Note for extension authors: ONLY specify testedwith = 'ships-with-hg-core' for
 # extensions which SHIP WITH MERCURIAL. Non-mainline extensions should
 # be specifying the version(s) of Mercurial they are tested with, or
 # leave the attribute unspecified.
-testedwith = 'internal'
+testedwith = 'ships-with-hg-core'

 class transplantentry(object):
     def __init__(self, lnode, rnode):
@@ -55,11 +55,11 @@ from mercurial import (
     error,
 )

-# Note for extension authors: ONLY specify testedwith = 'internal' for
+# Note for extension authors: ONLY specify testedwith = 'ships-with-hg-core' for
 # extensions which SHIP WITH MERCURIAL. Non-mainline extensions should
 # be specifying the version(s) of Mercurial they are tested with, or
 # leave the attribute unspecified.
-testedwith = 'internal'
+testedwith = 'ships-with-hg-core'

 _encoding = None # see extsetup

@@ -148,8 +148,8 @@ def wrapname(name, wrapper):
 # NOTE: os.path.dirname() and os.path.basename() are safe because
 # they use result of os.path.split()
 funcs = '''os.path.join os.path.split os.path.splitext
-   os.path.normpath os.makedirs
-   mercurial.util.endswithsep mercurial.util.splitpath mercurial.util.checkcase
+   os.path.normpath os.makedirs mercurial.util.endswithsep
+   mercurial.util.splitpath mercurial.util.fscasesensitive
    mercurial.util.fspath mercurial.util.pconvert mercurial.util.normpath
    mercurial.util.checkwinfilename mercurial.util.checkosfilename
    mercurial.util.split'''
52 | util, |
|
52 | util, | |
53 | ) |
|
53 | ) | |
54 |
|
54 | |||
55 |
# Note for extension authors: ONLY specify testedwith = ' |
|
55 | # Note for extension authors: ONLY specify testedwith = 'ships-with-hg-core' for | |
56 | # extensions which SHIP WITH MERCURIAL. Non-mainline extensions should |
|
56 | # extensions which SHIP WITH MERCURIAL. Non-mainline extensions should | |
57 | # be specifying the version(s) of Mercurial they are tested with, or |
|
57 | # be specifying the version(s) of Mercurial they are tested with, or | |
58 | # leave the attribute unspecified. |
|
58 | # leave the attribute unspecified. | |
59 |
testedwith = ' |
|
59 | testedwith = 'ships-with-hg-core' | |
60 |
|
60 | |||
61 | # regexp for single LF without CR preceding. |
|
61 | # regexp for single LF without CR preceding. | |
62 | re_single_lf = re.compile('(^|[^\r])\n', re.MULTILINE) |
|
62 | re_single_lf = re.compile('(^|[^\r])\n', re.MULTILINE) |
@@ -40,11 +40,11 b' from mercurial.hgweb import ('
     server as servermod
 )

-# Note for extension authors: ONLY specify testedwith = '
+# Note for extension authors: ONLY specify testedwith = 'ships-with-hg-core' for
 # extensions which SHIP WITH MERCURIAL. Non-mainline extensions should
 # be specifying the version(s) of Mercurial they are tested with, or
 # leave the attribute unspecified.

-testedwith = '
+testedwith = 'ships-with-hg-core'

 # publish
@@ -114,7 +114,7 b' def docstrings(path):'
         if func.__doc__:
             src = inspect.getsource(func)
             name = "%s.%s" % (path, func.__name__)
-            lineno = func.func_code.co_firstlineno
+            lineno = inspect.getsourcelines(func)[1]
             doc = func.__doc__
             if rstrip:
                 doc = doc.rstrip()
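The replacement works because inspect.getsourcelines() exists on both Python 2 and Python 3 and returns the first line number directly, whereas func.func_code is the Python 2 spelling only (Python 3 renamed it to __code__). A standalone illustration of the stdlib behavior, not part of the patch:

    import inspect

    def greet():
        """Say hello."""
        return 'hello'

    # getsourcelines() returns (list_of_source_lines, first_line_number);
    # element [1] is the line where "def greet():" starts in the file.
    lines, lineno = inspect.getsourcelines(greet)
    print(lineno)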
@@ -170,7 +170,7 b' if sys.version_info[0] >= 3:'
             spec.loader = hgloader(spec.name, spec.origin)
             return spec

-    def replacetokens(tokens):
+    def replacetokens(tokens, fullname):
         """Transform a stream of tokens from raw to Python 3.

         It is called by the custom module loading machinery to rewrite
@@ -184,6 +184,57 b' if sys.version_info[0] >= 3:'
         REMEMBER TO CHANGE ``BYTECODEHEADER`` WHEN CHANGING THIS FUNCTION
         OR CACHED FILES WON'T GET INVALIDATED PROPERLY.
         """
+        futureimpline = False
+
+        # The following utility functions access the tokens list and i index of
+        # the for i, t enumerate(tokens) loop below
+        def _isop(j, *o):
+            """Assert that tokens[j] is an OP with one of the given values"""
+            try:
+                return tokens[j].type == token.OP and tokens[j].string in o
+            except IndexError:
+                return False
+
+        def _findargnofcall(n):
+            """Find arg n of a call expression (start at 0)
+
+            Returns index of the first token of that argument, or None if
+            there is not that many arguments.
+
+            Assumes that token[i + 1] is '('.
+
+            """
+            nested = 0
+            for j in range(i + 2, len(tokens)):
+                if _isop(j, ')', ']', '}'):
+                    # end of call, tuple, subscription or dict / set
+                    nested -= 1
+                    if nested < 0:
+                        return None
+                elif n == 0:
+                    # this is the starting position of arg
+                    return j
+                elif _isop(j, '(', '[', '{'):
+                    nested += 1
+                elif _isop(j, ',') and nested == 0:
+                    n -= 1
+
+            return None
+
+        def _ensureunicode(j):
+            """Make sure the token at j is a unicode string
+
+            This rewrites a string token to include the unicode literal prefix
+            so the string transformer won't add the byte prefix.
+
+            Ignores tokens that are not strings. Assumes bounds checking has
+            already been done.
+
+            """
+            st = tokens[j]
+            if st.type == token.STRING and st.string.startswith(("'", '"')):
+                tokens[j] = st._replace(string='u%s' % st.string)
+
         for i, t in enumerate(tokens):
             # Convert most string literals to byte literals. String literals
             # in Python 2 are bytes. String literals in Python 3 are unicode.
@@ -213,64 +264,61 b' if sys.version_info[0] >= 3:'
                 continue

             # String literal. Prefix to make a b'' string.
-            yield tokenize.TokenInfo(t.type, 'b%s' % s, t.start, t.end,
-                                     t.line)
+            yield t._replace(string='b%s' % t.string)
             continue

-            try:
-                nexttoken = tokens[i + 1]
-            except IndexError:
-                nexttoken = None
-
-            try:
-                prevtoken = tokens[i - 1]
-            except IndexError:
-                prevtoken = None
+            # Insert compatibility imports at "from __future__ import" line.
+            # No '\n' should be added to preserve line numbers.
+            if (t.type == token.NAME and t.string == 'import' and
+                all(u.type == token.NAME for u in tokens[i - 2:i]) and
+                [u.string for u in tokens[i - 2:i]] == ['from', '__future__']):
+                futureimpline = True
+            if t.type == token.NEWLINE and futureimpline:
+                futureimpline = False
+                if fullname == 'mercurial.pycompat':
+                    yield t
+                    continue
+                r, c = t.start
+                l = (b'; from mercurial.pycompat import '
+                     b'delattr, getattr, hasattr, setattr, xrange\n')
+                for u in tokenize.tokenize(io.BytesIO(l).readline):
+                    if u.type in (tokenize.ENCODING, token.ENDMARKER):
+                        continue
+                    yield u._replace(
+                        start=(r, c + u.start[1]), end=(r, c + u.end[1]))
+                continue

             # This looks like a function call.
-            if (t.type == token.NAME and nexttoken and
-                nexttoken.type == token.OP and nexttoken.string == '('):
+            if t.type == token.NAME and _isop(i + 1, '('):
                 fn = t.string

                 # *attr() builtins don't accept byte strings to 2nd argument.
-                # Rewrite the token to include the unicode literal prefix so
-                # the string transformer above doesn't add the byte prefix.
-                if fn in ('getattr', 'setattr', 'hasattr', 'safehasattr'):
-                    try:
-                        # (NAME, 'getattr')
-                        # (OP, '(')
-                        # (NAME, 'foo')
-                        # (OP, ',')
-                        # (NAME|STRING, foo)
-                        st = tokens[i + 4]
-                        if (st.type == token.STRING and
-                            st.string[0] in ("'", '"')):
-                            rt = tokenize.TokenInfo(st.type, 'u%s' % st.string,
-                                                    st.start, st.end, st.line)
-                            tokens[i + 4] = rt
-                    except IndexError:
-                        pass
+                if (fn in ('getattr', 'setattr', 'hasattr', 'safehasattr') and
+                        not _isop(i - 1, '.')):
+                    arg1idx = _findargnofcall(1)
+                    if arg1idx is not None:
+                        _ensureunicode(arg1idx)

                 # .encode() and .decode() on str/bytes/unicode don't accept
-                # byte strings on Python 3. Rewrite the token to include the
-                # unicode literal prefix so the string transformer above doesn't
-                # add the byte prefix.
-                if (fn in ('encode', 'decode') and
-                    prevtoken.type == token.OP and prevtoken.string == '.'):
-                    # (OP, '.')
-                    # (NAME, 'encode')
-                    # (OP, '(')
-                    # (STRING, 'utf-8')
-                    # (OP, ')')
-                    try:
-                        st = tokens[i + 2]
-                        if (st.type == token.STRING and
-                            st.string[0] in ("'", '"')):
-                            rt = tokenize.TokenInfo(st.type, 'u%s' % st.string,
-                                                    st.start, st.end, st.line)
-                            tokens[i + 2] = rt
-                    except IndexError:
-                        pass
+                # byte strings on Python 3.
+                elif fn in ('encode', 'decode') and _isop(i - 1, '.'):
+                    for argn in range(2):
+                        argidx = _findargnofcall(argn)
+                        if argidx is not None:
+                            _ensureunicode(argidx)
+
+                # Bare open call (not an attribute on something else), the
+                # second argument (mode) must be a string, not bytes
+                elif fn == 'open' and not _isop(i - 1, '.'):
+                    arg1idx = _findargnofcall(1)
+                    if arg1idx is not None:
+                        _ensureunicode(arg1idx)
+
+                # It changes iteritems to items as iteritems is not
+                # present in Python 3 world.
+                elif fn == 'iteritems':
+                    yield t._replace(string='items')
+                    continue

             # Emit unmodified token.
             yield t
@@ -279,7 +327,7 b' if sys.version_info[0] >= 3:'
     # ``replacetoken`` or any mechanism that changes semantics of module
     # loading is changed. Otherwise cached bytecode may get loaded without
     # the new transformation mechanisms applied.
-    BYTECODEHEADER = b'HG\x00\x0
+    BYTECODEHEADER = b'HG\x00\x06'

     class hgloader(importlib.machinery.SourceFileLoader):
         """Custom module loader that transforms source code.
@@ -338,7 +386,7 b' if sys.version_info[0] >= 3:'
             """Perform token transformation before compilation."""
             buf = io.BytesIO(data)
             tokens = tokenize.tokenize(buf.readline)
-            data = tokenize.untokenize(replacetokens(list(tokens)))
+            data = tokenize.untokenize(replacetokens(list(tokens), self.name))
             # Python's built-in importer strips frames from exceptions raised
             # for this code. Unfortunately, that mechanism isn't extensible
             # and our frame will be blamed for the import failure. There
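source_to_code() round-trips every module through tokenize.tokenize()/tokenize.untokenize(), editing tokens with the namedtuple _replace() method exactly as replacetokens() does. A minimal Python 3 sketch of that round-trip technique (standalone, not the hg loader itself):

    import io
    import token
    import tokenize

    source = b"name = 'value'\n"
    toks = []
    for t in tokenize.tokenize(io.BytesIO(source).readline):
        # prefix plain string literals with b, as the transformer above does
        if t.type == token.STRING and t.string.startswith(("'", '"')):
            t = t._replace(string='b%s' % t.string)
        toks.append(t)
    print(tokenize.untokenize(toks).decode('utf-8'))  # name = b'value'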
@@ -231,7 +231,7 b' class zipit(object):'
         if islink:
             mode = 0o777
             ftype = _UNX_IFLNK
-        i.external_attr = (mode | ftype) << 16
+        i.external_attr = (mode | ftype) << 16
         # add "extended-timestamp" extra block, because zip archives
         # without this will be extracted with unexpected timestamp,
         # if TZ is not configured as GMT
@@ -17,6 +17,7 b''

 #include "bdiff.h"
 #include "bitmanipulation.h"
+#include "util.h"


 static PyObject *blocks(PyObject *self, PyObject *args)
@@ -470,8 +470,12 b' class revbranchcache(object):'
     def write(self, tr=None):
         """Save branch cache if it is dirty."""
         repo = self._repo
-        if self._rbcnamescount < len(self._names):
-            try:
+        wlock = None
+        step = ''
+        try:
+            if self._rbcnamescount < len(self._names):
+                step = ' names'
+                wlock = repo.wlock(wait=False)
                 if self._rbcnamescount != 0:
                     f = repo.vfs.open(_rbcnames, 'ab')
                     if f.tell() == self._rbcsnameslen:
@@ -489,16 +493,15 b' class revbranchcache(object):'
                                   for b in self._names[self._rbcnamescount:]))
                 self._rbcsnameslen = f.tell()
                 f.close()
-            except (IOError, OSError, error.Abort) as inst:
-                repo.ui.debug("couldn't write revision branch cache names: "
-                              "%s\n" % inst)
-                return
-            self._rbcnamescount = len(self._names)
+                self._rbcnamescount = len(self._names)

             start = self._rbcrevslen * _rbcrecsize
             if start != len(self._rbcrevs):
-                revs = min(len(repo.changelog), len(self._rbcrevs) // _rbcrecsize)
-                try:
+                step = ''
+                if wlock is None:
+                    wlock = repo.wlock(wait=False)
+                revs = min(len(repo.changelog),
+                           len(self._rbcrevs) // _rbcrecsize)
                 f = repo.vfs.open(_rbcrevs, 'ab')
                 if f.tell() != start:
                     repo.ui.debug("truncating %s to %s\n" % (_rbcrevs, start))
@@ -510,8 +513,10 b' class revbranchcache(object):'
                 end = revs * _rbcrecsize
                 f.write(self._rbcrevs[start:end])
                 f.close()
-            except (IOError, OSError, error.Abort) as inst:
-                repo.ui.debug("couldn't write revision branch cache: %s\n" %
-                              inst)
-                return
-            self._rbcrevslen = revs
+                self._rbcrevslen = revs
+        except (IOError, OSError, error.Abort, error.LockError) as inst:
+            repo.ui.debug("couldn't write revision branch cache%s: %s\n"
+                          % (step, inst))
+        finally:
+            if wlock is not None:
+                wlock.release()
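The rewrite is the classic lazy, non-blocking lock pattern: take the lock only once there is actually something to write, never wait for it (a cache write is an optimization, so a held lock just means skipping this round), and release it in a single finally on every path. The same shape with a stdlib lock as a stand-in for repo.wlock:

    import threading

    lock = threading.Lock()

    def writecache(payload):
        # acquire(False) returns immediately; if another writer holds the
        # lock, skip the optimization instead of stalling the caller
        if not lock.acquire(False):
            return
        try:
            pass  # ... write payload to disk ...
        finally:
            lock.release()  # released on success and on failure alike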
@@ -159,6 +159,7 b' from . import ('
     error,
     obsolete,
     pushkey,
+    pycompat,
     tags,
     url,
     util,
@@ -572,7 +573,9 b' class bundle20(object):'
             yield param
         # starting compression
         for chunk in self._getcorechunk():
-            yield self._compressor.compress(chunk)
+            data = self._compressor.compress(chunk)
+            if data:
+                yield data
         yield self._compressor.flush()

     def _paramchunk(self):
@@ -996,7 +999,10 b' class bundlepart(object):'
                 outdebug(ui, 'closing payload chunk')
                 # abort current part payload
                 yield _pack(_fpayloadsize, 0)
-                raise exc_info[0], exc_info[1], exc_info[2]
+                if pycompat.ispy3:
+                    raise exc_info[0](exc_info[1]).with_traceback(exc_info[2])
+                else:
+                    exec("""raise exc_info[0], exc_info[1], exc_info[2]""")
             # end of payload
             outdebug(ui, 'closing payload chunk')
             yield _pack(_fpayloadsize, 0)
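The three-argument raise is Python 2-only syntax that Python 3 refuses to even compile, so it has to hide inside exec(), which defers parsing to runtime on a branch Python 3 never takes. A standalone sketch of the portability trick:

    import sys

    def reraise(exc_info):
        # re-raise preserving the original traceback on both Python lines
        if sys.version_info[0] >= 3:
            raise exc_info[0](exc_info[1]).with_traceback(exc_info[2])
        exec("raise exc_info[0], exc_info[1], exc_info[2]")

    try:
        try:
            1 / 0
        except Exception:
            reraise(sys.exc_info())
    except ZeroDivisionError as inst:
        print('re-raised: %s' % inst)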
@@ -1320,7 +1326,9 b' def writebundle(ui, cg, filename, bundle'
         def chunkiter():
             yield header
             for chunk in subchunkiter:
-                yield z.compress(chunk)
+                data = z.compress(chunk)
+                if data:
+                    yield data
             yield z.flush()
         chunkiter = chunkiter()
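Both compression hunks add the same guard because a compressor object buffers input: compress() can legitimately return an empty byte string for a small chunk, and yielding b'' downstream can be mistaken for end-of-stream by chunked-transfer consumers. The stdlib behavior:

    import zlib

    z = zlib.compressobj()
    out = []
    for piece in (b'a' * 10, b'b' * 10):
        data = z.compress(piece)
        print(len(data))      # typically 0: the input is still buffered
        if data:
            out.append(data)
    out.append(z.flush())     # flush() emits whatever is still buffered
    assert zlib.decompress(b''.join(out)) == b'a' * 10 + b'b' * 10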
@@ -56,10 +56,8 b' class bundlerevlog(revlog.revlog):'
         self.repotiprev = n - 1
         chain = None
         self.bundlerevs = set() # used by 'bundle()' revset expression
-        while True:
-            chunkdata = bundle.deltachunk(chain)
-            if not chunkdata:
-                break
+        getchunk = lambda: bundle.deltachunk(chain)
+        for chunkdata in iter(getchunk, {}):
             node = chunkdata['node']
             p1 = chunkdata['p1']
             p2 = chunkdata['p2']
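This hunk, and the later changegroup ones, lean on the two-argument form of iter(): iter(callable, sentinel) calls the callable repeatedly and stops as soon as the return value equals the sentinel, replacing the while True/break boilerplate. A tiny demonstration:

    chunks = iter([{'node': 'n1'}, {'node': 'n2'}, {}])
    getchunk = lambda: next(chunks)

    for chunkdata in iter(getchunk, {}):
        print(chunkdata['node'])
    # prints n1 then n2; the loop ends when getchunk() returns a dict
    # that compares equal to the {} sentinel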
@@ -190,22 +188,36 b' class bundlechangelog(bundlerevlog, chan'
         self.filteredrevs = oldfilter

 class bundlemanifest(bundlerevlog, manifest.manifest):
-    def __init__(self, opener, bundle, linkmapper):
-        manifest.manifest.__init__(self, opener)
+    def __init__(self, opener, bundle, linkmapper, dirlogstarts=None, dir=''):
+        manifest.manifest.__init__(self, opener, dir=dir)
         bundlerevlog.__init__(self, opener, self.indexfile, bundle,
                               linkmapper)
+        if dirlogstarts is None:
+            dirlogstarts = {}
+            if self.bundle.version == "03":
+                dirlogstarts = _getfilestarts(self.bundle)
+        self._dirlogstarts = dirlogstarts
+        self._linkmapper = linkmapper

     def baserevision(self, nodeorrev):
         node = nodeorrev
         if isinstance(node, int):
             node = self.node(node)

-        if node in self.
-            result = self.
+        if node in self.fulltextcache:
+            result = self.fulltextcache[node].tostring()
         else:
             result = manifest.manifest.revision(self, nodeorrev)
         return result

+    def dirlog(self, d):
+        if d in self._dirlogstarts:
+            self.bundle.seek(self._dirlogstarts[d])
+            return bundlemanifest(
+                self.opener, self.bundle, self._linkmapper,
+                self._dirlogstarts, dir=d)
+        return super(bundlemanifest, self).dirlog(d)
+
 class bundlefilelog(bundlerevlog, filelog.filelog):
     def __init__(self, opener, path, bundle, linkmapper):
         filelog.filelog.__init__(self, opener, path)
@@ -236,6 +248,15 b' class bundlephasecache(phases.phasecache'
         self.invalidate()
         self.dirty = True

+def _getfilestarts(bundle):
+    bundlefilespos = {}
+    for chunkdata in iter(bundle.filelogheader, {}):
+        fname = chunkdata['filename']
+        bundlefilespos[fname] = bundle.tell()
+        for chunk in iter(lambda: bundle.deltachunk(None), {}):
+            pass
+    return bundlefilespos
+
 class bundlerepository(localrepo.localrepository):
     def __init__(self, ui, path, bundlename):
         def _writetempbundle(read, suffix, header=''):
@@ -283,7 +304,8 b' class bundlerepository(localrepo.localre'
                                       "multiple changegroups")
                 cgstream = part
                 version = part.params.get('version', '01')
-
+                legalcgvers = changegroup.supportedincomingversions(self)
+                if version not in legalcgvers:
                     msg = _('Unsupported changegroup version: %s')
                     raise error.Abort(msg % version)
             if self.bundle.compressed():
@@ -328,10 +350,6 b' class bundlerepository(localrepo.localre'
         self.bundle.manifestheader()
         linkmapper = self.unfiltered().changelog.rev
         m = bundlemanifest(self.svfs, self.bundle, linkmapper)
-        # XXX: hack to work with changegroup3, but we still don't handle
-        # tree manifests correctly
-        if self.bundle.version == "03":
-            self.bundle.filelogheader()
         self.filestart = self.bundle.tell()
         return m

@@ -351,16 +369,7 b' class bundlerepository(localrepo.localre'
     def file(self, f):
         if not self.bundlefilespos:
             self.bundle.seek(self.filestart)
-            while True:
-                chunkdata = self.bundle.filelogheader()
-                if not chunkdata:
-                    break
-                fname = chunkdata['filename']
-                self.bundlefilespos[fname] = self.bundle.tell()
-                while True:
-                    c = self.bundle.deltachunk(None)
-                    if not c:
-                        break
+            self.bundlefilespos = _getfilestarts(self.bundle)

         if f in self.bundlefilespos:
             self.bundle.seek(self.bundlefilespos[f])
@@ -480,7 +489,10 b' def getremotechanges(ui, repo, other, on'
     if bundlename or not localrepo:
         # create a bundle (uncompressed if other repo is not local)

-        canbundle2 = (ui.configbool('experimental', 'bundle2-exp', True)
+        # developer config: devel.legacy.exchange
+        legexc = ui.configlist('devel', 'legacy.exchange')
+        forcebundle1 = 'bundle2' not in legexc and 'bundle1' in legexc
+        canbundle2 = (not forcebundle1
                       and other.capable('getbundle')
                       and other.capable('bundle2'))
         if canbundle2:
@@ -15,7 +15,6 b' import weakref'
 from .i18n import _
 from .node import (
     hex,
-    nullid,
     nullrev,
     short,
 )
@@ -94,7 +93,9 b' def writechunks(ui, chunks, filename, vf'
         if vfs:
             fh = vfs.open(filename, "wb")
         else:
-            fh = open(filename, "wb")
+            # Increase default buffer size because default is usually
+            # small (4k is common on Linux).
+            fh = open(filename, "wb", 131072)
     else:
         fd, filename = tempfile.mkstemp(prefix="hg-bundle-", suffix=".hg")
         fh = os.fdopen(fd, "wb")
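open()'s third positional argument is the buffer size in bytes; a 128 KiB userspace buffer means far fewer write() system calls than the platform default when a bundle is streamed out as many small chunks. Hypothetical standalone usage (scratch path and sizes are illustrative):

    import os
    import tempfile

    fd, path = tempfile.mkstemp(suffix='.hg')
    os.close(fd)
    with open(path, 'wb', 131072) as fh:        # 128 KiB buffer
        for chunk in (b'\x00' * 4096 for _ in range(32)):
            fh.write(chunk)                     # mostly buffered in userspace
    os.unlink(path)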
@@ -333,7 +334,7 b' class cg1unpacker(object):'
                 for cset in xrange(clstart, clend):
                     mfnode = repo.changelog.read(
                         repo.changelog.node(cset))[0]
-                    mfest = repo.manifest.readdelta(
+                    mfest = repo.manifestlog[mfnode].readdelta()
                     # store file nodes we must see
                     for f, n in mfest.iteritems():
                         needfiles.setdefault(f, set()).add(n)
@@ -404,6 +405,7 b' class cg1unpacker(object):'
                     # coming call to `destroyed` will repair it.
                     # In other case we can safely update cache on
                     # disk.
+                    repo.ui.debug('updating the branch cache\n')
                     branchmap.updatecache(repo.filtered('served'))

             def runhooks():
@@ -413,8 +415,6 b' class cg1unpacker(object):'
                 if clstart >= len(repo):
                     return

-                # forcefully update the on-disk branch cache
-                repo.ui.debug("updating the branch cache\n")
                 repo.hook("changegroup", **hookargs)

                 for n in added:
@@ -475,10 +475,7 b' class cg3unpacker(cg2unpacker):'
     def _unpackmanifests(self, repo, revmap, trp, prog, numchanges):
         super(cg3unpacker, self)._unpackmanifests(repo, revmap, trp, prog,
                                                   numchanges)
-        while True:
-            chunkdata = self.filelogheader()
-            if not chunkdata:
-                break
+        for chunkdata in iter(self.filelogheader, {}):
             # If we get here, there are directory manifests in the changegroup
             d = chunkdata["filename"]
             repo.ui.debug("adding %s revisions\n" % d)
@@ -823,11 +820,24 b' class cg2packer(cg1packer):'

     def deltaparent(self, revlog, rev, p1, p2, prev):
         dp = revlog.deltaparent(rev)
-        # avoid storing full revisions; pick prev in those cases
-        # also pick prev when we can't be sure remote has dp
-        if dp == nullrev or (dp != p1 and dp != p2 and dp != prev):
+        if dp == nullrev and revlog.storedeltachains:
+            # Avoid sending full revisions when delta parent is null. Pick prev
+            # in that case. It's tempting to pick p1 in this case, as p1 will
+            # be smaller in the common case. However, computing a delta against
+            # p1 may require resolving the raw text of p1, which could be
+            # expensive. The revlog caches should have prev cached, meaning
+            # less CPU for changegroup generation. There is likely room to add
+            # a flag and/or config option to control this behavior.
             return prev
-        return dp
+        elif dp == nullrev:
+            # revlog is configured to use full snapshot for a reason,
+            # stick to full snapshot.
+            return nullrev
+        elif dp not in (p1, p2, prev):
+            # Pick prev when we can't be sure remote has the base revision.
+            return prev
+        else:
+            return dp

     def builddeltaheader(self, node, p1n, p2n, basenode, linknode, flags):
         # Do nothing with flags, it is implicitly 0 in cg1 and cg2
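Pulled out as a plain function, the new decision tree reads as follows; this is a paraphrase for illustration, not the revlog API itself:

    nullrev = -1

    def choosedeltabase(dp, p1, p2, prev, storedeltachains):
        if dp == nullrev:
            # the revlog stored a full snapshot for this revision
            if storedeltachains:
                return prev    # delta against prev instead of sending it whole
            return nullrev     # chains are disabled on purpose; keep snapshot
        if dp not in (p1, p2, prev):
            return prev        # remote may lack dp; prev was just sent
        return dp              # safe to reuse the stored delta

    # a revision whose stored base is its p1 keeps that delta as-is
    assert choosedeltabase(7, p1=7, p2=-1, prev=9, storedeltachains=True) == 7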
@@ -946,17 +956,7 b' def changegroupsubset(repo, roots, heads'
     Another wrinkle is doing the reverse, figuring out which changeset in
     the changegroup a particular filenode or manifestnode belongs to.
     """
-    cl = repo.changelog
-    if not roots:
-        roots = [nullid]
-    discbases = []
-    for n in roots:
-        discbases.extend([p for p in cl.parents(n) if p != nullid])
-    # TODO: remove call to nodesbetween.
-    csets, roots, heads = cl.nodesbetween(roots, heads)
-    included = set(csets)
-    discbases = [n for n in discbases if n not in included]
-    outgoing = discovery.outgoing(cl, discbases, heads)
+    outgoing = discovery.outgoing(repo, missingroots=roots, missingheads=heads)
     bundler = getbundler(version, repo)
     return getsubset(repo, outgoing, bundler, source)

@@ -982,26 +982,7 b' def getlocalchangegroup(repo, source, ou'
     bundler = getbundler(version, repo, bundlecaps)
     return getsubset(repo, outgoing, bundler, source)

-def computeoutgoing(repo, heads, common):
-    """Computes which revs are outgoing given a set of common
-    and a set of heads.
-
-    This is a separate function so extensions can have access to
-    the logic.
-
-    Returns a discovery.outgoing object.
-    """
-    cl = repo.changelog
-    if common:
-        hasnode = cl.hasnode
-        common = [n for n in common if hasnode(n)]
-    else:
-        common = [nullid]
-    if not heads:
-        heads = cl.heads()
-    return discovery.outgoing(cl, common, heads)
-
-def getchangegroup(repo, source, heads=None, common=None, bundlecaps=None,
+def getchangegroup(repo, source, outgoing, bundlecaps=None,
                    version='01'):
     """Like changegroupsubset, but returns the set difference between the
     ancestors of heads and the ancestors common.
@@ -1011,7 +992,6 b' def getchangegroup(repo, source, heads=N'
     The nodes in common might not all be known locally due to the way the
     current discovery protocol works.
     """
-    outgoing = computeoutgoing(repo, heads, common)
     return getlocalchangegroup(repo, source, outgoing, bundlecaps=bundlecaps,
                                version=version)

@@ -1022,10 +1002,7 b' def changegroup(repo, basenodes, source)'
 def _addchangegroupfiles(repo, source, revmap, trp, expectedfiles, needfiles):
     revisions = 0
     files = 0
-    while True:
-        chunkdata = source.filelogheader()
-        if not chunkdata:
-            break
+    for chunkdata in iter(source.filelogheader, {}):
         files += 1
         f = chunkdata["filename"]
         repo.ui.debug("adding %s revisions\n" % f)
@@ -124,7 +124,7 b' class appender(object):'

 def _divertopener(opener, target):
     """build an opener that writes in 'target.a' instead of 'target'"""
-    def _divert(name, mode='r'):
+    def _divert(name, mode='r', checkambig=False):
         if name != target:
             return opener(name, mode)
         return opener(name + ".a", mode)
@@ -132,15 +132,16 b' def _divertopener(opener, target):'

 def _delayopener(opener, target, buf):
     """build an opener that stores chunks in 'buf' instead of 'target'"""
-    def _delay(name, mode='r'):
+    def _delay(name, mode='r', checkambig=False):
         if name != target:
             return opener(name, mode)
         return appender(opener, name, mode, buf)
     return _delay

-_changelogrevision = collections.namedtuple('changelogrevision',
-                                            ('manifest', 'user', 'date',
-                                             'files', 'description',
+_changelogrevision = collections.namedtuple(u'changelogrevision',
+                                            (u'manifest', u'user', u'date',
+                                             u'files', u'description',
+                                             u'extra'))

 class changelogrevision(object):
     """Holds results of a parsed changelog revision.
@@ -151,8 +152,8 b' class changelogrevision(object):'
     """

     __slots__ = (
-        '_offsets',
-        '_text',
+        u'_offsets',
+        u'_text',
     )

     def __new__(cls, text):
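The u'' prefixes look redundant, but they defend against the token transformer shown earlier, which turns every unprefixed literal into a b'' bytes literal; namedtuple type and field names (and __slots__ entries) must be text on Python 3, so they are pinned explicitly. For instance:

    import collections

    # with bare literals the loader would produce b'user', which namedtuple
    # rejects as a field name on Python 3
    rev = collections.namedtuple(u'rev', (u'manifest', u'user', u'date'))
    print(rev(u'm1', u'alice', u'today').user)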
@@ -256,11 +257,18 b' class changelogrevision(object):'

 class changelog(revlog.revlog):
     def __init__(self, opener):
-        revlog.revlog.__init__(self, opener, "00changelog.i")
+        revlog.revlog.__init__(self, opener, "00changelog.i",
+                               checkambig=True)
         if self._initempty:
             # changelogs don't benefit from generaldelta
             self.version &= ~revlog.REVLOGGENERALDELTA
             self._generaldelta = False
+
+        # Delta chains for changelogs tend to be very small because entries
+        # tend to be small and don't delta well with each. So disable delta
+        # chains.
+        self.storedeltachains = False
+
         self._realopener = opener
         self._delayed = False
         self._delaybuf = None
@@ -381,9 +389,9 b' class changelog(revlog.revlog):'
             tmpname = self.indexfile + ".a"
             nfile = self.opener.open(tmpname)
             nfile.close()
-            self.opener.rename(tmpname, self.indexfile)
+            self.opener.rename(tmpname, self.indexfile, checkambig=True)
         elif self._delaybuf:
-            fp = self.opener(self.indexfile, 'a')
+            fp = self.opener(self.indexfile, 'a', checkambig=True)
             fp.write("".join(self._delaybuf))
             fp.close()
             self._delaybuf = None
@@ -499,6 +499,12 b' class _unclosablefile(object):'
     def __getattr__(self, attr):
         return getattr(self._fp, attr)

+    def __enter__(self):
+        return self
+
+    def __exit__(self, exc_type, exc_value, exc_tb):
+        pass
+
 def makefileobj(repo, pat, node=None, desc=None, total=None,
                 seqno=None, revwidth=None, mode='wb', modemap=None,
                 pathname=None):
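Adding __enter__/__exit__ is what lets the proxy participate in a with statement, and the empty __exit__ is deliberate: the wrapped file must survive the block. A standalone sketch of the same idea:

    import sys

    class unclosable(object):
        """File proxy usable in 'with' that never closes the real file."""
        def __init__(self, fp):
            self._fp = fp
        def close(self):
            pass                      # swallow close()
        def __getattr__(self, attr):
            return getattr(self._fp, attr)
        def __enter__(self):
            return self
        def __exit__(self, exc_type, exc_value, exc_tb):
            pass                      # and do not close on block exit

    with unclosable(sys.stdout) as fh:
        fh.write('stdout is still usable after the with block\n')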
@@ -549,7 +555,7 b' def openrevlog(repo, cmd, file_, opts):'
         if 'treemanifest' not in repo.requirements:
             raise error.Abort(_("--dir can only be used on repos with "
                                 "treemanifest enabled"))
-        dirlog = repo.dirlog(dir)
+        dirlog = repo.manifest.dirlog(dir)
         if len(dirlog):
             r = dirlog
     elif mf:
@@ -640,8 +646,26 b' def copy(ui, repo, pats, opts, rename=Fa'

         if not after and exists or after and state in 'mn':
             if not opts['force']:
-                ui.warn(_('%s: not overwriting - file exists\n') %
-                        reltarget)
+                if state in 'mn':
+                    msg = _('%s: not overwriting - file already committed\n')
+                    if after:
+                        flags = '--after --force'
+                    else:
+                        flags = '--force'
+                    if rename:
+                        hint = _('(hg rename %s to replace the file by '
+                                 'recording a rename)\n') % flags
+                    else:
+                        hint = _('(hg copy %s to replace the file by '
+                                 'recording a copy)\n') % flags
+                else:
+                    msg = _('%s: not overwriting - file exists\n')
+                    if rename:
+                        hint = _('(hg rename --after to record the rename)\n')
+                    else:
+                        hint = _('(hg copy --after to record the copy)\n')
+                ui.warn(msg % reltarget)
+                ui.warn(hint)
             return

         if after:
@@ -1611,25 +1635,26 b' def show_changeset(ui, repo, opts, buffe'

     return changeset_templater(ui, repo, matchfn, opts, tmpl, mapfile, buffered)

-def showmarker(ui, marker, index=None):
+def showmarker(fm, marker, index=None):
     """utility function to display obsolescence marker in a readable way

     To be used by debug function."""
     if index is not None:
-        ui.write("%i " % index)
-    ui.write(hex(marker.precnode()))
-    for repl in marker.succnodes():
-        ui.write(' ')
-        ui.write(hex(repl))
-    ui.write(' %X ' % marker.flags())
+        fm.write('index', '%i ', index)
+    fm.write('precnode', '%s ', hex(marker.precnode()))
+    succs = marker.succnodes()
+    fm.condwrite(succs, 'succnodes', '%s ',
+                 fm.formatlist(map(hex, succs), name='node'))
+    fm.write('flag', '%X ', marker.flags())
     parents = marker.parentnodes()
     if parents is not None:
-        ui.write('{%s} ' % ', '.join(hex(p) for p in parents))
-    ui.write('(%s) ' % util.datestr(marker.date()))
-    ui.write('{%s}' % (', '.join('%r: %r' % t for t in
-                                 sorted(marker.metadata().items())
-                                 if t[0] != 'date')))
-    ui.write('\n')
+        fm.write('parentnodes', '{%s} ',
+                 fm.formatlist(map(hex, parents), name='node', sep=', '))
+    fm.write('date', '(%s) ', fm.formatdate(marker.date()))
+    meta = marker.metadata().copy()
+    meta.pop('date', None)
+    fm.write('metadata', '{%s}', fm.formatdict(meta, fmt='%r: %r', sep=', '))
+    fm.plain('\n')

 def finddate(ui, repo, date):
     """Find the tipmost changeset that matches the given date spec"""
@@ -1940,7 +1965,7 b' def _makefollowlogfilematcher(repo, file'
     # --follow, we want the names of the ancestors of FILE in the
     # revision, stored in "fcache". "fcache" is populated by
     # reproducing the graph traversal already done by --follow revset
-    # and relating linkrevs to file names (which is not "correct" but
+    # and relating revs to file names (which is not "correct" but
     # good enough).
     fcache = {}
     fcacheready = [False]
@@ -1948,9 +1973,10 b' def _makefollowlogfilematcher(repo, file'

     def populate():
         for fn in files:
-            for i in ((pctx[fn],), pctx[fn].ancestors(followfirst=followfirst)):
-                for c in i:
-                    fcache.setdefault(c.linkrev(), set()).add(c.path())
+            fctx = pctx[fn]
+            fcache.setdefault(fctx.introrev(), set()).add(fctx.path())
+            for c in fctx.ancestors(followfirst=followfirst):
+                fcache.setdefault(c.rev(), set()).add(c.path())

     def filematcher(rev):
         if not fcacheready[0]:
@@ -2151,15 +2177,8 b' def getgraphlogrevs(repo, pats, opts):'
     if not (revs.isdescending() or revs.istopo()):
         revs.sort(reverse=True)
     if expr:
-        # Revset matchers often operate faster on revisions in changelog
-        # order, because most filters deal with the changelog.
-        revs.reverse()
-        matcher = revset.match(repo.ui, expr)
-        # Revset matches can reorder revisions. "A or B" typically returns
-        # returns the revision matching A then the revision matching B. Sort
-        # again to fix that.
+        matcher = revset.match(repo.ui, expr, order=revset.followorder)
         revs = matcher(repo, revs)
-        revs.sort(reverse=True)
     if limit is not None:
         limitedrevs = []
         for idx, rev in enumerate(revs):
@@ -2184,23 +2203,8 b' def getlogrevs(repo, pats, opts):'
         return revset.baseset([]), None, None
     expr, filematcher = _makelogrevset(repo, pats, opts, revs)
     if expr:
-        # Revset matchers often operate faster on revisions in changelog
-        # order, because most filters deal with the changelog.
-        if not opts.get('rev'):
-            revs.reverse()
-        matcher = revset.match(repo.ui, expr)
-        # Revset matches can reorder revisions. "A or B" typically returns
-        # returns the revision matching A then the revision matching B. Sort
-        # again to fix that.
-        fixopts = ['branch', 'only_branch', 'keyword', 'user']
-        oldrevs = revs
+        matcher = revset.match(repo.ui, expr, order=revset.followorder)
         revs = matcher(repo, revs)
-        if not opts.get('rev'):
-            revs.sort(reverse=True)
-        elif len(pats) > 1 or any(len(opts.get(op, [])) > 1 for op in fixopts):
-            # XXX "A or B" is known to change the order; fix it by filtering
-            # matched set again (issue5100)
-            revs = oldrevs & revs
     if limit is not None:
         limitedrevs = []
         for idx, r in enumerate(revs):
@@ -2415,14 +2419,10 b' def files(ui, ctx, m, fm, fmt, subrepos)'
     ret = 0

     for subpath in sorted(ctx.substate):
-        def matchessubrepo(subpath):
-            return (m.exact(subpath)
-                    or any(f.startswith(subpath + '/') for f in m.files()))
-
-        if subrepos or matchessubrepo(subpath):
+        submatch = matchmod.subdirmatcher(subpath, m)
+        if (subrepos or m.exact(subpath) or any(submatch.files())):
             sub = ctx.sub(subpath)
             try:
-                submatch = matchmod.subdirmatcher(subpath, m)
                 recurse = m.exact(subpath) or subrepos
                 if sub.printfiles(ui, submatch, fm, fmt, recurse) == 0:
                     ret = 0
@@ -2450,21 +2450,12 b' def remove(ui, repo, m, prefix, after, f'
     total = len(subs)
     count = 0
     for subpath in subs:
-        def matchessubrepo(matcher, subpath):
-            if matcher.exact(subpath):
-                return True
-            for f in matcher.files():
-                if f.startswith(subpath):
-                    return True
-            return False
-
         count += 1
-        if subrepos or matchessubrepo(m, subpath):
+        submatch = matchmod.subdirmatcher(subpath, m)
+        if subrepos or m.exact(subpath) or any(submatch.files()):
             ui.progress(_('searching'), count, total=total, unit=_('subrepos'))
-
             sub = wctx.sub(subpath)
             try:
-                submatch = matchmod.subdirmatcher(subpath, m)
                 if sub.removefiles(submatch, prefix, after, force, subrepos,
                                    warnings):
                     ret = 1
@@ -2530,8 +2521,8 b' def remove(ui, repo, m, prefix, after, f'
     for f in added:
         count += 1
         ui.progress(_('skipping'), count, total=total, unit=_('files'))
-        warnings.append(_('not removing %s: file has been marked for add'
-                          ' (use forget to undo)\n') % m.rel(f))
+        warnings.append(_("not removing %s: file has been marked for add"
+                          " (use 'hg forget' to undo add)\n") % m.rel(f))
         ret = 1
     ui.progress(_('skipping'), None)

@@ -2581,14 +2572,7 b' def cat(ui, repo, ctx, matcher, prefix, '
         write(file)
         return 0

-    # Don't warn about "missing" files that are really in subrepos
-    def badfn(path, msg):
-        for subpath in ctx.substate:
-            if path.startswith(subpath + '/'):
-                return
-        matcher.bad(path, msg)
-
-    for abs in ctx.walk(matchmod.badmatch(matcher, badfn)):
+    for abs in ctx.walk(matcher):
         write(abs)
     err = 0

@@ -2623,6 +2607,18 b' def commit(ui, repo, commitfunc, pats, o'

     return commitfunc(ui, repo, message, matcher, opts)

+def samefile(f, ctx1, ctx2):
+    if f in ctx1.manifest():
+        a = ctx1.filectx(f)
+        if f in ctx2.manifest():
+            b = ctx2.filectx(f)
+            return (not a.cmp(b)
+                    and a.flags() == b.flags())
+        else:
+            return False
+    else:
+        return f not in ctx2.manifest()
+
 def amend(ui, repo, commitfunc, old, extra, pats, opts):
     # avoid cycle context -> subrepo -> cmdutil
     from . import context
@@ -2706,19 +2702,7 b' def amend(ui, repo, commitfunc, old, ext'
             # we can discard X from our list of files. Likewise if X
             # was deleted, it's no longer relevant
             files.update(ctx.files())
-
-            def samefile(f):
-                if f in ctx.manifest():
-                    a = ctx.filectx(f)
-                    if f in base.manifest():
-                        b = base.filectx(f)
-                        return (not a.cmp(b)
-                                and a.flags() == b.flags())
-                    else:
-                        return False
-                else:
-                    return f not in base.manifest()
-            files = [f for f in files if not samefile(f)]
+            files = [f for f in files if not samefile(f, ctx, base)]

             def filectxfn(repo, ctx_, path):
                 try:
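Hoisting the closure to module level makes the predicate reusable and testable on its own: two contexts agree on f when both track identical content and flags, or when neither tracks it at all. A duck-typed check of that contract with minimal stand-ins (hypothetical fakes, not real Mercurial contexts; assumes the samefile definition from the hunk above is in scope):

    class fakefilectx(object):
        def __init__(self, data, flags=''):
            self._data, self._flags = data, flags
        def cmp(self, other):
            return self._data != other._data   # True means "differs"
        def flags(self):
            return self._flags

    class fakectx(object):
        def __init__(self, files):
            self._files = files                # {path: fakefilectx}
        def manifest(self):
            return self._files
        def filectx(self, f):
            return self._files[f]

    ctx1 = fakectx({'a': fakefilectx(b'x')})
    ctx2 = fakectx({'a': fakefilectx(b'x')})
    print(samefile('a', ctx1, ctx2))   # True: same content and flags
    print(samefile('b', ctx1, ctx2))   # True: tracked by neither side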
|
2708 | try: | |
@@ -3542,10 +3526,11 b' class dirstateguard(object):' | |||||
3542 |
|
3526 | |||
3543 | def __init__(self, repo, name): |
|
3527 | def __init__(self, repo, name): | |
3544 | self._repo = repo |
|
3528 | self._repo = repo | |
|
3529 | self._active = False | |||
|
3530 | self._closed = False | |||
3545 | self._suffix = '.backup.%s.%d' % (name, id(self)) |
|
3531 | self._suffix = '.backup.%s.%d' % (name, id(self)) | |
3546 | repo.dirstate.savebackup(repo.currenttransaction(), self._suffix) |
|
3532 | repo.dirstate.savebackup(repo.currenttransaction(), self._suffix) | |
3547 | self._active = True |
|
3533 | self._active = True | |
3548 | self._closed = False |
|
|||
3549 |
|
3534 | |||
3550 | def __del__(self): |
|
3535 | def __del__(self): | |
3551 | if self._active: # still active |
|
3536 | if self._active: # still active |
This diff has been collapsed as it changes many lines (565 lines changed).
@@ -441,12 +441,12 b' def annotate(ui, repo, *pats, **opts):'
     if linenumber and (not opts.get('changeset')) and (not opts.get('number')):
         raise error.Abort(_('at least one of -n/-c is required for -l'))

-    if fm:
+    if fm.isplain():
+        def makefunc(get, fmt):
+            return lambda x: fmt(get(x))
+    else:
         def makefunc(get, fmt):
             return get
-    else:
-        def makefunc(get, fmt):
-            return lambda x: fmt(get(x))
     funcmap = [(makefunc(get, fmt), sep) for op, sep, get, fmt in opmap
                if opts.get(op)]
     funcmap[0] = (funcmap[0][0], '') # no separator in front of first column
@@ -476,12 +476,12 b' def annotate(ui, repo, *pats, **opts):'

     for f, sep in funcmap:
         l = [f(n) for n, dummy in lines]
-        if fm:
-            formats.append(['%s' for x in l])
-        else:
+        if fm.isplain():
             sizes = [encoding.colwidth(x) for x in l]
             ml = max(sizes)
             formats.append([sep + ' ' * (ml - w) + '%s' for w in sizes])
+        else:
+            formats.append(['%s' for x in l])
         pieces.append(l)

     for f, p, l in zip(zip(*formats), zip(*pieces), lines):
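The plain branch right-aligns each column by padding every cell to the widest one, and it measures cells with encoding.colwidth() rather than len() so that wide (e.g. East Asian) characters count by display width. The padding arithmetic in stdlib form:

    cells = ['7', '42', '1729']
    widths = [len(c) for c in cells]   # stand-in for encoding.colwidth()
    ml = max(widths)
    for c, w in zip(cells, widths):
        print(' ' * (ml - w) + c)      # right-aligned column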
@@ -835,57 +835,6 b' def bisect(ui, repo, rev=None, extra=Non'

     Returns 0 on success.
     """
-    def extendbisectrange(nodes, good):
-        # bisect is incomplete when it ends on a merge node and
-        # one of the parent was not checked.
-        parents = repo[nodes[0]].parents()
-        if len(parents) > 1:
-            if good:
-                side = state['bad']
-            else:
-                side = state['good']
-            num = len(set(i.node() for i in parents) & set(side))
-            if num == 1:
-                return parents[0].ancestor(parents[1])
-        return None
-
-    def print_result(nodes, good):
-        displayer = cmdutil.show_changeset(ui, repo, {})
-        if len(nodes) == 1:
-            # narrowed it down to a single revision
-            if good:
-                ui.write(_("The first good revision is:\n"))
-            else:
-                ui.write(_("The first bad revision is:\n"))
-            displayer.show(repo[nodes[0]])
-            extendnode = extendbisectrange(nodes, good)
-            if extendnode is not None:
-                ui.write(_('Not all ancestors of this changeset have been'
-                           ' checked.\nUse bisect --extend to continue the '
-                           'bisection from\nthe common ancestor, %s.\n')
-                         % extendnode)
-        else:
-            # multiple possible revisions
-            if good:
-                ui.write(_("Due to skipped revisions, the first "
-                           "good revision could be any of:\n"))
-            else:
-                ui.write(_("Due to skipped revisions, the first "
-                           "bad revision could be any of:\n"))
-            for n in nodes:
-                displayer.show(repo[n])
-        displayer.close()
-
-    def check_state(state, interactive=True):
-        if not state['good'] or not state['bad']:
-            if (good or bad or skip or reset) and interactive:
-                return
-            if not state['good']:
-                raise error.Abort(_('cannot bisect (no known good revisions)'))
-            else:
-                raise error.Abort(_('cannot bisect (no known bad revisions)'))
-        return True
-
     # backward compatibility
     if rev in "good bad reset init".split():
         ui.warn(_("(use of 'hg bisect <cmd>' is deprecated)\n"))
@@ -902,13 +851,36 b' def bisect(ui, repo, rev=None, extra=Non' | |||||
902 | cmdutil.checkunfinished(repo) |
|
851 | cmdutil.checkunfinished(repo) | |
903 |
|
852 | |||
904 | if reset: |
|
853 | if reset: | |
905 |
|
|
854 | hbisect.resetstate(repo) | |
906 | if os.path.exists(p): |
|
|||
907 | os.unlink(p) |
|
|||
908 | return |
|
855 | return | |
909 |
|
856 | |||
910 | state = hbisect.load_state(repo) |
|
857 | state = hbisect.load_state(repo) | |
911 |
|
858 | |||
|
859 | # update state | |||
|
860 | if good or bad or skip: | |||
|
861 | if rev: | |||
|
862 | nodes = [repo.lookup(i) for i in scmutil.revrange(repo, [rev])] | |||
|
863 | else: | |||
|
864 | nodes = [repo.lookup('.')] | |||
|
865 | if good: | |||
|
866 | state['good'] += nodes | |||
|
867 | elif bad: | |||
|
868 | state['bad'] += nodes | |||
|
869 | elif skip: | |||
|
870 | state['skip'] += nodes | |||
|
871 | hbisect.save_state(repo, state) | |||
|
872 | if not (state['good'] and state['bad']): | |||
|
873 | return | |||
|
874 | ||||
|
875 | def mayupdate(repo, node, show_stats=True): | |||
|
876 | """common used update sequence""" | |||
|
877 | if noupdate: | |||
|
878 | return | |||
|
879 | cmdutil.bailifchanged(repo) | |||
|
880 | return hg.clean(repo, node, show_stats=show_stats) | |||
|
881 | ||||
|
882 | displayer = cmdutil.show_changeset(ui, repo, {}) | |||
|
883 | ||||
912 | if command: |
|
884 | if command: | |
913 | changesets = 1 |
|
885 | changesets = 1 | |
914 | if noupdate: |
|
886 | if noupdate: | |
@@ -921,6 +893,8 b' def bisect(ui, repo, rev=None, extra=Non' | |||||
921 | node, p2 = repo.dirstate.parents() |
|
893 | node, p2 = repo.dirstate.parents() | |
922 | if p2 != nullid: |
|
894 | if p2 != nullid: | |
923 | raise error.Abort(_('current bisect revision is a merge')) |
|
895 | raise error.Abort(_('current bisect revision is a merge')) | |
|
896 | if rev: | |||
|
897 | node = repo[scmutil.revsingle(repo, rev, node)].node() | |||
924 | try: |
|
898 | try: | |
925 | while changesets: |
|
899 | while changesets: | |
926 | # update state |
|
900 | # update state | |
@@ -938,61 +912,38 b' def bisect(ui, repo, rev=None, extra=Non' | |||||
938 | raise error.Abort(_("%s killed") % command) |
|
912 | raise error.Abort(_("%s killed") % command) | |
939 | else: |
|
913 | else: | |
940 | transition = "bad" |
|
914 | transition = "bad" | |
941 | ctx = scmutil.revsingle(repo, rev, node) |
|
915 | state[transition].append(node) | |
942 | rev = None # clear for future iterations |
|
916 | ctx = repo[node] | |
943 | state[transition].append(ctx.node()) |
|
|||
944 | ui.status(_('changeset %d:%s: %s\n') % (ctx, ctx, transition)) |
|
917 | ui.status(_('changeset %d:%s: %s\n') % (ctx, ctx, transition)) | |
945 |
check |
|
918 | hbisect.checkstate(state) | |
946 | # bisect |
|
919 | # bisect | |
947 | nodes, changesets, bgood = hbisect.bisect(repo.changelog, state) |
|
920 | nodes, changesets, bgood = hbisect.bisect(repo.changelog, state) | |
948 | # update to next check |
|
921 | # update to next check | |
949 | node = nodes[0] |
|
922 | node = nodes[0] | |
950 | if not noupdate: |
|
923 | mayupdate(repo, node, show_stats=False) | |
951 | cmdutil.bailifchanged(repo) |
|
|||
952 | hg.clean(repo, node, show_stats=False) |
|
|||
953 | finally: |
|
924 | finally: | |
954 | state['current'] = [node] |
|
925 | state['current'] = [node] | |
955 | hbisect.save_state(repo, state) |
|
926 | hbisect.save_state(repo, state) | |
956 |
print |
|
927 | hbisect.printresult(ui, repo, state, displayer, nodes, bgood) | |
957 | return |
|
928 | return | |
958 |
|
929 | |||
959 | # update state |
|
930 | hbisect.checkstate(state) | |
960 |
|
||||
961 | if rev: |
|
|||
962 | nodes = [repo.lookup(i) for i in scmutil.revrange(repo, [rev])] |
|
|||
963 | else: |
|
|||
964 | nodes = [repo.lookup('.')] |
|
|||
965 |
|
||||
966 | if good or bad or skip: |
|
|||
967 | if good: |
|
|||
968 | state['good'] += nodes |
|
|||
969 | elif bad: |
|
|||
970 | state['bad'] += nodes |
|
|||
971 | elif skip: |
|
|||
972 | state['skip'] += nodes |
|
|||
973 | hbisect.save_state(repo, state) |
|
|||
974 |
|
||||
975 | if not check_state(state): |
|
|||
976 | return |
|
|||
977 |
|
931 | |||
978 | # actually bisect |
|
932 | # actually bisect | |
979 | nodes, changesets, good = hbisect.bisect(repo.changelog, state) |
|
933 | nodes, changesets, good = hbisect.bisect(repo.changelog, state) | |
980 | if extend: |
|
934 | if extend: | |
981 | if not changesets: |
|
935 | if not changesets: | |
982 |
extendnode = |
|
936 | extendnode = hbisect.extendrange(repo, state, nodes, good) | |
983 | if extendnode is not None: |
|
937 | if extendnode is not None: | |
984 | ui.write(_("Extending search to changeset %d:%s\n") |
|
938 | ui.write(_("Extending search to changeset %d:%s\n") | |
985 | % (extendnode.rev(), extendnode)) |
|
939 | % (extendnode.rev(), extendnode)) | |
986 | state['current'] = [extendnode.node()] |
|
940 | state['current'] = [extendnode.node()] | |
987 | hbisect.save_state(repo, state) |
|
941 | hbisect.save_state(repo, state) | |
988 | if noupdate: |
|
942 | return mayupdate(repo, extendnode.node()) | |
989 | return |
|
|||
990 | cmdutil.bailifchanged(repo) |
|
|||
991 | return hg.clean(repo, extendnode.node()) |
|
|||
992 | raise error.Abort(_("nothing to extend")) |
|
943 | raise error.Abort(_("nothing to extend")) | |
993 |
|
944 | |||
994 | if changesets == 0: |
|
945 | if changesets == 0: | |
995 |
print |
|
946 | hbisect.printresult(ui, repo, state, displayer, nodes, good) | |
996 | else: |
|
947 | else: | |
997 | assert len(nodes) == 1 # only a single node can be tested next |
|
948 | assert len(nodes) == 1 # only a single node can be tested next | |
998 | node = nodes[0] |
|
949 | node = nodes[0] | |
@@ -1006,9 +957,7 b' def bisect(ui, repo, rev=None, extra=Non' | |||||
1006 | % (rev, short(node), changesets, tests)) |
|
957 | % (rev, short(node), changesets, tests)) | |
1007 | state['current'] = [node] |
|
958 | state['current'] = [node] | |
1008 | hbisect.save_state(repo, state) |
|
959 | hbisect.save_state(repo, state) | |
1009 | if not noupdate: |
|
960 | return mayupdate(repo, node) | |
1010 | cmdutil.bailifchanged(repo) |
|
|||
1011 | return hg.clean(repo, node) |
|
|||
1012 |
|
961 | |||
1013 | @command('bookmarks|bookmark', |
|
962 | @command('bookmarks|bookmark', | |
1014 | [('f', 'force', False, _('force')), |
|
963 | [('f', 'force', False, _('force')), | |
@@ -1185,7 +1134,7 b' def bookmark(ui, repo, *names, **opts):' | |||||
1185 | fm = ui.formatter('bookmarks', opts) |
|
1134 | fm = ui.formatter('bookmarks', opts) | |
1186 | hexfn = fm.hexfunc |
|
1135 | hexfn = fm.hexfunc | |
1187 | marks = repo._bookmarks |
|
1136 | marks = repo._bookmarks | |
1188 |
if len(marks) == 0 and |
|
1137 | if len(marks) == 0 and fm.isplain(): | |
1189 | ui.status(_("no bookmarks set\n")) |
|
1138 | ui.status(_("no bookmarks set\n")) | |
1190 | for bmark, n in sorted(marks.iteritems()): |
|
1139 | for bmark, n in sorted(marks.iteritems()): | |
1191 | active = repo._activebookmark |
|
1140 | active = repo._activebookmark | |
@@ -1383,8 +1332,8 b' def bundle(ui, repo, fname, dest=None, *' | |||||
1383 | repo, bundletype, strict=False) |
|
1332 | repo, bundletype, strict=False) | |
1384 | except error.UnsupportedBundleSpecification as e: |
|
1333 | except error.UnsupportedBundleSpecification as e: | |
1385 | raise error.Abort(str(e), |
|
1334 | raise error.Abort(str(e), | |
1386 |
hint=_(' |
|
1335 | hint=_("see 'hg help bundle' for supported " | |
1387 |
|
|
1336 | "values for --type")) | |
1388 |
|
1337 | |||
1389 | # Packed bundles are a pseudo bundle format for now. |
|
1338 | # Packed bundles are a pseudo bundle format for now. | |
1390 | if cgversion == 's1': |
|
1339 | if cgversion == 's1': | |
@@ -1411,10 +1360,11 b' def bundle(ui, repo, fname, dest=None, *' | |||||
1411 | raise error.Abort(_("--base is incompatible with specifying " |
|
1360 | raise error.Abort(_("--base is incompatible with specifying " | |
1412 | "a destination")) |
|
1361 | "a destination")) | |
1413 | common = [repo.lookup(rev) for rev in base] |
|
1362 | common = [repo.lookup(rev) for rev in base] | |
1414 |
heads = revs and map(repo.lookup, revs) or |
|
1363 | heads = revs and map(repo.lookup, revs) or None | |
1415 | cg = changegroup.getchangegroup(repo, 'bundle', heads=heads, |
|
1364 | outgoing = discovery.outgoing(repo, common, heads) | |
1416 | common=common, bundlecaps=bundlecaps, |
|
1365 | cg = changegroup.getchangegroup(repo, 'bundle', outgoing, | |
1417 |
|
|
1366 | bundlecaps=bundlecaps, | |
|
1367 | version=cgversion) | |||
1418 | outgoing = None |
|
1368 | outgoing = None | |
1419 | else: |
|
1369 | else: | |
1420 | dest = ui.expandpath(dest or 'default-push', dest or 'default') |
|
1370 | dest = ui.expandpath(dest or 'default-push', dest or 'default') | |
@@ -1688,9 +1638,11 b' def commit(ui, repo, *pats, **opts):' | |||||
1688 | def _docommit(ui, repo, *pats, **opts): |
|
1638 | def _docommit(ui, repo, *pats, **opts): | |
1689 | if opts.get('interactive'): |
|
1639 | if opts.get('interactive'): | |
1690 | opts.pop('interactive') |
|
1640 | opts.pop('interactive') | |
1691 | cmdutil.dorecord(ui, repo, commit, None, False, |
|
1641 | ret = cmdutil.dorecord(ui, repo, commit, None, False, | |
1692 | cmdutil.recordfilter, *pats, **opts) |
|
1642 | cmdutil.recordfilter, *pats, **opts) | |
1693 | return |
|
1643 | # ret can be 0 (no changes to record) or the value returned by | |
|
1644 | # commit(), 1 if nothing changed or None on success. | |||
|
1645 | return 1 if ret == 0 else ret | |||
1694 |
|
1646 | |||
1695 | if opts.get('subrepos'): |
|
1647 | if opts.get('subrepos'): | |
1696 | if opts.get('amend'): |
|
1648 | if opts.get('amend'): | |
@@ -1787,7 +1739,7 b' def _docommit(ui, repo, *pats, **opts):' | |||||
1787 | [('u', 'untrusted', None, _('show untrusted configuration options')), |
|
1739 | [('u', 'untrusted', None, _('show untrusted configuration options')), | |
1788 | ('e', 'edit', None, _('edit user config')), |
|
1740 | ('e', 'edit', None, _('edit user config')), | |
1789 | ('l', 'local', None, _('edit repository config')), |
|
1741 | ('l', 'local', None, _('edit repository config')), | |
1790 | ('g', 'global', None, _('edit global config'))], |
|
1742 | ('g', 'global', None, _('edit global config'))] + formatteropts, | |
1791 | _('[-u] [NAME]...'), |
|
1743 | _('[-u] [NAME]...'), | |
1792 | optionalrepo=True) |
|
1744 | optionalrepo=True) | |
1793 | def config(ui, repo, *values, **opts): |
|
1745 | def config(ui, repo, *values, **opts): | |
@@ -1848,6 +1800,7 b' def config(ui, repo, *values, **opts):' | |||||
1848 | onerr=error.Abort, errprefix=_("edit failed")) |
|
1800 | onerr=error.Abort, errprefix=_("edit failed")) | |
1849 | return |
|
1801 | return | |
1850 |
|
1802 | |||
|
1803 | fm = ui.formatter('config', opts) | |||
1851 | for f in scmutil.rcpath(): |
|
1804 | for f in scmutil.rcpath(): | |
1852 | ui.debug('read config from: %s\n' % f) |
|
1805 | ui.debug('read config from: %s\n' % f) | |
1853 | untrusted = bool(opts.get('untrusted')) |
|
1806 | untrusted = bool(opts.get('untrusted')) | |
@@ -1858,25 +1811,32 b' def config(ui, repo, *values, **opts):' | |||||
1858 | raise error.Abort(_('only one config item permitted')) |
|
1811 | raise error.Abort(_('only one config item permitted')) | |
1859 | matched = False |
|
1812 | matched = False | |
1860 | for section, name, value in ui.walkconfig(untrusted=untrusted): |
|
1813 | for section, name, value in ui.walkconfig(untrusted=untrusted): | |
1861 |
value = str(value) |
|
1814 | value = str(value) | |
1862 | sectname = section + '.' + name |
|
1815 | if fm.isplain(): | |
|
1816 | value = value.replace('\n', '\\n') | |||
|
1817 | entryname = section + '.' + name | |||
1863 | if values: |
|
1818 | if values: | |
1864 | for v in values: |
|
1819 | for v in values: | |
1865 | if v == section: |
|
1820 | if v == section: | |
1866 |
|
|
1821 | fm.startitem() | |
1867 | ui.configsource(section, name, untrusted)) |
|
1822 | fm.condwrite(ui.debugflag, 'source', '%s: ', | |
1868 |
ui. |
|
1823 | ui.configsource(section, name, untrusted)) | |
|
1824 | fm.write('name value', '%s=%s\n', entryname, value) | |||
1869 | matched = True |
|
1825 | matched = True | |
1870 |
elif v == |
|
1826 | elif v == entryname: | |
1871 |
|
|
1827 | fm.startitem() | |
1872 | ui.configsource(section, name, untrusted)) |
|
1828 | fm.condwrite(ui.debugflag, 'source', '%s: ', | |
1873 | ui.write(value, '\n') |
|
1829 | ui.configsource(section, name, untrusted)) | |
|
1830 | fm.write('value', '%s\n', value) | |||
|
1831 | fm.data(name=entryname) | |||
1874 | matched = True |
|
1832 | matched = True | |
1875 | else: |
|
1833 | else: | |
1876 | ui.debug('%s: ' % |
|
1834 | fm.startitem() | |
1877 | ui.configsource(section, name, untrusted)) |
|
1835 | fm.condwrite(ui.debugflag, 'source', '%s: ', | |
1878 | ui.write('%s=%s\n' % (sectname, value)) |
|
1836 | ui.configsource(section, name, untrusted)) | |
|
1837 | fm.write('name value', '%s=%s\n', entryname, value) | |||
1879 | matched = True |
|
1838 | matched = True | |
|
1839 | fm.end() | |||
1880 | if matched: |
|
1840 | if matched: | |
1881 | return 0 |
|
1841 | return 0 | |
1882 | return 1 |
|
1842 | return 1 | |
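The config hunk above follows the formatter item lifecycle that recurs throughout this series: fm.startitem() opens a record, fm.write() emits fields visible in both plain and structured output, fm.data() attaches fields that only structured backends see, and fm.end() flushes. A self-contained sketch of that lifecycle with a toy structured formatter (not Mercurial's actual class):

    class listformatter(object):
        """Toy structured formatter: collects one dict per startitem()."""
        def __init__(self):
            self.items = []
        def startitem(self):
            self.items.append({})
        def write(self, fields, fmt, *values):
            # a plain formatter would print fmt % values instead
            for field, value in zip(fields.split(), values):
                self.items[-1][field] = value
        def data(self, **kwargs):
            # extra fields, invisible in plain mode
            self.items[-1].update(kwargs)
        def end(self):
            return self.items

    fm = listformatter()
    fm.startitem()
    fm.write('name value', '%s=%s\n', 'ui.username', 'alice')
    fm.data(source='/home/user/.hgrc')   # hypothetical source path
    print(fm.end())
    # [{'name': 'ui.username', 'value': 'alice', 'source': '/home/user/.hgrc'}]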
@@ -1987,8 +1947,9 @@ def debugbuilddag(ui, repo, text=None,
 
     tags = []
 
-    lock = tr = None
+    wlock = lock = tr = None
     try:
+        wlock = repo.wlock()
         lock = repo.lock()
         tr = repo.transaction("builddag")
 
@@ -2073,7 +2034,7 @@ def debugbuilddag(ui, repo, text=None,
             repo.vfs.write("localtags", "".join(tags))
     finally:
         ui.progress(_('building'), None)
-        release(tr, lock)
+        release(tr, lock, wlock)
 
 @command('debugbundle',
         [('a', 'all', None, _('show all details')),
@@ -2102,10 +2063,7 @@ def _debugchangegroup(ui, gen, all=None,
         def showchunks(named):
             ui.write("\n%s%s\n" % (indent_string, named))
             chain = None
-            while True:
-                chunkdata = gen.deltachunk(chain)
-                if not chunkdata:
-                    break
+            for chunkdata in iter(lambda: gen.deltachunk(chain), {}):
                 node = chunkdata['node']
                 p1 = chunkdata['p1']
                 p2 = chunkdata['p2']
@@ -2121,10 +2079,7 @@ def _debugchangegroup(ui, gen, all=None,
         showchunks("changelog")
         chunkdata = gen.manifestheader()
         showchunks("manifest")
-        while True:
-            chunkdata = gen.filelogheader()
-            if not chunkdata:
-                break
+        for chunkdata in iter(gen.filelogheader, {}):
             fname = chunkdata['filename']
             showchunks(fname)
     else:
@@ -2132,10 +2087,7 @@ def _debugchangegroup(ui, gen, all=None,
             raise error.Abort(_('use debugbundle2 for this file'))
         chunkdata = gen.changelogheader()
         chain = None
-        while True:
-            chunkdata = gen.deltachunk(chain)
-            if not chunkdata:
-                break
+        for chunkdata in iter(lambda: gen.deltachunk(chain), {}):
             node = chunkdata['node']
             ui.write("%s%s\n" % (indent_string, hex(node)))
             chain = node
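All three _debugchangegroup hunks replace a `while True`/`break` loop with the two-argument form of the built-in iter(), which calls a zero-argument callable repeatedly and stops as soon as the result equals the sentinel (here the empty dict that deltachunk() and filelogheader() return at end of stream). A standalone demonstration of the idiom:

    chunks = [{'node': 'a'}, {'node': 'b'}, {}]   # {} marks end of stream
    source = iter(chunks)

    # iter(callable, sentinel) keeps calling `callable` until it returns
    # a value equal to `sentinel` -- no explicit break needed.
    for chunkdata in iter(lambda: next(source), {}):
        print(chunkdata['node'])   # prints 'a' then 'b'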
@@ -2398,12 +2350,15 @@ def debugdiscovery(ui, repo, remoteurl="
 def debugextensions(ui, **opts):
     '''show information about active extensions'''
     exts = extensions.extensions(ui)
+    hgver = util.version()
     fm = ui.formatter('debugextensions', opts)
     for extname, extmod in sorted(exts, key=operator.itemgetter(0)):
+        isinternal = extensions.ismoduleinternal(extmod)
         extsource = extmod.__file__
-        exttestedwith = getattr(extmod, 'testedwith', None)
-        if exttestedwith is not None:
-            exttestedwith = exttestedwith.split()
+        if isinternal:
+            exttestedwith = []  # never expose magic string to users
+        else:
+            exttestedwith = getattr(extmod, 'testedwith', '').split()
         extbuglink = getattr(extmod, 'buglink', None)
 
         fm.startitem()
@@ -2412,21 +2367,24 @@ def debugextensions(ui, **opts):
             fm.write('name', '%s\n', extname)
         else:
             fm.write('name', '%s', extname)
-            if not exttestedwith:
+            if isinternal or hgver in exttestedwith:
+                fm.plain('\n')
+            elif not exttestedwith:
                 fm.plain(_(' (untested!)\n'))
             else:
-                if exttestedwith == ['internal'] or \
-                                util.version() in exttestedwith:
-                    fm.plain('\n')
-                else:
-                    lasttestedversion = exttestedwith[-1]
-                    fm.plain(' (%s!)\n' % lasttestedversion)
+                lasttestedversion = exttestedwith[-1]
+                fm.plain(' (%s!)\n' % lasttestedversion)
 
         fm.condwrite(ui.verbose and extsource, 'source',
                  _(' location: %s\n'), extsource or "")
 
+        if ui.verbose:
+            fm.plain(_(' bundled: %s\n') % ['no', 'yes'][isinternal])
+        fm.data(bundled=isinternal)
+
         fm.condwrite(ui.verbose and exttestedwith, 'testedwith',
-                     _(' tested with: %s\n'), ' '.join(exttestedwith or []))
+                     _(' tested with: %s\n'),
+                     fm.formatlist(exttestedwith, name='ver'))
 
         fm.condwrite(ui.verbose and extbuglink, 'buglink',
                      _(' bug reporting: %s\n'), extbuglink or "")
@@ -2453,7 +2411,7 @@ def debugfsinfo(ui, path="."):
     ui.write(('exec: %s\n') % (util.checkexec(path) and 'yes' or 'no'))
     ui.write(('symlink: %s\n') % (util.checklink(path) and 'yes' or 'no'))
     ui.write(('hardlink: %s\n') % (util.checknlink(path) and 'yes' or 'no'))
-    ui.write(('case-sensitive: %s\n') % (util.checkcase('.debugfsinfo')
+    ui.write(('case-sensitive: %s\n') % (util.fscasesensitive('.debugfsinfo')
                                          and 'yes' or 'no'))
     os.unlink('.debugfsinfo')
 
@@ -2832,7 +2790,7 @@ def debuginstall(ui, **opts):
     if not problems:
         fm.data(problems=problems)
     fm.condwrite(problems, 'problems',
-                 _("%s problems detected,"
+                 _("%d problems detected,"
                    " please check your install!\n"), problems)
     fm.end()
 
@@ -3054,7 +3012,7 @@ def debuglocks(ui, repo, **opts):
          ('r', 'rev', [], _('display markers relevant to REV')),
          ('', 'index', False, _('display index of the marker')),
          ('', 'delete', [], _('delete markers specified by indices')),
-        ] + commitopts2,
+        ] + commitopts2 + formatteropts,
         _('[OBSOLETED [REPLACEMENT ...]]'))
 def debugobsolete(ui, repo, precursor=None, *successors, **opts):
     """create arbitrary obsolete marker
@@ -3142,6 +3100,7 @@ def debugobsolete(ui, repo, precursor=No
             markerset = set(markers)
             isrelevant = lambda m: m in markerset
 
+        fm = ui.formatter('debugobsolete', opts)
         for i, m in enumerate(markerstoiter):
             if not isrelevant(m):
                 # marker can be irrelevant when we're iterating over a set
@@ -3152,8 +3111,10 @@ def debugobsolete(ui, repo, precursor=No
                 # to get the correct indices, but only display the ones that
                 # are relevant to --rev value
                 continue
+            fm.startitem()
             ind = i if opts.get('index') else None
-            cmdutil.showmarker(ui, m, index=ind)
+            cmdutil.showmarker(fm, m, index=ind)
+        fm.end()
 
 @command('debugpathcomplete',
          [('f', 'full', None, _('complete an entire path')),
@@ -3382,9 +3343,9 @@ def debugrevlog(ui, repo, file_=None, **
     nump2prev = 0
     chainlengths = []
 
-    datasize = [None, 0, 0L]
-    fullsize = [None, 0, 0L]
-    deltasize = [None, 0, 0L]
+    datasize = [None, 0, 0]
+    fullsize = [None, 0, 0]
+    deltasize = [None, 0, 0]
 
     def addsize(size, l):
         if l[0] is None or size < l[0]:
@@ -3509,29 +3470,92 @@ def debugrevlog(ui, repo, file_=None, **
                    numdeltas))
 
 @command('debugrevspec',
-    [('', 'optimize', None, _('print parsed tree after optimizing'))],
+    [('', 'optimize', None,
+      _('print parsed tree after optimizing (DEPRECATED)')),
+     ('p', 'show-stage', [],
+      _('print parsed tree at the given stage'), _('NAME')),
+     ('', 'no-optimized', False, _('evaluate tree without optimization')),
+     ('', 'verify-optimized', False, _('verify optimized result')),
+     ],
     ('REVSPEC'))
 def debugrevspec(ui, repo, expr, **opts):
     """parse and apply a revision specification
 
-    Use --verbose to print the parsed tree before and after aliases
-    expansion.
+    Use -p/--show-stage option to print the parsed tree at the given stages.
+    Use -p all to print tree at every stage.
+
+    Use --verify-optimized to compare the optimized result with the unoptimized
+    one. Returns 1 if the optimized result differs.
     """
-    if ui.verbose:
-        tree = revset.parse(expr, lookup=repo.__contains__)
-        ui.note(revset.prettyformat(tree), "\n")
-        newtree = revset.expandaliases(ui, tree)
-        if newtree != tree:
-            ui.note(("* expanded:\n"), revset.prettyformat(newtree), "\n")
-            tree = newtree
-        newtree = revset.foldconcat(tree)
-        if newtree != tree:
-            ui.note(("* concatenated:\n"), revset.prettyformat(newtree), "\n")
-        if opts["optimize"]:
-            optimizedtree = revset.optimize(newtree)
-            ui.note(("* optimized:\n"),
-                    revset.prettyformat(optimizedtree), "\n")
-    func = revset.match(ui, expr, repo)
+    stages = [
+        ('parsed', lambda tree: tree),
+        ('expanded', lambda tree: revset.expandaliases(ui, tree)),
+        ('concatenated', revset.foldconcat),
+        ('analyzed', revset.analyze),
+        ('optimized', revset.optimize),
+    ]
+    if opts['no_optimized']:
+        stages = stages[:-1]
+    if opts['verify_optimized'] and opts['no_optimized']:
+        raise error.Abort(_('cannot use --verify-optimized with '
+                            '--no-optimized'))
+    stagenames = set(n for n, f in stages)
+
+    showalways = set()
+    showchanged = set()
+    if ui.verbose and not opts['show_stage']:
+        # show parsed tree by --verbose (deprecated)
+        showalways.add('parsed')
+        showchanged.update(['expanded', 'concatenated'])
+        if opts['optimize']:
+            showalways.add('optimized')
+    if opts['show_stage'] and opts['optimize']:
+        raise error.Abort(_('cannot use --optimize with --show-stage'))
+    if opts['show_stage'] == ['all']:
+        showalways.update(stagenames)
+    else:
+        for n in opts['show_stage']:
+            if n not in stagenames:
+                raise error.Abort(_('invalid stage name: %s') % n)
+        showalways.update(opts['show_stage'])
+
+    treebystage = {}
+    printedtree = None
+    tree = revset.parse(expr, lookup=repo.__contains__)
+    for n, f in stages:
+        treebystage[n] = tree = f(tree)
+        if n in showalways or (n in showchanged and tree != printedtree):
+            if opts['show_stage'] or n != 'parsed':
+                ui.write(("* %s:\n") % n)
+            ui.write(revset.prettyformat(tree), "\n")
+            printedtree = tree
+
+    if opts['verify_optimized']:
+        arevs = revset.makematcher(treebystage['analyzed'])(repo)
+        brevs = revset.makematcher(treebystage['optimized'])(repo)
+        if ui.verbose:
+            ui.note(("* analyzed set:\n"), revset.prettyformatset(arevs), "\n")
+            ui.note(("* optimized set:\n"), revset.prettyformatset(brevs), "\n")
+        arevs = list(arevs)
+        brevs = list(brevs)
+        if arevs == brevs:
+            return 0
+        ui.write(('--- analyzed\n'), label='diff.file_a')
+        ui.write(('+++ optimized\n'), label='diff.file_b')
+        sm = difflib.SequenceMatcher(None, arevs, brevs)
+        for tag, alo, ahi, blo, bhi in sm.get_opcodes():
+            if tag in ('delete', 'replace'):
+                for c in arevs[alo:ahi]:
+                    ui.write('-%s\n' % c, label='diff.deleted')
+            if tag in ('insert', 'replace'):
+                for c in brevs[blo:bhi]:
+                    ui.write('+%s\n' % c, label='diff.inserted')
+            if tag == 'equal':
+                for c in arevs[alo:ahi]:
+                    ui.write(' %s\n' % c)
+        return 1
+
+    func = revset.makematcher(tree)
     revs = func(repo)
     if ui.verbose:
         ui.note(("* set:\n"), revset.prettyformatset(revs), "\n")
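The rewritten debugrevspec threads the parse tree through an ordered list of (name, transform) stages and records every intermediate tree, which is what later lets --verify-optimized compare the analyzed and optimized stages. The pipeline shape, reduced to a runnable toy (the stand-in transforms here are arbitrary):

    stages = [
        ('parsed', lambda tree: tree),
        ('doubled', lambda tree: tree * 2),   # stand-ins for expandaliases etc.
        ('stringified', str),
    ]

    treebystage = {}
    tree = 21
    for name, transform in stages:
        # each stage consumes the previous stage's output
        treebystage[name] = tree = transform(tree)

    print(treebystage)   # {'parsed': 21, 'doubled': 42, 'stringified': '42'}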
@@ -3914,16 +3938,16 @@ def export(ui, repo, *changesets, **opts
     [('r', 'rev', '', _('search the repository as it is in REV'), _('REV')),
      ('0', 'print0', None, _('end filenames with NUL, for use with xargs')),
     ] + walkopts + formatteropts + subrepoopts,
-    _('[OPTION]... [PATTERN]...'))
+    _('[OPTION]... [FILE]...'))
 def files(ui, repo, *pats, **opts):
     """list tracked files
 
     Print files under Mercurial control in the working directory or
-    specified revision whose names match the given patterns (excluding
-    removed files).
+    specified revision for given files (excluding removed files).
+    Files can be specified as filenames or filesets.
 
-    If no patterns are given to match, this command prints the names
-    of all files under Mercurial control in the working directory.
+    If no files are given to match, this command prints the names
+    of all files under Mercurial control.
 
     .. container:: verbose
 
@@ -3964,15 +3988,11 @@ def files(ui, repo, *pats, **opts):
     end = '\n'
     if opts.get('print0'):
         end = '\0'
-    fm = ui.formatter('files', opts)
     fmt = '%s' + end
 
     m = scmutil.match(ctx, pats, opts)
-    ret = cmdutil.files(ui, ctx, m, fm, fmt, opts.get('subrepos'))
-
-    fm.end()
-
-    return ret
+    with ui.formatter('files', opts) as fm:
+        return cmdutil.files(ui, ctx, m, fm, fmt, opts.get('subrepos'))
 
 @command('^forget', walkopts, _('[OPTION]... FILE...'), inferrepo=True)
 def forget(ui, repo, *pats, **opts):
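The files hunk switches to `with ui.formatter(...) as fm:`, which implies the formatter now supports the context-manager protocol so fm.end() no longer needs to be called explicitly. A sketch of what such support could look like (assumed behavior, not Mercurial's exact implementation):

    class formatter(object):
        # minimal context-manager support: flush on clean exit
        def __enter__(self):
            return self
        def __exit__(self, exctype, excvalue, traceback):
            if exctype is None:
                self.end()   # assumed: only flush when no exception escaped
        def end(self):
            print('output flushed')

    with formatter() as fm:
        pass   # prints 'output flushed' when the block exits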
@@ -4142,9 +4162,7 @@ def _dograft(ui, repo, *revs, **opts):
             # check for ancestors of dest branch
             crev = repo['.'].rev()
             ancestors = repo.changelog.ancestors([crev], inclusive=True)
-            # Cannot use x.remove(y) on smart set, this has to be a list.
             # XXX make this lazy in the future
-            revs = list(revs)
             # don't mutate while iterating, create a copy
             for rev in list(revs):
                 if rev in ancestors:
@@ -4284,7 +4302,7 @@ def _dograft(ui, repo, *revs, **opts):
       _('only search files changed within revision range'), _('REV')),
      ('u', 'user', None, _('list the author (long with -v)')),
      ('d', 'date', None, _('list the date (short with -q)')),
-    ] + walkopts,
+    ] + formatteropts + walkopts,
     _('[OPTION]... PATTERN [FILE]...'),
     inferrepo=True)
 def grep(ui, repo, pattern, *pats, **opts):
@@ -4349,19 +4367,16 @@ def grep(ui, repo, pattern, *pats, **opt
         def __eq__(self, other):
             return self.line == other.line
 
-        def __iter__(self):
-            yield (self.line[:self.colstart], '')
-            yield (self.line[self.colstart:self.colend], 'grep.match')
-            rest = self.line[self.colend:]
-            while rest:
-                m = regexp.search(rest)
-                if not m:
-                    yield (rest, '')
-                    break
-                mstart, mend = m.span()
-                yield (rest[:mstart], '')
-                yield (rest[mstart:mend], 'grep.match')
-                rest = rest[mend:]
+        def findpos(self):
+            """Iterate all (start, end) indices of matches"""
+            yield self.colstart, self.colend
+            p = self.colend
+            while p < len(self.line):
+                m = regexp.search(self.line, p)
+                if not m:
+                    break
+                yield m.span()
+                p = m.end()
 
     matches = {}
     copies = {}
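The new findpos() leans on the pos argument of re pattern objects' search() method to resume scanning after each match, yielding bare (start, end) spans instead of pre-labeled text fragments. A standalone check of the same loop:

    import re

    def findpos(regexp, line, colstart, colend):
        # first match span is already known, then scan from its end
        yield colstart, colend
        p = colend
        while p < len(line):
            m = regexp.search(line, p)
            if not m:
                break
            yield m.span()
            p = m.end()

    rx = re.compile(r'ab')
    line = 'ab-ab-ab'
    first = rx.search(line)
    print(list(findpos(rx, line, first.start(), first.end())))
    # [(0, 2), (3, 5), (6, 8)]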
@@ -4387,50 +4402,76 @@ def grep(ui, repo, pattern, *pats, **opt
                 for i in xrange(blo, bhi):
                     yield ('+', b[i])
 
-    def display(fn, ctx, pstates, states):
+    def display(fm, fn, ctx, pstates, states):
         rev = ctx.rev()
+        if fm.isplain():
+            formatuser = ui.shortuser
+        else:
+            formatuser = str
         if ui.quiet:
-            datefunc = util.shortdate
+            datefmt = '%Y-%m-%d'
         else:
-            datefunc = util.datestr
+            datefmt = '%a %b %d %H:%M:%S %Y %1%2'
         found = False
         @util.cachefunc
         def binary():
             flog = getfile(fn)
             return util.binary(flog.read(ctx.filenode(fn)))
 
+        fieldnamemap = {'filename': 'file', 'linenumber': 'line_number'}
         if opts.get('all'):
             iter = difflinestates(pstates, states)
         else:
             iter = [('', l) for l in states]
         for change, l in iter:
-            cols = [(fn, 'grep.filename'), (str(rev), 'grep.rev')]
-
-            if opts.get('line_number'):
-                cols.append((str(l.linenum), 'grep.linenumber'))
+            fm.startitem()
+            fm.data(node=fm.hexfunc(ctx.node()))
+            cols = [
+                ('filename', fn, True),
+                ('rev', rev, True),
+                ('linenumber', l.linenum, opts.get('line_number')),
+            ]
             if opts.get('all'):
-                cols.append((change, 'grep.change'))
-            if opts.get('user'):
-                cols.append((ui.shortuser(ctx.user()), 'grep.user'))
-            if opts.get('date'):
-                cols.append((datefunc(ctx.date()), 'grep.date'))
-            for col, label in cols[:-1]:
-                ui.write(col, label=label)
-                ui.write(sep, label='grep.sep')
-            ui.write(cols[-1][0], label=cols[-1][1])
+                cols.append(('change', change, True))
+            cols.extend([
+                ('user', formatuser(ctx.user()), opts.get('user')),
+                ('date', fm.formatdate(ctx.date(), datefmt), opts.get('date')),
+            ])
+            lastcol = next(name for name, data, cond in reversed(cols) if cond)
+            for name, data, cond in cols:
+                field = fieldnamemap.get(name, name)
+                fm.condwrite(cond, field, '%s', data, label='grep.%s' % name)
+                if cond and name != lastcol:
+                    fm.plain(sep, label='grep.sep')
             if not opts.get('files_with_matches'):
-                ui.write(sep, label='grep.sep')
+                fm.plain(sep, label='grep.sep')
                 if not opts.get('text') and binary():
-                    ui.write(_(" Binary file matches"))
+                    fm.plain(_(" Binary file matches"))
                 else:
-                    for s, label in l:
-                        ui.write(s, label=label)
-            ui.write(eol)
+                    displaymatches(fm.nested('texts'), l)
+            fm.plain(eol)
             found = True
             if opts.get('files_with_matches'):
                 break
         return found
 
+    def displaymatches(fm, l):
+        p = 0
+        for s, e in l.findpos():
+            if p < s:
+                fm.startitem()
+                fm.write('text', '%s', l.line[p:s])
+                fm.data(matched=False)
+            fm.startitem()
+            fm.write('text', '%s', l.line[s:e], label='grep.match')
+            fm.data(matched=True)
+            p = e
+        if p < len(l.line):
+            fm.startitem()
+            fm.write('text', '%s', l.line[p:])
+            fm.data(matched=False)
+        fm.end()
+
     skip = {}
     revfiles = {}
     matchfn = scmutil.match(repo[None], pats, opts)
@@ -4472,6 +4513,7 @@ def grep(ui, repo, pattern, *pats, **opt
             except error.LookupError:
                 pass
 
+    fm = ui.formatter('grep', opts)
     for ctx in cmdutil.walkchangerevs(repo, matchfn, opts, prep):
         rev = ctx.rev()
         parent = ctx.p1().rev()
@@ -4484,7 +4526,7 @@ def grep(ui, repo, pattern, *pats, **opt
                 continue
             pstates = matches.get(parent, {}).get(copy or fn, [])
             if pstates or states:
-                r = display(fn, ctx, pstates, states)
+                r = display(fm, fn, ctx, pstates, states)
                 found = found or r
                 if r and not opts.get('all'):
                     skip[fn] = True
@@ -4492,6 +4534,7 @@ def grep(ui, repo, pattern, *pats, **opt
                     skip[copy] = True
         del matches[rev]
         del revfiles[rev]
+    fm.end()
 
     return not found
 
@@ -4607,12 +4650,15 @@ def help_(ui, name=None, **opts):
     section = None
     subtopic = None
     if name and '.' in name:
-        name, section = name.split('.', 1)
-        section = encoding.lower(section)
-        if '.' in section:
-            subtopic, section = section.split('.', 1)
+        name, remaining = name.split('.', 1)
+        remaining = encoding.lower(remaining)
+        if '.' in remaining:
+            subtopic, section = remaining.split('.', 1)
         else:
-            subtopic = section
+            if name in help.subtopics:
+                subtopic = remaining
+            else:
+                section = remaining
 
     text = help.help_(ui, name, subtopic=subtopic, **opts)
 
@@ -5437,7 +5483,9 @@ def merge(ui, repo, node=None, **opts):
         # ui.forcemerge is an internal variable, do not document
         repo.ui.setconfig('ui', 'forcemerge', opts.get('tool', ''), 'merge')
         force = opts.get('force')
-        return hg.merge(repo, node, force=force, mergeforce=force)
+        labels = ['working copy', 'merge rev']
+        return hg.merge(repo, node, force=force, mergeforce=force,
+                        labels=labels)
     finally:
         ui.setconfig('ui', 'forcemerge', '', 'merge')
 
@@ -5611,10 +5659,10 @@ def paths(ui, repo, search=None, **opts)
         pathitems = sorted(ui.paths.iteritems())
 
     fm = ui.formatter('paths', opts)
-    if fm:
+    if fm.isplain():
+        hidepassword = util.hidepassword
+    else:
         hidepassword = str
-    else:
-        hidepassword = util.hidepassword
     if ui.quiet:
         namefmt = '%s\n'
     else:
@@ -5936,7 +5984,7 @@ def push(ui, repo, dest=None, **opts):
     path = ui.paths.getpath(dest, default=('default-push', 'default'))
     if not path:
         raise error.Abort(_('default repository not configured!'),
-                          hint=_('see the "path" section in "hg help config"'))
+                          hint=_("see 'hg help config.paths'"))
     dest = path.pushloc or path.loc
     branches = (path.branch, opts.get('branch') or [])
     ui.status(_('pushing to %s\n') % util.hidepassword(dest))
@@ -6459,7 +6507,7 @@ def root(ui, repo):
     ('n', 'name', '',
      _('name to show in web pages (default: working directory)'), _('NAME')),
     ('', 'web-conf', '',
-     _('name of the hgweb config file (see "hg help hgweb")'), _('FILE')),
+     _("name of the hgweb config file (see 'hg help hgweb')"), _('FILE')),
     ('', 'webdir-conf', '', _('name of the hgweb config file (DEPRECATED)'),
      _('FILE')),
     ('', 'pid-file', '', _('name of file to write process ID to'), _('FILE')),
@@ -7247,37 +7295,48 @@ def verify(ui, repo):
     """
     return hg.verify(repo)
 
-@command('version', [], norepo=True)
-def version_(ui):
+@command('version', [] + formatteropts, norepo=True)
+def version_(ui, **opts):
     """output version and copyright information"""
-    ui.write(_("Mercurial Distributed SCM (version %s)\n")
-             % util.version())
-    ui.status(_(
+    fm = ui.formatter("version", opts)
+    fm.startitem()
+    fm.write("ver", _("Mercurial Distributed SCM (version %s)\n"),
+             util.version())
+    license = _(
         "(see https://mercurial-scm.org for more information)\n"
         "\nCopyright (C) 2005-2016 Matt Mackall and others\n"
         "This is free software; see the source for copying conditions. "
         "There is NO\nwarranty; "
         "not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.\n"
-    ))
+    )
+    if not ui.quiet:
+        fm.plain(license)
 
-    ui.note(_("\nEnabled extensions:\n\n"))
     if ui.verbose:
-        # format names and versions into columns
-        names = []
-        vers = []
-        place = []
-        for name, module in extensions.extensions():
-            names.append(name)
-            vers.append(extensions.moduleversion(module))
-            if extensions.ismoduleinternal(module):
-                place.append(_("internal"))
-            else:
-                place.append(_("external"))
-        if names:
-            maxnamelen = max(len(n) for n in names)
-            for i, name in enumerate(names):
-                ui.write("  %-*s  %s  %s\n" %
-                         (maxnamelen, name, place[i], vers[i]))
+        fm.plain(_("\nEnabled extensions:\n\n"))
+    # format names and versions into columns
+    names = []
+    vers = []
+    isinternals = []
+    for name, module in extensions.extensions():
+        names.append(name)
+        vers.append(extensions.moduleversion(module) or None)
+        isinternals.append(extensions.ismoduleinternal(module))
+    fn = fm.nested("extensions")
+    if names:
+        namefmt = "  %%-%ds  " % max(len(n) for n in names)
+        places = [_("external"), _("internal")]
+        for n, v, p in zip(names, vers, isinternals):
+            fn.startitem()
+            fn.condwrite(ui.verbose, "name", namefmt, n)
+            if ui.verbose:
+                fn.plain("%s  " % places[p])
+            fn.data(bundled=p)
+            fn.condwrite(ui.verbose and v, "ver", "%s", v)
+            if ui.verbose:
+                fn.plain("\n")
    fn.end()
+    fm.end()
 
 def loadcmdtable(ui, name, cmdtable):
     """Load command functions from specified cmdtable
--- a/mercurial/context.py
+++ b/mercurial/context.py
@@ -528,11 +528,12 @@ class changectx(basectx):
 
     @propertycache
     def _manifest(self):
-        return self._repo.manifest.read(self._changeset.manifest)
+        return self._repo.manifestlog[self._changeset.manifest].read()
 
     @propertycache
     def _manifestdelta(self):
-        return self._repo.manifest.readdelta(self._changeset.manifest)
+        mfnode = self._changeset.manifest
+        return self._repo.manifestlog[mfnode].readdelta()
 
     @propertycache
     def _parents(self):
@@ -823,7 +824,7 @@ class basefilectx(object):
         """
         repo = self._repo
         cl = repo.unfiltered().changelog
-        ma = repo.manifest
+        mfl = repo.manifestlog
         # fetch the linkrev
         fr = filelog.rev(fnode)
         lkr = filelog.linkrev(fr)
@@ -848,7 +849,7 @@ class basefilectx(object):
             if path in ac[3]: # checking the 'files' field.
                 # The file has been touched, check if the content is
                 # similar to the one we search for.
-                if fnode == ma.readfast(ac[0]).get(path):
+                if fnode == mfl[ac[0]].readfast().get(path):
                     return a
         # In theory, we should never get out of that loop without a result.
         # But if manifest uses a buggy file revision (not children of the
@@ -929,7 +930,7 @@ class basefilectx(object):
         def lines(text):
             if text.endswith("\n"):
                 return text.count("\n")
-            return text.count("\n") + 1
+            return text.count("\n") + int(bool(text))
 
         if linenumber:
             def decorate(text, rev):
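The lines() change replaces `+ 1` with `+ int(bool(text))` so that a completely empty file is counted as zero lines instead of one, while non-empty text without a trailing newline still gets its final line counted. A quick check:

    def lines(text):
        if text.endswith("\n"):
            return text.count("\n")
        return text.count("\n") + int(bool(text))

    assert lines("") == 0          # the old `+ 1` reported one line here
    assert lines("a") == 1
    assert lines("a\nb") == 2
    assert lines("a\nb\n") == 2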
@@ -939,8 +940,7 @@ class basefilectx(object):
                 return ([(rev, False)] * lines(text), text)
 
         def pair(parent, child):
-            blocks = mdiff.allblocks(parent[1], child[1], opts=diffopts,
-                                     refine=True)
+            blocks = mdiff.allblocks(parent[1], child[1], opts=diffopts)
             for (a1, a2, b1, b2), t in blocks:
                 # Changed blocks ('!') or blocks made only of blank lines ('~')
                 # belong to the child.
@@ -1508,7 +1508,7 @@ class workingctx(committablectx):
 
         # Only a case insensitive filesystem needs magic to translate user input
        # to actual case in the filesystem.
-        if not util.checkcase(r.root):
+        if not util.fscasesensitive(r.root):
            return matchmod.icasefsmatcher(r.root, r.getcwd(), pats, include,
                                            exclude, default, r.auditor, self,
                                            listsubrepos=listsubrepos,
@@ -231,30 +231,34 b' def pathcopies(x, y, match=None):' | |||||
231 | return _chain(x, y, _backwardrenames(x, a), |
|
231 | return _chain(x, y, _backwardrenames(x, a), | |
232 | _forwardcopies(a, y, match=match)) |
|
232 | _forwardcopies(a, y, match=match)) | |
233 |
|
233 | |||
234 | def _computenonoverlap(repo, c1, c2, addedinm1, addedinm2): |
|
234 | def _computenonoverlap(repo, c1, c2, addedinm1, addedinm2, baselabel=''): | |
235 | """Computes, based on addedinm1 and addedinm2, the files exclusive to c1 |
|
235 | """Computes, based on addedinm1 and addedinm2, the files exclusive to c1 | |
236 | and c2. This is its own function so extensions can easily wrap this call |
|
236 | and c2. This is its own function so extensions can easily wrap this call | |
237 | to see what files mergecopies is about to process. |
|
237 | to see what files mergecopies is about to process. | |
238 |
|
238 | |||
239 | Even though c1 and c2 are not used in this function, they are useful in |
|
239 | Even though c1 and c2 are not used in this function, they are useful in | |
240 | other extensions for being able to read the file nodes of the changed files. |
|
240 | other extensions for being able to read the file nodes of the changed files. | |
|
241 | ||||
|
242 | "baselabel" can be passed to help distinguish the multiple computations | |||
|
243 | done in the graft case. | |||
241 | """ |
|
244 | """ | |
242 | u1 = sorted(addedinm1 - addedinm2) |
|
245 | u1 = sorted(addedinm1 - addedinm2) | |
243 | u2 = sorted(addedinm2 - addedinm1) |
|
246 | u2 = sorted(addedinm2 - addedinm1) | |
244 |
|
247 | |||
|
248 | header = " unmatched files in %s" | |||
|
249 | if baselabel: | |||
|
250 | header += ' (from %s)' % baselabel | |||
245 | if u1: |
|
251 | if u1: | |
246 | repo.ui.debug(" unmatched files in local:\n %s\n" |
|
252 | repo.ui.debug("%s:\n %s\n" % (header % 'local', "\n ".join(u1))) | |
247 | % "\n ".join(u1)) |
|
|||
248 | if u2: |
|
253 | if u2: | |
249 | repo.ui.debug(" unmatched files in other:\n %s\n" |
|
254 | repo.ui.debug("%s:\n %s\n" % (header % 'other', "\n ".join(u2))) | |
250 | % "\n ".join(u2)) |
|
|||
251 | return u1, u2 |
|
255 | return u1, u2 | |
252 |
|
256 | |||
253 | def _makegetfctx(ctx): |
|
257 | def _makegetfctx(ctx): | |
254 | """return a 'getfctx' function suitable for checkcopies usage |
|
258 | """return a 'getfctx' function suitable for _checkcopies usage | |
255 |
|
259 | |||
256 | We have to re-setup the function building 'filectx' for each |
|
260 | We have to re-setup the function building 'filectx' for each | |
257 | 'checkcopies' to ensure the linkrev adjustment is properly setup for |
|
261 | '_checkcopies' to ensure the linkrev adjustment is properly setup for | |
258 | each. Linkrev adjustment is important to avoid bug in rename |
|
262 | each. Linkrev adjustment is important to avoid bug in rename | |
259 | detection. Moreover, having a proper '_ancestrycontext' setup ensures |
|
263 | detection. Moreover, having a proper '_ancestrycontext' setup ensures | |
260 | the performance impact of this adjustment is kept limited. Without it, |
|
264 | the performance impact of this adjustment is kept limited. Without it, | |
@@ -285,10 +289,26 b' def _makegetfctx(ctx):' | |||||
285 | return fctx |
|
289 | return fctx | |
286 | return util.lrucachefunc(makectx) |
|
290 | return util.lrucachefunc(makectx) | |
287 |
|
291 | |||
288 | def mergecopies(repo, c1, c2, ca): |
|
292 | def _combinecopies(copyfrom, copyto, finalcopy, diverge, incompletediverge): | |
|
293 | """combine partial copy paths""" | |||
|
294 | remainder = {} | |||
|
295 | for f in copyfrom: | |||
|
296 | if f in copyto: | |||
|
297 | finalcopy[copyto[f]] = copyfrom[f] | |||
|
298 | del copyto[f] | |||
|
299 | for f in incompletediverge: | |||
|
300 | assert f not in diverge | |||
|
301 | ic = incompletediverge[f] | |||
|
302 | if ic[0] in copyto: | |||
|
303 | diverge[f] = [copyto[ic[0]], ic[1]] | |||
|
304 | else: | |||
|
305 | remainder[f] = ic | |||
|
306 | return remainder | |||
|
307 | ||||
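A hedged, self-contained rerun of the ``_combinecopies`` logic on toy dictionaries (the path names are invented), showing how two half-paths meeting at an intermediate name become one full copy record::

    def combinecopies(copyfrom, copyto, finalcopy, diverge, incompletediverge):
        # same logic as _combinecopies above, renamed for standalone use
        remainder = {}
        for f in copyfrom:
            if f in copyto:
                finalcopy[copyto[f]] = copyfrom[f]
                del copyto[f]
        for f in incompletediverge:
            ic = incompletediverge[f]
            if ic[0] in copyto:
                diverge[f] = [copyto[ic[0]], ic[1]]
            else:
                remainder[f] = ic
        return remainder

    finalcopy, diverge = {}, {}
    rest = combinecopies({'mid': 'src'}, {'mid': 'dst'}, finalcopy, diverge, {})
    assert finalcopy == {'dst': 'src'} and rest == {}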
|
308 | def mergecopies(repo, c1, c2, base): | |||
289 | """ |
|
309 | """ | |
290 | Find moves and copies between context c1 and c2 that are relevant |
|
310 | Find moves and copies between context c1 and c2 that are relevant | |
291 | for merging. |
|
311 | for merging. 'base' will be used as the merge base. | |
292 |
|
312 | |||
293 | Returns four dicts: "copy", "movewithdir", "diverge", and |
|
313 | Returns four dicts: "copy", "movewithdir", "diverge", and | |
294 | "renamedelete". |
|
314 | "renamedelete". | |
@@ -321,6 +341,27 b' def mergecopies(repo, c1, c2, ca):' | |||||
321 | if repo.ui.configbool('experimental', 'disablecopytrace'): |
|
341 | if repo.ui.configbool('experimental', 'disablecopytrace'): | |
322 | return {}, {}, {}, {} |
|
342 | return {}, {}, {}, {} | |
323 |
|
343 | |||
|
344 | # In certain scenarios (e.g. graft, update or rebase), base can be | |||
|
345 | # overridden. We still need to know a real common ancestor in this case. We | |||
|
346 | # can't just compute _c1.ancestor(_c2) and compare it to ca, because there | |||
|
347 | # can be multiple common ancestors, e.g. in case of bidmerge. Because our | |||
|
348 | # caller may not know if the revision passed in lieu of the CA is a genuine | |||
|
349 | # common ancestor or not without explicitly checking it, it's better to | |||
|
350 | # determine that here. | |||
|
351 | # | |||
|
352 | # base.descendant(wc) and base.descendant(base) are False, work around that | |||
|
353 | _c1 = c1.p1() if c1.rev() is None else c1 | |||
|
354 | _c2 = c2.p1() if c2.rev() is None else c2 | |||
|
355 | # an endpoint is "dirty" if it isn't a descendant of the merge base | |||
|
356 | # if we have a dirty endpoint, we need to trigger graft logic, and also | |||
|
357 | # keep track of which endpoint is dirty | |||
|
358 | dirtyc1 = not (base == _c1 or base.descendant(_c1)) | |||
|
359 | dirtyc2 = not (base == _c2 or base.descendant(_c2)) | |||
|
360 | graft = dirtyc1 or dirtyc2 | |||
|
361 | tca = base | |||
|
362 | if graft: | |||
|
363 | tca = _c1.ancestor(_c2) | |||
|
364 | ||||
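A toy model of the ``dirtyc1``/``dirtyc2`` test on an invented five-node history, to show when the graft logic is triggered::

    # a -- b -- c         grafting 'e' onto 'c' passes base = 'd' (e's
    #      \               parent); 'd' is not an ancestor of 'c', so the
    #       d -- e         c1 side is "dirty" and the graft logic runs
    ancestors = {'a': set(), 'b': {'a'}, 'c': {'a', 'b'},
                 'd': {'a', 'b'}, 'e': {'a', 'b', 'd'}}

    def isdirty(base, endpoint):
        # mirrors: not (base == endpoint or base.descendant(endpoint))
        return not (base == endpoint or base in ancestors[endpoint])

    assert isdirty('d', 'c')        # graft: base outside endpoint's history
    assert not isdirty('b', 'c')    # ordinary merge base: clean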
324 | limit = _findlimit(repo, c1.rev(), c2.rev()) |
|
365 | limit = _findlimit(repo, c1.rev(), c2.rev()) | |
325 | if limit is None: |
|
366 | if limit is None: | |
326 | # no common ancestor, no copies |
|
367 | # no common ancestor, no copies | |
@@ -329,28 +370,63 b' def mergecopies(repo, c1, c2, ca):' | |||||
329 |
|
370 | |||
330 | m1 = c1.manifest() |
|
371 | m1 = c1.manifest() | |
331 | m2 = c2.manifest() |
|
372 | m2 = c2.manifest() | |
332 | ma = ca.manifest() |
|
373 | mb = base.manifest() | |
333 |
|
374 | |||
334 | copy1, copy2, = {}, {} |
|
375 | # gather data from _checkcopies: | |
335 | movewithdir1, movewithdir2 = {}, {} |
|
376 | # - diverge = record all diverges in this dict | |
336 | fullcopy1, fullcopy2 = {}, {} |
|
377 | # - copy = record all non-divergent copies in this dict | |
337 | diverge = {} |
|
378 | # - fullcopy = record all copies in this dict | |
|
379 | # - incomplete = record non-divergent partial copies here | |||
|
380 | # - incompletediverge = record divergent partial copies here | |||
|
381 | diverge = {} # divergence data is shared | |||
|
382 | incompletediverge = {} | |||
|
383 | data1 = {'copy': {}, | |||
|
384 | 'fullcopy': {}, | |||
|
385 | 'incomplete': {}, | |||
|
386 | 'diverge': diverge, | |||
|
387 | 'incompletediverge': incompletediverge, | |||
|
388 | } | |||
|
389 | data2 = {'copy': {}, | |||
|
390 | 'fullcopy': {}, | |||
|
391 | 'incomplete': {}, | |||
|
392 | 'diverge': diverge, | |||
|
393 | 'incompletediverge': incompletediverge, | |||
|
394 | } | |||
338 |
|
395 | |||
339 | # find interesting file sets from manifests |
|
396 | # find interesting file sets from manifests | |
340 | addedinm1 = m1.filesnotin(ma) |
|
397 | addedinm1 = m1.filesnotin(mb) | |
341 | addedinm2 = m2.filesnotin(ma) |
|
398 | addedinm2 = m2.filesnotin(mb) | |
342 | u1, u2 = _computenonoverlap(repo, c1, c2, addedinm1, addedinm2) |
|
|||
343 | bothnew = sorted(addedinm1 & addedinm2) |
|
399 | bothnew = sorted(addedinm1 & addedinm2) | |
|
400 | if tca == base: | |||
|
401 | # unmatched file from base | |||
|
402 | u1r, u2r = _computenonoverlap(repo, c1, c2, addedinm1, addedinm2) | |||
|
403 | u1u, u2u = u1r, u2r | |||
|
404 | else: | |||
|
405 | # unmatched file from base (DAG rotation in the graft case) | |||
|
406 | u1r, u2r = _computenonoverlap(repo, c1, c2, addedinm1, addedinm2, | |||
|
407 | baselabel='base') | |||
|
408 | # unmatched file from topological common ancestors (no DAG rotation) | |||
|
409 | # need to recompute this for directory move handling when grafting | |||
|
410 | mta = tca.manifest() | |||
|
411 | u1u, u2u = _computenonoverlap(repo, c1, c2, m1.filesnotin(mta), | |||
|
412 | m2.filesnotin(mta), | |||
|
413 | baselabel='topological common ancestor') | |||
344 |
|
414 | |||
345 | for f in u1: |
|
415 | for f in u1u: | |
346 | checkcopies(c1, f, m1, m2, ca, limit, diverge, copy1, fullcopy1) |
|
416 | _checkcopies(c1, f, m1, m2, base, tca, dirtyc1, limit, data1) | |
|
417 | ||||
|
418 | for f in u2u: | |||
|
419 | _checkcopies(c2, f, m2, m1, base, tca, dirtyc2, limit, data2) | |||
347 |
|
420 | |||
348 | for f in u2: |
|
421 | copy = dict(data1['copy'].items() + data2['copy'].items()) | |
349 | checkcopies(c2, f, m2, m1, ca, limit, diverge, copy2, fullcopy2) |
|
422 | fullcopy = dict(data1['fullcopy'].items() + data2['fullcopy'].items()) | |
350 |
|
423 | |||
351 | copy = dict(copy1.items() + copy2.items()) |
|
424 | if dirtyc1: | |
352 | movewithdir = dict(movewithdir1.items() + movewithdir2.items()) |
|
425 | _combinecopies(data2['incomplete'], data1['incomplete'], copy, diverge, | |
353 | fullcopy = dict(fullcopy1.items() + fullcopy2.items()) |
|
426 | incompletediverge) | |
|
427 | else: | |||
|
428 | _combinecopies(data1['incomplete'], data2['incomplete'], copy, diverge, | |||
|
429 | incompletediverge) | |||
354 |
|
430 | |||
355 | renamedelete = {} |
|
431 | renamedelete = {} | |
356 | renamedeleteset = set() |
|
432 | renamedeleteset = set() | |
@@ -369,10 +445,44 b' def mergecopies(repo, c1, c2, ca):' | |||||
369 | if bothnew: |
|
445 | if bothnew: | |
370 | repo.ui.debug(" unmatched files new in both:\n %s\n" |
|
446 | repo.ui.debug(" unmatched files new in both:\n %s\n" | |
371 | % "\n ".join(bothnew)) |
|
447 | % "\n ".join(bothnew)) | |
372 | bothdiverge, _copy, _fullcopy = {}, {}, {} |
|
448 | bothdiverge = {} | |
|
449 | bothincompletediverge = {} | |||
|
450 | remainder = {} | |||
|
451 | both1 = {'copy': {}, | |||
|
452 | 'fullcopy': {}, | |||
|
453 | 'incomplete': {}, | |||
|
454 | 'diverge': bothdiverge, | |||
|
455 | 'incompletediverge': bothincompletediverge | |||
|
456 | } | |||
|
457 | both2 = {'copy': {}, | |||
|
458 | 'fullcopy': {}, | |||
|
459 | 'incomplete': {}, | |||
|
460 | 'diverge': bothdiverge, | |||
|
461 | 'incompletediverge': bothincompletediverge | |||
|
462 | } | |||
373 | for f in bothnew: |
|
463 | for f in bothnew: | |
374 | checkcopies(c1, f, m1, m2, ca, limit, bothdiverge, _copy, _fullcopy) |
|
464 | _checkcopies(c1, f, m1, m2, base, tca, dirtyc1, limit, both1) | |
375 | checkcopies(c2, f, m2, m1, ca, limit, bothdiverge, _copy, _fullcopy) |
|
465 | _checkcopies(c2, f, m2, m1, base, tca, dirtyc2, limit, both2) | |
|
466 | if dirtyc1: | |||
|
467 | # incomplete copies may only be found on the "dirty" side for bothnew | |||
|
468 | assert not both2['incomplete'] | |||
|
469 | remainder = _combinecopies({}, both1['incomplete'], copy, bothdiverge, | |||
|
470 | bothincompletediverge) | |||
|
471 | elif dirtyc2: | |||
|
472 | assert not both1['incomplete'] | |||
|
473 | remainder = _combinecopies({}, both2['incomplete'], copy, bothdiverge, | |||
|
474 | bothincompletediverge) | |||
|
475 | else: | |||
|
476 | # incomplete copies and divergences can't happen outside grafts | |||
|
477 | assert not both1['incomplete'] | |||
|
478 | assert not both2['incomplete'] | |||
|
479 | assert not bothincompletediverge | |||
|
480 | for f in remainder: | |||
|
481 | assert f not in bothdiverge | |||
|
482 | ic = remainder[f] | |||
|
483 | if ic[0] in (m1 if dirtyc1 else m2): | |||
|
484 | # backed-out rename on one side, but watch out for deleted files | |||
|
485 | bothdiverge[f] = ic | |||
376 | for of, fl in bothdiverge.items(): |
|
486 | for of, fl in bothdiverge.items(): | |
377 | if len(fl) == 2 and fl[0] == fl[1]: |
|
487 | if len(fl) == 2 and fl[0] == fl[1]: | |
378 | copy[fl[0]] = of # not actually divergent, just matching renames |
|
488 | copy[fl[0]] = of # not actually divergent, just matching renames | |
@@ -393,7 +503,7 b' def mergecopies(repo, c1, c2, ca):' | |||||
393 | del divergeset |
|
503 | del divergeset | |
394 |
|
504 | |||
395 | if not fullcopy: |
|
505 | if not fullcopy: | |
396 | return copy, movewithdir, diverge, renamedelete |
|
506 | return copy, {}, diverge, renamedelete | |
397 |
|
507 | |||
398 | repo.ui.debug(" checking for directory renames\n") |
|
508 | repo.ui.debug(" checking for directory renames\n") | |
399 |
|
509 | |||
@@ -431,14 +541,15 b' def mergecopies(repo, c1, c2, ca):' | |||||
431 | del d1, d2, invalid |
|
541 | del d1, d2, invalid | |
432 |
|
542 | |||
433 | if not dirmove: |
|
543 | if not dirmove: | |
434 | return copy, movewithdir, diverge, renamedelete |
|
544 | return copy, {}, diverge, renamedelete | |
435 |
|
545 | |||
436 | for d in dirmove: |
|
546 | for d in dirmove: | |
437 | repo.ui.debug(" discovered dir src: '%s' -> dst: '%s'\n" % |
|
547 | repo.ui.debug(" discovered dir src: '%s' -> dst: '%s'\n" % | |
438 | (d, dirmove[d])) |
|
548 | (d, dirmove[d])) | |
439 |
|
549 | |||
|
550 | movewithdir = {} | |||
440 | # check unaccounted nonoverlapping files against directory moves |
|
551 | # check unaccounted nonoverlapping files against directory moves | |
441 | for f in u1 + u2: |
|
552 | for f in u1r + u2r: | |
442 | if f not in fullcopy: |
|
553 | if f not in fullcopy: | |
443 | for d in dirmove: |
|
554 | for d in dirmove: | |
444 | if f.startswith(d): |
|
555 | if f.startswith(d): | |
@@ -452,55 +563,74 b' def mergecopies(repo, c1, c2, ca):' | |||||
452 |
|
563 | |||
453 | return copy, movewithdir, diverge, renamedelete |
|
564 | return copy, movewithdir, diverge, renamedelete | |
454 |
|
565 | |||
455 | def checkcopies(ctx, f, m1, m2, ca, limit, diverge, copy, fullcopy): |
|
566 | def _related(f1, f2, limit): | |
|
567 | """return True if f1 and f2 filectx have a common ancestor | |||
|
568 | ||||
|
569 | Walk back to common ancestor to see if the two files originate | |||
|
570 | from the same file. Since workingfilectx's rev() is None it messes | |||
|
571 | up the integer comparison logic, hence the pre-step check for | |||
|
572 | None (f1 and f2 can only be workingfilectx's initially). | |||
|
573 | """ | |||
|
574 | ||||
|
575 | if f1 == f2: | |||
|
576 | return f1 # a match | |||
|
577 | ||||
|
578 | g1, g2 = f1.ancestors(), f2.ancestors() | |||
|
579 | try: | |||
|
580 | f1r, f2r = f1.linkrev(), f2.linkrev() | |||
|
581 | ||||
|
582 | if f1r is None: | |||
|
583 | f1 = next(g1) | |||
|
584 | if f2r is None: | |||
|
585 | f2 = next(g2) | |||
|
586 | ||||
|
587 | while True: | |||
|
588 | f1r, f2r = f1.linkrev(), f2.linkrev() | |||
|
589 | if f1r > f2r: | |||
|
590 | f1 = next(g1) | |||
|
591 | elif f2r > f1r: | |||
|
592 | f2 = next(g2) | |||
|
593 | elif f1 == f2: | |||
|
594 | return f1 # a match | |||
|
595 | elif f1r == f2r or f1r < limit or f2r < limit: | |||
|
596 | return False # copy no longer relevant | |||
|
597 | except StopIteration: | |||
|
598 | return False | |||
|
599 | ||||
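The walk inside ``_related`` can be pictured with plain ``(linkrev, name)`` pairs, newest first: whichever chain sits at the higher revision advances until the chains meet or the search falls out of range. A hedged toy version, not the filectx-based original::

    def related(chain1, chain2, limit):
        # chain1/chain2: ancestors as (linkrev, name) pairs, newest first
        i = j = 0
        while i < len(chain1) and j < len(chain2):
            (r1, f1), (r2, f2) = chain1[i], chain2[j]
            if r1 > r2:
                i += 1                  # advance the younger chain
            elif r2 > r1:
                j += 1
            elif f1 == f2:
                return f1               # the chains merged: related
            elif r1 == r2 or r1 < limit or r2 < limit:
                return False            # copy no longer relevant
        return False                    # a chain ran out: unrelated

    assert related([(5, 'x'), (3, 'o')], [(4, 'y'), (3, 'o')], 0) == 'o'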
|
600 | def _checkcopies(ctx, f, m1, m2, base, tca, remotebase, limit, data): | |||
456 | """ |
|
601 | """ | |
457 | check possible copies of f from m1 to m2 |
|
602 | check possible copies of f from m1 to m2 | |
458 |
|
603 | |||
459 | ctx = starting context for f in m1 |
|
604 | ctx = starting context for f in m1 | |
460 | f = the filename to check |
|
605 | f = the filename to check (as in m1) | |
461 | m1 = the source manifest |
|
606 | m1 = the source manifest | |
462 | m2 = the destination manifest |
|
607 | m2 = the destination manifest | |
463 | ca = the changectx of the common ancestor |
|
608 | base = the changectx used as a merge base | |
|
609 | tca = topological common ancestor for graft-like scenarios | |||
|
610 | remotebase = True if base is outside tca::ctx, False otherwise | |||
464 | limit = the rev number to not search beyond |
|
611 | limit = the rev number to not search beyond | |
465 | diverge = record all diverges in this dict |
|
612 | data = dictionary of dictionaries to store copy data. (see mergecopies) | 
466 | copy = record all non-divergent copies in this dict |
|
613 | ||
467 | fullcopy = record all copies in this dict |
|
614 | note: limit is only an optimization, and there is no guarantee that | |
|
615 | irrelevant revisions will not be limited | |||
|
616 | there is no easy way to make this algorithm stop in a guaranteed way | |||
|
617 | once it "goes behind a certain revision". | |||
468 | """ |
|
618 | """ | |
469 |
|
619 | |||
470 | ma = ca.manifest() |
|
620 | mb = base.manifest() | |
|
621 | mta = tca.manifest() | |||
|
622 | # Might be true if this call is about finding backward renames, | |||
|
623 | # This happens in the case of grafts because the DAG is then rotated. | |||
|
624 | # If the file exists in both the base and the source, we are not looking | |||
|
625 | # for a rename on the source side, but on the part of the DAG that is | |||
|
626 | # traversed backwards. | |||
|
627 | # | |||
|
628 | # In the case there is both backward and forward renames (before and after | |||
|
629 | # the base) this is more complicated as we must detect a divergence. | |||
|
630 | # We use 'backwards = False' in that case. | |||
|
631 | backwards = not remotebase and base != tca and f in mb | |||
471 | getfctx = _makegetfctx(ctx) |
|
632 | getfctx = _makegetfctx(ctx) | |
472 |
|
633 | |||
473 | def _related(f1, f2, limit): |
|
|||
474 | # Walk back to common ancestor to see if the two files originate |
|
|||
475 | # from the same file. Since workingfilectx's rev() is None it messes |
|
|||
476 | # up the integer comparison logic, hence the pre-step check for |
|
|||
477 | # None (f1 and f2 can only be workingfilectx's initially). |
|
|||
478 |
|
||||
479 | if f1 == f2: |
|
|||
480 | return f1 # a match |
|
|||
481 |
|
||||
482 | g1, g2 = f1.ancestors(), f2.ancestors() |
|
|||
483 | try: |
|
|||
484 | f1r, f2r = f1.linkrev(), f2.linkrev() |
|
|||
485 |
|
||||
486 | if f1r is None: |
|
|||
487 | f1 = next(g1) |
|
|||
488 | if f2r is None: |
|
|||
489 | f2 = next(g2) |
|
|||
490 |
|
||||
491 | while True: |
|
|||
492 | f1r, f2r = f1.linkrev(), f2.linkrev() |
|
|||
493 | if f1r > f2r: |
|
|||
494 | f1 = next(g1) |
|
|||
495 | elif f2r > f1r: |
|
|||
496 | f2 = next(g2) |
|
|||
497 | elif f1 == f2: |
|
|||
498 | return f1 # a match |
|
|||
499 | elif f1r == f2r or f1r < limit or f2r < limit: |
|
|||
500 | return False # copy no longer relevant |
|
|||
501 | except StopIteration: |
|
|||
502 | return False |
|
|||
503 |
|
||||
504 | of = None |
|
634 | of = None | |
505 | seen = set([f]) |
|
635 | seen = set([f]) | |
506 | for oc in getfctx(f, m1[f]).ancestors(): |
|
636 | for oc in getfctx(f, m1[f]).ancestors(): | |
@@ -513,20 +643,47 b' def checkcopies(ctx, f, m1, m2, ca, limi' | |||||
513 | continue |
|
643 | continue | |
514 | seen.add(of) |
|
644 | seen.add(of) | |
515 |
|
645 | |||
516 | fullcopy[f] = of # remember for dir rename detection |
|
646 | # remember for dir rename detection | |
|
647 | if backwards: | |||
|
648 | data['fullcopy'][of] = f # grafting backwards through renames | |||
|
649 | else: | |||
|
650 | data['fullcopy'][f] = of | |||
517 | if of not in m2: |
|
651 | if of not in m2: | |
518 | continue # no match, keep looking |
|
652 | continue # no match, keep looking | |
519 | if m2[of] == ma.get(of): |
|
653 | if m2[of] == mb.get(of): | |
520 | break # no merge needed, quit early |
|
654 | return # no merge needed, quit early | |
521 | c2 = getfctx(of, m2[of]) |
|
655 | c2 = getfctx(of, m2[of]) | |
522 | cr = _related(oc, c2, ca.rev()) |
|
656 | # c2 might be a plain new file added on the destination side that is | 
|
657 | # unrelated to the droids we are looking for. | |||
|
658 | cr = _related(oc, c2, tca.rev()) | |||
523 | if cr and (of == f or of == c2.path()): # non-divergent |
|
659 | if cr and (of == f or of == c2.path()): # non-divergent | |
524 | copy[f] = of |
|
660 | if backwards: | |
525 | of = None |
|
661 | data['copy'][of] = f | |
526 | break |
|
662 | elif of in mb: | |
|
663 | data['copy'][f] = of | |||
|
664 | elif remotebase: # special case: a <- b <- a -> b "ping-pong" rename | |||
|
665 | data['copy'][of] = f | |||
|
666 | del data['fullcopy'][f] | |||
|
667 | data['fullcopy'][of] = f | |||
|
668 | else: # divergence w.r.t. graft CA on one side of topological CA | |||
|
669 | for sf in seen: | |||
|
670 | if sf in mb: | |||
|
671 | assert sf not in data['diverge'] | |||
|
672 | data['diverge'][sf] = [f, of] | |||
|
673 | break | |||
|
674 | return | |||
527 |
|
675 | |||
528 | if of in ma: |
|
676 | if of in mta: | |
529 | diverge.setdefault(of, []).append(f) |
|
677 | if backwards or remotebase: | |
|
678 | data['incomplete'][of] = f | |||
|
679 | else: | |||
|
680 | for sf in seen: | |||
|
681 | if sf in mb: | |||
|
682 | if tca == base: | |||
|
683 | data['diverge'].setdefault(sf, []).append(f) | |||
|
684 | else: | |||
|
685 | data['incompletediverge'][sf] = [of, f] | |||
|
686 | return | |||
530 |
|
687 | |||
531 | def duplicatecopies(repo, rev, fromrev, skiprev=None): |
|
688 | def duplicatecopies(repo, rev, fromrev, skiprev=None): | |
532 | '''reproduce copies from fromrev to rev in the dirstate |
|
689 | '''reproduce copies from fromrev to rev in the dirstate |
@@ -28,7 +28,7 b' stringio = util.stringio' | |||||
28 |
|
28 | |||
29 | # This is required for ncurses to display non-ASCII characters in default user |
|
29 | # This is required for ncurses to display non-ASCII characters in default user | |
30 | # locale encoding correctly. --immerrr |
|
30 | # locale encoding correctly. --immerrr | |
31 | locale.setlocale(locale.LC_ALL, '') |
|
31 | locale.setlocale(locale.LC_ALL, u'') | |
32 |
|
32 | |||
33 | # patch comments based on the git one |
|
33 | # patch comments based on the git one | |
34 | diffhelptext = _("""# To remove '-' lines, make them ' ' lines (context). |
|
34 | diffhelptext = _("""# To remove '-' lines, make them ' ' lines (context). | |
@@ -719,7 +719,7 b' class curseschunkselector(object):' | |||||
719 | "scroll the screen to fully show the currently-selected" |
|
719 | "scroll the screen to fully show the currently-selected" | |
720 | selstart = self.selecteditemstartline |
|
720 | selstart = self.selecteditemstartline | |
721 | selend = self.selecteditemendline |
|
721 | selend = self.selecteditemendline | |
722 | #selnumlines = selend - selstart |
|
722 | ||
723 | padstart = self.firstlineofpadtoprint |
|
723 | padstart = self.firstlineofpadtoprint | |
724 | padend = padstart + self.yscreensize - self.numstatuslines - 1 |
|
724 | padend = padstart + self.yscreensize - self.numstatuslines - 1 | |
725 | # 'buffered' pad start/end values which scroll with a certain |
|
725 | # 'buffered' pad start/end values which scroll with a certain | |
@@ -1263,7 +1263,6 b' class curseschunkselector(object):' | |||||
1263 | self.statuswin.resize(self.numstatuslines, self.xscreensize) |
|
1263 | self.statuswin.resize(self.numstatuslines, self.xscreensize) | |
1264 | self.numpadlines = self.getnumlinesdisplayed(ignorefolding=True) + 1 |
|
1264 | self.numpadlines = self.getnumlinesdisplayed(ignorefolding=True) + 1 | |
1265 | self.chunkpad = curses.newpad(self.numpadlines, self.xscreensize) |
|
1265 | self.chunkpad = curses.newpad(self.numpadlines, self.xscreensize) | |
1266 | # todo: try to resize commit message window if possible |
|
|||
1267 | except curses.error: |
|
1266 | except curses.error: | |
1268 | pass |
|
1267 | pass | |
1269 |
|
1268 | |||
@@ -1339,6 +1338,7 b' the following are valid keystrokes:' | |||||
1339 | shift-left-arrow [H] : go to parent header / fold selected header |
|
1338 | shift-left-arrow [H] : go to parent header / fold selected header | |
1340 | f : fold / unfold item, hiding/revealing its children |
|
1339 | f : fold / unfold item, hiding/revealing its children | |
1341 | F : fold / unfold parent item and all of its ancestors |
|
1340 | F : fold / unfold parent item and all of its ancestors | |
|
1341 | ctrl-l : scroll the selected line to the top of the screen | |||
1342 | m : edit / resume editing the commit message |
|
1342 | m : edit / resume editing the commit message | |
1343 | e : edit the currently selected hunk |
|
1343 | e : edit the currently selected hunk | |
1344 | a : toggle amend mode, only with commit -i |
|
1344 | a : toggle amend mode, only with commit -i | |
@@ -1583,13 +1583,17 b' are you sure you want to review/edit and' | |||||
1583 | self.helpwindow() |
|
1583 | self.helpwindow() | |
1584 | self.stdscr.clear() |
|
1584 | self.stdscr.clear() | |
1585 | self.stdscr.refresh() |
|
1585 | self.stdscr.refresh() | |
|
1586 | elif curses.unctrl(keypressed) in ["^L"]: | |||
|
1587 | # scroll the current line to the top of the screen | |||
|
1588 | self.scrolllines(self.selecteditemstartline) | |||
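The new branch keys off ``curses.unctrl()``, which maps a control byte to caret notation, so Ctrl-L compares as ``"^L"``::

    import curses
    print(curses.unctrl(12))   # 12 is Ctrl-L -> "^L" (bytes on Python 3)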
1586 |
|
1589 | |||
1587 | def main(self, stdscr): |
|
1590 | def main(self, stdscr): | |
1588 | """ |
|
1591 | """ | |
1589 | method to be wrapped by curses.wrapper() for selecting chunks. |
|
1592 | method to be wrapped by curses.wrapper() for selecting chunks. | |
1590 | """ |
|
1593 | """ | |
1591 |
|
1594 | |||
1592 | signal.signal(signal.SIGWINCH, self.sigwinchhandler) |
|
1595 | origsigwinchhandler = signal.signal(signal.SIGWINCH, | |
|
1596 | self.sigwinchhandler) | |||
1593 | self.stdscr = stdscr |
|
1597 | self.stdscr = stdscr | |
1594 | # error during initialization, cannot be printed in the curses |
|
1598 | # error during initialization, cannot be printed in the curses | |
1595 | # interface, it should be printed by the calling code |
|
1599 | # interface, it should be printed by the calling code | |
@@ -1640,3 +1644,4 b' are you sure you want to review/edit and' | |||||
1640 | keypressed = "foobar" |
|
1644 | keypressed = "foobar" | |
1641 | if self.handlekeypressed(keypressed): |
|
1645 | if self.handlekeypressed(keypressed): | |
1642 | break |
|
1646 | break | |
|
1647 | signal.signal(signal.SIGWINCH, origsigwinchhandler) |
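The save/restore pattern adopted here, in isolation: ``signal.signal()`` returns the handler it replaces, so the original can be reinstated when the UI exits (a minimal sketch; SIGWINCH is Unix-only, and the patch restores the handler at the end of ``main()`` rather than in a ``finally``)::

    import signal

    def onwinch(signum, frame):
        pass  # placeholder resize handler

    orig = signal.signal(signal.SIGWINCH, onwinch)  # returns the old handler
    try:
        pass  # run the interactive UI here
    finally:
        signal.signal(signal.SIGWINCH, orig)        # put it back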
@@ -64,8 +64,12 b' def _hgextimport(importfunc, name, globa' | |||||
64 | return importfunc(hgextname, globals, *args, **kwargs) |
|
64 | return importfunc(hgextname, globals, *args, **kwargs) | |
65 |
|
65 | |||
66 | class _demandmod(object): |
|
66 | class _demandmod(object): | |
67 | """module demand-loader and proxy""" |
|
67 | """module demand-loader and proxy | |
68 | def __init__(self, name, globals, locals, level=level): |
|
68 | ||
|
69 | Specify 1 as 'level' argument at construction, to import module | |||
|
70 | relatively. | |||
|
71 | """ | |||
|
72 | def __init__(self, name, globals, locals, level): | |||
69 | if '.' in name: |
|
73 | if '.' in name: | |
70 | head, rest = name.split('.', 1) |
|
74 | head, rest = name.split('.', 1) | |
71 | after = [rest] |
|
75 | after = [rest] | |
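Background on pinning ``level``: in ``__import__``, ``level=0`` requests an absolute import and positive values climb that many packages up for a relative one (``level=-1``, the old Python 2 default, tried relative then absolute). A quick absolute-import illustration::

    # level=0: absolute import, equivalent to 'import os.path'
    mod = __import__('os.path', globals(), locals(), [], 0)
    print(mod.__name__)   # 'os' -- __import__ returns the top-level package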
@@ -117,7 +121,8 b' class _demandmod(object):' | |||||
117 | if '.' in p: |
|
121 | if '.' in p: | |
118 | h, t = p.split('.', 1) |
|
122 | h, t = p.split('.', 1) | |
119 | if getattr(mod, h, nothing) is nothing: |
|
123 | if getattr(mod, h, nothing) is nothing: | |
120 | setattr(mod, h, _demandmod(p, mod.__dict__, mod.__dict__)) |
|
124 | setattr(mod, h, _demandmod(p, mod.__dict__, mod.__dict__, | |
|
125 | level=1)) | |||
121 | elif t: |
|
126 | elif t: | |
122 | subload(getattr(mod, h), t) |
|
127 | subload(getattr(mod, h), t) | |
123 |
|
128 | |||
@@ -186,11 +191,16 b' def _demandimport(name, globals=None, lo' | |||||
186 | def processfromitem(mod, attr): |
|
191 | def processfromitem(mod, attr): | |
187 | """Process an imported symbol in the import statement. |
|
192 | """Process an imported symbol in the import statement. | |
188 |
|
193 | |||
189 | If the symbol doesn't exist in the parent module, it must be a |
|
194 | If the symbol doesn't exist in the parent module, and if the | |
190 | module. We set missing modules up as _demandmod instances. |
|
195 | parent module is a package, it must be a module. We set missing | |
|
196 | modules up as _demandmod instances. | |||
191 | """ |
|
197 | """ | |
192 | symbol = getattr(mod, attr, nothing) |
|
198 | symbol = getattr(mod, attr, nothing) | |
|
199 | nonpkg = getattr(mod, '__path__', nothing) is nothing | |||
193 | if symbol is nothing: |
|
200 | if symbol is nothing: | |
|
201 | if nonpkg: | |||
|
202 | # do not try relative import, which would raise ValueError | |||
|
203 | raise ImportError('cannot import name %s' % attr) | |||
194 | mn = '%s.%s' % (mod.__name__, attr) |
|
204 | mn = '%s.%s' % (mod.__name__, attr) | |
195 | if mn in ignore: |
|
205 | if mn in ignore: | |
196 | importfunc = _origimport |
|
206 | importfunc = _origimport | |
@@ -210,8 +220,8 b' def _demandimport(name, globals=None, lo' | |||||
210 | mod = rootmod |
|
220 | mod = rootmod | |
211 | for comp in modname.split('.')[1:]: |
|
221 | for comp in modname.split('.')[1:]: | |
212 | if getattr(mod, comp, nothing) is nothing: |
|
222 | if getattr(mod, comp, nothing) is nothing: | |
213 | setattr(mod, comp, |
|
223 | setattr(mod, comp, _demandmod(comp, mod.__dict__, | |
214 | _demandmod(comp, mod.__dict__, mod.__dict__)) |
|
224 | mod.__dict__, level=1)) | |
215 | mod = getattr(mod, comp) |
|
225 | mod = getattr(mod, comp) | |
216 | return mod |
|
226 | return mod | |
217 |
|
227 | |||
@@ -259,6 +269,7 b' ignore = [' | |||||
259 | '_imp', |
|
269 | '_imp', | |
260 | '_xmlplus', |
|
270 | '_xmlplus', | |
261 | 'fcntl', |
|
271 | 'fcntl', | |
|
272 | 'nt', # pathlib2 tests the existence of built-in 'nt' module | |||
262 | 'win32com.gen_py', |
|
273 | 'win32com.gen_py', | |
263 | '_winreg', # 2.7 mimetypes needs immediate ImportError |
|
274 | '_winreg', # 2.7 mimetypes needs immediate ImportError | |
264 | 'pythoncom', |
|
275 | 'pythoncom', | |
@@ -279,9 +290,17 b' ignore = [' | |||||
279 | 'mimetools', |
|
290 | 'mimetools', | |
280 | 'sqlalchemy.events', # has import-time side effects (issue5085) |
|
291 | 'sqlalchemy.events', # has import-time side effects (issue5085) | |
281 | # setuptools 8 expects this module to explode early when not on windows |
|
292 | # setuptools 8 expects this module to explode early when not on windows | |
282 | 'distutils.msvc9compiler' |
|
293 | 'distutils.msvc9compiler', | |
|
294 | '__builtin__', | |||
|
295 | 'builtins', | |||
283 | ] |
|
296 | ] | |
284 |
|
297 | |||
|
298 | if _pypy: | |||
|
299 | ignore.extend([ | |||
|
300 | # _ctypes.pointer is shadowed by "from ... import pointer" (PyPy 5) | |||
|
301 | '_ctypes.pointer', | |||
|
302 | ]) | |||
|
303 | ||||
285 | def isenabled(): |
|
304 | def isenabled(): | |
286 | return builtins.__import__ == _demandimport |
|
305 | return builtins.__import__ == _demandimport | |
287 |
|
306 |
@@ -416,8 +416,8 b' def _statusotherbranchheads(ui, repo):' | |||||
416 | 'updating to a closed head\n') % |
|
416 | 'updating to a closed head\n') % | |
417 | (currentbranch)) |
|
417 | (currentbranch)) | |
418 | if otherheads: |
|
418 | if otherheads: | |
419 | ui.warn(_('(committing will reopen the head, ' |
|
419 | ui.warn(_("(committing will reopen the head, " | |
420 | 'use "hg heads ." to see %i other heads)\n') % |
|
420 | "use 'hg heads .' to see %i other heads)\n") % | |
421 | (len(otherheads))) |
|
421 | (len(otherheads))) | |
422 | else: |
|
422 | else: | |
423 | ui.warn(_('(committing will reopen branch "%s")\n') % |
|
423 | ui.warn(_('(committing will reopen branch "%s")\n') % |
@@ -11,6 +11,12 b'' | |||||
11 | #include <Python.h> |
|
11 | #include <Python.h> | |
12 | #include "util.h" |
|
12 | #include "util.h" | |
13 |
|
13 | |||
|
14 | #ifdef IS_PY3K | |||
|
15 | #define PYLONG_VALUE(o) ((PyLongObject *)o)->ob_digit[1] | |||
|
16 | #else | |||
|
17 | #define PYLONG_VALUE(o) PyInt_AS_LONG(o) | |||
|
18 | #endif | |||
|
19 | ||||
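What the C structure maintains, restated as a rough Python model: a reference-counted multiset of every directory prefix, so membership tests stay O(1) as files come and go. This hedged sketch skips all the CPython-internals tricks the C version uses for speed::

    class dirs(object):
        def __init__(self):
            self._dirs = {}

        def addpath(self, path):
            for i in range(len(path) - 1, 0, -1):
                if path[i] != '/':
                    continue
                d = path[:i]
                if d in self._dirs:
                    self._dirs[d] += 1   # parents were already counted
                    return
                self._dirs[d] = 1

        def delpath(self, path):
            for i in range(len(path) - 1, 0, -1):
                if path[i] != '/':
                    continue
                d = path[:i]
                if self._dirs[d] > 1:
                    self._dirs[d] -= 1   # still referenced elsewhere
                    return
                del self._dirs[d]

        def __contains__(self, d):
            return d in self._dirs

    d = dirs()
    d.addpath('a/b/c/f')
    d.addpath('a/b/g')
    d.delpath('a/b/c/f')
    assert 'a/b' in d and 'a/b/c' not in d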
14 | /* |
|
20 | /* | |
15 | * This is a multiset of directory names, built from the files that |
|
21 | * This is a multiset of directory names, built from the files that | |
16 | * appear in a dirstate or manifest. |
|
22 | * appear in a dirstate or manifest. | |
@@ -41,11 +47,20 b' static inline Py_ssize_t _finddir(const ' | |||||
41 |
|
47 | |||
42 | static int _addpath(PyObject *dirs, PyObject *path) |
|
48 | static int _addpath(PyObject *dirs, PyObject *path) | |
43 | { |
|
49 | { | |
44 | const char *cpath = PyString_AS_STRING(path); |
|
50 | const char *cpath = PyBytes_AS_STRING(path); | |
45 | Py_ssize_t pos = PyString_GET_SIZE(path); |
|
51 | Py_ssize_t pos = PyBytes_GET_SIZE(path); | |
46 | PyObject *key = NULL; |
|
52 | PyObject *key = NULL; | |
47 | int ret = -1; |
|
53 | int ret = -1; | |
48 |
|
54 | |||
|
55 | /* This loop is super critical for performance. That's why we inline | |||
|
56 | * access to Python structs instead of going through a supported API. | |||
|
57 | * The implementation, therefore, is heavily dependent on CPython | |||
|
58 | * implementation details. We also commit violations of the Python | |||
|
59 | * "protocol" such as mutating immutable objects. But since we only | |||
|
60 | * mutate objects created in this function or in other well-defined | |||
|
61 | * locations, the references are known so these violations should go | |||
|
62 | * unnoticed. The code for adjusting the length of a PyBytesObject is | |||
|
63 | * essentially a minimal version of _PyBytes_Resize. */ | |||
49 | while ((pos = _finddir(cpath, pos - 1)) != -1) { |
|
64 | while ((pos = _finddir(cpath, pos - 1)) != -1) { | |
50 | PyObject *val; |
|
65 | PyObject *val; | |
51 |
|
66 | |||
@@ -53,30 +68,36 b' static int _addpath(PyObject *dirs, PyOb' | |||||
53 | in our dict. Try to avoid allocating and |
|
68 | in our dict. Try to avoid allocating and | |
54 | deallocating a string for each prefix we check. */ |
|
69 | deallocating a string for each prefix we check. */ | |
55 | if (key != NULL) |
|
70 | if (key != NULL) | |
56 | ((PyStringObject *)key)->ob_shash = -1; |
|
71 | ((PyBytesObject *)key)->ob_shash = -1; | |
57 | else { |
|
72 | else { | |
58 | /* Force Python to not reuse a small shared string. */ |
|
73 | /* Force Python to not reuse a small shared string. */ | |
59 | key = PyString_FromStringAndSize(cpath, |
|
74 | key = PyBytes_FromStringAndSize(cpath, | |
60 | pos < 2 ? 2 : pos); |
|
75 | pos < 2 ? 2 : pos); | |
61 | if (key == NULL) |
|
76 | if (key == NULL) | |
62 | goto bail; |
|
77 | goto bail; | |
63 | } |
|
78 | } | |
64 | PyString_GET_SIZE(key) = pos; |
|
79 | /* Py_SIZE(o) refers to the ob_size member of the struct. Yes, | |
65 | PyString_AS_STRING(key)[pos] = '\0'; |
|
80 | * assigning to what looks like a function seems wrong. */ | |
|
81 | Py_SIZE(key) = pos; | |||
|
82 | ((PyBytesObject *)key)->ob_sval[pos] = '\0'; | |||
66 |
|
83 | |||
67 | val = PyDict_GetItem(dirs, key); |
|
84 | val = PyDict_GetItem(dirs, key); | |
68 | if (val != NULL) { |
|
85 | if (val != NULL) { | |
69 | PyInt_AS_LONG(val) += 1; |
|
86 | PYLONG_VALUE(val) += 1; | |
70 | break; |
|
87 | break; | |
71 | } |
|
88 | } | |
72 |
|
89 | |||
73 | /* Force Python to not reuse a small shared int. */ |
|
90 | /* Force Python to not reuse a small shared int. */ | |
|
91 | #ifdef IS_PY3K | |||
|
92 | val = PyLong_FromLong(0x1eadbeef); | |||
|
93 | #else | |||
74 | val = PyInt_FromLong(0x1eadbeef); |
|
94 | val = PyInt_FromLong(0x1eadbeef); | |
|
95 | #endif | |||
75 |
|
96 | |||
76 | if (val == NULL) |
|
97 | if (val == NULL) | |
77 | goto bail; |
|
98 | goto bail; | |
78 |
|
99 | |||
79 | PyInt_AS_LONG(val) = 1; |
|
100 | PYLONG_VALUE(val) = 1; | |
80 | ret = PyDict_SetItem(dirs, key, val); |
|
101 | ret = PyDict_SetItem(dirs, key, val); | |
81 | Py_DECREF(val); |
|
102 | Py_DECREF(val); | |
82 | if (ret == -1) |
|
103 | if (ret == -1) | |
@@ -93,15 +114,15 b' bail:' | |||||
93 |
|
114 | |||
94 | static int _delpath(PyObject *dirs, PyObject *path) |
|
115 | static int _delpath(PyObject *dirs, PyObject *path) | |
95 | { |
|
116 | { | |
96 | char *cpath = PyString_AS_STRING(path); |
|
117 | char *cpath = PyBytes_AS_STRING(path); | |
97 | Py_ssize_t pos = PyString_GET_SIZE(path); |
|
118 | Py_ssize_t pos = PyBytes_GET_SIZE(path); | |
98 | PyObject *key = NULL; |
|
119 | PyObject *key = NULL; | |
99 | int ret = -1; |
|
120 | int ret = -1; | |
100 |
|
121 | |||
101 | while ((pos = _finddir(cpath, pos - 1)) != -1) { |
|
122 | while ((pos = _finddir(cpath, pos - 1)) != -1) { | |
102 | PyObject *val; |
|
123 | PyObject *val; | |
103 |
|
124 | |||
104 | key = PyString_FromStringAndSize(cpath, pos); |
|
125 | key = PyBytes_FromStringAndSize(cpath, pos); | |
105 |
|
126 | |||
106 | if (key == NULL) |
|
127 | if (key == NULL) | |
107 | goto bail; |
|
128 | goto bail; | |
@@ -113,7 +134,7 b' static int _delpath(PyObject *dirs, PyOb' | |||||
113 | goto bail; |
|
134 | goto bail; | |
114 | } |
|
135 | } | |
115 |
|
136 | |||
116 | if (--PyInt_AS_LONG(val) <= 0) { |
|
137 | if (--PYLONG_VALUE(val) <= 0) { | |
117 | if (PyDict_DelItem(dirs, key) == -1) |
|
138 | if (PyDict_DelItem(dirs, key) == -1) | |
118 | goto bail; |
|
139 | goto bail; | |
119 | } else |
|
140 | } else | |
@@ -134,7 +155,7 b' static int dirs_fromdict(PyObject *dirs,' | |||||
134 | Py_ssize_t pos = 0; |
|
155 | Py_ssize_t pos = 0; | |
135 |
|
156 | |||
136 | while (PyDict_Next(source, &pos, &key, &value)) { |
|
157 | while (PyDict_Next(source, &pos, &key, &value)) { | |
137 | if (!PyString_Check(key)) { |
|
158 | if (!PyBytes_Check(key)) { | |
138 | PyErr_SetString(PyExc_TypeError, "expected string key"); |
|
159 | PyErr_SetString(PyExc_TypeError, "expected string key"); | |
139 | return -1; |
|
160 | return -1; | |
140 | } |
|
161 | } | |
@@ -165,7 +186,7 b' static int dirs_fromiter(PyObject *dirs,' | |||||
165 | return -1; |
|
186 | return -1; | |
166 |
|
187 | |||
167 | while ((item = PyIter_Next(iter)) != NULL) { |
|
188 | while ((item = PyIter_Next(iter)) != NULL) { | |
168 | if (!PyString_Check(item)) { |
|
189 | if (!PyBytes_Check(item)) { | |
169 | PyErr_SetString(PyExc_TypeError, "expected string"); |
|
190 | PyErr_SetString(PyExc_TypeError, "expected string"); | |
170 | break; |
|
191 | break; | |
171 | } |
|
192 | } | |
@@ -224,7 +245,7 b' PyObject *dirs_addpath(dirsObject *self,' | |||||
224 | { |
|
245 | { | |
225 | PyObject *path; |
|
246 | PyObject *path; | |
226 |
|
247 | |||
227 | if (!PyArg_ParseTuple(args, "O!:addpath", &PyString_Type, &path)) |
|
248 | if (!PyArg_ParseTuple(args, "O!:addpath", &PyBytes_Type, &path)) | |
228 | return NULL; |
|
249 | return NULL; | |
229 |
|
250 | |||
230 | if (_addpath(self->dict, path) == -1) |
|
251 | if (_addpath(self->dict, path) == -1) | |
@@ -237,7 +258,7 b' static PyObject *dirs_delpath(dirsObject' | |||||
237 | { |
|
258 | { | |
238 | PyObject *path; |
|
259 | PyObject *path; | |
239 |
|
260 | |||
240 | if (!PyArg_ParseTuple(args, "O!:delpath", &PyString_Type, &path)) |
|
261 | if (!PyArg_ParseTuple(args, "O!:delpath", &PyBytes_Type, &path)) | |
241 | return NULL; |
|
262 | return NULL; | |
242 |
|
263 | |||
243 | if (_delpath(self->dict, path) == -1) |
|
264 | if (_delpath(self->dict, path) == -1) | |
@@ -248,7 +269,7 b' static PyObject *dirs_delpath(dirsObject' | |||||
248 |
|
269 | |||
249 | static int dirs_contains(dirsObject *self, PyObject *value) |
|
270 | static int dirs_contains(dirsObject *self, PyObject *value) | |
250 | { |
|
271 | { | |
251 | return PyString_Check(value) ? PyDict_Contains(self->dict, value) : 0; |
|
272 | return PyBytes_Check(value) ? PyDict_Contains(self->dict, value) : 0; | |
252 | } |
|
273 | } | |
253 |
|
274 | |||
254 | static void dirs_dealloc(dirsObject *self) |
|
275 | static void dirs_dealloc(dirsObject *self) | |
@@ -270,7 +291,7 b' static PyMethodDef dirs_methods[] = {' | |||||
270 | {NULL} /* Sentinel */ |
|
291 | {NULL} /* Sentinel */ | |
271 | }; |
|
292 | }; | |
272 |
|
293 | |||
273 | static PyTypeObject dirsType = { PyObject_HEAD_INIT(NULL) }; |
|
294 | static PyTypeObject dirsType = { PyVarObject_HEAD_INIT(NULL, 0) }; | |
274 |
|
295 | |||
275 | void dirs_module_init(PyObject *mod) |
|
296 | void dirs_module_init(PyObject *mod) | |
276 | { |
|
297 | { |
@@ -74,8 +74,6 b' def _trypending(root, vfs, filename):' | |||||
74 | raise |
|
74 | raise | |
75 | return (vfs(filename), False) |
|
75 | return (vfs(filename), False) | |
76 |
|
76 | |||
77 | _token = object() |
|
|||
78 |
|
||||
79 | class dirstate(object): |
|
77 | class dirstate(object): | |
80 |
|
78 | |||
81 | def __init__(self, opener, ui, root, validate): |
|
79 | def __init__(self, opener, ui, root, validate): | |
@@ -103,6 +101,8 b' class dirstate(object):' | |||||
103 | self._parentwriters = 0 |
|
101 | self._parentwriters = 0 | |
104 | self._filename = 'dirstate' |
|
102 | self._filename = 'dirstate' | |
105 | self._pendingfilename = '%s.pending' % self._filename |
|
103 | self._pendingfilename = '%s.pending' % self._filename | |
|
104 | self._plchangecallbacks = {} | |||
|
105 | self._origpl = None | |||
106 |
|
106 | |||
107 | # for consistent view between _pl() and _read() invocations |
|
107 | # for consistent view between _pl() and _read() invocations | |
108 | self._pendingmode = None |
|
108 | self._pendingmode = None | |
@@ -227,7 +227,7 b' class dirstate(object):' | |||||
227 |
|
227 | |||
228 | @propertycache |
|
228 | @propertycache | |
229 | def _checkcase(self): |
|
229 | def _checkcase(self): | |
230 | return not util.checkcase(self._join('.hg')) |
|
230 | return not util.fscasesensitive(self._join('.hg')) | |
231 |
|
231 | |||
232 | def _join(self, f): |
|
232 | def _join(self, f): | |
233 | # much faster than os.path.join() |
|
233 | # much faster than os.path.join() | |
@@ -349,6 +349,8 b' class dirstate(object):' | |||||
349 |
|
349 | |||
350 | self._dirty = self._dirtypl = True |
|
350 | self._dirty = self._dirtypl = True | |
351 | oldp2 = self._pl[1] |
|
351 | oldp2 = self._pl[1] | |
|
352 | if self._origpl is None: | |||
|
353 | self._origpl = self._pl | |||
352 | self._pl = p1, p2 |
|
354 | self._pl = p1, p2 | |
353 | copies = {} |
|
355 | copies = {} | |
354 | if oldp2 != nullid and p2 == nullid: |
|
356 | if oldp2 != nullid and p2 == nullid: | |
@@ -444,6 +446,7 b' class dirstate(object):' | |||||
444 | self._lastnormaltime = 0 |
|
446 | self._lastnormaltime = 0 | |
445 | self._dirty = False |
|
447 | self._dirty = False | |
446 | self._parentwriters = 0 |
|
448 | self._parentwriters = 0 | |
|
449 | self._origpl = None | |||
447 |
|
450 | |||
448 | def copy(self, source, dest): |
|
451 | def copy(self, source, dest): | |
449 | """Mark dest as a copy of source. Unmark dest if source is None.""" |
|
452 | """Mark dest as a copy of source. Unmark dest if source is None.""" | |
@@ -677,37 +680,23 b' class dirstate(object):' | |||||
677 | self.clear() |
|
680 | self.clear() | |
678 | self._lastnormaltime = lastnormaltime |
|
681 | self._lastnormaltime = lastnormaltime | |
679 |
|
682 | |||
|
683 | if self._origpl is None: | |||
|
684 | self._origpl = self._pl | |||
|
685 | self._pl = (parent, nullid) | |||
680 | for f in changedfiles: |
|
686 | for f in changedfiles: | |
681 | mode = 0o666 |
|
|||
682 | if f in allfiles and 'x' in allfiles.flags(f): |
|
|||
683 | mode = 0o777 |
|
|||
684 |
|
||||
685 | if f in allfiles: |
|
687 | if f in allfiles: | |
686 | self._map[f] = dirstatetuple('n', mode, -1, 0) |
|
688 | self.normallookup(f) | |
687 | else: |
|
689 | else: | |
688 | self._map.pop(f, None) |
|
690 | self.drop(f) | |
689 | if f in self._nonnormalset: |
|
|||
690 | self._nonnormalset.remove(f) |
|
|||
691 |
|
691 | |||
692 | self._pl = (parent, nullid) |
|
|||
693 | self._dirty = True |
|
692 | self._dirty = True | |
694 |
|
693 | |||
695 | def write(self, tr=_token): |
|
694 | def write(self, tr): | |
696 | if not self._dirty: |
|
695 | if not self._dirty: | |
697 | return |
|
696 | return | |
698 |
|
697 | |||
699 | filename = self._filename |
|
698 | filename = self._filename | |
700 | if tr is _token: # not explicitly specified |
|
699 | if tr: | |
701 | self._ui.deprecwarn('use dirstate.write with ' |
|
|||
702 | 'repo.currenttransaction()', |
|
|||
703 | '3.9') |
|
|||
704 |
|
||||
705 | if self._opener.lexists(self._pendingfilename): |
|
|||
706 | # if pending file already exists, in-memory changes |
|
|||
707 | # should be written into it, because it has priority |
|
|||
708 | # to '.hg/dirstate' at reading under HG_PENDING mode |
|
|||
709 | filename = self._pendingfilename |
|
|||
710 | elif tr: |
|
|||
711 | # 'dirstate.write()' is not only for writing in-memory |
|
700 | # 'dirstate.write()' is not only for writing in-memory | |
712 | # changes out, but also for dropping ambiguous timestamp. |
|
701 | # changes out, but also for dropping ambiguous timestamp. | |
713 | # delayed writing re-raise "ambiguous timestamp issue". |
|
702 | # delayed writing re-raise "ambiguous timestamp issue". | |
@@ -733,7 +722,23 b' class dirstate(object):' | |||||
733 | st = self._opener(filename, "w", atomictemp=True, checkambig=True) |
|
722 | st = self._opener(filename, "w", atomictemp=True, checkambig=True) | |
734 | self._writedirstate(st) |
|
723 | self._writedirstate(st) | |
735 |
|
724 | |||
|
725 | def addparentchangecallback(self, category, callback): | |||
|
726 | """add a callback to be called when the wd parents are changed | |||
|
727 | ||||
|
728 | Callback will be called with the following arguments: | |||
|
729 | dirstate, (oldp1, oldp2), (newp1, newp2) | |||
|
730 | ||||
|
731 | Category is a unique identifier to allow overwriting an old callback | |||
|
732 | with a newer callback. | |||
|
733 | """ | |||
|
734 | self._plchangecallbacks[category] = callback | |||
|
735 | ||||
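A hypothetical registration against the new hook; the extension name and callback body below are invented for illustration::

    def onparentschange(dirstate, oldparents, newparents):
        print('parents moved: %r -> %r' % (oldparents, newparents))

    # repo.dirstate.addparentchangecallback('myext', onparentschange)
    # The callback fires from _writedirstate(), and only when the parents
    # really changed since the last write.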
736 | def _writedirstate(self, st): |
|
736 | def _writedirstate(self, st): | |
|
737 | # notify callbacks about parents change | |||
|
738 | if self._origpl is not None and self._origpl != self._pl: | |||
|
739 | for c, callback in sorted(self._plchangecallbacks.iteritems()): | |||
|
740 | callback(self, self._origpl, self._pl) | |||
|
741 | self._origpl = None | |||
737 | # use the modification time of the newly created temporary file as the |
|
742 | # use the modification time of the newly created temporary file as the | |
738 | # filesystem's notion of 'now' |
|
743 | # filesystem's notion of 'now' | |
739 | now = util.fstat(st).st_mtime & _rangemask |
|
744 | now = util.fstat(st).st_mtime & _rangemask |
@@ -76,10 +76,29 b' class outgoing(object):' | |||||
76 | The sets are computed on demand from the heads, unless provided upfront |
|
76 | The sets are computed on demand from the heads, unless provided upfront | |
77 | by discovery.''' |
|
77 | by discovery.''' | |
78 |
|
78 | |||
79 | def __init__(self, revlog, commonheads, missingheads): |
|
79 | def __init__(self, repo, commonheads=None, missingheads=None, | |
|
80 | missingroots=None): | |||
|
81 | # at least one of them must not be set | |||
|
82 | assert None in (commonheads, missingroots) | |||
|
83 | cl = repo.changelog | |||
|
84 | if missingheads is None: | |||
|
85 | missingheads = cl.heads() | |||
|
86 | if missingroots: | |||
|
87 | discbases = [] | |||
|
88 | for n in missingroots: | |||
|
89 | discbases.extend([p for p in cl.parents(n) if p != nullid]) | |||
|
90 | # TODO remove call to nodesbetween. | |||
|
91 | # TODO populate attributes on outgoing instance instead of setting | |||
|
92 | # discbases. | |||
|
93 | csets, roots, heads = cl.nodesbetween(missingroots, missingheads) | |||
|
94 | included = set(csets) | |||
|
95 | missingheads = heads | |||
|
96 | commonheads = [n for n in discbases if n not in included] | |||
|
97 | elif not commonheads: | |||
|
98 | commonheads = [nullid] | |||
80 | self.commonheads = commonheads |
|
99 | self.commonheads = commonheads | |
81 | self.missingheads = missingheads |
|
100 | self.missingheads = missingheads | |
82 | self._revlog = revlog |
|
101 | self._revlog = cl | |
83 | self._common = None |
|
102 | self._common = None | |
84 | self._missing = None |
|
103 | self._missing = None | |
85 | self.excluded = [] |
|
104 | self.excluded = [] | |
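The ``assert None in (commonheads, missingroots)`` guard reads as "at least one of the two must be unset". The same membership idiom in isolation::

    def guard(commonheads, missingroots):
        # at least one of them must not be set
        assert None in (commonheads, missingroots)

    guard(['h1'], None)      # heads given, roots unset: fine
    guard(None, ['r1'])      # roots given, heads unset: fine
    # guard(['h1'], ['r1'])  # would raise AssertionError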
@@ -116,7 +135,7 b' def findcommonoutgoing(repo, other, only' | |||||
116 | If portable is given, compute more conservative common and missingheads, |
|
135 | If portable is given, compute more conservative common and missingheads, | |
117 | to make bundles created from the instance more portable.''' |
|
136 | to make bundles created from the instance more portable.''' | |
118 | # declare an empty outgoing object to be filled later |
|
137 | # declare an empty outgoing object to be filled later | |
119 | og = outgoing(repo.changelog, None, None) |
|
138 | og = outgoing(repo, None, None) | |
120 |
|
139 | |||
121 | # get common set if not provided |
|
140 | # get common set if not provided | |
122 | if commoninc is None: |
|
141 | if commoninc is None: | |
@@ -382,7 +401,7 b' def checkheads(pushop):' | |||||
382 | errormsg = (_("push creates new branch '%s' " |
|
401 | errormsg = (_("push creates new branch '%s' " | |
383 | "with multiple heads") % (branch)) |
|
402 | "with multiple heads") % (branch)) | |
384 | hint = _("merge or" |
|
403 | hint = _("merge or" | |
385 | " see \"hg help push\" for details about" |
|
404 | " see 'hg help push' for details about" | |
386 | " pushing new heads") |
|
405 | " pushing new heads") | |
387 | elif len(newhs) > len(oldhs): |
|
406 | elif len(newhs) > len(oldhs): | |
388 | # remove bookmarked or existing remote heads from the new heads list |
|
407 | # remove bookmarked or existing remote heads from the new heads list | |
@@ -401,11 +420,11 b' def checkheads(pushop):' | |||||
401 | ) % short(dhs[0]) |
|
420 | ) % short(dhs[0]) | |
402 | if unsyncedheads: |
|
421 | if unsyncedheads: | |
403 | hint = _("pull and merge or" |
|
422 | hint = _("pull and merge or" | |
404 | " see \"hg help push\" for details about" |
|
423 | " see 'hg help push' for details about" | |
405 | " pushing new heads") |
|
424 | " pushing new heads") | |
406 | else: |
|
425 | else: | |
407 | hint = _("merge or" |
|
426 | hint = _("merge or" | |
408 | " see \"hg help push\" for details about" |
|
427 | " see 'hg help push' for details about" | |
409 | " pushing new heads") |
|
428 | " pushing new heads") | |
410 | if branch is None: |
|
429 | if branch is None: | |
411 | repo.ui.note(_("new remote heads:\n")) |
|
430 | repo.ui.note(_("new remote heads:\n")) |
@@ -34,6 +34,7 b' from . import (' | |||||
34 | fileset, |
|
34 | fileset, | |
35 | hg, |
|
35 | hg, | |
36 | hook, |
|
36 | hook, | |
|
37 | profiling, | |||
37 | revset, |
|
38 | revset, | |
38 | templatefilters, |
|
39 | templatefilters, | |
39 | templatekw, |
|
40 | templatekw, | |
@@ -150,7 +151,7 b' def _runcatch(req):' | |||||
150 | except ValueError: |
|
151 | except ValueError: | |
151 | pass # happens if called in a thread |
|
152 | pass # happens if called in a thread | |
152 |
|
153 | |||
153 | try: |
|
154 | def _runcatchfunc(): | |
154 | try: |
|
155 | try: | |
155 | debugger = 'pdb' |
|
156 | debugger = 'pdb' | |
156 | debugtrace = { |
|
157 | debugtrace = { | |
@@ -212,6 +213,16 b' def _runcatch(req):' | |||||
212 | ui.traceback() |
|
213 | ui.traceback() | |
213 | raise |
|
214 | raise | |
214 |
|
215 | |||
|
216 | return callcatch(ui, _runcatchfunc) | |||
|
217 | ||||
|
218 | def callcatch(ui, func): | |||
|
219 | """call func() with global exception handling | |||
|
220 | ||||
|
221 | return func() if no exception happens. otherwise do some error handling | |||
|
222 | and return an exit code accordingly. | |||
|
223 | """ | |||
|
224 | try: | |||
|
225 | return func() | |||
215 | # Global exception handling, alphabetically |
|
226 | # Global exception handling, alphabetically | |
216 | # Mercurial-specific first, followed by built-in and library exceptions |
|
227 | # Mercurial-specific first, followed by built-in and library exceptions | |
217 | except error.AmbiguousCommand as inst: |
|
228 | except error.AmbiguousCommand as inst: | |
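Extracting ``callcatch`` gives any caller one place to translate exceptions into exit codes. Its overall shape, with a trivial handler standing in for the full table that follows::

    def callcatch(ui, func):
        try:
            return func()
        except KeyboardInterrupt:
            print('interrupted!')
            return 255

    print(callcatch(None, lambda: 0))   # -> 0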
@@ -489,6 +500,8 b' class cmdalias(object):' | |||||
489 | ui.debug("alias '%s' shadows command '%s'\n" % |
|
500 | ui.debug("alias '%s' shadows command '%s'\n" % | |
490 | (self.name, self.cmdname)) |
|
501 | (self.name, self.cmdname)) | |
491 |
|
502 | |||
|
503 | ui.log('commandalias', "alias '%s' expands to '%s'\n", | |||
|
504 | self.name, self.definition) | |||
492 | if util.safehasattr(self, 'shell'): |
|
505 | if util.safehasattr(self, 'shell'): | |
493 | return self.fn(ui, *args, **opts) |
|
506 | return self.fn(ui, *args, **opts) | |
494 | else: |
|
507 | else: | |
@@ -545,7 +558,7 b' def _parse(ui, args):' | |||||
545 | c.append((o[0], o[1], options[o[1]], o[3])) |
|
558 | c.append((o[0], o[1], options[o[1]], o[3])) | |
546 |
|
559 | |||
547 | try: |
|
560 | try: | |
548 | args = fancyopts.fancyopts(args, c, cmdoptions, True) |
|
561 | args = fancyopts.fancyopts(args, c, cmdoptions, gnu=True) | |
549 | except fancyopts.getopt.GetoptError as inst: |
|
562 | except fancyopts.getopt.GetoptError as inst: | |
550 | raise error.CommandError(cmd, inst) |
|
563 | raise error.CommandError(cmd, inst) | |
551 |
|
564 | |||
@@ -761,7 +774,8 b' def _dispatch(req):' | |||||
761 | # Check abbreviation/ambiguity of shell alias. |
|
774 | # Check abbreviation/ambiguity of shell alias. | |
762 | shellaliasfn = _checkshellalias(lui, ui, args) |
|
775 | shellaliasfn = _checkshellalias(lui, ui, args) | |
763 | if shellaliasfn: |
|
776 | if shellaliasfn: | |
764 | return shellaliasfn() |
|
777 | with profiling.maybeprofile(lui): | |
|
778 | return shellaliasfn() | |||
765 |
|
779 | |||
766 | # check for fallback encoding |
|
780 | # check for fallback encoding | |
767 | fallback = lui.config('ui', 'fallbackencoding') |
|
781 | fallback = lui.config('ui', 'fallbackencoding') | |
@@ -808,6 +822,10 b' def _dispatch(req):' | |||||
808 | for ui_ in uis: |
|
822 | for ui_ in uis: | |
809 | ui_.setconfig('ui', opt, val, '--' + opt) |
|
823 | ui_.setconfig('ui', opt, val, '--' + opt) | |
810 |
|
824 | |||
|
825 | if options['profile']: | |||
|
826 | for ui_ in uis: | |||
|
827 | ui_.setconfig('profiling', 'enabled', 'true', '--profile') | |||
|
828 | ||||
811 | if options['traceback']: |
|
829 | if options['traceback']: | |
812 | for ui_ in uis: |
|
830 | for ui_ in uis: | |
813 | ui_.setconfig('ui', 'traceback', 'on', '--traceback') |
|
831 | ui_.setconfig('ui', 'traceback', 'on', '--traceback') | |
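The profiler wiring now funnels through a single context manager. A hedged sketch of the ``maybeprofile`` idea built on the stdlib profiler (the real ``profiling`` module added by this change has its own backends and output handling)::

    import contextlib, cProfile, pstats, sys

    @contextlib.contextmanager
    def maybeprofile(enabled):
        if not enabled:
            yield                     # profiling disabled: plain passthrough
            return
        profiler = cProfile.Profile()
        profiler.enable()
        try:
            yield
        finally:
            profiler.disable()
            stats = pstats.Stats(profiler, stream=sys.stderr)
            stats.sort_stats('cumulative').print_stats(5)

    with maybeprofile(False):
        sum(range(10))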
@@ -827,187 +845,70 b' def _dispatch(req):' | |||||
827 | elif not cmd: |
|
845 | elif not cmd: | |
828 | return commands.help_(ui, 'shortlist') |
|
846 | return commands.help_(ui, 'shortlist') | |
829 |
|
847 | |||
830 | repo = None |
|
848 | with profiling.maybeprofile(lui): | |
831 | cmdpats = args[:] |
|
849 | repo = None | |
832 | if not _cmdattr(ui, cmd, func, 'norepo'): |
|
850 | cmdpats = args[:] | |
833 | # use the repo from the request only if we don't have -R |
|
851 | if not _cmdattr(ui, cmd, func, 'norepo'): | |
834 | if not rpath and not cwd: |
|
852 | # use the repo from the request only if we don't have -R | |
835 | repo = req.repo |
|
853 | if not rpath and not cwd: | |
836 |
|
854 | repo = req.repo | ||
837 | if repo: |
|
|||
838 | # set the descriptors of the repo ui to those of ui |
|
|||
839 | repo.ui.fin = ui.fin |
|
|||
840 | repo.ui.fout = ui.fout |
|
|||
841 | repo.ui.ferr = ui.ferr |
|
|||
842 | else: |
|
|||
843 | try: |
|
|||
844 | repo = hg.repository(ui, path=path) |
|
|||
845 | if not repo.local(): |
|
|||
846 | raise error.Abort(_("repository '%s' is not local") % path) |
|
|||
847 | repo.ui.setconfig("bundle", "mainreporoot", repo.root, 'repo') |
|
|||
848 | except error.RequirementError: |
|
|||
849 | raise |
|
|||
850 | except error.RepoError: |
|
|||
851 | if rpath and rpath[-1]: # invalid -R path |
|
|||
852 | raise |
|
|||
853 | if not _cmdattr(ui, cmd, func, 'optionalrepo'): |
|
|||
854 | if (_cmdattr(ui, cmd, func, 'inferrepo') and |
|
|||
855 | args and not path): |
|
|||
856 | # try to infer -R from command args |
|
|||
857 | repos = map(cmdutil.findrepo, args) |
|
|||
858 | guess = repos[0] |
|
|||
859 | if guess and repos.count(guess) == len(repos): |
|
|||
860 | req.args = ['--repository', guess] + fullargs |
|
|||
861 | return _dispatch(req) |
|
|||
862 | if not path: |
|
|||
863 | raise error.RepoError(_("no repository found in '%s'" |
|
|||
864 | " (.hg not found)") |
|
|||
865 | % os.getcwd()) |
|
|||
866 | raise |
|
|||
867 | if repo: |
|
|||
868 | ui = repo.ui |
|
|||
869 | if options['hidden']: |
|
|||
870 | repo = repo.unfiltered() |
|
|||
871 | args.insert(0, repo) |
|
|||
872 | elif rpath: |
|
|||
873 | ui.warn(_("warning: --repository ignored\n")) |
|
|||
874 |
|
||||
875 | msg = ' '.join(' ' in a and repr(a) or a for a in fullargs) |
|
|||
876 | ui.log("command", '%s\n', msg) |
|
|||
877 | d = lambda: util.checksignature(func)(ui, *args, **cmdoptions) |
|
|||
878 | try: |
|
|||
879 | return runcommand(lui, repo, cmd, fullargs, ui, options, d, |
|
|||
880 | cmdpats, cmdoptions) |
|
|||
881 | finally: |
|
|||
882 | if repo and repo != req.repo: |
|
|||
883 | repo.close() |
|
|||
884 |
|
||||
885 | def lsprofile(ui, func, fp): |
|
|||
886 | format = ui.config('profiling', 'format', default='text') |
|
|||
887 | field = ui.config('profiling', 'sort', default='inlinetime') |
|
|||
888 | limit = ui.configint('profiling', 'limit', default=30) |
|
|||
889 | climit = ui.configint('profiling', 'nested', default=0) |
|
|||
890 |
|
||||
891 | if format not in ['text', 'kcachegrind']: |
|
|||
892 | ui.warn(_("unrecognized profiling format '%s'" |
|
|||
893 | " - Ignored\n") % format) |
|
|||
894 | format = 'text' |
|
|||
895 |
|
855 | |||
896 | try: |
|
856 | if repo: | |
897 | from . import lsprof |
|
857 | # set the descriptors of the repo ui to those of ui | |
898 | except ImportError: |
|
858 | repo.ui.fin = ui.fin | |
899 | raise error.Abort(_( |
|
859 | repo.ui.fout = ui.fout | |
900 | 'lsprof not available - install from ' |
|
-            'http://codespeak.net/svn/user/arigo/hack/misc/lsprof/'))
-    p = lsprof.Profiler()
-    p.enable(subcalls=True)
-    try:
-        return func()
-    finally:
-        p.disable()
-
-    if format == 'kcachegrind':
-        from . import lsprofcalltree
-        calltree = lsprofcalltree.KCacheGrind(p)
-        calltree.output(fp)
-    else:
-        # format == 'text'
-        stats = lsprof.Stats(p.getstats())
-        stats.sort(field)
-        stats.pprint(limit=limit, file=fp, climit=climit)
-
-def flameprofile(ui, func, fp):
-    try:
-        from flamegraph import flamegraph
-    except ImportError:
-        raise error.Abort(_(
-            'flamegraph not available - install from '
-            'https://github.com/evanhempel/python-flamegraph'))
-    # developer config: profiling.freq
-    freq = ui.configint('profiling', 'freq', default=1000)
-    filter_ = None
-    collapse_recursion = True
-    thread = flamegraph.ProfileThread(fp, 1.0 / freq,
-                                      filter_, collapse_recursion)
-    start_time = time.clock()
-    try:
-        thread.start()
-        func()
-    finally:
-        thread.stop()
-        thread.join()
-        print('Collected %d stack frames (%d unique) in %2.2f seconds.' % (
-            time.clock() - start_time, thread.num_frames(),
-            thread.num_frames(unique=True)))
-
-
-def statprofile(ui, func, fp):
-    try:
-        import statprof
-    except ImportError:
-        raise error.Abort(_(
-            'statprof not available - install using "easy_install statprof"'))
-
-    freq = ui.configint('profiling', 'freq', default=1000)
-    if freq > 0:
-        statprof.reset(freq)
-    else:
-        ui.warn(_("invalid sampling frequency '%s' - ignoring\n") % freq)
-
-    statprof.start()
-    try:
-        return func()
-    finally:
-        statprof.stop()
-        statprof.display(fp)
-
-def _runcommand(ui, options, cmd, cmdfunc):
-    """Enables the profiler if applicable.
-
-    ``profiling.enabled`` - boolean config that enables or disables profiling
-    """
-    def checkargs():
-        try:
-            return cmdfunc()
-        except error.SignatureError:
-            raise error.CommandError(cmd, _("invalid arguments"))
-
-    if options['profile'] or ui.configbool('profiling', 'enabled'):
-        profiler = os.getenv('HGPROF')
-        if profiler is None:
-            profiler = ui.config('profiling', 'type', default='ls')
-        if profiler not in ('ls', 'stat', 'flame'):
-            ui.warn(_("unrecognized profiler '%s' - ignored\n") % profiler)
-            profiler = 'ls'
-
-        output = ui.config('profiling', 'output')
-
-        if output == 'blackbox':
-            fp = util.stringio()
-        elif output:
-            path = ui.expandpath(output)
-            fp = open(path, 'wb')
-        else:
-            fp = sys.stderr
-
-        try:
-            if profiler == 'ls':
-                return lsprofile(ui, checkargs, fp)
-            elif profiler == 'flame':
-                return flameprofile(ui, checkargs, fp)
-            else:
-                return statprofile(ui, checkargs, fp)
-        finally:
-            if output:
-                if output == 'blackbox':
-                    val = "Profile:\n%s" % fp.getvalue()
-                    # ui.log treats the input as a format string,
-                    # so we need to escape any % signs.
-                    val = val.replace('%', '%%')
-                    ui.log('profile', val)
-                fp.close()
-    else:
-        return checkargs()
+            repo.ui.ferr = ui.ferr
+        else:
+            try:
+                repo = hg.repository(ui, path=path)
+                if not repo.local():
+                    raise error.Abort(_("repository '%s' is not local")
+                                      % path)
+                repo.ui.setconfig("bundle", "mainreporoot", repo.root,
+                                  'repo')
+            except error.RequirementError:
+                raise
+            except error.RepoError:
+                if rpath and rpath[-1]: # invalid -R path
+                    raise
+                if not _cmdattr(ui, cmd, func, 'optionalrepo'):
+                    if (_cmdattr(ui, cmd, func, 'inferrepo') and
+                        args and not path):
+                        # try to infer -R from command args
+                        repos = map(cmdutil.findrepo, args)
+                        guess = repos[0]
+                        if guess and repos.count(guess) == len(repos):
+                            req.args = ['--repository', guess] + fullargs
+                            return _dispatch(req)
+                    if not path:
+                        raise error.RepoError(_("no repository found in"
+                                                " '%s' (.hg not found)")
+                                              % os.getcwd())
+                    raise
+        if repo:
+            ui = repo.ui
+            if options['hidden']:
+                repo = repo.unfiltered()
+            args.insert(0, repo)
+        elif rpath:
+            ui.warn(_("warning: --repository ignored\n"))
+
+        msg = ' '.join(' ' in a and repr(a) or a for a in fullargs)
+        ui.log("command", '%s\n', msg)
+        d = lambda: util.checksignature(func)(ui, *args, **cmdoptions)
+        try:
+            return runcommand(lui, repo, cmd, fullargs, ui, options, d,
+                              cmdpats, cmdoptions)
+        finally:
+            if repo and repo != req.repo:
+                repo.close()
+
+def _runcommand(ui, options, cmd, cmdfunc):
+    """Run a command function, possibly with profiling enabled."""
+    try:
+        return cmdfunc()
+    except error.SignatureError:
+        raise error.CommandError(cmd, _('invalid arguments'))
 
 def _exceptionwarning(ui):
     """Produce a warning message for the current active exception"""
@@ -1031,7 +932,7 b' def _exceptionwarning(ui):'
             break
 
         # Never blame on extensions bundled with Mercurial.
-        if testedwith == 'internal':
+        if extensions.ismoduleinternal(mod):
             continue
 
         tested = [util.versiontuple(t, 2) for t in testedwith.split()]
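
The dispatch.py hunks above remove the in-dispatch profiling machinery: the old `_runcommand` picked a profiler by name, warned and fell back on an unrecognized value, and routed the report to stderr, a file, or an in-memory buffer for the blackbox log. A minimal standalone sketch of that same control flow, using only the standard library's cProfile in place of all three profilers (`run_profiled` and its defaults are invented for illustration, not Mercurial APIs):

    import cProfile
    import io
    import pstats

    def run_profiled(func, profiler='ls', output=None):
        # Unknown profiler names fall back to a default instead of
        # aborting, mirroring the ui.warn() branch in _runcommand above.
        if profiler not in ('ls', 'stat', 'flame'):
            print("unrecognized profiler %r - using 'ls'" % profiler)
            profiler = 'ls'
        # output == 'blackbox' collects the report in memory, playing the
        # role of util.stringio() in the original.
        fp = io.StringIO() if output == 'blackbox' else None
        p = cProfile.Profile()  # stand-in for Mercurial's lsprof wrapper
        p.enable(subcalls=True)
        try:
            return func()
        finally:
            p.disable()
            stats = pstats.Stats(p, stream=fp)
            stats.sort_stats('cumulative').print_stats(5)
            if fp is not None:
                print("Profile:\n%s" % fp.getvalue())

    run_profiled(lambda: sum(range(100000)), output='blackbox')
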
--- a/mercurial/encoding.py
+++ b/mercurial/encoding.py
@@ -10,14 +10,16 b' from __future__ import absolute_import'
 import array
 import locale
 import os
-import sys
 import unicodedata
 
 from . import (
     error,
+    pycompat,
 )
 
-if sys.version_info[0] >= 3:
+_sysstr = pycompat.sysstr
+
+if pycompat.ispy3:
     unichr = chr
 
 # These unicode characters are ignored by HFS+ (Apple Technote 1150,
@@ -27,7 +29,7 b' if sys.version_info[0] >= 3:'
     "200c 200d 200e 200f 202a 202b 202c 202d 202e "
     "206a 206b 206c 206d 206e 206f feff".split()]
 # verify the next function will work
-if sys.version_info[0] >= 3:
+if pycompat.ispy3:
     assert set(i[0] for i in _ignore) == set([ord(b'\xe2'), ord(b'\xef')])
 else:
     assert set(i[0] for i in _ignore) == set(["\xe2", "\xef"])
@@ -45,6 +47,19 b' def hfsignoreclean(s):'
         s = s.replace(c, '')
     return s
 
+# encoding.environ is provided read-only, which may not be used to modify
+# the process environment
+_nativeenviron = (not pycompat.ispy3 or os.supports_bytes_environ)
+if not pycompat.ispy3:
+    environ = os.environ
+elif _nativeenviron:
+    environ = os.environb
+else:
+    # preferred encoding isn't known yet; use utf-8 to avoid unicode error
+    # and recreate it once encoding is settled
+    environ = dict((k.encode(u'utf-8'), v.encode(u'utf-8'))
+                   for k, v in os.environ.items())
+
 def _getpreferredencoding():
     '''
     On darwin, getpreferredencoding ignores the locale environment and
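
The three-way `environ` selection just added to encoding.py is subtle: Python 2 already exposes `os.environ` as bytes, Python 3 on POSIX offers the bytes view `os.environb`, and platforms without bytes-environ support (notably Windows on Python 3) need an encoded snapshot instead. A standalone sketch of the same decision; `get_bytes_environ` is a hypothetical helper name:

    import os
    import sys

    def get_bytes_environ():
        # Hypothetical helper mirroring encoding.py's module-level logic.
        if sys.version_info[0] < 3:
            return os.environ           # already bytes on Python 2
        if getattr(os, 'supports_bytes_environ', False):
            return os.environb          # native bytes view (POSIX Python 3)
        # No bytes environ (e.g. Windows): take an encoded snapshot. The
        # real module first encodes with utf-8, then rebuilds the dict with
        # tolocal() once the local encoding has been determined.
        return dict((k.encode('utf-8'), v.encode('utf-8'))
                    for k, v in os.environ.items())

    env = get_bytes_environ()
    print(type(env).__name__, len(env))

The snapshot branch also explains the read-only comment in the hunk: a copied dict no longer tracks later changes to the process environment.
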
@@ -76,13 +91,13 b' def _getpreferredencoding():'
 }
 
 try:
-    encoding = os.environ.get("HGENCODING")
+    encoding = environ.get("HGENCODING")
     if not encoding:
         encoding = locale.getpreferredencoding() or 'ascii'
         encoding = _encodingfixers.get(encoding, lambda: encoding)()
 except locale.Error:
     encoding = 'ascii'
-encodingmode = os.environ.get("HGENCODINGMODE", "strict")
+encodingmode = environ.get("HGENCODINGMODE", "strict")
 fallbackencoding = 'ISO-8859-1'
 
 class localstr(str):
@@ -136,23 +151,24 b' def tolocal(s):'
         if encoding == 'UTF-8':
             # fast path
             return s
-        r = u.encode(encoding, "replace")
-        if u == r.decode(encoding):
+        r = u.encode(_sysstr(encoding), u"replace")
+        if u == r.decode(_sysstr(encoding)):
             # r is a safe, non-lossy encoding of s
             return r
         return localstr(s, r)
     except UnicodeDecodeError:
         # we should only get here if we're looking at an ancient changeset
         try:
-            u = s.decode(fallbackencoding)
-            r = u.encode(encoding, "replace")
-            if u == r.decode(encoding):
+            u = s.decode(_sysstr(fallbackencoding))
+            r = u.encode(_sysstr(encoding), u"replace")
+            if u == r.decode(_sysstr(encoding)):
                 # r is a safe, non-lossy encoding of s
                 return r
             return localstr(u.encode('UTF-8'), r)
         except UnicodeDecodeError:
             u = s.decode("utf-8", "replace") # last ditch
-            return u.encode(encoding, "replace") # can't round-trip
+            # can't round-trip
+            return u.encode(_sysstr(encoding), u"replace")
     except LookupError as k:
         raise error.Abort(k, hint="please check your locale settings")
 
@@ -172,20 +188,27 b' def fromlocal(s):'
         return s._utf8
 
     try:
-        return s.decode(encoding, encodingmode).encode("utf-8")
+        u = s.decode(_sysstr(encoding), _sysstr(encodingmode))
+        return u.encode("utf-8")
     except UnicodeDecodeError as inst:
         sub = s[max(0, inst.start - 10):inst.start + 10]
         raise error.Abort("decoding near '%s': %s!" % (sub, inst))
     except LookupError as k:
         raise error.Abort(k, hint="please check your locale settings")
 
+if not _nativeenviron:
+    # now encoding and helper functions are available, recreate the environ
+    # dict to be exported to other modules
+    environ = dict((tolocal(k.encode(u'utf-8')), tolocal(v.encode(u'utf-8')))
+                   for k, v in os.environ.items())
+
 # How to treat ambiguous-width characters. Set to 'wide' to treat as wide.
-wide = (os.environ.get("HGENCODINGAMBIGUOUS", "narrow") == "wide"
+wide = (environ.get("HGENCODINGAMBIGUOUS", "narrow") == "wide"
         and "WFA" or "WF")
 
 def colwidth(s):
     "Find the column width of a string for display in the local encoding"
-    return ucolwidth(s.decode(encoding, 'replace'))
+    return ucolwidth(s.decode(_sysstr(encoding), u'replace'))
 
 def ucolwidth(d):
     "Find the column width of a Unicode string for display"
@@ -265,7 +288,7 b" def trim(s, width, ellipsis='', leftside"
     +
     """
     try:
-        u = s.decode(encoding)
+        u = s.decode(_sysstr(encoding))
     except UnicodeDecodeError:
         if len(s) <= width: # trimming is not needed
             return s
@@ -292,7 +315,7 b" def trim(s, width, ellipsis='', leftside"
     for i in xrange(1, len(u)):
         usub = uslice(i)
         if ucolwidth(usub) <= width:
-            return concat(usub.encode(encoding))
+            return concat(usub.encode(_sysstr(encoding)))
     return ellipsis # no enough room for multi-column characters
 
 def _asciilower(s):
@@ -337,12 +360,12 b' def lower(s):'
         if isinstance(s, localstr):
             u = s._utf8.decode("utf-8")
         else:
-            u = s.decode(encoding, encodingmode)
+            u = s.decode(_sysstr(encoding), _sysstr(encodingmode))
 
         lu = u.lower()
         if u == lu:
             return s # preserve localstring
-        return lu.encode(encoding)
+        return lu.encode(_sysstr(encoding))
     except UnicodeError:
         return s.lower() # we don't know how to fold this except in ASCII
     except LookupError as k:
@@ -360,12 +383,12 b' def upperfallback(s):'
         if isinstance(s, localstr):
             u = s._utf8.decode("utf-8")
         else:
-            u = s.decode(encoding, encodingmode)
+            u = s.decode(_sysstr(encoding), _sysstr(encodingmode))
 
         uu = u.upper()
         if u == uu:
             return s # preserve localstring
-        return uu.encode(encoding)
+        return uu.encode(_sysstr(encoding))
     except UnicodeError:
         return s.upper() # we don't know how to fold this except in ASCII
     except LookupError as k:
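
Most of this encoding.py churn is mechanical: every `encode`/`decode` call now routes the codec name through `pycompat.sysstr`, because Mercurial keeps `encoding` as bytes while Python 3's codec machinery wants a native `str`. A minimal sketch of that conversion, with `sysstr` reimplemented here as a simplified stand-in:

    def sysstr(s):
        # simplified stand-in for pycompat.sysstr: bytes -> native str on
        # Python 3, identity on Python 2 (where str already is bytes)
        if str is bytes:
            return s
        return s.decode('latin-1') if isinstance(s, bytes) else s

    encoding = b'utf-8'                # the codec name is kept as bytes
    data = b'caf\xc3\xa9'
    u = data.decode(sysstr(encoding))  # codec name must be native str on py3
    print(u)                           # -> café
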
--- a/mercurial/exchange.py
+++ b/mercurial/exchange.py
@@ -257,13 +257,40 b' def buildobsmarkerspart(bundler, markers'
         return bundler.newpart('obsmarkers', data=stream)
     return None
 
-def _canusebundle2(op):
-    """return true if a pull/push can use bundle2
+def _computeoutgoing(repo, heads, common):
+    """Computes which revs are outgoing given a set of common
+    and a set of heads.
+
+    This is a separate function so extensions can have access to
+    the logic.
+
+    Returns a discovery.outgoing object.
+    """
+    cl = repo.changelog
+    if common:
+        hasnode = cl.hasnode
+        common = [n for n in common if hasnode(n)]
+    else:
+        common = [nullid]
+    if not heads:
+        heads = cl.heads()
+    return discovery.outgoing(repo, common, heads)
 
-    Feel free to nuke this function when we drop the experimental option"""
-    return (op.repo.ui.configbool('experimental', 'bundle2-exp', True)
-            and op.remote.capable('bundle2'))
+def _forcebundle1(op):
+    """return true if a pull/push must use bundle1
+
+    This function is used to allow testing of the older bundle version"""
+    ui = op.repo.ui
+    forcebundle1 = False
+    # The goal is this config is to allow developper to choose the bundle
+    # version used during exchanged. This is especially handy during test.
+    # Value is a list of bundle version to be picked from, highest version
+    # should be used.
+    #
+    # developer config: devel.legacy.exchange
+    exchange = ui.configlist('devel', 'legacy.exchange')
+    forcebundle1 = 'bundle2' not in exchange and 'bundle1' in exchange
+    return forcebundle1 or not op.remote.capable('bundle2')
 
 class pushoperation(object):
     """A object that represent a single push operation
@@ -417,7 +444,7 b' def push(repo, remote, force=False, revs'
     # bundle2 push may receive a reply bundle touching bookmarks or other
     # things requiring the wlock. Take it now to ensure proper ordering.
     maypushback = pushop.ui.configbool('experimental', 'bundle2.pushback')
-    if _canusebundle2(pushop) and maypushback:
+    if (not _forcebundle1(pushop)) and maypushback:
         localwlock = pushop.repo.wlock()
         locallock = pushop.repo.lock()
         pushop.locallocked = True
@@ -442,7 +469,7 b' def push(repo, remote, force=False, revs'
         lock = pushop.remote.lock()
     try:
         _pushdiscovery(pushop)
-        if _canusebundle2(pushop):
+        if not _forcebundle1(pushop):
             _pushbundle2(pushop)
         _pushchangeset(pushop)
         _pushsyncphase(pushop)
@@ -1100,7 +1127,7 b' class pulloperation(object):'
 
     @util.propertycache
     def canusebundle2(self):
-        return _canusebundle2(self)
+        return not _forcebundle1(self)
 
     @util.propertycache
     def remotebundle2caps(self):
@@ -1174,8 +1201,10 b' def pull(repo, remote, heads=None, force'
                 " %s") % (', '.join(sorted(missing)))
         raise error.Abort(msg)
 
-    lock = pullop.repo.lock()
+    wlock = lock = None
     try:
+        wlock = pullop.repo.wlock()
+        lock = pullop.repo.lock()
         pullop.trmanager = transactionmanager(repo, 'pull', remote.url())
         streamclone.maybeperformlegacystreamclone(pullop)
         # This should ideally be in _pullbundle2(). However, it needs to run
@@ -1190,8 +1219,7 b' def pull(repo, remote, heads=None, force'
         _pullobsolete(pullop)
         pullop.trmanager.close()
     finally:
-        pullop.trmanager.release()
-        lock.release()
+        lockmod.release(pullop.trmanager, lock, wlock)
 
     return pullop
 
@@ -1504,20 +1532,14 b' def bundle2requested(bundlecaps):'
         return any(cap.startswith('HG2') for cap in bundlecaps)
     return False
 
-def getbundle(repo, source, heads=None, common=None, bundlecaps=None,
-              **kwargs):
-    """return a full bundle (with potentially multiple kind of parts)
+def getbundlechunks(repo, source, heads=None, common=None, bundlecaps=None,
+                    **kwargs):
+    """Return chunks constituting a bundle's raw data.
 
     Could be a bundle HG10 or a bundle HG20 depending on bundlecaps
-    passed. For now, the bundle can contain only changegroup, but this will
-    changes when more part type will be available for bundle2.
+    passed.
 
-    This is different from changegroup.getchangegroup that only returns an HG10
-    changegroup bundle. They may eventually get reunited in the future when we
-    have a clearer idea of the API we what to query different data.
-
-    The implementation is at a very early stage and will get massive rework
-    when the API of bundle is refined.
+    Returns an iterator over raw chunks (of varying sizes).
     """
     usebundle2 = bundle2requested(bundlecaps)
     # bundle10 case
@@ -1528,8 +1550,9 b' def getbundle(repo, source, heads=None, '
         if kwargs:
             raise ValueError(_('unsupported getbundle arguments: %s')
                              % ', '.join(sorted(kwargs.keys())))
-        return changegroup.getchangegroup(repo, source, heads=heads,
-                                          common=common, bundlecaps=bundlecaps)
+        outgoing = _computeoutgoing(repo, heads, common)
+        bundler = changegroup.getbundler('01', repo, bundlecaps)
+        return changegroup.getsubsetraw(repo, outgoing, bundler, source)
 
     # bundle20 case
     b2caps = {}
@@ -1547,7 +1570,7 b' def getbundle(repo, source, heads=None, '
         func(bundler, repo, source, bundlecaps=bundlecaps, b2caps=b2caps,
              **kwargs)
 
-    return util.chunkbuffer(bundler.getchunks())
+    return bundler.getchunks()
 
 @getbundle2partsgenerator('changegroup')
 def _getbundlechangegrouppart(bundler, repo, source, bundlecaps=None,
@@ -1564,7 +1587,7 b' def _getbundlechangegrouppart(bundler, r'
             if not cgversions:
                 raise ValueError(_('no common changegroup version'))
             version = max(cgversions)
-        outgoing = changegroup.computeoutgoing(repo, heads, common)
+        outgoing = _computeoutgoing(repo, heads, common)
         cg = changegroup.getlocalchangegroupraw(repo, source, outgoing,
                                                 bundlecaps=bundlecaps,
                                                 version=version)
@@ -1617,7 +1640,7 b' def _getbundletagsfnodes(bundler, repo, '
     if not (kwargs.get('cg', True) and 'hgtagsfnodes' in b2caps):
         return
 
    outgoing = _computeoutgoing(repo, heads, common)

    if not outgoing.missingheads:
        return
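
`_forcebundle1` boils down to a small decision table over the `devel.legacy.exchange` config list and the remote's advertised capabilities. A standalone sketch of that table, with a plain list and set standing in for `ui.configlist()` and `remote.capable()`:

    def forcebundle1(exchange_cfg, remote_caps):
        # Mirrors _forcebundle1 above: an explicit 'bundle1' entry wins
        # only when 'bundle2' is not also listed; otherwise fall back to
        # whether the remote advertises bundle2 at all.
        force = 'bundle2' not in exchange_cfg and 'bundle1' in exchange_cfg
        return force or 'bundle2' not in remote_caps

    print(forcebundle1([], {'bundle2'}))                     # False
    print(forcebundle1(['bundle1'], {'bundle2'}))            # True (forced)
    print(forcebundle1(['bundle1', 'bundle2'], {'bundle2'})) # False
    print(forcebundle1([], set()))                           # True (no bundle2)
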
--- a/mercurial/extensions.py
+++ b/mercurial/extensions.py
@@ -22,6 +22,7 b' from . import ('
 )
 
 _extensions = {}
+_disabledextensions = {}
 _aftercallbacks = {}
 _order = []
 _builtin = set(['hbisect', 'bookmarks', 'parentrevspec', 'progress', 'interhg',
@@ -79,7 +80,29 b' def _importh(name):'
         mod = getattr(mod, comp)
     return mod
 
+def _importext(name, path=None, reportfunc=None):
+    if path:
+        # the module will be loaded in sys.modules
+        # choose an unique name so that it doesn't
+        # conflicts with other modules
+        mod = loadpath(path, 'hgext.%s' % name)
+    else:
+        try:
+            mod = _importh("hgext.%s" % name)
+        except ImportError as err:
+            if reportfunc:
+                reportfunc(err, "hgext.%s" % name, "hgext3rd.%s" % name)
+            try:
+                mod = _importh("hgext3rd.%s" % name)
+            except ImportError as err:
+                if reportfunc:
+                    reportfunc(err, "hgext3rd.%s" % name, name)
+                mod = _importh(name)
+    return mod
+
 def _reportimporterror(ui, err, failed, next):
+    # note: this ui.debug happens before --debug is processed,
+    # Use --config ui.debug=1 to see them.
     ui.debug('could not import %s (%s): trying %s\n'
              % (failed, err, next))
     if ui.debugflag:
@@ -95,21 +118,7 b' def load(ui, name, path):'
     if shortname in _extensions:
         return _extensions[shortname]
     _extensions[shortname] = None
-    if path:
-        # the module will be loaded in sys.modules
-        # choose an unique name so that it doesn't
-        # conflicts with other modules
-        mod = loadpath(path, 'hgext.%s' % name)
-    else:
-        try:
-            mod = _importh("hgext.%s" % name)
-        except ImportError as err:
-            _reportimporterror(ui, err, "hgext.%s" % name, name)
-            try:
-                mod = _importh("hgext3rd.%s" % name)
-            except ImportError as err:
-                _reportimporterror(ui, err, "hgext3rd.%s" % name, name)
-                mod = _importh(name)
+    mod = _importext(name, path, bind(_reportimporterror, ui))
 
     # Before we do anything with the extension, check against minimum stated
     # compatibility. This gives extension authors a mechanism to have their
@@ -148,6 +157,7 b' def loadall(ui):'
     for (name, path) in result:
         if path:
             if path[0] == '!':
+                _disabledextensions[name] = path[1:]
                 continue
         try:
             load(ui, name, path)
@@ -210,11 +220,13 b' def bind(func, *args):'
             return func(*(args + a), **kw)
     return closure
 
-def _updatewrapper(wrap, origfn):
-    '''Copy attributes to wrapper function'''
+def _updatewrapper(wrap, origfn, unboundwrapper):
+    '''Copy and add some useful attributes to wrapper'''
     wrap.__module__ = getattr(origfn, '__module__')
     wrap.__doc__ = getattr(origfn, '__doc__')
     wrap.__dict__.update(getattr(origfn, '__dict__', {}))
+    wrap._origfunc = origfn
+    wrap._unboundwrapper = unboundwrapper
 
 def wrapcommand(table, command, wrapper, synopsis=None, docstring=None):
     '''Wrap the command named `command' in table
@@ -254,7 +266,7 b' def wrapcommand(table, command, wrapper,'
 
     origfn = entry[0]
     wrap = bind(util.checksignature(wrapper), util.checksignature(origfn))
-    _updatewrapper(wrap, origfn)
+    _updatewrapper(wrap, origfn, wrapper)
     if docstring is not None:
         wrap.__doc__ += docstring
 
@@ -303,10 +315,46 b' def wrapfunction(container, funcname, wr'
     origfn = getattr(container, funcname)
     assert callable(origfn)
     wrap = bind(wrapper, origfn)
-    _updatewrapper(wrap, origfn)
+    _updatewrapper(wrap, origfn, wrapper)
     setattr(container, funcname, wrap)
     return origfn
 
+def unwrapfunction(container, funcname, wrapper=None):
+    '''undo wrapfunction
+
+    If wrappers is None, undo the last wrap. Otherwise removes the wrapper
+    from the chain of wrappers.
+
+    Return the removed wrapper.
+    Raise IndexError if wrapper is None and nothing to unwrap; ValueError if
+    wrapper is not None but is not found in the wrapper chain.
+    '''
+    chain = getwrapperchain(container, funcname)
+    origfn = chain.pop()
+    if wrapper is None:
+        wrapper = chain[0]
+    chain.remove(wrapper)
+    setattr(container, funcname, origfn)
+    for w in reversed(chain):
+        wrapfunction(container, funcname, w)
+    return wrapper
+
+def getwrapperchain(container, funcname):
+    '''get a chain of wrappers of a function
+
+    Return a list of functions: [newest wrapper, ..., oldest wrapper, origfunc]
+
+    The wrapper functions are the ones passed to wrapfunction, whose first
+    argument is origfunc.
+    '''
+    result = []
+    fn = getattr(container, funcname)
+    while fn:
+        assert callable(fn)
+        result.append(getattr(fn, '_unboundwrapper', fn))
+        fn = getattr(fn, '_origfunc', None)
+    return result
+
 def _disabledpaths(strip_init=False):
     '''find paths of disabled extensions. returns a dict of {name: path}
     removes /__init__.py from packages if strip_init is True'''
@@ -332,6 +380,7 b' def _disabledpaths(strip_init=False):'
         if name in exts or name in _order or name == '__init__':
             continue
         exts[name] = path
+    exts.update(_disabledextensions)
     return exts
 
 def _moduledoc(file):
@@ -494,4 +543,4 b' def moduleversion(module):'
 
 def ismoduleinternal(module):
     exttestedwith = getattr(module, 'testedwith', None)
-    return exttestedwith == "internal"
+    return exttestedwith == "ships-with-hg-core"
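
The `_origfunc`/`_unboundwrapper` attributes stored by `_updatewrapper` are what let `getwrapperchain` walk a stack of wrappers and `unwrapfunction` rebuild it minus one entry. A self-contained sketch of that chain walk; `Container`, `shout`, and the simplified `wrapfunction` are illustrative stand-ins, not the real module:

    class Container(object):
        def greet(self):
            return 'hi'

    def wrapfunction(container, funcname, wrapper):
        # simplified: bind the original and record the metadata that
        # getwrapperchain() relies on
        origfn = getattr(container, funcname)
        def wrap(*args, **kwargs):
            return wrapper(origfn, *args, **kwargs)
        wrap._origfunc = origfn
        wrap._unboundwrapper = wrapper
        setattr(container, funcname, wrap)

    def getwrapperchain(container, funcname):
        # walk from the newest wrapper down to the original function
        result = []
        fn = getattr(container, funcname)
        while fn:
            result.append(getattr(fn, '_unboundwrapper', fn))
            fn = getattr(fn, '_origfunc', None)
        return result

    def shout(orig, *args):
        return orig(*args).upper()

    c = Container()
    wrapfunction(c, 'greet', shout)
    print(c.greet())                         # -> HI
    print(len(getwrapperchain(c, 'greet')))  # -> 2 (shout, then the original)
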
--- a/mercurial/fancyopts.py
+++ b/mercurial/fancyopts.py
@@ -12,6 +12,17 b' import getopt'
 from .i18n import _
 from . import error
 
+# Set of flags to not apply boolean negation logic on
+nevernegate = set([
+    # avoid --no-noninteractive
+    'noninteractive',
+    # These two flags are special because they cause hg to do one
+    # thing and then exit, and so aren't suitable for use in things
+    # like aliases anyway.
+    'help',
+    'version',
+])
+
 def gnugetopt(args, options, longoptions):
     """Parse options mostly like getopt.gnu_getopt.
 
@@ -64,6 +75,8 b' def fancyopts(args, options, state, gnu='
     shortlist = ''
     argmap = {}
     defmap = {}
+    negations = {}
+    alllong = set(o[1] for o in options)
 
     for option in options:
         if len(option) == 5:
@@ -91,6 +104,18 b' def fancyopts(args, options, state, gnu='
                 short += ':'
             if oname:
                 oname += '='
+        elif oname not in nevernegate:
+            if oname.startswith('no-'):
+                insert = oname[3:]
+            else:
+                insert = 'no-' + oname
+            # backout (as a practical example) has both --commit and
+            # --no-commit options, so we don't want to allow the
+            # negations of those flags.
+            if insert not in alllong:
+                assert ('--' + oname) not in negations
+                negations['--' + insert] = '--' + oname
+                namelist.append(insert)
         if short:
             shortlist += short
         if name:
@@ -105,6 +130,11 b' def fancyopts(args, options, state, gnu='
 
     # transfer result to state
     for opt, val in opts:
+        boolval = True
+        negation = negations.get(opt, False)
+        if negation:
+            opt = negation
+            boolval = False
         name = argmap[opt]
         obj = defmap[name]
         t = type(obj)
@@ -121,7 +151,7 b' def fancyopts(args, options, state, gnu='
         elif t is type([]):
             state[name].append(val)
         elif t is type(None) or t is type(False):
-            state[name] = True
+            state[name] = boolval
 
     # return unparsed args
     return args
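
The negation table gives every boolean long option an implicit `--no-` twin (and strips the prefix from options that already carry it). A compact sketch of the same transfer step on top of plain `getopt`, with an invented option table:

    import getopt

    def parse(args, boolflags):
        # build '--no-' negations for every boolean flag, as fancyopts does
        negations = dict(('--no-' + f, '--' + f) for f in boolflags)
        longopts = list(boolflags) + ['no-' + f for f in boolflags]
        opts, rest = getopt.gnu_getopt(args, '', longopts)
        state = dict((f, False) for f in boolflags)
        for opt, _val in opts:
            boolval = True
            if opt in negations:        # same transfer step as above
                opt = negations[opt]
                boolval = False
            state[opt[2:]] = boolval
        return state, rest

    print(parse(['--verbose', 'file.txt'], ['verbose', 'quiet']))
    # -> ({'verbose': True, 'quiet': False}, ['file.txt'])
    print(parse(['--no-verbose'], ['verbose', 'quiet']))
    # -> ({'verbose': False, 'quiet': False}, [])
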
--- a/mercurial/filemerge.py
+++ b/mercurial/filemerge.py
@@ -19,6 +19,7 b' from . import ('
     error,
     formatter,
     match,
+    pycompat,
     scmutil,
     simplemerge,
     tagmerge,
@@ -93,7 +94,8 b' def internaltool(name, mergetype, onfail'
     '''return a decorator for populating internal merge tool table'''
     def decorator(func):
         fullname = ':' + name
-        func.__doc__ = "``%s``\n" % fullname + func.__doc__.strip()
+        func.__doc__ = (pycompat.sysstr("``%s``\n" % fullname)
+                        + func.__doc__.strip())
         internals[fullname] = func
         internals['internal:' + name] = func
         internalsdoc[fullname] = func
@@ -230,50 +232,56 b' def _matcheol(file, origfile):'
     util.writefile(file, newdata)
 
 @internaltool('prompt', nomerge)
-def _iprompt(repo, mynode, orig, fcd, fco, fca, toolconf):
+def _iprompt(repo, mynode, orig, fcd, fco, fca, toolconf, labels=None):
     """Asks the user which of the local `p1()` or the other `p2()` version to
     keep as the merged version."""
     ui = repo.ui
     fd = fcd.path()
 
+    prompts = partextras(labels)
+    prompts['fd'] = fd
     try:
         if fco.isabsent():
             index = ui.promptchoice(
-                _("local changed %s which other deleted\n"
+                _("local%(l)s changed %(fd)s which other%(o)s deleted\n"
                   "use (c)hanged version, (d)elete, or leave (u)nresolved?"
-                  "$$ &Changed $$ &Delete $$ &Unresolved") % fd, 2)
+                  "$$ &Changed $$ &Delete $$ &Unresolved") % prompts, 2)
             choice = ['local', 'other', 'unresolved'][index]
         elif fcd.isabsent():
             index = ui.promptchoice(
-                _("other changed %s which local deleted\n"
+                _("other%(o)s changed %(fd)s which local%(l)s deleted\n"
                   "use (c)hanged version, leave (d)eleted, or "
                   "leave (u)nresolved?"
-                  "$$ &Changed $$ &Deleted $$ &Unresolved") % fd, 2)
+                  "$$ &Changed $$ &Deleted $$ &Unresolved") % prompts, 2)
             choice = ['other', 'local', 'unresolved'][index]
         else:
             index = ui.promptchoice(
-                _("no tool found to merge %s\n"
-                  "keep (l)ocal, take (o)ther, or leave (u)nresolved?"
-                  "$$ &Local $$ &Other $$ &Unresolved") % fd, 2)
+                _("no tool found to merge %(fd)s\n"
+                  "keep (l)ocal%(l)s, take (o)ther%(o)s, or leave (u)nresolved?"
+                  "$$ &Local $$ &Other $$ &Unresolved") % prompts, 2)
             choice = ['local', 'other', 'unresolved'][index]
 
         if choice == 'other':
-            return _iother(repo, mynode, orig, fcd, fco, fca, toolconf)
+            return _iother(repo, mynode, orig, fcd, fco, fca, toolconf,
+                           labels)
         elif choice == 'local':
-            return _ilocal(repo, mynode, orig, fcd, fco, fca, toolconf)
+            return _ilocal(repo, mynode, orig, fcd, fco, fca, toolconf,
+                           labels)
        elif choice == 'unresolved':
-            return _ifail(repo, mynode, orig, fcd, fco, fca, toolconf)
+            return _ifail(repo, mynode, orig, fcd, fco, fca, toolconf,
+                          labels)
     except error.ResponseExpected:
         ui.write("\n")
-        return _ifail(repo, mynode, orig, fcd, fco, fca, toolconf)
+        return _ifail(repo, mynode, orig, fcd, fco, fca, toolconf,
+                      labels)
 
 @internaltool('local', nomerge)
-def _ilocal(repo, mynode, orig, fcd, fco, fca, toolconf):
+def _ilocal(repo, mynode, orig, fcd, fco, fca, toolconf, labels=None):
     """Uses the local `p1()` version of files as the merged version."""
     return 0, fcd.isabsent()
 
 @internaltool('other', nomerge)
-def _iother(repo, mynode, orig, fcd, fco, fca, toolconf):
+def _iother(repo, mynode, orig, fcd, fco, fca, toolconf, labels=None):
     """Uses the other `p2()` version of files as the merged version."""
     if fco.isabsent():
         # local changed, remote deleted -- 'deleted' picked
@@ -285,7 +293,7 b' def _iother(repo, mynode, orig, fcd, fco'
     return 0, deleted
 
 @internaltool('fail', nomerge)
-def _ifail(repo, mynode, orig, fcd, fco, fca, toolconf):
+def _ifail(repo, mynode, orig, fcd, fco, fca, toolconf, labels=None):
     """
     Rather than attempting to merge files that were modified on both
     branches, it marks them as unresolved. The resolve command must be
@@ -508,11 +516,11 b' def _formatconflictmarker(repo, ctx, tem'
     # 8 for the prefix of conflict marker lines (e.g. '<<<<<<< ')
     return util.ellipsis(mark, 80 - 8)
 
-_defaultconflictmarker = ('{node|short} ' +
-                          '{ifeq(tags, "tip", "", "{tags} ")}' +
-                          '{if(bookmarks, "{bookmarks} ")}' +
-                          '{ifeq(branch, "default", "", "{branch} ")}' +
+_defaultconflictmarker = ('{node|short} '
+                          '{ifeq(tags, "tip", "", "{tags} ")}'
+                          '{if(bookmarks, "{bookmarks} ")}'
+                          '{ifeq(branch, "default", "", "{branch} ")}'
                           '- {author|user}: {desc|firstline}')
 
 _defaultconflictlabels = ['local', 'other']
 
@@ -537,6 +545,22 b' def _formatlabels(repo, fcd, fco, fca, l'
         newlabels.append(_formatconflictmarker(repo, ca, tmpl, labels[2], pad))
     return newlabels
 
+def partextras(labels):
+    """Return a dictionary of extra labels for use in prompts to the user
+
+    Intended use is in strings of the form "(l)ocal%(l)s".
+    """
+    if labels is None:
+        return {
+            "l": "",
+            "o": "",
+        }
+
+    return {
+        "l": " [%s]" % labels[0],
+        "o": " [%s]" % labels[1],
+    }
+
 def _filemerge(premerge, repo, mynode, orig, fcd, fco, fca, labels=None):
     """perform a 3-way merge in the working directory
 
@@ -588,7 +612,7 b' def _filemerge(premerge, repo, mynode, o'
     toolconf = tool, toolpath, binary, symlink
 
     if mergetype == nomerge:
-        r, deleted = func(repo, mynode, orig, fcd, fco, fca, toolconf)
+        r, deleted = func(repo, mynode, orig, fcd, fco, fca, toolconf, labels)
         return True, r, deleted
 
     if premerge:
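
`partextras` exists so that one prompt string works both with and without merge labels: the `%(l)s`/`%(o)s` slots collapse to empty strings when no labels are configured. A quick demonstration of the substitution, with hypothetical label values:

    def partextras(labels):
        # same shape as the helper added above
        if labels is None:
            return {'l': '', 'o': ''}
        return {'l': ' [%s]' % labels[0], 'o': ' [%s]' % labels[1]}

    prompt = "local%(l)s changed %(fd)s which other%(o)s deleted"

    for labels in (None, ['working copy', 'merge rev']):
        d = partextras(labels)
        d['fd'] = 'README'
        print(prompt % d)
    # -> local changed README which other deleted
    # -> local [working copy] changed README which other [merge rev] deleted
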
--- a/mercurial/fileset.py
+++ b/mercurial/fileset.py
@@ -345,10 +345,10 b' def _sizetomax(s):'
 def size(mctx, x):
     """File size matches the given expression. Examples:
 
-    - 1k (files from 1024 to 2047 bytes)
-    - < 20k (files less than 20480 bytes)
-    - >= .5MB (files at least 524288 bytes)
-    - 4k - 1MB (files from 4096 bytes to 1048576 bytes)
+    - size('1k') - files from 1024 to 2047 bytes
+    - size('< 20k') - files less than 20480 bytes
+    - size('>= .5MB') - files at least 524288 bytes
+    - size('4k - 1MB') - files from 4096 bytes to 1048576 bytes
     """
 
     # i18n: "size" is a keyword
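
The rewritten docstring pins down exact byte ranges: a bare size such as '1k' matches everything from that size up to just below the next whole value in the same unit, which appears to be what `_sizetomax` computes. A sketch of that arithmetic for whole-number sizes (`size_range` is an invented name, not the real helper, which also handles decimals like '4.5k'):

    def size_range(spec):
        # lower bound is the parsed size; upper bound is one byte below
        # the next whole value in the same unit, matching the
        # "size('1k') - files from 1024 to 2047 bytes" example
        units = {'k': 1 << 10, 'M': 1 << 20, 'G': 1 << 30}
        num, unit = int(spec[:-1]), spec[-1]
        lower = num * units[unit]
        upper = (num + 1) * units[unit] - 1
        return lower, upper

    print(size_range('1k'))   # -> (1024, 2047)
    print(size_range('1M'))   # -> (1048576, 2097151)
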
--- a/mercurial/formatter.py
+++ b/mercurial/formatter.py
@@ -18,25 +18,45 b' from .node import ('
 from . import (
     encoding,
     error,
+    templatekw,
     templater,
     util,
 )
 
 pickle = util.pickle
 
+class _nullconverter(object):
+    '''convert non-primitive data types to be processed by formatter'''
+    @staticmethod
+    def formatdate(date, fmt):
+        '''convert date tuple to appropriate format'''
+        return date
+    @staticmethod
+    def formatdict(data, key, value, fmt, sep):
+        '''convert dict or key-value pairs to appropriate dict format'''
+        # use plain dict instead of util.sortdict so that data can be
+        # serialized as a builtin dict in pickle output
+        return dict(data)
+    @staticmethod
+    def formatlist(data, name, fmt, sep):
+        '''convert iterable to appropriate list format'''
+        return list(data)
+
 class baseformatter(object):
-    def __init__(self, ui, topic, opts):
+    def __init__(self, ui, topic, opts, converter):
         self._ui = ui
         self._topic = topic
         self._style = opts.get("style")
         self._template = opts.get("template")
+        self._converter = converter
         self._item = None
         # function to convert node to string suitable for this output
         self.hexfunc = hex
-    def __nonzero__(self):
-        '''return False if we're not doing real templating so we can
-        skip extra work'''
-        return True
+    def __enter__(self):
+        return self
+    def __exit__(self, exctype, excvalue, traceback):
+        if exctype is None:
+            self.end()
     def _showitem(self):
         '''show a formatted item once all data is collected'''
         pass
@@ -45,6 +65,17 b' class baseformatter(object):'
         if self._item is not None:
             self._showitem()
         self._item = {}
+    def formatdate(self, date, fmt='%a %b %d %H:%M:%S %Y %1%2'):
+        '''convert date tuple to appropriate format'''
+        return self._converter.formatdate(date, fmt)
+    def formatdict(self, data, key='key', value='value', fmt='%s=%s', sep=' '):
+        '''convert dict or key-value pairs to appropriate dict format'''
+        return self._converter.formatdict(data, key, value, fmt, sep)
+    def formatlist(self, data, name, fmt='%s', sep=' '):
+        '''convert iterable to appropriate list format'''
+        # name is mandatory argument for now, but it could be optional if
+        # we have default template keyword, e.g. {item}
+        return self._converter.formatlist(data, name, fmt, sep)
     def data(self, **data):
         '''insert data into item that's not shown in default output'''
         self._item.update(data)
@@ -61,21 +92,55 b' class baseformatter(object):'
     def plain(self, text, **opts):
         '''show raw text for non-templated mode'''
         pass
+    def isplain(self):
+        '''check for plain formatter usage'''
+        return False
+    def nested(self, field):
+        '''sub formatter to store nested data in the specified field'''
+        self._item[field] = data = []
+        return _nestedformatter(self._ui, self._converter, data)
     def end(self):
         '''end output for the formatter'''
         if self._item is not None:
             self._showitem()
 
+class _nestedformatter(baseformatter):
+    '''build sub items and store them in the parent formatter'''
+    def __init__(self, ui, converter, data):
+        baseformatter.__init__(self, ui, topic='', opts={}, converter=converter)
+        self._data = data
+    def _showitem(self):
+        self._data.append(self._item)
+
+def _iteritems(data):
+    '''iterate key-value pairs in stable order'''
+    if isinstance(data, dict):
+        return sorted(data.iteritems())
+    return data
+
+class _plainconverter(object):
+    '''convert non-primitive data types to text'''
+    @staticmethod
+    def formatdate(date, fmt):
+        '''stringify date tuple in the given format'''
+        return util.datestr(date, fmt)
+    @staticmethod
+    def formatdict(data, key, value, fmt, sep):
+        '''stringify key-value pairs separated by sep'''
+        return sep.join(fmt % (k, v) for k, v in _iteritems(data))
+    @staticmethod
+    def formatlist(data, name, fmt, sep):
+        '''stringify iterable separated by sep'''
+        return sep.join(fmt % e for e in data)
+
 class plainformatter(baseformatter):
     '''the default text output scheme'''
     def __init__(self, ui, topic, opts):
-        baseformatter.__init__(self, ui, topic, opts)
+        baseformatter.__init__(self, ui, topic, opts, _plainconverter)
         if ui.debugflag:
             self.hexfunc = hex
         else:
             self.hexfunc = short
-    def __nonzero__(self):
-        return False
     def startitem(self):
         pass
     def data(self, **data):
@@ -88,12 +153,17 b' class plainformatter(baseformatter):'
         self._ui.write(deftext % fielddata, **opts)
     def plain(self, text, **opts):
         self._ui.write(text, **opts)
+    def isplain(self):
+        return True
+    def nested(self, field):
+        # nested data will be directly written to ui
+        return self
     def end(self):
         pass
 
 class debugformatter(baseformatter):
     def __init__(self, ui, topic, opts):
-        baseformatter.__init__(self, ui, topic, opts)
+        baseformatter.__init__(self, ui, topic, opts, _nullconverter)
         self._ui.write("%s = [\n" % self._topic)
     def _showitem(self):
         self._ui.write(" " + repr(self._item) + ",\n")
@@ -103,7 +173,7 b' class debugformatter(baseformatter):'
 
 class pickleformatter(baseformatter):
     def __init__(self, ui, topic, opts):
-        baseformatter.__init__(self, ui, topic, opts)
+        baseformatter.__init__(self, ui, topic, opts, _nullconverter)
         self._data = []
     def _showitem(self):
         self._data.append(self._item)
@@ -112,7 +182,11 b' class pickleformatter(baseformatter):'
         self._ui.write(pickle.dumps(self._data))
 
 def _jsonifyobj(v):
-    if isinstance(v, tuple):
+    if isinstance(v, dict):
+        xs = ['"%s": %s' % (encoding.jsonescape(k), _jsonifyobj(u))
+              for k, u in sorted(v.iteritems())]
+        return '{' + ', '.join(xs) + '}'
+    elif isinstance(v, (list, tuple)):
         return '[' + ', '.join(_jsonifyobj(e) for e in v) + ']'
     elif v is None:
         return 'null'
@@ -127,7 +201,7 b' def _jsonifyobj(v):'
 
 class jsonformatter(baseformatter):
     def __init__(self, ui, topic, opts):
-        baseformatter.__init__(self, ui, topic, opts)
+        baseformatter.__init__(self, ui, topic, opts, _nullconverter)
         self._ui.write("[")
         self._ui._first = True
     def _showitem(self):
@@ -149,9 +223,32 b' class jsonformatter(baseformatter):'
         baseformatter.end(self)
         self._ui.write("\n]\n")
 
+class _templateconverter(object):
+    '''convert non-primitive data types to be processed by templater'''
+    @staticmethod
+    def formatdate(date, fmt):
+        '''return date tuple'''
+        return date
+    @staticmethod
+    def formatdict(data, key, value, fmt, sep):
+        '''build object that can be evaluated as either plain string or dict'''
+        data = util.sortdict(_iteritems(data))
+        def f():
+            yield _plainconverter.formatdict(data, key, value, fmt, sep)
+        return templatekw._hybrid(f(), data, lambda k: {key: k, value: data[k]},
+                                  lambda d: fmt % (d[key], d[value]))
+    @staticmethod
+    def formatlist(data, name, fmt, sep):
+        '''build object that can be evaluated as either plain string or list'''
+        data = list(data)
+        def f():
+            yield _plainconverter.formatlist(data, name, fmt, sep)
+        return templatekw._hybrid(f(), data, lambda x: {name: x},
+                                  lambda d: fmt % d[name])
+
 class templateformatter(baseformatter):
     def __init__(self, ui, topic, opts):
-        baseformatter.__init__(self, ui, topic, opts)
+        baseformatter.__init__(self, ui, topic, opts, _templateconverter)
         self._topic = topic
         self._t = gettemplater(ui, topic, opts.get('template', ''))
     def _showitem(self):
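
The converter split means one formatter API can render the same call differently per output mode: the plain converter flattens structured data to text for the terminal, while the null converter keeps it structured for pickle/JSON-style output. A condensed sketch of the two `formatdict` behaviours (standalone functions here; static methods in the real module):

    def plain_formatdict(data, key='key', value='value', fmt='%s=%s', sep=' '):
        # like _plainconverter.formatdict: flatten to text for terminals
        return sep.join(fmt % (k, v) for k, v in sorted(data.items()))

    def null_formatdict(data):
        # like _nullconverter.formatdict: keep structure for pickle/JSON
        return dict(data)

    bookmarks = {'stable': '4d5b5e3', 'default': 'a1b2c3d'}
    print(plain_formatdict(bookmarks))
    # -> default=a1b2c3d stable=4d5b5e3
    print(plain_formatdict(bookmarks, fmt='%s:%s', sep=', '))
    # -> default:a1b2c3d, stable:4d5b5e3
    print(null_formatdict(bookmarks))
    # -> {'stable': '4d5b5e3', 'default': 'a1b2c3d'}
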
diff --git a/mercurial/hbisect.py b/mercurial/hbisect.py
--- a/mercurial/hbisect.py
+++ b/mercurial/hbisect.py
@@ -139,6 +139,19 @@ def bisect(changelog, state):
 
     return ([best_node], tot, good)
 
+def extendrange(repo, state, nodes, good):
+    # bisect is incomplete when it ends on a merge node and
+    # one of the parents was not checked.
+    parents = repo[nodes[0]].parents()
+    if len(parents) > 1:
+        if good:
+            side = state['bad']
+        else:
+            side = state['good']
+        num = len(set(i.node() for i in parents) & set(side))
+        if num == 1:
+            return parents[0].ancestor(parents[1])
+    return None
+
 def load_state(repo):
     state = {'current': [], 'good': [], 'bad': [], 'skip': []}
@@ -159,6 +172,22 @@ def save_state(repo, state):
                 f.write("%s %s\n" % (kind, hex(node)))
         f.close()
 
+def resetstate(repo):
+    """remove any bisect state from the repository"""
+    if repo.vfs.exists("bisect.state"):
+        repo.vfs.unlink("bisect.state")
+
+def checkstate(state):
+    """check we have both 'good' and 'bad' to define a range
+
+    Raise Abort exception otherwise."""
+    if state['good'] and state['bad']:
+        return True
+    if not state['good']:
+        raise error.Abort(_('cannot bisect (no known good revisions)'))
+    else:
+        raise error.Abort(_('cannot bisect (no known bad revisions)'))
+
 def get(repo, status):
     """
     Return a list of revision(s) that match the given status:
@@ -261,3 +290,29 @@ def shortlabel(label):
         return label[0].upper()
 
     return None
+
+def printresult(ui, repo, state, displayer, nodes, good):
+    if len(nodes) == 1:
+        # narrowed it down to a single revision
+        if good:
+            ui.write(_("The first good revision is:\n"))
+        else:
+            ui.write(_("The first bad revision is:\n"))
+        displayer.show(repo[nodes[0]])
+        extendnode = extendrange(repo, state, nodes, good)
+        if extendnode is not None:
+            ui.write(_('Not all ancestors of this changeset have been'
+                       ' checked.\nUse bisect --extend to continue the '
+                       'bisection from\nthe common ancestor, %s.\n')
+                     % extendnode)
+    else:
+        # multiple possible revisions
+        if good:
+            ui.write(_("Due to skipped revisions, the first "
+                       "good revision could be any of:\n"))
+        else:
+            ui.write(_("Due to skipped revisions, the first "
+                       "bad revision could be any of:\n"))
+        for n in nodes:
+            displayer.show(repo[n])
+    displayer.close()
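The contract of the new ``checkstate`` helper is easy to exercise standalone.
In this sketch ``Abort`` is a stand-in for ``mercurial.error.Abort`` and the
dict mirrors the shape built by ``load_state``::

    class Abort(Exception):
        """stand-in for mercurial.error.Abort"""

    def checkstate(state):
        # a bisection range needs both endpoints before it can proceed
        if state['good'] and state['bad']:
            return True
        if not state['good']:
            raise Abort('cannot bisect (no known good revisions)')
        raise Abort('cannot bisect (no known bad revisions)')

    assert checkstate({'good': ['c4f3'], 'bad': ['a1b2'], 'skip': []})
    try:
        checkstate({'good': [], 'bad': ['a1b2'], 'skip': []})
    except Abort as e:
        print(e)  # cannot bisect (no known good revisions)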
diff --git a/mercurial/help.py b/mercurial/help.py
--- a/mercurial/help.py
+++ b/mercurial/help.py
@@ -184,14 +184,16 @@ def loaddoc(topic, subdir=None):
     return loader
 
 internalstable = sorted([
-    (['bundles'], _('container for exchange of repository data'),
+    (['bundles'], _('Bundles'),
      loaddoc('bundles', subdir='internals')),
-    (['changegroups'], _('representation of revlog data'),
+    (['changegroups'], _('Changegroups'),
      loaddoc('changegroups', subdir='internals')),
-    (['requirements'], _('repository requirements'),
+    (['requirements'], _('Repository Requirements'),
      loaddoc('requirements', subdir='internals')),
-    (['revlogs'], _('revision storage mechanism'),
+    (['revlogs'], _('Revision Logs'),
      loaddoc('revlogs', subdir='internals')),
+    (['wireprotocol'], _('Wire Protocol'),
+     loaddoc('wireprotocol', subdir='internals')),
 ])
 
 def internalshelp(ui):
@@ -356,8 +358,8 @@ def help_(ui, name, unknowncmd=False, fu
             mod = extensions.find(name)
             doc = gettext(mod.__doc__) or ''
             if '\n' in doc.strip():
-                msg = _('(use "hg help -e %s" to show help for '
-                        'the %s extension)') % (name, name)
+                msg = _("(use 'hg help -e %s' to show help for "
+                        "the %s extension)") % (name, name)
                 rst.append('\n%s\n' % msg)
         except KeyError:
             pass
@@ -372,7 +374,7 @@ def help_(ui, name, unknowncmd=False, fu
 
         if not ui.verbose:
             if not full:
-                rst.append(_('\n(use "hg %s -h" to show more help)\n')
+                rst.append(_("\n(use 'hg %s -h' to show more help)\n")
                            % name)
             elif not ui.quiet:
                 rst.append(_('\n(some details hidden, use --verbose '
@@ -448,21 +450,21 @@ def help_(ui, name, unknowncmd=False, fu
                 rst.append('\n%s\n' % optrst(_("global options"),
                                              commands.globalopts, ui.verbose))
             if name == 'shortlist':
-                rst.append(_('\n(use "hg help" for the full list '
-                             'of commands)\n'))
+                rst.append(_("\n(use 'hg help' for the full list "
+                             "of commands)\n"))
         else:
             if name == 'shortlist':
-                rst.append(_('\n(use "hg help" for the full list of commands '
-                             'or "hg -v" for details)\n'))
+                rst.append(_("\n(use 'hg help' for the full list of commands "
+                             "or 'hg -v' for details)\n"))
             elif name and not full:
-                rst.append(_('\n(use "hg help %s" to show the full help '
-                             'text)\n') % name)
+                rst.append(_("\n(use 'hg help %s' to show the full help "
+                             "text)\n") % name)
             elif name and cmds and name in cmds.keys():
-                rst.append(_('\n(use "hg help -v -e %s" to show built-in '
-                             'aliases and global options)\n') % name)
+                rst.append(_("\n(use 'hg help -v -e %s' to show built-in "
+                             "aliases and global options)\n") % name)
             else:
-                rst.append(_('\n(use "hg help -v%s" to show built-in aliases '
-                             'and global options)\n')
+                rst.append(_("\n(use 'hg help -v%s' to show built-in aliases "
+                             "and global options)\n")
                            % (name and " " + name or ""))
         return rst
 
@@ -496,8 +498,8 @@ def help_(ui, name, unknowncmd=False, fu
 
         try:
             cmdutil.findcmd(name, commands.table)
-            rst.append(_('\nuse "hg help -c %s" to see help for '
-                        'the %s command\n') % (name, name))
+            rst.append(_("\nuse 'hg help -c %s' to see help for "
+                        "the %s command\n") % (name, name))
         except error.UnknownCommand:
             pass
         return rst
@@ -534,8 +536,8 @@ def help_(ui, name, unknowncmd=False, fu
             modcmds = set([c.partition('|')[0] for c in ct])
             rst.extend(helplist(modcmds.__contains__))
         else:
-            rst.append(_('(use "hg help extensions" for information on enabling'
-                       ' extensions)\n'))
+            rst.append(_("(use 'hg help extensions' for information on enabling"
+                       " extensions)\n"))
         return rst
 
     def helpextcmd(name, subtopic=None):
@@ -547,8 +549,8 @@ def help_(ui, name, unknowncmd=False, fu
                         "extension:") % cmd, {ext: doc}, indent=4,
                         showdeprecated=True)
         rst.append('\n')
-        rst.append(_('(use "hg help extensions" for information on enabling '
-                   'extensions)\n'))
+        rst.append(_("(use 'hg help extensions' for information on enabling "
+                   "extensions)\n"))
         return rst
 
 
@@ -573,7 +575,7 @@ def help_(ui, name, unknowncmd=False, fu
             rst.append('\n')
     if not rst:
         msg = _('no matches')
-        hint = _('try "hg help" for a list of topics')
+        hint = _("try 'hg help' for a list of topics")
         raise error.Abort(msg, hint=hint)
     elif name and name != 'shortlist':
         queries = []
@@ -596,7 +598,7 @@ def help_(ui, name, unknowncmd=False, fu
                 raise error.UnknownCommand(name)
         else:
             msg = _('no such help topic: %s') % name
-            hint = _('try "hg help --keyword %s"') % name
+            hint = _("try 'hg help --keyword %s'") % name
             raise error.Abort(msg, hint=hint)
     else:
         # program name
diff --git a/mercurial/help/config.txt b/mercurial/help/config.txt
--- a/mercurial/help/config.txt
+++ b/mercurial/help/config.txt
@@ -1393,6 +1393,12 @@ collected during profiling, while 'profi
 statistical text report generated from the profiling data. The
 profiling is done using lsprof.
 
+``enabled``
+    Enable the profiler.
+    (default: false)
+
+    This is equivalent to passing ``--profile`` on the command line.
+
 ``type``
     The type of profiler to use.
     (default: ls)
@@ -1557,6 +1563,21 @@ Controls generic server settings.
     repositories to the exchange format required by the bundle1 data
     format can consume a lot of CPU.
 
+``zliblevel``
+    Integer between ``-1`` and ``9`` that controls the zlib compression level
+    for wire protocol commands that send zlib compressed output (notably the
+    commands that send repository history data).
+
+    The default (``-1``) uses the default zlib compression level, which is
+    likely equivalent to ``6``. ``0`` means no compression. ``9`` means
+    maximum compression.
+
+    Setting this option allows server operators to make trade-offs between
+    bandwidth and CPU used. Lowering the compression lowers CPU utilization
+    but sends more bytes to clients.
+
+    This option only impacts the HTTP server.
+
 ``smtp``
 --------
 
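Taken together, the two documentation hunks above describe knobs that an hgrc
could set along these lines; the values are illustrative, not recommendations::

    [profiling]
    # equivalent to passing --profile
    enabled = true

    [server]
    # trade CPU for bandwidth on a fast internal network
    zliblevel = 1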
diff --git a/mercurial/help/internals/bundles.txt b/mercurial/help/internals/bundles.txt
--- a/mercurial/help/internals/bundles.txt
+++ b/mercurial/help/internals/bundles.txt
@@ -1,6 +1,3 @@
-Bundles
-=======
-
 A bundle is a container for repository data.
 
 Bundles are used as standalone files as well as the interchange format
@@ -8,7 +5,7 @@ over the wire protocol used when two Mer
 each other.
 
 Headers
--------
+=======
 
 Bundles produced since Mercurial 0.7 (September 2005) have a 4 byte
 header identifying the major bundle type. The header always begins with
diff --git a/mercurial/help/internals/changegroups.txt b/mercurial/help/internals/changegroups.txt
--- a/mercurial/help/internals/changegroups.txt
+++ b/mercurial/help/internals/changegroups.txt
@@ -1,6 +1,3 @@
-Changegroups
-============
-
 Changegroups are representations of repository revlog data, specifically
 the changelog, manifest, and filelogs.
 
@@ -35,7 +32,7 @@ There is a special case chunk that has 0
 call this an *empty chunk*.
 
 Delta Groups
-------------
+============
 
 A *delta group* expresses the content of a revlog as a series of deltas,
 or patches against previous revisions.
@@ -111,21 +108,21 @@ changegroup. This allows the delta to be
 which can result in smaller deltas and more efficient encoding of data.
 
 Changeset Segment
------------------
+=================
 
 The *changeset segment* consists of a single *delta group* holding
 changelog data. It is followed by an *empty chunk* to denote the
 boundary to the *manifests segment*.
 
 Manifest Segment
-----------------
+================
 
 The *manifest segment* consists of a single *delta group* holding
 manifest data. It is followed by an *empty chunk* to denote the boundary
 to the *filelogs segment*.
 
 Filelogs Segment
-----------------
+================
 
 The *filelogs* segment consists of multiple sub-segments, each
 corresponding to an individual file whose data is being described::
@@ -154,4 +151,3 @@ Each filelog sub-segment consists of the
 
 That is, a *chunk* consisting of the filename (not terminated or padded)
 followed by N chunks constituting the *delta group* for this file.
-
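Because every boundary in the segments above is marked by an *empty chunk*, a
reader can be written mechanically. A hedged sketch, assuming the framing used
by Mercurial's own changegroup reader (a 32-bit big-endian length that counts
the length field itself)::

    import struct

    def iterchunks(fh):
        # yield raw chunk payloads from a binary stream; an empty bytes
        # object marks an *empty chunk*, i.e. a delta group or segment
        # boundary (the "- 4" reflects the self-inclusive length field)
        while True:
            header = fh.read(4)
            if len(header) < 4:
                return  # end of stream
            length = struct.unpack('>l', header)[0]
            if length <= 4:
                yield b''
                continue
            yield fh.read(length - 4)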
diff --git a/mercurial/help/internals/requirements.txt b/mercurial/help/internals/requirements.txt
--- a/mercurial/help/internals/requirements.txt
+++ b/mercurial/help/internals/requirements.txt
@@ -1,5 +1,3 @@
-Requirements
-============
 
 Repositories contain a file (``.hg/requires``) containing a list of
 features/capabilities that are *required* for clients to interface
@@ -19,7 +17,7 @@ The following sections describe the requ
 Mercurial core distribution.
 
 revlogv1
---------
+========
 
 When present, revlogs are version 1 (RevlogNG). RevlogNG was introduced
 in 2006. The ``revlogv1`` requirement has been enabled by default
@@ -28,7 +26,7 @@ since the ``requires`` file was introduc
 If this requirement is not present, version 0 revlogs are assumed.
 
 store
------
+=====
 
 The *store* repository layout should be used.
 
@@ -36,7 +34,7 @@ This requirement has been enabled by def
 was introduced in Mercurial 0.9.2.
 
 fncache
--------
+=======
 
 The *fncache* repository layout should be used.
 
@@ -48,7 +46,7 @@ enabled (which is the default behavior).
 1.1 (released December 2008).
 
 shared
-------
+======
 
 Denotes that the store for a repository is shared from another location
 (defined by the ``.hg/sharedpath`` file).
@@ -58,7 +56,7 @@ This requirement is set when a repositor
 The requirement was added in Mercurial 1.3 (released July 2009).
 
 dotencode
---------- 
+=========
 
 The *dotencode* repository layout should be used.
 
@@ -70,7 +68,7 @@ is enabled (which is the default behavio
 Mercurial 1.7 (released November 2010).
 
 parentdelta
------------
+===========
 
 Denotes a revlog delta encoding format that was experimental and
 replaced by *generaldelta*. It should not be seen in the wild because
@@ -80,7 +78,7 @@ This requirement was added in Mercurial 
 1.9.
 
 generaldelta
-------------
+============
 
 Revlogs should be created with the *generaldelta* flag enabled. The
 generaldelta flag will cause deltas to be encoded against a parent
@@ -91,7 +89,7 @@ July 2011). The requirement was disabled
 default until Mercurial 3.7 (released February 2016).
 
 manifestv2
----------- 
+==========
 
 Denotes that version 2 of manifests are being used.
 
@@ -100,7 +98,7 @@ May 2015). The requirement is currently 
 by default.
 
 treemanifest
------------- 
+============
 
 Denotes that tree manifests are being used. Tree manifests are
 one manifest per directory (as opposed to a single flat manifest).
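As a hedged illustration of the contract described above, a client-side check
of ``.hg/requires`` might look like this; the ``SUPPORTED`` set is illustrative
and ``RuntimeError`` stands in for a proper error type::

    import os

    SUPPORTED = {'revlogv1', 'store', 'fncache', 'shared', 'dotencode',
                 'generaldelta', 'manifestv2', 'treemanifest'}

    def readrequires(repopath):
        path = os.path.join(repopath, '.hg', 'requires')
        if not os.path.exists(path):
            # no requires file: assume a very old repo (version 0 revlogs)
            return set()
        with open(path) as fh:
            required = {line.strip() for line in fh if line.strip()}
        missing = required - SUPPORTED
        if missing:
            raise RuntimeError('repository requires features unknown to '
                               'this client: %s' % ', '.join(sorted(missing)))
        return required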
diff --git a/mercurial/help/internals/revlogs.txt b/mercurial/help/internals/revlogs.txt
--- a/mercurial/help/internals/revlogs.txt
+++ b/mercurial/help/internals/revlogs.txt
@@ -1,6 +1,3 @@
-Revlogs
-=======
-
 Revision logs - or *revlogs* - are an append only data structure for
 storing discrete entries, or *revisions*. They are the primary storage
 mechanism of repository data.
@@ -28,7 +25,7 @@ revision #0 and the second is revision #
 used to mean *does not exist* or *not defined*.
 
 File Format
------------
+===========
 
 A revlog begins with a 32-bit big endian integer holding version info
 and feature flags. This integer is shared with the first revision
@@ -77,7 +74,7 @@ possibly located between index entries. 
 below.
 
 RevlogNG Format
---------------- 
+===============
 
 RevlogNG (version 1) begins with an index describing the revisions in
 the revlog. If the ``inline`` flag is set, revision data is stored inline,
@@ -129,7 +126,7 @@ The first 4 bytes of the revlog are shar
 and the 6 byte absolute offset field from the first revlog entry.
 
 Delta Chains
------------- 
+============
 
 Revision data is encoded as a chain of *chunks*. Each chain begins with
 the compressed original full text for that revision. Each subsequent
@@ -153,7 +150,7 @@ by default in Mercurial 3.7) activates t
 computed against an arbitrary revision (almost certainly a parent revision).
 
 File Storage
------------- 
+============
 
 Revlogs logically consist of an index (metadata of entries) and
 revision data. This data may be stored together in a single file or in
@@ -172,7 +169,7 @@ The actual layout of revlog files on dis
 (possibly containing inline data) and a ``.d`` file holds the revision data.
 
 Revision Entries
---------------- -
+================
 
 Revision entries consist of an optional 1 byte header followed by an
 encoding of the revision data. The headers are as follows:
@@ -187,7 +184,7 @@ x (0x78)
 The 0x78 value is actually the first byte of the zlib header (CMF byte).
 
 Hash Computation
---------------- -
+================
 
 The hash of the revision is stored in the index and is used both as a primary
 key and for data integrity verification.
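The version/flags word described under *File Format* decodes as below. The
inline-data flag constant here follows ``mercurial/revlog.py`` and should be
treated as an assumption of this sketch::

    import struct

    REVLOG_INLINE_DATA = 1 << 16  # assumed flag bit, per revlog.py

    def parseversion(first4bytes):
        # low 16 bits: format version; high 16 bits: feature flags
        word = struct.unpack('>I', first4bytes)[0]
        version = word & 0xFFFF
        inline = bool(word & REVLOG_INLINE_DATA)
        return version, inline

    print(parseversion(b'\x00\x01\x00\x01'))  # (1, True)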
diff --git a/mercurial/help/revsets.txt b/mercurial/help/revsets.txt
--- a/mercurial/help/revsets.txt
+++ b/mercurial/help/revsets.txt
@@ -12,11 +12,17 @@ Special characters can be used in quoted
 e.g., ``\n`` is interpreted as a newline. To prevent them from being
 interpreted, strings can be prefixed with ``r``, e.g. ``r'...'``.
 
+Prefix
+======
+
 There is a single prefix operator:
 
 ``not x``
   Changesets not in x. Short form is ``! x``.
 
+Infix
+=====
+
 These are the supported infix operators:
 
 ``x::y``
@@ -55,16 +61,40 @@ These are the supported infix operators:
 ``x~n``
   The nth first ancestor of x; ``x~0`` is x; ``x~3`` is ``x^^^``.
 
+``x ## y``
+  Concatenate strings and identifiers into one string.
+
+  All other prefix, infix and postfix operators have lower priority than
+  ``##``. For example, ``a1 ## a2~2`` is equivalent to ``(a1 ## a2)~2``.
+
+  For example::
+
+    [revsetalias]
+    issue(a1) = grep(r'\bissue[ :]?' ## a1 ## r'\b|\bbug\(' ## a1 ## r'\)')
+
+  ``issue(1234)`` is equivalent to
+  ``grep(r'\bissue[ :]?1234\b|\bbug\(1234\)')``
+  in this case. This matches against all of "issue 1234", "issue:1234",
+  "issue1234" and "bug(1234)".
+
+Postfix
+=======
+
 There is a single postfix operator:
 
 ``x^``
   Equivalent to ``x^1``, the first parent of each changeset in x.
 
+Predicates
+==========
 
 The following predicates are supported:
 
 .. predicatesmarker
 
+Aliases
+=======
+
 New predicates (known as "aliases") can be defined, using any combination of
 existing predicates or other aliases. An alias definition looks like::
@@ -86,18 +116,8 b' For example,
 defines three aliases, ``h``, ``d``, and ``rs``. ``rs(0:tip, author)`` is
 exactly equivalent to ``reverse(sort(0:tip, author))``.
 
-An infix operator ``##`` can concatenate strings and identifiers into
-one string. For example::
-
-  [revsetalias]
-  issue(a1) = grep(r'\bissue[ :]?' ## a1 ## r'\b|\bbug\(' ## a1 ## r'\)')
-
-``issue(1234)`` is equivalent to ``grep(r'\bissue[ :]?1234\b|\bbug\(1234\)')``
-in this case. This matches against all of "issue 1234", "issue:1234",
-"issue1234" and "bug(1234)".
-
-All other prefix, infix and postfix operators have lower priority than
-``##``. For example, ``a1 ## a2~2`` is equivalent to ``(a1 ## a2)~2``.
+Equivalents
+===========
 
 Command line equivalents for :hg:`log`::
 
@@ -110,6 +130,9 @@ Command line equivalents for :hg:`log`::
   -P x  ->  !::x
   -l x  ->  limit(expr, x)
 
+Examples
+========
+
 Some sample queries:
 
 - Changesets on the default branch::
diff --git a/mercurial/help/templates.txt b/mercurial/help/templates.txt
--- a/mercurial/help/templates.txt
+++ b/mercurial/help/templates.txt
@@ -43,6 +43,15 @@ In addition to filters, there are some b
 
 .. functionsmarker
 
+We provide a limited set of infix arithmetic operations on integers::
+
+  + for addition
+  - for subtraction
+  * for multiplication
+  / for floor division (division rounded to integer nearest -infinity)
+
+Division fulfils the law x = (x / y) * y + mod(x, y).
+
 Also, for any expression that returns a list, there is a list operator::
 
   expr % "{template}"
@@ -95,6 +104,10 @@ Some sample command line templates:
 
     $ hg log -r 0 --template "files: {join(files, ', ')}\n"
 
+- Join the list of files ending with ".py" with a ", "::
+
+    $ hg log -r 0 --template "pythonfiles: {join(files('**.py'), ', ')}\n"
+
 - Separate non-empty arguments by a " "::
 
     $ hg log -r 0 --template "{separate(' ', node, bookmarks, tags)}\n"
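As a hedged illustration (assuming an hg that carries this patch), the new
operators allow small computed fields directly in a template::

    $ hg log -r . --template "rev {rev}, next rev {rev + 1}\n"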
diff --git a/mercurial/hg.py b/mercurial/hg.py
--- a/mercurial/hg.py
+++ b/mercurial/hg.py
@@ -195,7 +195,7 @@ def defaultdest(source):
         return ''
     return os.path.basename(os.path.normpath(path))
 
-def share(ui, source, dest=None, update=True, bookmarks=True):
+def share(ui, source, dest=None, update=True, bookmarks=True, defaultpath=None):
     '''create a shared repository'''
 
     if not islocal(source):
@@ -240,10 +240,10 @@ def share(ui, source, dest=None, update=
     destvfs.write('sharedpath', sharedpath)
 
     r = repository(ui, destwvfs.base)
-    postshare(srcrepo, r, bookmarks=bookmarks)
+    postshare(srcrepo, r, bookmarks=bookmarks, defaultpath=defaultpath)
     _postshareupdate(r, update, checkout=checkout)
 
-def postshare(sourcerepo, destrepo, bookmarks=True):
+def postshare(sourcerepo, destrepo, bookmarks=True, defaultpath=None):
     """Called after a new shared repo is created.
 
     The new repo only has a requirements file and pointer to the source.
@@ -252,17 +252,18 @@ def postshare(sourcerepo, destrepo, book
     Extensions can wrap this function and write additional entries to
     destrepo/.hg/shared to indicate additional pieces of data to be shared.
     """
-    default = sourcerepo.ui.config('paths', 'default')
+    default = defaultpath or sourcerepo.ui.config('paths', 'default')
     if default:
         fp = destrepo.vfs("hgrc", "w", text=True)
         fp.write("[paths]\n")
         fp.write("default = %s\n" % default)
         fp.close()
 
-    if bookmarks:
-        fp = destrepo.vfs('shared', 'w')
-        fp.write(sharedbookmarks + '\n')
-        fp.close()
+    with destrepo.wlock():
+        if bookmarks:
+            fp = destrepo.vfs('shared', 'w')
+            fp.write(sharedbookmarks + '\n')
+            fp.close()
 
 def _postshareupdate(repo, update, checkout=None):
     """Maybe perform a working directory update after a shared repo is created.
@@ -373,8 +374,15 @@ def clonewithshare(ui, peeropts, sharepa
         clone(ui, peeropts, source, dest=sharepath, pull=True,
               rev=rev, update=False, stream=stream)
 
+    # Resolve the value to put in [paths] section for the source.
+    if islocal(source):
+        defaultpath = os.path.abspath(util.urllocalpath(source))
+    else:
+        defaultpath = source
+
     sharerepo = repository(ui, path=sharepath)
-    share(ui, sharerepo, dest=dest, update=False, bookmarks=False)
+    share(ui, sharerepo, dest=dest, update=False, bookmarks=False,
+          defaultpath=defaultpath)
 
     # We need to perform a pull against the dest repo to fetch bookmarks
     # and other non-store data that isn't shared by default. In the case of
@@ -737,20 +745,22 @@ def updatetotally(ui, repo, checkout, br
             if movemarkfrom == repo['.'].node():
                 pass # no-op update
             elif bookmarks.update(repo, [movemarkfrom], repo['.'].node()):
-                ui.status(_("updating bookmark %s\n") % repo._activebookmark)
+                b = ui.label(repo._activebookmark, 'bookmarks.active')
+                ui.status(_("updating bookmark %s\n") % b)
             else:
                 # this can happen with a non-linear update
-                ui.status(_("(leaving bookmark %s)\n") %
-                          repo._activebookmark)
+                b = ui.label(repo._activebookmark, 'bookmarks')
+                ui.status(_("(leaving bookmark %s)\n") % b)
                 bookmarks.deactivate(repo)
         elif brev in repo._bookmarks:
             if brev != repo._activebookmark:
-                ui.status(_("(activating bookmark %s)\n") % brev)
+                b = ui.label(brev, 'bookmarks.active')
+                ui.status(_("(activating bookmark %s)\n") % b)
             bookmarks.activate(repo, brev)
         elif brev:
             if repo._activebookmark:
-                ui.status(_("(leaving bookmark %s)\n") %
-                          repo._activebookmark)
+                b = ui.label(repo._activebookmark, 'bookmarks')
+                ui.status(_("(leaving bookmark %s)\n") % b)
                 bookmarks.deactivate(repo)
 
         if warndest:
@@ -758,10 +768,11 @@ def updatetotally(ui, repo, checkout, br
 
     return ret
 
-def merge(repo, node, force=None, remind=True, mergeforce=False):
+def merge(repo, node, force=None, remind=True, mergeforce=False, labels=None):
     """Branch merge with node, resolving changes. Return true if any
     unresolved conflicts."""
-    stats = mergemod.update(repo, node, True, force, mergeforce=mergeforce)
+    stats = mergemod.update(repo, node, True, force, mergeforce=mergeforce,
+                            labels=labels)
     _showstats(repo, stats)
     if stats[3]:
         repo.ui.status(_("use 'hg resolve' to retry unresolved file merges "
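A hedged sketch of what the new ``defaultpath`` keyword enables: callers such
as ``clonewithshare`` can pin the ``[paths] default`` written into the share
instead of inheriting whatever the pooled source's configuration carries. The
paths below are hypothetical::

    from mercurial import hg, ui as uimod

    u = uimod.ui()
    # the share's .hg/hgrc gets "default = https://hg.example.com/project"
    # rather than a path into the local share pool
    hg.share(u, source='/srv/pool/project', dest='/home/me/project-wc',
             defaultpath='https://hg.example.com/project')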
diff --git a/mercurial/hgweb/hgweb_mod.py b/mercurial/hgweb/hgweb_mod.py
--- a/mercurial/hgweb/hgweb_mod.py
+++ b/mercurial/hgweb/hgweb_mod.py
@@ -28,6 +28,7 @@ from .. import (
     error,
     hg,
     hook,
+    profiling,
     repoview,
     templatefilters,
     templater,
@@ -305,8 +306,9 @@ class hgweb(object):
         should be using instances of this class as the WSGI application.
         """
         with self._obtainrepo() as repo:
-            for r in self._runwsgi(req, repo):
-                yield r
+            with profiling.maybeprofile(repo.ui):
+                for r in self._runwsgi(req, repo):
+                    yield r
 
     def _runwsgi(self, req, repo):
         rctx = requestcontext(self, repo)
diff --git a/mercurial/hgweb/hgwebdir_mod.py b/mercurial/hgweb/hgwebdir_mod.py
--- a/mercurial/hgweb/hgwebdir_mod.py
+++ b/mercurial/hgweb/hgwebdir_mod.py
@@ -31,6 +31,7 @@ from .. import (
     encoding,
     error,
     hg,
+    profiling,
     scmutil,
     templater,
     ui as uimod,
@@ -217,6 +218,11 @@ class hgwebdir(object):
         return False
 
     def run_wsgi(self, req):
+        with profiling.maybeprofile(self.ui):
+            for r in self._runwsgi(req):
+                yield r
+
+    def _runwsgi(self, req):
         try:
             self.refresh()
 
diff --git a/mercurial/hgweb/protocol.py b/mercurial/hgweb/protocol.py
--- a/mercurial/hgweb/protocol.py
+++ b/mercurial/hgweb/protocol.py
@@ -73,14 +73,30 @@ class webproto(wireproto.abstractserverp
         val = self.ui.fout.getvalue()
         self.ui.ferr, self.ui.fout = self.oldio
         return val
-    def groupchunks(self, cg):
-        z = zlib.compressobj()
-        while True:
-            chunk = cg.read(4096)
-            if not chunk:
-                break
-            yield z.compress(chunk)
+
+    def groupchunks(self, fh):
+        def getchunks():
+            while True:
+                chunk = fh.read(32768)
+                if not chunk:
+                    break
+                yield chunk
+
+        return self.compresschunks(getchunks())
+
+    def compresschunks(self, chunks):
+        # Don't allow untrusted settings because disabling compression or
+        # setting a very high compression level could lead to flooding
+        # the server's network or CPU.
+        z = zlib.compressobj(self.ui.configint('server', 'zliblevel', -1))
+        for chunk in chunks:
+            data = z.compress(chunk)
+            # Not all calls to compress() emit data. It is cheaper to inspect
+            # that here than to send it via the generator.
+            if data:
+                yield data
         yield z.flush()
+
     def _client(self):
         return 'remote:%s:%s:%s' % (
             self.req.env.get('wsgi.url_scheme') or 'http',
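The CPU/bandwidth trade-off that ``server.zliblevel`` exposes (and that
``compresschunks`` now honors) can be measured with the standard library
alone::

    import zlib

    payload = b'a changeset line\n' * 50000  # stand-in for history data

    for level in (0, 1, 6, 9):
        z = zlib.compressobj(level)
        out = z.compress(payload) + z.flush()
        print('level %d -> %d bytes' % (level, len(out)))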
diff --git a/mercurial/hgweb/server.py b/mercurial/hgweb/server.py
--- a/mercurial/hgweb/server.py
+++ b/mercurial/hgweb/server.py
@@ -263,7 +263,7 @@ def openlog(opt, default):
         return open(opt, 'a')
     return default
 
-class MercurialHTTPServer(object, _mixin, httpservermod.httpserver):
+class MercurialHTTPServer(_mixin, httpservermod.httpserver, object):
 
     # SO_REUSEADDR has broken semantics on windows
     if os.name == 'nt':
diff --git a/mercurial/hgweb/webcommands.py b/mercurial/hgweb/webcommands.py
--- a/mercurial/hgweb/webcommands.py
+++ b/mercurial/hgweb/webcommands.py
@@ -31,7 +31,6 @@ from .. import (
     encoding,
     error,
     graphmod,
-    patch,
     revset,
     scmutil,
     templatefilters,
@@ -861,8 +860,6 @@ def annotate(web, req, tmpl):
     fctx = webutil.filectx(web.repo, req)
     f = fctx.path()
     parity = paritygen(web.stripecount)
-    diffopts = patch.difffeatureopts(web.repo.ui, untrusted=True,
-                                     section='annotate', whitespace=True)
 
     def parents(f):
         for p in f.parents():
@@ -877,8 +874,8 @@ def annotate(web, req, tmpl):
               or 'application/octet-stream')
         lines = [((fctx.filectx(fctx.filerev()), 1), '(binary:%s)' % mt)]
     else:
-        lines = fctx.annotate(follow=True, linenumber=True,
-                              diffopts=diffopts)
+        lines = webutil.annotate(fctx, web.repo.ui)
+
     previousrev = None
     blockparitygen = paritygen(1)
     for lineno, ((f, targetline), l) in enumerate(lines):
diff --git a/mercurial/hgweb/webutil.py b/mercurial/hgweb/webutil.py
--- a/mercurial/hgweb/webutil.py
+++ b/mercurial/hgweb/webutil.py
@@ -164,6 +164,11 @@ class _siblings(object):
     def __len__(self):
         return len(self.siblings)
 
+def annotate(fctx, ui):
+    diffopts = patch.difffeatureopts(ui, untrusted=True,
+                                     section='annotate', whitespace=True)
+    return fctx.annotate(follow=True, linenumber=True, diffopts=diffopts)
+
 def parents(ctx, hide=None):
     if isinstance(ctx, context.basefilectx):
         introrev = ctx.introrev()
diff --git a/mercurial/httpconnection.py b/mercurial/httpconnection.py
--- a/mercurial/httpconnection.py
+++ b/mercurial/httpconnection.py
@@ -58,6 +58,12 @@ class httpsendfile(object):
                           unit=_('kb'), total=self._total)
         return ret
 
+    def __enter__(self):
+        return self
+
+    def __exit__(self, exc_type, exc_val, exc_tb):
+        self.close()
+
 # moved here from url.py to avoid a cycle
 def readauthforuri(ui, uri, user):
     # Read configuration
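The two added methods are exactly the context-manager protocol, letting
callers tie the file's lifetime (and its progress bar) to a ``with`` block. A
minimal standalone sketch of the same protocol::

    import io

    class sendfile(object):
        # minimal stand-in showing the protocol the patch adds
        def __init__(self, fh):
            self._fh = fh
        def read(self, n=-1):
            return self._fh.read(n)
        def close(self):
            self._fh.close()
        def __enter__(self):
            return self
        def __exit__(self, exc_type, exc_val, exc_tb):
            self.close()

    with sendfile(io.BytesIO(b'bundle payload')) as body:
        print(body.read())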
diff --git a/mercurial/i18n.py b/mercurial/i18n.py
--- a/mercurial/i18n.py
+++ b/mercurial/i18n.py
@@ -12,7 +12,10 @@ import locale
 import os
 import sys
 
-from . import encoding
+from . import (
+    encoding,
+    pycompat,
+)
 
 # modelled after templater.templatepath:
 if getattr(sys, 'frozen', None) is not None:
@@ -27,10 +30,10 @@ except NameError:
 
 _languages = None
 if (os.name == 'nt'
-    and 'LANGUAGE' not in os.environ
-    and 'LC_ALL' not in os.environ
-    and 'LC_MESSAGES' not in os.environ
-    and 'LANG' not in os.environ):
+    and 'LANGUAGE' not in encoding.environ
+    and 'LC_ALL' not in encoding.environ
+    and 'LC_MESSAGES' not in encoding.environ
+    and 'LANG' not in encoding.environ):
     # Try to detect UI language by "User Interface Language Management" API
     # if no locale variables are set. Note that locale.getdefaultlocale()
     # uses GetLocaleInfo(), which may be different from UI language.
@@ -46,7 +49,7 @@ if (os.name == 'nt'
 _ugettext = None
 
 def setdatapath(datapath):
-    localedir = os.path.join(datapath, 'locale')
+    localedir = os.path.join(datapath, pycompat.sysstr('locale'))
     t = gettextmod.translation('hg', localedir, _languages, fallback=True)
     global _ugettext
     try:
@@ -85,16 +88,18 @@ def gettext(message):
             # means u.encode(sys.getdefaultencoding()).decode(enc). Since
             # the Python encoding defaults to 'ascii', this fails if the
             # translated string uses non-ASCII characters.
-            _msgcache[message] = u.encode(encoding.encoding, "replace")
+            encodingstr = pycompat.sysstr(encoding.encoding)
+            _msgcache[message] = u.encode(encodingstr, "replace")
         except LookupError:
             # An unknown encoding results in a LookupError.
             _msgcache[message] = message
     return _msgcache[message]
 
 def _plain():
-    if 'HGPLAIN' not in os.environ and 'HGPLAINEXCEPT' not in os.environ:
+    if ('HGPLAIN' not in encoding.environ
+        and 'HGPLAINEXCEPT' not in encoding.environ):
         return False
-    exceptions = os.environ.get('HGPLAINEXCEPT', '').strip().split(',')
+    exceptions = encoding.environ.get('HGPLAINEXCEPT', '').strip().split(',')
     return 'i18n' not in exceptions
 
 if _plain():
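Since ``_plain`` gates whether translations are loaded at all, its logic is
worth restating standalone; ``os.environ`` here stands in for the
``encoding.environ`` the patch switches to::

    import os

    def plain():
        env = os.environ  # the patched code reads encoding.environ
        if 'HGPLAIN' not in env and 'HGPLAINEXCEPT' not in env:
            return False
        exceptions = env.get('HGPLAINEXCEPT', '').strip().split(',')
        return 'i18n' not in exceptions

So ``HGPLAIN=1`` disables translations, while ``HGPLAINEXCEPT=i18n`` keeps
them even in plain mode.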
diff --git a/mercurial/localrepo.py b/mercurial/localrepo.py
--- a/mercurial/localrepo.py
+++ b/mercurial/localrepo.py
@@ -149,14 +149,18 @@ class localpeer(peer.peerrepository):
 
     def getbundle(self, source, heads=None, common=None, bundlecaps=None,
                   **kwargs):
-        cg = exchange.getbundle(self._repo, source, heads=heads,
-                                common=common, bundlecaps=bundlecaps, **kwargs)
+        chunks = exchange.getbundlechunks(self._repo, source, heads=heads,
+                                          common=common, bundlecaps=bundlecaps,
+                                          **kwargs)
+        cb = util.chunkbuffer(chunks)
+
         if bundlecaps is not None and 'HG20' in bundlecaps:
             # When requesting a bundle2, getbundle returns a stream to make the
             # wire level function happier. We need to build a proper object
             # from it in local peer.
-            cg = bundle2.getunbundler(self.ui, cg)
-        return cg
+            return bundle2.getunbundler(self.ui, cb)
+        else:
+            return changegroup.getunbundler('01', cb, None)
 
     # TODO We might want to move the next two calls into legacypeer and add
     # unbundle instead.
@@ -504,8 +508,9 @@ class localrepository(object):
     def manifest(self):
         return manifest.manifest(self.svfs)
 
-    def dirlog(self, dir):
-        return self.manifest.dirlog(dir)
+    @property
+    def manifestlog(self):
+        return manifest.manifestlog(self.svfs, self)
 
     @repofilecache('dirstate')
     def dirstate(self):
@@ -1007,8 +1012,7 @@ class localrepository(object):
     def transaction(self, desc, report=None):
         if (self.ui.configbool('devel', 'all-warnings')
                 or self.ui.configbool('devel', 'check-locks')):
-            l = self._lockref and self._lockref()
-            if l is None or not l.held:
+            if self._currentlock(self._lockref) is None:
                 raise RuntimeError('programming error: transaction requires '
                                    'locking')
         tr = self.currenttransaction()
@@ -1246,6 +1250,13 @@ class localrepository(object):
             delattr(self.unfiltered(), 'dirstate')
 
     def invalidate(self, clearfilecache=False):
+        '''Invalidates both store and non-store parts other than dirstate
+
+        If a transaction is running, invalidation of store is omitted,
+        because discarding in-memory changes might cause inconsistency
+        (e.g. incomplete fncache causes unintentional failure, but
+        redundant one doesn't).
+        '''
         unfiltered = self.unfiltered() # all file caches are stored unfiltered
         for k in self._filecache.keys():
             # dirstate is invalidated separately in invalidatedirstate()
@@ -1259,7 +1270,11 @@ class localrepository(object):
             except AttributeError:
                 pass
         self.invalidatecaches()
-        self.store.invalidatecaches()
+        if not self.currenttransaction():
+            # TODO: Changing contents of store outside transaction
+            # causes inconsistency. We should make in-memory store
+            # changes detectable, and abort if changed.
+            self.store.invalidatecaches()
 
     def invalidateall(self):
         '''Fully invalidates both store and non-store parts, causing the
@@ -1268,6 +1283,7 @@ class localrepository(object):
         self.invalidate()
         self.invalidatedirstate()
 
+    @unfilteredmethod
     def _refreshfilecachestats(self, tr):
        """Reload stats of cached files so that they are flagged as valid"""
        for k, ce in self._filecache.items():
@@ -1290,8 +1306,15 @@ class localrepository(object):
         except error.LockHeld as inst:
             if not wait:
                 raise
-            self.ui.warn(_("waiting for lock on %s held by %r\n") %
-                         (desc, inst.locker))
+            # show more details for new-style locks
+            if ':' in inst.locker:
+                host, pid = inst.locker.split(":", 1)
+                self.ui.warn(
+                    _("waiting for lock on %s held by process %r "
+                      "on host %r\n") % (desc, pid, host))
+            else:
+                self.ui.warn(_("waiting for lock on %s held by %r\n") %
+                             (desc, inst.locker))
             # default to 600 seconds timeout
             l = lockmod.lock(vfs, lockname,
                              int(self.ui.config("ui", "timeout", "600")),
@@ -1320,8 +1343,8 @@ class localrepository(object):
 
         If both 'lock' and 'wlock' must be acquired, ensure you always acquire
         'wlock' first to avoid a dead-lock hazard.'''
-        l = self._lockref and self._lockref()
-        if l is not None and l.held:
+        l = self._currentlock(self._lockref)
+        if l is not None:
             l.lock()
             return l
 
@@ -1352,8 +1375,7 @@ class localrepository(object):
         # acquisition would not cause dead-lock as they would just fail.
         if wait and (self.ui.configbool('devel', 'all-warnings')
                      or self.ui.configbool('devel', 'check-locks')):
-            l = self._lockref and self._lockref()
-            if l is not None and l.held:
+            if self._currentlock(self._lockref) is not None:
                 self.ui.develwarn('"wlock" acquired after "lock"')
 
         def unlock():
@@ -1607,8 +1629,8 @@ class localrepository(object):
             ms = mergemod.mergestate.read(self)
 
             if list(ms.unresolved()):
-                raise error.Abort(_('unresolved merge conflicts '
-                                    '(see "hg help resolve")'))
+                raise error.Abort(_("unresolved merge conflicts "
+                                    "(see 'hg help resolve')"))
             if ms.mdstate() != 's' or list(ms.driverresolved()):
                 raise error.Abort(_('driver-resolved merge conflicts'),
                                   hint=_('run "hg resolve --all" to resolve'))
@@ -1714,9 +1736,9 @@ class localrepository(object):
                 drop = [f for f in removed if f in m]
                 for f in drop:
                     del m[f]
-                mn = self.manifest.add(m, trp, linkrev,
-                                       p1.manifestnode(), p2.manifestnode(),
-                                       added, drop)
+                mn = self.manifestlog.add(m, trp, linkrev,
+                                          p1.manifestnode(), p2.manifestnode(),
+                                          added, drop)
                 files = changed + removed
             else:
                 mn = p1.manifestnode()
diff --git a/mercurial/mail.py b/mercurial/mail.py
--- a/mercurial/mail.py
+++ b/mercurial/mail.py
@@ -8,6 +8,8 @@
 from __future__ import absolute_import, print_function
 
 import email
+import email.charset
+import email.header
 import os
 import quopri
 import smtplib
@@ -23,7 +25,7 @@ from . import (
     util,
 )
 
-_oldheaderinit = email.Header.Header.__init__
+_oldheaderinit = email.header.Header.__init__
 def _unifiedheaderinit(self, *args, **kw):
     """
     Python 2.7 introduces a backwards incompatible change
@@ -203,24 +205,33 @@ def validateconfig(ui):
             raise error.Abort(_('%r specified as email transport, '
                                 'but not in PATH') % method)
 
+def codec2iana(cs):
+    '''return the canonical IANA charset name for a Python codec name'''
+    cs = email.charset.Charset(cs).input_charset.lower()
+
+    # "latin1" normalizes to "iso8859-1", standard calls for "iso-8859-1"
+    if cs.startswith("iso") and not cs.startswith("iso-"):
+        return "iso-" + cs[3:]
+    return cs
+
 def mimetextpatch(s, subtype='plain', display=False):
     '''Return MIME message suitable for a patch.
-    Charset will be detected as utf-8 or (possibly fake) us-ascii.
+    Charset will be detected by first trying to decode as us-ascii, then utf-8,
+    and finally the global encodings. If all those fail, fall back to
+    ISO-8859-1, an encoding that allows all byte sequences.
     Transfer encodings will be used if necessary.'''
 
-    cs = 'us-ascii'
-    if not display:
+    cs = ['us-ascii', 'utf-8', encoding.encoding, encoding.fallbackencoding]
+    if display:
+        return mimetextqp(s, subtype, 'us-ascii')
+    for charset in cs:
         try:
-            s.decode('us-ascii')
+            s.decode(charset)
+            return mimetextqp(s, subtype, codec2iana(charset))
         except UnicodeDecodeError:
-            try:
-                s.decode('utf-8')
-                cs = 'utf-8'
-            except UnicodeDecodeError:
-                # We'll go with us-ascii as a fallback.
-                pass
+            pass
 
-    return mimetextqp(s, subtype, cs)
+    return mimetextqp(s, subtype, "iso-8859-1")
 
 def mimetextqp(body, subtype, charset):
     '''Return MIME message.
@@ -279,7 +290,7 @@ def headencode(ui, s, charsets=None, dis
     if not display:
         # split into words?
         s, cs = _encode(ui, s, charsets)
-        return str(email.Header.Header(s, cs))
+        return str(email.header.Header(s, cs))
     return s
 
 def _addressencode(ui, name, addr, charsets=None):
@@ -330,7 +341,7 b' def mimeencode(ui, s, charsets=None, dis' | |||||
330 | def headdecode(s): |
|
341 | def headdecode(s): | |
331 | '''Decodes RFC-2047 header''' |
|
342 | '''Decodes RFC-2047 header''' | |
332 | uparts = [] |
|
343 | uparts = [] | |
333 |
for part, charset in email. |
|
344 | for part, charset in email.header.decode_header(s): | |
334 | if charset is not None: |
|
345 | if charset is not None: | |
335 | try: |
|
346 | try: | |
336 | uparts.append(part.decode(charset)) |
|
347 | uparts.append(part.decode(charset)) |
@@ -56,10 +56,10 b' static PyObject *nodeof(line *l) {' | |||||
56 | } |
|
56 | } | |
57 | if (l->hash_suffix != '\0') { |
|
57 | if (l->hash_suffix != '\0') { | |
58 | char newhash[21]; |
|
58 | char newhash[21]; | |
59 |
memcpy(newhash, Py |
|
59 | memcpy(newhash, PyBytes_AsString(hash), 20); | |
60 | Py_DECREF(hash); |
|
60 | Py_DECREF(hash); | |
61 | newhash[20] = l->hash_suffix; |
|
61 | newhash[20] = l->hash_suffix; | |
62 |
hash = Py |
|
62 | hash = PyBytes_FromStringAndSize(newhash, 21); | |
63 | } |
|
63 | } | |
64 | return hash; |
|
64 | return hash; | |
65 | } |
|
65 | } | |
@@ -79,7 +79,7 b' static PyObject *hashflags(line *l)' | |||||
79 |
|
79 | |||
80 | if (!hash) |
|
80 | if (!hash) | |
81 | return NULL; |
|
81 | return NULL; | |
82 |
flags = Py |
|
82 | flags = PyBytes_FromStringAndSize(s + hplen - 1, flen); | |
83 | if (!flags) { |
|
83 | if (!flags) { | |
84 | Py_DECREF(hash); |
|
84 | Py_DECREF(hash); | |
85 | return NULL; |
|
85 | return NULL; | |
@@ -144,7 +144,7 b' static int lazymanifest_init(lazymanifes' | |||||
144 | if (!PyArg_ParseTuple(args, "S", &pydata)) { |
|
144 | if (!PyArg_ParseTuple(args, "S", &pydata)) { | |
145 | return -1; |
|
145 | return -1; | |
146 | } |
|
146 | } | |
147 |
err = Py |
|
147 | err = PyBytes_AsStringAndSize(pydata, &data, &len); | |
148 |
|
148 | |||
149 | self->dirty = false; |
|
149 | self->dirty = false; | |
150 | if (err == -1) |
|
150 | if (err == -1) | |
@@ -238,10 +238,10 b' static PyObject *lmiter_iterentriesnext(' | |||||
238 | goto done; |
|
238 | goto done; | |
239 | } |
|
239 | } | |
240 | pl = pathlen(l); |
|
240 | pl = pathlen(l); | |
241 |
path = Py |
|
241 | path = PyBytes_FromStringAndSize(l->start, pl); | |
242 | hash = nodeof(l); |
|
242 | hash = nodeof(l); | |
243 | consumed = pl + 41; |
|
243 | consumed = pl + 41; | |
244 |
flags = Py |
|
244 | flags = PyBytes_FromStringAndSize(l->start + consumed, | |
245 | l->len - consumed - 1); |
|
245 | l->len - consumed - 1); | |
246 | if (!path || !hash || !flags) { |
|
246 | if (!path || !hash || !flags) { | |
247 | goto done; |
|
247 | goto done; | |
@@ -254,9 +254,15 b' done:' | |||||
254 | return ret; |
|
254 | return ret; | |
255 | } |
|
255 | } | |
256 |
|
256 | |||
|
257 | #ifdef IS_PY3K | |||
|
258 | #define LAZYMANIFESTENTRIESITERATOR_TPFLAGS Py_TPFLAGS_DEFAULT | |||
|
259 | #else | |||
|
260 | #define LAZYMANIFESTENTRIESITERATOR_TPFLAGS Py_TPFLAGS_DEFAULT \ | |||
|
261 | | Py_TPFLAGS_HAVE_ITER | |||
|
262 | #endif | |||
|
263 | ||||
257 | static PyTypeObject lazymanifestEntriesIterator = { |
|
264 | static PyTypeObject lazymanifestEntriesIterator = { | |
258 | PyObject_HEAD_INIT(NULL) |
|
265 | PyVarObject_HEAD_INIT(NULL, 0) | |
259 | 0, /*ob_size */ |
|
|||
260 | "parsers.lazymanifest.entriesiterator", /*tp_name */ |
|
266 | "parsers.lazymanifest.entriesiterator", /*tp_name */ | |
261 | sizeof(lmIter), /*tp_basicsize */ |
|
267 | sizeof(lmIter), /*tp_basicsize */ | |
262 | 0, /*tp_itemsize */ |
|
268 | 0, /*tp_itemsize */ | |
@@ -275,9 +281,7 b' static PyTypeObject lazymanifestEntriesI' | |||||
275 | 0, /*tp_getattro */ |
|
281 | 0, /*tp_getattro */ | |
276 | 0, /*tp_setattro */ |
|
282 | 0, /*tp_setattro */ | |
277 | 0, /*tp_as_buffer */ |
|
283 | 0, /*tp_as_buffer */ | |
278 | /* tp_flags: Py_TPFLAGS_HAVE_ITER tells python to |
|
284 | LAZYMANIFESTENTRIESITERATOR_TPFLAGS, /* tp_flags */ | |
279 | use tp_iter and tp_iternext fields. */ |
|
|||
280 | Py_TPFLAGS_DEFAULT | Py_TPFLAGS_HAVE_ITER, |
|
|||
281 | "Iterator for 3-tuples in a lazymanifest.", /* tp_doc */ |
|
285 | "Iterator for 3-tuples in a lazymanifest.", /* tp_doc */ | |
282 | 0, /* tp_traverse */ |
|
286 | 0, /* tp_traverse */ | |
283 | 0, /* tp_clear */ |
|
287 | 0, /* tp_clear */ | |
@@ -295,12 +299,18 b' static PyObject *lmiter_iterkeysnext(PyO' | |||||
295 | return NULL; |
|
299 | return NULL; | |
296 | } |
|
300 | } | |
297 | pl = pathlen(l); |
|
301 | pl = pathlen(l); | |
298 |
return Py |
|
302 | return PyBytes_FromStringAndSize(l->start, pl); | |
299 | } |
|
303 | } | |
300 |
|
304 | |||
|
305 | #ifdef IS_PY3K | |||
|
306 | #define LAZYMANIFESTKEYSITERATOR_TPFLAGS Py_TPFLAGS_DEFAULT | |||
|
307 | #else | |||
|
308 | #define LAZYMANIFESTKEYSITERATOR_TPFLAGS Py_TPFLAGS_DEFAULT \ | |||
|
309 | | Py_TPFLAGS_HAVE_ITER | |||
|
310 | #endif | |||
|
311 | ||||
301 | static PyTypeObject lazymanifestKeysIterator = { |
|
312 | static PyTypeObject lazymanifestKeysIterator = { | |
302 | PyObject_HEAD_INIT(NULL) |
|
313 | PyVarObject_HEAD_INIT(NULL, 0) | |
303 | 0, /*ob_size */ |
|
|||
304 | "parsers.lazymanifest.keysiterator", /*tp_name */ |
|
314 | "parsers.lazymanifest.keysiterator", /*tp_name */ | |
305 | sizeof(lmIter), /*tp_basicsize */ |
|
315 | sizeof(lmIter), /*tp_basicsize */ | |
306 | 0, /*tp_itemsize */ |
|
316 | 0, /*tp_itemsize */ | |
@@ -319,9 +329,7 b' static PyTypeObject lazymanifestKeysIter' | |||||
319 | 0, /*tp_getattro */ |
|
329 | 0, /*tp_getattro */ | |
320 | 0, /*tp_setattro */ |
|
330 | 0, /*tp_setattro */ | |
321 | 0, /*tp_as_buffer */ |
|
331 | 0, /*tp_as_buffer */ | |
322 | /* tp_flags: Py_TPFLAGS_HAVE_ITER tells python to |
|
332 | LAZYMANIFESTKEYSITERATOR_TPFLAGS, /* tp_flags */ | |
323 | use tp_iter and tp_iternext fields. */ |
|
|||
324 | Py_TPFLAGS_DEFAULT | Py_TPFLAGS_HAVE_ITER, |
|
|||
325 | "Keys iterator for a lazymanifest.", /* tp_doc */ |
|
333 | "Keys iterator for a lazymanifest.", /* tp_doc */ | |
326 | 0, /* tp_traverse */ |
|
334 | 0, /* tp_traverse */ | |
327 | 0, /* tp_clear */ |
|
335 | 0, /* tp_clear */ | |
@@ -388,12 +396,12 b' static PyObject *lazymanifest_getitem(la' | |||||
388 | { |
|
396 | { | |
389 | line needle; |
|
397 | line needle; | |
390 | line *hit; |
|
398 | line *hit; | |
391 |
if (!Py |
|
399 | if (!PyBytes_Check(key)) { | |
392 | PyErr_Format(PyExc_TypeError, |
|
400 | PyErr_Format(PyExc_TypeError, | |
393 | "getitem: manifest keys must be a string."); |
|
401 | "getitem: manifest keys must be a string."); | |
394 | return NULL; |
|
402 | return NULL; | |
395 | } |
|
403 | } | |
396 |
needle.start = Py |
|
404 | needle.start = PyBytes_AsString(key); | |
397 | hit = bsearch(&needle, self->lines, self->numlines, sizeof(line), |
|
405 | hit = bsearch(&needle, self->lines, self->numlines, sizeof(line), | |
398 | &linecmp); |
|
406 | &linecmp); | |
399 | if (!hit || hit->deleted) { |
|
407 | if (!hit || hit->deleted) { | |
@@ -407,12 +415,12 b' static int lazymanifest_delitem(lazymani' | |||||
407 | { |
|
415 | { | |
408 | line needle; |
|
416 | line needle; | |
409 | line *hit; |
|
417 | line *hit; | |
410 |
if (!Py |
|
418 | if (!PyBytes_Check(key)) { | |
411 | PyErr_Format(PyExc_TypeError, |
|
419 | PyErr_Format(PyExc_TypeError, | |
412 | "delitem: manifest keys must be a string."); |
|
420 | "delitem: manifest keys must be a string."); | |
413 | return -1; |
|
421 | return -1; | |
414 | } |
|
422 | } | |
415 |
needle.start = Py |
|
423 | needle.start = PyBytes_AsString(key); | |
416 | hit = bsearch(&needle, self->lines, self->numlines, sizeof(line), |
|
424 | hit = bsearch(&needle, self->lines, self->numlines, sizeof(line), | |
417 | &linecmp); |
|
425 | &linecmp); | |
418 | if (!hit || hit->deleted) { |
|
426 | if (!hit || hit->deleted) { | |
@@ -476,7 +484,7 b' static int lazymanifest_setitem(' | |||||
476 | char *dest; |
|
484 | char *dest; | |
477 | int i; |
|
485 | int i; | |
478 | line new; |
|
486 | line new; | |
479 |
if (!Py |
|
487 | if (!PyBytes_Check(key)) { | |
480 | PyErr_Format(PyExc_TypeError, |
|
488 | PyErr_Format(PyExc_TypeError, | |
481 | "setitem: manifest keys must be a string."); |
|
489 | "setitem: manifest keys must be a string."); | |
482 | return -1; |
|
490 | return -1; | |
@@ -489,17 +497,17 b' static int lazymanifest_setitem(' | |||||
489 | "Manifest values must be a tuple of (node, flags)."); |
|
497 | "Manifest values must be a tuple of (node, flags)."); | |
490 | return -1; |
|
498 | return -1; | |
491 | } |
|
499 | } | |
492 |
if (Py |
|
500 | if (PyBytes_AsStringAndSize(key, &path, &plen) == -1) { | |
493 | return -1; |
|
501 | return -1; | |
494 | } |
|
502 | } | |
495 |
|
503 | |||
496 | pyhash = PyTuple_GetItem(value, 0); |
|
504 | pyhash = PyTuple_GetItem(value, 0); | |
497 |
if (!Py |
|
505 | if (!PyBytes_Check(pyhash)) { | |
498 | PyErr_Format(PyExc_TypeError, |
|
506 | PyErr_Format(PyExc_TypeError, | |
499 | "node must be a 20-byte string"); |
|
507 | "node must be a 20-byte string"); | |
500 | return -1; |
|
508 | return -1; | |
501 | } |
|
509 | } | |
502 |
hlen = Py |
|
510 | hlen = PyBytes_Size(pyhash); | |
503 | /* Some parts of the codebase try and set 21 or 22 |
|
511 | /* Some parts of the codebase try and set 21 or 22 | |
504 | * byte "hash" values in order to perturb things for |
|
512 | * byte "hash" values in order to perturb things for | |
505 | * status. We have to preserve at least the 21st |
|
513 | * status. We have to preserve at least the 21st | |
@@ -511,15 +519,15 b' static int lazymanifest_setitem(' | |||||
511 | "node must be a 20-byte string"); |
|
519 | "node must be a 20-byte string"); | |
512 | return -1; |
|
520 | return -1; | |
513 | } |
|
521 | } | |
514 |
hash = Py |
|
522 | hash = PyBytes_AsString(pyhash); | |
515 |
|
523 | |||
516 | pyflags = PyTuple_GetItem(value, 1); |
|
524 | pyflags = PyTuple_GetItem(value, 1); | |
517 |
if (!Py |
|
525 | if (!PyBytes_Check(pyflags) || PyBytes_Size(pyflags) > 1) { | |
518 | PyErr_Format(PyExc_TypeError, |
|
526 | PyErr_Format(PyExc_TypeError, | |
519 | "flags must a 0 or 1 byte string"); |
|
527 | "flags must a 0 or 1 byte string"); | |
520 | return -1; |
|
528 | return -1; | |
521 | } |
|
529 | } | |
522 |
if (Py |
|
530 | if (PyBytes_AsStringAndSize(pyflags, &flags, &flen) == -1) { | |
523 | return -1; |
|
531 | return -1; | |
524 | } |
|
532 | } | |
525 | /* one null byte and one newline */ |
|
533 | /* one null byte and one newline */ | |
@@ -564,12 +572,12 b' static int lazymanifest_contains(lazyman' | |||||
564 | { |
|
572 | { | |
565 | line needle; |
|
573 | line needle; | |
566 | line *hit; |
|
574 | line *hit; | |
567 |
if (!Py |
|
575 | if (!PyBytes_Check(key)) { | |
568 | /* Our keys are always strings, so if the contains |
|
576 | /* Our keys are always strings, so if the contains | |
569 | * check is for a non-string, just return false. */ |
|
577 | * check is for a non-string, just return false. */ | |
570 | return 0; |
|
578 | return 0; | |
571 | } |
|
579 | } | |
572 |
needle.start = Py |
|
580 | needle.start = PyBytes_AsString(key); | |
573 | hit = bsearch(&needle, self->lines, self->numlines, sizeof(line), |
|
581 | hit = bsearch(&needle, self->lines, self->numlines, sizeof(line), | |
574 | &linecmp); |
|
582 | &linecmp); | |
575 | if (!hit || hit->deleted) { |
|
583 | if (!hit || hit->deleted) { | |
@@ -609,10 +617,10 b' static int compact(lazymanifest *self) {' | |||||
609 | need += self->lines[i].len; |
|
617 | need += self->lines[i].len; | |
610 | } |
|
618 | } | |
611 | } |
|
619 | } | |
612 |
pydata = Py |
|
620 | pydata = PyBytes_FromStringAndSize(NULL, need); | |
613 | if (!pydata) |
|
621 | if (!pydata) | |
614 | return -1; |
|
622 | return -1; | |
615 |
data = Py |
|
623 | data = PyBytes_AsString(pydata); | |
616 | if (!data) { |
|
624 | if (!data) { | |
617 | return -1; |
|
625 | return -1; | |
618 | } |
|
626 | } | |
@@ -747,7 +755,7 b' static PyObject *lazymanifest_diff(lazym' | |||||
747 | return NULL; |
|
755 | return NULL; | |
748 | } |
|
756 | } | |
749 | listclean = (!pyclean) ? false : PyObject_IsTrue(pyclean); |
|
757 | listclean = (!pyclean) ? false : PyObject_IsTrue(pyclean); | |
750 |
es = Py |
|
758 | es = PyBytes_FromString(""); | |
751 | if (!es) { |
|
759 | if (!es) { | |
752 | goto nomem; |
|
760 | goto nomem; | |
753 | } |
|
761 | } | |
@@ -787,8 +795,8 b' static PyObject *lazymanifest_diff(lazym' | |||||
787 | result = linecmp(left, right); |
|
795 | result = linecmp(left, right); | |
788 | } |
|
796 | } | |
789 | key = result <= 0 ? |
|
797 | key = result <= 0 ? | |
790 |
Py |
|
798 | PyBytes_FromString(left->start) : | |
791 |
Py |
|
799 | PyBytes_FromString(right->start); | |
792 | if (!key) |
|
800 | if (!key) | |
793 | goto nomem; |
|
801 | goto nomem; | |
794 | if (result < 0) { |
|
802 | if (result < 0) { | |
@@ -873,9 +881,14 b' static PyMethodDef lazymanifest_methods[' | |||||
873 | {NULL}, |
|
881 | {NULL}, | |
874 | }; |
|
882 | }; | |
875 |
|
883 | |||
|
884 | #ifdef IS_PY3K | |||
|
885 | #define LAZYMANIFEST_TPFLAGS Py_TPFLAGS_DEFAULT | |||
|
886 | #else | |||
|
887 | #define LAZYMANIFEST_TPFLAGS Py_TPFLAGS_DEFAULT | Py_TPFLAGS_HAVE_SEQUENCE_IN | |||
|
888 | #endif | |||
|
889 | ||||
876 | static PyTypeObject lazymanifestType = { |
|
890 | static PyTypeObject lazymanifestType = { | |
877 | PyObject_HEAD_INIT(NULL) |
|
891 | PyVarObject_HEAD_INIT(NULL, 0) | |
878 | 0, /* ob_size */ |
|
|||
879 | "parsers.lazymanifest", /* tp_name */ |
|
892 | "parsers.lazymanifest", /* tp_name */ | |
880 | sizeof(lazymanifest), /* tp_basicsize */ |
|
893 | sizeof(lazymanifest), /* tp_basicsize */ | |
881 | 0, /* tp_itemsize */ |
|
894 | 0, /* tp_itemsize */ | |
@@ -894,7 +907,7 b' static PyTypeObject lazymanifestType = {' | |||||
894 | 0, /* tp_getattro */ |
|
907 | 0, /* tp_getattro */ | |
895 | 0, /* tp_setattro */ |
|
908 | 0, /* tp_setattro */ | |
896 | 0, /* tp_as_buffer */ |
|
909 | 0, /* tp_as_buffer */ | |
897 | Py_TPFLAGS_DEFAULT | Py_TPFLAGS_HAVE_SEQUENCE_IN, /* tp_flags */ |
|
910 | LAZYMANIFEST_TPFLAGS, /* tp_flags */ | |
898 | "TODO(augie)", /* tp_doc */ |
|
911 | "TODO(augie)", /* tp_doc */ | |
899 | 0, /* tp_traverse */ |
|
912 | 0, /* tp_traverse */ | |
900 | 0, /* tp_clear */ |
|
913 | 0, /* tp_clear */ |
This diff has been collapsed as it changes many lines, (666 lines changed) Show them Hide them | |||||
@@ -104,69 +104,300 b' def _textv2(it):' | |||||
104 | _checkforbidden(files) |
|
104 | _checkforbidden(files) | |
105 | return ''.join(lines) |
|
105 | return ''.join(lines) | |
106 |
|
106 | |||
107 |
class |
|
107 | class lazymanifestiter(object): | |
108 | """This is the pure implementation of lazymanifest. |
|
108 | def __init__(self, lm): | |
|
109 | self.pos = 0 | |||
|
110 | self.lm = lm | |||
109 |
|
111 | |||
110 | It has not been optimized *at all* and is not lazy. |
|
112 | def __iter__(self): | |
111 | """ |
|
113 | return self | |
112 |
|
114 | |||
113 |
def |
|
115 | def next(self): | |
114 | dict.__init__(self) |
|
116 | try: | |
115 | for f, n, fl in _parse(data): |
|
117 | data, pos = self.lm._get(self.pos) | |
116 | self[f] = n, fl |
|
118 | except IndexError: | |
|
119 | raise StopIteration | |||
|
120 | if pos == -1: | |||
|
121 | self.pos += 1 | |||
|
122 | return data[0] | |||
|
123 | self.pos += 1 | |||
|
124 | zeropos = data.find('\x00', pos) | |||
|
125 | return data[pos:zeropos] | |||
117 |
|
126 | |||
118 | def __setitem__(self, k, v): |
|
127 | class lazymanifestiterentries(object): | |
119 | node, flag = v |
|
128 | def __init__(self, lm): | |
120 | assert node is not None |
|
129 | self.lm = lm | |
121 | if len(node) > 21: |
|
130 | self.pos = 0 | |
122 | node = node[:21] # match c implementation behavior |
|
|||
123 | dict.__setitem__(self, k, (node, flag)) |
|
|||
124 |
|
131 | |||
125 | def __iter__(self): |
|
132 | def __iter__(self): | |
126 |
return |
|
133 | return self | |
|
134 | ||||
|
135 | def next(self): | |||
|
136 | try: | |||
|
137 | data, pos = self.lm._get(self.pos) | |||
|
138 | except IndexError: | |||
|
139 | raise StopIteration | |||
|
140 | if pos == -1: | |||
|
141 | self.pos += 1 | |||
|
142 | return data | |||
|
143 | zeropos = data.find('\x00', pos) | |||
|
144 | hashval = unhexlify(data, self.lm.extrainfo[self.pos], | |||
|
145 | zeropos + 1, 40) | |||
|
146 | flags = self.lm._getflags(data, self.pos, zeropos) | |||
|
147 | self.pos += 1 | |||
|
148 | return (data[pos:zeropos], hashval, flags) | |||
|
149 | ||||
|
150 | def unhexlify(data, extra, pos, length): | |||
|
151 | s = data[pos:pos + length].decode('hex') | |||
|
152 | if extra: | |||
|
153 | s += chr(extra & 0xff) | |||
|
154 | return s | |||
|
155 | ||||
|
156 | def _cmp(a, b): | |||
|
157 | return (a > b) - (a < b) | |||
|
158 | ||||
|
159 | class _lazymanifest(object): | |||
|
160 | def __init__(self, data, positions=None, extrainfo=None, extradata=None): | |||
|
161 | if positions is None: | |||
|
162 | self.positions = self.findlines(data) | |||
|
163 | self.extrainfo = [0] * len(self.positions) | |||
|
164 | self.data = data | |||
|
165 | self.extradata = [] | |||
|
166 | else: | |||
|
167 | self.positions = positions[:] | |||
|
168 | self.extrainfo = extrainfo[:] | |||
|
169 | self.extradata = extradata[:] | |||
|
170 | self.data = data | |||
|
171 | ||||
|
172 | def findlines(self, data): | |||
|
173 | if not data: | |||
|
174 | return [] | |||
|
175 | pos = data.find("\n") | |||
|
176 | if pos == -1 or data[-1] != '\n': | |||
|
177 | raise ValueError("Manifest did not end in a newline.") | |||
|
178 | positions = [0] | |||
|
179 | prev = data[:data.find('\x00')] | |||
|
180 | while pos < len(data) - 1 and pos != -1: | |||
|
181 | positions.append(pos + 1) | |||
|
182 | nexts = data[pos + 1:data.find('\x00', pos + 1)] | |||
|
183 | if nexts < prev: | |||
|
184 | raise ValueError("Manifest lines not in sorted order.") | |||
|
185 | prev = nexts | |||
|
186 | pos = data.find("\n", pos + 1) | |||
|
187 | return positions | |||
|
188 | ||||
|
189 | def _get(self, index): | |||
|
190 | # get the position encoded in pos: | |||
|
191 | # positive number is an index in 'data' | |||
|
192 | # negative number is in extrapieces | |||
|
193 | pos = self.positions[index] | |||
|
194 | if pos >= 0: | |||
|
195 | return self.data, pos | |||
|
196 | return self.extradata[-pos - 1], -1 | |||
|
197 | ||||
|
198 | def _getkey(self, pos): | |||
|
199 | if pos >= 0: | |||
|
200 | return self.data[pos:self.data.find('\x00', pos + 1)] | |||
|
201 | return self.extradata[-pos - 1][0] | |||
|
202 | ||||
|
203 | def bsearch(self, key): | |||
|
204 | first = 0 | |||
|
205 | last = len(self.positions) - 1 | |||
|
206 | ||||
|
207 | while first <= last: | |||
|
208 | midpoint = (first + last)//2 | |||
|
209 | nextpos = self.positions[midpoint] | |||
|
210 | candidate = self._getkey(nextpos) | |||
|
211 | r = _cmp(key, candidate) | |||
|
212 | if r == 0: | |||
|
213 | return midpoint | |||
|
214 | else: | |||
|
215 | if r < 0: | |||
|
216 | last = midpoint - 1 | |||
|
217 | else: | |||
|
218 | first = midpoint + 1 | |||
|
219 | return -1 | |||
127 |
|
220 | |||
128 |
def |
|
221 | def bsearch2(self, key): | |
129 | return iter(sorted(dict.keys(self))) |
|
222 | # same as the above, but will always return the position | |
|
223 | # done for performance reasons | |||
|
224 | first = 0 | |||
|
225 | last = len(self.positions) - 1 | |||
|
226 | ||||
|
227 | while first <= last: | |||
|
228 | midpoint = (first + last)//2 | |||
|
229 | nextpos = self.positions[midpoint] | |||
|
230 | candidate = self._getkey(nextpos) | |||
|
231 | r = _cmp(key, candidate) | |||
|
232 | if r == 0: | |||
|
233 | return (midpoint, True) | |||
|
234 | else: | |||
|
235 | if r < 0: | |||
|
236 | last = midpoint - 1 | |||
|
237 | else: | |||
|
238 | first = midpoint + 1 | |||
|
239 | return (first, False) | |||
|
240 | ||||
|
241 | def __contains__(self, key): | |||
|
242 | return self.bsearch(key) != -1 | |||
|
243 | ||||
|
244 | def _getflags(self, data, needle, pos): | |||
|
245 | start = pos + 41 | |||
|
246 | end = data.find("\n", start) | |||
|
247 | if end == -1: | |||
|
248 | end = len(data) - 1 | |||
|
249 | if start == end: | |||
|
250 | return '' | |||
|
251 | return self.data[start:end] | |||
130 |
|
252 | |||
131 |
def |
|
253 | def __getitem__(self, key): | |
132 | return ((f, e[0], e[1]) for f, e in sorted(self.iteritems())) |
|
254 | if not isinstance(key, str): | |
|
255 | raise TypeError("getitem: manifest keys must be a string.") | |||
|
256 | needle = self.bsearch(key) | |||
|
257 | if needle == -1: | |||
|
258 | raise KeyError | |||
|
259 | data, pos = self._get(needle) | |||
|
260 | if pos == -1: | |||
|
261 | return (data[1], data[2]) | |||
|
262 | zeropos = data.find('\x00', pos) | |||
|
263 | assert 0 <= needle <= len(self.positions) | |||
|
264 | assert len(self.extrainfo) == len(self.positions) | |||
|
265 | hashval = unhexlify(data, self.extrainfo[needle], zeropos + 1, 40) | |||
|
266 | flags = self._getflags(data, needle, zeropos) | |||
|
267 | return (hashval, flags) | |||
|
268 | ||||
|
269 | def __delitem__(self, key): | |||
|
270 | needle, found = self.bsearch2(key) | |||
|
271 | if not found: | |||
|
272 | raise KeyError | |||
|
273 | cur = self.positions[needle] | |||
|
274 | self.positions = self.positions[:needle] + self.positions[needle + 1:] | |||
|
275 | self.extrainfo = self.extrainfo[:needle] + self.extrainfo[needle + 1:] | |||
|
276 | if cur >= 0: | |||
|
277 | self.data = self.data[:cur] + '\x00' + self.data[cur + 1:] | |||
|
278 | ||||
|
279 | def __setitem__(self, key, value): | |||
|
280 | if not isinstance(key, str): | |||
|
281 | raise TypeError("setitem: manifest keys must be a string.") | |||
|
282 | if not isinstance(value, tuple) or len(value) != 2: | |||
|
283 | raise TypeError("Manifest values must be a tuple of (node, flags).") | |||
|
284 | hashval = value[0] | |||
|
285 | if not isinstance(hashval, str) or not 20 <= len(hashval) <= 22: | |||
|
286 | raise TypeError("node must be a 20-byte string") | |||
|
287 | flags = value[1] | |||
|
288 | if len(hashval) == 22: | |||
|
289 | hashval = hashval[:-1] | |||
|
290 | if not isinstance(flags, str) or len(flags) > 1: | |||
|
291 | raise TypeError("flags must a 0 or 1 byte string, got %r", flags) | |||
|
292 | needle, found = self.bsearch2(key) | |||
|
293 | if found: | |||
|
294 | # put the item | |||
|
295 | pos = self.positions[needle] | |||
|
296 | if pos < 0: | |||
|
297 | self.extradata[-pos - 1] = (key, hashval, value[1]) | |||
|
298 | else: | |||
|
299 | # just don't bother | |||
|
300 | self.extradata.append((key, hashval, value[1])) | |||
|
301 | self.positions[needle] = -len(self.extradata) | |||
|
302 | else: | |||
|
303 | # not found, put it in with extra positions | |||
|
304 | self.extradata.append((key, hashval, value[1])) | |||
|
305 | self.positions = (self.positions[:needle] + [-len(self.extradata)] | |||
|
306 | + self.positions[needle:]) | |||
|
307 | self.extrainfo = (self.extrainfo[:needle] + [0] + | |||
|
308 | self.extrainfo[needle:]) | |||
133 |
|
309 | |||
134 | def copy(self): |
|
310 | def copy(self): | |
135 | c = _lazymanifest('') |
|
311 | # XXX call _compact like in C? | |
136 | c.update(self) |
|
312 | return _lazymanifest(self.data, self.positions, self.extrainfo, | |
137 | return c |
|
313 | self.extradata) | |
|
314 | ||||
|
315 | def _compact(self): | |||
|
316 | # hopefully not called TOO often | |||
|
317 | if len(self.extradata) == 0: | |||
|
318 | return | |||
|
319 | l = [] | |||
|
320 | last_cut = 0 | |||
|
321 | i = 0 | |||
|
322 | offset = 0 | |||
|
323 | self.extrainfo = [0] * len(self.positions) | |||
|
324 | while i < len(self.positions): | |||
|
325 | if self.positions[i] >= 0: | |||
|
326 | cur = self.positions[i] | |||
|
327 | last_cut = cur | |||
|
328 | while True: | |||
|
329 | self.positions[i] = offset | |||
|
330 | i += 1 | |||
|
331 | if i == len(self.positions) or self.positions[i] < 0: | |||
|
332 | break | |||
|
333 | offset += self.positions[i] - cur | |||
|
334 | cur = self.positions[i] | |||
|
335 | end_cut = self.data.find('\n', cur) | |||
|
336 | if end_cut != -1: | |||
|
337 | end_cut += 1 | |||
|
338 | offset += end_cut - cur | |||
|
339 | l.append(self.data[last_cut:end_cut]) | |||
|
340 | else: | |||
|
341 | while i < len(self.positions) and self.positions[i] < 0: | |||
|
342 | cur = self.positions[i] | |||
|
343 | t = self.extradata[-cur - 1] | |||
|
344 | l.append(self._pack(t)) | |||
|
345 | self.positions[i] = offset | |||
|
346 | if len(t[1]) > 20: | |||
|
347 | self.extrainfo[i] = ord(t[1][21]) | |||
|
348 | offset += len(l[-1]) | |||
|
349 | i += 1 | |||
|
350 | self.data = ''.join(l) | |||
|
351 | self.extradata = [] | |||
|
352 | ||||
|
353 | def _pack(self, d): | |||
|
354 | return d[0] + '\x00' + d[1][:20].encode('hex') + d[2] + '\n' | |||
|
355 | ||||
|
356 | def text(self): | |||
|
357 | self._compact() | |||
|
358 | return self.data | |||
138 |
|
359 | |||
139 | def diff(self, m2, clean=False): |
|
360 | def diff(self, m2, clean=False): | |
140 | '''Finds changes between the current manifest and m2.''' |
|
361 | '''Finds changes between the current manifest and m2.''' | |
|
362 | # XXX think whether efficiency matters here | |||
141 | diff = {} |
|
363 | diff = {} | |
142 |
|
364 | |||
143 |
for fn, e1 in self.iter |
|
365 | for fn, e1, flags in self.iterentries(): | |
144 | if fn not in m2: |
|
366 | if fn not in m2: | |
145 | diff[fn] = e1, (None, '') |
|
367 | diff[fn] = (e1, flags), (None, '') | |
146 | else: |
|
368 | else: | |
147 | e2 = m2[fn] |
|
369 | e2 = m2[fn] | |
148 | if e1 != e2: |
|
370 | if (e1, flags) != e2: | |
149 | diff[fn] = e1, e2 |
|
371 | diff[fn] = (e1, flags), e2 | |
150 | elif clean: |
|
372 | elif clean: | |
151 | diff[fn] = None |
|
373 | diff[fn] = None | |
152 |
|
374 | |||
153 |
for fn, e2 in m2.iter |
|
375 | for fn, e2, flags in m2.iterentries(): | |
154 | if fn not in self: |
|
376 | if fn not in self: | |
155 | diff[fn] = (None, ''), e2 |
|
377 | diff[fn] = (None, ''), (e2, flags) | |
156 |
|
378 | |||
157 | return diff |
|
379 | return diff | |
158 |
|
380 | |||
|
381 | def iterentries(self): | |||
|
382 | return lazymanifestiterentries(self) | |||
|
383 | ||||
|
384 | def iterkeys(self): | |||
|
385 | return lazymanifestiter(self) | |||
|
386 | ||||
|
387 | def __iter__(self): | |||
|
388 | return lazymanifestiter(self) | |||
|
389 | ||||
|
390 | def __len__(self): | |||
|
391 | return len(self.positions) | |||
|
392 | ||||
159 | def filtercopy(self, filterfn): |
|
393 | def filtercopy(self, filterfn): | |
|
394 | # XXX should be optimized | |||
160 | c = _lazymanifest('') |
|
395 | c = _lazymanifest('') | |
161 | for f, n, fl in self.iterentries(): |
|
396 | for f, n, fl in self.iterentries(): | |
162 | if filterfn(f): |
|
397 | if filterfn(f): | |
163 | c[f] = n, fl |
|
398 | c[f] = n, fl | |
164 | return c |
|
399 | return c | |
165 |
|
400 | |||
166 | def text(self): |
|
|||
167 | """Get the full data of this manifest as a bytestring.""" |
|
|||
168 | return _textv1(self.iterentries()) |
|
|||
169 |
|
||||
170 | try: |
|
401 | try: | |
171 | _lazymanifest = parsers.lazymanifest |
|
402 | _lazymanifest = parsers.lazymanifest | |
172 | except AttributeError: |
|
403 | except AttributeError: | |
@@ -882,6 +1113,8 b' class treemanifest(object):' | |||||
882 |
|
1113 | |||
883 | def writesubtrees(self, m1, m2, writesubtree): |
|
1114 | def writesubtrees(self, m1, m2, writesubtree): | |
884 | self._load() # for consistency; should never have any effect here |
|
1115 | self._load() # for consistency; should never have any effect here | |
|
1116 | m1._load() | |||
|
1117 | m2._load() | |||
885 | emptytree = treemanifest() |
|
1118 | emptytree = treemanifest() | |
886 | for d, subm in self._dirs.iteritems(): |
|
1119 | for d, subm in self._dirs.iteritems(): | |
887 | subp1 = m1._dirs.get(d, emptytree)._node |
|
1120 | subp1 = m1._dirs.get(d, emptytree)._node | |
@@ -890,12 +1123,11 b' class treemanifest(object):' | |||||
890 | subp1, subp2 = subp2, subp1 |
|
1123 | subp1, subp2 = subp2, subp1 | |
891 | writesubtree(subm, subp1, subp2) |
|
1124 | writesubtree(subm, subp1, subp2) | |
892 |
|
1125 | |||
893 | class manifest(revlog.revlog): |
|
1126 | class manifestrevlog(revlog.revlog): | |
|
1127 | '''A revlog that stores manifest texts. This is responsible for caching the | |||
|
1128 | full-text manifest contents. | |||
|
1129 | ''' | |||
894 | def __init__(self, opener, dir='', dirlogcache=None): |
|
1130 | def __init__(self, opener, dir='', dirlogcache=None): | |
895 | '''The 'dir' and 'dirlogcache' arguments are for internal use by |
|
|||
896 | manifest.manifest only. External users should create a root manifest |
|
|||
897 | log with manifest.manifest(opener) and call dirlog() on it. |
|
|||
898 | ''' |
|
|||
899 | # During normal operations, we expect to deal with not more than four |
|
1131 | # During normal operations, we expect to deal with not more than four | |
900 | # revs at a time (such as during commit --amend). When rebasing large |
|
1132 | # revs at a time (such as during commit --amend). When rebasing large | |
901 | # stacks of commits, the number can go up, hence the config knob below. |
|
1133 | # stacks of commits, the number can go up, hence the config knob below. | |
@@ -907,17 +1139,18 b' class manifest(revlog.revlog):' | |||||
907 | cachesize = opts.get('manifestcachesize', cachesize) |
|
1139 | cachesize = opts.get('manifestcachesize', cachesize) | |
908 | usetreemanifest = opts.get('treemanifest', usetreemanifest) |
|
1140 | usetreemanifest = opts.get('treemanifest', usetreemanifest) | |
909 | usemanifestv2 = opts.get('manifestv2', usemanifestv2) |
|
1141 | usemanifestv2 = opts.get('manifestv2', usemanifestv2) | |
910 | self._mancache = util.lrucachedict(cachesize) |
|
1142 | ||
911 | self._treeinmem = usetreemanifest |
|
|||
912 | self._treeondisk = usetreemanifest |
|
1143 | self._treeondisk = usetreemanifest | |
913 | self._usemanifestv2 = usemanifestv2 |
|
1144 | self._usemanifestv2 = usemanifestv2 | |
|
1145 | ||||
|
1146 | self._fulltextcache = util.lrucachedict(cachesize) | |||
|
1147 | ||||
914 | indexfile = "00manifest.i" |
|
1148 | indexfile = "00manifest.i" | |
915 | if dir: |
|
1149 | if dir: | |
916 | assert self._treeondisk |
|
1150 | assert self._treeondisk, 'opts is %r' % opts | |
917 | if not dir.endswith('/'): |
|
1151 | if not dir.endswith('/'): | |
918 | dir = dir + '/' |
|
1152 | dir = dir + '/' | |
919 | indexfile = "meta/" + dir + "00manifest.i" |
|
1153 | indexfile = "meta/" + dir + "00manifest.i" | |
920 | revlog.revlog.__init__(self, opener, indexfile) |
|
|||
921 | self._dir = dir |
|
1154 | self._dir = dir | |
922 | # The dirlogcache is kept on the root manifest log |
|
1155 | # The dirlogcache is kept on the root manifest log | |
923 | if dir: |
|
1156 | if dir: | |
@@ -925,12 +1158,281 b' class manifest(revlog.revlog):' | |||||
925 | else: |
|
1158 | else: | |
926 | self._dirlogcache = {'': self} |
|
1159 | self._dirlogcache = {'': self} | |
927 |
|
1160 | |||
|
1161 | super(manifestrevlog, self).__init__(opener, indexfile, | |||
|
1162 | checkambig=bool(dir)) | |||
|
1163 | ||||
|
1164 | @property | |||
|
1165 | def fulltextcache(self): | |||
|
1166 | return self._fulltextcache | |||
|
1167 | ||||
|
1168 | def clearcaches(self): | |||
|
1169 | super(manifestrevlog, self).clearcaches() | |||
|
1170 | self._fulltextcache.clear() | |||
|
1171 | self._dirlogcache = {'': self} | |||
|
1172 | ||||
|
1173 | def dirlog(self, dir): | |||
|
1174 | if dir: | |||
|
1175 | assert self._treeondisk | |||
|
1176 | if dir not in self._dirlogcache: | |||
|
1177 | self._dirlogcache[dir] = manifestrevlog(self.opener, dir, | |||
|
1178 | self._dirlogcache) | |||
|
1179 | return self._dirlogcache[dir] | |||
|
1180 | ||||
|
1181 | def add(self, m, transaction, link, p1, p2, added, removed): | |||
|
1182 | if (p1 in self.fulltextcache and util.safehasattr(m, 'fastdelta') | |||
|
1183 | and not self._usemanifestv2): | |||
|
1184 | # If our first parent is in the manifest cache, we can | |||
|
1185 | # compute a delta here using properties we know about the | |||
|
1186 | # manifest up-front, which may save time later for the | |||
|
1187 | # revlog layer. | |||
|
1188 | ||||
|
1189 | _checkforbidden(added) | |||
|
1190 | # combine the changed lists into one sorted iterator | |||
|
1191 | work = heapq.merge([(x, False) for x in added], | |||
|
1192 | [(x, True) for x in removed]) | |||
|
1193 | ||||
|
1194 | arraytext, deltatext = m.fastdelta(self.fulltextcache[p1], work) | |||
|
1195 | cachedelta = self.rev(p1), deltatext | |||
|
1196 | text = util.buffer(arraytext) | |||
|
1197 | n = self.addrevision(text, transaction, link, p1, p2, cachedelta) | |||
|
1198 | else: | |||
|
1199 | # The first parent manifest isn't already loaded, so we'll | |||
|
1200 | # just encode a fulltext of the manifest and pass that | |||
|
1201 | # through to the revlog layer, and let it handle the delta | |||
|
1202 | # process. | |||
|
1203 | if self._treeondisk: | |||
|
1204 | m1 = self.read(p1) | |||
|
1205 | m2 = self.read(p2) | |||
|
1206 | n = self._addtree(m, transaction, link, m1, m2) | |||
|
1207 | arraytext = None | |||
|
1208 | else: | |||
|
1209 | text = m.text(self._usemanifestv2) | |||
|
1210 | n = self.addrevision(text, transaction, link, p1, p2) | |||
|
1211 | arraytext = array.array('c', text) | |||
|
1212 | ||||
|
1213 | if arraytext is not None: | |||
|
1214 | self.fulltextcache[n] = arraytext | |||
|
1215 | ||||
|
1216 | return n | |||
|
1217 | ||||
|
1218 | def _addtree(self, m, transaction, link, m1, m2): | |||
|
1219 | # If the manifest is unchanged compared to one parent, | |||
|
1220 | # don't write a new revision | |||
|
1221 | if m.unmodifiedsince(m1) or m.unmodifiedsince(m2): | |||
|
1222 | return m.node() | |||
|
1223 | def writesubtree(subm, subp1, subp2): | |||
|
1224 | sublog = self.dirlog(subm.dir()) | |||
|
1225 | sublog.add(subm, transaction, link, subp1, subp2, None, None) | |||
|
1226 | m.writesubtrees(m1, m2, writesubtree) | |||
|
1227 | text = m.dirtext(self._usemanifestv2) | |||
|
1228 | # Double-check whether contents are unchanged to one parent | |||
|
1229 | if text == m1.dirtext(self._usemanifestv2): | |||
|
1230 | n = m1.node() | |||
|
1231 | elif text == m2.dirtext(self._usemanifestv2): | |||
|
1232 | n = m2.node() | |||
|
1233 | else: | |||
|
1234 | n = self.addrevision(text, transaction, link, m1.node(), m2.node()) | |||
|
1235 | # Save nodeid so parent manifest can calculate its nodeid | |||
|
1236 | m.setnode(n) | |||
|
1237 | return n | |||
|
1238 | ||||
|
1239 | class manifestlog(object): | |||
|
1240 | """A collection class representing the collection of manifest snapshots | |||
|
1241 | referenced by commits in the repository. | |||
|
1242 | ||||
|
1243 | In this situation, 'manifest' refers to the abstract concept of a snapshot | |||
|
1244 | of the list of files in the given commit. Consumers of the output of this | |||
|
1245 | class do not care about the implementation details of the actual manifests | |||
|
1246 | they receive (i.e. tree or flat or lazily loaded, etc).""" | |||
|
1247 | def __init__(self, opener, repo): | |||
|
1248 | self._repo = repo | |||
|
1249 | ||||
|
1250 | usetreemanifest = False | |||
|
1251 | ||||
|
1252 | opts = getattr(opener, 'options', None) | |||
|
1253 | if opts is not None: | |||
|
1254 | usetreemanifest = opts.get('treemanifest', usetreemanifest) | |||
|
1255 | self._treeinmem = usetreemanifest | |||
|
1256 | ||||
|
1257 | # We'll separate this into it's own cache once oldmanifest is no longer | |||
|
1258 | # used | |||
|
1259 | self._mancache = repo.manifest._mancache | |||
|
1260 | ||||
|
1261 | @property | |||
|
1262 | def _revlog(self): | |||
|
1263 | return self._repo.manifest | |||
|
1264 | ||||
|
1265 | def __getitem__(self, node): | |||
|
1266 | """Retrieves the manifest instance for the given node. Throws a KeyError | |||
|
1267 | if not found. | |||
|
1268 | """ | |||
|
1269 | if node in self._mancache: | |||
|
1270 | cachemf = self._mancache[node] | |||
|
1271 | # The old manifest may put non-ctx manifests in the cache, so skip | |||
|
1272 | # those since they don't implement the full api. | |||
|
1273 | if (isinstance(cachemf, manifestctx) or | |||
|
1274 | isinstance(cachemf, treemanifestctx)): | |||
|
1275 | return cachemf | |||
|
1276 | ||||
|
1277 | if self._treeinmem: | |||
|
1278 | m = treemanifestctx(self._revlog, '', node) | |||
|
1279 | else: | |||
|
1280 | m = manifestctx(self._revlog, node) | |||
|
1281 | if node != revlog.nullid: | |||
|
1282 | self._mancache[node] = m | |||
|
1283 | return m | |||
|
1284 | ||||
|
1285 | def add(self, m, transaction, link, p1, p2, added, removed): | |||
|
1286 | return self._revlog.add(m, transaction, link, p1, p2, added, removed) | |||
|
1287 | ||||
|
1288 | class manifestctx(object): | |||
|
1289 | """A class representing a single revision of a manifest, including its | |||
|
1290 | contents, its parent revs, and its linkrev. | |||
|
1291 | """ | |||
|
1292 | def __init__(self, revlog, node): | |||
|
1293 | self._revlog = revlog | |||
|
1294 | self._data = None | |||
|
1295 | ||||
|
1296 | self._node = node | |||
|
1297 | ||||
|
1298 | # TODO: We eventually want p1, p2, and linkrev exposed on this class, | |||
|
1299 | # but let's add it later when something needs it and we can load it | |||
|
1300 | # lazily. | |||
|
1301 | #self.p1, self.p2 = revlog.parents(node) | |||
|
1302 | #rev = revlog.rev(node) | |||
|
1303 | #self.linkrev = revlog.linkrev(rev) | |||
|
1304 | ||||
|
1305 | def node(self): | |||
|
1306 | return self._node | |||
|
1307 | ||||
|
1308 | def read(self): | |||
|
1309 | if not self._data: | |||
|
1310 | if self._node == revlog.nullid: | |||
|
1311 | self._data = manifestdict() | |||
|
1312 | else: | |||
|
1313 | text = self._revlog.revision(self._node) | |||
|
1314 | arraytext = array.array('c', text) | |||
|
1315 | self._revlog._fulltextcache[self._node] = arraytext | |||
|
1316 | self._data = manifestdict(text) | |||
|
1317 | return self._data | |||
|
1318 | ||||
|
1319 | def readfast(self): | |||
|
1320 | rl = self._revlog | |||
|
1321 | r = rl.rev(self._node) | |||
|
1322 | deltaparent = rl.deltaparent(r) | |||
|
1323 | if deltaparent != revlog.nullrev and deltaparent in rl.parentrevs(r): | |||
|
1324 | return self.readdelta() | |||
|
1325 | return self.read() | |||
|
1326 | ||||
|
1327 | def readdelta(self): | |||
|
1328 | revlog = self._revlog | |||
|
1329 | if revlog._usemanifestv2: | |||
|
1330 | # Need to perform a slow delta | |||
|
1331 | r0 = revlog.deltaparent(revlog.rev(self._node)) | |||
|
1332 | m0 = manifestctx(revlog, revlog.node(r0)).read() | |||
|
1333 | m1 = self.read() | |||
|
1334 | md = manifestdict() | |||
|
1335 | for f, ((n0, fl0), (n1, fl1)) in m0.diff(m1).iteritems(): | |||
|
1336 | if n1: | |||
|
1337 | md[f] = n1 | |||
|
1338 | if fl1: | |||
|
1339 | md.setflag(f, fl1) | |||
|
1340 | return md | |||
|
1341 | ||||
|
1342 | r = revlog.rev(self._node) | |||
|
1343 | d = mdiff.patchtext(revlog.revdiff(revlog.deltaparent(r), r)) | |||
|
1344 | return manifestdict(d) | |||
|
1345 | ||||
|
1346 | class treemanifestctx(object): | |||
|
1347 | def __init__(self, revlog, dir, node): | |||
|
1348 | revlog = revlog.dirlog(dir) | |||
|
1349 | self._revlog = revlog | |||
|
1350 | self._dir = dir | |||
|
1351 | self._data = None | |||
|
1352 | ||||
|
1353 | self._node = node | |||
|
1354 | ||||
|
1355 | # TODO: Load p1/p2/linkrev lazily. They need to be lazily loaded so that | |||
|
1356 | # we can instantiate treemanifestctx objects for directories we don't | |||
|
1357 | # have on disk. | |||
|
1358 | #self.p1, self.p2 = revlog.parents(node) | |||
|
1359 | #rev = revlog.rev(node) | |||
|
1360 | #self.linkrev = revlog.linkrev(rev) | |||
|
1361 | ||||
|
1362 | def read(self): | |||
|
1363 | if not self._data: | |||
|
1364 | if self._node == revlog.nullid: | |||
|
1365 | self._data = treemanifest() | |||
|
1366 | elif self._revlog._treeondisk: | |||
|
1367 | m = treemanifest(dir=self._dir) | |||
|
1368 | def gettext(): | |||
|
1369 | return self._revlog.revision(self._node) | |||
|
1370 | def readsubtree(dir, subm): | |||
|
1371 | return treemanifestctx(self._revlog, dir, subm).read() | |||
|
1372 | m.read(gettext, readsubtree) | |||
|
1373 | m.setnode(self._node) | |||
|
1374 | self._data = m | |||
|
1375 | else: | |||
|
1376 | text = self._revlog.revision(self._node) | |||
|
1377 | arraytext = array.array('c', text) | |||
|
1378 | self._revlog.fulltextcache[self._node] = arraytext | |||
|
1379 | self._data = treemanifest(dir=self._dir, text=text) | |||
|
1380 | ||||
|
1381 | return self._data | |||
|
1382 | ||||
|
1383 | def node(self): | |||
|
1384 | return self._node | |||
|
1385 | ||||
|
1386 | def readdelta(self): | |||
|
1387 | # Need to perform a slow delta | |||
|
1388 | revlog = self._revlog | |||
|
1389 | r0 = revlog.deltaparent(revlog.rev(self._node)) | |||
|
1390 | m0 = treemanifestctx(revlog, self._dir, revlog.node(r0)).read() | |||
|
1391 | m1 = self.read() | |||
|
1392 | md = treemanifest(dir=self._dir) | |||
|
1393 | for f, ((n0, fl0), (n1, fl1)) in m0.diff(m1).iteritems(): | |||
|
1394 | if n1: | |||
|
1395 | md[f] = n1 | |||
|
1396 | if fl1: | |||
|
1397 | md.setflag(f, fl1) | |||
|
1398 | return md | |||
|
1399 | ||||
|
1400 | def readfast(self): | |||
|
1401 | rl = self._revlog | |||
|
1402 | r = rl.rev(self._node) | |||
|
1403 | deltaparent = rl.deltaparent(r) | |||
|
1404 | if deltaparent != revlog.nullrev and deltaparent in rl.parentrevs(r): | |||
|
1405 | return self.readdelta() | |||
|
1406 | return self.read() | |||
|
1407 | ||||
|
1408 | class manifest(manifestrevlog): | |||
|
1409 | def __init__(self, opener, dir='', dirlogcache=None): | |||
|
1410 | '''The 'dir' and 'dirlogcache' arguments are for internal use by | |||
|
1411 | manifest.manifest only. External users should create a root manifest | |||
|
1412 | log with manifest.manifest(opener) and call dirlog() on it. | |||
|
1413 | ''' | |||
|
1414 | # During normal operations, we expect to deal with not more than four | |||
|
1415 | # revs at a time (such as during commit --amend). When rebasing large | |||
|
1416 | # stacks of commits, the number can go up, hence the config knob below. | |||
|
1417 | cachesize = 4 | |||
|
1418 | usetreemanifest = False | |||
|
1419 | opts = getattr(opener, 'options', None) | |||
|
1420 | if opts is not None: | |||
|
1421 | cachesize = opts.get('manifestcachesize', cachesize) | |||
|
1422 | usetreemanifest = opts.get('treemanifest', usetreemanifest) | |||
|
1423 | self._mancache = util.lrucachedict(cachesize) | |||
|
1424 | self._treeinmem = usetreemanifest | |||
|
1425 | super(manifest, self).__init__(opener, dir=dir, dirlogcache=dirlogcache) | |||
|
1426 | ||||
928 | def _newmanifest(self, data=''): |
|
1427 | def _newmanifest(self, data=''): | |
929 | if self._treeinmem: |
|
1428 | if self._treeinmem: | |
930 | return treemanifest(self._dir, data) |
|
1429 | return treemanifest(self._dir, data) | |
931 | return manifestdict(data) |
|
1430 | return manifestdict(data) | |
932 |
|
1431 | |||
933 | def dirlog(self, dir): |
|
1432 | def dirlog(self, dir): | |
|
1433 | """This overrides the base revlog implementation to allow construction | |||
|
1434 | 'manifest' types instead of manifestrevlog types. This is only needed | |||
|
1435 | until we migrate off the 'manifest' type.""" | |||
934 | if dir: |
|
1436 | if dir: | |
935 | assert self._treeondisk |
|
1437 | assert self._treeondisk | |
936 | if dir not in self._dirlogcache: |
|
1438 | if dir not in self._dirlogcache: | |
@@ -973,20 +1475,6 b' class manifest(revlog.revlog):' | |||||
973 | d = mdiff.patchtext(self.revdiff(self.deltaparent(r), r)) |
|
1475 | d = mdiff.patchtext(self.revdiff(self.deltaparent(r), r)) | |
974 | return manifestdict(d) |
|
1476 | return manifestdict(d) | |
975 |
|
1477 | |||
976 | def readfast(self, node): |
|
|||
977 | '''use the faster of readdelta or read |
|
|||
978 |
|
||||
979 | This will return a manifest which is either only the files |
|
|||
980 | added/modified relative to p1, or all files in the |
|
|||
981 | manifest. Which one is returned depends on the codepath used |
|
|||
982 | to retrieve the data. |
|
|||
983 | ''' |
|
|||
984 | r = self.rev(node) |
|
|||
985 | deltaparent = self.deltaparent(r) |
|
|||
986 | if deltaparent != revlog.nullrev and deltaparent in self.parentrevs(r): |
|
|||
987 | return self.readdelta(node) |
|
|||
988 | return self.read(node) |
|
|||
989 |
|
||||
990 | def readshallowfast(self, node): |
|
1478 | def readshallowfast(self, node): | |
991 | '''like readfast(), but calls readshallowdelta() instead of readdelta() |
|
1479 | '''like readfast(), but calls readshallowdelta() instead of readdelta() | |
992 | ''' |
|
1480 | ''' | |
@@ -1000,7 +1488,11 b' class manifest(revlog.revlog):' | |||||
1000 | if node == revlog.nullid: |
|
1488 | if node == revlog.nullid: | |
1001 | return self._newmanifest() # don't upset local cache |
|
1489 | return self._newmanifest() # don't upset local cache | |
1002 | if node in self._mancache: |
|
1490 | if node in self._mancache: | |
1003 |
|
|
1491 | cached = self._mancache[node] | |
|
1492 | if (isinstance(cached, manifestctx) or | |||
|
1493 | isinstance(cached, treemanifestctx)): | |||
|
1494 | cached = cached.read() | |||
|
1495 | return cached | |||
1004 | if self._treeondisk: |
|
1496 | if self._treeondisk: | |
1005 | def gettext(): |
|
1497 | def gettext(): | |
1006 | return self.revision(node) |
|
1498 | return self.revision(node) | |
@@ -1014,7 +1506,9 b' class manifest(revlog.revlog):' | |||||
1014 | text = self.revision(node) |
|
1506 | text = self.revision(node) | |
1015 | m = self._newmanifest(text) |
|
1507 | m = self._newmanifest(text) | |
1016 | arraytext = array.array('c', text) |
|
1508 | arraytext = array.array('c', text) | |
1017 |
self._mancache[node] = |
|
1509 | self._mancache[node] = m | |
|
1510 | if arraytext is not None: | |||
|
1511 | self.fulltextcache[node] = arraytext | |||
1018 | return m |
|
1512 | return m | |
1019 |
|
1513 | |||
1020 | def readshallow(self, node): |
|
1514 | def readshallow(self, node): | |
@@ -1033,64 +1527,6 b' class manifest(revlog.revlog):' | |||||
1033 | except KeyError: |
|
1527 | except KeyError: | |
1034 | return None, None |
|
1528 | return None, None | |
1035 |
|
1529 | |||
1036 | def add(self, m, transaction, link, p1, p2, added, removed): |
|
|||
1037 | if (p1 in self._mancache and not self._treeinmem |
|
|||
1038 | and not self._usemanifestv2): |
|
|||
1039 | # If our first parent is in the manifest cache, we can |
|
|||
1040 | # compute a delta here using properties we know about the |
|
|||
1041 | # manifest up-front, which may save time later for the |
|
|||
1042 | # revlog layer. |
|
|||
1043 |
|
||||
1044 | _checkforbidden(added) |
|
|||
1045 | # combine the changed lists into one sorted iterator |
|
|||
1046 | work = heapq.merge([(x, False) for x in added], |
|
|||
1047 | [(x, True) for x in removed]) |
|
|||
1048 |
|
||||
1049 | arraytext, deltatext = m.fastdelta(self._mancache[p1][1], work) |
|
|||
1050 | cachedelta = self.rev(p1), deltatext |
|
|||
1051 | text = util.buffer(arraytext) |
|
|||
1052 | n = self.addrevision(text, transaction, link, p1, p2, cachedelta) |
|
|||
1053 | else: |
|
|||
1054 | # The first parent manifest isn't already loaded, so we'll |
|
|||
1055 | # just encode a fulltext of the manifest and pass that |
|
|||
1056 | # through to the revlog layer, and let it handle the delta |
|
|||
1057 | # process. |
|
|||
1058 | if self._treeondisk: |
|
|||
1059 | m1 = self.read(p1) |
|
|||
1060 | m2 = self.read(p2) |
|
|||
1061 | n = self._addtree(m, transaction, link, m1, m2) |
|
|||
1062 | arraytext = None |
|
|||
1063 | else: |
|
|||
1064 | text = m.text(self._usemanifestv2) |
|
|||
1065 | n = self.addrevision(text, transaction, link, p1, p2) |
|
|||
1066 | arraytext = array.array('c', text) |
|
|||
1067 |
|
||||
1068 | self._mancache[n] = (m, arraytext) |
|
|||
1069 |
|
||||
1070 | return n |
|
|||
1071 |
|
||||
1072 | def _addtree(self, m, transaction, link, m1, m2): |
|
|||
1073 | # If the manifest is unchanged compared to one parent, |
|
|||
1074 | # don't write a new revision |
|
|||
1075 | if m.unmodifiedsince(m1) or m.unmodifiedsince(m2): |
|
|||
1076 | return m.node() |
|
|||
1077 | def writesubtree(subm, subp1, subp2): |
|
|||
1078 | sublog = self.dirlog(subm.dir()) |
|
|||
1079 | sublog.add(subm, transaction, link, subp1, subp2, None, None) |
|
|||
1080 | m.writesubtrees(m1, m2, writesubtree) |
|
|||
1081 | text = m.dirtext(self._usemanifestv2) |
|
|||
1082 | # Double-check whether contents are unchanged to one parent |
|
|||
1083 | if text == m1.dirtext(self._usemanifestv2): |
|
|||
1084 | n = m1.node() |
|
|||
1085 | elif text == m2.dirtext(self._usemanifestv2): |
|
|||
1086 | n = m2.node() |
|
|||
1087 | else: |
|
|||
1088 | n = self.addrevision(text, transaction, link, m1.node(), m2.node()) |
|
|||
1089 | # Save nodeid so parent manifest can calculate its nodeid |
|
|||
1090 | m.setnode(n) |
|
|||
1091 | return n |
|
|||
1092 |
|
||||
1093 | def clearcaches(self): |
|
1530 | def clearcaches(self): | |
1094 | super(manifest, self).clearcaches() |
|
1531 | super(manifest, self).clearcaches() | |
1095 | self._mancache.clear() |
|
1532 | self._mancache.clear() | |
1096 | self._dirlogcache = {'': self} |
|
@@ -113,12 +113,11 b' def splitblock(base1, lines1, base2, lin' | |||||
113 | s1 = i1 |
|
113 | s1 = i1 | |
114 | s2 = i2 |
|
114 | s2 = i2 | |
115 |
|
115 | |||
116 |
def allblocks(text1, text2, opts=None, lines1=None, lines2=None |
|
116 | def allblocks(text1, text2, opts=None, lines1=None, lines2=None): | |
117 | """Return (block, type) tuples, where block is an mdiff.blocks |
|
117 | """Return (block, type) tuples, where block is an mdiff.blocks | |
118 | line entry. type is '=' for blocks matching exactly one another |
|
118 | line entry. type is '=' for blocks matching exactly one another | |
119 | (bdiff blocks), '!' for non-matching blocks and '~' for blocks |
|
119 | (bdiff blocks), '!' for non-matching blocks and '~' for blocks | |
120 |
matching only after having filtered blank lines. |
|
120 | matching only after having filtered blank lines. | |
121 | then '~' blocks are refined and are only made of blank lines. |
|
|||
122 | line1 and line2 are text1 and text2 split with splitnewlines() if |
|
121 | line1 and line2 are text1 and text2 split with splitnewlines() if | |
123 | they are already available. |
|
122 | they are already available. | |
124 | """ |
|
123 | """ |
@@ -475,10 +475,12 b' class mergestate(object):'
         flo = fco.flags()
         fla = fca.flags()
         if 'x' in flags + flo + fla and 'l' not in flags + flo + fla:
-            if fca.node() == nullid:
+            if fca.node() == nullid and flags != flo:
                 if preresolve:
                     self._repo.ui.warn(
-                        _('warning: cannot merge flags for %s\n') % afile)
+                        _('warning: cannot merge flags for %s '
+                          'without common ancestor - keeping local flags\n')
+                        % afile)
             elif flags == fla:
                 flags = flo
             if preresolve:
@@ -781,7 +783,7 b' def driverconclude(repo, ms, wctx, label'
 def manifestmerge(repo, wctx, p2, pa, branchmerge, force, matcher,
                   acceptremote, followcopies):
     """
-    Merge p1 and p2 with ancestor pa and generate merge action list
+    Merge wctx and p2 with ancestor pa and generate merge action list

     branchmerge and force are as passed in to update
     matcher = matcher to filter file lists
@@ -1036,6 +1038,12 b' def batchremove(repo, actions):'
     unlink = util.unlinkpath
     wjoin = repo.wjoin
     audit = repo.wvfs.audit
+    try:
+        cwd = os.getcwd()
+    except OSError as err:
+        if err.errno != errno.ENOENT:
+            raise
+        cwd = None
     i = 0
     for f, args, msg in actions:
         repo.ui.debug(" %s: %s -> r\n" % (f, msg))
@@ -1053,6 +1061,18 b' def batchremove(repo, actions):'
         i += 1
     if i > 0:
         yield i, f
+    if cwd:
+        # cwd was present before we started to remove files
+        # let's check if it is present after we removed them
+        try:
+            os.getcwd()
+        except OSError as err:
+            if err.errno != errno.ENOENT:
+                raise
+            # Print a warning if cwd was deleted
+            repo.ui.warn(_("current directory was removed\n"
+                           "(consider changing to repo root: %s)\n") %
+                         repo.root)

 def batchget(repo, mctx, actions):
     """apply gets to the working directory
@@ -1150,7 +1170,7 b' def applyupdates(repo, actions, wctx, mc'
     numupdates = sum(len(l) for m, l in actions.items() if m != 'k')

     if [a for a in actions['r'] if a[0] == '.hgsubstate']:
-        subrepo.submerge(repo, wctx, mctx, wctx, overwrite)
+        subrepo.submerge(repo, wctx, mctx, wctx, overwrite, labels)

     # remove in parallel (must come first)
     z = 0
@@ -1168,7 +1188,7 b' def applyupdates(repo, actions, wctx, mc'
     updated = len(actions['g'])

     if [a for a in actions['g'] if a[0] == '.hgsubstate']:
-        subrepo.submerge(repo, wctx, mctx, wctx, overwrite)
+        subrepo.submerge(repo, wctx, mctx, wctx, overwrite, labels)

     # forget (manifest only, just log it) (must come first)
     for f, args, msg in actions['f']:
@@ -1253,7 +1273,7 b' def applyupdates(repo, actions, wctx, mc'
         progress(_updating, z, item=f, total=numupdates, unit=_files)
         if f == '.hgsubstate': # subrepo states need updating
             subrepo.submerge(repo, wctx, mctx, wctx.ancestor(mctx),
-                             overwrite)
+                             overwrite, labels)
             continue
         audit(f)
         complete, r = ms.preresolve(f, wctx)
@@ -1286,8 +1306,29 b' def applyupdates(repo, actions, wctx, mc'
     removed += msremoved

     extraactions = ms.actions()
-    for k, acts in extraactions.iteritems():
-        actions[k].extend(acts)
+    if extraactions:
+        mfiles = set(a[0] for a in actions['m'])
+        for k, acts in extraactions.iteritems():
+            actions[k].extend(acts)
+            # Remove these files from actions['m'] as well. This is important
+            # because in recordupdates, files in actions['m'] are processed
+            # after files in other actions, and the merge driver might add
+            # files to those actions via extraactions above. This can lead to a
+            # file being recorded twice, with poor results. This is especially
+            # problematic for actions['r'] (currently only possible with the
+            # merge driver in the initial merge process; interrupted merges
+            # don't go through this flow).
+            #
+            # The real fix here is to have indexes by both file and action so
+            # that when the action for a file is changed it is automatically
+            # reflected in the other action lists. But that involves a more
+            # complex data structure, so this will do for now.
+            #
+            # We don't need to do the same operation for 'dc' and 'cd' because
+            # those lists aren't consulted again.
+            mfiles.difference_update(a[0] for a in acts)
+
+        actions['m'] = [a for a in actions['m'] if a[0] in mfiles]

     progress(_updating, None, total=numupdates, unit=_files)
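The ``mfiles`` filtering above can be exercised on its own. A small sketch
with made-up ``(file, args, msg)`` tuples (using ``items()`` instead of
``iteritems()`` so it runs standalone)::

    actions = {'m': [('a.txt', None, 'merge'), ('b.txt', None, 'merge')],
               'r': []}
    # as if ms.actions() reported that the merge driver turned the
    # merge of b.txt into a removal
    extraactions = {'r': [('b.txt', None, 'driver: remove')]}

    if extraactions:
        mfiles = set(a[0] for a in actions['m'])
        for k, acts in extraactions.items():
            actions[k].extend(acts)
            mfiles.difference_update(a[0] for a in acts)
        actions['m'] = [a for a in actions['m'] if a[0] in mfiles]

    print(actions['m'])  # [('a.txt', None, 'merge')] - b.txt only in 'r' now
    print(actions['r'])  # [('b.txt', None, 'driver: remove')]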
@@ -1514,15 +1555,16 b' def update(repo, node, branchmerge, forc'
            pas = [p1]

        # deprecated config: merge.followcopies
-        followcopies = False
+        followcopies = repo.ui.configbool('merge', 'followcopies', True)
        if overwrite:
            pas = [wc]
+            followcopies = False
        elif pas == [p2]:  # backwards
-            pas = [wc.p1()]
-        elif not branchmerge and not wc.dirty(missing=True):
-            pass
-        elif pas[0] and repo.ui.configbool('merge', 'followcopies', True):
-            followcopies = True
+            pas = [p1]
+        elif not pas[0]:
+            followcopies = False
+        if not branchmerge and not wc.dirty(missing=True):
+            followcopies = False

        ### calculate phase
        actionbyfile, diverge, renamedelete = calculateupdates(
@@ -1535,11 +1577,13 b' def update(repo, node, branchmerge, forc'
        if '.hgsubstate' in actionbyfile:
            f = '.hgsubstate'
            m, args, msg = actionbyfile[f]
+            prompts = filemerge.partextras(labels)
+            prompts['f'] = f
            if m == 'cd':
                if repo.ui.promptchoice(
-                    _("local changed %s which remote deleted\n"
+                    _("local%(l)s changed %(f)s which other%(o)s deleted\n"
                      "use (c)hanged version or (d)elete?"
-                      "$$ &Changed $$ &Delete") % f, 0):
+                      "$$ &Changed $$ &Delete") % prompts, 0):
                    actionbyfile[f] = ('r', None, "prompt delete")
                elif f in p1:
                    actionbyfile[f] = ('am', None, "prompt keep")
@@ -1549,9 +1593,9 b' def update(repo, node, branchmerge, forc'
                f1, f2, fa, move, anc = args
                flags = p2[f2].flags()
                if repo.ui.promptchoice(
-                    _("remote changed %s which local deleted\n"
+                    _("other%(o)s changed %(f)s which local%(l)s deleted\n"
                      "use (c)hanged version or leave (d)eleted?"
-                      "$$ &Changed $$ &Deleted") % f, 0) == 0:
+                      "$$ &Changed $$ &Deleted") % prompts, 0) == 0:
                    actionbyfile[f] = ('g', (flags, False), "prompt recreating")
                else:
                    del actionbyfile[f]
@@ -1563,7 +1607,7 b' def update(repo, node, branchmerge, forc'
                actions[m] = []
            actions[m].append((f, args, msg))

-        if not util.checkcase(repo.path):
+        if not util.fscasesensitive(repo.path):
            # check collision between files only in p2 for clean update
            if (not branchmerge and
                (force or not wc.dirty(missing=True, branch=False))):
@@ -20,49 +20,33 b''
 of the GNU General Public License, incorporated herein by reference.
 */

-#define PY_SSIZE_T_CLEAN
-#include <Python.h>
 #include <stdlib.h>
 #include <string.h>

-#include "util.h"
 #include "bitmanipulation.h"
-
-static char mpatch_doc[] = "Efficient binary patching.";
-static PyObject *mpatch_Error;
+#include "compat.h"
+#include "mpatch.h"

-struct frag {
-	int start, end, len;
-	const char *data;
-};
-
-struct flist {
-	struct frag *base, *head, *tail;
-};
-
-static struct flist *lalloc(Py_ssize_t size)
+static struct mpatch_flist *lalloc(ssize_t size)
 {
-	struct flist *a = NULL;
+	struct mpatch_flist *a = NULL;

 	if (size < 1)
 		size = 1;

-	a = (struct flist *)malloc(sizeof(struct flist));
+	a = (struct mpatch_flist *)malloc(sizeof(struct mpatch_flist));
 	if (a) {
-		a->base = (struct frag *)malloc(sizeof(struct frag) * size);
+		a->base = (struct mpatch_frag *)malloc(sizeof(struct mpatch_frag) * size);
 		if (a->base) {
 			a->head = a->tail = a->base;
 			return a;
 		}
 		free(a);
-		a = NULL;
 	}
-	if (!PyErr_Occurred())
-		PyErr_NoMemory();
 	return NULL;
 }

-static void lfree(struct flist *a)
+void mpatch_lfree(struct mpatch_flist *a)
 {
 	if (a) {
 		free(a->base);
@@ -70,7 +54,7 b' static void lfree(struct flist *a)'
 	}
 }

-static Py_ssize_t lsize(struct flist *a)
+static ssize_t lsize(struct mpatch_flist *a)
 {
 	return a->tail - a->head;
 }
@@ -78,9 +62,10 b' static Py_ssize_t lsize(struct flist *a)'
 /* move hunks in source that are less cut to dest, compensating
    for changes in offset. the last hunk may be split if necessary.
 */
-static int gather(struct flist *dest, struct flist *src, int cut, int offset)
+static int gather(struct mpatch_flist *dest, struct mpatch_flist *src, int cut,
+	int offset)
 {
-	struct frag *d = dest->tail, *s = src->head;
+	struct mpatch_frag *d = dest->tail, *s = src->head;
 	int postend, c, l;

 	while (s != src->tail) {
@@ -123,9 +108,9 b' static int gather(struct flist *dest, st'
 }

 /* like gather, but with no output list */
-static int discard(struct flist *src, int cut, int offset)
+static int discard(struct mpatch_flist *src, int cut, int offset)
 {
-	struct frag *s = src->head;
+	struct mpatch_frag *s = src->head;
 	int postend, c, l;

 	while (s != src->tail) {
@@ -160,10 +145,11 b' static int discard(struct flist *src, in'

 /* combine hunk lists a and b, while adjusting b for offset changes in a/
    this deletes a and b and returns the resultant list. */
-static struct flist *combine(struct flist *a, struct flist *b)
+static struct mpatch_flist *combine(struct mpatch_flist *a,
+	struct mpatch_flist *b)
 {
-	struct flist *c = NULL;
-	struct frag *bh, *ct;
+	struct mpatch_flist *c = NULL;
+	struct mpatch_frag *bh, *ct;
 	int offset = 0, post;

 	if (a && b)
@@ -189,26 +175,26 b' static struct flist *combine(struct flis'
 		}

 		/* hold on to tail from a */
-		memcpy(c->tail, a->head, sizeof(struct frag) * lsize(a));
+		memcpy(c->tail, a->head, sizeof(struct mpatch_frag) * lsize(a));
 		c->tail += lsize(a);
 	}

-	lfree(a);
-	lfree(b);
+	mpatch_lfree(a);
+	mpatch_lfree(b);
 	return c;
 }

 /* decode a binary patch into a hunk list */
-static struct flist *decode(const char *bin, Py_ssize_t len)
+int mpatch_decode(const char *bin, ssize_t len, struct mpatch_flist **res)
 {
-	struct flist *l;
-	struct frag *lt;
+	struct mpatch_flist *l;
+	struct mpatch_frag *lt;
 	int pos = 0;

 	/* assume worst case size, we won't have many of these lists */
 	l = lalloc(len / 12 + 1);
 	if (!l)
-		return NULL;
+		return MPATCH_ERR_NO_MEM;

 	lt = l->tail;

@@ -224,28 +210,24 b' static struct flist *decode(const char *'
 	}

 	if (pos != len) {
-		if (!PyErr_Occurred())
-			PyErr_SetString(mpatch_Error, "patch cannot be decoded");
-		lfree(l);
-		return NULL;
+		mpatch_lfree(l);
+		return MPATCH_ERR_CANNOT_BE_DECODED;
 	}

 	l->tail = lt;
-	return l;
+	*res = l;
+	return 0;
 }

 /* calculate the size of resultant text */
-static Py_ssize_t calcsize(Py_ssize_t len, struct flist *l)
+ssize_t mpatch_calcsize(ssize_t len, struct mpatch_flist *l)
 {
-	Py_ssize_t outlen = 0, last = 0;
-	struct frag *f = l->head;
+	ssize_t outlen = 0, last = 0;
+	struct mpatch_frag *f = l->head;

 	while (f != l->tail) {
 		if (f->start < last || f->end > len) {
-			if (!PyErr_Occurred())
-				PyErr_SetString(mpatch_Error,
-					"invalid patch");
-			return -1;
+			return MPATCH_ERR_INVALID_PATCH;
 		}
 		outlen += f->start - last;
 		last = f->end;
@@ -257,18 +239,16 b' static Py_ssize_t calcsize(Py_ssize_t le'
 	return outlen;
 }

-static int apply(char *buf, const char *orig, Py_ssize_t len, struct flist *l)
+int mpatch_apply(char *buf, const char *orig, ssize_t len,
+	struct mpatch_flist *l)
 {
-	struct frag *f = l->head;
+	struct mpatch_frag *f = l->head;
 	int last = 0;
 	char *p = buf;

 	while (f != l->tail) {
 		if (f->start < last || f->end > len) {
-			if (!PyErr_Occurred())
-				PyErr_SetString(mpatch_Error,
-					"invalid patch");
-			return 0;
+			return MPATCH_ERR_INVALID_PATCH;
 		}
 		memcpy(p, orig + last, f->start - last);
 		p += f->start - last;
@@ -278,146 +258,23 b' static int apply(char *buf, const char *'
 		f++;
 	}
 	memcpy(p, orig + last, len - last);
-	return 1;
+	return 0;
 }

 /* recursively generate a patch of all bins between start and end */
-static struct flist *fold(PyObject *bins, Py_ssize_t start, Py_ssize_t end)
+struct mpatch_flist *mpatch_fold(void *bins,
+	struct mpatch_flist* (*get_next_item)(void*, ssize_t),
+	ssize_t start, ssize_t end)
 {
-	Py_ssize_t len, blen;
-	const char *buffer;
+	ssize_t len;

 	if (start + 1 == end) {
 		/* trivial case, output a decoded list */
-		PyObject *tmp = PyList_GetItem(bins, start);
-		if (!tmp)
-			return NULL;
-		if (PyObject_AsCharBuffer(tmp, &buffer, &blen))
-			return NULL;
-		return decode(buffer, blen);
+		return get_next_item(bins, start);
 	}

 	/* divide and conquer, memory management is elsewhere */
 	len = (end - start) / 2;
-	return combine(fold(bins, start, start + len),
-		       fold(bins, start + len, end));
+	return combine(mpatch_fold(bins, get_next_item, start, start + len),
+		       mpatch_fold(bins, get_next_item, start + len, end));
 }
-
-static PyObject *
-patches(PyObject *self, PyObject *args)
-{
-	PyObject *text, *bins, *result;
-	struct flist *patch;
-	const char *in;
-	char *out;
-	Py_ssize_t len, outlen, inlen;
-
-	if (!PyArg_ParseTuple(args, "OO:mpatch", &text, &bins))
-		return NULL;
-
-	len = PyList_Size(bins);
-	if (!len) {
-		/* nothing to do */
-		Py_INCREF(text);
-		return text;
-	}
-
-	if (PyObject_AsCharBuffer(text, &in, &inlen))
-		return NULL;
-
-	patch = fold(bins, 0, len);
-	if (!patch)
-		return NULL;
-
-	outlen = calcsize(inlen, patch);
-	if (outlen < 0) {
-		result = NULL;
-		goto cleanup;
-	}
-	result = PyBytes_FromStringAndSize(NULL, outlen);
-	if (!result) {
-		result = NULL;
-		goto cleanup;
-	}
-	out = PyBytes_AsString(result);
-	if (!apply(out, in, inlen, patch)) {
-		Py_DECREF(result);
-		result = NULL;
-	}
-cleanup:
-	lfree(patch);
-	return result;
-}
-
-/* calculate size of a patched file directly */
-static PyObject *
-patchedsize(PyObject *self, PyObject *args)
-{
-	long orig, start, end, len, outlen = 0, last = 0, pos = 0;
-	Py_ssize_t patchlen;
-	char *bin;
-
-	if (!PyArg_ParseTuple(args, "ls#", &orig, &bin, &patchlen))
-		return NULL;
-
-	while (pos >= 0 && pos < patchlen) {
-		start = getbe32(bin + pos);
-		end = getbe32(bin + pos + 4);
-		len = getbe32(bin + pos + 8);
-		if (start > end)
-			break; /* sanity check */
-		pos += 12 + len;
-		outlen += start - last;
-		last = end;
-		outlen += len;
-	}
-
-	if (pos != patchlen) {
-		if (!PyErr_Occurred())
-			PyErr_SetString(mpatch_Error, "patch cannot be decoded");
-		return NULL;
-	}
-
-	outlen += orig - last;
-	return Py_BuildValue("l", outlen);
-}
-
-static PyMethodDef methods[] = {
-	{"patches", patches, METH_VARARGS, "apply a series of patches\n"},
-	{"patchedsize", patchedsize, METH_VARARGS, "calculed patched size\n"},
-	{NULL, NULL}
-};
-
-#ifdef IS_PY3K
-static struct PyModuleDef mpatch_module = {
-	PyModuleDef_HEAD_INIT,
-	"mpatch",
-	mpatch_doc,
-	-1,
-	methods
-};
-
-PyMODINIT_FUNC PyInit_mpatch(void)
-{
-	PyObject *m;
-
-	m = PyModule_Create(&mpatch_module);
-	if (m == NULL)
-		return NULL;
-
-	mpatch_Error = PyErr_NewException("mercurial.mpatch.mpatchError",
-					  NULL, NULL);
-	Py_INCREF(mpatch_Error);
-	PyModule_AddObject(m, "mpatchError", mpatch_Error);
-
-	return m;
-}
-#else
-PyMODINIT_FUNC
-initmpatch(void)
-{
-	Py_InitModule3("mpatch", methods, mpatch_doc);
-	mpatch_Error = PyErr_NewException("mercurial.mpatch.mpatchError",
-					  NULL, NULL);
-}
-#endif
@@ -27,288 +27,54 b''

 #include "util.h"
 #include "bitmanipulation.h"
+#include "compat.h"
+#include "mpatch.h"

 static char mpatch_doc[] = "Efficient binary patching.";
 static PyObject *mpatch_Error;

-struct frag {
-	int start, end, len;
-	const char *data;
-};
-
-struct flist {
-	struct frag *base, *head, *tail;
-};
-
-static struct flist *lalloc(Py_ssize_t size)
-{
-	struct flist *a = NULL;
-
-	if (size < 1)
-		size = 1;
-
-	a = (struct flist *)malloc(sizeof(struct flist));
-	if (a) {
-		a->base = (struct frag *)malloc(sizeof(struct frag) * size);
-		if (a->base) {
-			a->head = a->tail = a->base;
-			return a;
-		}
-		free(a);
-		a = NULL;
-	}
-	if (!PyErr_Occurred())
-		PyErr_NoMemory();
-	return NULL;
-}
-
-static void lfree(struct flist *a)
-{
-	if (a) {
-		free(a->base);
-		free(a);
-	}
-}
-
-static Py_ssize_t lsize(struct flist *a)
-{
-	return a->tail - a->head;
-}
-
-/* move hunks in source that are less cut to dest, compensating
-   for changes in offset. the last hunk may be split if necessary.
-*/
-static int gather(struct flist *dest, struct flist *src, int cut, int offset)
-{
-	struct frag *d = dest->tail, *s = src->head;
-	int postend, c, l;
-
-	while (s != src->tail) {
-		if (s->start + offset >= cut)
-			break; /* we've gone far enough */
-
-		postend = offset + s->start + s->len;
-		if (postend <= cut) {
-			/* save this hunk */
-			offset += s->start + s->len - s->end;
-			*d++ = *s++;
-		}
-		else {
-			/* break up this hunk */
-			c = cut - offset;
-			if (s->end < c)
-				c = s->end;
-			l = cut - offset - s->start;
-			if (s->len < l)
-				l = s->len;
-
-			offset += s->start + l - c;
-
-			d->start = s->start;
-			d->end = c;
-			d->len = l;
-			d->data = s->data;
-			d++;
-			s->start = c;
-			s->len = s->len - l;
-			s->data = s->data + l;
-
-			break;
-		}
-	}
-
-	dest->tail = d;
-	src->head = s;
-	return offset;
-}
-
-/* like gather, but with no output list */
-static int discard(struct flist *src, int cut, int offset)
-{
-	struct frag *s = src->head;
-	int postend, c, l;
-
-	while (s != src->tail) {
-		if (s->start + offset >= cut)
-			break;
-
-		postend = offset + s->start + s->len;
-		if (postend <= cut) {
-			offset += s->start + s->len - s->end;
-			s++;
-		}
-		else {
-			c = cut - offset;
-			if (s->end < c)
-				c = s->end;
-			l = cut - offset - s->start;
-			if (s->len < l)
-				l = s->len;
-
-			offset += s->start + l - c;
-			s->start = c;
-			s->len = s->len - l;
-			s->data = s->data + l;
-
-			break;
-		}
-	}
-
-	src->head = s;
-	return offset;
-}
-
-/* combine hunk lists a and b, while adjusting b for offset changes in a/
-   this deletes a and b and returns the resultant list. */
-static struct flist *combine(struct flist *a, struct flist *b)
-{
-	struct flist *c = NULL;
-	struct frag *bh, *ct;
-	int offset = 0, post;
-
-	if (a && b)
-		c = lalloc((lsize(a) + lsize(b)) * 2);
-
-	if (c) {
-
-		for (bh = b->head; bh != b->tail; bh++) {
-			/* save old hunks */
-			offset = gather(c, a, bh->start, offset);
-
-			/* discard replaced hunks */
-			post = discard(a, bh->end, offset);
-
-			/* insert new hunk */
-			ct = c->tail;
-			ct->start = bh->start - offset;
-			ct->end = bh->end - post;
-			ct->len = bh->len;
-			ct->data = bh->data;
-			c->tail++;
-			offset = post;
-		}
-
-		/* hold on to tail from a */
-		memcpy(c->tail, a->head, sizeof(struct frag) * lsize(a));
-		c->tail += lsize(a);
-	}
-
-	lfree(a);
-	lfree(b);
-	return c;
-}
-
-/* decode a binary patch into a hunk list */
-static struct flist *decode(const char *bin, Py_ssize_t len)
-{
-	struct flist *l;
-	struct frag *lt;
-	int pos = 0;
-
-	/* assume worst case size, we won't have many of these lists */
-	l = lalloc(len / 12 + 1);
-	if (!l)
-		return NULL;
-
-	lt = l->tail;
-
-	while (pos >= 0 && pos < len) {
-		lt->start = getbe32(bin + pos);
-		lt->end = getbe32(bin + pos + 4);
-		lt->len = getbe32(bin + pos + 8);
-		lt->data = bin + pos + 12;
-		pos += 12 + lt->len;
-		if (lt->start > lt->end || lt->len < 0)
-			break; /* sanity check */
-		lt++;
-	}
-
-	if (pos != len) {
-		if (!PyErr_Occurred())
-			PyErr_SetString(mpatch_Error, "patch cannot be decoded");
-		lfree(l);
-		return NULL;
-	}
-
-	l->tail = lt;
-	return l;
-}
-
-/* calculate the size of resultant text */
-static Py_ssize_t calcsize(Py_ssize_t len, struct flist *l)
-{
-	Py_ssize_t outlen = 0, last = 0;
-	struct frag *f = l->head;
-
-	while (f != l->tail) {
-		if (f->start < last || f->end > len) {
-			if (!PyErr_Occurred())
-				PyErr_SetString(mpatch_Error,
-					"invalid patch");
-			return -1;
-		}
-		outlen += f->start - last;
-		last = f->end;
-		outlen += f->len;
-		f++;
-	}
-
-	outlen += len - last;
-	return outlen;
-}
-
-static int apply(char *buf, const char *orig, Py_ssize_t len, struct flist *l)
-{
-	struct frag *f = l->head;
-	int last = 0;
-	char *p = buf;
-
-	while (f != l->tail) {
-		if (f->start < last || f->end > len) {
-			if (!PyErr_Occurred())
-				PyErr_SetString(mpatch_Error,
-					"invalid patch");
-			return 0;
-		}
-		memcpy(p, orig + last, f->start - last);
-		p += f->start - last;
-		memcpy(p, f->data, f->len);
-		last = f->end;
-		p += f->len;
-		f++;
-	}
-	memcpy(p, orig + last, len - last);
-	return 1;
-}
-
-/* recursively generate a patch of all bins between start and end */
-static struct flist *fold(PyObject *bins, Py_ssize_t start, Py_ssize_t end)
-{
-	Py_ssize_t len, blen;
-	const char *buffer;
-
-	if (start + 1 == end) {
-		/* trivial case, output a decoded list */
-		PyObject *tmp = PyList_GetItem(bins, start);
-		if (!tmp)
-			return NULL;
-		if (PyObject_AsCharBuffer(tmp, &buffer, &blen))
-			return NULL;
-		return decode(buffer, blen);
-	}
-
-	/* divide and conquer, memory management is elsewhere */
-	len = (end - start) / 2;
-	return combine(fold(bins, start, start + len),
-		       fold(bins, start + len, end));
+static void setpyerr(int r)
+{
+	switch (r) {
+	case MPATCH_ERR_NO_MEM:
+		PyErr_NoMemory();
+		break;
+	case MPATCH_ERR_CANNOT_BE_DECODED:
+		PyErr_SetString(mpatch_Error, "patch cannot be decoded");
+		break;
+	case MPATCH_ERR_INVALID_PATCH:
+		PyErr_SetString(mpatch_Error, "invalid patch");
+		break;
+	}
+}
+
+struct mpatch_flist *cpygetitem(void *bins, ssize_t pos)
+{
+	const char *buffer;
+	struct mpatch_flist *res;
+	ssize_t blen;
+	int r;
+
+	PyObject *tmp = PyList_GetItem((PyObject*)bins, pos);
+	if (!tmp)
+		return NULL;
+	if (PyObject_AsCharBuffer(tmp, &buffer, (Py_ssize_t*)&blen))
+		return NULL;
+	if ((r = mpatch_decode(buffer, blen, &res)) < 0) {
+		if (!PyErr_Occurred())
+			setpyerr(r);
+		return NULL;
+	}
+	return res;
 }

 static PyObject *
 patches(PyObject *self, PyObject *args)
 {
 	PyObject *text, *bins, *result;
-	struct flist *patch;
+	struct mpatch_flist *patch;
 	const char *in;
+	int r = 0;
 	char *out;
 	Py_ssize_t len, outlen, inlen;

@@ -325,12 +91,16 b' patches(PyObject *self, PyObject *args)'
 	if (PyObject_AsCharBuffer(text, &in, &inlen))
 		return NULL;

-	patch = fold(bins, 0, len);
-	if (!patch)
+	patch = mpatch_fold(bins, cpygetitem, 0, len);
+	if (!patch) { /* error already set or memory error */
+		if (!PyErr_Occurred())
+			PyErr_NoMemory();
 		return NULL;
+	}

-	outlen = calcsize(inlen, patch);
+	outlen = mpatch_calcsize(inlen, patch);
 	if (outlen < 0) {
+		r = (int)outlen;
 		result = NULL;
 		goto cleanup;
 	}
@@ -340,12 +110,14 b' patches(PyObject *self, PyObject *args)'
 		goto cleanup;
 	}
 	out = PyBytes_AsString(result);
-	if (!apply(out, in, inlen, patch)) {
+	if ((r = mpatch_apply(out, in, inlen, patch)) < 0) {
 		Py_DECREF(result);
 		result = NULL;
 	}
 cleanup:
-	lfree(patch);
+	mpatch_lfree(patch);
+	if (!result && !PyErr_Occurred())
+		setpyerr(r);
 	return result;
 }

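The Python-level interface is unchanged by the refactor:
``mpatch.patches(text, bins)`` applies a list of binary patches, each a
concatenation of fragments -- three big-endian 32-bit integers (start, end,
length) followed by that many bytes of replacement data, matching the
``getbe32`` reads above. A small sketch (assumes a Python 2 environment
with Mercurial importable)::

    import struct
    from mercurial import mpatch

    orig = 'the quick brown fox\n'
    # one fragment: replace bytes [4, 9) ('quick') with 'sneaky'
    frag = struct.pack('>iii', 4, 9, 6) + 'sneaky'
    print(mpatch.patches(orig, [frag]))         # the sneaky brown fox
    print(mpatch.patchedsize(len(orig), frag))  # 21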
@@ -42,9 +42,9 b' Examples:'

     (A, ())

-- When changeset A is split into B and C, a single marker are used:
+- When changeset A is split into B and C, a single marker is used:

-    (A, (C, C))
+    (A, (B, C))

 We use a single marker to distinguish the "split" case from the "divergence"
 case. If two independent operations rewrite the same changeset A in to A' and
@@ -1236,7 +1236,7 b' def createmarkers(repo, relations, flag='
         if not prec.mutable():
             raise error.Abort(_("cannot obsolete public changeset: %s")
                               % prec,
-                              hint='see "hg help phases" for details')
+                              hint="see 'hg help phases' for details")
         nprec = prec.node()
         nsucs = tuple(s.node() for s in sucs)
         npare = None
@@ -63,11 +63,19 b' struct listdir_stat {'
 };
 #endif

+#ifdef IS_PY3K
+#define listdir_slot(name) \
+	static PyObject *listdir_stat_##name(PyObject *self, void *x) \
+	{ \
+		return PyLong_FromLong(((struct listdir_stat *)self)->st.name); \
+	}
+#else
 #define listdir_slot(name) \
 	static PyObject *listdir_stat_##name(PyObject *self, void *x) \
 	{ \
 		return PyInt_FromLong(((struct listdir_stat *)self)->st.name); \
 	}
+#endif

 listdir_slot(st_dev)
 listdir_slot(st_mode)
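For reference, ``listdir_slot(st_mtime)`` under ``IS_PY3K`` expands (modulo
line continuations) to an accessor like the following; only the boxing call
differs from the Python 2 branch::

    static PyObject *listdir_stat_st_mtime(PyObject *self, void *x)
    {
        return PyLong_FromLong(((struct listdir_stat *)self)->st.st_mtime);
    }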
@@ -624,7 +632,7 b' static PyObject *statfiles(PyObject *sel'
 		pypath = PySequence_GetItem(names, i);
 		if (!pypath)
 			goto bail;
-		path = PyString_AsString(pypath);
+		path = PyBytes_AsString(pypath);
 		if (path == NULL) {
 			Py_DECREF(pypath);
 			PyErr_SetString(PyExc_TypeError, "not a string");
@@ -706,7 +714,7 b' static PyObject *recvfds(PyObject *self,'
 	if (!rfdslist)
 		goto bail;
 	for (i = 0; i < rfdscount; i++) {
-		PyObject *obj = PyInt_FromLong(rfds[i]);
+		PyObject *obj = PyLong_FromLong(rfds[i]);
 		if (!obj)
 			goto bail;
 		PyList_SET_ITEM(rfdslist, i, obj);
@@ -65,7 +65,7 b' class parser(object):'
             # handle infix rules, take as suffix if unambiguous
             infix, suffix = self._elements[token][3:]
             if suffix and not (infix and self._hasnewterm()):
-                expr = (suffix[0], expr)
+                expr = (suffix, expr)
             elif infix:
                 expr = (infix[0], expr, self._parseoperand(*infix[1:]))
             else:
@@ -15,6 +15,18 b''
 #include "util.h"
 #include "bitmanipulation.h"

+#ifdef IS_PY3K
+/* The mapping of Python types is meant to be temporary to get Python
+ * 3 to compile. We should remove this once Python 3 support is fully
+ * supported and proper types are used in the extensions themselves. */
+#define PyInt_Type PyLong_Type
+#define PyInt_Check PyLong_Check
+#define PyInt_FromLong PyLong_FromLong
+#define PyInt_FromSsize_t PyLong_FromSsize_t
+#define PyInt_AS_LONG PyLong_AS_LONG
+#define PyInt_AsLong PyLong_AsLong
+#endif
+
 static char *versionerrortext = "Python minor version mismatch";

 static int8_t hextable[256] = {
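With these aliases in place, extension code that still uses the Python 2
``PyInt_*`` spellings compiles unchanged under Python 3, where ints and
longs were unified into ``PyLong``. For example, a hypothetical helper
written as::

    static PyObject *boxed_length(Py_ssize_t len)
    {
        return PyInt_FromSsize_t(len);
    }

picks up ``PyLong_FromSsize_t`` via the preprocessor on Python 3, with no
source change.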
@@ -610,37 +622,37 b' static PyObject *pack_dirstate(PyObject '
 	/* Figure out how much we need to allocate. */
 	for (nbytes = 40, pos = 0; PyDict_Next(map, &pos, &k, &v);) {
 		PyObject *c;
-		if (!PyString_Check(k)) {
+		if (!PyBytes_Check(k)) {
 			PyErr_SetString(PyExc_TypeError, "expected string key");
 			goto bail;
 		}
-		nbytes += PyString_GET_SIZE(k) + 17;
+		nbytes += PyBytes_GET_SIZE(k) + 17;
 		c = PyDict_GetItem(copymap, k);
 		if (c) {
-			if (!PyString_Check(c)) {
+			if (!PyBytes_Check(c)) {
 				PyErr_SetString(PyExc_TypeError,
 						"expected string key");
 				goto bail;
 			}
-			nbytes += PyString_GET_SIZE(c) + 1;
+			nbytes += PyBytes_GET_SIZE(c) + 1;
 		}
 	}

-	packobj = PyString_FromStringAndSize(NULL, nbytes);
+	packobj = PyBytes_FromStringAndSize(NULL, nbytes);
 	if (packobj == NULL)
 		goto bail;

-	p = PyString_AS_STRING(packobj);
+	p = PyBytes_AS_STRING(packobj);

 	pn = PySequence_ITEM(pl, 0);
-	if (PyString_AsStringAndSize(pn, &s, &l) == -1 || l != 20) {
+	if (PyBytes_AsStringAndSize(pn, &s, &l) == -1 || l != 20) {
 		PyErr_SetString(PyExc_TypeError, "expected a 20-byte hash");
 		goto bail;
 	}
 	memcpy(p, s, l);
 	p += 20;
 	pn = PySequence_ITEM(pl, 1);
-	if (PyString_AsStringAndSize(pn, &s, &l) == -1 || l != 20) {
+	if (PyBytes_AsStringAndSize(pn, &s, &l) == -1 || l != 20) {
 		PyErr_SetString(PyExc_TypeError, "expected a 20-byte hash");
 		goto bail;
 	}
@@ -685,21 +697,21 b' static PyObject *pack_dirstate(PyObject '
 		putbe32((uint32_t)mtime, p + 8);
 		t = p + 12;
 		p += 16;
-		len = PyString_GET_SIZE(k);
-		memcpy(p, PyString_AS_STRING(k), len);
+		len = PyBytes_GET_SIZE(k);
+		memcpy(p, PyBytes_AS_STRING(k), len);
 		p += len;
 		o = PyDict_GetItem(copymap, k);
 		if (o) {
 			*p++ = '\0';
-			l = PyString_GET_SIZE(o);
-			memcpy(p, PyString_AS_STRING(o), l);
+			l = PyBytes_GET_SIZE(o);
+			memcpy(p, PyBytes_AS_STRING(o), l);
 			p += l;
 			len += l + 1;
 		}
 		putbe32((uint32_t)len, t);
 	}

-	pos = p - PyString_AS_STRING(packobj);
+	pos = p - PyBytes_AS_STRING(packobj);
 	if (pos != nbytes) {
 		PyErr_Format(PyExc_SystemError, "bad dirstate size: %ld != %ld",
 			     (long)pos, (long)nbytes);
@@ -796,7 +808,7 b' static const char *index_deref(indexObje'
 		return self->offsets[pos];
 	}

-	return PyString_AS_STRING(self->data) + pos * v1_hdrsize;
+	return PyBytes_AS_STRING(self->data) + pos * v1_hdrsize;
 }

 static inline int index_get_parents(indexObject *self, Py_ssize_t rev,
@@ -926,7 +938,7 b' static const char *index_node(indexObjec'
 		PyObject *tuple, *str;
 		tuple = PyList_GET_ITEM(self->added, pos - self->length + 1);
 		str = PyTuple_GetItem(tuple, 7);
-		return str ? PyString_AS_STRING(str) : NULL;
+		return str ? PyBytes_AS_STRING(str) : NULL;
 	}

 	data = index_deref(self, pos);
@@ -937,7 +949,7 b' static int nt_insert(indexObject *self, '

 static int node_check(PyObject *obj, char **node, Py_ssize_t *nodelen)
 {
-	if (PyString_AsStringAndSize(obj, node, nodelen) == -1)
+	if (PyBytes_AsStringAndSize(obj, node, nodelen) == -1)
 		return -1;
 	if (*nodelen == 20)
 		return 0;
@@ -1825,7 +1837,7 b' static PyObject *index_partialmatch(inde'
 	case -2:
 		Py_RETURN_NONE;
 	case -1:
-		return PyString_FromStringAndSize(nullid, 20);
+		return PyBytes_FromStringAndSize(nullid, 20);
 	}

 	fullnode = index_node(self, rev);
@@ -1834,7 +1846,7 b' static PyObject *index_partialmatch(inde'
 			     "could not access rev %d", rev);
 		return NULL;
 	}
-	return PyString_FromStringAndSize(fullnode, 20);
+	return PyBytes_FromStringAndSize(fullnode, 20);
 }

 static PyObject *index_m_get(indexObject *self, PyObject *args)
@@ -2247,7 +2259,7 b' static void nt_invalidate_added(indexObj'
 		PyObject *tuple = PyList_GET_ITEM(self->added, i);
 		PyObject *node = PyTuple_GET_ITEM(tuple, 7);

-		nt_insert(self, PyString_AS_STRING(node), -1);
+		nt_insert(self, PyBytes_AS_STRING(node), -1);
 	}

 	if (start == 0)
@@ -2264,7 +2276,12 b' static int index_slice_del(indexObject *'
 	Py_ssize_t length = index_length(self);
 	int ret = 0;

+	/* Argument changed from PySliceObject* to PyObject* in Python 3. */
+#ifdef IS_PY3K
+	if (PySlice_GetIndicesEx(item, length,
+#else
 	if (PySlice_GetIndicesEx((PySliceObject*)item, length,
+#endif
 				 &start, &stop, &step, &slicelength) < 0)
 		return -1;

@@ -2372,9 +2389,9 b' static int index_assign_subscript(indexO'
 */
 static Py_ssize_t inline_scan(indexObject *self, const char **offsets)
 {
-	const char *data = PyString_AS_STRING(self->data);
+	const char *data = PyBytes_AS_STRING(self->data);
 	Py_ssize_t pos = 0;
-	Py_ssize_t end = PyString_GET_SIZE(self->data);
+	Py_ssize_t end = PyBytes_GET_SIZE(self->data);
 	long incr = v1_hdrsize;
 	Py_ssize_t len = 0;

@@ -2416,11 +2433,11 b' static int index_init(indexObject *self,'

 	if (!PyArg_ParseTuple(args, "OO", &data_obj, &inlined_obj))
 		return -1;
-	if (!PyString_Check(data_obj)) {
+	if (!PyBytes_Check(data_obj)) {
 		PyErr_SetString(PyExc_TypeError, "data is not a string");
 		return -1;
 	}
-	size = PyString_GET_SIZE(data_obj);
+	size = PyBytes_GET_SIZE(data_obj);

 	self->inlined = inlined_obj && PyObject_IsTrue(inlined_obj);
 	self->data = data_obj;
@@ -2516,8 +2533,7 b' static PyGetSetDef index_getset[] = {'
 };

 static PyTypeObject indexType = {
-	PyObject_HEAD_INIT(NULL)
-	0,                         /* ob_size */
+	PyVarObject_HEAD_INIT(NULL, 0)
 	"parsers.index",           /* tp_name */
 	sizeof(indexObject),       /* tp_basicsize */
 	0,                         /* tp_itemsize */
@@ -2613,7 +2629,7 b' static PyObject *readshas('
 		return NULL;
 	}
 	for (i = 0; i < num; i++) {
-		PyObject *hash = PyString_FromStringAndSize(source, hashwidth);
+		PyObject *hash = PyBytes_FromStringAndSize(source, hashwidth);
 		if (hash == NULL) {
 			Py_DECREF(list);
 			return NULL;
@@ -2669,7 +2685,7 b' static PyObject *fm1readmarker(const cha'
 	if (data + hashwidth > dataend) {
 		goto overflow;
 	}
-	prec = PyString_FromStringAndSize(data, hashwidth);
+	prec = PyBytes_FromStringAndSize(data, hashwidth);
 	data += hashwidth;
 	if (prec == NULL) {
 		goto bail;
@@ -2712,9 +2728,9 b' static PyObject *fm1readmarker(const cha'
 	if (meta + leftsize + rightsize > dataend) {
 		goto overflow;
 	}
-	left = PyString_FromStringAndSize(meta, leftsize);
+	left = PyBytes_FromStringAndSize(meta, leftsize);
 	meta += leftsize;
-	right = PyString_FromStringAndSize(meta, rightsize);
+	right = PyBytes_FromStringAndSize(meta, rightsize);
 	meta += rightsize;
 	tmp = PyTuple_New(2);
 	if (!left || !right || !tmp) {
@@ -2880,7 +2896,7 b' PyMODINIT_FUNC PyInit_parsers(void)'
 	PyObject *mod;

 	if (check_python_version() == -1)
-		return;
+		return NULL;
 	mod = PyModule_Create(&parsers_module);
 	module_init(mod);
 	return mod;
@@ -410,11 +410,7 b' class linereader(object):'
         return self.fp.readline()

     def __iter__(self):
-        while True:
-            l = self.readline()
-            if not l:
-                break
-            yield l
+        return iter(self.readline, '')

 class abstractbackend(object):
     def __init__(self, ui):
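The rewrite relies on the two-argument form of the built-in ``iter()``:
``iter(callable, sentinel)`` calls ``callable()`` until it returns the
sentinel, here the empty string that ``readline()`` produces at EOF. The
same idiom on any file-like object::

    from cStringIO import StringIO

    fp = StringIO('one\ntwo\nthree\n')
    for line in iter(fp.readline, ''):
        print(line.rstrip('\n'))   # one, two, three; stops at EOF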
@@ -673,6 +669,8 b' class patchfile(object):'
             self.mode = (False, False)
         if self.missing:
             self.ui.warn(_("unable to find '%s' for patching\n") % self.fname)
+            self.ui.warn(_("(use '--prefix' to apply patch relative to the "
+                           "current directory)\n"))

         self.hash = {}
         self.dirty = 0
@@ -1688,10 +1686,7 b' def scanpatch(fp):'
     def scanwhile(first, p):
         """scan lr while predicate holds"""
         lines = [first]
-        while True:
-            line = lr.readline()
-            if not line:
-                break
+        for line in iter(lr.readline, ''):
             if p(line):
                 lines.append(line)
             else:
@@ -1699,10 +1694,7 b' def scanpatch(fp):'
                 break
         return lines

-    while True:
-        line = lr.readline()
-        if not line:
-            break
+    for line in iter(lr.readline, ''):
         if line.startswith('diff --git a/') or line.startswith('diff -r '):
             def notheader(line):
                 s = line.split(None, 1)
@@ -1772,10 +1764,7 b' def iterhunks(fp):'
     context = None
     lr = linereader(fp)

-    while True:
-        x = lr.readline()
-        if not x:
-            break
+    for x in iter(lr.readline, ''):
         if state == BFILE and (
             (not context and x[0] == '@')
             or (context is not False and x.startswith('***************'))
@@ -1963,8 +1952,10 b' def _applydiff(ui, fp, patcher, backend,'
             data, mode = None, None
             if gp.op in ('RENAME', 'COPY'):
                 data, mode = store.getfile(gp.oldpath)[:2]
-                # FIXME: failing getfile has never been handled here
+                if data is None:
+                    # This means that the old path does not exist
+                    raise PatchError(_("source file '%s' does not exist")
+                                     % gp.oldpath)
             if gp.mode:
                 mode = gp.mode
             if gp.op == 'ADD':
@@ -2155,7 +2146,14 b' def difffeatureopts(ui, opts=None, untru'
     def get(key, name=None, getter=ui.configbool, forceplain=None):
         if opts:
             v = opts.get(key)
-            if v:
+            # diffopts flags are either None-default (which is passed
+            # through unchanged, so we can identify unset values), or
+            # some other falsey default (eg --unified, which defaults
+            # to an empty string). We only want to override the config
+            # entries from hgrc with command line values if they
+            # appear to have been set, which is any truthy value,
+            # True, or False.
+            if v or isinstance(v, bool):
                 return v
         if forceplain is not None and ui.plain():
             return forceplain
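The distinction the new check draws is between ``None`` (flag never given)
and a falsey-but-set value such as ``False``; a bare ``if v:`` collapses
the two. A quick illustration with a hypothetical option dict::

    def get(opts, key):
        v = opts.get(key)
        if v or isinstance(v, bool):
            return v
        return 'fall through to hgrc'

    print(get({'git': None}, 'git'))   # fall through to hgrc (unset)
    print(get({'git': False}, 'git'))  # False - explicitly disabled
    print(get({'git': True}, 'git'))   # True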
@@ -156,7 +156,7 b' PyObject *encodedir(PyObject *self, PyOb'
 	if (!PyArg_ParseTuple(args, "O:encodedir", &pathobj))
 		return NULL;

-	if (PyString_AsStringAndSize(pathobj, &path, &len) == -1) {
+	if (PyBytes_AsStringAndSize(pathobj, &path, &len) == -1) {
 		PyErr_SetString(PyExc_TypeError, "expected a string");
 		return NULL;
 	}
@@ -168,11 +168,12 b' PyObject *encodedir(PyObject *self, PyOb'
 		return pathobj;
 	}

-	newobj = PyString_FromStringAndSize(NULL, newlen);
+	newobj = PyBytes_FromStringAndSize(NULL, newlen);

 	if (newobj) {
-		PyString_GET_SIZE(newobj)--;
-		_encodedir(PyString_AS_STRING(newobj), newlen, path,
+		assert(PyBytes_Check(newobj));
+		Py_SIZE(newobj)--;
+		_encodedir(PyBytes_AS_STRING(newobj), newlen, path,
 			   len + 1);
 	}

@@ -515,9 +516,9 b' PyObject *lowerencode(PyObject *self, Py'
 		return NULL;

 	newlen = _lowerencode(NULL, 0, path, len);
-	ret = PyString_FromStringAndSize(NULL, newlen);
+	ret = PyBytes_FromStringAndSize(NULL, newlen);
 	if (ret)
-		_lowerencode(PyString_AS_STRING(ret), newlen, path, len);
+		_lowerencode(PyBytes_AS_STRING(ret), newlen, path, len);

 	return ret;
 }
@@ -568,11 +569,11 b' static PyObject *hashmangle(const char *'
 	if (lastdot >= 0)
 		destsize += len - lastdot - 1;

-	ret = PyString_FromStringAndSize(NULL, destsize);
+	ret = PyBytes_FromStringAndSize(NULL, destsize);
 	if (ret == NULL)
 		return NULL;

-	dest = PyString_AS_STRING(ret);
+	dest = PyBytes_AS_STRING(ret);
 	memcopy(dest, &destlen, destsize, "dh/", 3);

 	/* Copy up to dirprefixlen bytes of each path component, up to
@@ -638,7 +639,8 b' static PyObject *hashmangle(const char *'
 		memcopy(dest, &destlen, destsize, &src[lastdot],
 			len - lastdot - 1);

-	PyString_GET_SIZE(ret) = destlen;
+	assert(PyBytes_Check(ret));
+	Py_SIZE(ret) = destlen;

 	return ret;
 }
@@ -653,7 +655,7 b' static int sha1hash(char hash[20], const'
 	PyObject *shaobj, *hashobj;

 	if (shafunc == NULL) {
-		PyObject *hashlib, *name = PyString_FromString("hashlib");
+		PyObject *hashlib, *name = PyBytes_FromString("hashlib");

 		if (name == NULL)
 			return -1;
@@ -686,14 +688,14 b' static int sha1hash(char hash[20], const'
 	if (hashobj == NULL)
 		return -1;

-	if (!PyString_Check(hashobj) || PyString_GET_SIZE(hashobj) != 20) {
+	if (!PyBytes_Check(hashobj) || PyBytes_GET_SIZE(hashobj) != 20) {
 		PyErr_SetString(PyExc_TypeError,
 			"result of digest is not a 20-byte hash");
 		Py_DECREF(hashobj);
 		return -1;
 	}

-	memcpy(hash, PyString_AS_STRING(hashobj), 20);
+	memcpy(hash, PyBytes_AS_STRING(hashobj), 20);
 	Py_DECREF(hashobj);
 	return 0;
 }
@@ -731,7 +733,7 b' PyObject *pathencode(PyObject *self, PyO'
 	if (!PyArg_ParseTuple(args, "O:pathencode", &pathobj))
 		return NULL;

-	if (PyString_AsStringAndSize(pathobj, &path, &len) == -1) {
+	if (PyBytes_AsStringAndSize(pathobj, &path, &len) == -1) {
 		PyErr_SetString(PyExc_TypeError, "expected a string");
737 | PyErr_SetString(PyExc_TypeError, "expected a string"); | |
736 | return NULL; |
|
738 | return NULL; | |
737 | } |
|
739 | } | |
@@ -747,11 +749,12 b' PyObject *pathencode(PyObject *self, PyO' | |||||
747 | return pathobj; |
|
749 | return pathobj; | |
748 | } |
|
750 | } | |
749 |
|
751 | |||
750 |
newobj = Py |
|
752 | newobj = PyBytes_FromStringAndSize(NULL, newlen); | |
751 |
|
753 | |||
752 | if (newobj) { |
|
754 | if (newobj) { | |
753 | PyString_GET_SIZE(newobj)--; |
|
755 | assert(PyBytes_Check(newobj)); | |
754 | basicencode(PyString_AS_STRING(newobj), newlen, path, |
|
756 | Py_SIZE(newobj)--; | |
|
757 | basicencode(PyBytes_AS_STRING(newobj), newlen, path, | |||
755 | len + 1); |
|
758 | len + 1); | |
756 | } |
|
759 | } | |
757 | } |
|
760 | } |
@@ -40,7 +40,7 @@ class pathauditor(object):
         self.root = root
         self._realfs = realfs
         self.callback = callback
-        if os.path.lexists(root) and not util.checkcase(root):
+        if os.path.lexists(root) and not util.fscasesensitive(root):
            self.normcase = util.normcase
         else:
             self.normcase = lambda x: x
@@ -12,6 +12,10 @@ import difflib
 import re
 import struct
 
+from . import policy
+policynocffi = policy.policynocffi
+modulepolicy = policy.policy
+
 def splitnewlines(text):
     '''like str.splitlines, but only split on newlines.'''
     lines = [l + '\n' for l in text.split('\n')]
@@ -96,3 +100,70 @@ def fixws(text, allws):
         text = re.sub('[ \t\r]+', ' ', text)
     text = text.replace(' \n', '\n')
     return text
+
+if modulepolicy not in policynocffi:
+    try:
+        from _bdiff_cffi import ffi, lib
+    except ImportError:
+        if modulepolicy == 'cffi': # strict cffi import
+            raise
+    else:
+        def blocks(sa, sb):
+            a = ffi.new("struct bdiff_line**")
+            b = ffi.new("struct bdiff_line**")
+            ac = ffi.new("char[]", str(sa))
+            bc = ffi.new("char[]", str(sb))
+            l = ffi.new("struct bdiff_hunk*")
+            try:
+                an = lib.bdiff_splitlines(ac, len(sa), a)
+                bn = lib.bdiff_splitlines(bc, len(sb), b)
+                if not a[0] or not b[0]:
+                    raise MemoryError
+                count = lib.bdiff_diff(a[0], an, b[0], bn, l)
+                if count < 0:
+                    raise MemoryError
+                rl = [None] * count
+                h = l.next
+                i = 0
+                while h:
+                    rl[i] = (h.a1, h.a2, h.b1, h.b2)
+                    h = h.next
+                    i += 1
+            finally:
+                lib.free(a[0])
+                lib.free(b[0])
+                lib.bdiff_freehunks(l.next)
+            return rl
+
+        def bdiff(sa, sb):
+            a = ffi.new("struct bdiff_line**")
+            b = ffi.new("struct bdiff_line**")
+            ac = ffi.new("char[]", str(sa))
+            bc = ffi.new("char[]", str(sb))
+            l = ffi.new("struct bdiff_hunk*")
+            try:
+                an = lib.bdiff_splitlines(ac, len(sa), a)
+                bn = lib.bdiff_splitlines(bc, len(sb), b)
+                if not a[0] or not b[0]:
+                    raise MemoryError
+                count = lib.bdiff_diff(a[0], an, b[0], bn, l)
+                if count < 0:
+                    raise MemoryError
+                rl = []
+                h = l.next
+                la = lb = 0
+                while h:
+                    if h.a1 != la or h.b1 != lb:
+                        lgt = (b[0] + h.b1).l - (b[0] + lb).l
+                        rl.append(struct.pack(">lll", (a[0] + la).l - a[0].l,
+                                              (a[0] + h.a1).l - a[0].l, lgt))
+                        rl.append(str(ffi.buffer((b[0] + lb).l, lgt)))
+                    la = h.a2
+                    lb = h.b2
+                    h = h.next
+
+            finally:
+                lib.free(a[0])
+                lib.free(b[0])
+                lib.bdiff_freehunks(l.next)
+            return "".join(rl)
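The cffi block above follows the module policy pattern used throughout this
series: the accelerated variant is attempted only when the policy permits
it, an ``ImportError`` is fatal only under the strict ``cffi`` policy, and
the pure-Python definitions earlier in the file otherwise stay in force. A
reduced sketch of that import dance, with the policy values and module
names parameterized since they are illustrative::

    import importlib

    def loadimpl(policy, cffiname, purename):
        """Return the cffi module when allowed, else the pure fallback."""
        if policy != 'no-cffi':
            try:
                return importlib.import_module(cffiname)
            except ImportError:
                if policy == 'cffi':  # strict mode: cffi was demanded
                    raise
        return importlib.import_module(purename)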
@@ -9,8 +9,10 @@ from __future__ import absolute_import
 
 import struct
 
-from . import pycompat
+from . import policy, pycompat
 stringio = pycompat.stringio
+modulepolicy = policy.policy
+policynocffi = policy.policynocffi
 
 class mpatchError(Exception):
     """error raised when a delta cannot be decoded
@@ -125,3 +127,44 @@ def patchedsize(orig, delta):
 
     outlen += orig - last
     return outlen
+
+if modulepolicy not in policynocffi:
+    try:
+        from _mpatch_cffi import ffi, lib
+    except ImportError:
+        if modulepolicy == 'cffi': # strict cffi import
+            raise
+    else:
+        @ffi.def_extern()
+        def cffi_get_next_item(arg, pos):
+            all, bins = ffi.from_handle(arg)
+            container = ffi.new("struct mpatch_flist*[1]")
+            to_pass = ffi.new("char[]", str(bins[pos]))
+            all.append(to_pass)
+            r = lib.mpatch_decode(to_pass, len(to_pass) - 1, container)
+            if r < 0:
+                return ffi.NULL
+            return container[0]
+
+        def patches(text, bins):
+            lgt = len(bins)
+            all = []
+            if not lgt:
+                return text
+            arg = (all, bins)
+            patch = lib.mpatch_fold(ffi.new_handle(arg),
+                                    lib.cffi_get_next_item, 0, lgt)
+            if not patch:
+                raise mpatchError("cannot decode chunk")
+            outlen = lib.mpatch_calcsize(len(text), patch)
+            if outlen < 0:
+                lib.mpatch_lfree(patch)
+                raise mpatchError("inconsistency detected")
+            buf = ffi.new("char[]", outlen)
+            if lib.mpatch_apply(buf, text, len(text), patch) < 0:
+                lib.mpatch_lfree(patch)
+                raise mpatchError("error applying patches")
+            res = ffi.buffer(buf, outlen)[:]
+            lib.mpatch_lfree(patch)
+            return res
+
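The cffi ``patches()`` above mirrors the pure-Python version earlier in the
file: each bin is a binary delta made of fragments and, as far as the
format goes, every fragment is a 12-byte big-endian ``(start, end, length)``
header followed by ``length`` bytes of replacement data. A toy pure-Python
applier for a single well-formed delta, shown only to make the layout
concrete (this is not the real API)::

    import struct

    def applydelta(base, delta):
        out, last, pos = [], 0, 0
        while pos < len(delta):
            start, end, l = struct.unpack(">lll", delta[pos:pos + 12])
            out.append(base[last:start])              # unchanged prefix
            out.append(delta[pos + 12:pos + 12 + l])  # replacement bytes
            last, pos = end, pos + 12 + l
        out.append(base[last:])                       # unchanged suffix
        return b"".join(out)

    frag = struct.pack(">lll", 1, 3, 2) + b"XY"
    assert applydelta(b"abcdef", frag) == b"aXYdef"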
@@ -120,13 +120,14 @@ if sys.platform == 'darwin' and ffi is n
             if skip == name and tp == statmod.S_ISDIR:
                 return []
             if stat:
-                mtime = cur.time.tv_sec
+                mtime = cur.mtime.tv_sec
                 mode = (cur.accessmask & ~lib.S_IFMT)| tp
                 ret.append((name, tp, stat_res(st_mode=mode, st_mtime=mtime,
                             st_size=cur.datalength)))
             else:
                 ret.append((name, tp))
-            cur += lgt
+            cur = ffi.cast("val_attrs_t*", int(ffi.cast("intptr_t", cur))
+                           + lgt)
         return ret
 
     def listdir(path, stat=False, skip=None):
@@ -173,30 +174,30 @@ if os.name != 'nt':
 
     class _iovec(ctypes.Structure):
         _fields_ = [
-            ('iov_base', ctypes.c_void_p),
-            ('iov_len', ctypes.c_size_t),
+            (u'iov_base', ctypes.c_void_p),
+            (u'iov_len', ctypes.c_size_t),
         ]
 
     class _msghdr(ctypes.Structure):
         _fields_ = [
-            ('msg_name', ctypes.c_void_p),
-            ('msg_namelen', _socklen_t),
-            ('msg_iov', ctypes.POINTER(_iovec)),
-            ('msg_iovlen', _msg_iovlen_t),
-            ('msg_control', ctypes.c_void_p),
-            ('msg_controllen', _msg_controllen_t),
-            ('msg_flags', ctypes.c_int),
+            (u'msg_name', ctypes.c_void_p),
+            (u'msg_namelen', _socklen_t),
+            (u'msg_iov', ctypes.POINTER(_iovec)),
+            (u'msg_iovlen', _msg_iovlen_t),
+            (u'msg_control', ctypes.c_void_p),
+            (u'msg_controllen', _msg_controllen_t),
+            (u'msg_flags', ctypes.c_int),
         ]
 
     class _cmsghdr(ctypes.Structure):
         _fields_ = [
-            ('cmsg_len', _cmsg_len_t),
-            ('cmsg_level', ctypes.c_int),
-            ('cmsg_type', ctypes.c_int),
-            ('cmsg_data', ctypes.c_ubyte * 0),
+            (u'cmsg_len', _cmsg_len_t),
+            (u'cmsg_level', ctypes.c_int),
+            (u'cmsg_type', ctypes.c_int),
+            (u'cmsg_data', ctypes.c_ubyte * 0),
         ]
 
-    _libc = ctypes.CDLL(ctypes.util.find_library('c'), use_errno=True)
+    _libc = ctypes.CDLL(ctypes.util.find_library(u'c'), use_errno=True)
     _recvmsg = getattr(_libc, 'recvmsg', None)
     if _recvmsg:
         _recvmsg.restype = getattr(ctypes, 'c_ssize_t', ctypes.c_long)
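The ``cur += lgt`` fix in the darwin ``listdir`` hunk is worth pausing on:
cffi pointer arithmetic moves in units of the pointed-to struct, while the
attrlist buffer packs variable-length records, so the cursor has to advance
by ``lgt`` bytes instead. A self-contained illustration of the difference
(requires the ``cffi`` package; the struct is invented)::

    import cffi

    ffi = cffi.FFI()
    ffi.cdef("typedef struct { int a; int b; } pair_t;")
    buf = ffi.new("pair_t[4]")
    p = ffi.cast("pair_t *", buf)
    q = p + 1  # struct-sized step: skips sizeof(pair_t) bytes
    # byte-sized step: round-trip the address through an integer
    r = ffi.cast("pair_t *",
                 int(ffi.cast("intptr_t", p)) + ffi.sizeof("pair_t"))
    assert int(ffi.cast("intptr_t", q)) == int(ffi.cast("intptr_t", r))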
@@ -12,7 +12,9 @@ from __future__ import absolute_import
 
 import sys
 
-if sys.version_info[0] < 3:
+ispy3 = (sys.version_info[0] >= 3)
+
+if not ispy3:
     import cPickle as pickle
     import cStringIO as io
     import httplib
@@ -29,36 +31,84 @@ else:
     import urllib.parse as urlparse
     import xmlrpc.client as xmlrpclib
 
+if ispy3:
+    import builtins
+    import functools
+    import os
+    fsencode = os.fsencode
+
+    def sysstr(s):
+        """Return a keyword str to be passed to Python functions such as
+        getattr() and str.encode()
+
+        This never raises UnicodeDecodeError. Non-ascii characters are
+        considered invalid and mapped to arbitrary but unique code points
+        such that 'sysstr(a) != sysstr(b)' for all 'a != b'.
+        """
+        if isinstance(s, builtins.str):
+            return s
+        return s.decode(u'latin-1')
+
+    def _wrapattrfunc(f):
+        @functools.wraps(f)
+        def w(object, name, *args):
+            return f(object, sysstr(name), *args)
+        return w
+
+    # these wrappers are automagically imported by hgloader
+    delattr = _wrapattrfunc(builtins.delattr)
+    getattr = _wrapattrfunc(builtins.getattr)
+    hasattr = _wrapattrfunc(builtins.hasattr)
+    setattr = _wrapattrfunc(builtins.setattr)
+    xrange = builtins.range
+
+else:
+    def sysstr(s):
+        return s
+
+    # Partial backport from os.py in Python 3, which only accepts bytes.
+    # In Python 2, our paths should only ever be bytes, a unicode path
+    # indicates a bug.
+    def fsencode(filename):
+        if isinstance(filename, str):
+            return filename
+        else:
+            raise TypeError(
+                "expect str, not %s" % type(filename).__name__)
+
 stringio = io.StringIO
 empty = _queue.Empty
 queue = _queue.Queue
 
 class _pycompatstub(object):
-    pass
-
-def _alias(alias, origin, items):
-    """ populate a _pycompatstub
-
-    copies items from origin to alias
-    """
-    def hgcase(item):
-        return item.replace('_', '').lower()
-    for item in items:
-        try:
-            setattr(alias, hgcase(item), getattr(origin, item))
-        except AttributeError:
-            pass
+    def __init__(self):
+        self._aliases = {}
+
+    def _registeraliases(self, origin, items):
+        """Add items that will be populated at the first access"""
+        items = map(sysstr, items)
+        self._aliases.update(
+            (item.replace(sysstr('_'), sysstr('')).lower(), (origin, item))
+            for item in items)
+
+    def __getattr__(self, name):
+        try:
+            origin, item = self._aliases[name]
+        except KeyError:
+            raise AttributeError(name)
+        self.__dict__[name] = obj = getattr(origin, item)
+        return obj
 
 httpserver = _pycompatstub()
 urlreq = _pycompatstub()
 urlerr = _pycompatstub()
-try:
+if not ispy3:
     import BaseHTTPServer
     import CGIHTTPServer
     import SimpleHTTPServer
     import urllib2
     import urllib
-    _alias(urlreq, urllib, (
+    urlreq._registeraliases(urllib, (
         "addclosehook",
         "addinfourl",
         "ftpwrapper",
@@ -71,9 +121,8 @@ try:
         "unquote",
         "url2pathname",
         "urlencode",
-        "urlencode",
     ))
-    _alias(urlreq, urllib2, (
+    urlreq._registeraliases(urllib2, (
         "AbstractHTTPHandler",
         "BaseHandler",
         "build_opener",
@@ -89,24 +138,24 @@ try:
         "Request",
         "urlopen",
     ))
-    _alias(urlerr, urllib2, (
+    urlerr._registeraliases(urllib2, (
         "HTTPError",
         "URLError",
     ))
-    _alias(httpserver, BaseHTTPServer, (
+    httpserver._registeraliases(BaseHTTPServer, (
         "HTTPServer",
         "BaseHTTPRequestHandler",
     ))
-    _alias(httpserver, SimpleHTTPServer, (
+    httpserver._registeraliases(SimpleHTTPServer, (
         "SimpleHTTPRequestHandler",
     ))
-    _alias(httpserver, CGIHTTPServer, (
+    httpserver._registeraliases(CGIHTTPServer, (
         "CGIHTTPRequestHandler",
     ))
 
-except ImportError:
+else:
     import urllib.request
-    _alias(urlreq, urllib.request, (
+    urlreq._registeraliases(urllib.request, (
         "AbstractHTTPHandler",
         "addclosehook",
         "addinfourl",
@@ -134,20 +183,14 @@ except ImportError:
         "urlopen",
     ))
     import urllib.error
-    _alias(urlerr, urllib.error, (
+    urlerr._registeraliases(urllib.error, (
         "HTTPError",
         "URLError",
    ))
     import http.server
-    _alias(httpserver, http.server, (
+    httpserver._registeraliases(http.server, (
         "HTTPServer",
         "BaseHTTPRequestHandler",
         "SimpleHTTPRequestHandler",
         "CGIHTTPRequestHandler",
     ))
-
-try:
-    xrange
-except NameError:
-    import builtins
-    builtins.xrange = range
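The rewritten ``_pycompatstub`` defers ``getattr(origin, item)`` until an
alias is first touched and then caches the result in the instance
``__dict__``, so ``__getattr__`` never fires for that name again; nothing
is resolved at registration time, which keeps lazy importing happy. A
minimal standalone version of the pattern, under invented names::

    import os.path

    class lazyaliases(object):
        def __init__(self):
            self._aliases = {}

        def register(self, origin, items):
            self._aliases.update((i.lower(), (origin, i)) for i in items)

        def __getattr__(self, name):
            try:
                origin, item = self._aliases[name]
            except KeyError:
                raise AttributeError(name)
            # cache on the instance; normal lookup finds it next time
            self.__dict__[name] = obj = getattr(origin, item)
            return obj

    aliases = lazyaliases()
    aliases.register(os.path, ['join'])
    assert aliases.join('a', 'b') == os.path.join('a', 'b')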
@@ -8,6 +8,7 @@
 from __future__ import absolute_import
 
 from . import (
+    pycompat,
     util,
 )
 
@@ -108,6 +109,9 @@ class revsetpredicate(_funcregistrarbase
     Optional argument 'safe' indicates whether a predicate is safe for
     DoS attack (False by default).
 
+    Optional argument 'takeorder' indicates whether a predicate function
+    takes ordering policy as the last argument.
+
     'revsetpredicate' instance in example above can be used to
     decorate multiple functions.
 
@@ -118,10 +122,11 @@ class revsetpredicate(_funcregistrarbase
     Otherwise, explicit 'revset.loadpredicate()' is needed.
     """
     _getname = _funcregistrarbase._parsefuncdecl
-    _docformat = "``%s``\n    %s"
+    _docformat = pycompat.sysstr("``%s``\n    %s")
 
-    def _extrasetup(self, name, func, safe=False):
+    def _extrasetup(self, name, func, safe=False, takeorder=False):
         func._safe = safe
+        func._takeorder = takeorder
 
 class filesetpredicate(_funcregistrarbase):
     """Decorator to register fileset predicate
@@ -156,7 +161,7 @@ class filesetpredicate(_funcregistrarbas
     Otherwise, explicit 'fileset.loadpredicate()' is needed.
     """
    _getname = _funcregistrarbase._parsefuncdecl
-    _docformat = "``%s``\n    %s"
+    _docformat = pycompat.sysstr("``%s``\n    %s")
 
     def _extrasetup(self, name, func, callstatus=False, callexisting=False):
         func._callstatus = callstatus
@@ -165,7 +170,7 @@ class filesetpredicate(_funcregistrarbas
 class _templateregistrarbase(_funcregistrarbase):
     """Base of decorator to register functions as template specific one
     """
-    _docformat = ":%s: %s"
+    _docformat = pycompat.sysstr(":%s: %s")
 
 class templatekeyword(_templateregistrarbase):
     """Decorator to register template keyword
@@ -147,9 +147,10 @@ def strip(ui, repo, nodelist, backup=Tru
                     vfs.join(backupfile))
         repo.ui.log("backupbundle", "saved backup bundle to %s\n",
                     vfs.join(backupfile))
-    if saveheads or savebases:
-        # do not compress partial bundle if we remove it from disk later
-        chgrpfile = _bundle(repo, savebases, saveheads, node, 'temp',
+    tmpbundlefile = None
+    if saveheads:
+        # do not compress temporary bundle if we remove it from disk later
+        tmpbundlefile = _bundle(repo, savebases, saveheads, node, 'temp',
                             compress=False)
 
     mfst = repo.manifest
@@ -173,32 +174,34 @@ def strip(ui, repo, nodelist, backup=Tru
                 if (unencoded.startswith('meta/') and
                     unencoded.endswith('00manifest.i')):
                     dir = unencoded[5:-12]
-                    repo.dirlog(dir).strip(striprev, tr)
+                    repo.manifest.dirlog(dir).strip(striprev, tr)
             for fn in files:
                 repo.file(fn).strip(striprev, tr)
             tr.endgroup()
 
            for i in xrange(offset, len(tr.entries)):
                 file, troffset, ignore = tr.entries[i]
-                repo.svfs(file, 'a').truncate(troffset)
+                with repo.svfs(file, 'a', checkambig=True) as fp:
+                    fp.truncate(troffset)
                 if troffset == 0:
                     repo.store.markremoved(file)
 
-        if saveheads or savebases:
+        if tmpbundlefile:
             ui.note(_("adding branch\n"))
-            f = vfs.open(chgrpfile, "rb")
-            gen = exchange.readbundle(ui, f, chgrpfile, vfs)
+            f = vfs.open(tmpbundlefile, "rb")
+            gen = exchange.readbundle(ui, f, tmpbundlefile, vfs)
             if not repo.ui.verbose:
                 # silence internal shuffling chatter
                 repo.ui.pushbuffer()
             if isinstance(gen, bundle2.unbundle20):
                 with repo.transaction('strip') as tr:
                     tr.hookargs = {'source': 'strip',
-                                   'url': 'bundle:' + vfs.join(chgrpfile)}
+                                   'url': 'bundle:' + vfs.join(tmpbundlefile)}
                     bundle2.applybundle(repo, gen, tr, source='strip',
-                                        url='bundle:' + vfs.join(chgrpfile))
+                                        url='bundle:' + vfs.join(tmpbundlefile))
             else:
-                gen.apply(repo, 'strip', 'bundle:' + vfs.join(chgrpfile), True)
+                gen.apply(repo, 'strip', 'bundle:' + vfs.join(tmpbundlefile),
+                          True)
             if not repo.ui.verbose:
                 repo.ui.popbuffer()
             f.close()
@@ -227,16 +230,18 @@ def strip(ui, repo, nodelist, backup=Tru
 
     except: # re-raises
         if backupfile:
-            ui.warn(_("strip failed, full bundle stored in '%s'\n")
+            ui.warn(_("strip failed, backup bundle stored in '%s'\n")
                     % vfs.join(backupfile))
-        elif saveheads:
-            ui.warn(_("strip failed, partial bundle stored in '%s'\n")
-                    % vfs.join(chgrpfile))
+        if tmpbundlefile:
+            ui.warn(_("strip failed, unrecovered changes stored in '%s'\n")
+                    % vfs.join(tmpbundlefile))
+            ui.warn(_("(fix the problem, then recover the changesets with "
+                      "\"hg unbundle '%s'\")\n") % vfs.join(tmpbundlefile))
         raise
     else:
-        if saveheads or savebases:
-            # Remove partial backup only if there were no exceptions
-            vfs.unlink(chgrpfile)
+        if tmpbundlefile:
+            # Remove temporary bundle only if there were no exceptions
+            vfs.unlink(tmpbundlefile)
 
     repo.destroyed()
 
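The truncation loop in the middle hunk is how strip rolls revlogs back: the
transaction journal remembers each file's length from before the operation,
and truncating to that offset discards everything appended since (opening
with ``checkambig=True`` additionally guards against the stat-ambiguity
problem described in the revlog changes below). A toy version of
offset-based rollback, using a scratch file::

    import os
    import tempfile

    fd, path = tempfile.mkstemp()
    os.close(fd)
    with open(path, 'wb') as f:
        f.write(b'committed')
    offset = os.path.getsize(path)   # journal: size before appending
    with open(path, 'ab') as f:
        f.write(b' speculative')
    with open(path, 'r+b') as f:     # rollback: cut back to the offset
        f.truncate(offset)
    with open(path, 'rb') as f:
        assert f.read() == b'committed'
    os.unlink(path)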
@@ -212,8 +212,11 @@ class revlog(object):
     fashion, which means we never need to rewrite a file to insert or
     remove data, and can use some simple techniques to avoid the need
     for locking while reading.
+
+    If checkambig, indexfile is opened with checkambig=True at
+    writing, to avoid file stat ambiguity.
     """
-    def __init__(self, opener, indexfile):
+    def __init__(self, opener, indexfile, checkambig=False):
         """
         create a revlog object
 
@@ -223,11 +226,13 @@ class revlog(object):
         self.indexfile = indexfile
         self.datafile = indexfile[:-2] + ".d"
         self.opener = opener
+        # When True, indexfile is opened with checkambig=True at writing, to
+        # avoid file stat ambiguity.
+        self._checkambig = checkambig
         # 3-tuple of (node, rev, text) for a raw revision.
         self._cache = None
-        # 2-tuple of (rev, baserev) defining the base revision the delta chain
-        # begins at for a revision.
-        self._basecache = None
+        # Maps rev to chain base rev.
+        self._chainbasecache = util.lrucachedict(100)
         # 2-tuple of (offset, data) of raw data from the revlog at an offset.
         self._chunkcache = (0, '')
         # How much data to read and cache into the raw revlog data cache.
@@ -292,6 +297,8 @@ class revlog(object):
             raise RevlogError(_("index %s unknown format %d")
                               % (self.indexfile, fmt))
 
+        self.storedeltachains = True
+
         self._io = revlogio()
         if self.version == REVLOGV0:
             self._io = revlogoldio()
@@ -340,7 +347,7 @@ class revlog(object):
 
     def clearcaches(self):
         self._cache = None
-        self._basecache = None
+        self._chainbasecache.clear()
         self._chunkcache = (0, '')
         self._pcache = {}
 
@@ -390,11 +397,17 @@ class revlog(object):
     def length(self, rev):
         return self.index[rev][1]
     def chainbase(self, rev):
+        base = self._chainbasecache.get(rev)
+        if base is not None:
+            return base
+
         index = self.index
         base = index[rev][3]
         while base != rev:
             rev = base
             base = index[rev][3]
+
+        self._chainbasecache[rev] = base
         return base
     def chainlen(self, rev):
         return self._chaininfo(rev)[0]
@@ -1271,7 +1284,8 @@ class revlog(object):
         finally:
             df.close()
 
-        fp = self.opener(self.indexfile, 'w', atomictemp=True)
+        fp = self.opener(self.indexfile, 'w', atomictemp=True,
+                         checkambig=self._checkambig)
         self.version &= ~(REVLOGNGINLINEDATA)
         self._inline = False
         for i in self:
@@ -1314,7 +1328,7 @@ class revlog(object):
         dfh = None
         if not self._inline:
             dfh = self.opener(self.datafile, "a+")
-        ifh = self.opener(self.indexfile, "a+")
+        ifh = self.opener(self.indexfile, "a+", checkambig=self._checkambig)
         try:
             return self._addrevision(node, text, transaction, link, p1, p2,
                                      REVIDX_DEFAULT_FLAGS, cachedelta, ifh, dfh)
@@ -1428,30 +1442,24 @@ class revlog(object):
                         fh = dfh
                     ptext = self.revision(self.node(rev), _df=fh)
                     delta = mdiff.textdiff(ptext, t)
-            data = self.compress(delta)
-            l = len(data[1]) + len(data[0])
-            if basecache[0] == rev:
-                chainbase = basecache[1]
-            else:
-                chainbase = self.chainbase(rev)
-            dist = l + offset - self.start(chainbase)
+            header, data = self.compress(delta)
+            deltalen = len(header) + len(data)
+            chainbase = self.chainbase(rev)
+            dist = deltalen + offset - self.start(chainbase)
             if self._generaldelta:
                 base = rev
             else:
                 base = chainbase
             chainlen, compresseddeltalen = self._chaininfo(rev)
             chainlen += 1
-            compresseddeltalen += l
-            return dist, l, data, base, chainbase, chainlen, compresseddeltalen
+            compresseddeltalen += deltalen
+            return (dist, deltalen, (header, data), base,
+                    chainbase, chainlen, compresseddeltalen)
 
         curr = len(self)
         prev = curr - 1
-        base = chainbase = curr
         offset = self.end(prev)
         delta = None
-        if self._basecache is None:
-            self._basecache = (prev, self.chainbase(prev))
-        basecache = self._basecache
         p1r, p2r = self.rev(p1), self.rev(p2)
@@ -1463,8 +1471,12 @@ class revlog(object):
         textlen = len(text)
 
         # should we try to build a delta?
-        if prev != nullrev:
+        if prev != nullrev and self.storedeltachains:
             tested = set()
+            # This condition is true most of the time when processing
+            # changegroup data into a generaldelta repo. The only time it
+            # isn't true is if this is the first revision in a delta chain
+            # or if ``format.generaldelta=true`` disabled ``lazydeltabase``.
             if cachedelta and self._generaldelta and self._lazydeltabase:
                 # Assume what we received from the server is a good choice
                 # build delta will reuse the cache
@@ -1515,7 +1527,7 @@ class revlog(object):
 
         if type(text) == str: # only accept immutable objects
             self._cache = (node, curr, text)
-        self._basecache = (curr, chainbase)
+        self._chainbasecache[curr] = chainbase
         return node
 
     def _writeentry(self, transaction, ifh, dfh, entry, data, link, offset):
@@ -1569,7 +1581,7 @@ class revlog(object):
             end = 0
         if r:
             end = self.end(r - 1)
-        ifh = self.opener(self.indexfile, "a+")
+        ifh = self.opener(self.indexfile, "a+", checkambig=self._checkambig)
         isize = r * self._io.size
         if self._inline:
             transaction.add(self.indexfile, end + isize, r)
@@ -1585,10 +1597,7 @@ class revlog(object):
         try:
             # loop through our set of deltas
             chain = None
-            while True:
-                chunkdata = cg.deltachunk(chain)
-                if not chunkdata:
-                    break
+            for chunkdata in iter(lambda: cg.deltachunk(chain), {}):
                 node = chunkdata['node']
                 p1 = chunkdata['p1']
                 p2 = chunkdata['p2']
@@ -1646,7 +1655,8 @@ class revlog(object):
                     # reopen the index
                     ifh.close()
                     dfh = self.opener(self.datafile, "a+")
-                    ifh = self.opener(self.indexfile, "a+")
+                    ifh = self.opener(self.indexfile, "a+",
+                                      checkambig=self._checkambig)
         finally:
             if dfh:
                 dfh.close()
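``chainbase`` walks base pointers until it finds a revision that is its own
base, which is linear in the delta chain length; the new ``lrucachedict``
turns repeated walks into cheap lookups at bounded memory. A sketch of the
memoized walk, with a plain dict standing in for ``util.lrucachedict`` and
a simple ``rev -> base`` mapping standing in for the index::

    def makechainbase(basemap, cache):
        def chainbase(rev):
            base = cache.get(rev)
            if base is not None:
                return base
            start = rev
            base = basemap[rev]
            while base != rev:   # follow the chain to its root
                rev = base
                base = basemap[rev]
            cache[start] = base  # memoize the answer for the queried rev
            return base
        return chainbase

    chainbase = makechainbase({0: 0, 1: 0, 2: 1, 3: 2}, {})
    assert chainbase(3) == 0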
@@ -9,6 +9,7 @@ from __future__ import absolute_import
 
 import heapq
 import re
+import string
 
 from .i18n import _
 from . import (
@@ -22,6 +23,7 @@ from . import (
     parser,
     pathutil,
     phases,
+    pycompat,
     registrar,
     repoview,
     util,
@@ -149,18 +151,16 @@ elements = {
     "(": (21, None, ("group", 1, ")"), ("func", 1, ")"), None),
     "##": (20, None, None, ("_concat", 20), None),
     "~": (18, None, None, ("ancestor", 18), None),
-    "^": (18, None, None, ("parent", 18), ("parentpost", 18)),
+    "^": (18, None, None, ("parent", 18), "parentpost"),
     "-": (5, None, ("negate", 19), ("minus", 5), None),
-    "::": (17, None, ("dagrangepre", 17), ("dagrange", 17),
-           ("dagrangepost", 17)),
-    "..": (17, None, ("dagrangepre", 17), ("dagrange", 17),
-           ("dagrangepost", 17)),
-    ":": (15, "rangeall", ("rangepre", 15), ("range", 15), ("rangepost", 15)),
+    "::": (17, None, ("dagrangepre", 17), ("dagrange", 17), "dagrangepost"),
+    "..": (17, None, ("dagrangepre", 17), ("dagrange", 17), "dagrangepost"),
+    ":": (15, "rangeall", ("rangepre", 15), ("range", 15), "rangepost"),
     "not": (10, None, ("not", 10), None, None),
     "!": (10, None, ("not", 10), None, None),
     "and": (5, None, None, ("and", 5), None),
     "&": (5, None, None, ("and", 5), None),
-    "%": (5, None, None, ("only", 5), ("onlypost", 5)),
+    "%": (5, None, None, ("only", 5), "onlypost"),
     "or": (4, None, None, ("or", 4), None),
     "|": (4, None, None, ("or", 4), None),
     "+": (4, None, None, ("or", 4), None),
@@ -175,12 +175,12 @@ elements = {
 keywords = set(['and', 'or', 'not'])
 
 # default set of valid characters for the initial letter of symbols
-_syminitletters = set(c for c in [chr(i) for i in xrange(256)]
-                      if c.isalnum() or c in '._@' or ord(c) > 127)
+_syminitletters = set(
+    string.ascii_letters +
+    string.digits + pycompat.sysstr('._@')) | set(map(chr, xrange(128, 256)))
 
 # default set of valid characters for non-initial letters of symbols
-_symletters = set(c for c in [chr(i) for i in xrange(256)]
-                  if c.isalnum() or c in '-._/@' or ord(c) > 127)
+_symletters = _syminitletters | set(pycompat.sysstr('-/'))
 
 def tokenize(program, lookup=None, syminitletters=None, symletters=None):
     '''
@@ -362,14 +362,22 @@ def stringset(repo, subset, x):
         return baseset([x])
     return baseset()
 
-def rangeset(repo, subset, x, y):
+def rangeset(repo, subset, x, y, order):
     m = getset(repo, fullreposet(repo), x)
     n = getset(repo, fullreposet(repo), y)
 
     if not m or not n:
         return baseset()
-    m, n = m.first(), n.last()
+    return _makerangeset(repo, subset, m.first(), n.last(), order)
 
+def rangepre(repo, subset, y, order):
+    # ':y' can't be rewritten to '0:y' since '0' may be hidden
+    n = getset(repo, fullreposet(repo), y)
+    if not n:
+        return baseset()
+    return _makerangeset(repo, subset, 0, n.last(), order)
+
+def _makerangeset(repo, subset, m, n, order):
     if m == n:
         r = baseset([m])
     elif n == node.wdirrev:
@@ -380,35 +388,43 @@ def rangeset(repo, subset, x, y):
         r = spanset(repo, m, n + 1)
     else:
         r = spanset(repo, m, n - 1)
-    # XXX We should combine with subset first: 'subset & baseset(...)'. This is
-    # necessary to ensure we preserve the order in subset.
-    #
-    # This has performance implication, carrying the sorting over when possible
-    # would be more efficient.
-    return r & subset
+
+    if order == defineorder:
+        return r & subset
+    else:
+        # carrying the sorting over when possible would be more efficient
+        return subset & r
 
-def dagrange(repo, subset, x, y):
+def dagrange(repo, subset, x, y, order):
     r = fullreposet(repo)
     xs = reachableroots(repo, getset(repo, r, x), getset(repo, r, y),
                         includepath=True)
     return subset & xs
 
-def andset(repo, subset, x, y):
+def andset(repo, subset, x, y, order):
     return getset(repo, getset(repo, subset, x), y)
 
-def differenceset(repo, subset, x, y):
+def differenceset(repo, subset, x, y, order):
     return getset(repo, subset, x) - getset(repo, subset, y)
 
-def orset(repo, subset, *xs):
+def _orsetlist(repo, subset, xs):
     assert xs
     if len(xs) == 1:
         return getset(repo, subset, xs[0])
     p = len(xs) // 2
-    a = orset(repo, subset, *xs[:p])
-    b = orset(repo, subset, *xs[p:])
+    a = _orsetlist(repo, subset, xs[:p])
+    b = _orsetlist(repo, subset, xs[p:])
     return a + b
 
-def notset(repo, subset, x):
+def orset(repo, subset, x, order):
+    xs = getlist(x)
+    if order == followorder:
+        # slow path to take the subset order
+        return subset & _orsetlist(repo, fullreposet(repo), xs)
+    else:
+        return _orsetlist(repo, subset, xs)
+
+def notset(repo, subset, x, order):
     return subset - getset(repo, subset, x)
 
 def listset(repo, subset, *xs):
@@ -418,10 +434,13 @@ def listset(repo, subset, *xs):
 def keyvaluepair(repo, subset, k, v):
     raise error.ParseError(_("can't use a key-value pair in this context"))
 
-def func(repo, subset, a, b):
+def func(repo, subset, a, b, order):
     f = getsymbol(a)
     if f in symbols:
-        return symbols[f](repo, subset, b)
+        fn = symbols[f]
+        if getattr(fn, '_takeorder', False):
+            return fn(repo, subset, b, order)
+        return fn(repo, subset, b)
 
 keep = lambda fn: getattr(fn, '__doc__', None) is not None
 
@@ -515,7 +534,7 @@ def _firstancestors(repo, subset, x):
     # Like ``ancestors(set)`` but follows only the first parents.
     return _ancestors(repo, subset, x, followfirst=True)
 
-def ancestorspec(repo, subset, x, n):
+def ancestorspec(repo, subset, x, n, order):
     """``set~n``
     Changesets that are the Nth ancestor (first parents only) of a changeset
     in set.
@@ -1001,12 +1020,21 @@ def first(repo, subset, x):
     return limit(repo, subset, x)
 
 def _follow(repo, subset, x, name, followfirst=False):
-    l = getargs(x, 0, 1, _("%s takes no arguments or a pattern") % name)
+    l = getargs(x, 0, 2, _("%s takes no arguments or a pattern "
+                           "and an optional revset") % name)
     c = repo['.']
     if l:
         x = getstring(l[0], _("%s expected a pattern") % name)
+        rev = None
+        if len(l) >= 2:
+            revs = getset(repo, fullreposet(repo), l[1])
+            if len(revs) != 1:
+                raise error.RepoLookupError(
+                    _("%s expected one starting revision") % name)
+            rev = revs.last()
+            c = repo[rev]
         matcher = matchmod.match(repo.root, repo.getcwd(), [x],
-                                 ctx=repo[None], default='path')
+                                 ctx=repo[rev], default='path')
 
     files = c.manifest().walk(matcher)
 
@@ -1021,20 +1049,20 @@ def _follow(repo, subset, x, name, follo
 
     return subset & s
 
-@predicate('follow([pattern])', safe=True)
+@predicate('follow([pattern[, startrev]])', safe=True)
 def follow(repo, subset, x):
     """
     An alias for ``::.`` (ancestors of the working directory's first parent).
     If pattern is specified, the histories of files matching given
-    pattern is followed, including copies.
+    pattern in the revision given by startrev are followed, including copies.
     """
     return _follow(repo, subset, x, 'follow')
 
 @predicate('_followfirst', safe=True)
 def _followfirst(repo, subset, x):
-    # ``followfirst([pattern])``
-    # Like ``follow([pattern])`` but follows only the first parent
-    # every revisions or files revisions.
+    # ``followfirst([pattern[, startrev]])``
+    # Like ``follow([pattern[, startrev]])`` but follows only the first parent
+    # of every revisions or files revisions.
     return _follow(repo, subset, x, '_followfirst', followfirst=True)
 
 @predicate('all()', safe=True)
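With the optional second argument, ``follow()`` anchors the file-history
walk at an arbitrary revision instead of the working directory's first
parent. A hedged usage sketch through the Python API (the repository path,
file name and revision number are hypothetical, and this vintage of
Mercurial still allowed constructing ``ui()`` directly)::

    from mercurial import hg, ui as uimod

    repo = hg.repository(uimod.ui(), '.')
    # history of the file as it existed in revision 42, copies included
    revs = repo.revs("follow('path:mercurial/revset.py', 42)")
    print(list(revs))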
@@ -1519,6 +1547,9 @@ def p2(repo, subset, x):
     # some optimisations from the fact this is a baseset.
     return subset & ps
 
+def parentpost(repo, subset, x, order):
+    return p1(repo, subset, x)
+
 @predicate('parents([set])', safe=True)
 def parents(repo, subset, x):
     """
@@ -1569,7 +1600,7 @@ def secret(repo, subset, x):
     target = phases.secret
     return _phase(repo, subset, target)
 
-def parentspec(repo, subset, x, n):
+def parentspec(repo, subset, x, n, order):
     """``set^0``
     The set.
     ``set^1`` (or ``set^``), ``set^2``
@@ -1590,7 +1621,7 @@ def parentspec(repo, subset, x, n):
             ps.add(cl.parentrevs(r)[0])
         elif n == 2:
             parents = cl.parentrevs(r)
-            if len(parents) > 1:
+            if parents[1] != node.nullrev:
                 ps.add(parents[1])
     return subset & ps
 
@@ -1813,12 +1844,13 @@ def matching(repo, subset, x):
 
     return subset.filter(matches, condrepr=('<matching%r %r>', fields, revs))
 
-@predicate('reverse(set)', safe=True)
-def reverse(repo, subset, x):
+@predicate('reverse(set)', safe=True, takeorder=True)
+def reverse(repo, subset, x, order):
     """Reverse order of set.
     """
     l = getset(repo, subset, x)
-    l.reverse()
+    if order == defineorder:
+        l.reverse()
     return l
 
 @predicate('roots(set)', safe=True)
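``takeorder=True`` hands a predicate the ordering requirement computed for
its position in the parse tree, which is what lets ``reverse()`` above
become a no-op when an enclosing expression already owns the ordering. A
sketch of the dispatch convention, with stand-in constants and plain lists
modeling revision sets::

    defineorder, followorder = 'define', 'follow'

    def reverse(subset, order):
        l = list(subset)
        if order == defineorder:  # only reorder when we own the ordering
            l.reverse()
        return l

    assert reverse([1, 2, 3], defineorder) == [3, 2, 1]
    assert reverse([1, 2, 3], followorder) == [1, 2, 3]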
@@ -1880,8 +1912,8 b' def _getsortargs(x):' | |||||
1880 |
|
1912 | |||
1881 | return args['set'], keyflags, opts |
|
1913 | return args['set'], keyflags, opts | |
1882 |
|
1914 | |||
1883 | @predicate('sort(set[, [-]key... [, ...]])', safe=True) |
|
1915 | @predicate('sort(set[, [-]key... [, ...]])', safe=True, takeorder=True) | |
1884 | def sort(repo, subset, x): |
|
1916 | def sort(repo, subset, x, order): | |
1885 | """Sort set by keys. The default sort order is ascending, specify a key |
|
1917 | """Sort set by keys. The default sort order is ascending, specify a key | |
1886 | as ``-key`` to sort in descending order. |
|
1918 | as ``-key`` to sort in descending order. | |
1887 |
|
1919 | |||
@@ -1902,7 +1934,7 b' def sort(repo, subset, x):' | |||||
1902 | s, keyflags, opts = _getsortargs(x) |
|
1934 | s, keyflags, opts = _getsortargs(x) | |
1903 | revs = getset(repo, subset, s) |
|
1935 | revs = getset(repo, subset, s) | |
1904 |
|
1936 | |||
1905 | if not keyflags: |
|
1937 | if not keyflags or order != defineorder: | |
1906 | return revs |
|
1938 | return revs | |
1907 | if len(keyflags) == 1 and keyflags[0][0] == "rev": |
|
1939 | if len(keyflags) == 1 and keyflags[0][0] == "rev": | |
1908 | revs.sort(reverse=keyflags[0][1]) |
|
1940 | revs.sort(reverse=keyflags[0][1]) | |
@@ -2233,9 +2265,7 b' def wdir(repo, subset, x):' | |||||
2233 | return baseset([node.wdirrev]) |
|
2265 | return baseset([node.wdirrev]) | |
2234 | return baseset() |
|
2266 | return baseset() | |
2235 |
|
2267 | |||
2236 | # for internal use |
|
2268 | def _orderedlist(repo, subset, x): | |
2237 | @predicate('_list', safe=True) |
|
|||
2238 | def _list(repo, subset, x): |
|
|||
2239 | s = getstring(x, "internal error") |
|
2269 | s = getstring(x, "internal error") | |
2240 | if not s: |
|
2270 | if not s: | |
2241 | return baseset() |
|
2271 | return baseset() | |
@@ -2264,8 +2294,15 b' def _list(repo, subset, x):' | |||||
2264 | return baseset(ls) |
|
2294 | return baseset(ls) | |
2265 |
|
2295 | |||
2266 | # for internal use |
|
2296 | # for internal use | |
2267 |
@predicate('_ |
|
2297 | @predicate('_list', safe=True, takeorder=True) | |
2268 |
def _ |
|
2298 | def _list(repo, subset, x, order): | |
|
2299 | if order == followorder: | |||
|
2300 | # slow path to take the subset order | |||
|
2301 | return subset & _orderedlist(repo, fullreposet(repo), x) | |||
|
2302 | else: | |||
|
2303 | return _orderedlist(repo, subset, x) | |||
|
2304 | ||||
|
2305 | def _orderedintlist(repo, subset, x): | |||
2269 | s = getstring(x, "internal error") |
|
2306 | s = getstring(x, "internal error") | |
2270 | if not s: |
|
2307 | if not s: | |
2271 | return baseset() |
|
2308 | return baseset() | |
@@ -2274,8 +2311,15 @@ def _intlist(repo, subset, x):
     return baseset([r for r in ls if r in s])
 
 # for internal use
-@predicate('_hexlist', safe=True)
-def _hexlist(repo, subset, x):
+@predicate('_intlist', safe=True, takeorder=True)
+def _intlist(repo, subset, x, order):
+    if order == followorder:
+        # slow path to take the subset order
+        return subset & _orderedintlist(repo, fullreposet(repo), x)
+    else:
+        return _orderedintlist(repo, subset, x)
+
+def _orderedhexlist(repo, subset, x):
     s = getstring(x, "internal error")
     if not s:
         return baseset()
@@ -2284,8 +2328,18 @@ def _hexlist(repo, subset, x):
         s = subset
     return baseset([r for r in ls if r in s])
 
+# for internal use
+@predicate('_hexlist', safe=True, takeorder=True)
+def _hexlist(repo, subset, x, order):
+    if order == followorder:
+        # slow path to take the subset order
+        return subset & _orderedhexlist(repo, fullreposet(repo), x)
+    else:
+        return _orderedhexlist(repo, subset, x)
+
 methods = {
     "range": rangeset,
+    "rangepre": rangepre,
     "dagrange": dagrange,
     "string": stringset,
     "symbol": stringset,
@@ -2298,7 +2352,51 @@ methods = {
     "func": func,
     "ancestor": ancestorspec,
     "parent": parentspec,
-    "parentpost": p1,
+    "parentpost": parentpost,
+}
+
+# Constants for ordering requirement, used in _analyze():
+#
+# If 'define', any nested functions and operations can change the ordering of
+# the entries in the set. If 'follow', any nested functions and operations
+# should take the ordering specified by the first operand to the '&' operator.
+#
+# For instance,
+#
+#   X & (Y | Z)
+#   ^   ^^^^^^^
+#   |   follow
+#   define
+#
+# will be evaluated as 'or(y(x()), z(x()))', where 'x()' can change the order
+# of the entries in the set, but 'y()', 'z()' and 'or()' shouldn't.
+#
+# 'any' means the order doesn't matter. For instance,
+#
+#   X & !Y
+#       ^
+#       any
+#
+# 'y()' can either enforce its ordering requirement or take the ordering
+# specified by 'x()' because 'not()' doesn't care the order.
+#
+# Transition of ordering requirement:
+#
+# 1. starts with 'define'
+# 2. shifts to 'follow' by 'x & y'
+# 3. changes back to 'define' on function call 'f(x)' or function-like
+#    operation 'x (f) y' because 'f' may have its own ordering requirement
+#    for 'x' and 'y' (e.g. 'first(x)')
+#
+anyorder = 'any'        # don't care the order
+defineorder = 'define'  # should define the order
+followorder = 'follow'  # must follow the current order
+
+# transition table for 'x & y', from the current expression 'x' to 'y'
+_tofolloworder = {
+    anyorder: anyorder,
+    defineorder: followorder,
+    followorder: followorder,
 }
 
 def _matchonly(revs, bases):
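These transition rules are small enough to demonstrate end to end. A runnable toy annotator (a standalone sketch, not the real `_analyze`): the right side of '&' shifts to 'follow', 'not' relaxes its argument to 'any', and a function call resets its argument to 'define'.

    anyorder, defineorder, followorder = 'any', 'define', 'follow'
    _tofolloworder = {anyorder: anyorder,
                      defineorder: followorder,
                      followorder: followorder}

    def annotate(x, order=defineorder):
        # tag each node of a tiny expression tree with its requirement
        op = x[0]
        if op == 'symbol':
            return x + (order,)
        if op == 'and':
            # left keeps the current requirement; right must follow it
            return (op, annotate(x[1], order),
                    annotate(x[2], _tofolloworder[order]), order)
        if op == 'or':
            return (op,) + tuple(annotate(y, order) for y in x[1:]) + (order,)
        if op == 'not':
            return (op, annotate(x[1], anyorder), order)
        if op == 'func':
            # a function may impose its own order on its argument
            return (op, x[1], annotate(x[2], defineorder), order)
        raise ValueError(op)

    # X & (Y | Z): X stays 'define', the whole (Y | Z) side becomes 'follow'
    print(annotate(('and', ('symbol', 'X'),
                    ('or', ('symbol', 'Y'), ('symbol', 'Z')))))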
@@ -2316,6 +2414,97 @@ def _matchonly(revs, bases):
             and getsymbol(bases[1][1]) == 'ancestors'):
         return ('list', revs[2], bases[1][2])
 
+def _fixops(x):
+    """Rewrite raw parsed tree to resolve ambiguous syntax which cannot be
+    handled well by our simple top-down parser"""
+    if not isinstance(x, tuple):
+        return x
+
+    op = x[0]
+    if op == 'parent':
+        # x^:y means (x^) : y, not x ^ (:y)
+        # x^:  means (x^) :,   not x ^ (:)
+        post = ('parentpost', x[1])
+        if x[2][0] == 'dagrangepre':
+            return _fixops(('dagrange', post, x[2][1]))
+        elif x[2][0] == 'rangepre':
+            return _fixops(('range', post, x[2][1]))
+        elif x[2][0] == 'rangeall':
+            return _fixops(('rangepost', post))
+    elif op == 'or':
+        # make number of arguments deterministic:
+        # x + y + z -> (or x y z) -> (or (list x y z))
+        return (op, _fixops(('list',) + x[1:]))
+
+    return (op,) + tuple(_fixops(y) for y in x[1:])
+
+def _analyze(x, order):
+    if x is None:
+        return x
+
+    op = x[0]
+    if op == 'minus':
+        return _analyze(('and', x[1], ('not', x[2])), order)
+    elif op == 'only':
+        t = ('func', ('symbol', 'only'), ('list', x[1], x[2]))
+        return _analyze(t, order)
+    elif op == 'onlypost':
+        return _analyze(('func', ('symbol', 'only'), x[1]), order)
+    elif op == 'dagrangepre':
+        return _analyze(('func', ('symbol', 'ancestors'), x[1]), order)
+    elif op == 'dagrangepost':
+        return _analyze(('func', ('symbol', 'descendants'), x[1]), order)
+    elif op == 'rangeall':
+        return _analyze(('rangepre', ('string', 'tip')), order)
+    elif op == 'rangepost':
+        return _analyze(('range', x[1], ('string', 'tip')), order)
+    elif op == 'negate':
+        s = getstring(x[1], _("can't negate that"))
+        return _analyze(('string', '-' + s), order)
+    elif op in ('string', 'symbol'):
+        return x
+    elif op == 'and':
+        ta = _analyze(x[1], order)
+        tb = _analyze(x[2], _tofolloworder[order])
+        return (op, ta, tb, order)
+    elif op == 'or':
+        return (op, _analyze(x[1], order), order)
+    elif op == 'not':
+        return (op, _analyze(x[1], anyorder), order)
+    elif op in ('rangepre', 'parentpost'):
+        return (op, _analyze(x[1], defineorder), order)
+    elif op == 'group':
+        return _analyze(x[1], order)
+    elif op in ('dagrange', 'range', 'parent', 'ancestor'):
+        ta = _analyze(x[1], defineorder)
+        tb = _analyze(x[2], defineorder)
+        return (op, ta, tb, order)
+    elif op == 'list':
+        return (op,) + tuple(_analyze(y, order) for y in x[1:])
+    elif op == 'keyvalue':
+        return (op, x[1], _analyze(x[2], order))
+    elif op == 'func':
+        f = getsymbol(x[1])
+        d = defineorder
+        if f == 'present':
+            # 'present(set)' is known to return the argument set with no
+            # modification, so forward the current order to its argument
+            d = order
+        return (op, x[1], _analyze(x[2], d), order)
+    raise ValueError('invalid operator %r' % op)
+
+def analyze(x, order=defineorder):
+    """Transform raw parsed tree to evaluatable tree which can be fed to
+    optimize() or getset()
+
+    All pseudo operations should be mapped to real operations or functions
+    defined in methods or symbols table respectively.
+
+    'order' specifies how the current expression 'x' is ordered (see the
+    constants defined above.)
+    """
+    return _analyze(x, order)
+
 def _optimize(x, small):
     if x is None:
         return 0, x
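The 'or' branch of `_fixops` is self-contained enough to run on its own; a trimmed excerpt (only the 'or' and fall-through cases) showing the rewrite it performs:

    def _fixops(x):
        if not isinstance(x, tuple):
            return x
        op = x[0]
        if op == 'or':
            # x + y + z parses as (or x y z); wrap the operands in a
            # 'list' node so every 'or' carries exactly one argument
            return (op, _fixops(('list',) + x[1:]))
        return (op,) + tuple(_fixops(y) for y in x[1:])

    tree = ('or', ('symbol', 'a'), ('symbol', 'b'), ('symbol', 'c'))
    print(_fixops(tree))
    # ('or', ('list', ('symbol', 'a'), ('symbol', 'b'), ('symbol', 'c')))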
@@ -2325,47 +2514,29 @@ def _optimize(x, small):
     smallbonus = .5
 
     op = x[0]
-    if op == 'minus':
-        return _optimize(('and', x[1], ('not', x[2])), small)
-    elif op == 'only':
-        t = ('func', ('symbol', 'only'), ('list', x[1], x[2]))
-        return _optimize(t, small)
-    elif op == 'onlypost':
-        return _optimize(('func', ('symbol', 'only'), x[1]), small)
-    elif op == 'dagrangepre':
-        return _optimize(('func', ('symbol', 'ancestors'), x[1]), small)
-    elif op == 'dagrangepost':
-        return _optimize(('func', ('symbol', 'descendants'), x[1]), small)
-    elif op == 'rangeall':
-        return _optimize(('range', ('string', '0'), ('string', 'tip')), small)
-    elif op == 'rangepre':
-        return _optimize(('range', ('string', '0'), x[1]), small)
-    elif op == 'rangepost':
-        return _optimize(('range', x[1], ('string', 'tip')), small)
-    elif op == 'negate':
-        s = getstring(x[1], _("can't negate that"))
-        return _optimize(('string', '-' + s), small)
-    elif op in 'string symbol negate':
+    if op in ('string', 'symbol'):
         return smallbonus, x # single revisions are small
     elif op == 'and':
         wa, ta = _optimize(x[1], True)
         wb, tb = _optimize(x[2], True)
+        order = x[3]
         w = min(wa, wb)
 
         # (::x and not ::y)/(not ::y and ::x) have a fast path
         tm = _matchonly(ta, tb) or _matchonly(tb, ta)
         if tm:
-            return w, ('func', ('symbol', 'only'), tm)
+            return w, ('func', ('symbol', 'only'), tm, order)
 
         if tb is not None and tb[0] == 'not':
-            return wa, ('difference', ta, tb[1])
+            return wa, ('difference', ta, tb[1], order)
 
         if wa > wb:
-            return w, (op, tb, ta)
-        return w, (op, ta, tb)
+            return w, (op, tb, ta, order)
+        return w, (op, ta, tb, order)
     elif op == 'or':
         # fast path for machine-generated expression, that is likely to have
         # lots of trivial revisions: 'a + b + c()' to '_list(a b) + c()'
+        order = x[2]
         ws, ts, ss = [], [], []
         def flushss():
             if not ss:
@@ -2374,12 +2545,12 @@ def _optimize(x, small):
                 w, t = ss[0]
             else:
                 s = '\0'.join(t[1] for w, t in ss)
-                y = ('func', ('symbol', '_list'), ('string', s))
+                y = ('func', ('symbol', '_list'), ('string', s), order)
                 w, t = _optimize(y, False)
             ws.append(w)
             ts.append(t)
             del ss[:]
-        for y in x[1:]:
+        for y in getlist(x[1]):
             w, t = _optimize(y, False)
             if t is not None and (t[0] == 'string' or t[0] == 'symbol'):
                 ss.append((w, t))
@@ -2393,33 +2564,27 @@ def _optimize(x, small):
         # we can't reorder trees by weight because it would change the order.
         # ("sort(a + b)" == "sort(b + a)", but "a + b" != "b + a")
         # ts = tuple(t for w, t in sorted(zip(ws, ts), key=lambda wt: wt[0]))
-        return max(ws), (op,) + tuple(ts)
+        return max(ws), (op, ('list',) + tuple(ts), order)
     elif op == 'not':
         # Optimize not public() to _notpublic() because we have a fast version
-        if x[1] == ('func', ('symbol', 'public'), None):
-            newsym = ('func', ('symbol', '_notpublic'), None)
+        if x[1][:3] == ('func', ('symbol', 'public'), None):
+            order = x[1][3]
+            newsym = ('func', ('symbol', '_notpublic'), None, order)
             o = _optimize(newsym, not small)
             return o[0], o[1]
         else:
             o = _optimize(x[1], not small)
-            return o[0], ('not', o[1])
-    elif op == 'parentpost':
+            order = x[2]
+            return o[0], (op, o[1], order)
+    elif op in ('rangepre', 'parentpost'):
         o = _optimize(x[1], small)
-        return o[0], (op, o[1])
-    elif op == 'group':
-        return _optimize(x[1], small)
-    elif op in 'dagrange range parent ancestorspec':
-        if op == 'parent':
-            # x^:y means (x^) : y, not x ^ (:y)
-            post = ('parentpost', x[1])
-            if x[2][0] == 'dagrangepre':
-                return _optimize(('dagrange', post, x[2][1]), small)
-            elif x[2][0] == 'rangepre':
-                return _optimize(('range', post, x[2][1]), small)
-
+        order = x[2]
+        return o[0], (op, o[1], order)
+    elif op in ('dagrange', 'range', 'parent', 'ancestor'):
         wa, ta = _optimize(x[1], small)
         wb, tb = _optimize(x[2], small)
-        return wa + wb, (op, ta, tb)
+        order = x[3]
+        return wa + wb, (op, ta, tb, order)
     elif op == 'list':
         ws, ts = zip(*(_optimize(y, small) for y in x[1:]))
         return sum(ws), (op,) + ts
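One subtlety in the 'not public()' rewrite above: analyzed nodes now end with an order element, so an exact-tuple match has to compare a slice, hence `x[1][:3]`. A quick check of the idiom:

    defineorder = 'define'
    public = ('func', ('symbol', 'public'), None, defineorder)

    print(public == ('func', ('symbol', 'public'), None))      # False
    print(public[:3] == ('func', ('symbol', 'public'), None))  # True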
@@ -2429,32 +2594,36 @@ def _optimize(x, small):
     elif op == 'func':
         f = getsymbol(x[1])
         wa, ta = _optimize(x[2], small)
-        if f in ("author branch closed date desc file grep keyword "
-                 "outgoing user destination"):
+        if f in ('author', 'branch', 'closed', 'date', 'desc', 'file', 'grep',
+                 'keyword', 'outgoing', 'user', 'destination'):
             w = 10 # slow
-        elif f in "modifies adds removes":
+        elif f in ('modifies', 'adds', 'removes'):
             w = 30 # slower
         elif f == "contains":
             w = 100 # very slow
         elif f == "ancestor":
             w = 1 * smallbonus
-        elif f in "reverse limit first _intlist":
+        elif f in ('reverse', 'limit', 'first', '_intlist'):
             w = 0
-        elif f in "sort":
+        elif f == "sort":
             w = 10 # assume most sorts look at changelog
         else:
             w = 1
-        return w + wa, (op, x[1], ta)
-    return 1, x
+        order = x[3]
+        return w + wa, (op, x[1], ta, order)
+    raise ValueError('invalid operator %r' % op)
 
 def optimize(tree):
+    """Optimize evaluatable tree
+
+    All pseudo operations should be transformed beforehand.
+    """
     _weight, newtree = _optimize(tree, small=True)
     return newtree
 
 # the set of valid characters for the initial letter of symbols in
 # alias declarations and definitions
-_aliassyminitletters = set(c for c in [chr(i) for i in xrange(256)]
-                           if c.isalnum() or c in '._@$' or ord(c) > 127)
+_aliassyminitletters = _syminitletters | set(pycompat.sysstr('$'))
 
 def _parsewith(spec, lookup=None, syminitletters=None):
     """Generate a parse tree of given spec with given tokenizing options
@@ -2475,7 +2644,7 @@ def _parsewith(spec, lookup=None, symini
                                  syminitletters=syminitletters))
     if pos != len(spec):
         raise error.ParseError(_('invalid token'), pos)
-    return parser.simplifyinfixops(tree, ('list', 'or'))
+    return _fixops(parser.simplifyinfixops(tree, ('list', 'or')))
 
 class _aliasrules(parser.basealiasrules):
     """Parsing and expansion rule set of revset aliases"""
@@ -2496,15 +2665,14 @@ class _aliasrules(parser.basealiasrules)
         if tree[0] == 'func' and tree[1][0] == 'symbol':
             return tree[1][1], getlist(tree[2])
 
-def expandaliases(ui, tree, showwarning=None):
+def expandaliases(ui, tree):
     aliases = _aliasrules.buildmap(ui.configitems('revsetalias'))
     tree = _aliasrules.expand(aliases, tree)
-    if showwarning:
-        # warn about problematic (but not referred) aliases
-        for name, alias in sorted(aliases.iteritems()):
-            if alias.error and not alias.warned:
-                showwarning(_('warning: %s\n') % (alias.error))
-                alias.warned = True
+    # warn about problematic (but not referred) aliases
+    for name, alias in sorted(aliases.iteritems()):
+        if alias.error and not alias.warned:
+            ui.warn(_('warning: %s\n') % (alias.error))
+            alias.warned = True
     return tree
 
 def foldconcat(tree):
@@ -2535,13 +2703,21 @@ def posttreebuilthook(tree, repo):
     # hook for extensions to execute code on the optimized tree
     pass
 
-def match(ui, spec, repo=None):
-    """Create a matcher for a single revision spec"""
-    return matchany(ui, [spec], repo=repo)
-
-def matchany(ui, specs, repo=None):
+def match(ui, spec, repo=None, order=defineorder):
+    """Create a matcher for a single revision spec
+
+    If order=followorder, a matcher takes the ordering specified by the input
+    set.
+    """
+    return matchany(ui, [spec], repo=repo, order=order)
+
+def matchany(ui, specs, repo=None, order=defineorder):
     """Create a matcher that will include any revisions matching one of the
-    given specs"""
+    given specs
+
+    If order=followorder, a matcher takes the ordering specified by the input
+    set.
+    """
     if not specs:
         def mfunc(repo, subset=None):
             return baseset()
@@ -2554,15 +2730,18 @@ def matchany(ui, specs, repo=None):
     if len(specs) == 1:
         tree = parse(specs[0], lookup)
     else:
-        tree = ('or',) + tuple(parse(s, lookup) for s in specs)
-    return _makematcher(ui, tree, repo)
-
-def _makematcher(ui, tree, repo):
+        tree = ('or', ('list',) + tuple(parse(s, lookup) for s in specs))
+
     if ui:
-        tree = expandaliases(ui, tree, showwarning=ui.warn)
+        tree = expandaliases(ui, tree)
     tree = foldconcat(tree)
+    tree = analyze(tree, order)
     tree = optimize(tree)
     posttreebuilthook(tree, repo)
+    return makematcher(tree)
+
+def makematcher(tree):
+    """Create a matcher from an evaluatable tree"""
     def mfunc(repo, subset=None):
         if subset is None:
             subset = fullreposet(repo)
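Taken together, a revset spec now flows through parse -> expandaliases -> foldconcat -> analyze -> optimize -> makematcher. A skeletal mock of that staging (stub stages standing in for the real ones, purely to show the order of passes):

    def parse(spec):
        return ('symbol', spec)        # stub parser

    def analyze(tree, order):
        return tree + (order,)         # stub: annotate with ordering

    def optimize(tree):
        return tree                    # stub: weight-based rewrites

    def makematcher(tree):
        def mfunc(universe):
            # a real matcher evaluates the tree against a subset
            return [r for r in universe if r == tree[1]]
        return mfunc

    def match(spec, order='define'):
        tree = parse(spec)
        tree = analyze(tree, order)
        tree = optimize(tree)
        return makematcher(tree)

    mfunc = match('3')
    print(mfunc(['1', '2', '3']))  # ['3']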