merge default into stable for 4.0 code freeze
Kevin Bullock
r30215:438173c4 merge 4.0-rc stable

@@ -0,0 +1,13 b''
1 Our full contribution guidelines are in our wiki, please see:
2
3 https://www.mercurial-scm.org/wiki/ContributingChanges
4
5 If you just want a checklist to follow, you can go straight to
6
7 https://www.mercurial-scm.org/wiki/ContributingChanges#Submission_checklist
8
9 If you can't run the entire testsuite for some reason (it can be
10 difficult on Windows), please at least run `contrib/check-code.py` on
11 any files you've modified and run `python contrib/check-commit` on any
12 commits you've made (for example, `python contrib/check-commit
13 273ce12ad8f1` will report some style violations on a very old commit).
@@ -0,0 +1,773 b''
1 The Mercurial wire protocol is a request-response based protocol
2 with multiple wire representations.
3
4 Each request is modeled as a command name, a dictionary of arguments, and
5 optional raw input. Command arguments and their types are intrinsic
6 properties of commands. So is the response type of the command. This means
7 clients can't always send arbitrary arguments to servers and servers can't
8 return multiple response types.
9
10 The protocol is synchronous and does not support multiplexing (concurrent
11 commands).
12
13 Transport Protocols
14 ===================
15
16 HTTP Transport
17 --------------
18
19 Commands are issued as HTTP/1.0 or HTTP/1.1 requests. Commands are
20 sent to the base URL of the repository with the command name sent in
21 the ``cmd`` query string parameter. e.g.
22 ``https://example.com/repo?cmd=capabilities``. The HTTP method is ``GET``
23 or ``POST`` depending on the command and whether there is a request
24 body.
25
26 Command arguments can be sent multiple ways.
27
28 The simplest is as part of the URL query string using ``x-www-form-urlencoded``
29 encoding (see Python's ``urllib.urlencode()``). However, many servers impose
30 length limitations on the URL. So this mechanism is typically only used if
31 the server doesn't support other mechanisms.
32
33 If the server supports the ``httpheader`` capability, command arguments can
34 be sent in HTTP request headers named ``X-HgArg-<N>`` where ``<N>`` is an
35 integer starting at 1. An ``x-www-form-urlencoded`` representation of the
36 arguments is obtained. This full string is then split into chunks and sent
37 in numbered ``X-HgArg-<N>`` headers. The maximum length of each HTTP header
38 is defined by the server in the ``httpheader`` capability value, which defaults
39 to ``1024``. The server reassembles the encoded arguments string by
40 concatenating the ``X-HgArg-<N>`` headers then URL decodes them into a
41 dictionary.
42
43 The list of ``X-HgArg-<N>`` headers should be added to the ``Vary`` request
44 header to instruct caches to take these headers into consideration when caching
45 requests.
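
For illustration, a minimal client-side sketch of this header splitting
might look like the following (a hypothetical helper, not Mercurial's
implementation; Python 2, matching the codebase of this era)::

  import urllib

  def encodehttpargs(args, limit=1024):
      # x-www-form-urlencoded encoding of the full argument dict
      encoded = urllib.urlencode(args)
      headers = {}
      # split the encoded string into numbered X-HgArg-<N> chunks
      for n, offset in enumerate(xrange(0, len(encoded), limit)):
          headers['X-HgArg-%d' % (n + 1)] = encoded[offset:offset + limit]
      return headers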
46
47 If the server supports the ``httppostargs`` capability, the client
48 may send command arguments in the HTTP request body as part of an
49 HTTP POST request. The command arguments will be URL encoded just like
50 they would for sending them via HTTP headers. However, no splitting is
51 performed: the raw arguments are included in the HTTP request body.
52
53 The client sends an ``X-HgArgs-Post`` header with the string length of the
54 encoded arguments data. Additional data may be included in the HTTP
55 request body immediately following the argument data. The offset of the
56 non-argument data is defined by the ``X-HgArgs-Post`` header. The
57 ``X-HgArgs-Post`` header is not required if there is no argument data.
58
59 Additional command data can be sent as part of the HTTP request body. The
60 default ``Content-Type`` when sending data is ``application/mercurial-0.1``.
61 A ``Content-Length`` header is currently always sent.
62
63 Example HTTP requests::
64
65 GET /repo?cmd=capabilities
66 X-HgArg-1: foo=bar&baz=hello%20world
67
68 The ``Content-Type`` HTTP response header identifies the response as coming
69 from Mercurial and can also be used to signal an error has occurred.
70
71 The ``application/mercurial-0.1`` media type indicates a generic Mercurial
72 response. It matches the media type sent by the client.
73
74 The ``application/hg-error`` media type indicates a generic error occurred.
75 The content of the HTTP response body typically holds text describing the
76 error.
77
78 The ``application/hg-changegroup`` media type indicates a changegroup response
79 type.
80
81 Clients also accept the ``text/plain`` media type. All other media
82 types should cause the client to error.
83
84 Clients should issue a ``User-Agent`` request header that identifies the client.
85 The server should not use the ``User-Agent`` for feature detection.
86
87 A command returning a ``string`` response issues the
88 ``application/mercurial-0.1`` media type and the HTTP response body contains
89 the raw string value. A ``Content-Length`` header is typically issued.
90
91 A command returning a ``stream`` response issues the
92 ``application/mercurial-0.1`` media type and the HTTP response typically
93 uses *chunked transfer* (``Transfer-Encoding: chunked``).
94
95 SSH Transport
96 -------------
97
98 The SSH transport is a custom text-based protocol suitable for use over any
99 bi-directional stream transport. It is most commonly used with SSH.
100
101 An SSH transport server can be started with ``hg serve --stdio``. The stdin,
102 stderr, and stdout file descriptors of the started process are used to exchange
103 data. When Mercurial connects to a remote server over SSH, it actually starts
104 a ``hg serve --stdio`` process on the remote server.
105
106 Commands are issued by sending the command name followed by a trailing newline
107 ``\n`` to the server. e.g. ``capabilities\n``.
108
109 Command arguments are sent in the following format::
110
111 <argument> <length>\n<value>
112
113 That is, the argument string name followed by a space followed by the
114 integer length of the value (expressed as a string) followed by a newline
115 (``\n``) followed by the raw argument value.
116
117 Dictionary arguments are encoded differently::
118
119 <argument> <# elements>\n
120 <key1> <length1>\n<value1>
121 <key2> <length2>\n<value2>
122 ...
123
124 Non-argument data is sent immediately after the final argument value. It is
125 encoded in chunks::
126
127 <length>\n<data>
128
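As a hedged sketch, a client might serialize one command under this
framing as follows (``write`` stands for writing to the server process's
stdin; the helper names are illustrative)::

  def sendcommand(write, name, args, data=None):
      write('%s\n' % name)                    # command name + newline
      for key, value in sorted(args.items()):
          if isinstance(value, dict):
              # dictionary argument: name, element count, framed pairs
              write('%s %d\n' % (key, len(value)))
              for k, v in sorted(value.items()):
                  write('%s %d\n%s' % (k, len(v), v))
          else:
              write('%s %d\n%s' % (key, len(value), value))
      if data is not None:
          write('%d\n%s' % (len(data), data)) # a single data chunk
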
129 Each command declares a list of supported arguments and their types. If a
130 client sends an unknown argument to the server, the server should abort
131 immediately. The special argument ``*`` in a command's definition indicates
132 that all argument names are allowed.
133
134 The definition of supported arguments and types is initially made when a
135 new command is implemented. The client and server must initially independently
136 agree on the arguments and their types. This initial set of arguments can be
137 supplemented through the presence of *capabilities* advertised by the server.
138
139 Each command has a defined expected response type.
140
141 A ``string`` response type is a length framed value. The response consists of
142 the string encoded integer length of a value followed by a newline (``\n``)
143 followed by the value. Empty values are allowed (and are represented as
144 ``0\n``).
145
146 A ``stream`` response type consists of raw bytes of data. There is no framing.
147
148 A generic error response type is also supported. It consists of an error
149 message written to ``stderr`` followed by ``\n-\n``. In addition, ``\n`` is
150 written to ``stdout``.
151
152 If the server receives an unknown command, it will send an empty ``string``
153 response.
154
155 The server terminates if it receives an empty command (a ``\n`` character).
156
157 Capabilities
158 ============
159
160 Servers advertise supported wire protocol features. This allows clients to
161 probe for server features before blindly calling a command or passing a
162 specific argument.
163
164 The server's features are exposed via a *capabilities* string. This is a
165 space-delimited string of tokens/features. Some features are single words
166 like ``lookup`` or ``batch``. Others are complicated key-value pairs
167 advertising sub-features. e.g. ``httpheader=2048``. When complex, non-word
168 values are used, each feature name can define its own encoding of sub-values.
169 Comma-delimited and ``x-www-form-urlencoded`` values are common.
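
A client might parse a capabilities string along these lines (a sketch,
not the canonical parser)::

  def parsecapabilities(caps):
      result = {}
      for token in caps.split():
          if '=' in token:
              key, value = token.split('=', 1)
              result[key] = value   # complex capability with sub-value
          else:
              result[token] = True  # simple single-word capability
      return result

  # parsecapabilities('lookup batch httpheader=2048')
  # -> {'lookup': True, 'batch': True, 'httpheader': '2048'}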
170
171 The following sections document the capabilities defined by the canonical
172 Mercurial server implementation.
173
174 batch
175 -----
176
177 Whether the server supports the ``batch`` command.
178
179 This capability/command was introduced in Mercurial 1.9 (released July 2011).
180
181 branchmap
182 ---------
183
184 Whether the server supports the ``branchmap`` command.
185
186 This capability/command was introduced in Mercurial 1.3 (released July 2009).
187
188 bundle2-exp
189 -----------
190
191 Precursor to ``bundle2`` capability that was used before bundle2 was a
192 stable feature.
193
194 This capability was introduced in Mercurial 3.0 behind an experimental
195 flag. This capability should not be observed in the wild.
196
197 bundle2
198 -------
199
200 Indicates whether the server supports the ``bundle2`` data exchange format.
201
202 The value of the capability is a URL quoted, newline (``\n``) delimited
203 list of keys or key-value pairs.
204
205 A key is simply a URL encoded string.
206
207 A key-value pair is a URL encoded key separated from a URL encoded value by
208 an ``=``. If the value is a list, elements are delimited by a ``,`` after
209 URL encoding.
210
211 For example, say we have the values::
212
213 {'HG20': [], 'changegroup': ['01', '02'], 'digests': ['sha1', 'sha512']}
214
215 We would first construct a string::
216
217 HG20\nchangegroup=01,02\ndigests=sha1,sha512
218
219 We would then URL quote this string::
220
221 HG20%0Achangegroup%3D01%2C02%0Adigests%3Dsha1%2Csha512
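
A sketch of this encoding in Python 2 (mirroring, though not copied from,
the canonical implementation)::

  import urllib

  def encodebundle2caps(caps):
      chunks = []
      for key in sorted(caps):
          vals = [urllib.quote(v) for v in caps[key]]
          chunk = urllib.quote(key)
          if vals:
              chunk = '%s=%s' % (chunk, ','.join(vals))
          chunks.append(chunk)
      # the joined string is URL quoted again for the capability value
      return urllib.quote('\n'.join(chunks))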
222
223 This capability was introduced in Mercurial 3.4 (released May 2015).
224
225 changegroupsubset
226 -----------------
227
228 Whether the server supports the ``changegroupsubset`` command.
229
230 This capability was introduced in Mercurial 0.9.2 (released December
231 2006).
232
233 This capability was introduced at the same time as the ``lookup``
234 capability/command.
235
236 getbundle
237 ---------
238
239 Whether the server supports the ``getbundle`` command.
240
241 This capability was introduced in Mercurial 1.9 (released July 2011).
242
243 httpheader
244 ----------
245
246 Whether the server supports receiving command arguments via HTTP request
247 headers.
248
249 The value of the capability is an integer describing the max header
250 length that clients should send. Clients should ignore any content after a
251 comma in the value, as this is reserved for future use.
252
253 This capability was introduced in Mercurial 1.9 (released July 2011).
254
255 httppostargs
256 ------------
257
258 **Experimental**
259
260 Indicates that the server supports and prefers that clients send command
261 arguments via an HTTP POST request as part of the request body.
262
263 This capability was introduced in Mercurial 3.8 (released May 2016).
264
265 known
266 -----
267
268 Whether the server supports the ``known`` command.
269
270 This capability/command was introduced in Mercurial 1.9 (released July 2011).
271
272 lookup
273 ------
274
275 Whether the server supports the ``lookup`` command.
276
277 This capability was introduced in Mercurial 0.9.2 (released December
278 2006).
279
280 This capability was introduced at the same time as the ``changegroupsubset``
281 capability/command.
282
283 pushkey
284 -------
285
286 Whether the server supports the ``pushkey`` and ``listkeys`` commands.
287
288 This capability was introduced in Mercurial 1.6 (released July 2010).
289
290 standardbundle
291 --------------
292
293 **Unsupported**
294
295 This capability was introduced during the Mercurial 0.9.2 development cycle in
296 2006. It was never present in a release, as it was replaced by the ``unbundle``
297 capability. This capability should not be encountered in the wild.
298
299 stream-preferred
300 ----------------
301
302 If present, the server prefers that clients clone using the streaming clone
303 protocol (``hg clone --uncompressed``) rather than the standard
304 changegroup/bundle based protocol.
305
306 This capability was introduced in Mercurial 2.2 (released May 2012).
307
308 streamreqs
309 ----------
310
311 Indicates whether the server supports *streaming clones* and the *requirements*
312 that clients must support to receive it.
313
314 If present, the server supports the ``stream_out`` command, which transmits
315 raw revlogs from the repository instead of changegroups. This provides a faster
316 cloning mechanism at the expense of more bandwidth used.
317
318 The value of this capability is a comma-delimited list of repo format
319 *requirements*. These are requirements that impact the reading of data in
320 the ``.hg/store`` directory. An example value is
321 ``streamreqs=generaldelta,revlogv1`` indicating the server repo requires
322 the ``revlogv1`` and ``generaldelta`` requirements.
323
324 If the only format requirement is ``revlogv1``, the server may expose the
325 ``stream`` capability instead of the ``streamreqs`` capability.
326
327 This capability was introduced in Mercurial 1.7 (released November 2010).
328
329 stream
330 ------
331
332 Whether the server supports *streaming clones* from ``revlogv1`` repos.
333
334 If present, the server supports the ``stream_out`` command, which transmits
335 raw revlogs from the repository instead of changegroups. This provides a faster
336 cloning mechanism at the expense of more bandwidth used.
337
338 This capability was introduced in Mercurial 0.9.1 (released July 2006).
339
340 When initially introduced, the value of the capability was the numeric
341 revlog revision. e.g. ``stream=1``. This indicates the changegroup is using
342 ``revlogv1``. This simple integer value wasn't powerful enough, so the
343 ``streamreqs`` capability was invented to handle cases where the repo
344 requirements have more than just ``revlogv1``. Newer servers omit the
345 ``=1`` since it was the only value supported and the value of ``1`` can
346 be implied by clients.
347
348 unbundlehash
349 ------------
350
351 Whether the ``unbundle`` command supports receiving a hash of all the
352 heads instead of a list.
353
354 For more, see the documentation for the ``unbundle`` command.
355
356 This capability was introduced in Mercurial 1.9 (released July 2011).
357
358 unbundle
359 --------
360
361 Whether the server supports pushing via the ``unbundle`` command.
362
363 This capability/command has been present since Mercurial 0.9.1 (released
364 July 2006).
365
366 Mercurial 0.9.2 (released December 2006) added values to the capability
367 indicating which bundle types the server supports receiving. This value is a
368 comma-delimited list. e.g. ``HG10GZ,HG10BZ,HG10UN``. The order of values
369 reflects the priority/preference of that type, where the first value is the
370 most preferred type.
371
372 Handshake Protocol
373 ==================
374
375 While not explicitly required, it is common for clients to perform a
376 *handshake* when connecting to a server. The handshake accomplishes 2 things:
377
378 * Obtaining capabilities and other server features
379 * Flushing extra server output (e.g. SSH servers may print extra text
380 when connecting that may confuse the wire protocol)
381
382 This isn't a traditional *handshake* as far as network protocols go because
383 there is no persistent state as a result of the handshake: the handshake is
384 simply the issuing of commands and commands are stateless.
385
386 The canonical clients perform a capabilities lookup at connection establishment
387 time. This is because clients must assume a server only supports the features
388 of the original Mercurial server implementation until proven otherwise (from
389 advertised capabilities). Nearly every server running today supports features
390 that weren't present in the original Mercurial server implementation. Rather
391 than waiting until functionality that depends on capabilities is invoked,
392 clients issue the lookup at connection start to avoid any delay later.
393
394 For HTTP servers, the client sends a ``capabilities`` command request as
395 soon as the connection is established. The server responds with a capabilities
396 string, which the client parses.
397
398 For SSH servers, the client sends the ``hello`` command (no arguments)
399 and a ``between`` command with the ``pairs`` argument having the value
400 ``0000000000000000000000000000000000000000-0000000000000000000000000000000000000000``.
401
402 The ``between`` command has been supported since the original Mercurial
403 server. Requesting the empty range will return a ``\n`` string response,
404 which will be encoded as ``1\n\n`` (value length of ``1`` followed by a newline
405 followed by the value, which happens to be a newline).
406
407 The ``hello`` command was later introduced. Servers supporting it will issue
408 a response to that command before sending the ``1\n\n`` response to the
409 ``between`` command. Servers not supporting ``hello`` will send an empty
410 response (``0\n``).
411
412 In addition to the expected output from the ``hello`` and ``between`` commands,
413 servers may also send other output, such as *message of the day (MOTD)*
414 announcements. Clients assume servers will send this output before the
415 Mercurial server replies to the client-issued commands. So any server output
416 not conforming to the expected command responses is assumed to be not related
417 to Mercurial and can be ignored.
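
A simplified sketch of the SSH handshake (``stdin``/``stdout`` are pipes
to the remote ``hg serve --stdio`` process; a real client must also
tolerate banner output interleaved before the responses)::

  def sshhandshake(stdin, stdout):
      nullrange = '%s-%s' % ('0' * 40, '0' * 40)
      stdin.write('hello\n')
      stdin.write('between\n')
      stdin.write('pairs %d\n%s' % (len(nullrange), nullrange))
      stdin.flush()

      def readstring():
          length = int(stdout.readline())  # "<length>\n" framing
          return stdout.read(length)

      hello = readstring()  # '' from servers predating "hello"
      readstring()          # the "between" reply: '\n'
      for line in hello.splitlines():
          if line.startswith('capabilities: '):
              return line[len('capabilities: '):].split()
      return []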
418
419 Commands
420 ========
421
422 This section contains a list of all wire protocol commands implemented by
423 the canonical Mercurial server.
424
425 batch
426 -----
427
428 Issue multiple commands while sending a single command request. The purpose
429 of this command is to allow a client to issue multiple commands while avoiding
430 multiple round trips to the server, thereby enabling commands to complete
431 more quickly.
432
433 The command accepts a ``cmds`` argument that contains a list of commands to
434 execute.
435
436 The value of ``cmds`` is a ``;`` delimited list of strings. Each string has the
437 form ``<command> <arguments>``. That is, the command name followed by a space
438 followed by an argument string.
439
440 The argument string is a ``,`` delimited list of ``<key>=<value>`` values
441 corresponding to command arguments. Both the argument name and value are
442 escaped using a special substitution map::
443
444 : -> :c
445 , -> :o
446 ; -> :s
447 = -> :e
448
449 The response type for this command is ``string``. The value contains a
450 ``;`` delimited list of responses for each requested command. Each value
451 in this list is escaped using the same substitution map used for arguments.
452
453 If an error occurs, the generic error response may be sent.
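
An illustrative sketch of building a ``cmds`` value with this escaping
(hypothetical helpers, not the canonical code)::

  def escapebatcharg(s):
      # ':' must be escaped first, since the other escapes introduce it
      for ch, esc in ((':', ':c'), (',', ':o'), (';', ':s'), ('=', ':e')):
          s = s.replace(ch, esc)
      return s

  def encodebatchcmds(cmds):
      # cmds is a list of (name, argsdict) pairs
      parts = []
      for name, args in cmds:
          argstr = ','.join('%s=%s' % (escapebatcharg(k), escapebatcharg(v))
                            for k, v in sorted(args.items()))
          parts.append('%s %s' % (name, argstr))
      return ';'.join(parts)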
454
455 between
456 -------
457
458 (Legacy command used for discovery in old clients)
459
460 Obtain nodes between pairs of nodes.
461
462 The ``pairs`` argument contains a space-delimited list of ``-`` delimited
463 hex node pairs. e.g.::
464
465 a072279d3f7fd3a4aa7ffa1a5af8efc573e1c896-6dc58916e7c070f678682bfe404d2e2d68291a18
466
467 Return type is a ``string``. Value consists of lines corresponding to each
468 requested range. Each line contains a space-delimited list of hex nodes.
469 A newline ``\n`` terminates each line, including the last one.
470
471 branchmap
472 ---------
473
474 Obtain heads in named branches.
475
476 Accepts no arguments. Return type is a ``string``.
477
478 Return value contains lines with URL encoded branch names followed by a space
479 followed by a space-delimited list of hex nodes of heads on that branch.
480 e.g.::
481
482 default a072279d3f7fd3a4aa7ffa1a5af8efc573e1c896 6dc58916e7c070f678682bfe404d2e2d68291a18
483 stable baae3bf31522f41dd5e6d7377d0edd8d1cf3fccc
484
485 There is no trailing newline.
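
Parsing this response might look like the following sketch (branch names
are URL encoded; the rest of each line is a list of hex nodes)::

  import urllib

  def parsebranchmap(data):
      branches = {}
      for line in data.split('\n'):
          name, heads = line.split(' ', 1)
          branches[urllib.unquote(name)] = heads.split(' ')
      return branches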
486
487 branches
488 --------
489
490 Obtain ancestor changesets of specific nodes back to a branch point.
491
492 Despite the name, this command has nothing to do with Mercurial named branches.
493 Instead, it is related to DAG branches.
494
495 The command accepts a ``nodes`` argument, which is a string of space-delimited
496 hex nodes.
497
498 For each node requested, the server will find the first ancestor node that is
499 a DAG root or is a merge.
500
501 Return type is a ``string``. Return value contains lines with result data for
502 each requested node. Each line contains space-delimited nodes followed by a
503 newline (``\n``). The 4 nodes reported on each line correspond to the requested
504 node, the ancestor node found, and its 2 parent nodes (which may be the null
505 node).
506
507 capabilities
508 ------------
509
510 Obtain the capabilities string for the repo.
511
512 Unlike the ``hello`` command, the capabilities string is not prefixed.
513 There is no trailing newline.
514
515 This command does not accept any arguments. Return type is a ``string``.
516
517 changegroup
518 -----------
519
520 (Legacy command: use ``getbundle`` instead)
521
522 Obtain a changegroup version 1 with data for changesets that are
523 descendants of client-specified changesets.
524
525 The ``roots`` argument contains a list of space-delimited hex nodes.
526
527 The server responds with a changegroup version 1 containing all
528 changesets between the requested root/base nodes and the repo's head nodes
529 at the time of the request.
530
531 The return type is a ``stream``.
532
533 changegroupsubset
534 -----------------
535
536 (Legacy command: use ``getbundle`` instead)
537
538 Obtain a changegroup version 1 with data for changesets between
539 client-specified base and head nodes.
540
541 The ``bases`` argument contains a list of space-delimited hex nodes.
542 The ``heads`` argument contains a list of space-delimited hex nodes.
543
544 The server responds with a changegroup version 1 containing all
545 changesets between the requested base and head nodes at the time of the
546 request.
547
548 The return type is a ``stream``.
549
550 clonebundles
551 ------------
552
553 Obtain a manifest of bundle URLs available to seed clones.
554
555 Each returned line contains a URL followed by metadata. See the
556 documentation in the ``clonebundles`` extension for more.
557
558 The return type is a ``string``.
559
560 getbundle
561 ---------
562
563 Obtain a bundle containing repository data.
564
565 This command accepts the following arguments:
566
567 heads
568 List of space-delimited hex nodes of heads to retrieve.
569 common
570 List of space-delimited hex nodes that the client has in common with the
571 server.
572 obsmarkers
573 Boolean indicating whether to include obsolescence markers as part
574 of the response. Only works with bundle2.
575 bundlecaps
576 Comma-delimited set of strings defining client bundle capabilities.
577 listkeys
578 Comma-delimited list of strings of ``pushkey`` namespaces. For each
579 namespace listed, a bundle2 part will be included with the content of
580 that namespace.
581 cg
582 Boolean indicating whether changegroup data is requested.
583 cbattempted
584 Boolean indicating whether the client attempted to use the *clone bundles*
585 feature before performing this request.
586
587 The return type on success is a ``stream`` where the value is a bundle.
588 On the HTTP transport, the response is zlib compressed.
589
590 If an error occurs, a generic error response can be sent.
591
592 Unless the client sends a false value for the ``cg`` argument, the returned
593 bundle contains a changegroup with the nodes between the specified ``common``
594 and ``heads`` nodes. Depending on the command arguments, the type and content
595 of the returned bundle can vary significantly.
596
597 The default behavior is for the server to send a raw changegroup version
598 ``01`` response.
599
600 If the ``bundlecaps`` provided by the client contain a value beginning
601 with ``HG2``, a bundle2 will be returned. The bundle2 data may contain
602 additional repository data, such as ``pushkey`` namespace values.
603
604 heads
605 -----
606
607 Returns a list of space-delimited hex nodes of repository heads followed
608 by a newline. e.g.
609 ``a9eeb3adc7ddb5006c088e9eda61791c777cbf7c 31f91a3da534dc849f0d6bfc00a395a97cf218a1\n``
610
611 This command does not accept any arguments. The return type is a ``string``.
612
613 hello
614 -----
615
616 Returns lines describing interesting things about the server in an
617 RFC 822-like format.
618
619 Currently, the only defined line describes the server capabilities. It has the form::
620
621 capabilities: <value>
622
623 See above for more about the capabilities string.
624
625 SSH clients typically issue this command as soon as a connection is
626 established.
627
628 This command does not accept any arguments. The return type is a ``string``.
629
630 listkeys
631 --------
632
633 List values in a specified ``pushkey`` namespace.
634
635 The ``namespace`` argument defines the pushkey namespace to operate on.
636
637 The return type is a ``string``. The value is an encoded dictionary of keys.
638
639 Key-value pairs are delimited by newlines (``\n``). Within each line, keys and
640 values are separated by a tab (``\t``). Keys and values are both strings.
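
Decoding the response is straightforward; a sketch::

  def parselistkeys(data):
      d = {}
      for line in data.splitlines():
          key, value = line.split('\t', 1)
          d[key] = value
      return d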
641
642 lookup
643 ------
644
645 Try to resolve a value to a known repository revision.
646
647 The ``key`` argument is converted from bytes to an
648 ``encoding.localstr`` instance then passed into
649 ``localrepository.__getitem__`` in an attempt to resolve it.
650
651 The return type is a ``string``.
652
653 Upon successful resolution, returns ``1 <hex node>\n``. On failure,
654 returns ``0 <error string>\n``. e.g.::
655
656 1 273ce12ad8f155317b2c078ec75a4eba507f1fba\n
657
658 0 unknown revision 'foo'\n
659
660 known
661 -----
662
663 Determine whether multiple nodes are known.
664
665 The ``nodes`` argument is a list of space-delimited hex nodes to check
666 for existence.
667
668 The return type is ``string``.
669
670 Returns a string consisting of ``0``s and ``1``s indicating whether nodes
671 are known. If the Nth node specified in the ``nodes`` argument is known,
672 a ``1`` will be returned at byte offset N. If the node isn't known, ``0``
673 will be present at byte offset N.
674
675 There is no trailing newline.
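
A client can pair the response back up with the nodes it submitted, as in
this sketch::

  def parseknown(nodes, response):
      # nodes is the list sent in the ``nodes`` argument;
      # response is the string of '0'/'1' bytes
      return dict(zip(nodes, (flag == '1' for flag in response)))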
676
677 pushkey
678 -------
679
680 Set a value using the ``pushkey`` protocol.
681
682 Accepts arguments ``namespace``, ``key``, ``old``, and ``new``, which
683 correspond to the pushkey namespace to operate on, the key within that
684 namespace to change, the old value (which may be empty), and the new value.
685 All arguments are string types.
686
687 The return type is a ``string``. The value depends on the transport protocol.
688
689 The SSH transport sends a string encoded integer followed by a newline
690 (``\n``) which indicates operation result. The server may send additional
691 output on the ``stderr`` stream that should be displayed to the user.
692
693 The HTTP transport sends a string encoded integer followed by a newline
694 followed by additional server output that should be displayed to the user.
695 This may include output from hooks, etc.
696
697 The integer result varies by namespace. ``0`` means an error has occurred
698 and there should be additional output to display to the user.
699
700 stream_out
701 ----------
702
703 Obtain *streaming clone* data.
704
705 The return type is either a ``string`` or a ``stream``, depending on
706 whether the request was fulfilled properly.
707
708 A return value of ``1\n`` indicates the server is not configured to serve
709 this data. A client that sees this response may not have verified that the
710 ``stream`` capability is set before making the request.
711
712 A return value of ``2\n`` indicates the server was unable to lock the
713 repository to generate data.
714
715 All other responses are a ``stream`` of bytes. The first line of this data
716 contains 2 space-delimited integers corresponding to the path count and
717 payload size, respectively::
718
719 <path count> <payload size>\n
720
721 The ``<payload size>`` is the total size of path data: it does not include
722 the size of the per-path header lines.
723
724 Following that header are ``<path count>`` entries. Each entry consists of a
725 line with metadata followed by raw revlog data. The line consists of::
726
727 <store path>\0<size>\n
728
729 The ``<store path>`` is the encoded store path of the data that follows.
730 ``<size>`` is the amount of data for this store path/revlog that follows the
731 newline.
732
733 There is no trailer to indicate end of data. Instead, the client should stop
734 reading after ``<path count>`` entries are consumed.
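
A sketch of a client consuming this data (``fh`` is a file-like object
over the raw response; error handling is elided)::

  def consumestreamout(fh):
      header = fh.readline()
      if header in ('1\n', '2\n'):
          raise Exception('streaming clone refused: %s' % header.strip())
      pathcount, payloadsize = [int(v) for v in header.split(' ', 1)]
      for _ in xrange(pathcount):
          meta = fh.readline()  # "<store path>\0<size>\n"
          path, size = meta[:-1].split('\0', 1)
          yield path, fh.read(int(size))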
735
736 unbundle
737 --------
738
739 Send a bundle containing data (usually changegroup data) to the server.
740
741 Accepts the argument ``heads``, which is a space-delimited list of hex nodes
742 corresponding to server repository heads observed by the client. This is used
743 to detect race conditions and abort push operations before a server performs
744 too much work or a client transfers too much data.
745
746 The request payload consists of a bundle to be applied to the repository,
747 much as if :hg:`unbundle` were called.
748
749 In most scenarios, a special ``push response`` type is returned. This type
750 contains an integer describing the change in heads as a result of the
751 operation. A value of ``0`` indicates nothing changed. ``1`` means the number
752 of heads remained the same. Values ``2`` and larger indicate the number of
753 added heads minus 1. e.g. ``3`` means 2 heads were added. Negative values
754 indicate the number of fewer heads, also off by 1. e.g. ``-2`` means there
755 is 1 fewer head.
756
757 The encoding of the ``push response`` type varies by transport.
758
759 For the SSH transport, this type is composed of 2 ``string`` responses: an
760 empty response (``0\n``) followed by the integer result value. e.g.
761 ``1\n2``. So the full response might be ``0\n1\n2``.
762
763 For the HTTP transport, the response is a ``string`` type composed of an
764 integer result value followed by a newline (``\n``) followed by string
765 content holding server output that should be displayed on the client
766 (output from hooks, etc).
767
768 In some cases, the server may respond with a ``bundle2`` bundle. In this
769 case, the response type is ``stream``. For the HTTP transport, the response
770 is zlib compressed.
771
772 The server may also respond with a generic error type, which contains a string
773 indicating the failure.
@@ -0,0 +1,26 b''
1 #ifndef _HG_MPATCH_H_
2 #define _HG_MPATCH_H_
3
4 #define MPATCH_ERR_NO_MEM -3
5 #define MPATCH_ERR_CANNOT_BE_DECODED -2
6 #define MPATCH_ERR_INVALID_PATCH -1
7
8 struct mpatch_frag {
9 int start, end, len;
10 const char *data;
11 };
12
13 struct mpatch_flist {
14 struct mpatch_frag *base, *head, *tail;
15 };
16
17 int mpatch_decode(const char *bin, ssize_t len, struct mpatch_flist** res);
18 ssize_t mpatch_calcsize(ssize_t len, struct mpatch_flist *l);
19 void mpatch_lfree(struct mpatch_flist *a);
20 int mpatch_apply(char *buf, const char *orig, ssize_t len,
21 struct mpatch_flist *l);
22 struct mpatch_flist *mpatch_fold(void *bins,
23 struct mpatch_flist* (*get_next_item)(void*, ssize_t),
24 ssize_t start, ssize_t end);
25
26 #endif
@@ -0,0 +1,164 b''
1 # profiling.py - profiling functions
2 #
3 # Copyright 2016 Gregory Szorc <gregory.szorc@gmail.com>
4 #
5 # This software may be used and distributed according to the terms of the
6 # GNU General Public License version 2 or any later version.
7
8 from __future__ import absolute_import, print_function
9
10 import contextlib
11 import os
12 import sys
13 import time
14
15 from .i18n import _
16 from . import (
17 error,
18 util,
19 )
20
21 @contextlib.contextmanager
22 def lsprofile(ui, fp):
23 format = ui.config('profiling', 'format', default='text')
24 field = ui.config('profiling', 'sort', default='inlinetime')
25 limit = ui.configint('profiling', 'limit', default=30)
26 climit = ui.configint('profiling', 'nested', default=0)
27
28 if format not in ['text', 'kcachegrind']:
29 ui.warn(_("unrecognized profiling format '%s'"
30 " - Ignored\n") % format)
31 format = 'text'
32
33 try:
34 from . import lsprof
35 except ImportError:
36 raise error.Abort(_(
37 'lsprof not available - install from '
38 'http://codespeak.net/svn/user/arigo/hack/misc/lsprof/'))
39 p = lsprof.Profiler()
40 p.enable(subcalls=True)
41 try:
42 yield
43 finally:
44 p.disable()
45
46 if format == 'kcachegrind':
47 from . import lsprofcalltree
48 calltree = lsprofcalltree.KCacheGrind(p)
49 calltree.output(fp)
50 else:
51 # format == 'text'
52 stats = lsprof.Stats(p.getstats())
53 stats.sort(field)
54 stats.pprint(limit=limit, file=fp, climit=climit)
55
56 @contextlib.contextmanager
57 def flameprofile(ui, fp):
58 try:
59 from flamegraph import flamegraph
60 except ImportError:
61 raise error.Abort(_(
62 'flamegraph not available - install from '
63 'https://github.com/evanhempel/python-flamegraph'))
64 # developer config: profiling.freq
65 freq = ui.configint('profiling', 'freq', default=1000)
66 filter_ = None
67 collapse_recursion = True
68 thread = flamegraph.ProfileThread(fp, 1.0 / freq,
69 filter_, collapse_recursion)
70 start_time = time.clock()
71 try:
72 thread.start()
73 yield
74 finally:
75 thread.stop()
76 thread.join()
77 print('Collected %d stack frames (%d unique) in %2.2f seconds.' % (
78 thread.num_frames(), thread.num_frames(unique=True),
79 time.clock() - start_time))
80
81 @contextlib.contextmanager
82 def statprofile(ui, fp):
83 try:
84 import statprof
85 except ImportError:
86 raise error.Abort(_(
87 'statprof not available - install using "easy_install statprof"'))
88
89 freq = ui.configint('profiling', 'freq', default=1000)
90 if freq > 0:
91 # Cannot reset when profiler is already active. So silently no-op.
92 if statprof.state.profile_level == 0:
93 statprof.reset(freq)
94 else:
95 ui.warn(_("invalid sampling frequency '%s' - ignoring\n") % freq)
96
97 statprof.start()
98 try:
99 yield
100 finally:
101 statprof.stop()
102 statprof.display(fp)
103
104 @contextlib.contextmanager
105 def profile(ui):
106 """Start profiling.
107
108 Profiling is active when the context manager is active. When the context
109 manager exits, profiling results will be written to the configured output.
110 """
111 profiler = os.getenv('HGPROF')
112 if profiler is None:
113 profiler = ui.config('profiling', 'type', default='ls')
114 if profiler not in ('ls', 'stat', 'flame'):
115 ui.warn(_("unrecognized profiler '%s' - ignored\n") % profiler)
116 profiler = 'ls'
117
118 output = ui.config('profiling', 'output')
119
120 if output == 'blackbox':
121 fp = util.stringio()
122 elif output:
123 path = ui.expandpath(output)
124 fp = open(path, 'wb')
125 else:
126 fp = sys.stderr
127
128 try:
129 if profiler == 'ls':
130 proffn = lsprofile
131 elif profiler == 'flame':
132 proffn = flameprofile
133 else:
134 proffn = statprofile
135
136 with proffn(ui, fp):
137 yield
138
139 finally:
140 if output:
141 if output == 'blackbox':
142 val = 'Profile:\n%s' % fp.getvalue()
143 # ui.log treats the input as a format string,
144 # so we need to escape any % signs.
145 val = val.replace('%', '%%')
146 ui.log('profile', val)
147 fp.close()
148
149 @contextlib.contextmanager
150 def maybeprofile(ui):
151 """Profile if enabled, else do nothing.
152
153 This context manager can be used to optionally profile if profiling
154 is enabled. Otherwise, it does nothing.
155
156 The purpose of this context manager is to make calling code simpler:
157 just use a single code path for calling into code you may want to profile
158 and this function determines whether to start profiling.
159 """
160 if ui.configbool('profiling', 'enabled'):
161 with profile(ui):
162 yield
163 else:
164 yield
@@ -164,10 +164,12 b' osx:'
164 164 --install-lib=/Library/Python/2.7/site-packages/
165 165 make -C doc all install DESTDIR="$(PWD)/build/mercurial/"
166 166 mkdir -p $${OUTPUTDIR:-dist}
167 pkgbuild --root build/mercurial/ --identifier org.mercurial-scm.mercurial \
168 build/mercurial.pkg
169 167 HGVER=$$((cat build/mercurial/Library/Python/2.7/site-packages/mercurial/__version__.py; echo 'print(version)') | python) && \
170 168 OSXVER=$$(sw_vers -productVersion | cut -d. -f1,2) && \
169 pkgbuild --root build/mercurial/ \
170 --identifier org.mercurial-scm.mercurial \
171 --version "$${HGVER}" \
172 build/mercurial.pkg && \
171 173 productbuild --distribution contrib/macosx/distribution.xml \
172 174 --package-path build/ \
173 175 --version "$${HGVER}" \
@@ -89,9 +89,11 b' shopt -s extglob'
89 89
90 90 _hg_status()
91 91 {
92 local files="$(_hg_cmd status -n$1 "glob:$cur**")"
93 local IFS=$'\n'
94 COMPREPLY=(${COMPREPLY[@]:-} $(compgen -W '$files' -- "$cur"))
92 if [ -z "$HGCOMPLETE_NOSTATUS" ]; then
93 local files="$(_hg_cmd status -n$1 "glob:$cur**")"
94 local IFS=$'\n'
95 COMPREPLY=(${COMPREPLY[@]:-} $(compgen -W '$files' -- "$cur"))
96 fi
95 97 }
96 98
97 99 _hg_branches()
@@ -237,7 +237,7 b' pypats = ['
237 237 "tuple parameter unpacking not available in Python 3+"),
238 238 (r'(?<!def)\s+(cmp)\(', "cmp is not available in Python 3+"),
239 239 (r'\breduce\s*\(.*', "reduce is not available in Python 3+"),
240 (r'dict\(.*=', 'dict() is different in Py2 and 3 and is slower than {}',
240 (r'\bdict\(.*=', 'dict() is different in Py2 and 3 and is slower than {}',
241 241 'dict-from-generator'),
242 242 (r'\.has_key\b', "dict.has_key is not available in Python 3+"),
243 243 (r'\s<>\s', '<> operator is not available in Python 3+, use !='),
@@ -295,7 +295,7 b' pypats = ['
295 295 "comparison with singleton, use 'is' or 'is not' instead"),
296 296 (r'^\s*(while|if) [01]:',
297 297 "use True/False for constant Boolean expression"),
298 (r'(?:(?<!def)\s+|\()hasattr',
298 (r'(?:(?<!def)\s+|\()hasattr\(',
299 299 'hasattr(foo, bar) is broken, use util.safehasattr(foo, bar) instead'),
300 300 (r'opener\([^)]*\).read\(',
301 301 "use opener.read() instead"),
@@ -35,13 +35,18 b' errors = ['
35 35 "summary line doesn't start with 'topic: '"),
36 36 (afterheader + r"[A-Z][a-z]\S+", "don't capitalize summary lines"),
37 37 (afterheader + r"[^\n]*: *[A-Z][a-z]\S+", "don't capitalize summary lines"),
38 (afterheader + r"\S*[^A-Za-z0-9-]\S*: ",
38 (afterheader + r"\S*[^A-Za-z0-9-_]\S*: ",
39 39 "summary keyword should be most user-relevant one-word command or topic"),
40 40 (afterheader + r".*\.\s*\n", "don't add trailing period on summary line"),
41 41 (afterheader + r".{79,}", "summary line too long (limit is 78)"),
42 42 (r"\n\+\n( |\+)\n", "adds double empty line"),
43 43 (r"\n \n\+\n", "adds double empty line"),
44 (r"\n\+[ \t]+def [a-z]+_[a-z]", "adds a function with foo_bar naming"),
44 # Forbid "_" in function name.
45 #
46 # We skip the check for cffi related functions. They use names mapping the
47 # name of the C function. C function names may contain "_".
48 (r"\n\+[ \t]+def (?!cffi)[a-z]+_[a-z]",
49 "adds a function with foo_bar naming"),
45 50 ]
46 51
47 52 word = re.compile('\S')
@@ -10,7 +10,6 b''
10 10 from __future__ import absolute_import, print_function
11 11
12 12 import ast
13 import imp
14 13 import os
15 14 import sys
16 15 import traceback
@@ -41,6 +40,7 b' def check_compat_py2(f):'
41 40
42 41 def check_compat_py3(f):
43 42 """Check Python 3 compatibility of a file with Python 3."""
43 import importlib # not available on Python 2.6
44 44 with open(f, 'rb') as fh:
45 45 content = fh.read()
46 46
@@ -55,34 +55,33 b' def check_compat_py3(f):'
55 55 # out module paths for things not in a package can be confusing.
56 56 if f.startswith(('hgext/', 'mercurial/')) and not f.endswith('__init__.py'):
57 57 assert f.endswith('.py')
58 name = f.replace('/', '.')[:-3]
59 with open(f, 'r') as fh:
60 try:
61 imp.load_module(name, fh, '', ('py', 'r', imp.PY_SOURCE))
62 except Exception as e:
63 exc_type, exc_value, tb = sys.exc_info()
64 # We walk the stack and ignore frames from our custom importer,
65 # import mechanisms, and stdlib modules. This kinda/sorta
66 # emulates CPython behavior in import.c while also attempting
67 # to pin blame on a Mercurial file.
68 for frame in reversed(traceback.extract_tb(tb)):
69 if frame.name == '_call_with_frames_removed':
70 continue
71 if 'importlib' in frame.filename:
72 continue
73 if 'mercurial/__init__.py' in frame.filename:
74 continue
75 if frame.filename.startswith(sys.prefix):
76 continue
77 break
58 name = f.replace('/', '.')[:-3].replace('.pure.', '.')
59 try:
60 importlib.import_module(name)
61 except Exception as e:
62 exc_type, exc_value, tb = sys.exc_info()
63 # We walk the stack and ignore frames from our custom importer,
64 # import mechanisms, and stdlib modules. This kinda/sorta
65 # emulates CPython behavior in import.c while also attempting
66 # to pin blame on a Mercurial file.
67 for frame in reversed(traceback.extract_tb(tb)):
68 if frame.name == '_call_with_frames_removed':
69 continue
70 if 'importlib' in frame.filename:
71 continue
72 if 'mercurial/__init__.py' in frame.filename:
73 continue
74 if frame.filename.startswith(sys.prefix):
75 continue
76 break
78 77
79 if frame.filename:
80 filename = os.path.basename(frame.filename)
81 print('%s: error importing: <%s> %s (error at %s:%d)' % (
82 f, type(e).__name__, e, filename, frame.lineno))
83 else:
84 print('%s: error importing module: <%s> %s (line %d)' % (
85 f, type(e).__name__, e, frame.lineno))
78 if frame.filename:
79 filename = os.path.basename(frame.filename)
80 print('%s: error importing: <%s> %s (error at %s:%d)' % (
81 f, type(e).__name__, e, filename, frame.lineno))
82 else:
83 print('%s: error importing module: <%s> %s (line %d)' % (
84 f, type(e).__name__, e, frame.lineno))
86 85
87 86 if __name__ == '__main__':
88 87 if sys.version_info[0] == 2:
@@ -126,15 +126,10 b' static void readchannel(hgclient_t *hgc)'
126 126 return; /* assumes input request */
127 127
128 128 size_t cursize = 0;
129 int emptycount = 0;
130 129 while (cursize < hgc->ctx.datasize) {
131 130 rsize = recv(hgc->sockfd, hgc->ctx.data + cursize,
132 131 hgc->ctx.datasize - cursize, 0);
133 /* rsize == 0 normally indicates EOF, while it's also a valid
134 * packet size for unix socket. treat it as EOF and abort if
135 * we get many empty responses in a row. */
136 emptycount = (rsize == 0 ? emptycount + 1 : 0);
137 if (rsize < 0 || emptycount > 20)
132 if (rsize < 1)
138 133 abortmsg("failed to read data block");
139 134 cursize += rsize;
140 135 }
@@ -25,6 +25,7 b' import random'
25 25 import sys
26 26 import time
27 27 from mercurial import (
28 changegroup,
28 29 cmdutil,
29 30 commands,
30 31 copies,
@@ -130,15 +131,53 b' def gettimer(ui, opts=None):'
130 131
131 132 # enforce an idle period before execution to counteract power management
132 133 # experimental config: perf.presleep
133 time.sleep(ui.configint("perf", "presleep", 1))
134 time.sleep(getint(ui, "perf", "presleep", 1))
134 135
135 136 if opts is None:
136 137 opts = {}
137 138 # redirect all to stderr
138 139 ui = ui.copy()
139 ui.fout = ui.ferr
140 uifout = safeattrsetter(ui, 'fout', ignoremissing=True)
141 if uifout:
142 # for "historical portability":
143 # ui.fout/ferr have been available since 1.9 (or 4e1ccd4c2b6d)
144 uifout.set(ui.ferr)
145
140 146 # get a formatter
141 fm = ui.formatter('perf', opts)
147 uiformatter = getattr(ui, 'formatter', None)
148 if uiformatter:
149 fm = uiformatter('perf', opts)
150 else:
151 # for "historical portability":
152 # define formatter locally, because ui.formatter has been
153 # available since 2.2 (or ae5f92e154d3)
154 from mercurial import node
155 class defaultformatter(object):
156 """Minimized composition of baseformatter and plainformatter
157 """
158 def __init__(self, ui, topic, opts):
159 self._ui = ui
160 if ui.debugflag:
161 self.hexfunc = node.hex
162 else:
163 self.hexfunc = node.short
164 def __nonzero__(self):
165 return False
166 def startitem(self):
167 pass
168 def data(self, **data):
169 pass
170 def write(self, fields, deftext, *fielddata, **opts):
171 self._ui.write(deftext % fielddata, **opts)
172 def condwrite(self, cond, fields, deftext, *fielddata, **opts):
173 if cond:
174 self._ui.write(deftext % fielddata, **opts)
175 def plain(self, text, **opts):
176 self._ui.write(text, **opts)
177 def end(self):
178 pass
179 fm = defaultformatter(ui, 'perf', opts)
180
142 181 # stub function, runs code only once instead of in a loop
143 182 # experimental config: perf.stub
144 183 if ui.configbool("perf", "stub"):
@@ -181,6 +220,121 b' def _timer(fm, func, title=None):'
181 220 fm.write('count', ' (best of %d)', count)
182 221 fm.plain('\n')
183 222
223 # utilities for historical portability
224
225 def getint(ui, section, name, default):
226 # for "historical portability":
227 # ui.configint has been available since 1.9 (or fa2b596db182)
228 v = ui.config(section, name, None)
229 if v is None:
230 return default
231 try:
232 return int(v)
233 except ValueError:
234 raise error.ConfigError(("%s.%s is not an integer ('%s')")
235 % (section, name, v))
236
237 def safeattrsetter(obj, name, ignoremissing=False):
238 """Ensure that 'obj' has 'name' attribute before subsequent setattr
239
240 This function is aborted, if 'obj' doesn't have 'name' attribute
241 at runtime. This avoids overlooking removal of an attribute, which
242 breaks assumption of performance measurement, in the future.
243
244 This function returns the object to (1) assign a new value, and
245 (2) restore an original value to the attribute.
246
247 If 'ignoremissing' is true, missing 'name' attribute doesn't cause
248 abortion, and this function returns None. This is useful to
249 examine an attribute, which isn't ensured in all Mercurial
250 versions.
251 """
252 if not util.safehasattr(obj, name):
253 if ignoremissing:
254 return None
255 raise error.Abort(("missing attribute %s of %s might break assumption"
256 " of performance measurement") % (name, obj))
257
258 origvalue = getattr(obj, name)
259 class attrutil(object):
260 def set(self, newvalue):
261 setattr(obj, name, newvalue)
262 def restore(self):
263 setattr(obj, name, origvalue)
264
265 return attrutil()
266
267 # utilities to examine each internal API changes
268
269 def getbranchmapsubsettable():
270 # for "historical portability":
271 # subsettable is defined in:
272 # - branchmap since 2.9 (or 175c6fd8cacc)
273 # - repoview since 2.5 (or 59a9f18d4587)
274 for mod in (branchmap, repoview):
275 subsettable = getattr(mod, 'subsettable', None)
276 if subsettable:
277 return subsettable
278
279 # bisecting in bcee63733aad::59a9f18d4587 can reach here (both
280 # branchmap and repoview modules exist, but subsettable attribute
281 # doesn't)
282 raise error.Abort(("perfbranchmap not available with this Mercurial"),
283 hint="use 2.5 or later")
284
285 def getsvfs(repo):
286 """Return appropriate object to access files under .hg/store
287 """
288 # for "historical portability":
289 # repo.svfs has been available since 2.3 (or 7034365089bf)
290 svfs = getattr(repo, 'svfs', None)
291 if svfs:
292 return svfs
293 else:
294 return getattr(repo, 'sopener')
295
296 def getvfs(repo):
297 """Return appropriate object to access files under .hg
298 """
299 # for "historical portability":
300 # repo.vfs has been available since 2.3 (or 7034365089bf)
301 vfs = getattr(repo, 'vfs', None)
302 if vfs:
303 return vfs
304 else:
305 return getattr(repo, 'opener')
306
307 def repocleartagscachefunc(repo):
308 """Return the function to clear tags cache according to repo internal API
309 """
310 if util.safehasattr(repo, '_tagscache'): # since 2.0 (or 9dca7653b525)
311 # in this case, setattr(repo, '_tagscache', None) or so isn't
312 # correct way to clear tags cache, because existing code paths
313 # expect _tagscache to be a structured object.
314 def clearcache():
315 # _tagscache has been filteredpropertycache since 2.5 (or
316 # 98c867ac1330), and delattr() can't work in such case
317 if '_tagscache' in vars(repo):
318 del repo.__dict__['_tagscache']
319 return clearcache
320
321 repotags = safeattrsetter(repo, '_tags', ignoremissing=True)
322 if repotags: # since 1.4 (or 5614a628d173)
323 return lambda : repotags.set(None)
324
325 repotagscache = safeattrsetter(repo, 'tagscache', ignoremissing=True)
326 if repotagscache: # since 0.6 (or d7df759d0e97)
327 return lambda : repotagscache.set(None)
328
329 # Mercurial earlier than 0.6 (or d7df759d0e97) logically reaches
330 # this point, but it isn't so problematic, because:
331 # - repo.tags of such Mercurial isn't "callable", and repo.tags()
332 # in perftags() causes failure soon
333 # - perf.py itself has been available since 1.1 (or eb240755386d)
334 raise error.Abort(("tags API of this hg command is unknown"))
335
336 # perf commands
337
184 338 @command('perfwalk', formatteropts)
185 339 def perfwalk(ui, repo, *pats, **opts):
186 340 timer, fm = gettimer(ui, opts)
@@ -249,10 +403,12 b' def perftags(ui, repo, **opts):'
249 403 import mercurial.changelog
250 404 import mercurial.manifest
251 405 timer, fm = gettimer(ui, opts)
406 svfs = getsvfs(repo)
407 repocleartagscache = repocleartagscachefunc(repo)
252 408 def t():
253 repo.changelog = mercurial.changelog.changelog(repo.svfs)
254 repo.manifest = mercurial.manifest.manifest(repo.svfs)
255 repo._tags = None
409 repo.changelog = mercurial.changelog.changelog(svfs)
410 repo.manifest = mercurial.manifest.manifest(svfs)
411 repocleartagscache()
256 412 return len(repo.tags())
257 413 timer(t)
258 414 fm.end()
@@ -279,6 +435,37 b' def perfancestorset(ui, repo, revset, **'
279 435 timer(d)
280 436 fm.end()
281 437
438 @command('perfchangegroupchangelog', formatteropts +
439 [('', 'version', '02', 'changegroup version'),
440 ('r', 'rev', '', 'revisions to add to changegroup')])
441 def perfchangegroupchangelog(ui, repo, version='02', rev=None, **opts):
442 """Benchmark producing a changelog group for a changegroup.
443
444 This measures the time spent processing the changelog during a
445 bundle operation. This occurs during `hg bundle` and on a server
446 processing a `getbundle` wire protocol request (handles clones
447 and pull requests).
448
449 By default, all revisions are added to the changegroup.
450 """
451 cl = repo.changelog
452 revs = [cl.lookup(r) for r in repo.revs(rev or 'all()')]
453 bundler = changegroup.getbundler(version, repo)
454
455 def lookup(node):
456 # The real bundler reads the revision in order to access the
457 # manifest node and files list. Do that here.
458 cl.read(node)
459 return node
460
461 def d():
462 for chunk in bundler.group(revs, cl, lookup):
463 pass
464
465 timer, fm = gettimer(ui, opts)
466 timer(d)
467 fm.end()
468
282 469 @command('perfdirs', formatteropts)
283 470 def perfdirs(ui, repo, **opts):
284 471 timer, fm = gettimer(ui, opts)
@@ -399,8 +586,9 b' def perfindex(ui, repo, **opts):'
399 586 timer, fm = gettimer(ui, opts)
400 587 mercurial.revlog._prereadsize = 2**24 # disable lazy parser in old hg
401 588 n = repo["tip"].node()
589 svfs = getsvfs(repo)
402 590 def d():
403 cl = mercurial.revlog.revlog(repo.svfs, "00changelog.i")
591 cl = mercurial.revlog.revlog(svfs, "00changelog.i")
404 592 cl.rev(n)
405 593 timer(d)
406 594 fm.end()
@@ -423,7 +611,7 b' def perfparents(ui, repo, **opts):'
423 611 timer, fm = gettimer(ui, opts)
424 612 # control the number of commits perfparents iterates over
425 613 # experimental config: perf.parentscount
426 count = ui.configint("perf", "parentscount", 1000)
614 count = getint(ui, "perf", "parentscount", 1000)
427 615 if len(repo.changelog) < count:
428 616 raise error.Abort("repo needs %d commits for this test" % count)
429 617 repo = repo.unfiltered()
@@ -472,7 +660,7 b' def perfnodelookup(ui, repo, rev, **opts'
472 660 import mercurial.revlog
473 661 mercurial.revlog._prereadsize = 2**24 # disable lazy parser in old hg
474 662 n = repo[rev].node()
475 cl = mercurial.revlog.revlog(repo.svfs, "00changelog.i")
663 cl = mercurial.revlog.revlog(getsvfs(repo), "00changelog.i")
476 664 def d():
477 665 cl.rev(n)
478 666 clearcaches(cl)
@@ -543,8 +731,8 b' def perffncachewrite(ui, repo, **opts):'
543 731 s.fncache._dirty = True
544 732 s.fncache.write(tr)
545 733 timer(d)
734 tr.close()
546 735 lock.release()
547 tr.close()
548 736 fm.end()
549 737
550 738 @command('perffncacheencode', formatteropts)
@@ -580,9 +768,10 b' def perfdiffwd(ui, repo, **opts):'
580 768
581 769 @command('perfrevlog', revlogopts + formatteropts +
582 770 [('d', 'dist', 100, 'distance between the revisions'),
583 ('s', 'startrev', 0, 'revision to start reading at')],
771 ('s', 'startrev', 0, 'revision to start reading at'),
772 ('', 'reverse', False, 'read in reverse')],
584 773 '-c|-m|FILE')
585 def perfrevlog(ui, repo, file_=None, startrev=0, **opts):
774 def perfrevlog(ui, repo, file_=None, startrev=0, reverse=False, **opts):
586 775 """Benchmark reading a series of revisions from a revlog.
587 776
588 777 By default, we read every ``-d/--dist`` revision from 0 to tip of
@@ -591,11 +780,20 b' def perfrevlog(ui, repo, file_=None, sta'
591 780 The start revision can be defined via ``-s/--startrev``.
592 781 """
593 782 timer, fm = gettimer(ui, opts)
594 dist = opts['dist']
595 783 _len = getlen(ui)
784
596 785 def d():
597 786 r = cmdutil.openrevlog(repo, 'perfrevlog', file_, opts)
598 for x in xrange(startrev, _len(r), dist):
787
788 startrev = 0
789 endrev = _len(r)
790 dist = opts['dist']
791
792 if reverse:
793 startrev, endrev = endrev, startrev
794 dist = -1 * dist
795
796 for x in xrange(startrev, endrev, dist):
599 797 r.revision(r.node(x))
600 798
601 799 timer(d)
@@ -772,10 +970,11 b' def perfbranchmap(ui, repo, full=False, '
772 970 return d
773 971 # add filter in smaller subset to bigger subset
774 972 possiblefilters = set(repoview.filtertable)
973 subsettable = getbranchmapsubsettable()
775 974 allfilters = []
776 975 while possiblefilters:
777 976 for name in possiblefilters:
778 subset = branchmap.subsettable.get(name)
977 subset = subsettable.get(name)
779 978 if subset not in possiblefilters:
780 979 break
781 980 else:
@@ -789,16 +988,17 b' def perfbranchmap(ui, repo, full=False, '
789 988 repo.filtered(name).branchmap()
790 989 # add unfiltered
791 990 allfilters.append(None)
792 oldread = branchmap.read
793 oldwrite = branchmap.branchcache.write
991
992 branchcacheread = safeattrsetter(branchmap, 'read')
993 branchcachewrite = safeattrsetter(branchmap.branchcache, 'write')
994 branchcacheread.set(lambda repo: None)
995 branchcachewrite.set(lambda bc, repo: None)
794 996 try:
795 branchmap.read = lambda repo: None
796 branchmap.write = lambda repo: None
797 997 for name in allfilters:
798 998 timer(getbranchmap(name), title=str(name))
799 999 finally:
800 branchmap.read = oldread
801 branchmap.branchcache.write = oldwrite
1000 branchcacheread.restore()
1001 branchcachewrite.restore()
802 1002 fm.end()
803 1003
804 1004 @command('perfloadmarkers')
@@ -807,7 +1007,8 b' def perfloadmarkers(ui, repo):'
807 1007
808 1008 Result is the number of markers in the repo."""
809 1009 timer, fm = gettimer(ui)
810 timer(lambda: len(obsolete.obsstore(repo.svfs)))
1010 svfs = getsvfs(repo)
1011 timer(lambda: len(obsolete.obsstore(svfs)))
811 1012 fm.end()
812 1013
813 1014 @command('perflrucachedict', formatteropts +
@@ -62,11 +62,11 b' from mercurial import ('
62 62 util,
63 63 )
64 64
65 # Note for extension authors: ONLY specify testedwith = 'internal' for
65 # Note for extension authors: ONLY specify testedwith = 'ships-with-hg-core' for
66 66 # extensions which SHIP WITH MERCURIAL. Non-mainline extensions should
67 67 # be specifying the version(s) of Mercurial they are tested with, or
68 68 # leave the attribute unspecified.
69 testedwith = 'internal'
69 testedwith = 'ships-with-hg-core'
70 70
71 71 cmdtable = {}
72 72 command = cmdutil.command(cmdtable)
@@ -42,6 +42,7 b''
42 42 <File Id="internals.changegroups.txt" Name="changegroups.txt" />
43 43 <File Id="internals.requirements.txt" Name="requirements.txt" />
44 44 <File Id="internals.revlogs.txt" Name="revlogs.txt" />
45 <File Id="internals.wireprotocol.txt" Name="wireprotocol.txt" />
45 46 </Component>
46 47 </Directory>
47 48
@@ -371,7 +371,7 b' typeset -A _hg_cmd_globals'
371 371
372 372 # Common options
373 373 _hg_global_opts=(
374 '(--repository -R)'{-R+,--repository}'[repository root directory]:repository:_files -/'
374 '(--repository -R)'{-R+,--repository=}'[repository root directory]:repository:_files -/'
375 375 '--cwd[change working directory]:new working directory:_files -/'
376 376 '(--noninteractive -y)'{-y,--noninteractive}'[do not prompt, assume yes for any required answers]'
377 377 '(--verbose -v)'{-v,--verbose}'[enable additional output]'
@@ -390,8 +390,8 b' typeset -A _hg_cmd_globals'
390 390 )
391 391
392 392 _hg_pat_opts=(
393 '*'{-I+,--include}'[include names matching the given patterns]:dir:_files -W $(_hg_cmd root) -/'
394 '*'{-X+,--exclude}'[exclude names matching the given patterns]:dir:_files -W $(_hg_cmd root) -/')
393 '*'{-I+,--include=}'[include names matching the given patterns]:dir:_files -W $(_hg_cmd root) -/'
394 '*'{-X+,--exclude=}'[exclude names matching the given patterns]:dir:_files -W $(_hg_cmd root) -/')
395 395
396 396 _hg_clone_opts=(
397 397 $_hg_remote_opts
@@ -402,8 +402,8 b' typeset -A _hg_cmd_globals'
402 402 _hg_date_user_opts=(
403 403 '(--currentdate -D)'{-D,--currentdate}'[record the current date as commit date]'
404 404 '(--currentuser -U)'{-U,--currentuser}'[record the current user as committer]'
405 '(--date -d)'{-d+,--date}'[record the specified date as commit date]:date:'
406 '(--user -u)'{-u+,--user}'[record the specified user as committer]:user:')
405 '(--date -d)'{-d+,--date=}'[record the specified date as commit date]:date:'
406 '(--user -u)'{-u+,--user=}'[record the specified user as committer]:user:')
407 407
408 408 _hg_gitlike_opts=(
409 409 '(--git -g)'{-g,--git}'[use git extended diff format]')
@@ -414,7 +414,7 b' typeset -A _hg_cmd_globals'
414 414 '--nodates[omit dates from diff headers]')
415 415
416 416 _hg_mergetool_opts=(
417 '(--tool -t)'{-t+,--tool}'[specify merge tool]:tool:')
417 '(--tool -t)'{-t+,--tool=}'[specify merge tool]:tool:')
418 418
419 419 _hg_dryrun_opts=(
420 420 '(--dry-run -n)'{-n,--dry-run}'[do not perform actions, just print output]')
@@ -430,7 +430,7 b' typeset -A _hg_cmd_globals'
430 430
431 431 _hg_log_opts=(
432 432 $_hg_global_opts $_hg_style_opts $_hg_gitlike_opts
433 '(--limit -l)'{-l+,--limit}'[limit number of changes displayed]:'
433 '(--limit -l)'{-l+,--limit=}'[limit number of changes displayed]:'
434 434 '(--no-merges -M)'{-M,--no-merges}'[do not show merges]'
435 435 '(--patch -p)'{-p,--patch}'[show patch]'
436 436 '--stat[output diffstat-style summary of changes]'
@@ -438,16 +438,16 b' typeset -A _hg_cmd_globals'
438 438
439 439 _hg_commit_opts=(
440 440 '(-m --message -l --logfile --edit -e)'{-e,--edit}'[edit commit message]'
441 '(-e --edit -l --logfile --message -m)'{-m+,--message}'[use <text> as commit message]:message:'
442 '(-e --edit -m --message --logfile -l)'{-l+,--logfile}'[read the commit message from <file>]:log file:_files')
441 '(-e --edit -l --logfile --message -m)'{-m+,--message=}'[use <text> as commit message]:message:'
442 '(-e --edit -m --message --logfile -l)'{-l+,--logfile=}'[read the commit message from <file>]:log file:_files')
443 443
444 444 _hg_remote_opts=(
445 '(--ssh -e)'{-e+,--ssh}'[specify ssh command to use]:'
445 '(--ssh -e)'{-e+,--ssh=}'[specify ssh command to use]:'
446 446 '--remotecmd[specify hg command to run on the remote side]:')
447 447
448 448 _hg_branch_bmark_opts=(
449 '(--bookmark -B)'{-B+,--bookmark}'[specify bookmark(s)]:bookmark:_hg_bookmarks'
450 '(--branch -b)'{-b+,--branch}'[specify branch(es)]:branch:_hg_branches'
449 '(--bookmark -B)'{-B+,--bookmark=}'[specify bookmark(s)]:bookmark:_hg_bookmarks'
450 '(--branch -b)'{-b+,--branch=}'[specify branch(es)]:branch:_hg_branches'
451 451 )
452 452
453 453 _hg_subrepos_opts=(
@@ -464,13 +464,13 b' typeset -A _hg_cmd_globals'
464 464
465 465 _hg_cmd_addremove() {
466 466 _arguments -s -w : $_hg_global_opts $_hg_pat_opts $_hg_dryrun_opts \
467 '(--similarity -s)'{-s+,--similarity}'[guess renamed files by similarity (0<=s<=100)]:' \
467 '(--similarity -s)'{-s+,--similarity=}'[guess renamed files by similarity (0<=s<=100)]:' \
468 468 '*:unknown or missing files:_hg_addremove'
469 469 }
470 470
471 471 _hg_cmd_annotate() {
472 472 _arguments -s -w : $_hg_global_opts $_hg_pat_opts \
473 '(--rev -r)'{-r+,--rev}'[annotate the specified revision]:revision:_hg_labels' \
473 '(--rev -r)'{-r+,--rev=}'[annotate the specified revision]:revision:_hg_labels' \
474 474 '(--follow -f)'{-f,--follow}'[follow file copies and renames]' \
475 475 '(--text -a)'{-a,--text}'[treat all files as text]' \
476 476 '(--user -u)'{-u,--user}'[list the author]' \
@@ -483,21 +483,21 b' typeset -A _hg_cmd_globals'
483 483 _hg_cmd_archive() {
484 484 _arguments -s -w : $_hg_global_opts $_hg_pat_opts $_hg_subrepos_opts \
485 485 '--no-decode[do not pass files through decoders]' \
486 '(--prefix -p)'{-p+,--prefix}'[directory prefix for files in archive]:' \
487 '(--rev -r)'{-r+,--rev}'[revision to distribute]:revision:_hg_labels' \
488 '(--type -t)'{-t+,--type}'[type of distribution to create]:archive type:(files tar tbz2 tgz uzip zip)' \
486 '(--prefix -p)'{-p+,--prefix=}'[directory prefix for files in archive]:' \
487 '(--rev -r)'{-r+,--rev=}'[revision to distribute]:revision:_hg_labels' \
488 '(--type -t)'{-t+,--type=}'[type of distribution to create]:archive type:(files tar tbz2 tgz uzip zip)' \
489 489 '*:destination:_files'
490 490 }
491 491
492 492 _hg_cmd_backout() {
493 493 _arguments -s -w : $_hg_global_opts $_hg_mergetool_opts $_hg_pat_opts \
494 494 '--merge[merge with old dirstate parent after backout]' \
495 '(--date -d)'{-d+,--date}'[record datecode as commit date]:date code:' \
495 '(--date -d)'{-d+,--date=}'[record datecode as commit date]:date code:' \
496 496 '--parent[parent to choose when backing out merge]' \
497 '(--user -u)'{-u+,--user}'[record user as commiter]:user:' \
498 '(--rev -r)'{-r+,--rev}'[revision]:revision:_hg_labels' \
499 '(--message -m)'{-m+,--message}'[use <text> as commit message]:text:' \
500 '(--logfile -l)'{-l+,--logfile}'[read commit message from <file>]:log file:_files'
497 '(--user -u)'{-u+,--user=}'[record user as committer]:user:' \
498 '(--rev -r)'{-r+,--rev=}'[revision]:revision:_hg_labels' \
499 '(--message -m)'{-m+,--message=}'[use <text> as commit message]:text:' \
500 '(--logfile -l)'{-l+,--logfile=}'[read commit message from <file>]:log file:_files'
501 501 }
502 502
503 503 _hg_cmd_bisect() {
@@ -507,7 +507,7 b' typeset -A _hg_cmd_globals'
507 507 '(--good -g --bad -b --skip -s --reset -r)'{-g,--good}'[mark changeset good]'::revision:_hg_labels \
508 508 '(--good -g --bad -b --skip -s --reset -r)'{-b,--bad}'[mark changeset bad]'::revision:_hg_labels \
509 509 '(--good -g --bad -b --skip -s --reset -r)'{-s,--skip}'[skip testing changeset]' \
510 '(--command -c --noupdate -U)'{-c+,--command}'[use command to check changeset state]':commands:_command_names \
510 '(--command -c --noupdate -U)'{-c+,--command=}'[use command to check changeset state]':commands:_command_names \
511 511 '(--command -c --noupdate -U)'{-U,--noupdate}'[do not update to target]'
512 512 }
513 513
@@ -515,9 +515,9 b' typeset -A _hg_cmd_globals'
515 515 _arguments -s -w : $_hg_global_opts \
516 516 '(--force -f)'{-f,--force}'[force]' \
517 517 '(--inactive -i)'{-i,--inactive}'[mark a bookmark inactive]' \
518 '(--rev -r --delete -d --rename -m)'{-r+,--rev}'[revision]:revision:_hg_labels' \
518 '(--rev -r --delete -d --rename -m)'{-r+,--rev=}'[revision]:revision:_hg_labels' \
519 519 '(--rev -r --delete -d --rename -m)'{-d,--delete}'[delete a given bookmark]' \
520 '(--rev -r --delete -d --rename -m)'{-m+,--rename}'[rename a given bookmark]:bookmark:_hg_bookmarks' \
520 '(--rev -r --delete -d --rename -m)'{-m+,--rename=}'[rename a given bookmark]:bookmark:_hg_bookmarks' \
521 521 ':bookmark:_hg_bookmarks'
522 522 }
523 523
@@ -537,8 +537,8 b' typeset -A _hg_cmd_globals'
537 537 _arguments -s -w : $_hg_global_opts $_hg_remote_opts \
538 538 '(--force -f)'{-f,--force}'[run even when remote repository is unrelated]' \
539 539 '(2)*--base[a base changeset to specify instead of a destination]:revision:_hg_labels' \
540 '(--branch -b)'{-b+,--branch}'[a specific branch to bundle]' \
541 '(--rev -r)'{-r+,--rev}'[changeset(s) to bundle]:' \
540 '(--branch -b)'{-b+,--branch=}'[a specific branch to bundle]:' \
541 '(--rev -r)'{-r+,--rev=}'[changeset(s) to bundle]:' \
542 542 '--all[bundle all changesets in the repository]' \
543 543 ':output file:_files' \
544 544 ':destination repository:_files -/'
@@ -546,17 +546,17 b' typeset -A _hg_cmd_globals'
546 546
547 547 _hg_cmd_cat() {
548 548 _arguments -s -w : $_hg_global_opts $_hg_pat_opts \
549 '(--output -o)'{-o+,--output}'[print output to file with formatted name]:filespec:' \
550 '(--rev -r)'{-r+,--rev}'[revision]:revision:_hg_labels' \
549 '(--output -o)'{-o+,--output=}'[print output to file with formatted name]:filespec:' \
550 '(--rev -r)'{-r+,--rev=}'[revision]:revision:_hg_labels' \
551 551 '--decode[apply any matching decode filter]' \
552 552 '*:file:_hg_files'
553 553 }
554 554
555 555 _hg_cmd_clone() {
556 556 _arguments -s -w : $_hg_global_opts $_hg_clone_opts \
557 '(--rev -r)'{-r+,--rev}'[a changeset you would like to have after cloning]:' \
558 '(--updaterev -u)'{-u+,--updaterev}'[revision, tag or branch to check out]' \
559 '(--branch -b)'{-b+,--branch}'[clone only the specified branch]' \
557 '(--rev -r)'{-r+,--rev=}'[a changeset you would like to have after cloning]:' \
558 '(--updaterev -u)'{-u+,--updaterev=}'[revision, tag or branch to check out]:' \
559 '(--branch -b)'{-b+,--branch=}'[clone only the specified branch]:' \
560 560 ':source repository:_hg_remote' \
561 561 ':destination:_hg_clone_dest'
562 562 }
@@ -564,10 +564,10 b' typeset -A _hg_cmd_globals'
564 564 _hg_cmd_commit() {
565 565 _arguments -s -w : $_hg_global_opts $_hg_pat_opts $_hg_subrepos_opts \
566 566 '(--addremove -A)'{-A,--addremove}'[mark new/missing files as added/removed before committing]' \
567 '(--message -m)'{-m+,--message}'[use <text> as commit message]:text:' \
568 '(--logfile -l)'{-l+,--logfile}'[read commit message from <file>]:log file:_files' \
569 '(--date -d)'{-d+,--date}'[record datecode as commit date]:date code:' \
570 '(--user -u)'{-u+,--user}'[record user as committer]:user:' \
567 '(--message -m)'{-m+,--message=}'[use <text> as commit message]:text:' \
568 '(--logfile -l)'{-l+,--logfile=}'[read commit message from <file>]:log file:_files' \
569 '(--date -d)'{-d+,--date=}'[record datecode as commit date]:date code:' \
570 '(--user -u)'{-u+,--user=}'[record user as committer]:user:' \
571 571 '--amend[amend the parent of the working dir]' \
572 572 '--close-branch[mark a branch as closed]' \
573 573 '*:file:_hg_files'
@@ -584,12 +584,12 b' typeset -A _hg_cmd_globals'
584 584 typeset -A opt_args
585 585 _arguments -s -w : $_hg_global_opts $_hg_diff_opts $_hg_ignore_space_opts \
586 586 $_hg_pat_opts $_hg_subrepos_opts \
587 '*'{-r,--rev}'+[revision]:revision:_hg_revrange' \
587 '*'{-r+,--rev=}'[revision]:revision:_hg_revrange' \
588 588 '(--show-function -p)'{-p,--show-function}'[show which function each change is in]' \
589 '(--change -c)'{-c,--change}'[change made by revision]' \
589 '(--change -c)'{-c+,--change=}'[change made by revision]:' \
590 590 '(--text -a)'{-a,--text}'[treat all files as text]' \
591 591 '--reverse[produce a diff that undoes the changes]' \
592 '(--unified -U)'{-U,--unified}'[number of lines of context to show]' \
592 '(--unified -U)'{-U+,--unified=}'[number of lines of context to show]:' \
593 593 '--stat[output diffstat-style summary of changes]' \
594 594 '*:file:->diff_files'
595 595
@@ -606,9 +606,9 b' typeset -A _hg_cmd_globals'
606 606
607 607 _hg_cmd_export() {
608 608 _arguments -s -w : $_hg_global_opts $_hg_diff_opts \
609 '(--output -o)'{-o+,--output}'[print output to file with formatted name]:filespec:' \
609 '(--output -o)'{-o+,--output=}'[print output to file with formatted name]:filespec:' \
610 610 '--switch-parent[diff against the second parent]' \
611 '(--rev -r)'{-r+,--rev}'[revision]:revision:_hg_labels' \
611 '(--rev -r)'{-r+,--rev=}'[revision]:revision:_hg_labels' \
612 612 '*:revision:_hg_labels'
613 613 }
614 614
@@ -634,7 +634,7 b' typeset -A _hg_cmd_globals'
634 634 '(--ignore-case -i)'{-i,--ignore-case}'[ignore case when matching]' \
635 635 '(--files-with-matches -l)'{-l,--files-with-matches}'[print only filenames and revs that match]' \
636 636 '(--line-number -n)'{-n,--line-number}'[print matching line numbers]' \
637 '*'{-r+,--rev}'[search in given revision range]:revision:_hg_revrange' \
637 '*'{-r+,--rev=}'[search in given revision range]:revision:_hg_revrange' \
638 638 '(--user -u)'{-u,--user}'[print user who committed change]' \
639 639 '(--date -d)'{-d,--date}'[print date of a changeset]' \
640 640 '1:search pattern:' \
@@ -645,7 +645,7 b' typeset -A _hg_cmd_globals'
645 645 _arguments -s -w : $_hg_global_opts $_hg_style_opts \
646 646 '(--topo -t)'{-t,--topo}'[show topological heads only]' \
647 647 '(--closed -c)'{-c,--closed}'[show normal and closed branch heads]' \
648 '(--rev -r)'{-r+,--rev}'[show only heads which are descendants of rev]:revision:_hg_labels'
648 '(--rev -r)'{-r+,--rev=}'[show only heads which are descendants of rev]:revision:_hg_labels'
649 649 }
650 650
651 651 _hg_cmd_help() {
@@ -658,25 +658,25 b' typeset -A _hg_cmd_globals'
658 658
659 659 _hg_cmd_identify() {
660 660 _arguments -s -w : $_hg_global_opts $_hg_remote_opts \
661 '(--rev -r)'{-r+,--rev}'[identify the specified rev]:revision:_hg_labels' \
662 '(--num -n)'{-n+,--num}'[show local revision number]' \
663 '(--id -i)'{-i+,--id}'[show global revision id]' \
664 '(--branch -b)'{-b+,--branch}'[show branch]' \
665 '(--bookmark -B)'{-B+,--bookmark}'[show bookmarks]' \
666 '(--tags -t)'{-t+,--tags}'[show tags]'
661 '(--rev -r)'{-r+,--rev=}'[identify the specified rev]:revision:_hg_labels' \
662 '(--num -n)'{-n,--num}'[show local revision number]' \
663 '(--id -i)'{-i,--id}'[show global revision id]' \
664 '(--branch -b)'{-b,--branch}'[show branch]' \
665 '(--bookmark -B)'{-B,--bookmark}'[show bookmarks]' \
666 '(--tags -t)'{-t,--tags}'[show tags]'
667 667 }
668 668
669 669 _hg_cmd_import() {
670 670 _arguments -s -w : $_hg_global_opts $_hg_commit_opts \
671 '(--strip -p)'{-p+,--strip}'[directory strip option for patch (default: 1)]:count:' \
671 '(--strip -p)'{-p+,--strip=}'[directory strip option for patch (default: 1)]:count:' \
672 672 '(--force -f)'{-f,--force}'[skip check for outstanding uncommitted changes]' \
673 673 '--bypass[apply patch without touching the working directory]' \
674 674 '--no-commit[do not commit, just update the working directory]' \
675 675 '--exact[apply patch to the nodes from which it was generated]' \
676 676 '--import-branch[use any branch information in patch (implied by --exact)]' \
677 '(--date -d)'{-d+,--date}'[record datecode as commit date]:date code:' \
678 '(--user -u)'{-u+,--user}'[record user as committer]:user:' \
679 '(--similarity -s)'{-s+,--similarity}'[guess renamed files by similarity (0<=s<=100)]:' \
677 '(--date -d)'{-d+,--date=}'[record datecode as commit date]:date code:' \
678 '(--user -u)'{-u+,--user=}'[record user as committer]:user:' \
679 '(--similarity -s)'{-s+,--similarity=}'[guess renamed files by similarity (0<=s<=100)]:' \
680 680 '*:patch:_files'
681 681 }
682 682
@@ -684,7 +684,7 b' typeset -A _hg_cmd_globals'
684 684 _arguments -s -w : $_hg_log_opts $_hg_branch_bmark_opts $_hg_remote_opts \
685 685 $_hg_subrepos_opts \
686 686 '(--force -f)'{-f,--force}'[run even when the remote repository is unrelated]' \
687 '(--rev -r)'{-r+,--rev}'[a specific revision up to which you would like to pull]:revision:_hg_labels' \
687 '(--rev -r)'{-r+,--rev=}'[a specific revision up to which you would like to pull]:revision:_hg_labels' \
688 688 '(--newest-first -n)'{-n,--newest-first}'[show newest record first]' \
689 689 '--bundle[file to store the bundles into]:bundle file:_files' \
690 690 ':source:_hg_remote'
@@ -697,7 +697,7 b' typeset -A _hg_cmd_globals'
697 697
698 698 _hg_cmd_locate() {
699 699 _arguments -s -w : $_hg_global_opts $_hg_pat_opts \
700 '(--rev -r)'{-r+,--rev}'[search repository as it stood at revision]:revision:_hg_labels' \
700 '(--rev -r)'{-r+,--rev=}'[search repository as it stood at revision]:revision:_hg_labels' \
701 701 '(--print0 -0)'{-0,--print0}'[end filenames with NUL, for use with xargs]' \
702 702 '(--fullpath -f)'{-f,--fullpath}'[print complete paths]' \
703 703 '*:search pattern:_hg_files'
@@ -709,27 +709,27 b' typeset -A _hg_cmd_globals'
709 709 '(-f --follow)--follow-first[only follow the first parent of merge changesets]' \
710 710 '(--copies -C)'{-C,--copies}'[show copied files]' \
711 711 '(--keyword -k)'{-k+,--keyword}'[search for a keyword]:' \
712 '*'{-r,--rev}'[show the specified revision or revset]:revision:_hg_revrange' \
712 '*'{-r+,--rev=}'[show the specified revision or revset]:revision:_hg_revrange' \
713 713 '(--only-merges -m)'{-m,--only-merges}'[show only merges]' \
714 '(--prune -P)'{-P+,--prune}'[do not display revision or any of its ancestors]:revision:_hg_labels' \
715 '(--graph -G)'{-G+,--graph}'[show the revision DAG]' \
716 '(--branch -b)'{-b+,--branch}'[show changesets within the given named branch]:branch:_hg_branches' \
717 '(--user -u)'{-u+,--user}'[revisions committed by user]:user:' \
718 '(--date -d)'{-d+,--date}'[show revisions matching date spec]:date:' \
714 '(--prune -P)'{-P+,--prune=}'[do not display revision or any of its ancestors]:revision:_hg_labels' \
715 '(--graph -G)'{-G,--graph}'[show the revision DAG]' \
716 '(--branch -b)'{-b+,--branch=}'[show changesets within the given named branch]:branch:_hg_branches' \
717 '(--user -u)'{-u+,--user=}'[revisions committed by user]:user:' \
718 '(--date -d)'{-d+,--date=}'[show revisions matching date spec]:date:' \
719 719 '*:files:_hg_files'
720 720 }
721 721
722 722 _hg_cmd_manifest() {
723 723 _arguments -s -w : $_hg_global_opts \
724 724 '--all[list files from all revisions]' \
725 '(--rev -r)'{-r+,--rev}'[revision to display]:revision:_hg_labels' \
725 '(--rev -r)'{-r+,--rev=}'[revision to display]:revision:_hg_labels' \
726 726 ':revision:_hg_labels'
727 727 }
728 728
729 729 _hg_cmd_merge() {
730 730 _arguments -s -w : $_hg_global_opts $_hg_mergetool_opts \
731 731 '(--force -f)'{-f,--force}'[force a merge with outstanding changes]' \
732 '(--rev -r 1)'{-r,--rev}'[revision to merge]:revision:_hg_mergerevs' \
732 '(--rev -r 1)'{-r+,--rev=}'[revision to merge]:revision:_hg_mergerevs' \
733 733 '(--preview -P)'{-P,--preview}'[review revisions to merge (no merge is performed)]' \
734 734 ':revision:_hg_mergerevs'
735 735 }
@@ -738,14 +738,14 b' typeset -A _hg_cmd_globals'
738 738 _arguments -s -w : $_hg_log_opts $_hg_branch_bmark_opts $_hg_remote_opts \
739 739 $_hg_subrepos_opts \
740 740 '(--force -f)'{-f,--force}'[run even when the remote repository is unrelated]' \
741 '*'{-r,--rev}'[a specific revision you would like to push]:revision:_hg_revrange' \
741 '*'{-r+,--rev=}'[a specific revision you would like to push]:revision:_hg_revrange' \
742 742 '(--newest-first -n)'{-n,--newest-first}'[show newest record first]' \
743 743 ':destination:_hg_remote'
744 744 }
745 745
746 746 _hg_cmd_parents() {
747 747 _arguments -s -w : $_hg_global_opts $_hg_style_opts \
748 '(--rev -r)'{-r+,--rev}'[show parents of the specified rev]:revision:_hg_labels' \
748 '(--rev -r)'{-r+,--rev=}'[show parents of the specified rev]:revision:_hg_labels' \
749 749 ':last modified file:_hg_files'
750 750 }
751 751
@@ -760,7 +760,7 b' typeset -A _hg_cmd_globals'
760 760 '(--draft -d)'{-d,--draft}'[set changeset phase to draft]' \
761 761 '(--secret -s)'{-s,--secret}'[set changeset phase to secret]' \
762 762 '(--force -f)'{-f,--force}'[allow to move boundary backward]' \
763 '(--rev -r)'{-r+,--rev}'[target revision]:revision:_hg_labels' \
763 '(--rev -r)'{-r+,--rev=}'[target revision]:revision:_hg_labels' \
764 764 ':revision:_hg_labels'
765 765 }
766 766
@@ -775,7 +775,7 b' typeset -A _hg_cmd_globals'
775 775 _hg_cmd_push() {
776 776 _arguments -s -w : $_hg_global_opts $_hg_branch_bmark_opts $_hg_remote_opts \
777 777 '(--force -f)'{-f,--force}'[force push]' \
778 '(--rev -r)'{-r+,--rev}'[a specific revision you would like to push]:revision:_hg_labels' \
778 '(--rev -r)'{-r+,--rev=}'[a specific revision you would like to push]:revision:_hg_labels' \
779 779 '--new-branch[allow pushing a new branch]' \
780 780 ':destination:_hg_remote'
781 781 }
@@ -819,9 +819,9 b' typeset -A _hg_cmd_globals'
819 819
820 820 _arguments -s -w : $_hg_global_opts $_hg_pat_opts $_hg_dryrun_opts \
821 821 '(--all -a :)'{-a,--all}'[revert all changes when no arguments given]' \
822 '(--rev -r)'{-r+,--rev}'[revision to revert to]:revision:_hg_labels' \
822 '(--rev -r)'{-r+,--rev=}'[revision to revert to]:revision:_hg_labels' \
823 823 '(--no-backup -C)'{-C,--no-backup}'[do not save backup copies of files]' \
824 '(--date -d)'{-d+,--date}'[tipmost revision matching date]:date code:' \
824 '(--date -d)'{-d+,--date=}'[tipmost revision matching date]:date code:' \
825 825 '*:file:->diff_files'
826 826
827 827 if [[ $state == 'diff_files' ]]
@@ -844,13 +844,13 b' typeset -A _hg_cmd_globals'
844 844
845 845 _hg_cmd_serve() {
846 846 _arguments -s -w : $_hg_global_opts \
847 '(--accesslog -A)'{-A+,--accesslog}'[name of access log file]:log file:_files' \
848 '(--errorlog -E)'{-E+,--errorlog}'[name of error log file]:log file:_files' \
847 '(--accesslog -A)'{-A+,--accesslog=}'[name of access log file]:log file:_files' \
848 '(--errorlog -E)'{-E+,--errorlog=}'[name of error log file]:log file:_files' \
849 849 '(--daemon -d)'{-d,--daemon}'[run server in background]' \
850 '(--port -p)'{-p+,--port}'[listen port]:listen port:' \
851 '(--address -a)'{-a+,--address}'[interface address]:interface address:' \
850 '(--port -p)'{-p+,--port=}'[listen port]:listen port:' \
851 '(--address -a)'{-a+,--address=}'[interface address]:interface address:' \
852 852 '--prefix[prefix path to serve from]:directory:_files' \
853 '(--name -n)'{-n+,--name}'[name to show in web pages]:repository name:' \
853 '(--name -n)'{-n+,--name=}'[name to show in web pages]:repository name:' \
854 854 '--web-conf[name of the hgweb config file]:webconf_file:_files' \
855 855 '--pid-file[name of file to write process ID to]:pid_file:_files' \
856 856 '--cmdserver[cmdserver mode]:mode:' \
@@ -863,7 +863,7 b' typeset -A _hg_cmd_globals'
863 863
864 864 _hg_cmd_showconfig() {
865 865 _arguments -s -w : $_hg_global_opts \
866 '(--untrusted -u)'{-u+,--untrusted}'[show untrusted configuration options]' \
866 '(--untrusted -u)'{-u,--untrusted}'[show untrusted configuration options]' \
867 867 ':config item:_hg_config'
868 868 }
869 869
@@ -893,10 +893,10 b' typeset -A _hg_cmd_globals'
893 893 _hg_cmd_tag() {
894 894 _arguments -s -w : $_hg_global_opts \
895 895 '(--local -l)'{-l,--local}'[make the tag local]' \
896 '(--message -m)'{-m+,--message}'[message for tag commit log entry]:message:' \
897 '(--date -d)'{-d+,--date}'[record datecode as commit date]:date code:' \
898 '(--user -u)'{-u+,--user}'[record user as committer]:user:' \
899 '(--rev -r)'{-r+,--rev}'[revision to tag]:revision:_hg_labels' \
896 '(--message -m)'{-m+,--message=}'[message for tag commit log entry]:message:' \
897 '(--date -d)'{-d+,--date=}'[record datecode as commit date]:date code:' \
898 '(--user -u)'{-u+,--user=}'[record user as committer]:user:' \
899 '(--rev -r)'{-r+,--rev=}'[revision to tag]:revision:_hg_labels' \
900 900 '(--force -f)'{-f,--force}'[force tag]' \
901 901 '--remove[remove a tag]' \
902 902 '(--edit -e)'{-e,--edit}'[edit commit message]' \
@@ -917,9 +917,9 b' typeset -A _hg_cmd_globals'
917 917 _hg_cmd_update() {
918 918 _arguments -s -w : $_hg_global_opts \
919 919 '(--clean -C)'{-C,--clean}'[overwrite locally modified files]' \
920 '(--rev -r)'{-r+,--rev}'[revision]:revision:_hg_labels' \
920 '(--rev -r)'{-r+,--rev=}'[revision]:revision:_hg_labels' \
921 921 '(--check -c)'{-c,--check}'[update across branches if no uncommitted changes]' \
922 '(--date -d)'{-d+,--date}'[tipmost revision matching date]' \
922 '(--date -d)'{-d+,--date=}'[tipmost revision matching date]:' \
923 923 ':revision:_hg_labels'
924 924 }
925 925
@@ -928,7 +928,7 b' typeset -A _hg_cmd_globals'
928 928 # HGK
929 929 _hg_cmd_view() {
930 930 _arguments -s -w : $_hg_global_opts \
931 '(--limit -l)'{-l+,--limit}'[limit number of changes displayed]:' \
931 '(--limit -l)'{-l+,--limit=}'[limit number of changes displayed]:' \
932 932 ':revision range:_hg_labels'
933 933 }
934 934
@@ -989,7 +989,7 b' typeset -A _hg_cmd_globals'
989 989
990 990 _hg_cmd_qclone() {
991 991 _arguments -s -w : $_hg_global_opts $_hg_remote_opts $_hg_clone_opts \
992 '(--patches -p)'{-p+,--patches}'[location of source patch repository]' \
992 '(--patches -p)'{-p+,--patches=}'[location of source patch repository]:' \
993 993 ':source repository:_hg_remote' \
994 994 ':destination:_hg_clone_dest'
995 995 }
@@ -997,7 +997,7 b' typeset -A _hg_cmd_globals'
997 997 _hg_cmd_qdelete() {
998 998 _arguments -s -w : $_hg_global_opts \
999 999 '(--keep -k)'{-k,--keep}'[keep patch file]' \
1000 '*'{-r+,--rev}'[stop managing a revision]:applied patch:_hg_revrange' \
1000 '*'{-r+,--rev=}'[stop managing a revision]:applied patch:_hg_revrange' \
1001 1001 '*:unapplied patch:_hg_qdeletable'
1002 1002 }
1003 1003
@@ -1046,7 +1046,7 b' typeset -A _hg_cmd_globals'
1046 1046 '(--existing -e)'{-e,--existing}'[import file in patch dir]' \
1047 1047 '(--name -n 2)'{-n+,--name}'[patch file name]:name:' \
1048 1048 '(--force -f)'{-f,--force}'[overwrite existing files]' \
1049 '*'{-r+,--rev}'[place existing revisions under mq control]:revision:_hg_revrange' \
1049 '*'{-r+,--rev=}'[place existing revisions under mq control]:revision:_hg_revrange' \
1050 1050 '(--push -P)'{-P,--push}'[qpush after importing]' \
1051 1051 '*:patch:_files'
1052 1052 }
@@ -1125,8 +1125,8 b' typeset -A _hg_cmd_globals'
1125 1125 '(--force -f)'{-f,--force}'[force removal, discard uncommitted changes, no backup]' \
1126 1126 '(--no-backup -n)'{-n,--no-backup}'[no backups]' \
1127 1127 '(--keep -k)'{-k,--keep}'[do not modify working copy during strip]' \
1128 '(--bookmark -B)'{-B+,--bookmark}'[remove revs only reachable from given bookmark]:bookmark:_hg_bookmarks' \
1129 '(--rev -r)'{-r+,--rev}'[revision]:revision:_hg_labels' \
1128 '(--bookmark -B)'{-B+,--bookmark=}'[remove revs only reachable from given bookmark]:bookmark:_hg_bookmarks' \
1129 '(--rev -r)'{-r+,--rev=}'[revision]:revision:_hg_labels' \
1130 1130 ':revision:_hg_labels'
1131 1131 }
1132 1132
@@ -1138,7 +1138,7 b' typeset -A _hg_cmd_globals'
1138 1138 '(--outgoing -o)'{-o,--outgoing}'[send changes not found in the target repository]' \
1139 1139 '(--bundle -b)'{-b,--bundle}'[send changes not in target as a binary bundle]' \
1140 1140 '--bundlename[name of the bundle attachment file (default: bundle)]:' \
1141 '*'{-r+,--rev}'[search in given revision range]:revision:_hg_revrange' \
1141 '*'{-r+,--rev=}'[search in given revision range]:revision:_hg_revrange' \
1142 1142 '--force[run even when remote repository is unrelated (with -b/--bundle)]' \
1143 1143 '*--base[a base changeset to specify instead of a destination (with -b/--bundle)]:revision:_hg_labels' \
1144 1144 '--intro[send an introduction email for a single patch]' \
@@ -1163,10 +1163,10 b' typeset -A _hg_cmd_globals'
1163 1163 # Rebase
1164 1164 _hg_cmd_rebase() {
1165 1165 _arguments -s -w : $_hg_global_opts $_hg_commit_opts $_hg_mergetool_opts \
1166 '*'{-r,--rev}'[rebase these revisions]:revision:_hg_revrange' \
1167 '(--source -s)'{-s+,--source}'[rebase from the specified changeset]:revision:_hg_labels' \
1168 '(--base -b)'{-b+,--base}'[rebase from the base of the specified changeset]:revision:_hg_labels' \
1169 '(--dest -d)'{-d+,--dest}'[rebase onto the specified changeset]:revision:_hg_labels' \
1166 '*'{-r+,--rev=}'[rebase these revisions]:revision:_hg_revrange' \
1167 '(--source -s)'{-s+,--source=}'[rebase from the specified changeset]:revision:_hg_labels' \
1168 '(--base -b)'{-b+,--base=}'[rebase from the base of the specified changeset]:revision:_hg_labels' \
1169 '(--dest -d)'{-d+,--dest=}'[rebase onto the specified changeset]:revision:_hg_labels' \
1170 1170 '--collapse[collapse the rebased changeset]' \
1171 1171 '--keep[keep original changeset]' \
1172 1172 '--keepbranches[keep original branch name]' \
@@ -1181,8 +1181,8 b' typeset -A _hg_cmd_globals'
1181 1181 '(--addremove -A)'{-A,--addremove}'[mark new/missing files as added/removed before committing]' \
1182 1182 '--close-branch[mark a branch as closed, hiding it from the branch list]' \
1183 1183 '--amend[amend the parent of the working dir]' \
1184 '(--date -d)'{-d+,--date}'[record the specified date as commit date]:date:' \
1185 '(--user -u)'{-u+,--user}'[record the specified user as committer]:user:'
1184 '(--date -d)'{-d+,--date=}'[record the specified date as commit date]:date:' \
1185 '(--user -u)'{-u+,--user=}'[record the specified user as committer]:user:'
1186 1186 }
1187 1187
1188 1188 _hg_cmd_qrecord() {
@@ -1195,8 +1195,8 b' typeset -A _hg_cmd_globals'
1195 1195 _arguments -s -w : $_hg_global_opts \
1196 1196 '(--source-type -s)'{-s,--source-type}'[source repository type]' \
1197 1197 '(--dest-type -d)'{-d,--dest-type}'[destination repository type]' \
1198 '(--rev -r)'{-r+,--rev}'[import up to target revision]:revision:' \
1199 '(--authormap -A)'{-A+,--authormap}'[remap usernames using this file]:file:_files' \
1198 '(--rev -r)'{-r+,--rev=}'[import up to target revision]:revision:' \
1199 '(--authormap -A)'{-A+,--authormap=}'[remap usernames using this file]:file:_files' \
1200 1200 '--filemap[remap file names using contents of file]:file:_files' \
1201 1201 '--splicemap[splice synthesized history into place]:file:_files' \
1202 1202 '--branchmap[change branch names while converting]:file:_files' \
@@ -57,7 +57,6 b' try:'
57 57 import roman
58 58 except ImportError:
59 59 from docutils.utils import roman
60 import inspect
61 60
62 61 FIELD_LIST_INDENT = 7
63 62 DEFINITION_LIST_INDENT = 7
@@ -289,10 +288,10 b' class Translator(nodes.NodeVisitor):'
289 288 text = node.astext()
290 289 text = text.replace('\\','\\e')
291 290 replace_pairs = [
292 (u'-', ur'\-'),
293 (u'\'', ur'\(aq'),
294 (u'´', ur'\''),
295 (u'`', ur'\(ga'),
291 (u'-', u'\\-'),
292 (u"'", u'\\(aq'),
293 (u'´', u"\\'"),
294 (u'`', u'\\(ga'),
296 295 ]
297 296 for (in_char, out_markup) in replace_pairs:
298 297 text = text.replace(in_char, out_markup)
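
Editor's note: the ``ur''`` literals were dropped because Python 3 rejects that
prefix; doubling the backslashes in a plain unicode literal yields identical
strings. A quick check of the rewritten pairs::

    pairs = [(u'-', u'\\-'), (u"'", u'\\(aq'), (u'`', u'\\(ga')]
    text = u"don't use `-`"
    for in_char, out_markup in pairs:
        text = text.replace(in_char, out_markup)
    # each groff escape is a literal backslash followed by the markup
    assert text == u"don\\(aqt use \\(ga\\-\\(ga"
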
@@ -204,11 +204,11 b' from mercurial import ('
204 204
205 205 urlreq = util.urlreq
206 206
207 # Note for extension authors: ONLY specify testedwith = 'internal' for
207 # Note for extension authors: ONLY specify testedwith = 'ships-with-hg-core' for
208 208 # extensions which SHIP WITH MERCURIAL. Non-mainline extensions should
209 209 # be specifying the version(s) of Mercurial they are tested with, or
210 210 # leave the attribute unspecified.
211 testedwith = 'internal'
211 testedwith = 'ships-with-hg-core'
212 212
213 213 def _getusers(ui, group):
214 214
@@ -51,11 +51,11 b' from mercurial import ('
51 51
52 52 cmdtable = {}
53 53 command = cmdutil.command(cmdtable)
54 # Note for extension authors: ONLY specify testedwith = 'internal' for
54 # Note for extension authors: ONLY specify testedwith = 'ships-with-hg-core' for
55 55 # extensions which SHIP WITH MERCURIAL. Non-mainline extensions should
56 56 # be specifying the version(s) of Mercurial they are tested with, or
57 57 # leave the attribute unspecified.
58 testedwith = 'internal'
58 testedwith = 'ships-with-hg-core'
59 59 lastui = None
60 60
61 61 filehandles = {}
@@ -294,11 +294,11 b' from mercurial import ('
294 294 urlparse = util.urlparse
295 295 xmlrpclib = util.xmlrpclib
296 296
297 # Note for extension authors: ONLY specify testedwith = 'internal' for
297 # Note for extension authors: ONLY specify testedwith = 'ships-with-hg-core' for
298 298 # extensions which SHIP WITH MERCURIAL. Non-mainline extensions should
299 299 # be specifying the version(s) of Mercurial they are tested with, or
300 300 # leave the attribute unspecified.
301 testedwith = 'internal'
301 testedwith = 'ships-with-hg-core'
302 302
303 303 class bzaccess(object):
304 304 '''Base class for access to Bugzilla.'''
@@ -42,11 +42,11 b' from mercurial import ('
42 42
43 43 cmdtable = {}
44 44 command = cmdutil.command(cmdtable)
45 # Note for extension authors: ONLY specify testedwith = 'internal' for
45 # Note for extension authors: ONLY specify testedwith = 'ships-with-hg-core' for
46 46 # extensions which SHIP WITH MERCURIAL. Non-mainline extensions should
47 47 # be specifying the version(s) of Mercurial they are tested with, or
48 48 # leave the attribute unspecified.
49 testedwith = 'internal'
49 testedwith = 'ships-with-hg-core'
50 50
51 51 @command('censor',
52 52 [('r', 'rev', '', _('censor file from specified revision'), _('REV')),
@@ -63,11 +63,11 b' from mercurial import ('
63 63 util,
64 64 )
65 65
66 # Note for extension authors: ONLY specify testedwith = 'internal' for
66 # Note for extension authors: ONLY specify testedwith = 'ships-with-hg-core' for
67 67 # extensions which SHIP WITH MERCURIAL. Non-mainline extensions should
68 68 # be specifying the version(s) of Mercurial they are tested with, or
69 69 # leave the attribute unspecified.
70 testedwith = 'internal'
70 testedwith = 'ships-with-hg-core'
71 71
72 72 _log = commandserver.log
73 73
@@ -26,11 +26,11 b' templateopts = commands.templateopts'
26 26
27 27 cmdtable = {}
28 28 command = cmdutil.command(cmdtable)
29 # Note for extension authors: ONLY specify testedwith = 'internal' for
29 # Note for extension authors: ONLY specify testedwith = 'ships-with-hg-core' for
30 30 # extensions which SHIP WITH MERCURIAL. Non-mainline extensions should
31 31 # be specifying the version(s) of Mercurial they are tested with, or
32 32 # leave the attribute unspecified.
33 testedwith = 'internal'
33 testedwith = 'ships-with-hg-core'
34 34
35 35 @command('children',
36 36 [('r', 'rev', '',
@@ -26,11 +26,11 b' from mercurial import ('
26 26
27 27 cmdtable = {}
28 28 command = cmdutil.command(cmdtable)
29 # Note for extension authors: ONLY specify testedwith = 'internal' for
29 # Note for extension authors: ONLY specify testedwith = 'ships-with-hg-core' for
30 30 # extensions which SHIP WITH MERCURIAL. Non-mainline extensions should
31 31 # be specifying the version(s) of Mercurial they are tested with, or
32 32 # leave the attribute unspecified.
33 testedwith = 'internal'
33 testedwith = 'ships-with-hg-core'
34 34
35 35 def maketemplater(ui, repo, tmpl):
36 36 return cmdutil.changeset_templater(ui, repo, False, None, tmpl, None, False)
@@ -169,7 +169,7 b' from mercurial import ('
169 169 wireproto,
170 170 )
171 171
172 testedwith = 'internal'
172 testedwith = 'ships-with-hg-core'
173 173
174 174 def capabilities(orig, repo, proto):
175 175 caps = orig(repo, proto)
@@ -29,6 +29,15 b" ECMA-48 mode, the options are 'bold', 'i"
29 29 Some may not be available for a given terminal type, and will be
30 30 silently ignored.
31 31
32 If the terminfo entry for your terminal is missing codes for an effect
33 or has the wrong codes, you can add or override those codes in your
34 configuration::
35
36 [color]
37 terminfo.dim = \E[2m
38
39 where '\E' is substituted with an escape character.
40
32 41 Labels
33 42 ------
34 43
@@ -170,11 +179,11 b' from mercurial import ('
170 179
171 180 cmdtable = {}
172 181 command = cmdutil.command(cmdtable)
173 # Note for extension authors: ONLY specify testedwith = 'internal' for
182 # Note for extension authors: ONLY specify testedwith = 'ships-with-hg-core' for
174 183 # extensions which SHIP WITH MERCURIAL. Non-mainline extensions should
175 184 # be specifying the version(s) of Mercurial they are tested with, or
176 185 # leave the attribute unspecified.
177 testedwith = 'internal'
186 testedwith = 'ships-with-hg-core'
178 187
179 188 # start and stop parameters for effects
180 189 _effects = {'none': 0, 'black': 30, 'red': 31, 'green': 32, 'yellow': 33,
@@ -196,9 +205,12 b' def _terminfosetup(ui, mode):'
196 205 if mode not in ('auto', 'terminfo'):
197 206 return
198 207
199 _terminfo_params.update((key[6:], (False, int(val)))
208 _terminfo_params.update((key[6:], (False, int(val), ''))
200 209 for key, val in ui.configitems('color')
201 210 if key.startswith('color.'))
211 _terminfo_params.update((key[9:], (True, '', val.replace('\\E', '\x1b')))
212 for key, val in ui.configitems('color')
213 if key.startswith('terminfo.'))
202 214
203 215 try:
204 216 curses.setupterm()
@@ -206,10 +218,10 b' def _terminfosetup(ui, mode):'
206 218 _terminfo_params = {}
207 219 return
208 220
209 for key, (b, e) in _terminfo_params.items():
221 for key, (b, e, c) in _terminfo_params.items():
210 222 if not b:
211 223 continue
212 if not curses.tigetstr(e):
224 if not c and not curses.tigetstr(e):
213 225 # Most terminals don't support dim, invis, etc, so don't be
214 226 # noisy and use ui.debug().
215 227 ui.debug("no terminfo entry for %s\n" % e)
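
Editor's note: the second ``update()`` call above is what picks up the new
``terminfo.*`` overrides; a condensed sketch of just the value handling (the
``\E`` placeholder becomes a literal escape byte)::

    def parseoverride(val):
        # '\E[2m' from the config file becomes ESC [ 2 m
        return val.replace('\\E', '\x1b')

    assert parseoverride('\\E[2m') == '\x1b[2m'
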
@@ -290,26 +302,26 b' def _modesetup(ui, coloropt):'
290 302
291 303 try:
292 304 import curses
293 # Mapping from effect name to terminfo attribute name or color number.
294 # This will also force-load the curses module.
295 _terminfo_params = {'none': (True, 'sgr0'),
296 'standout': (True, 'smso'),
297 'underline': (True, 'smul'),
298 'reverse': (True, 'rev'),
299 'inverse': (True, 'rev'),
300 'blink': (True, 'blink'),
301 'dim': (True, 'dim'),
302 'bold': (True, 'bold'),
303 'invisible': (True, 'invis'),
304 'italic': (True, 'sitm'),
305 'black': (False, curses.COLOR_BLACK),
306 'red': (False, curses.COLOR_RED),
307 'green': (False, curses.COLOR_GREEN),
308 'yellow': (False, curses.COLOR_YELLOW),
309 'blue': (False, curses.COLOR_BLUE),
310 'magenta': (False, curses.COLOR_MAGENTA),
311 'cyan': (False, curses.COLOR_CYAN),
312 'white': (False, curses.COLOR_WHITE)}
305 # Mapping from effect name to terminfo attribute name (or raw code) or
306 # color number. This will also force-load the curses module.
307 _terminfo_params = {'none': (True, 'sgr0', ''),
308 'standout': (True, 'smso', ''),
309 'underline': (True, 'smul', ''),
310 'reverse': (True, 'rev', ''),
311 'inverse': (True, 'rev', ''),
312 'blink': (True, 'blink', ''),
313 'dim': (True, 'dim', ''),
314 'bold': (True, 'bold', ''),
315 'invisible': (True, 'invis', ''),
316 'italic': (True, 'sitm', ''),
317 'black': (False, curses.COLOR_BLACK, ''),
318 'red': (False, curses.COLOR_RED, ''),
319 'green': (False, curses.COLOR_GREEN, ''),
320 'yellow': (False, curses.COLOR_YELLOW, ''),
321 'blue': (False, curses.COLOR_BLUE, ''),
322 'magenta': (False, curses.COLOR_MAGENTA, ''),
323 'cyan': (False, curses.COLOR_CYAN, ''),
324 'white': (False, curses.COLOR_WHITE, '')}
313 325 except ImportError:
314 326 _terminfo_params = {}
315 327
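
Editor's note: every ``_terminfo_params`` entry now carries three fields. A
condensed sketch of how a consumer reads them, assuming ``curses.setupterm()``
has already run (foreground colors only; the real code also handles the
``_background`` suffix via ``setab``)::

    import curses

    def effectbytes(params, effect):
        # entry layout: (is_attribute, capname_or_colornumber, raw_override)
        attr, val, termcode = params[effect]
        if attr:
            # a raw code from [color] terminfo.* wins over the terminfo db
            return termcode if termcode else curses.tigetstr(val)
        return curses.tparm(curses.tigetstr('setaf'), val)
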
@@ -375,9 +387,15 b' def _effect_str(effect):'
375 387 if effect.endswith('_background'):
376 388 bg = True
377 389 effect = effect[:-11]
378 attr, val = _terminfo_params[effect]
390 try:
391 attr, val, termcode = _terminfo_params[effect]
392 except KeyError:
393 return ''
379 394 if attr:
380 return curses.tigetstr(val)
395 if termcode:
396 return termcode
397 else:
398 return curses.tigetstr(val)
381 399 elif bg:
382 400 return curses.tparm(curses.tigetstr('setab'), val)
383 401 else:
@@ -412,7 +430,7 b' def valideffect(effect):'
412 430
413 431 def configstyles(ui):
414 432 for status, cfgeffects in ui.configitems('color'):
415 if '.' not in status or status.startswith('color.'):
433 if '.' not in status or status.startswith(('color.', 'terminfo.')):
416 434 continue
417 435 cfgeffects = ui.configlist('color', status)
418 436 if cfgeffects:
@@ -524,10 +542,16 b' def debugcolor(ui, repo, **opts):'
524 542 _styles = {}
525 543 for effect in _effects.keys():
526 544 _styles[effect] = effect
545 if _terminfo_params:
546 for k, v in ui.configitems('color'):
547 if k.startswith('color.'):
548 _styles[k] = k[6:]
549 elif k.startswith('terminfo.'):
550 _styles[k] = k[9:]
527 551 ui.write(('color mode: %s\n') % ui._colormode)
528 552 ui.write(_('available colors:\n'))
529 for label, colors in _styles.items():
530 ui.write(('%s\n') % colors, label=label)
553 for colorname, label in _styles.items():
554 ui.write(('%s\n') % colorname, label=label)
531 555
532 556 if os.name != 'nt':
533 557 w32effects = None
@@ -558,8 +582,8 b' else:'
558 582 ('srWindow', _SMALL_RECT),
559 583 ('dwMaximumWindowSize', _COORD)]
560 584
561 _STD_OUTPUT_HANDLE = 0xfffffff5L # (DWORD)-11
562 _STD_ERROR_HANDLE = 0xfffffff4L # (DWORD)-12
585 _STD_OUTPUT_HANDLE = 0xfffffff5 # (DWORD)-11
586 _STD_ERROR_HANDLE = 0xfffffff4 # (DWORD)-12
563 587
564 588 _FOREGROUND_BLUE = 0x0001
565 589 _FOREGROUND_GREEN = 0x0002
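
Editor's note: dropping the ``L`` suffix is Python 3 preparation; the values are
unchanged. A quick check that the hex constants match the DWORD encodings named
in the comments::

    # two's complement of -11 and -12 in 32 bits
    assert -11 & 0xffffffff == 0xfffffff5
    assert -12 & 0xffffffff == 0xfffffff4
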
@@ -23,11 +23,11 b' from . import ('
23 23
24 24 cmdtable = {}
25 25 command = cmdutil.command(cmdtable)
26 # Note for extension authors: ONLY specify testedwith = 'internal' for
26 # Note for extension authors: ONLY specify testedwith = 'ships-with-hg-core' for
27 27 # extensions which SHIP WITH MERCURIAL. Non-mainline extensions should
28 28 # be specifying the version(s) of Mercurial they are tested with, or
29 29 # leave the attribute unspecified.
30 testedwith = 'internal'
30 testedwith = 'ships-with-hg-core'
31 31
32 32 # Commands definition was moved elsewhere to ease demandload job.
33 33
@@ -531,8 +531,8 b' class svn_source(converter_source):'
531 531 def checkrevformat(self, revstr, mapname='splicemap'):
532 532 """ fails if revision format does not match the correct format"""
533 533 if not re.match(r'svn:[0-9a-f]{8,8}-[0-9a-f]{4,4}-'
534 '[0-9a-f]{4,4}-[0-9a-f]{4,4}-[0-9a-f]'
535 '{12,12}(.*)\@[0-9]+$',revstr):
534 r'[0-9a-f]{4,4}-[0-9a-f]{4,4}-[0-9a-f]'
535 r'{12,12}(.*)\@[0-9]+$',revstr):
536 536 raise error.Abort(_('%s entry %s is not a valid revision'
537 537 ' identifier') % (mapname, revstr))
538 538
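
Editor's note: with the raw-string prefixes the pattern is byte-for-byte what it
was before (``'\@'`` happens to pass through a non-raw literal unchanged), just
explicit about it. A sketch exercising the pattern with a made-up but
well-formed identifier::

    import re
    pat = (r'svn:[0-9a-f]{8,8}-[0-9a-f]{4,4}-'
           r'[0-9a-f]{4,4}-[0-9a-f]{4,4}-[0-9a-f]'
           r'{12,12}(.*)\@[0-9]+$')
    assert re.match(pat, 'svn:12345678-abcd-abcd-abcd-0123456789ab/trunk@42')
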
@@ -104,11 +104,11 b' from mercurial import ('
104 104 util,
105 105 )
106 106
107 # Note for extension authors: ONLY specify testedwith = 'internal' for
107 # Note for extension authors: ONLY specify testedwith = 'ships-with-hg-core' for
108 108 # extensions which SHIP WITH MERCURIAL. Non-mainline extensions should
109 109 # be specifying the version(s) of Mercurial they are tested with, or
110 110 # leave the attribute unspecified.
111 testedwith = 'internal'
111 testedwith = 'ships-with-hg-core'
112 112
113 113 # Matches a lone LF, i.e., one that is not part of CRLF.
114 114 singlelf = re.compile('(^|[^\r])\n')
@@ -175,25 +175,27 b' class eolfile(object):'
175 175
176 176 include = []
177 177 exclude = []
178 self.patterns = []
178 179 for pattern, style in self.cfg.items('patterns'):
179 180 key = style.upper()
180 181 if key == 'BIN':
181 182 exclude.append(pattern)
182 183 else:
183 184 include.append(pattern)
185 m = match.match(root, '', [pattern])
186 self.patterns.append((pattern, key, m))
184 187 # This will match the files for which we need to care
185 188 # about inconsistent newlines.
186 189 self.match = match.match(root, '', [], include, exclude)
187 190
188 191 def copytoui(self, ui):
189 for pattern, style in self.cfg.items('patterns'):
190 key = style.upper()
192 for pattern, key, m in self.patterns:
191 193 try:
192 194 ui.setconfig('decode', pattern, self._decode[key], 'eol')
193 195 ui.setconfig('encode', pattern, self._encode[key], 'eol')
194 196 except KeyError:
195 197 ui.warn(_("ignoring unknown EOL style '%s' from %s\n")
196 % (style, self.cfg.source('patterns', pattern)))
198 % (key, self.cfg.source('patterns', pattern)))
197 199 # eol.only-consistent can be specified in ~/.hgrc or .hgeol
198 200 for k, v in self.cfg.items('eol'):
199 201 ui.setconfig('eol', k, v, 'eol')
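
Editor's note: the new ``self.patterns`` list exists so each pattern's matcher
is compiled once, not once per file checked. The same idea in isolation
(``compilematcher`` is a hypothetical stand-in for
``match.match(root, '', [pattern])``)::

    def compilepatterns(items, compilematcher):
        # one matcher per (pattern, style) pair, built up front
        return [(pattern, style.upper(), compilematcher(pattern))
                for pattern, style in items]

    def styleof(patterns, filename):
        # reuse the precompiled matchers for every file
        for _pattern, key, m in patterns:
            if m(filename):
                return key
        return None
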
@@ -203,10 +205,10 b' class eolfile(object):'
203 205 for f in (files or ctx.files()):
204 206 if f not in ctx:
205 207 continue
206 for pattern, style in self.cfg.items('patterns'):
207 if not match.match(repo.root, '', [pattern])(f):
208 for pattern, key, m in self.patterns:
209 if not m(f):
208 210 continue
209 target = self._encode[style.upper()]
211 target = self._encode[key]
210 212 data = ctx[f].data()
211 213 if (target == "to-lf" and "\r\n" in data
212 214 or target == "to-crlf" and singlelf.search(data)):
@@ -305,15 +307,20 b' def reposetup(ui, repo):'
305 307 return eol.match
306 308
307 309 def _hgcleardirstate(self):
308 self._eolfile = self.loadeol([None, 'tip'])
309 if not self._eolfile:
310 self._eolfile = util.never
310 self._eolmatch = self.loadeol([None, 'tip'])
311 if not self._eolmatch:
312 self._eolmatch = util.never
311 313 return
312 314
315 oldeol = None
313 316 try:
314 317 cachemtime = os.path.getmtime(self.join("eol.cache"))
315 318 except OSError:
316 319 cachemtime = 0
320 else:
321 olddata = self.vfs.read("eol.cache")
322 if olddata:
323 oldeol = eolfile(self.ui, self.root, olddata)
317 324
318 325 try:
319 326 eolmtime = os.path.getmtime(self.wjoin(".hgeol"))
@@ -322,18 +329,37 b' def reposetup(ui, repo):'
322 329
323 330 if eolmtime > cachemtime:
324 331 self.ui.debug("eol: detected change in .hgeol\n")
332
333 hgeoldata = self.wvfs.read('.hgeol')
334 neweol = eolfile(self.ui, self.root, hgeoldata)
335
325 336 wlock = None
326 337 try:
327 338 wlock = self.wlock()
328 339 for f in self.dirstate:
329 if self.dirstate[f] == 'n':
330 # all normal files need to be looked at
331 # again since the new .hgeol file might no
332 # longer match a file it matched before
333 self.dirstate.normallookup(f)
334 # Create or touch the cache to update mtime
335 self.vfs("eol.cache", "w").close()
336 wlock.release()
340 if self.dirstate[f] != 'n':
341 continue
342 if oldeol is not None:
343 if not oldeol.match(f) and not neweol.match(f):
344 continue
345 oldkey = None
346 for pattern, key, m in oldeol.patterns:
347 if m(f):
348 oldkey = key
349 break
350 newkey = None
351 for pattern, key, m in neweol.patterns:
352 if m(f):
353 newkey = key
354 break
355 if oldkey == newkey:
356 continue
357 # all normal files need to be looked at again since
358 # the new .hgeol file specifies a different filter
359 self.dirstate.normallookup(f)
360 # Write the cache to update mtime and cache .hgeol
361 with self.vfs("eol.cache", "w") as f:
362 f.write(hgeoldata)
337 363 except error.LockUnavailable:
338 364 # If we cannot lock the repository and clear the
339 365 # dirstate, then a commit might not see all files
@@ -341,10 +367,13 b' def reposetup(ui, repo):'
341 367 # repository, then we can also not make a commit,
342 368 # so ignore the error.
343 369 pass
370 finally:
371 if wlock is not None:
372 wlock.release()
344 373
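
Editor's note: the rewritten loop only sends a file back through
``normallookup`` when its effective EOL style actually differs between the
cached and the new ``.hgeol``. The decision logic, condensed (``styleof`` as
sketched earlier)::

    def needslookup(oldeol, neweol, f):
        if oldeol is None:
            return True   # nothing cached to compare against
        if not oldeol.match(f) and not neweol.match(f):
            return False  # covered by neither version
        return styleof(oldeol.patterns, f) != styleof(neweol.patterns, f)
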
345 374 def commitctx(self, ctx, haserror=False):
346 375 for f in sorted(ctx.added() + ctx.modified()):
347 if not self._eolfile(f):
376 if not self._eolmatch(f):
348 377 continue
349 378 fctx = ctx[f]
350 379 if fctx is None:
@@ -84,11 +84,11 b' from mercurial import ('
84 84
85 85 cmdtable = {}
86 86 command = cmdutil.command(cmdtable)
87 # Note for extension authors: ONLY specify testedwith = 'internal' for
87 # Note for extension authors: ONLY specify testedwith = 'ships-with-hg-core' for
88 88 # extensions which SHIP WITH MERCURIAL. Non-mainline extensions should
89 89 # be specifying the version(s) of Mercurial they are tested with, or
90 90 # leave the attribute unspecified.
91 testedwith = 'internal'
91 testedwith = 'ships-with-hg-core'
92 92
93 93 def snapshot(ui, repo, files, node, tmproot, listsubrepos):
94 94 '''snapshot files as of some revision
@@ -324,6 +324,34 b' def extdiff(ui, repo, *pats, **opts):'
324 324 cmdline = ' '.join(map(util.shellquote, [program] + option))
325 325 return dodiff(ui, repo, cmdline, pats, opts)
326 326
327 class savedcmd(object):
328 """use external program to diff repository (or selected files)
329
330 Show differences between revisions for the specified files, using
331 the following program::
332
333 %(path)s
334
335 When two revision arguments are given, then changes are shown
336 between those revisions. If only one revision is specified then
337 that revision is compared to the working directory, and, when no
338 revisions are specified, the working directory files are compared
339 to its parent.
340 """
341
342 def __init__(self, path, cmdline):
343 # We can't pass non-ASCII through docstrings (and path is
344 # in an unknown encoding anyway)
345 docpath = path.encode("string-escape")
346 self.__doc__ = self.__doc__ % {'path': util.uirepr(docpath)}
347 self._cmdline = cmdline
348
349 def __call__(self, ui, repo, *pats, **opts):
350 options = ' '.join(map(util.shellquote, opts['option']))
351 if options:
352 options = ' ' + options
353 return dodiff(ui, repo, self._cmdline + options, pats, opts)
354
327 355 def uisetup(ui):
328 356 for cmd, path in ui.configitems('extdiff'):
329 357 path = util.expandpath(path)
@@ -357,28 +385,8 b' def uisetup(ui):'
357 385 ui.config('merge-tools', cmd+'.diffargs')
358 386 if args:
359 387 cmdline += ' ' + args
360 def save(cmdline):
361 '''use closure to save diff command to use'''
362 def mydiff(ui, repo, *pats, **opts):
363 options = ' '.join(map(util.shellquote, opts['option']))
364 if options:
365 options = ' ' + options
366 return dodiff(ui, repo, cmdline + options, pats, opts)
367 # We can't pass non-ASCII through docstrings (and path is
368 # in an unknown encoding anyway)
369 docpath = path.encode("string-escape")
370 mydiff.__doc__ = '''\
371 use %(path)s to diff repository (or selected files)
388 command(cmd, extdiffopts[:], _('hg %s [OPTION]... [FILE]...') % cmd,
389 inferrepo=True)(savedcmd(path, cmdline))
372 390
373 Show differences between revisions for the specified files, using
374 the %(path)s program.
375
376 When two revision arguments are given, then changes are shown
377 between those revisions. If only one revision is specified then
378 that revision is compared to the working directory, and, when no
379 revisions are specified, the working directory files are compared
380 to its parent.\
381 ''' % {'path': util.uirepr(docpath)}
382 return mydiff
383 command(cmd, extdiffopts[:], _('hg %s [OPTION]... [FILE]...') % cmd,
384 inferrepo=True)(save(cmdline))
391 # tell hggettext to extract docstrings from these functions:
392 i18nfunctions = [savedcmd]
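
Editor's note: turning the closure into a class is what lets ``hggettext`` find
and translate the docstring: the template lives on the class, and each instance
formats its own copy. The pattern in miniature (hypothetical example, not hg
API)::

    class savedgreeting(object):
        """greet via %(tool)s"""
        def __init__(self, tool):
            # per-instance docstring, formatted from the class template
            self.__doc__ = self.__doc__ % {'tool': tool}
        def __call__(self):
            return self.__doc__

    assert savedgreeting('vimdiff')() == 'greet via vimdiff'
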
@@ -26,11 +26,11 b' from mercurial import ('
26 26 release = lock.release
27 27 cmdtable = {}
28 28 command = cmdutil.command(cmdtable)
29 # Note for extension authors: ONLY specify testedwith = 'internal' for
29 # Note for extension authors: ONLY specify testedwith = 'ships-with-hg-core' for
30 30 # extensions which SHIP WITH MERCURIAL. Non-mainline extensions should
31 31 # be specifying the version(s) of Mercurial they are tested with, or
32 32 # leave the attribute unspecified.
33 testedwith = 'internal'
33 testedwith = 'ships-with-hg-core'
34 34
35 35 @command('fetch',
36 36 [('r', 'rev', [],
@@ -113,11 +113,11 b' from . import ('
113 113 watchmanclient,
114 114 )
115 115
116 # Note for extension authors: ONLY specify testedwith = 'internal' for
116 # Note for extension authors: ONLY specify testedwith = 'ships-with-hg-core' for
117 117 # extensions which SHIP WITH MERCURIAL. Non-mainline extensions should
118 118 # be specifying the version(s) of Mercurial they are tested with, or
119 119 # leave the attribute unspecified.
120 testedwith = 'internal'
120 testedwith = 'ships-with-hg-core'
121 121
122 122 # This extension is incompatible with the following blacklisted extensions
123 123 # and will disable itself when encountering one of these:
@@ -23,11 +23,11 b' from mercurial import ('
23 23
24 24 cmdtable = {}
25 25 command = cmdutil.command(cmdtable)
26 # Note for extension authors: ONLY specify testedwith = 'internal' for
26 # Note for extension authors: ONLY specify testedwith = 'ships-with-hg-core' for
27 27 # extensions which SHIP WITH MERCURIAL. Non-mainline extensions should
28 28 # be specifying the version(s) of Mercurial they are tested with, or
29 29 # leave the attribute unspecified.
30 testedwith = 'internal'
30 testedwith = 'ships-with-hg-core'
31 31
32 32 class gpg(object):
33 33 def __init__(self, path, key=None):
@@ -25,11 +25,11 b' from mercurial import ('
25 25
26 26 cmdtable = {}
27 27 command = cmdutil.command(cmdtable)
28 # Note for extension authors: ONLY specify testedwith = 'internal' for
28 # Note for extension authors: ONLY specify testedwith = 'ships-with-hg-core' for
29 29 # extensions which SHIP WITH MERCURIAL. Non-mainline extensions should
30 30 # be specifying the version(s) of Mercurial they are tested with, or
31 31 # leave the attribute unspecified.
32 testedwith = 'internal'
32 testedwith = 'ships-with-hg-core'
33 33
34 34 @command('glog',
35 35 [('f', 'follow', None,
@@ -54,11 +54,11 b' from mercurial import ('
54 54
55 55 cmdtable = {}
56 56 command = cmdutil.command(cmdtable)
57 # Note for extension authors: ONLY specify testedwith = 'internal' for
57 # Note for extension authors: ONLY specify testedwith = 'ships-with-hg-core' for
58 58 # extensions which SHIP WITH MERCURIAL. Non-mainline extensions should
59 59 # be specifying the version(s) of Mercurial they are tested with, or
60 60 # leave the attribute unspecified.
61 testedwith = 'internal'
61 testedwith = 'ships-with-hg-core'
62 62
63 63 @command('debug-diff-tree',
64 64 [('p', 'patch', None, _('generate patch')),
@@ -41,11 +41,11 b' from mercurial import ('
41 41 fileset,
42 42 )
43 43
44 # Note for extension authors: ONLY specify testedwith = 'internal' for
44 # Note for extension authors: ONLY specify testedwith = 'ships-with-hg-core' for
45 45 # extensions which SHIP WITH MERCURIAL. Non-mainline extensions should
46 46 # be specifying the version(s) of Mercurial they are tested with, or
47 47 # leave the attribute unspecified.
48 testedwith = 'internal'
48 testedwith = 'ships-with-hg-core'
49 49
50 50 def pygmentize(web, field, fctx, tmpl):
51 51 style = web.config('web', 'pygments_style', 'colorful')
@@ -201,23 +201,11 b' release = lock.release'
201 201 cmdtable = {}
202 202 command = cmdutil.command(cmdtable)
203 203
204 class _constraints(object):
205 # aborts if there are multiple rules for one node
206 noduplicates = 'noduplicates'
207 # abort if the node does belong to edited stack
208 forceother = 'forceother'
209 # abort if the node doesn't belong to edited stack
210 noother = 'noother'
211
212 @classmethod
213 def known(cls):
214 return set([v for k, v in cls.__dict__.items() if k[0] != '_'])
215
216 # Note for extension authors: ONLY specify testedwith = 'internal' for
204 # Note for extension authors: ONLY specify testedwith = 'ships-with-hg-core' for
217 205 # extensions which SHIP WITH MERCURIAL. Non-mainline extensions should
218 206 # be specifying the version(s) of Mercurial they are tested with, or
219 207 # leave the attribute unspecified.
220 testedwith = 'internal'
208 testedwith = 'ships-with-hg-core'
221 209
222 210 actiontable = {}
223 211 primaryactions = set()
@@ -403,7 +391,7 b' class histeditaction(object):'
403 391 raise error.ParseError("invalid changeset %s" % rulehash)
404 392 return cls(state, rev)
405 393
406 def verify(self, prev):
394 def verify(self, prev, expected, seen):
407 395 """ Verifies semantic correctness of the rule"""
408 396 repo = self.repo
409 397 ha = node.hex(self.node)
@@ -412,6 +400,19 b' class histeditaction(object):'
412 400 except error.RepoError:
413 401 raise error.ParseError(_('unknown changeset %s listed')
414 402 % ha[:12])
403 if self.node is not None:
404 self._verifynodeconstraints(prev, expected, seen)
405
406 def _verifynodeconstraints(self, prev, expected, seen):
407 # by default a command needs a node in the edited list
408 if self.node not in expected:
409 raise error.ParseError(_('%s "%s" changeset was not a candidate')
410 % (self.verb, node.short(self.node)),
411 hint=_('only use listed changesets'))
412 # and only one command per node
413 if self.node in seen:
414 raise error.ParseError(_('duplicated command for changeset %s') %
415 node.short(self.node))
415 416
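
Editor's note: with the ``_constraints`` table gone, each action now checks its
own node. The default rule above, restated with plain exceptions in place of
``error.ParseError``::

    def verifynode(node, expected, seen):
        # the node must come from the edited set...
        if node not in expected:
            raise ValueError('%s was not a candidate' % node)
        # ...and may be named by at most one rule
        if node in seen:
            raise ValueError('duplicated command for %s' % node)

The ``base`` action (further down) overrides this with the inverse of the first
test: it aborts when the node *is* in the edited set.
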
416 417 def torule(self):
417 418 """build a histedit rule line for an action
@@ -434,19 +435,6 b' class histeditaction(object):'
434 435 """
435 436 return "%s\n%s" % (self.verb, node.hex(self.node))
436 437
437 def constraints(self):
438 """Return a set of constrains that this action should be verified for
439 """
440 return set([_constraints.noduplicates, _constraints.noother])
441
442 def nodetoverify(self):
443 """Returns a node associated with the action that will be used for
444 verification purposes.
445
446 If the action doesn't correspond to a node it should return None
447 """
448 return self.node
449
450 438 def run(self):
451 439 """Runs the action. The default behavior is simply apply the action's
452 440 rulectx onto the current parentctx."""
@@ -573,18 +561,7 b' def collapse(repo, first, last, commitop'
573 561 copied = copies.pathcopies(base, last)
574 562
575 563 # prune files which were reverted by the updates
576 def samefile(f):
577 if f in last.manifest():
578 a = last.filectx(f)
579 if f in base.manifest():
580 b = base.filectx(f)
581 return (a.data() == b.data()
582 and a.flags() == b.flags())
583 else:
584 return False
585 else:
586 return f not in base.manifest()
587 files = [f for f in files if not samefile(f)]
564 files = [f for f in files if not cmdutil.samefile(f, last, base)]
588 565 # commit version of these files as defined by head
589 566 headmf = last.manifest()
590 567 def filectxfn(repo, ctx, path):
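
The pruning helper removed here keeps the same logic when hoisted into cmdutil.samefile. A standalone rendering of the predicate, modeling contexts as {path: (data, flags)} dicts rather than real changectx objects:

# A file is "unchanged" if its data and flags match between the two
# contexts, or if it is absent from both.
def samefile(f, ctx1, ctx2):
    if f in ctx1:
        return f in ctx2 and ctx1[f] == ctx2[f]
    return f not in ctx2

base = {'a': ('v1', ''), 'b': ('v1', '')}
last = {'a': ('v1', ''), 'b': ('v2', '')}
assert [f for f in ('a', 'b') if not samefile(f, last, base)] == ['b']
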
@@ -683,9 +660,9 b' class edit(histeditaction):'
683 660 @action(['fold', 'f'],
684 661 _('use commit, but combine it with the one above'))
685 662 class fold(histeditaction):
686 def verify(self, prev):
663 def verify(self, prev, expected, seen):
687 664 """ Verifies semantic correctness of the fold rule"""
688 super(fold, self).verify(prev)
665 super(fold, self).verify(prev, expected, seen)
689 666 repo = self.repo
690 667 if not prev:
691 668 c = repo[self.node].parents()[0]
@@ -795,8 +772,6 b' class fold(histeditaction):'
795 772 return repo[n], replacements
796 773
797 774 class base(histeditaction):
798 def constraints(self):
799 return set([_constraints.forceother])
800 775
801 776 def run(self):
802 777 if self.repo['.'].node() != self.node:
@@ -811,6 +786,14 b' class base(histeditaction):'
811 786 basectx = self.repo['.']
812 787 return basectx, []
813 788
789 def _verifynodeconstraints(self, prev, expected, seen):
790 # base can only be used with a node not in the edited set
791 if self.node in expected:
792 msg = _('%s "%s" changeset was an edited list candidate')
793 raise error.ParseError(
794 msg % (self.verb, node.short(self.node)),
795 hint=_('base must only use unlisted changesets'))
796
814 797 @action(['_multifold'],
815 798 _(
816 799 """fold subclass used for when multiple folds happen in a row
@@ -871,7 +854,7 b' def findoutgoing(ui, repo, remote=None, '
871 854 roots = list(repo.revs("roots(%ln)", outgoing.missing))
872 855 if 1 < len(roots):
873 856 msg = _('there are ambiguous outgoing revisions')
874 hint = _('see "hg help histedit" for more detail')
857 hint = _("see 'hg help histedit' for more detail")
875 858 raise error.Abort(msg, hint=hint)
876 859 return repo.lookup(roots[0])
877 860
@@ -1210,8 +1193,8 b' def _edithisteditplan(ui, repo, state, r'
1210 1193 else:
1211 1194 rules = _readfile(rules)
1212 1195 actions = parserules(rules, state)
1213 ctxs = [repo[act.nodetoverify()] \
1214 for act in state.actions if act.nodetoverify()]
1196 ctxs = [repo[act.node] \
1197 for act in state.actions if act.node]
1215 1198 warnverifyactions(ui, repo, actions, state, ctxs)
1216 1199 state.actions = actions
1217 1200 state.write()
@@ -1307,7 +1290,7 b' def between(repo, old, new, keep):'
1307 1290 root = ctxs[0] # list is already sorted by repo.set
1308 1291 if not root.mutable():
1309 1292 raise error.Abort(_('cannot edit public changeset: %s') % root,
1310 hint=_('see "hg help phases" for details'))
1293 hint=_("see 'hg help phases' for details"))
1311 1294 return [c.node() for c in ctxs]
1312 1295
1313 1296 def ruleeditor(repo, ui, actions, editcomment=""):
@@ -1396,36 +1379,14 b' def verifyactions(actions, state, ctxs):'
1396 1379 Will abort if there are too many or too few rules, a malformed rule,
1397 1380 or a rule on a changeset outside of the user-given range.
1398 1381 """
1399 expected = set(c.hex() for c in ctxs)
1382 expected = set(c.node() for c in ctxs)
1400 1383 seen = set()
1401 1384 prev = None
1402 1385 for action in actions:
1403 action.verify(prev)
1386 action.verify(prev, expected, seen)
1404 1387 prev = action
1405 constraints = action.constraints()
1406 for constraint in constraints:
1407 if constraint not in _constraints.known():
1408 raise error.ParseError(_('unknown constraint "%s"') %
1409 constraint)
1410
1411 nodetoverify = action.nodetoverify()
1412 if nodetoverify is not None:
1413 ha = node.hex(nodetoverify)
1414 if _constraints.noother in constraints and ha not in expected:
1415 raise error.ParseError(
1416 _('%s "%s" changeset was not a candidate')
1417 % (action.verb, ha[:12]),
1418 hint=_('only use listed changesets'))
1419 if _constraints.forceother in constraints and ha in expected:
1420 raise error.ParseError(
1421 _('%s "%s" changeset was not an edited list candidate')
1422 % (action.verb, ha[:12]),
1423 hint=_('only use listed changesets'))
1424 if _constraints.noduplicates in constraints and ha in seen:
1425 raise error.ParseError(_(
1426 'duplicated command for changeset %s') %
1427 ha[:12])
1428 seen.add(ha)
1388 if action.node is not None:
1389 seen.add(action.node)
1429 1390 missing = sorted(expected - seen) # sort to stabilize output
1430 1391
1431 1392 if state.repo.ui.configbool('histedit', 'dropmissing'):
@@ -1433,15 +1394,16 b' def verifyactions(actions, state, ctxs):'
1433 1394 raise error.ParseError(_('no rules provided'),
1434 1395 hint=_('use strip extension to remove commits'))
1435 1396
1436 drops = [drop(state, node.bin(n)) for n in missing]
1397 drops = [drop(state, n) for n in missing]
1437 1398 # put them at the beginning so they execute immediately and
1438 1399 # don't show up in the edit plan in the future
1439 1400 actions[:0] = drops
1440 1401 elif missing:
1441 1402 raise error.ParseError(_('missing rules for changeset %s') %
1442 missing[0][:12],
1403 node.short(missing[0]),
1443 1404 hint=_('use "drop %s" to discard, see also: '
1444 '"hg help -e histedit.config"') % missing[0][:12])
1405 "'hg help -e histedit.config'")
1406 % node.short(missing[0]))
1445 1407
1446 1408 def adjustreplacementsfrommarkers(repo, oldreplacements):
1447 1409 """Adjust replacements from obsolescense markers
@@ -1608,10 +1570,9 b' def stripwrapper(orig, ui, repo, nodelis'
1608 1570 if os.path.exists(os.path.join(repo.path, 'histedit-state')):
1609 1571 state = histeditstate(repo)
1610 1572 state.read()
1611 histedit_nodes = set([action.nodetoverify() for action
1612 in state.actions if action.nodetoverify()])
1613 strip_nodes = set([repo[n].node() for n in nodelist])
1614 common_nodes = histedit_nodes & strip_nodes
1573 histedit_nodes = set([action.node for action
1574 in state.actions if action.node])
1575 common_nodes = histedit_nodes & set(nodelist)
1615 1576 if common_nodes:
1616 1577 raise error.Abort(_("histedit in progress, can't strip %s")
1617 1578 % ', '.join(node.short(x) for x in common_nodes))
@@ -24,7 +24,6 b' from mercurial import ('
24 24 bookmarks,
25 25 cmdutil,
26 26 commands,
27 dirstate,
28 27 dispatch,
29 28 error,
30 29 extensions,
@@ -40,11 +39,11 b' from . import share'
40 39 cmdtable = {}
41 40 command = cmdutil.command(cmdtable)
42 41
43 # Note for extension authors: ONLY specify testedwith = 'internal' for
42 # Note for extension authors: ONLY specify testedwith = 'ships-with-hg-core' for
44 43 # extensions which SHIP WITH MERCURIAL. Non-mainline extensions should
45 44 # be specifying the version(s) of Mercurial they are tested with, or
46 45 # leave the attribute unspecified.
47 testedwith = 'internal'
46 testedwith = 'ships-with-hg-core'
48 47
49 48 # storage format version; increment when the format changes
50 49 storageversion = 0
@@ -63,8 +62,6 b' def extsetup(ui):'
63 62 extensions.wrapfunction(dispatch, 'runcommand', runcommand)
64 63 extensions.wrapfunction(bookmarks.bmstore, '_write', recordbookmarks)
65 64 extensions.wrapfunction(
66 dirstate.dirstate, '_writedirstate', recorddirstateparents)
67 extensions.wrapfunction(
68 65 localrepo.localrepository.dirstate, 'func', wrapdirstate)
69 66 extensions.wrapfunction(hg, 'postshare', wrappostshare)
70 67 extensions.wrapfunction(hg, 'copystore', unsharejournal)
@@ -84,34 +81,19 b' def wrapdirstate(orig, repo):'
84 81 dirstate = orig(repo)
85 82 if util.safehasattr(repo, 'journal'):
86 83 dirstate.journalstorage = repo.journal
84 dirstate.addparentchangecallback('journal', recorddirstateparents)
87 85 return dirstate
88 86
89 def recorddirstateparents(orig, dirstate, dirstatefp):
87 def recorddirstateparents(dirstate, old, new):
90 88 """Records all dirstate parent changes in the journal."""
89 old = list(old)
90 new = list(new)
91 91 if util.safehasattr(dirstate, 'journalstorage'):
92 old = [node.nullid, node.nullid]
93 nodesize = len(node.nullid)
94 try:
95 # The only source for the old state is in the dirstate file still
96 # on disk; the in-memory dirstate object only contains the new
97 # state. dirstate._opendirstatefile() switches between .hg/dirstate
98 # and .hg/dirstate.pending depending on the transaction state.
99 with dirstate._opendirstatefile() as fp:
100 state = fp.read(2 * nodesize)
101 if len(state) == 2 * nodesize:
102 old = [state[:nodesize], state[nodesize:]]
103 except IOError:
104 pass
105
106 new = dirstate.parents()
107 if old != new:
108 # only record two hashes if there was a merge
109 oldhashes = old[:1] if old[1] == node.nullid else old
110 newhashes = new[:1] if new[1] == node.nullid else new
111 dirstate.journalstorage.record(
112 wdirparenttype, '.', oldhashes, newhashes)
113
114 return orig(dirstate, dirstatefp)
92 # only record two hashes if there was a merge
93 oldhashes = old[:1] if old[1] == node.nullid else old
94 newhashes = new[:1] if new[1] == node.nullid else new
95 dirstate.journalstorage.record(
96 wdirparenttype, '.', oldhashes, newhashes)
115 97
116 98 # hooks to record bookmark changes (both local and remote)
117 99 def recordbookmarks(orig, store, fp):
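
This hunk swaps the wrapped _writedirstate function for dirstate's new addparentchangecallback hook, which hands the callback the old and new parent pairs directly. A toy model of that observer mechanism (minidirstate is illustrative, not the real class):

# Callbacks registered by name fire with the old and new parent pairs
# whenever the parents change.
nullid = b'\0' * 20

class minidirstate(object):
    def __init__(self):
        self._parents = (nullid, nullid)
        self._callbacks = {}

    def addparentchangecallback(self, category, callback):
        self._callbacks[category] = callback

    def setparents(self, p1, p2=nullid):
        old, self._parents = self._parents, (p1, p2)
        for callback in self._callbacks.values():
            callback(self, old, self._parents)

ds = minidirstate()
ds.addparentchangecallback(
    'journal', lambda ds, old, new: print('parents: %r -> %r' % (old, new)))
ds.setparents(b'\x01' * 20)
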
@@ -165,9 +147,10 b' def _mergeentriesiter(*iterables, **kwar'
165 147
166 148 def wrappostshare(orig, sourcerepo, destrepo, **kwargs):
167 149 """Mark this shared working copy as sharing journal information"""
168 orig(sourcerepo, destrepo, **kwargs)
169 with destrepo.vfs('shared', 'a') as fp:
170 fp.write('journal\n')
150 with destrepo.wlock():
151 orig(sourcerepo, destrepo, **kwargs)
152 with destrepo.vfs('shared', 'a') as fp:
153 fp.write('journal\n')
171 154
172 155 def unsharejournal(orig, ui, repo, repopath):
173 156 """Copy shared journal entries into this repo when unsharing"""
@@ -179,12 +162,12 b' def unsharejournal(orig, ui, repo, repop'
179 162 # there is a shared repository and there are shared journal entries
180 163 # to copy. move shared date over from source to destination but
181 164 # move the local file first
182 if repo.vfs.exists('journal'):
183 journalpath = repo.join('journal')
165 if repo.vfs.exists('namejournal'):
166 journalpath = repo.join('namejournal')
184 167 util.rename(journalpath, journalpath + '.bak')
185 168 storage = repo.journal
186 169 local = storage._open(
187 repo.vfs, filename='journal.bak', _newestfirst=False)
170 repo.vfs, filename='namejournal.bak', _newestfirst=False)
188 171 shared = (
189 172 e for e in storage._open(sharedrepo.vfs, _newestfirst=False)
190 173 if sharednamespaces.get(e.namespace) in sharedfeatures)
@@ -194,8 +177,8 b' def unsharejournal(orig, ui, repo, repop'
194 177 return orig(ui, repo, repopath)
195 178
196 179 class journalentry(collections.namedtuple(
197 'journalentry',
198 'timestamp user command namespace name oldhashes newhashes')):
180 u'journalentry',
181 u'timestamp user command namespace name oldhashes newhashes')):
199 182 """Individual journal entry
200 183
201 184 * timestamp: a mercurial (time, timezone) tuple
@@ -284,19 +267,31 b' class journalstorage(object):'
284 267 # with a non-local repo (cloning for example).
285 268 cls._currentcommand = fullargs
286 269
270 def _currentlock(self, lockref):
271 """Returns the lock if it's held, or None if it's not.
272
273 (This is copied from the localrepo class)
274 """
275 if lockref is None:
276 return None
277 l = lockref()
278 if l is None or not l.held:
279 return None
280 return l
281
287 282 def jlock(self, vfs):
288 283 """Create a lock for the journal file"""
289 if self._lockref and self._lockref():
284 if self._currentlock(self._lockref) is not None:
290 285 raise error.Abort(_('journal lock does not support nesting'))
291 286 desc = _('journal of %s') % vfs.base
292 287 try:
293 l = lock.lock(vfs, 'journal.lock', 0, desc=desc)
288 l = lock.lock(vfs, 'namejournal.lock', 0, desc=desc)
294 289 except error.LockHeld as inst:
295 290 self.ui.warn(
296 291 _("waiting for lock on %s held by %r\n") % (desc, inst.locker))
297 292 # default to 600 seconds timeout
298 293 l = lock.lock(
299 vfs, 'journal.lock',
294 vfs, 'namejournal.lock',
300 295 int(self.ui.config("ui", "timeout", "600")), desc=desc)
301 296 self.ui.warn(_("got lock after %s seconds\n") % l.delay)
302 297 self._lockref = weakref.ref(l)
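
The copied _currentlock helper fixes the nesting check: the old `self._lockref and self._lockref()` test counted a released-but-uncollected lock as still held. A self-contained demonstration:

import weakref

class fakelock(object):
    held = 1

def currentlock(lockref):
    if lockref is None:
        return None
    l = lockref()
    if l is None or not l.held:
        return None
    return l

l = fakelock()
ref = weakref.ref(l)
assert currentlock(ref) is l
l.held = 0                      # released but not yet garbage-collected
assert currentlock(ref) is None
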
@@ -336,7 +331,7 b' class journalstorage(object):'
336 331 with self.jlock(vfs):
337 332 version = None
338 333 # open file in append mode to ensure it is created if missing
339 with vfs('journal', mode='a+b', atomictemp=True) as f:
334 with vfs('namejournal', mode='a+b', atomictemp=True) as f:
340 335 f.seek(0, os.SEEK_SET)
341 336 # Read just enough bytes to get a version number (up to 2
342 337 # digits plus separator)
@@ -394,7 +389,7 b' class journalstorage(object):'
394 389 if sharednamespaces.get(e.namespace) in self.sharedfeatures)
395 390 return _mergeentriesiter(local, shared)
396 391
397 def _open(self, vfs, filename='journal', _newestfirst=True):
392 def _open(self, vfs, filename='namejournal', _newestfirst=True):
398 393 if not vfs.exists(filename):
399 394 return
400 395
@@ -475,8 +470,10 b' def journal(ui, repo, *args, **opts):'
475 470 for count, entry in enumerate(repo.journal.filtered(name=name)):
476 471 if count == limit:
477 472 break
478 newhashesstr = ','.join([node.short(hash) for hash in entry.newhashes])
479 oldhashesstr = ','.join([node.short(hash) for hash in entry.oldhashes])
473 newhashesstr = fm.formatlist(map(fm.hexfunc, entry.newhashes),
474 name='node', sep=',')
475 oldhashesstr = fm.formatlist(map(fm.hexfunc, entry.oldhashes),
476 name='node', sep=',')
480 477
481 478 fm.startitem()
482 479 fm.condwrite(ui.verbose, 'oldhashes', '%s -> ', oldhashesstr)
@@ -486,7 +483,7 b' def journal(ui, repo, *args, **opts):'
486 483 opts.get('all') or name.startswith('re:'),
487 484 'name', ' %-8s', entry.name)
488 485
489 timestring = util.datestr(entry.timestamp, '%Y-%m-%d %H:%M %1%2')
486 timestring = fm.formatdate(entry.timestamp, '%Y-%m-%d %H:%M %1%2')
490 487 fm.condwrite(ui.verbose, 'date', ' %s', timestring)
491 488 fm.write('command', ' %s\n', entry.command)
492 489
@@ -24,7 +24,7 b''
24 24 # Files to act upon/ignore are specified in the [keyword] section.
25 25 # Customized keyword template mappings in the [keywordmaps] section.
26 26 #
27 # Run "hg help keyword" and "hg kwdemo" to get info on configuration.
27 # Run 'hg help keyword' and 'hg kwdemo' to get info on configuration.
28 28
29 29 '''expand keywords in tracked files
30 30
@@ -112,11 +112,11 b' from mercurial import ('
112 112
113 113 cmdtable = {}
114 114 command = cmdutil.command(cmdtable)
115 # Note for extension authors: ONLY specify testedwith = 'internal' for
115 # Note for extension authors: ONLY specify testedwith = 'ships-with-hg-core' for
116 116 # extensions which SHIP WITH MERCURIAL. Non-mainline extensions should
117 117 # be specifying the version(s) of Mercurial they are tested with, or
118 118 # leave the attribute unspecified.
119 testedwith = 'internal'
119 testedwith = 'ships-with-hg-core'
120 120
121 121 # hg commands that do not act on keywords
122 122 nokwcommands = ('add addremove annotate bundle export grep incoming init log'
@@ -119,11 +119,11 b' from . import ('
119 119 uisetup as uisetupmod,
120 120 )
121 121
122 # Note for extension authors: ONLY specify testedwith = 'internal' for
122 # Note for extension authors: ONLY specify testedwith = 'ships-with-hg-core' for
123 123 # extensions which SHIP WITH MERCURIAL. Non-mainline extensions should
124 124 # be specifying the version(s) of Mercurial they are tested with, or
125 125 # leave the attribute unspecified.
126 testedwith = 'internal'
126 testedwith = 'ships-with-hg-core'
127 127
128 128 reposetup = reposetup.reposetup
129 129
@@ -91,15 +91,13 b' class basestore(object):'
91 91 storefilename = lfutil.storepath(self.repo, hash)
92 92
93 93 tmpname = storefilename + '.tmp'
94 tmpfile = util.atomictempfile(tmpname,
95 createmode=self.repo.store.createmode)
96
97 try:
98 gothash = self._getfile(tmpfile, filename, hash)
99 except StoreError as err:
100 self.ui.warn(err.longmessage())
101 gothash = ""
102 tmpfile.close()
94 with util.atomictempfile(tmpname,
95 createmode=self.repo.store.createmode) as tmpfile:
96 try:
97 gothash = self._getfile(tmpfile, filename, hash)
98 except StoreError as err:
99 self.ui.warn(err.longmessage())
100 gothash = ""
103 101
104 102 if gothash != hash:
105 103 if gothash != "":
@@ -515,9 +515,13 b' def updatelfiles(ui, repo, filelist=None'
515 515 rellfile = lfile
516 516 relstandin = lfutil.standin(lfile)
517 517 if wvfs.exists(relstandin):
518 mode = wvfs.stat(relstandin).st_mode
519 if mode != wvfs.stat(rellfile).st_mode:
520 wvfs.chmod(rellfile, mode)
518 standinexec = wvfs.stat(relstandin).st_mode & 0o100
519 st = wvfs.stat(rellfile).st_mode
520 if standinexec != st & 0o100:
521 st &= ~0o111
522 if standinexec:
523 st |= (st >> 2) & 0o111 & ~util.umask
524 wvfs.chmod(rellfile, st)
521 525 update1 = 1
522 526
523 527 updated += update1
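
Instead of copying the standin's whole mode, the fix propagates only the executable bit, deriving group/other exec bits from the file's read bits minus the umask. The same arithmetic, worked standalone with the umask passed explicitly:

def syncexec(standin_mode, file_mode, umask=0o022):
    standinexec = standin_mode & 0o100
    if standinexec != file_mode & 0o100:
        file_mode &= ~0o111                 # drop all exec bits
        if standinexec:
            # (file_mode >> 2) moves the r bits into the x positions
            file_mode |= (file_mode >> 2) & 0o111 & ~umask
    return file_mode

assert syncexec(0o755, 0o644) == 0o755
assert syncexec(0o644, 0o755) == 0o644
assert syncexec(0o755, 0o600, umask=0o077) == 0o700
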
@@ -54,10 +54,10 b' def link(src, dest):'
54 54 util.oslink(src, dest)
55 55 except OSError:
56 56 # if hardlinks fail, fallback on atomic copy
57 dst = util.atomictempfile(dest)
58 for chunk in util.filechunkiter(open(src, 'rb')):
59 dst.write(chunk)
60 dst.close()
57 with open(src, 'rb') as srcf:
58 with util.atomictempfile(dest) as dstf:
59 for chunk in util.filechunkiter(srcf):
60 dstf.write(chunk)
61 61 os.chmod(dest, os.stat(src).st_mode)
62 62
63 63 def usercachepath(ui, hash):
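
The hardlink fallback now closes both files deterministically through nested with-blocks. A generic standard-library equivalent (os.link and a plain temp file standing in for util.oslink and util.atomictempfile):

import os
import shutil
import tempfile

def link_or_copy(src, dest):
    try:
        os.link(src, dest)
    except OSError:
        # hardlink failed (e.g. cross-device): fall back to an atomic
        # copy through a temporary file plus rename
        fd, tmp = tempfile.mkstemp(dir=os.path.dirname(dest) or '.')
        try:
            with os.fdopen(fd, 'wb') as dstf, open(src, 'rb') as srcf:
                shutil.copyfileobj(srcf, dstf)
            os.rename(tmp, dest)
        except Exception:
            os.unlink(tmp)
            raise
    os.chmod(dest, os.stat(src).st_mode)
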
@@ -231,7 +231,8 b' def copyfromcache(repo, hash, filename):'
231 231 # don't use atomic writes in the working copy.
232 232 with open(path, 'rb') as srcfd:
233 233 with wvfs(filename, 'wb') as destfd:
234 gothash = copyandhash(srcfd, destfd)
234 gothash = copyandhash(
235 util.filechunkiter(srcfd), destfd)
235 236 if gothash != hash:
236 237 repo.ui.warn(_('%s: data corruption in %s with hash %s\n')
237 238 % (filename, path, gothash))
@@ -264,11 +265,11 b' def copytostoreabsolute(repo, file, hash'
264 265 link(usercachepath(repo.ui, hash), storepath(repo, hash))
265 266 else:
266 267 util.makedirs(os.path.dirname(storepath(repo, hash)))
267 dst = util.atomictempfile(storepath(repo, hash),
268 createmode=repo.store.createmode)
269 for chunk in util.filechunkiter(open(file, 'rb')):
270 dst.write(chunk)
271 dst.close()
268 with open(file, 'rb') as srcf:
269 with util.atomictempfile(storepath(repo, hash),
270 createmode=repo.store.createmode) as dstf:
271 for chunk in util.filechunkiter(srcf):
272 dstf.write(chunk)
272 273 linktousercache(repo, hash)
273 274
274 275 def linktousercache(repo, hash):
@@ -370,10 +371,9 b' def hashfile(file):'
370 371 if not os.path.exists(file):
371 372 return ''
372 373 hasher = hashlib.sha1('')
373 fd = open(file, 'rb')
374 for data in util.filechunkiter(fd, 128 * 1024):
375 hasher.update(data)
376 fd.close()
374 with open(file, 'rb') as fd:
375 for data in util.filechunkiter(fd):
376 hasher.update(data)
377 377 return hasher.hexdigest()
378 378
379 379 def getexecutable(filename):
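
The rewritten hashfile streams the file through SHA-1 under a with-block instead of manual open/close. A dependency-free rendering, using the two-argument iter() idiom in place of util.filechunkiter:

import hashlib

def hashfile(path, chunksize=128 * 1024):
    # iter(fn, sentinel) calls fn until it returns b''
    hasher = hashlib.sha1()
    with open(path, 'rb') as fd:
        for chunk in iter(lambda: fd.read(chunksize), b''):
            hasher.update(chunk)
    return hasher.hexdigest()
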
@@ -10,6 +10,7 b''
10 10 from __future__ import absolute_import
11 11
12 12 from mercurial.i18n import _
13 from mercurial import util
13 14
14 15 from . import (
15 16 basestore,
@@ -42,7 +43,8 b' class localstore(basestore.basestore):'
42 43 raise basestore.StoreError(filename, hash, self.url,
43 44 _("can't get file locally"))
44 45 with open(path, 'rb') as fd:
45 return lfutil.copyandhash(fd, tmpfile)
46 return lfutil.copyandhash(
47 util.filechunkiter(fd), tmpfile)
46 48
47 49 def _verifyfiles(self, contents, filestocheck):
48 50 failed = False
@@ -883,11 +883,8 b' def hgclone(orig, ui, opts, *args, **kwa'
883 883
884 884 # If largefiles is required for this repo, permanently enable it locally
885 885 if 'largefiles' in repo.requirements:
886 fp = repo.vfs('hgrc', 'a', text=True)
887 try:
886 with repo.vfs('hgrc', 'a', text=True) as fp:
888 887 fp.write('\n[extensions]\nlargefiles=\n')
889 finally:
890 fp.close()
891 888
892 889 # Caching is implicitly limited to 'rev' option, since the dest repo was
893 890 # truncated at that point. The user may expect a download count with
@@ -1339,30 +1336,28 b' def overridecat(orig, ui, repo, file1, *'
1339 1336 m.visitdir = lfvisitdirfn
1340 1337
1341 1338 for f in ctx.walk(m):
1342 fp = cmdutil.makefileobj(repo, opts.get('output'), ctx.node(),
1343 pathname=f)
1344 lf = lfutil.splitstandin(f)
1345 if lf is None or origmatchfn(f):
1346 # duplicating unreachable code from commands.cat
1347 data = ctx[f].data()
1348 if opts.get('decode'):
1349 data = repo.wwritedata(f, data)
1350 fp.write(data)
1351 else:
1352 hash = lfutil.readstandin(repo, lf, ctx.rev())
1353 if not lfutil.inusercache(repo.ui, hash):
1354 store = storefactory.openstore(repo)
1355 success, missing = store.get([(lf, hash)])
1356 if len(success) != 1:
1357 raise error.Abort(
1358 _('largefile %s is not in cache and could not be '
1359 'downloaded') % lf)
1360 path = lfutil.usercachepath(repo.ui, hash)
1361 fpin = open(path, "rb")
1362 for chunk in util.filechunkiter(fpin, 128 * 1024):
1363 fp.write(chunk)
1364 fpin.close()
1365 fp.close()
1339 with cmdutil.makefileobj(repo, opts.get('output'), ctx.node(),
1340 pathname=f) as fp:
1341 lf = lfutil.splitstandin(f)
1342 if lf is None or origmatchfn(f):
1343 # duplicating unreachable code from commands.cat
1344 data = ctx[f].data()
1345 if opts.get('decode'):
1346 data = repo.wwritedata(f, data)
1347 fp.write(data)
1348 else:
1349 hash = lfutil.readstandin(repo, lf, ctx.rev())
1350 if not lfutil.inusercache(repo.ui, hash):
1351 store = storefactory.openstore(repo)
1352 success, missing = store.get([(lf, hash)])
1353 if len(success) != 1:
1354 raise error.Abort(
1355 _('largefile %s is not in cache and could not be '
1356 'downloaded') % lf)
1357 path = lfutil.usercachepath(repo.ui, hash)
1358 with open(path, "rb") as fpin:
1359 for chunk in util.filechunkiter(fpin):
1360 fp.write(chunk)
1366 1361 err = 0
1367 1362 return err
1368 1363
@@ -134,7 +134,7 b' def wirereposetup(ui, repo):'
134 134 length))
135 135
136 136 # SSH streams will block if reading more than length
137 for chunk in util.filechunkiter(stream, 128 * 1024, length):
137 for chunk in util.filechunkiter(stream, limit=length):
138 138 yield chunk
139 139 # HTTP streams must hit the end to process the last empty
140 140 # chunk of Chunked-Encoding so the connection can be reused.
@@ -45,17 +45,13 b' class remotestore(basestore.basestore):'
45 45
46 46 def sendfile(self, filename, hash):
47 47 self.ui.debug('remotestore: sendfile(%s, %s)\n' % (filename, hash))
48 fd = None
49 48 try:
50 fd = lfutil.httpsendfile(self.ui, filename)
51 return self._put(hash, fd)
49 with lfutil.httpsendfile(self.ui, filename) as fd:
50 return self._put(hash, fd)
52 51 except IOError as e:
53 52 raise error.Abort(
54 53 _('remotestore: could not open file %s: %s')
55 54 % (filename, str(e)))
56 finally:
57 if fd:
58 fd.close()
59 55
60 56 def _getfile(self, tmpfile, filename, hash):
61 57 try:
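
The sendfile change is the same try/finally-to-with conversion applied in basestore above; in miniature:

# `with` closes the file even when the upload step raises, removing
# the fd = None / finally / close bookkeeping.
def sendfile(path, put):
    try:
        with open(path, 'rb') as fd:
            return put(fd)
    except IOError as e:
        raise RuntimeError('could not open file %s: %s' % (path, e))
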
@@ -122,7 +118,7 b' class remotestore(basestore.basestore):'
122 118 raise NotImplementedError('abstract method')
123 119
124 120 def _get(self, hash):
125 '''Get file with the given hash from the remote store.'''
121 '''Get a iterator for content with the given hash.'''
126 122 raise NotImplementedError('abstract method')
127 123
128 124 def _stat(self, hashes):
@@ -40,11 +40,11 b' import platform'
40 40 import subprocess
41 41 import sys
42 42
43 # Note for extension authors: ONLY specify testedwith = 'internal' for
43 # Note for extension authors: ONLY specify testedwith = 'ships-with-hg-core' for
44 44 # extensions which SHIP WITH MERCURIAL. Non-mainline extensions should
45 45 # be specifying the version(s) of Mercurial they are tested with, or
46 46 # leave the attribute unspecified.
47 testedwith = 'internal'
47 testedwith = 'ships-with-hg-core'
48 48
49 49 def uisetup(ui):
50 50 if platform.system() == 'Windows':
@@ -99,11 +99,11 b" seriesopts = [('s', 'summary', None, _('"
99 99
100 100 cmdtable = {}
101 101 command = cmdutil.command(cmdtable)
102 # Note for extension authors: ONLY specify testedwith = 'internal' for
102 # Note for extension authors: ONLY specify testedwith = 'ships-with-hg-core' for
103 103 # extensions which SHIP WITH MERCURIAL. Non-mainline extensions should
104 104 # be specifying the version(s) of Mercurial they are tested with, or
105 105 # leave the attribute unspecified.
106 testedwith = 'internal'
106 testedwith = 'ships-with-hg-core'
107 107
108 108 # force load strip extension formerly included in mq and import some utility
109 109 try:
@@ -1562,7 +1562,7 b' class queue(object):'
1562 1562 if not repo[self.applied[-1].node].mutable():
1563 1563 raise error.Abort(
1564 1564 _("popping would remove a public revision"),
1565 hint=_('see "hg help phases" for details'))
1565 hint=_("see 'hg help phases' for details"))
1566 1566
1567 1567 # we know there are no local changes, so we can make a simplified
1568 1568 # form of hg.update.
@@ -1631,7 +1631,7 b' class queue(object):'
1631 1631 raise error.Abort(_("cannot qrefresh a revision with children"))
1632 1632 if not repo[top].mutable():
1633 1633 raise error.Abort(_("cannot qrefresh public revision"),
1634 hint=_('see "hg help phases" for details'))
1634 hint=_("see 'hg help phases' for details"))
1635 1635
1636 1636 cparents = repo.changelog.parents(top)
1637 1637 patchparent = self.qparents(repo, top)
@@ -1840,7 +1840,7 b' class queue(object):'
1840 1840
1841 1841 self.applied.append(statusentry(n, patchfn))
1842 1842 finally:
1843 lockmod.release(lock, tr)
1843 lockmod.release(tr, lock)
1844 1844 except: # re-raises
1845 1845 ctx = repo[cparents[0]]
1846 1846 repo.dirstate.rebuild(ctx.node(), ctx.manifest())
@@ -2117,7 +2117,7 b' class queue(object):'
2117 2117 for r in rev:
2118 2118 if not repo[r].mutable():
2119 2119 raise error.Abort(_('revision %d is not mutable') % r,
2120 hint=_('see "hg help phases" '
2120 hint=_("see 'hg help phases' "
2121 2121 'for details'))
2122 2122 p1, p2 = repo.changelog.parentrevs(r)
2123 2123 n = repo.changelog.node(r)
@@ -3354,53 +3354,54 b' def qqueue(ui, repo, name=None, **opts):'
3354 3354 raise error.Abort(
3355 3355 _('invalid queue name, may not contain the characters ":\\/."'))
3356 3356
3357 existing = _getqueues()
3358
3359 if opts.get('create'):
3360 if name in existing:
3361 raise error.Abort(_('queue "%s" already exists') % name)
3362 if _noqueues():
3363 _addqueue(_defaultqueue)
3364 _addqueue(name)
3365 _setactive(name)
3366 elif opts.get('rename'):
3367 current = _getcurrent()
3368 if name == current:
3369 raise error.Abort(_('can\'t rename "%s" to its current name')
3370 % name)
3371 if name in existing:
3372 raise error.Abort(_('queue "%s" already exists') % name)
3373
3374 olddir = _queuedir(current)
3375 newdir = _queuedir(name)
3376
3377 if os.path.exists(newdir):
3378 raise error.Abort(_('non-queue directory "%s" already exists') %
3379 newdir)
3380
3381 fh = repo.vfs('patches.queues.new', 'w')
3382 for queue in existing:
3383 if queue == current:
3384 fh.write('%s\n' % (name,))
3385 if os.path.exists(olddir):
3386 util.rename(olddir, newdir)
3387 else:
3388 fh.write('%s\n' % (queue,))
3389 fh.close()
3390 util.rename(repo.join('patches.queues.new'), repo.join(_allqueues))
3391 _setactivenocheck(name)
3392 elif opts.get('delete'):
3393 _delete(name)
3394 elif opts.get('purge'):
3395 if name in existing:
3357 with repo.wlock():
3358 existing = _getqueues()
3359
3360 if opts.get('create'):
3361 if name in existing:
3362 raise error.Abort(_('queue "%s" already exists') % name)
3363 if _noqueues():
3364 _addqueue(_defaultqueue)
3365 _addqueue(name)
3366 _setactive(name)
3367 elif opts.get('rename'):
3368 current = _getcurrent()
3369 if name == current:
3370 raise error.Abort(_('can\'t rename "%s" to its current name')
3371 % name)
3372 if name in existing:
3373 raise error.Abort(_('queue "%s" already exists') % name)
3374
3375 olddir = _queuedir(current)
3376 newdir = _queuedir(name)
3377
3378 if os.path.exists(newdir):
3379 raise error.Abort(_('non-queue directory "%s" already exists') %
3380 newdir)
3381
3382 fh = repo.vfs('patches.queues.new', 'w')
3383 for queue in existing:
3384 if queue == current:
3385 fh.write('%s\n' % (name,))
3386 if os.path.exists(olddir):
3387 util.rename(olddir, newdir)
3388 else:
3389 fh.write('%s\n' % (queue,))
3390 fh.close()
3391 util.rename(repo.join('patches.queues.new'), repo.join(_allqueues))
3392 _setactivenocheck(name)
3393 elif opts.get('delete'):
3396 3394 _delete(name)
3397 qdir = _queuedir(name)
3398 if os.path.exists(qdir):
3399 shutil.rmtree(qdir)
3400 else:
3401 if name not in existing:
3402 raise error.Abort(_('use --create to create a new queue'))
3403 _setactive(name)
3395 elif opts.get('purge'):
3396 if name in existing:
3397 _delete(name)
3398 qdir = _queuedir(name)
3399 if os.path.exists(qdir):
3400 shutil.rmtree(qdir)
3401 else:
3402 if name not in existing:
3403 raise error.Abort(_('use --create to create a new queue'))
3404 _setactive(name)
3404 3405
3405 3406 def mqphasedefaults(repo, roots):
3406 3407 """callback used to set mq changeset as secret when no phase data exists"""
@@ -148,11 +148,11 b' from mercurial import ('
148 148 util,
149 149 )
150 150
151 # Note for extension authors: ONLY specify testedwith = 'internal' for
151 # Note for extension authors: ONLY specify testedwith = 'ships-with-hg-core' for
152 152 # extensions which SHIP WITH MERCURIAL. Non-mainline extensions should
153 153 # be specifying the version(s) of Mercurial they are tested with, or
154 154 # leave the attribute unspecified.
155 testedwith = 'internal'
155 testedwith = 'ships-with-hg-core'
156 156
157 157 # template for single changeset can include email headers.
158 158 single_template = '''
@@ -10,7 +10,7 b''
10 10 # [extension]
11 11 # pager =
12 12 #
13 # Run "hg help pager" to get info on configuration.
13 # Run 'hg help pager' to get info on configuration.
14 14
15 15 '''browse command output with an external pager
16 16
@@ -75,11 +75,11 b' from mercurial import ('
75 75 util,
76 76 )
77 77
78 # Note for extension authors: ONLY specify testedwith = 'internal' for
78 # Note for extension authors: ONLY specify testedwith = 'ships-with-hg-core' for
79 79 # extensions which SHIP WITH MERCURIAL. Non-mainline extensions should
80 80 # be specifying the version(s) of Mercurial they are tested with, or
81 81 # leave the attribute unspecified.
82 testedwith = 'internal'
82 testedwith = 'ships-with-hg-core'
83 83
84 84 def _runpager(ui, p):
85 85 pager = subprocess.Popen(p, shell=True, bufsize=-1,
@@ -87,11 +87,11 b' stringio = util.stringio'
87 87
88 88 cmdtable = {}
89 89 command = cmdutil.command(cmdtable)
90 # Note for extension authors: ONLY specify testedwith = 'internal' for
90 # Note for extension authors: ONLY specify testedwith = 'ships-with-hg-core' for
91 91 # extensions which SHIP WITH MERCURIAL. Non-mainline extensions should
92 92 # be specifying the version(s) of Mercurial they are tested with, or
93 93 # leave the attribute unspecified.
94 testedwith = 'internal'
94 testedwith = 'ships-with-hg-core'
95 95
96 96 def _addpullheader(seq, ctx):
97 97 """Add a header pointing to a public URL where the changeset is available
@@ -38,11 +38,11 b' from mercurial import ('
38 38
39 39 cmdtable = {}
40 40 command = cmdutil.command(cmdtable)
41 # Note for extension authors: ONLY specify testedwith = 'internal' for
41 # Note for extension authors: ONLY specify testedwith = 'ships-with-hg-core' for
42 42 # extensions which SHIP WITH MERCURIAL. Non-mainline extensions should
43 43 # be specifying the version(s) of Mercurial they are tested with, or
44 44 # leave the attribute unspecified.
45 testedwith = 'internal'
45 testedwith = 'ships-with-hg-core'
46 46
47 47 @command('purge|clean',
48 48 [('a', 'abort-on-err', None, _('abort if an error occurs')),
@@ -66,11 +66,11 b' revskipped = (revignored, revprecursor, '
66 66
67 67 cmdtable = {}
68 68 command = cmdutil.command(cmdtable)
69 # Note for extension authors: ONLY specify testedwith = 'internal' for
69 # Note for extension authors: ONLY specify testedwith = 'ships-with-hg-core' for
70 70 # extensions which SHIP WITH MERCURIAL. Non-mainline extensions should
71 71 # be specifying the version(s) of Mercurial they are tested with, or
72 72 # leave the attribute unspecified.
73 testedwith = 'internal'
73 testedwith = 'ships-with-hg-core'
74 74
75 75 def _nothingtorebase():
76 76 return 1
@@ -296,7 +296,7 b' class rebaseruntime(object):'
296 296 if not self.keepf and not self.repo[root].mutable():
297 297 raise error.Abort(_("can't rebase public changeset %s")
298 298 % self.repo[root],
299 hint=_('see "hg help phases" for details'))
299 hint=_("see 'hg help phases' for details"))
300 300
301 301 (self.originalwd, self.target, self.state) = result
302 302 if self.collapsef:
@@ -335,8 +335,9 b' class rebaseruntime(object):'
335 335 if self.activebookmark:
336 336 bookmarks.deactivate(repo)
337 337
338 sortedrevs = sorted(self.state)
339 total = len(self.state)
338 sortedrevs = repo.revs('sort(%ld, -topo)', self.state)
339 cands = [k for k, v in self.state.iteritems() if v == revtodo]
340 total = len(cands)
340 341 pos = 0
341 342 for rev in sortedrevs:
342 343 ctx = repo[rev]
@@ -345,8 +346,8 b' class rebaseruntime(object):'
345 346 names = repo.nodetags(ctx.node()) + repo.nodebookmarks(ctx.node())
346 347 if names:
347 348 desc += ' (%s)' % ' '.join(names)
348 pos += 1
349 349 if self.state[rev] == revtodo:
350 pos += 1
350 351 ui.status(_('rebasing %s\n') % desc)
351 352 ui.progress(_("rebasing"), pos, ("%d:%s" % (rev, ctx)),
352 353 _('changesets'), total)
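
The progress fix in miniature: total now counts only revisions still marked revtodo, and pos advances only when one of them is processed, so already-handled revisions no longer inflate the counter:

state = {1: 'done', 2: 'todo', 3: 'todo'}      # rev -> status
total = sum(1 for v in state.values() if v == 'todo')
pos = 0
for rev in sorted(state):
    if state[rev] == 'todo':
        pos += 1
        print('rebasing %d (%d of %d)' % (rev, pos, total))
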
@@ -1127,7 +1128,7 b' def abort(repo, originalwd, target, stat'
1127 1128 if immutable:
1128 1129 repo.ui.warn(_("warning: can't clean up public changesets %s\n")
1129 1130 % ', '.join(str(repo[r]) for r in immutable),
1130 hint=_('see "hg help phases" for details'))
1131 hint=_("see 'hg help phases' for details"))
1131 1132 cleanup = False
1132 1133
1133 1134 descendants = set()
@@ -1197,7 +1198,7 b' def buildstate(repo, dest, rebaseset, co'
1197 1198 repo.ui.debug('source is a child of destination\n')
1198 1199 return None
1199 1200
1200 repo.ui.debug('rebase onto %d starting from %s\n' % (dest, root))
1201 repo.ui.debug('rebase onto %s starting from %s\n' % (dest, root))
1201 1202 state.update(dict.fromkeys(rebaseset, revtodo))
1202 1203 # Rebase tries to turn <dest> into a parent of <root> while
1203 1204 # preserving the number of parents of rebased changesets:
@@ -22,11 +22,11 b' from mercurial import ('
22 22
23 23 cmdtable = {}
24 24 command = cmdutil.command(cmdtable)
25 # Note for extension authors: ONLY specify testedwith = 'internal' for
25 # Note for extension authors: ONLY specify testedwith = 'ships-with-hg-core' for
26 26 # extensions which SHIP WITH MERCURIAL. Non-mainline extensions should
27 27 # be specifying the version(s) of Mercurial they are tested with, or
28 28 # leave the attribute unspecified.
29 testedwith = 'internal'
29 testedwith = 'ships-with-hg-core'
30 30
31 31
32 32 @command("record",
@@ -70,7 +70,7 b' def record(ui, repo, *pats, **opts):'
70 70 backup = ui.backupconfig('experimental', 'crecord')
71 71 try:
72 72 ui.setconfig('experimental', 'crecord', False, 'record')
73 commands.commit(ui, repo, *pats, **opts)
73 return commands.commit(ui, repo, *pats, **opts)
74 74 finally:
75 75 ui.restoreconfig(backup)
76 76
@@ -21,11 +21,11 b' from mercurial import ('
21 21
22 22 cmdtable = {}
23 23 command = cmdutil.command(cmdtable)
24 # Note for extension authors: ONLY specify testedwith = 'internal' for
24 # Note for extension authors: ONLY specify testedwith = 'ships-with-hg-core' for
25 25 # extensions which SHIP WITH MERCURIAL. Non-mainline extensions should
26 26 # be specifying the version(s) of Mercurial they are tested with, or
27 27 # leave the attribute unspecified.
28 testedwith = 'internal'
28 testedwith = 'ships-with-hg-core'
29 29
30 30 @command('relink', [], _('[ORIGIN]'))
31 31 def relink(ui, repo, origin=None, **opts):
@@ -56,11 +56,11 b' from mercurial import ('
56 56
57 57 cmdtable = {}
58 58 command = cmdutil.command(cmdtable)
59 # Note for extension authors: ONLY specify testedwith = 'internal' for
59 # Note for extension authors: ONLY specify testedwith = 'ships-with-hg-core' for
60 60 # extensions which SHIP WITH MERCURIAL. Non-mainline extensions should
61 61 # be specifying the version(s) of Mercurial they are tested with, or
62 62 # leave the attribute unspecified.
63 testedwith = 'internal'
63 testedwith = 'ships-with-hg-core'
64 64
65 65
66 66 class ShortRepository(object):
@@ -56,11 +56,11 b' parseurl = hg.parseurl'
56 56
57 57 cmdtable = {}
58 58 command = cmdutil.command(cmdtable)
59 # Note for extension authors: ONLY specify testedwith = 'internal' for
59 # Note for extension authors: ONLY specify testedwith = 'ships-with-hg-core' for
60 60 # extensions which SHIP WITH MERCURIAL. Non-mainline extensions should
61 61 # be specifying the version(s) of Mercurial they are tested with, or
62 62 # leave the attribute unspecified.
63 testedwith = 'internal'
63 testedwith = 'ships-with-hg-core'
64 64
65 65 @command('share',
66 66 [('U', 'noupdate', None, _('do not create a working directory')),
@@ -54,11 +54,11 b' from . import ('
54 54
55 55 cmdtable = {}
56 56 command = cmdutil.command(cmdtable)
57 # Note for extension authors: ONLY specify testedwith = 'internal' for
57 # Note for extension authors: ONLY specify testedwith = 'ships-with-hg-core' for
58 58 # extensions which SHIP WITH MERCURIAL. Non-mainline extensions should
59 59 # be specifying the version(s) of Mercurial they are tested with, or
60 60 # leave the attribute unspecified.
61 testedwith = 'internal'
61 testedwith = 'ships-with-hg-core'
62 62
63 63 backupdir = 'shelve-backup'
64 64 shelvedir = 'shelved'
@@ -23,11 +23,11 b' release = lockmod.release'
23 23
24 24 cmdtable = {}
25 25 command = cmdutil.command(cmdtable)
26 # Note for extension authors: ONLY specify testedwith = 'internal' for
26 # Note for extension authors: ONLY specify testedwith = 'ships-with-hg-core' for
27 27 # extensions which SHIP WITH MERCURIAL. Non-mainline extensions should
28 28 # be specifying the version(s) of Mercurial they are tested with, or
29 29 # leave the attribute unspecified.
30 testedwith = 'internal'
30 testedwith = 'ships-with-hg-core'
31 31
32 32 def checksubstate(repo, baserev=None):
33 33 '''return list of subrepos at a different revision than substate.
@@ -40,11 +40,11 b' class TransplantError(error.Abort):'
40 40
41 41 cmdtable = {}
42 42 command = cmdutil.command(cmdtable)
43 # Note for extension authors: ONLY specify testedwith = 'internal' for
43 # Note for extension authors: ONLY specify testedwith = 'ships-with-hg-core' for
44 44 # extensions which SHIP WITH MERCURIAL. Non-mainline extensions should
45 45 # be specifying the version(s) of Mercurial they are tested with, or
46 46 # leave the attribute unspecified.
47 testedwith = 'internal'
47 testedwith = 'ships-with-hg-core'
48 48
49 49 class transplantentry(object):
50 50 def __init__(self, lnode, rnode):
@@ -55,11 +55,11 b' from mercurial import ('
55 55 error,
56 56 )
57 57
58 # Note for extension authors: ONLY specify testedwith = 'internal' for
58 # Note for extension authors: ONLY specify testedwith = 'ships-with-hg-core' for
59 59 # extensions which SHIP WITH MERCURIAL. Non-mainline extensions should
60 60 # be specifying the version(s) of Mercurial they are tested with, or
61 61 # leave the attribute unspecified.
62 testedwith = 'internal'
62 testedwith = 'ships-with-hg-core'
63 63
64 64 _encoding = None # see extsetup
65 65
@@ -148,8 +148,8 b' def wrapname(name, wrapper):'
148 148 # NOTE: os.path.dirname() and os.path.basename() are safe because
149 149 # they use the result of os.path.split()
150 150 funcs = '''os.path.join os.path.split os.path.splitext
151 os.path.normpath os.makedirs
152 mercurial.util.endswithsep mercurial.util.splitpath mercurial.util.checkcase
151 os.path.normpath os.makedirs mercurial.util.endswithsep
152 mercurial.util.splitpath mercurial.util.fscasesensitive
153 153 mercurial.util.fspath mercurial.util.pconvert mercurial.util.normpath
154 154 mercurial.util.checkwinfilename mercurial.util.checkosfilename
155 155 mercurial.util.split'''
@@ -52,11 +52,11 b' from mercurial import ('
52 52 util,
53 53 )
54 54
55 # Note for extension authors: ONLY specify testedwith = 'internal' for
55 # Note for extension authors: ONLY specify testedwith = 'ships-with-hg-core' for
56 56 # extensions which SHIP WITH MERCURIAL. Non-mainline extensions should
57 57 # be specifying the version(s) of Mercurial they are tested with, or
58 58 # leave the attribute unspecified.
59 testedwith = 'internal'
59 testedwith = 'ships-with-hg-core'
60 60
61 61 # regexp for single LF without CR preceding.
62 62 re_single_lf = re.compile('(^|[^\r])\n', re.MULTILINE)
@@ -40,11 +40,11 b' from mercurial.hgweb import ('
40 40 server as servermod
41 41 )
42 42
43 # Note for extension authors: ONLY specify testedwith = 'internal' for
43 # Note for extension authors: ONLY specify testedwith = 'ships-with-hg-core' for
44 44 # extensions which SHIP WITH MERCURIAL. Non-mainline extensions should
45 45 # be specifying the version(s) of Mercurial they are tested with, or
46 46 # leave the attribute unspecified.
47 testedwith = 'internal'
47 testedwith = 'ships-with-hg-core'
48 48
49 49 # publish
50 50
@@ -114,7 +114,7 b' def docstrings(path):'
114 114 if func.__doc__:
115 115 src = inspect.getsource(func)
116 116 name = "%s.%s" % (path, func.__name__)
117 lineno = func.func_code.co_firstlineno
117 lineno = inspect.getsourcelines(func)[1]
118 118 doc = func.__doc__
119 119 if rstrip:
120 120 doc = doc.rstrip()
@@ -170,7 +170,7 b' if sys.version_info[0] >= 3:'
170 170 spec.loader = hgloader(spec.name, spec.origin)
171 171 return spec
172 172
173 def replacetokens(tokens):
173 def replacetokens(tokens, fullname):
174 174 """Transform a stream of tokens from raw to Python 3.
175 175
176 176 It is called by the custom module loading machinery to rewrite
@@ -184,6 +184,57 b' if sys.version_info[0] >= 3:'
184 184 REMEMBER TO CHANGE ``BYTECODEHEADER`` WHEN CHANGING THIS FUNCTION
185 185 OR CACHED FILES WON'T GET INVALIDATED PROPERLY.
186 186 """
187 futureimpline = False
188
189 # The following utility functions access the tokens list and the i index of
190 # the "for i, t in enumerate(tokens)" loop below
191 def _isop(j, *o):
192 """Assert that tokens[j] is an OP with one of the given values"""
193 try:
194 return tokens[j].type == token.OP and tokens[j].string in o
195 except IndexError:
196 return False
197
198 def _findargnofcall(n):
199 """Find arg n of a call expression (start at 0)
200
201 Returns the index of the first token of that argument, or None if
202 there are not that many arguments.
203
204 Assumes that token[i + 1] is '('.
205
206 """
207 nested = 0
208 for j in range(i + 2, len(tokens)):
209 if _isop(j, ')', ']', '}'):
210 # end of call, tuple, subscription or dict / set
211 nested -= 1
212 if nested < 0:
213 return None
214 elif n == 0:
215 # this is the starting position of arg
216 return j
217 elif _isop(j, '(', '[', '{'):
218 nested += 1
219 elif _isop(j, ',') and nested == 0:
220 n -= 1
221
222 return None
223
224 def _ensureunicode(j):
225 """Make sure the token at j is a unicode string
226
227 This rewrites a string token to include the unicode literal prefix
228 so the string transformer won't add the byte prefix.
229
230 Ignores tokens that are not strings. Assumes bounds checking has
231 already been done.
232
233 """
234 st = tokens[j]
235 if st.type == token.STRING and st.string.startswith(("'", '"')):
236 tokens[j] = st._replace(string='u%s' % st.string)
237
187 238 for i, t in enumerate(tokens):
188 239 # Convert most string literals to byte literals. String literals
189 240 # in Python 2 are bytes. String literals in Python 3 are unicode.
@@ -213,64 +264,61 b' if sys.version_info[0] >= 3:'
213 264 continue
214 265
215 266 # String literal. Prefix to make a b'' string.
216 yield tokenize.TokenInfo(t.type, 'b%s' % s, t.start, t.end,
217 t.line)
267 yield t._replace(string='b%s' % t.string)
218 268 continue
219 269
220 try:
221 nexttoken = tokens[i + 1]
222 except IndexError:
223 nexttoken = None
224
225 try:
226 prevtoken = tokens[i - 1]
227 except IndexError:
228 prevtoken = None
270 # Insert compatibility imports at "from __future__ import" line.
271 # No '\n' should be added to preserve line numbers.
272 if (t.type == token.NAME and t.string == 'import' and
273 all(u.type == token.NAME for u in tokens[i - 2:i]) and
274 [u.string for u in tokens[i - 2:i]] == ['from', '__future__']):
275 futureimpline = True
276 if t.type == token.NEWLINE and futureimpline:
277 futureimpline = False
278 if fullname == 'mercurial.pycompat':
279 yield t
280 continue
281 r, c = t.start
282 l = (b'; from mercurial.pycompat import '
283 b'delattr, getattr, hasattr, setattr, xrange\n')
284 for u in tokenize.tokenize(io.BytesIO(l).readline):
285 if u.type in (tokenize.ENCODING, token.ENDMARKER):
286 continue
287 yield u._replace(
288 start=(r, c + u.start[1]), end=(r, c + u.end[1]))
289 continue
229 290
230 291 # This looks like a function call.
231 if (t.type == token.NAME and nexttoken and
232 nexttoken.type == token.OP and nexttoken.string == '('):
292 if t.type == token.NAME and _isop(i + 1, '('):
233 293 fn = t.string
234 294
235 295 # *attr() builtins don't accept byte strings to 2nd argument.
236 # Rewrite the token to include the unicode literal prefix so
237 # the string transformer above doesn't add the byte prefix.
238 if fn in ('getattr', 'setattr', 'hasattr', 'safehasattr'):
239 try:
240 # (NAME, 'getattr')
241 # (OP, '(')
242 # (NAME, 'foo')
243 # (OP, ',')
244 # (NAME|STRING, foo)
245 st = tokens[i + 4]
246 if (st.type == token.STRING and
247 st.string[0] in ("'", '"')):
248 rt = tokenize.TokenInfo(st.type, 'u%s' % st.string,
249 st.start, st.end, st.line)
250 tokens[i + 4] = rt
251 except IndexError:
252 pass
296 if (fn in ('getattr', 'setattr', 'hasattr', 'safehasattr') and
297 not _isop(i - 1, '.')):
298 arg1idx = _findargnofcall(1)
299 if arg1idx is not None:
300 _ensureunicode(arg1idx)
253 301
254 302 # .encode() and .decode() on str/bytes/unicode don't accept
255 # byte strings on Python 3. Rewrite the token to include the
256 # unicode literal prefix so the string transformer above doesn't
257 # add the byte prefix.
258 if (fn in ('encode', 'decode') and
259 prevtoken.type == token.OP and prevtoken.string == '.'):
260 # (OP, '.')
261 # (NAME, 'encode')
262 # (OP, '(')
263 # (STRING, 'utf-8')
264 # (OP, ')')
265 try:
266 st = tokens[i + 2]
267 if (st.type == token.STRING and
268 st.string[0] in ("'", '"')):
269 rt = tokenize.TokenInfo(st.type, 'u%s' % st.string,
270 st.start, st.end, st.line)
271 tokens[i + 2] = rt
272 except IndexError:
273 pass
303 # byte strings on Python 3.
304 elif fn in ('encode', 'decode') and _isop(i - 1, '.'):
305 for argn in range(2):
306 argidx = _findargnofcall(argn)
307 if argidx is not None:
308 _ensureunicode(argidx)
309
310 # Bare open call (not an attribute on something else), the
311 # second argument (mode) must be a string, not bytes
312 elif fn == 'open' and not _isop(i - 1, '.'):
313 arg1idx = _findargnofcall(1)
314 if arg1idx is not None:
315 _ensureunicode(arg1idx)
316
317 # Rewrite iteritems to items, as iteritems is not
318 # present in the Python 3 world.
319 elif fn == 'iteritems':
320 yield t._replace(string='items')
321 continue
274 322
275 323 # Emit unmodified token.
276 324 yield t
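
A toy version of this transformer's _replace-based rewriting, runnable on Python 3; it performs only the iteritems branch (output spacing may shift because token positions are preserved):

import io
import token
import tokenize

def rewrite_iteritems(source):
    toks = list(tokenize.tokenize(io.BytesIO(source.encode()).readline))
    out = []
    for i, t in enumerate(toks):
        # NAME 'iteritems' directly followed by '(' becomes 'items'
        if (t.type == token.NAME and t.string == 'iteritems'
                and i + 1 < len(toks) and toks[i + 1].string == '('):
            t = t._replace(string='items')
        out.append(t)
    return tokenize.untokenize(out).decode()

print(rewrite_iteritems('counts.iteritems()'))
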
@@ -279,7 +327,7 b' if sys.version_info[0] >= 3:'
279 327 # ``replacetoken`` or any mechanism that changes semantics of module
280 328 # loading is changed. Otherwise cached bytecode may get loaded without
281 329 # the new transformation mechanisms applied.
282 BYTECODEHEADER = b'HG\x00\x01'
330 BYTECODEHEADER = b'HG\x00\x06'
283 331
284 332 class hgloader(importlib.machinery.SourceFileLoader):
285 333 """Custom module loader that transforms source code.
@@ -338,7 +386,7 b' if sys.version_info[0] >= 3:'
338 386 """Perform token transformation before compilation."""
339 387 buf = io.BytesIO(data)
340 388 tokens = tokenize.tokenize(buf.readline)
341 data = tokenize.untokenize(replacetokens(list(tokens)))
389 data = tokenize.untokenize(replacetokens(list(tokens), self.name))
342 390 # Python's built-in importer strips frames from exceptions raised
343 391 # for this code. Unfortunately, that mechanism isn't extensible
344 392 # and our frame will be blamed for the import failure. There
@@ -231,7 +231,7 b' class zipit(object):'
231 231 if islink:
232 232 mode = 0o777
233 233 ftype = _UNX_IFLNK
234 i.external_attr = (mode | ftype) << 16L
234 i.external_attr = (mode | ftype) << 16
235 235 # add "extended-timestamp" extra block, because zip archives
236 236 # without this will be extracted with unexpected timestamp,
237 237 # if TZ is not configured as GMT
@@ -17,6 +17,7 b''
17 17
18 18 #include "bdiff.h"
19 19 #include "bitmanipulation.h"
20 #include "util.h"
20 21
21 22
22 23 static PyObject *blocks(PyObject *self, PyObject *args)
@@ -470,8 +470,12 b' class revbranchcache(object):'
470 470 def write(self, tr=None):
471 471 """Save branch cache if it is dirty."""
472 472 repo = self._repo
473 if self._rbcnamescount < len(self._names):
474 try:
473 wlock = None
474 step = ''
475 try:
476 if self._rbcnamescount < len(self._names):
477 step = ' names'
478 wlock = repo.wlock(wait=False)
475 479 if self._rbcnamescount != 0:
476 480 f = repo.vfs.open(_rbcnames, 'ab')
477 481 if f.tell() == self._rbcsnameslen:
@@ -489,16 +493,15 b' class revbranchcache(object):'
489 493 for b in self._names[self._rbcnamescount:]))
490 494 self._rbcsnameslen = f.tell()
491 495 f.close()
492 except (IOError, OSError, error.Abort) as inst:
493 repo.ui.debug("couldn't write revision branch cache names: "
494 "%s\n" % inst)
495 return
496 self._rbcnamescount = len(self._names)
496 self._rbcnamescount = len(self._names)
497 497
498 start = self._rbcrevslen * _rbcrecsize
499 if start != len(self._rbcrevs):
500 revs = min(len(repo.changelog), len(self._rbcrevs) // _rbcrecsize)
501 try:
498 start = self._rbcrevslen * _rbcrecsize
499 if start != len(self._rbcrevs):
500 step = ''
501 if wlock is None:
502 wlock = repo.wlock(wait=False)
503 revs = min(len(repo.changelog),
504 len(self._rbcrevs) // _rbcrecsize)
502 505 f = repo.vfs.open(_rbcrevs, 'ab')
503 506 if f.tell() != start:
504 507 repo.ui.debug("truncating %s to %s\n" % (_rbcrevs, start))
@@ -510,8 +513,10 b' class revbranchcache(object):'
510 513 end = revs * _rbcrecsize
511 514 f.write(self._rbcrevs[start:end])
512 515 f.close()
513 except (IOError, OSError, error.Abort) as inst:
514 repo.ui.debug("couldn't write revision branch cache: %s\n" %
515 inst)
516 return
517 self._rbcrevslen = revs
516 self._rbcrevslen = revs
517 except (IOError, OSError, error.Abort, error.LockError) as inst:
518 repo.ui.debug("couldn't write revision branch cache%s: %s\n"
519 % (step, inst))
520 finally:
521 if wlock is not None:
522 wlock.release()
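
The restructured write() takes the working-directory lock lazily, tags the failing step for the debug message, and always releases in finally. Its control-flow skeleton, with stubs simulating a failed write:

class stublock(object):
    def release(self):
        print('lock released')

def write(namesdirty, revsdirty, debug=print):
    wlock = None
    step = ''
    try:
        if namesdirty:
            step = ' names'
            wlock = stublock()             # repo.wlock(wait=False) in hg
            raise IOError('disk full')     # simulate the names write failing
        if revsdirty:
            step = ''
            if wlock is None:
                wlock = stublock()
    except (IOError, OSError) as inst:
        debug("couldn't write revision branch cache%s: %s" % (step, inst))
    finally:
        if wlock is not None:
            wlock.release()

write(namesdirty=True, revsdirty=True)
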
@@ -159,6 +159,7 b' from . import ('
159 159 error,
160 160 obsolete,
161 161 pushkey,
162 pycompat,
162 163 tags,
163 164 url,
164 165 util,
@@ -572,7 +573,9 b' class bundle20(object):'
572 573 yield param
573 574 # starting compression
574 575 for chunk in self._getcorechunk():
575 yield self._compressor.compress(chunk)
576 data = self._compressor.compress(chunk)
577 if data:
578 yield data
576 579 yield self._compressor.flush()
577 580
578 581 def _paramchunk(self):
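
The new `if data:` guard matters because streaming compressors return empty strings from compress() while buffering input, and empty chunks can terminate chunked consumers early. A self-contained check with zlib:

import zlib

def compresschunks(chunks):
    z = zlib.compressobj()
    for chunk in chunks:
        data = z.compress(chunk)
        if data:            # skip empty buffering results
            yield data
    yield z.flush()

payload = [b'a' * 4096, b'b' * 4096]
out = b''.join(compresschunks(payload))
assert zlib.decompress(out) == b''.join(payload)
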
@@ -996,7 +999,10 b' class bundlepart(object):'
996 999 outdebug(ui, 'closing payload chunk')
997 1000 # abort current part payload
998 1001 yield _pack(_fpayloadsize, 0)
999 raise exc_info[0], exc_info[1], exc_info[2]
1002 if pycompat.ispy3:
1003 raise exc_info[0](exc_info[1]).with_traceback(exc_info[2])
1004 else:
1005 exec("""raise exc_info[0], exc_info[1], exc_info[2]""")
1000 1006 # end of payload
1001 1007 outdebug(ui, 'closing payload chunk')
1002 1008 yield _pack(_fpayloadsize, 0)
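
The exec-wrapped raise is required because the three-argument raise statement is a syntax error on Python 3, so it must be hidden from the parser. A minimal portable re-raise in the same style:

import sys

def reraise(exc_info):
    if sys.version_info[0] >= 3:
        raise exc_info[0](exc_info[1]).with_traceback(exc_info[2])
    # hidden inside exec() so Python 3 never parses the py2 syntax
    exec("raise exc_info[0], exc_info[1], exc_info[2]")

try:
    try:
        raise ValueError('boom')
    except ValueError:
        reraise(sys.exc_info())
except ValueError as e:
    print('re-raised: %s' % e)
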
@@ -1320,7 +1326,9 b' def writebundle(ui, cg, filename, bundle'
1320 1326 def chunkiter():
1321 1327 yield header
1322 1328 for chunk in subchunkiter:
1323 yield z.compress(chunk)
1329 data = z.compress(chunk)
1330 if data:
1331 yield data
1324 1332 yield z.flush()
1325 1333 chunkiter = chunkiter()
1326 1334
@@ -56,10 +56,8 b' class bundlerevlog(revlog.revlog):'
56 56 self.repotiprev = n - 1
57 57 chain = None
58 58 self.bundlerevs = set() # used by 'bundle()' revset expression
59 while True:
60 chunkdata = bundle.deltachunk(chain)
61 if not chunkdata:
62 break
59 getchunk = lambda: bundle.deltachunk(chain)
60 for chunkdata in iter(getchunk, {}):
63 61 node = chunkdata['node']
64 62 p1 = chunkdata['p1']
65 63 p2 = chunkdata['p2']
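
The while/break loops collapse into two-argument iter(), which keeps calling the function until the result equals the sentinel, here the empty dict that deltachunk() returns at end of stream:

chunks = [{'node': 'n1'}, {'node': 'n2'}, {}]
getchunk = lambda: chunks.pop(0)
for chunkdata in iter(getchunk, {}):
    print(chunkdata['node'])        # n1, n2; stops at the empty dict
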
@@ -190,22 +188,36 b' class bundlechangelog(bundlerevlog, chan'
190 188 self.filteredrevs = oldfilter
191 189
192 190 class bundlemanifest(bundlerevlog, manifest.manifest):
193 def __init__(self, opener, bundle, linkmapper):
194 manifest.manifest.__init__(self, opener)
191 def __init__(self, opener, bundle, linkmapper, dirlogstarts=None, dir=''):
192 manifest.manifest.__init__(self, opener, dir=dir)
195 193 bundlerevlog.__init__(self, opener, self.indexfile, bundle,
196 194 linkmapper)
195 if dirlogstarts is None:
196 dirlogstarts = {}
197 if self.bundle.version == "03":
198 dirlogstarts = _getfilestarts(self.bundle)
199 self._dirlogstarts = dirlogstarts
200 self._linkmapper = linkmapper
197 201
198 202 def baserevision(self, nodeorrev):
199 203 node = nodeorrev
200 204 if isinstance(node, int):
201 205 node = self.node(node)
202 206
203 if node in self._mancache:
204 result = self._mancache[node][0].text()
207 if node in self.fulltextcache:
208 result = self.fulltextcache[node].tostring()
205 209 else:
206 210 result = manifest.manifest.revision(self, nodeorrev)
207 211 return result
208 212
213 def dirlog(self, d):
214 if d in self._dirlogstarts:
215 self.bundle.seek(self._dirlogstarts[d])
216 return bundlemanifest(
217 self.opener, self.bundle, self._linkmapper,
218 self._dirlogstarts, dir=d)
219 return super(bundlemanifest, self).dirlog(d)
220
209 221 class bundlefilelog(bundlerevlog, filelog.filelog):
210 222 def __init__(self, opener, path, bundle, linkmapper):
211 223 filelog.filelog.__init__(self, opener, path)
@@ -236,6 +248,15 b' class bundlephasecache(phases.phasecache'
236 248 self.invalidate()
237 249 self.dirty = True
238 250
251 def _getfilestarts(bundle):
252 bundlefilespos = {}
253 for chunkdata in iter(bundle.filelogheader, {}):
254 fname = chunkdata['filename']
255 bundlefilespos[fname] = bundle.tell()
256 for chunk in iter(lambda: bundle.deltachunk(None), {}):
257 pass
258 return bundlefilespos
259
239 260 class bundlerepository(localrepo.localrepository):
240 261 def __init__(self, ui, path, bundlename):
241 262 def _writetempbundle(read, suffix, header=''):
@@ -283,7 +304,8 b' class bundlerepository(localrepo.localre'
283 304 "multiple changegroups")
284 305 cgstream = part
285 306 version = part.params.get('version', '01')
286 if version not in changegroup.allsupportedversions(ui):
307 legalcgvers = changegroup.supportedincomingversions(self)
308 if version not in legalcgvers:
287 309 msg = _('Unsupported changegroup version: %s')
288 310 raise error.Abort(msg % version)
289 311 if self.bundle.compressed():
@@ -328,10 +350,6 b' class bundlerepository(localrepo.localre'
328 350 self.bundle.manifestheader()
329 351 linkmapper = self.unfiltered().changelog.rev
330 352 m = bundlemanifest(self.svfs, self.bundle, linkmapper)
331 # XXX: hack to work with changegroup3, but we still don't handle
332 # tree manifests correctly
333 if self.bundle.version == "03":
334 self.bundle.filelogheader()
335 353 self.filestart = self.bundle.tell()
336 354 return m
337 355
@@ -351,16 +369,7 b' class bundlerepository(localrepo.localre'
351 369 def file(self, f):
352 370 if not self.bundlefilespos:
353 371 self.bundle.seek(self.filestart)
354 while True:
355 chunkdata = self.bundle.filelogheader()
356 if not chunkdata:
357 break
358 fname = chunkdata['filename']
359 self.bundlefilespos[fname] = self.bundle.tell()
360 while True:
361 c = self.bundle.deltachunk(None)
362 if not c:
363 break
372 self.bundlefilespos = _getfilestarts(self.bundle)
364 373
365 374 if f in self.bundlefilespos:
366 375 self.bundle.seek(self.bundlefilespos[f])
@@ -480,7 +489,10 @@ def getremotechanges(ui, repo, other, on
480 489 if bundlename or not localrepo:
481 490 # create a bundle (uncompressed if other repo is not local)
482 491
483 canbundle2 = (ui.configbool('experimental', 'bundle2-exp', True)
492 # developer config: devel.legacy.exchange
493 legexc = ui.configlist('devel', 'legacy.exchange')
494 forcebundle1 = 'bundle2' not in legexc and 'bundle1' in legexc
495 canbundle2 = (not forcebundle1
484 496 and other.capable('getbundle')
485 497 and other.capable('bundle2'))
486 498 if canbundle2:
@@ -15,7 +15,6 @@ import weakref
15 15 from .i18n import _
16 16 from .node import (
17 17 hex,
18 nullid,
19 18 nullrev,
20 19 short,
21 20 )
@@ -94,7 +93,9 @@ def writechunks(ui, chunks, filename, vf
94 93 if vfs:
95 94 fh = vfs.open(filename, "wb")
96 95 else:
97 fh = open(filename, "wb")
96 # Increase default buffer size because default is usually
97 # small (4k is common on Linux).
98 fh = open(filename, "wb", 131072)
98 99 else:
99 100 fd, filename = tempfile.mkstemp(prefix="hg-bundle-", suffix=".hg")
100 101 fh = os.fdopen(fd, "wb")
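
The ``writechunks`` hunk above passes a third argument to ``open()``: the buffer size in bytes, so bulk sequential writes issue fewer syscalls than with the small platform default. A small sketch of the same call against an assumed temporary path:

    import os
    import tempfile

    fd, path = tempfile.mkstemp(prefix='hg-bundle-', suffix='.hg')
    os.close(fd)
    # 131072 bytes = 128 KiB, versus the usual 4-8 KiB default buffer.
    fh = open(path, 'wb', 131072)
    fh.write(b'\0' * (1 << 20))
    fh.close()
    os.unlink(path)
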
@@ -333,7 +334,7 @@ class cg1unpacker(object):
333 334 for cset in xrange(clstart, clend):
334 335 mfnode = repo.changelog.read(
335 336 repo.changelog.node(cset))[0]
336 mfest = repo.manifest.readdelta(mfnode)
337 mfest = repo.manifestlog[mfnode].readdelta()
337 338 # store file nodes we must see
338 339 for f, n in mfest.iteritems():
339 340 needfiles.setdefault(f, set()).add(n)
@@ -404,6 +405,7 @@ class cg1unpacker(object):
404 405 # coming call to `destroyed` will repair it.
405 406 # In other case we can safely update cache on
406 407 # disk.
408 repo.ui.debug('updating the branch cache\n')
407 409 branchmap.updatecache(repo.filtered('served'))
408 410
409 411 def runhooks():
@@ -413,8 +415,6 @@ class cg1unpacker(object):
413 415 if clstart >= len(repo):
414 416 return
415 417
416 # forcefully update the on-disk branch cache
417 repo.ui.debug("updating the branch cache\n")
418 418 repo.hook("changegroup", **hookargs)
419 419
420 420 for n in added:
@@ -475,10 +475,7 @@ class cg3unpacker(cg2unpacker):
475 475 def _unpackmanifests(self, repo, revmap, trp, prog, numchanges):
476 476 super(cg3unpacker, self)._unpackmanifests(repo, revmap, trp, prog,
477 477 numchanges)
478 while True:
479 chunkdata = self.filelogheader()
480 if not chunkdata:
481 break
478 for chunkdata in iter(self.filelogheader, {}):
482 479 # If we get here, there are directory manifests in the changegroup
483 480 d = chunkdata["filename"]
484 481 repo.ui.debug("adding %s revisions\n" % d)
@@ -823,11 +820,24 @@ class cg2packer(cg1packer):
823 820
824 821 def deltaparent(self, revlog, rev, p1, p2, prev):
825 822 dp = revlog.deltaparent(rev)
826 # avoid storing full revisions; pick prev in those cases
827 # also pick prev when we can't be sure remote has dp
828 if dp == nullrev or (dp != p1 and dp != p2 and dp != prev):
823 if dp == nullrev and revlog.storedeltachains:
824 # Avoid sending full revisions when delta parent is null. Pick prev
825 # in that case. It's tempting to pick p1 in this case, as p1 will
826 # be smaller in the common case. However, computing a delta against
827 # p1 may require resolving the raw text of p1, which could be
828 # expensive. The revlog caches should have prev cached, meaning
829 # less CPU for changegroup generation. There is likely room to add
830 # a flag and/or config option to control this behavior.
829 831 return prev
830 return dp
832 elif dp == nullrev:
833 # revlog is configured to use full snapshot for a reason,
834 # stick to full snapshot.
835 return nullrev
836 elif dp not in (p1, p2, prev):
837 # Pick prev when we can't be sure remote has the base revision.
838 return prev
839 else:
840 return dp
831 841
832 842 def builddeltaheader(self, node, p1n, p2n, basenode, linknode, flags):
833 843 # Do nothing with flags, it is implicitly 0 in cg1 and cg2
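
The new ``deltaparent`` logic above reads as a three-way decision. A distilled, self-contained restatement (illustrative function and values, not the real revlog API):

    nullrev = -1

    def choosedeltaparent(dp, p1, p2, prev, storedeltachains):
        if dp == nullrev and storedeltachains:
            # No stored base, but deltas are allowed: delta against prev,
            # which the revlog caches are most likely to have resolved.
            return prev
        if dp == nullrev:
            # The revlog wants full snapshots; keep the full snapshot.
            return nullrev
        if dp not in (p1, p2, prev):
            # The remote may not have the stored base; fall back to prev.
            return prev
        return dp

    print(choosedeltaparent(nullrev, 10, nullrev, 12, True))   # 12 (prev)
    print(choosedeltaparent(nullrev, 10, nullrev, 12, False))  # -1 (full)
    print(choosedeltaparent(7, 10, nullrev, 12, True))         # 12 (prev)
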
@@ -946,17 +956,7 @@ def changegroupsubset(repo, roots, heads
946 956 Another wrinkle is doing the reverse, figuring out which changeset in
947 957 the changegroup a particular filenode or manifestnode belongs to.
948 958 """
949 cl = repo.changelog
950 if not roots:
951 roots = [nullid]
952 discbases = []
953 for n in roots:
954 discbases.extend([p for p in cl.parents(n) if p != nullid])
955 # TODO: remove call to nodesbetween.
956 csets, roots, heads = cl.nodesbetween(roots, heads)
957 included = set(csets)
958 discbases = [n for n in discbases if n not in included]
959 outgoing = discovery.outgoing(cl, discbases, heads)
959 outgoing = discovery.outgoing(repo, missingroots=roots, missingheads=heads)
960 960 bundler = getbundler(version, repo)
961 961 return getsubset(repo, outgoing, bundler, source)
962 962
@@ -982,26 +982,7 @@ def getlocalchangegroup(repo, source, ou
982 982 bundler = getbundler(version, repo, bundlecaps)
983 983 return getsubset(repo, outgoing, bundler, source)
984 984
985 def computeoutgoing(repo, heads, common):
986 """Computes which revs are outgoing given a set of common
987 and a set of heads.
988
989 This is a separate function so extensions can have access to
990 the logic.
991
992 Returns a discovery.outgoing object.
993 """
994 cl = repo.changelog
995 if common:
996 hasnode = cl.hasnode
997 common = [n for n in common if hasnode(n)]
998 else:
999 common = [nullid]
1000 if not heads:
1001 heads = cl.heads()
1002 return discovery.outgoing(cl, common, heads)
1003
1004 def getchangegroup(repo, source, heads=None, common=None, bundlecaps=None,
985 def getchangegroup(repo, source, outgoing, bundlecaps=None,
1005 986 version='01'):
1006 987 """Like changegroupsubset, but returns the set difference between the
1007 988 ancestors of heads and the ancestors common.
@@ -1011,7 +992,6 @@ def getchangegroup(repo, source, heads=N
1011 992 The nodes in common might not all be known locally due to the way the
1012 993 current discovery protocol works.
1013 994 """
1014 outgoing = computeoutgoing(repo, heads, common)
1015 995 return getlocalchangegroup(repo, source, outgoing, bundlecaps=bundlecaps,
1016 996 version=version)
1017 997
@@ -1022,10 +1002,7 @@ def changegroup(repo, basenodes, source)
1022 1002 def _addchangegroupfiles(repo, source, revmap, trp, expectedfiles, needfiles):
1023 1003 revisions = 0
1024 1004 files = 0
1025 while True:
1026 chunkdata = source.filelogheader()
1027 if not chunkdata:
1028 break
1005 for chunkdata in iter(source.filelogheader, {}):
1029 1006 files += 1
1030 1007 f = chunkdata["filename"]
1031 1008 repo.ui.debug("adding %s revisions\n" % f)
@@ -124,7 +124,7 @@ class appender(object):
124 124
125 125 def _divertopener(opener, target):
126 126 """build an opener that writes in 'target.a' instead of 'target'"""
127 def _divert(name, mode='r'):
127 def _divert(name, mode='r', checkambig=False):
128 128 if name != target:
129 129 return opener(name, mode)
130 130 return opener(name + ".a", mode)
@@ -132,15 +132,16 @@ def _divertopener(opener, target):
132 132
133 133 def _delayopener(opener, target, buf):
134 134 """build an opener that stores chunks in 'buf' instead of 'target'"""
135 def _delay(name, mode='r'):
135 def _delay(name, mode='r', checkambig=False):
136 136 if name != target:
137 137 return opener(name, mode)
138 138 return appender(opener, name, mode, buf)
139 139 return _delay
140 140
141 _changelogrevision = collections.namedtuple('changelogrevision',
142 ('manifest', 'user', 'date',
143 'files', 'description', 'extra'))
141 _changelogrevision = collections.namedtuple(u'changelogrevision',
142 (u'manifest', u'user', u'date',
143 u'files', u'description',
144 u'extra'))
144 145
145 146 class changelogrevision(object):
146 147 """Holds results of a parsed changelog revision.
@@ -151,8 +152,8 @@ class changelogrevision(object):
151 152 """
152 153
153 154 __slots__ = (
154 '_offsets',
155 '_text',
155 u'_offsets',
156 u'_text',
156 157 )
157 158
158 159 def __new__(cls, text):
@@ -256,11 +257,18 @@ class changelogrevision(object):
256 257
257 258 class changelog(revlog.revlog):
258 259 def __init__(self, opener):
259 revlog.revlog.__init__(self, opener, "00changelog.i")
260 revlog.revlog.__init__(self, opener, "00changelog.i",
261 checkambig=True)
260 262 if self._initempty:
261 263 # changelogs don't benefit from generaldelta
262 264 self.version &= ~revlog.REVLOGGENERALDELTA
263 265 self._generaldelta = False
266
267 # Delta chains for changelogs tend to be very small because entries
268 # tend to be small and don't delta well with each. So disable delta
269 # chains.
270 self.storedeltachains = False
271
264 272 self._realopener = opener
265 273 self._delayed = False
266 274 self._delaybuf = None
@@ -381,9 +389,9 @@ class changelog(revlog.revlog):
381 389 tmpname = self.indexfile + ".a"
382 390 nfile = self.opener.open(tmpname)
383 391 nfile.close()
384 self.opener.rename(tmpname, self.indexfile)
392 self.opener.rename(tmpname, self.indexfile, checkambig=True)
385 393 elif self._delaybuf:
386 fp = self.opener(self.indexfile, 'a')
394 fp = self.opener(self.indexfile, 'a', checkambig=True)
387 395 fp.write("".join(self._delaybuf))
388 396 fp.close()
389 397 self._delaybuf = None
@@ -499,6 +499,12 @@ class _unclosablefile(object):
499 499 def __getattr__(self, attr):
500 500 return getattr(self._fp, attr)
501 501
502 def __enter__(self):
503 return self
504
505 def __exit__(self, exc_type, exc_value, exc_tb):
506 pass
507
502 508 def makefileobj(repo, pat, node=None, desc=None, total=None,
503 509 seqno=None, revwidth=None, mode='wb', modemap=None,
504 510 pathname=None):
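
``_unclosablefile`` gains ``__enter__``/``__exit__`` above so it can be used in a ``with`` block without closing the wrapped file. A minimal standalone model of that pattern:

    import sys

    class unclosable(object):
        def __init__(self, fp):
            self._fp = fp
        def __getattr__(self, attr):
            return getattr(self._fp, attr)
        def __enter__(self):
            return self
        def __exit__(self, exc_type, exc_value, exc_tb):
            pass  # deliberately no close(): the handle outlives the block

    with unclosable(sys.stdout) as out:
        out.write('inside the with block\n')
    sys.stdout.write('still open afterwards\n')
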
@@ -549,7 +555,7 @@ def openrevlog(repo, cmd, file_, opts):
549 555 if 'treemanifest' not in repo.requirements:
550 556 raise error.Abort(_("--dir can only be used on repos with "
551 557 "treemanifest enabled"))
552 dirlog = repo.dirlog(dir)
558 dirlog = repo.manifest.dirlog(dir)
553 559 if len(dirlog):
554 560 r = dirlog
555 561 elif mf:
@@ -640,8 +646,26 @@ def copy(ui, repo, pats, opts, rename=Fa
640 646
641 647 if not after and exists or after and state in 'mn':
642 648 if not opts['force']:
643 ui.warn(_('%s: not overwriting - file exists\n') %
644 reltarget)
649 if state in 'mn':
650 msg = _('%s: not overwriting - file already committed\n')
651 if after:
652 flags = '--after --force'
653 else:
654 flags = '--force'
655 if rename:
656 hint = _('(hg rename %s to replace the file by '
657 'recording a rename)\n') % flags
658 else:
659 hint = _('(hg copy %s to replace the file by '
660 'recording a copy)\n') % flags
661 else:
662 msg = _('%s: not overwriting - file exists\n')
663 if rename:
664 hint = _('(hg rename --after to record the rename)\n')
665 else:
666 hint = _('(hg copy --after to record the copy)\n')
667 ui.warn(msg % reltarget)
668 ui.warn(hint)
645 669 return
646 670
647 671 if after:
@@ -1611,25 +1635,26 @@ def show_changeset(ui, repo, opts, buffe
1611 1635
1612 1636 return changeset_templater(ui, repo, matchfn, opts, tmpl, mapfile, buffered)
1613 1637
1614 def showmarker(ui, marker, index=None):
1638 def showmarker(fm, marker, index=None):
1615 1639 """utility function to display obsolescence marker in a readable way
1616 1640
1617 1641 To be used by debug function."""
1618 1642 if index is not None:
1619 ui.write("%i " % index)
1620 ui.write(hex(marker.precnode()))
1621 for repl in marker.succnodes():
1622 ui.write(' ')
1623 ui.write(hex(repl))
1624 ui.write(' %X ' % marker.flags())
1643 fm.write('index', '%i ', index)
1644 fm.write('precnode', '%s ', hex(marker.precnode()))
1645 succs = marker.succnodes()
1646 fm.condwrite(succs, 'succnodes', '%s ',
1647 fm.formatlist(map(hex, succs), name='node'))
1648 fm.write('flag', '%X ', marker.flags())
1625 1649 parents = marker.parentnodes()
1626 1650 if parents is not None:
1627 ui.write('{%s} ' % ', '.join(hex(p) for p in parents))
1628 ui.write('(%s) ' % util.datestr(marker.date()))
1629 ui.write('{%s}' % (', '.join('%r: %r' % t for t in
1630 sorted(marker.metadata().items())
1631 if t[0] != 'date')))
1632 ui.write('\n')
1651 fm.write('parentnodes', '{%s} ',
1652 fm.formatlist(map(hex, parents), name='node', sep=', '))
1653 fm.write('date', '(%s) ', fm.formatdate(marker.date()))
1654 meta = marker.metadata().copy()
1655 meta.pop('date', None)
1656 fm.write('metadata', '{%s}', fm.formatdict(meta, fmt='%r: %r', sep=', '))
1657 fm.plain('\n')
1633 1658
1634 1659 def finddate(ui, repo, date):
1635 1660 """Find the tipmost changeset that matches the given date spec"""
@@ -1940,7 +1965,7 @@ def _makefollowlogfilematcher(repo, file
1940 1965 # --follow, we want the names of the ancestors of FILE in the
1941 1966 # revision, stored in "fcache". "fcache" is populated by
1942 1967 # reproducing the graph traversal already done by --follow revset
1943 # and relating linkrevs to file names (which is not "correct" but
1968 # and relating revs to file names (which is not "correct" but
1944 1969 # good enough).
1945 1970 fcache = {}
1946 1971 fcacheready = [False]
@@ -1948,9 +1973,10 @@ def _makefollowlogfilematcher(repo, file
1948 1973
1949 1974 def populate():
1950 1975 for fn in files:
1951 for i in ((pctx[fn],), pctx[fn].ancestors(followfirst=followfirst)):
1952 for c in i:
1953 fcache.setdefault(c.linkrev(), set()).add(c.path())
1976 fctx = pctx[fn]
1977 fcache.setdefault(fctx.introrev(), set()).add(fctx.path())
1978 for c in fctx.ancestors(followfirst=followfirst):
1979 fcache.setdefault(c.rev(), set()).add(c.path())
1954 1980
1955 1981 def filematcher(rev):
1956 1982 if not fcacheready[0]:
@@ -2151,15 +2177,8 @@ def getgraphlogrevs(repo, pats, opts):
2151 2177 if not (revs.isdescending() or revs.istopo()):
2152 2178 revs.sort(reverse=True)
2153 2179 if expr:
2154 # Revset matchers often operate faster on revisions in changelog
2155 # order, because most filters deal with the changelog.
2156 revs.reverse()
2157 matcher = revset.match(repo.ui, expr)
2158 # Revset matches can reorder revisions. "A or B" typically returns
2159 # returns the revision matching A then the revision matching B. Sort
2160 # again to fix that.
2180 matcher = revset.match(repo.ui, expr, order=revset.followorder)
2161 2181 revs = matcher(repo, revs)
2162 revs.sort(reverse=True)
2163 2182 if limit is not None:
2164 2183 limitedrevs = []
2165 2184 for idx, rev in enumerate(revs):
@@ -2184,23 +2203,8 @@ def getlogrevs(repo, pats, opts):
2184 2203 return revset.baseset([]), None, None
2185 2204 expr, filematcher = _makelogrevset(repo, pats, opts, revs)
2186 2205 if expr:
2187 # Revset matchers often operate faster on revisions in changelog
2188 # order, because most filters deal with the changelog.
2189 if not opts.get('rev'):
2190 revs.reverse()
2191 matcher = revset.match(repo.ui, expr)
2192 # Revset matches can reorder revisions. "A or B" typically returns
2193 # returns the revision matching A then the revision matching B. Sort
2194 # again to fix that.
2195 fixopts = ['branch', 'only_branch', 'keyword', 'user']
2196 oldrevs = revs
2206 matcher = revset.match(repo.ui, expr, order=revset.followorder)
2197 2207 revs = matcher(repo, revs)
2198 if not opts.get('rev'):
2199 revs.sort(reverse=True)
2200 elif len(pats) > 1 or any(len(opts.get(op, [])) > 1 for op in fixopts):
2201 # XXX "A or B" is known to change the order; fix it by filtering
2202 # matched set again (issue5100)
2203 revs = oldrevs & revs
2204 2208 if limit is not None:
2205 2209 limitedrevs = []
2206 2210 for idx, r in enumerate(revs):
@@ -2415,14 +2419,10 @@ def files(ui, ctx, m, fm, fmt, subrepos)
2415 2419 ret = 0
2416 2420
2417 2421 for subpath in sorted(ctx.substate):
2418 def matchessubrepo(subpath):
2419 return (m.exact(subpath)
2420 or any(f.startswith(subpath + '/') for f in m.files()))
2421
2422 if subrepos or matchessubrepo(subpath):
2422 submatch = matchmod.subdirmatcher(subpath, m)
2423 if (subrepos or m.exact(subpath) or any(submatch.files())):
2423 2424 sub = ctx.sub(subpath)
2424 2425 try:
2425 submatch = matchmod.subdirmatcher(subpath, m)
2426 2426 recurse = m.exact(subpath) or subrepos
2427 2427 if sub.printfiles(ui, submatch, fm, fmt, recurse) == 0:
2428 2428 ret = 0
@@ -2450,21 +2450,12 @@ def remove(ui, repo, m, prefix, after, f
2450 2450 total = len(subs)
2451 2451 count = 0
2452 2452 for subpath in subs:
2453 def matchessubrepo(matcher, subpath):
2454 if matcher.exact(subpath):
2455 return True
2456 for f in matcher.files():
2457 if f.startswith(subpath):
2458 return True
2459 return False
2460
2461 2453 count += 1
2462 if subrepos or matchessubrepo(m, subpath):
2454 submatch = matchmod.subdirmatcher(subpath, m)
2455 if subrepos or m.exact(subpath) or any(submatch.files()):
2463 2456 ui.progress(_('searching'), count, total=total, unit=_('subrepos'))
2464
2465 2457 sub = wctx.sub(subpath)
2466 2458 try:
2467 submatch = matchmod.subdirmatcher(subpath, m)
2468 2459 if sub.removefiles(submatch, prefix, after, force, subrepos,
2469 2460 warnings):
2470 2461 ret = 1
@@ -2530,8 +2521,8 @@ def remove(ui, repo, m, prefix, after, f
2530 2521 for f in added:
2531 2522 count += 1
2532 2523 ui.progress(_('skipping'), count, total=total, unit=_('files'))
2533 warnings.append(_('not removing %s: file has been marked for add'
2534 ' (use forget to undo)\n') % m.rel(f))
2524 warnings.append(_("not removing %s: file has been marked for add"
2525 " (use 'hg forget' to undo add)\n") % m.rel(f))
2535 2526 ret = 1
2536 2527 ui.progress(_('skipping'), None)
2537 2528
@@ -2581,14 +2572,7 @@ def cat(ui, repo, ctx, matcher, prefix,
2581 2572 write(file)
2582 2573 return 0
2583 2574
2584 # Don't warn about "missing" files that are really in subrepos
2585 def badfn(path, msg):
2586 for subpath in ctx.substate:
2587 if path.startswith(subpath + '/'):
2588 return
2589 matcher.bad(path, msg)
2590
2591 for abs in ctx.walk(matchmod.badmatch(matcher, badfn)):
2575 for abs in ctx.walk(matcher):
2592 2576 write(abs)
2593 2577 err = 0
2594 2578
@@ -2623,6 +2607,18 @@ def commit(ui, repo, commitfunc, pats, o
2623 2607
2624 2608 return commitfunc(ui, repo, message, matcher, opts)
2625 2609
2610 def samefile(f, ctx1, ctx2):
2611 if f in ctx1.manifest():
2612 a = ctx1.filectx(f)
2613 if f in ctx2.manifest():
2614 b = ctx2.filectx(f)
2615 return (not a.cmp(b)
2616 and a.flags() == b.flags())
2617 else:
2618 return False
2619 else:
2620 return f not in ctx2.manifest()
2621
2626 2622 def amend(ui, repo, commitfunc, old, extra, pats, opts):
2627 2623 # avoid cycle context -> subrepo -> cmdutil
2628 2624 from . import context
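
The hoisted ``samefile`` helper above treats a file as unchanged when it is present in both manifests with identical content and flags, or absent from both. A toy restatement with plain dicts standing in for manifests and file contexts:

    def samefile(f, m1, m2):
        if f in m1:
            # same content and flags on both sides?
            return f in m2 and m1[f] == m2[f]
        # absent on this side: "same" only if absent on the other too
        return f not in m2

    print(samefile('a', {'a': ('data', 'x')}, {'a': ('data', 'x')}))  # True
    print(samefile('a', {'a': ('data', '')}, {'a': ('data', 'x')}))   # False
    print(samefile('a', {}, {}))                                      # True
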
@@ -2706,19 +2702,7 @@ def amend(ui, repo, commitfunc, old, ext
2706 2702 # we can discard X from our list of files. Likewise if X
2707 2703 # was deleted, it's no longer relevant
2708 2704 files.update(ctx.files())
2709
2710 def samefile(f):
2711 if f in ctx.manifest():
2712 a = ctx.filectx(f)
2713 if f in base.manifest():
2714 b = base.filectx(f)
2715 return (not a.cmp(b)
2716 and a.flags() == b.flags())
2717 else:
2718 return False
2719 else:
2720 return f not in base.manifest()
2721 files = [f for f in files if not samefile(f)]
2705 files = [f for f in files if not samefile(f, ctx, base)]
2722 2706
2723 2707 def filectxfn(repo, ctx_, path):
2724 2708 try:
@@ -3542,10 +3526,11 @@ class dirstateguard(object):
3542 3526
3543 3527 def __init__(self, repo, name):
3544 3528 self._repo = repo
3529 self._active = False
3530 self._closed = False
3545 3531 self._suffix = '.backup.%s.%d' % (name, id(self))
3546 3532 repo.dirstate.savebackup(repo.currenttransaction(), self._suffix)
3547 3533 self._active = True
3548 self._closed = False
3549 3534
3550 3535 def __del__(self):
3551 3536 if self._active: # still active
@@ -441,12 +441,12 @@ def annotate(ui, repo, *pats, **opts):
441 441 if linenumber and (not opts.get('changeset')) and (not opts.get('number')):
442 442 raise error.Abort(_('at least one of -n/-c is required for -l'))
443 443
444 if fm:
444 if fm.isplain():
445 def makefunc(get, fmt):
446 return lambda x: fmt(get(x))
447 else:
445 448 def makefunc(get, fmt):
446 449 return get
447 else:
448 def makefunc(get, fmt):
449 return lambda x: fmt(get(x))
450 450 funcmap = [(makefunc(get, fmt), sep) for op, sep, get, fmt in opmap
451 451 if opts.get(op)]
452 452 funcmap[0] = (funcmap[0][0], '') # no separator in front of first column
@@ -476,12 +476,12 @@ def annotate(ui, repo, *pats, **opts):
476 476
477 477 for f, sep in funcmap:
478 478 l = [f(n) for n, dummy in lines]
479 if fm:
480 formats.append(['%s' for x in l])
481 else:
479 if fm.isplain():
482 480 sizes = [encoding.colwidth(x) for x in l]
483 481 ml = max(sizes)
484 482 formats.append([sep + ' ' * (ml - w) + '%s' for w in sizes])
483 else:
484 formats.append(['%s' for x in l])
485 485 pieces.append(l)
486 486
487 487 for f, p, l in zip(zip(*formats), zip(*pieces), lines):
@@ -835,57 +835,6 @@ def bisect(ui, repo, rev=None, extra=Non
835 835
836 836 Returns 0 on success.
837 837 """
838 def extendbisectrange(nodes, good):
839 # bisect is incomplete when it ends on a merge node and
840 # one of the parent was not checked.
841 parents = repo[nodes[0]].parents()
842 if len(parents) > 1:
843 if good:
844 side = state['bad']
845 else:
846 side = state['good']
847 num = len(set(i.node() for i in parents) & set(side))
848 if num == 1:
849 return parents[0].ancestor(parents[1])
850 return None
851
852 def print_result(nodes, good):
853 displayer = cmdutil.show_changeset(ui, repo, {})
854 if len(nodes) == 1:
855 # narrowed it down to a single revision
856 if good:
857 ui.write(_("The first good revision is:\n"))
858 else:
859 ui.write(_("The first bad revision is:\n"))
860 displayer.show(repo[nodes[0]])
861 extendnode = extendbisectrange(nodes, good)
862 if extendnode is not None:
863 ui.write(_('Not all ancestors of this changeset have been'
864 ' checked.\nUse bisect --extend to continue the '
865 'bisection from\nthe common ancestor, %s.\n')
866 % extendnode)
867 else:
868 # multiple possible revisions
869 if good:
870 ui.write(_("Due to skipped revisions, the first "
871 "good revision could be any of:\n"))
872 else:
873 ui.write(_("Due to skipped revisions, the first "
874 "bad revision could be any of:\n"))
875 for n in nodes:
876 displayer.show(repo[n])
877 displayer.close()
878
879 def check_state(state, interactive=True):
880 if not state['good'] or not state['bad']:
881 if (good or bad or skip or reset) and interactive:
882 return
883 if not state['good']:
884 raise error.Abort(_('cannot bisect (no known good revisions)'))
885 else:
886 raise error.Abort(_('cannot bisect (no known bad revisions)'))
887 return True
888
889 838 # backward compatibility
890 839 if rev in "good bad reset init".split():
891 840 ui.warn(_("(use of 'hg bisect <cmd>' is deprecated)\n"))
@@ -902,13 +851,36 @@ def bisect(ui, repo, rev=None, extra=Non
902 851 cmdutil.checkunfinished(repo)
903 852
904 853 if reset:
905 p = repo.join("bisect.state")
906 if os.path.exists(p):
907 os.unlink(p)
854 hbisect.resetstate(repo)
908 855 return
909 856
910 857 state = hbisect.load_state(repo)
911 858
859 # update state
860 if good or bad or skip:
861 if rev:
862 nodes = [repo.lookup(i) for i in scmutil.revrange(repo, [rev])]
863 else:
864 nodes = [repo.lookup('.')]
865 if good:
866 state['good'] += nodes
867 elif bad:
868 state['bad'] += nodes
869 elif skip:
870 state['skip'] += nodes
871 hbisect.save_state(repo, state)
872 if not (state['good'] and state['bad']):
873 return
874
875 def mayupdate(repo, node, show_stats=True):
876 """common used update sequence"""
877 if noupdate:
878 return
879 cmdutil.bailifchanged(repo)
880 return hg.clean(repo, node, show_stats=show_stats)
881
882 displayer = cmdutil.show_changeset(ui, repo, {})
883
912 884 if command:
913 885 changesets = 1
914 886 if noupdate:
@@ -921,6 +893,8 @@ def bisect(ui, repo, rev=None, extra=Non
921 893 node, p2 = repo.dirstate.parents()
922 894 if p2 != nullid:
923 895 raise error.Abort(_('current bisect revision is a merge'))
896 if rev:
897 node = repo[scmutil.revsingle(repo, rev, node)].node()
924 898 try:
925 899 while changesets:
926 900 # update state
@@ -938,61 +912,38 @@ def bisect(ui, repo, rev=None, extra=Non
938 912 raise error.Abort(_("%s killed") % command)
939 913 else:
940 914 transition = "bad"
941 ctx = scmutil.revsingle(repo, rev, node)
942 rev = None # clear for future iterations
943 state[transition].append(ctx.node())
915 state[transition].append(node)
916 ctx = repo[node]
944 917 ui.status(_('changeset %d:%s: %s\n') % (ctx, ctx, transition))
945 check_state(state, interactive=False)
918 hbisect.checkstate(state)
946 919 # bisect
947 920 nodes, changesets, bgood = hbisect.bisect(repo.changelog, state)
948 921 # update to next check
949 922 node = nodes[0]
950 if not noupdate:
951 cmdutil.bailifchanged(repo)
952 hg.clean(repo, node, show_stats=False)
923 mayupdate(repo, node, show_stats=False)
953 924 finally:
954 925 state['current'] = [node]
955 926 hbisect.save_state(repo, state)
956 print_result(nodes, bgood)
927 hbisect.printresult(ui, repo, state, displayer, nodes, bgood)
957 928 return
958 929
959 # update state
960
961 if rev:
962 nodes = [repo.lookup(i) for i in scmutil.revrange(repo, [rev])]
963 else:
964 nodes = [repo.lookup('.')]
965
966 if good or bad or skip:
967 if good:
968 state['good'] += nodes
969 elif bad:
970 state['bad'] += nodes
971 elif skip:
972 state['skip'] += nodes
973 hbisect.save_state(repo, state)
974
975 if not check_state(state):
976 return
930 hbisect.checkstate(state)
977 931
978 932 # actually bisect
979 933 nodes, changesets, good = hbisect.bisect(repo.changelog, state)
980 934 if extend:
981 935 if not changesets:
982 extendnode = extendbisectrange(nodes, good)
936 extendnode = hbisect.extendrange(repo, state, nodes, good)
983 937 if extendnode is not None:
984 938 ui.write(_("Extending search to changeset %d:%s\n")
985 939 % (extendnode.rev(), extendnode))
986 940 state['current'] = [extendnode.node()]
987 941 hbisect.save_state(repo, state)
988 if noupdate:
989 return
990 cmdutil.bailifchanged(repo)
991 return hg.clean(repo, extendnode.node())
942 return mayupdate(repo, extendnode.node())
992 943 raise error.Abort(_("nothing to extend"))
993 944
994 945 if changesets == 0:
995 print_result(nodes, good)
946 hbisect.printresult(ui, repo, state, displayer, nodes, good)
996 947 else:
997 948 assert len(nodes) == 1 # only a single node can be tested next
998 949 node = nodes[0]
@@ -1006,9 +957,7 @@ def bisect(ui, repo, rev=None, extra=Non
1006 957 % (rev, short(node), changesets, tests))
1007 958 state['current'] = [node]
1008 959 hbisect.save_state(repo, state)
1009 if not noupdate:
1010 cmdutil.bailifchanged(repo)
1011 return hg.clean(repo, node)
960 return mayupdate(repo, node)
1012 961
1013 962 @command('bookmarks|bookmark',
1014 963 [('f', 'force', False, _('force')),
@@ -1185,7 +1134,7 @@ def bookmark(ui, repo, *names, **opts):
1185 1134 fm = ui.formatter('bookmarks', opts)
1186 1135 hexfn = fm.hexfunc
1187 1136 marks = repo._bookmarks
1188 if len(marks) == 0 and not fm:
1137 if len(marks) == 0 and fm.isplain():
1189 1138 ui.status(_("no bookmarks set\n"))
1190 1139 for bmark, n in sorted(marks.iteritems()):
1191 1140 active = repo._activebookmark
@@ -1383,8 +1332,8 @@ def bundle(ui, repo, fname, dest=None, *
1383 1332 repo, bundletype, strict=False)
1384 1333 except error.UnsupportedBundleSpecification as e:
1385 1334 raise error.Abort(str(e),
1386 hint=_('see "hg help bundle" for supported '
1387 'values for --type'))
1335 hint=_("see 'hg help bundle' for supported "
1336 "values for --type"))
1388 1337
1389 1338 # Packed bundles are a pseudo bundle format for now.
1390 1339 if cgversion == 's1':
@@ -1411,10 +1360,11 @@ def bundle(ui, repo, fname, dest=None, *
1411 1360 raise error.Abort(_("--base is incompatible with specifying "
1412 1361 "a destination"))
1413 1362 common = [repo.lookup(rev) for rev in base]
1414 heads = revs and map(repo.lookup, revs) or revs
1415 cg = changegroup.getchangegroup(repo, 'bundle', heads=heads,
1416 common=common, bundlecaps=bundlecaps,
1417 version=cgversion)
1363 heads = revs and map(repo.lookup, revs) or None
1364 outgoing = discovery.outgoing(repo, common, heads)
1365 cg = changegroup.getchangegroup(repo, 'bundle', outgoing,
1366 bundlecaps=bundlecaps,
1367 version=cgversion)
1418 1368 outgoing = None
1419 1369 else:
1420 1370 dest = ui.expandpath(dest or 'default-push', dest or 'default')
@@ -1688,9 +1638,11 @@ def commit(ui, repo, *pats, **opts):
1688 1638 def _docommit(ui, repo, *pats, **opts):
1689 1639 if opts.get('interactive'):
1690 1640 opts.pop('interactive')
1691 cmdutil.dorecord(ui, repo, commit, None, False,
1692 cmdutil.recordfilter, *pats, **opts)
1693 return
1641 ret = cmdutil.dorecord(ui, repo, commit, None, False,
1642 cmdutil.recordfilter, *pats, **opts)
1643 # ret can be 0 (no changes to record) or the value returned by
1644 # commit(), 1 if nothing changed or None on success.
1645 return 1 if ret == 0 else ret
1694 1646
1695 1647 if opts.get('subrepos'):
1696 1648 if opts.get('amend'):
@@ -1787,7 +1739,7 @@ def _docommit(ui, repo, *pats, **opts):
1787 1739 [('u', 'untrusted', None, _('show untrusted configuration options')),
1788 1740 ('e', 'edit', None, _('edit user config')),
1789 1741 ('l', 'local', None, _('edit repository config')),
1790 ('g', 'global', None, _('edit global config'))],
1742 ('g', 'global', None, _('edit global config'))] + formatteropts,
1791 1743 _('[-u] [NAME]...'),
1792 1744 optionalrepo=True)
1793 1745 def config(ui, repo, *values, **opts):
@@ -1848,6 +1800,7 @@ def config(ui, repo, *values, **opts):
1848 1800 onerr=error.Abort, errprefix=_("edit failed"))
1849 1801 return
1850 1802
1803 fm = ui.formatter('config', opts)
1851 1804 for f in scmutil.rcpath():
1852 1805 ui.debug('read config from: %s\n' % f)
1853 1806 untrusted = bool(opts.get('untrusted'))
@@ -1858,25 +1811,32 @@ def config(ui, repo, *values, **opts):
1858 1811 raise error.Abort(_('only one config item permitted'))
1859 1812 matched = False
1860 1813 for section, name, value in ui.walkconfig(untrusted=untrusted):
1861 value = str(value).replace('\n', '\\n')
1862 sectname = section + '.' + name
1814 value = str(value)
1815 if fm.isplain():
1816 value = value.replace('\n', '\\n')
1817 entryname = section + '.' + name
1863 1818 if values:
1864 1819 for v in values:
1865 1820 if v == section:
1866 ui.debug('%s: ' %
1867 ui.configsource(section, name, untrusted))
1868 ui.write('%s=%s\n' % (sectname, value))
1821 fm.startitem()
1822 fm.condwrite(ui.debugflag, 'source', '%s: ',
1823 ui.configsource(section, name, untrusted))
1824 fm.write('name value', '%s=%s\n', entryname, value)
1869 1825 matched = True
1870 elif v == sectname:
1871 ui.debug('%s: ' %
1872 ui.configsource(section, name, untrusted))
1873 ui.write(value, '\n')
1826 elif v == entryname:
1827 fm.startitem()
1828 fm.condwrite(ui.debugflag, 'source', '%s: ',
1829 ui.configsource(section, name, untrusted))
1830 fm.write('value', '%s\n', value)
1831 fm.data(name=entryname)
1874 1832 matched = True
1875 1833 else:
1876 ui.debug('%s: ' %
1877 ui.configsource(section, name, untrusted))
1878 ui.write('%s=%s\n' % (sectname, value))
1834 fm.startitem()
1835 fm.condwrite(ui.debugflag, 'source', '%s: ',
1836 ui.configsource(section, name, untrusted))
1837 fm.write('name value', '%s=%s\n', entryname, value)
1879 1838 matched = True
1839 fm.end()
1880 1840 if matched:
1881 1841 return 0
1882 1842 return 1
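
Much of this changeset routes output through ``ui.formatter`` objects (``fm.startitem()``, ``fm.write()``, ``fm.condwrite()``, ``fm.end()``) instead of bare ``ui.write()`` calls, so the same code path can emit either plain text or structured records. A toy model of that split (not Mercurial's real formatter classes):

    import json
    import sys

    class plainformatter(object):
        def isplain(self):
            return True
        def startitem(self):
            pass
        def write(self, fields, fmt, *values):
            sys.stdout.write(fmt % values)
        def end(self):
            pass

    class jsonformatter(object):
        def __init__(self):
            self._items = []
        def isplain(self):
            return False
        def startitem(self):
            self._items.append({})
        def write(self, fields, fmt, *values):
            # structured output keeps named fields, ignores the format
            self._items[-1].update(zip(fields.split(), values))
        def end(self):
            sys.stdout.write(json.dumps(self._items) + '\n')

    for fm in (plainformatter(), jsonformatter()):
        fm.startitem()
        fm.write('name value', '%s=%s\n', 'ui.username', 'alice')
        fm.end()
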
@@ -1987,8 +1947,9 @@ def debugbuilddag(ui, repo, text=None,
1987 1947
1988 1948 tags = []
1989 1949
1990 lock = tr = None
1950 wlock = lock = tr = None
1991 1951 try:
1952 wlock = repo.wlock()
1992 1953 lock = repo.lock()
1993 1954 tr = repo.transaction("builddag")
1994 1955
@@ -2073,7 +2034,7 @@ def debugbuilddag(ui, repo, text=None,
2073 2034 repo.vfs.write("localtags", "".join(tags))
2074 2035 finally:
2075 2036 ui.progress(_('building'), None)
2076 release(tr, lock)
2037 release(tr, lock, wlock)
2077 2038
2078 2039 @command('debugbundle',
2079 2040 [('a', 'all', None, _('show all details')),
@@ -2102,10 +2063,7 b' def _debugchangegroup(ui, gen, all=None,'
2102 2063 def showchunks(named):
2103 2064 ui.write("\n%s%s\n" % (indent_string, named))
2104 2065 chain = None
2105 while True:
2106 chunkdata = gen.deltachunk(chain)
2107 if not chunkdata:
2108 break
2066 for chunkdata in iter(lambda: gen.deltachunk(chain), {}):
2109 2067 node = chunkdata['node']
2110 2068 p1 = chunkdata['p1']
2111 2069 p2 = chunkdata['p2']
@@ -2121,10 +2079,7 @@ def _debugchangegroup(ui, gen, all=None,
2121 2079 showchunks("changelog")
2122 2080 chunkdata = gen.manifestheader()
2123 2081 showchunks("manifest")
2124 while True:
2125 chunkdata = gen.filelogheader()
2126 if not chunkdata:
2127 break
2082 for chunkdata in iter(gen.filelogheader, {}):
2128 2083 fname = chunkdata['filename']
2129 2084 showchunks(fname)
2130 2085 else:
@@ -2132,10 +2087,7 @@ def _debugchangegroup(ui, gen, all=None,
2132 2087 raise error.Abort(_('use debugbundle2 for this file'))
2133 2088 chunkdata = gen.changelogheader()
2134 2089 chain = None
2135 while True:
2136 chunkdata = gen.deltachunk(chain)
2137 if not chunkdata:
2138 break
2090 for chunkdata in iter(lambda: gen.deltachunk(chain), {}):
2139 2091 node = chunkdata['node']
2140 2092 ui.write("%s%s\n" % (indent_string, hex(node)))
2141 2093 chain = node
@@ -2398,12 +2350,15 @@ def debugdiscovery(ui, repo, remoteurl="
2398 2350 def debugextensions(ui, **opts):
2399 2351 '''show information about active extensions'''
2400 2352 exts = extensions.extensions(ui)
2353 hgver = util.version()
2401 2354 fm = ui.formatter('debugextensions', opts)
2402 2355 for extname, extmod in sorted(exts, key=operator.itemgetter(0)):
2356 isinternal = extensions.ismoduleinternal(extmod)
2403 2357 extsource = extmod.__file__
2404 exttestedwith = getattr(extmod, 'testedwith', None)
2405 if exttestedwith is not None:
2406 exttestedwith = exttestedwith.split()
2358 if isinternal:
2359 exttestedwith = [] # never expose magic string to users
2360 else:
2361 exttestedwith = getattr(extmod, 'testedwith', '').split()
2407 2362 extbuglink = getattr(extmod, 'buglink', None)
2408 2363
2409 2364 fm.startitem()
@@ -2412,21 +2367,24 @@ def debugextensions(ui, **opts):
2412 2367 fm.write('name', '%s\n', extname)
2413 2368 else:
2414 2369 fm.write('name', '%s', extname)
2415 if not exttestedwith:
2370 if isinternal or hgver in exttestedwith:
2371 fm.plain('\n')
2372 elif not exttestedwith:
2416 2373 fm.plain(_(' (untested!)\n'))
2417 2374 else:
2418 if exttestedwith == ['internal'] or \
2419 util.version() in exttestedwith:
2420 fm.plain('\n')
2421 else:
2422 lasttestedversion = exttestedwith[-1]
2423 fm.plain(' (%s!)\n' % lasttestedversion)
2375 lasttestedversion = exttestedwith[-1]
2376 fm.plain(' (%s!)\n' % lasttestedversion)
2424 2377
2425 2378 fm.condwrite(ui.verbose and extsource, 'source',
2426 2379 _(' location: %s\n'), extsource or "")
2427 2380
2381 if ui.verbose:
2382 fm.plain(_(' bundled: %s\n') % ['no', 'yes'][isinternal])
2383 fm.data(bundled=isinternal)
2384
2428 2385 fm.condwrite(ui.verbose and exttestedwith, 'testedwith',
2429 _(' tested with: %s\n'), ' '.join(exttestedwith or []))
2386 _(' tested with: %s\n'),
2387 fm.formatlist(exttestedwith, name='ver'))
2430 2388
2431 2389 fm.condwrite(ui.verbose and extbuglink, 'buglink',
2432 2390 _(' bug reporting: %s\n'), extbuglink or "")
@@ -2453,7 +2411,7 @@ def debugfsinfo(ui, path="."):
2453 2411 ui.write(('exec: %s\n') % (util.checkexec(path) and 'yes' or 'no'))
2454 2412 ui.write(('symlink: %s\n') % (util.checklink(path) and 'yes' or 'no'))
2455 2413 ui.write(('hardlink: %s\n') % (util.checknlink(path) and 'yes' or 'no'))
2456 ui.write(('case-sensitive: %s\n') % (util.checkcase('.debugfsinfo')
2414 ui.write(('case-sensitive: %s\n') % (util.fscasesensitive('.debugfsinfo')
2457 2415 and 'yes' or 'no'))
2458 2416 os.unlink('.debugfsinfo')
2459 2417
@@ -2832,7 +2790,7 @@ def debuginstall(ui, **opts):
2832 2790 if not problems:
2833 2791 fm.data(problems=problems)
2834 2792 fm.condwrite(problems, 'problems',
2835 _("%s problems detected,"
2793 _("%d problems detected,"
2836 2794 " please check your install!\n"), problems)
2837 2795 fm.end()
2838 2796
@@ -3054,7 +3012,7 @@ def debuglocks(ui, repo, **opts):
3054 3012 ('r', 'rev', [], _('display markers relevant to REV')),
3055 3013 ('', 'index', False, _('display index of the marker')),
3056 3014 ('', 'delete', [], _('delete markers specified by indices')),
3057 ] + commitopts2,
3015 ] + commitopts2 + formatteropts,
3058 3016 _('[OBSOLETED [REPLACEMENT ...]]'))
3059 3017 def debugobsolete(ui, repo, precursor=None, *successors, **opts):
3060 3018 """create arbitrary obsolete marker
@@ -3142,6 +3100,7 @@ def debugobsolete(ui, repo, precursor=No
3142 3100 markerset = set(markers)
3143 3101 isrelevant = lambda m: m in markerset
3144 3102
3103 fm = ui.formatter('debugobsolete', opts)
3145 3104 for i, m in enumerate(markerstoiter):
3146 3105 if not isrelevant(m):
3147 3106 # marker can be irrelevant when we're iterating over a set
@@ -3152,8 +3111,10 @@ def debugobsolete(ui, repo, precursor=No
3152 3111 # to get the correct indices, but only display the ones that
3153 3112 # are relevant to --rev value
3154 3113 continue
3114 fm.startitem()
3155 3115 ind = i if opts.get('index') else None
3156 cmdutil.showmarker(ui, m, index=ind)
3116 cmdutil.showmarker(fm, m, index=ind)
3117 fm.end()
3157 3118
3158 3119 @command('debugpathcomplete',
3159 3120 [('f', 'full', None, _('complete an entire path')),
@@ -3382,9 +3343,9 @@ def debugrevlog(ui, repo, file_=None, **
3382 3343 nump2prev = 0
3383 3344 chainlengths = []
3384 3345
3385 datasize = [None, 0, 0L]
3386 fullsize = [None, 0, 0L]
3387 deltasize = [None, 0, 0L]
3346 datasize = [None, 0, 0]
3347 fullsize = [None, 0, 0]
3348 deltasize = [None, 0, 0]
3388 3349
3389 3350 def addsize(size, l):
3390 3351 if l[0] is None or size < l[0]:
@@ -3509,29 +3470,92 @@ def debugrevlog(ui, repo, file_=None, **
3509 3470 numdeltas))
3510 3471
3511 3472 @command('debugrevspec',
3512 [('', 'optimize', None, _('print parsed tree after optimizing'))],
3473 [('', 'optimize', None,
3474 _('print parsed tree after optimizing (DEPRECATED)')),
3475 ('p', 'show-stage', [],
3476 _('print parsed tree at the given stage'), _('NAME')),
3477 ('', 'no-optimized', False, _('evaluate tree without optimization')),
3478 ('', 'verify-optimized', False, _('verify optimized result')),
3479 ],
3513 3480 ('REVSPEC'))
3514 3481 def debugrevspec(ui, repo, expr, **opts):
3515 3482 """parse and apply a revision specification
3516 3483
3517 Use --verbose to print the parsed tree before and after aliases
3518 expansion.
3484 Use -p/--show-stage option to print the parsed tree at the given stages.
3485 Use -p all to print tree at every stage.
3486
3487 Use --verify-optimized to compare the optimized result with the unoptimized
3488 one. Returns 1 if the optimized result differs.
3519 3489 """
3520 if ui.verbose:
3521 tree = revset.parse(expr, lookup=repo.__contains__)
3522 ui.note(revset.prettyformat(tree), "\n")
3523 newtree = revset.expandaliases(ui, tree)
3524 if newtree != tree:
3525 ui.note(("* expanded:\n"), revset.prettyformat(newtree), "\n")
3526 tree = newtree
3527 newtree = revset.foldconcat(tree)
3528 if newtree != tree:
3529 ui.note(("* concatenated:\n"), revset.prettyformat(newtree), "\n")
3530 if opts["optimize"]:
3531 optimizedtree = revset.optimize(newtree)
3532 ui.note(("* optimized:\n"),
3533 revset.prettyformat(optimizedtree), "\n")
3534 func = revset.match(ui, expr, repo)
3490 stages = [
3491 ('parsed', lambda tree: tree),
3492 ('expanded', lambda tree: revset.expandaliases(ui, tree)),
3493 ('concatenated', revset.foldconcat),
3494 ('analyzed', revset.analyze),
3495 ('optimized', revset.optimize),
3496 ]
3497 if opts['no_optimized']:
3498 stages = stages[:-1]
3499 if opts['verify_optimized'] and opts['no_optimized']:
3500 raise error.Abort(_('cannot use --verify-optimized with '
3501 '--no-optimized'))
3502 stagenames = set(n for n, f in stages)
3503
3504 showalways = set()
3505 showchanged = set()
3506 if ui.verbose and not opts['show_stage']:
3507 # show parsed tree by --verbose (deprecated)
3508 showalways.add('parsed')
3509 showchanged.update(['expanded', 'concatenated'])
3510 if opts['optimize']:
3511 showalways.add('optimized')
3512 if opts['show_stage'] and opts['optimize']:
3513 raise error.Abort(_('cannot use --optimize with --show-stage'))
3514 if opts['show_stage'] == ['all']:
3515 showalways.update(stagenames)
3516 else:
3517 for n in opts['show_stage']:
3518 if n not in stagenames:
3519 raise error.Abort(_('invalid stage name: %s') % n)
3520 showalways.update(opts['show_stage'])
3521
3522 treebystage = {}
3523 printedtree = None
3524 tree = revset.parse(expr, lookup=repo.__contains__)
3525 for n, f in stages:
3526 treebystage[n] = tree = f(tree)
3527 if n in showalways or (n in showchanged and tree != printedtree):
3528 if opts['show_stage'] or n != 'parsed':
3529 ui.write(("* %s:\n") % n)
3530 ui.write(revset.prettyformat(tree), "\n")
3531 printedtree = tree
3532
3533 if opts['verify_optimized']:
3534 arevs = revset.makematcher(treebystage['analyzed'])(repo)
3535 brevs = revset.makematcher(treebystage['optimized'])(repo)
3536 if ui.verbose:
3537 ui.note(("* analyzed set:\n"), revset.prettyformatset(arevs), "\n")
3538 ui.note(("* optimized set:\n"), revset.prettyformatset(brevs), "\n")
3539 arevs = list(arevs)
3540 brevs = list(brevs)
3541 if arevs == brevs:
3542 return 0
3543 ui.write(('--- analyzed\n'), label='diff.file_a')
3544 ui.write(('+++ optimized\n'), label='diff.file_b')
3545 sm = difflib.SequenceMatcher(None, arevs, brevs)
3546 for tag, alo, ahi, blo, bhi in sm.get_opcodes():
3547 if tag in ('delete', 'replace'):
3548 for c in arevs[alo:ahi]:
3549 ui.write('-%s\n' % c, label='diff.deleted')
3550 if tag in ('insert', 'replace'):
3551 for c in brevs[blo:bhi]:
3552 ui.write('+%s\n' % c, label='diff.inserted')
3553 if tag == 'equal':
3554 for c in arevs[alo:ahi]:
3555 ui.write(' %s\n' % c)
3556 return 1
3557
3558 func = revset.makematcher(tree)
3535 3559 revs = func(repo)
3536 3560 if ui.verbose:
3537 3561 ui.note(("* set:\n"), revset.prettyformatset(revs), "\n")
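
The rewritten ``debugrevspec`` above threads the parsed tree through a named pipeline of stages and keeps every intermediate result so any stage can be shown or compared by name. The generic pattern, reduced to a runnable sketch with toy stages:

    stages = [
        ('parsed', lambda t: t),
        ('doubled', lambda t: t * 2),
        ('squared', lambda t: t * t),
    ]

    treebystage = {}
    tree = 3
    for name, f in stages:
        # each stage transforms the previous stage's output, and the
        # result is retained so any stage can be inspected afterwards
        treebystage[name] = tree = f(tree)

    print(treebystage)   # {'parsed': 3, 'doubled': 6, 'squared': 36}
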
@@ -3914,16 +3938,16 @@ def export(ui, repo, *changesets, **opts
3914 3938 [('r', 'rev', '', _('search the repository as it is in REV'), _('REV')),
3915 3939 ('0', 'print0', None, _('end filenames with NUL, for use with xargs')),
3916 3940 ] + walkopts + formatteropts + subrepoopts,
3917 _('[OPTION]... [PATTERN]...'))
3941 _('[OPTION]... [FILE]...'))
3918 3942 def files(ui, repo, *pats, **opts):
3919 3943 """list tracked files
3920 3944
3921 3945 Print files under Mercurial control in the working directory or
3922 specified revision whose names match the given patterns (excluding
3923 removed files).
3924
3925 If no patterns are given to match, this command prints the names
3926 of all files under Mercurial control in the working directory.
3946 specified revision for given files (excluding removed files).
3947 Files can be specified as filenames or filesets.
3948
3949 If no files are given to match, this command prints the names
3950 of all files under Mercurial control.
3927 3951
3928 3952 .. container:: verbose
3929 3953
@@ -3964,15 +3988,11 @@ def files(ui, repo, *pats, **opts):
3964 3988 end = '\n'
3965 3989 if opts.get('print0'):
3966 3990 end = '\0'
3967 fm = ui.formatter('files', opts)
3968 3991 fmt = '%s' + end
3969 3992
3970 3993 m = scmutil.match(ctx, pats, opts)
3971 ret = cmdutil.files(ui, ctx, m, fm, fmt, opts.get('subrepos'))
3972
3973 fm.end()
3974
3975 return ret
3994 with ui.formatter('files', opts) as fm:
3995 return cmdutil.files(ui, ctx, m, fm, fmt, opts.get('subrepos'))
3976 3996
3977 3997 @command('^forget', walkopts, _('[OPTION]... FILE...'), inferrepo=True)
3978 3998 def forget(ui, repo, *pats, **opts):
@@ -4142,9 +4162,7 @@ def _dograft(ui, repo, *revs, **opts):
4142 4162 # check for ancestors of dest branch
4143 4163 crev = repo['.'].rev()
4144 4164 ancestors = repo.changelog.ancestors([crev], inclusive=True)
4145 # Cannot use x.remove(y) on smart set, this has to be a list.
4146 4165 # XXX make this lazy in the future
4147 revs = list(revs)
4148 4166 # don't mutate while iterating, create a copy
4149 4167 for rev in list(revs):
4150 4168 if rev in ancestors:
@@ -4284,7 +4302,7 @@ def _dograft(ui, repo, *revs, **opts):
4284 4302 _('only search files changed within revision range'), _('REV')),
4285 4303 ('u', 'user', None, _('list the author (long with -v)')),
4286 4304 ('d', 'date', None, _('list the date (short with -q)')),
4287 ] + walkopts,
4305 ] + formatteropts + walkopts,
4288 4306 _('[OPTION]... PATTERN [FILE]...'),
4289 4307 inferrepo=True)
4290 4308 def grep(ui, repo, pattern, *pats, **opts):
@@ -4349,19 +4367,16 b' def grep(ui, repo, pattern, *pats, **opt'
4349 4367 def __eq__(self, other):
4350 4368 return self.line == other.line
4351 4369
4352 def __iter__(self):
4353 yield (self.line[:self.colstart], '')
4354 yield (self.line[self.colstart:self.colend], 'grep.match')
4355 rest = self.line[self.colend:]
4356 while rest != '':
4357 match = regexp.search(rest)
4358 if not match:
4359 yield (rest, '')
4370 def findpos(self):
4371 """Iterate all (start, end) indices of matches"""
4372 yield self.colstart, self.colend
4373 p = self.colend
4374 while p < len(self.line):
4375 m = regexp.search(self.line, p)
4376 if not m:
4360 4377 break
4361 mstart, mend = match.span()
4362 yield (rest[:mstart], '')
4363 yield (rest[mstart:mend], 'grep.match')
4364 rest = rest[mend:]
4378 yield m.span()
4379 p = m.end()
4365 4380
4366 4381 matches = {}
4367 4382 copies = {}
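
The new ``findpos`` method above walks all match spans by restarting ``re.search`` at the previous match's end rather than repeatedly slicing the line, which avoids copies and keeps the indices absolute. The same idiom in isolation:

    import re

    line = 'one match, two match, red match'
    regexp = re.compile('match')

    spans = []
    p = 0
    while p < len(line):
        m = regexp.search(line, p)   # second argument: start position
        if not m:
            break
        spans.append(m.span())
        p = m.end()

    print(spans)   # [(4, 9), (15, 20), (26, 31)]
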
@@ -4387,50 +4402,76 @@ def grep(ui, repo, pattern, *pats, **opt
4387 4402 for i in xrange(blo, bhi):
4388 4403 yield ('+', b[i])
4389 4404
4390 def display(fn, ctx, pstates, states):
4405 def display(fm, fn, ctx, pstates, states):
4391 4406 rev = ctx.rev()
4407 if fm.isplain():
4408 formatuser = ui.shortuser
4409 else:
4410 formatuser = str
4392 4411 if ui.quiet:
4393 datefunc = util.shortdate
4412 datefmt = '%Y-%m-%d'
4394 4413 else:
4395 datefunc = util.datestr
4414 datefmt = '%a %b %d %H:%M:%S %Y %1%2'
4396 4415 found = False
4397 4416 @util.cachefunc
4398 4417 def binary():
4399 4418 flog = getfile(fn)
4400 4419 return util.binary(flog.read(ctx.filenode(fn)))
4401 4420
4421 fieldnamemap = {'filename': 'file', 'linenumber': 'line_number'}
4402 4422 if opts.get('all'):
4403 4423 iter = difflinestates(pstates, states)
4404 4424 else:
4405 4425 iter = [('', l) for l in states]
4406 4426 for change, l in iter:
4407 cols = [(fn, 'grep.filename'), (str(rev), 'grep.rev')]
4408
4409 if opts.get('line_number'):
4410 cols.append((str(l.linenum), 'grep.linenumber'))
4427 fm.startitem()
4428 fm.data(node=fm.hexfunc(ctx.node()))
4429 cols = [
4430 ('filename', fn, True),
4431 ('rev', rev, True),
4432 ('linenumber', l.linenum, opts.get('line_number')),
4433 ]
4411 4434 if opts.get('all'):
4412 cols.append((change, 'grep.change'))
4413 if opts.get('user'):
4414 cols.append((ui.shortuser(ctx.user()), 'grep.user'))
4415 if opts.get('date'):
4416 cols.append((datefunc(ctx.date()), 'grep.date'))
4417 for col, label in cols[:-1]:
4418 ui.write(col, label=label)
4419 ui.write(sep, label='grep.sep')
4420 ui.write(cols[-1][0], label=cols[-1][1])
4435 cols.append(('change', change, True))
4436 cols.extend([
4437 ('user', formatuser(ctx.user()), opts.get('user')),
4438 ('date', fm.formatdate(ctx.date(), datefmt), opts.get('date')),
4439 ])
4440 lastcol = next(name for name, data, cond in reversed(cols) if cond)
4441 for name, data, cond in cols:
4442 field = fieldnamemap.get(name, name)
4443 fm.condwrite(cond, field, '%s', data, label='grep.%s' % name)
4444 if cond and name != lastcol:
4445 fm.plain(sep, label='grep.sep')
4421 4446 if not opts.get('files_with_matches'):
4422 ui.write(sep, label='grep.sep')
4447 fm.plain(sep, label='grep.sep')
4423 4448 if not opts.get('text') and binary():
4424 ui.write(_(" Binary file matches"))
4449 fm.plain(_(" Binary file matches"))
4425 4450 else:
4426 for s, label in l:
4427 ui.write(s, label=label)
4428 ui.write(eol)
4451 displaymatches(fm.nested('texts'), l)
4452 fm.plain(eol)
4429 4453 found = True
4430 4454 if opts.get('files_with_matches'):
4431 4455 break
4432 4456 return found
4433 4457
4458 def displaymatches(fm, l):
4459 p = 0
4460 for s, e in l.findpos():
4461 if p < s:
4462 fm.startitem()
4463 fm.write('text', '%s', l.line[p:s])
4464 fm.data(matched=False)
4465 fm.startitem()
4466 fm.write('text', '%s', l.line[s:e], label='grep.match')
4467 fm.data(matched=True)
4468 p = e
4469 if p < len(l.line):
4470 fm.startitem()
4471 fm.write('text', '%s', l.line[p:])
4472 fm.data(matched=False)
4473 fm.end()
4474
4434 4475 skip = {}
4435 4476 revfiles = {}
4436 4477 matchfn = scmutil.match(repo[None], pats, opts)
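
In the new ``display`` helper, the separator is printed only between enabled columns; the last enabled column is found with ``next()`` over a reversed generator expression. In isolation, with illustrative column data:

    cols = [
        ('filename', 'commands.py', True),
        ('rev', 42, True),
        ('linenumber', 7, False),    # e.g. -n/--line-number not given
        ('user', 'alice', True),
    ]

    # scan from the right for the last column whose condition is on
    lastcol = next(name for name, data, cond in reversed(cols) if cond)
    print(lastcol)   # 'user'
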
@@ -4472,6 +4513,7 @@ def grep(ui, repo, pattern, *pats, **opt
4472 4513 except error.LookupError:
4473 4514 pass
4474 4515
4516 fm = ui.formatter('grep', opts)
4475 4517 for ctx in cmdutil.walkchangerevs(repo, matchfn, opts, prep):
4476 4518 rev = ctx.rev()
4477 4519 parent = ctx.p1().rev()
@@ -4484,7 +4526,7 @@ def grep(ui, repo, pattern, *pats, **opt
4484 4526 continue
4485 4527 pstates = matches.get(parent, {}).get(copy or fn, [])
4486 4528 if pstates or states:
4487 r = display(fn, ctx, pstates, states)
4529 r = display(fm, fn, ctx, pstates, states)
4488 4530 found = found or r
4489 4531 if r and not opts.get('all'):
4490 4532 skip[fn] = True
@@ -4492,6 +4534,7 @@ def grep(ui, repo, pattern, *pats, **opt
4492 4534 skip[copy] = True
4493 4535 del matches[rev]
4494 4536 del revfiles[rev]
4537 fm.end()
4495 4538
4496 4539 return not found
4497 4540
@@ -4607,12 +4650,15 @@ def help_(ui, name=None, **opts):
4607 4650 section = None
4608 4651 subtopic = None
4609 4652 if name and '.' in name:
4610 name, section = name.split('.', 1)
4611 section = encoding.lower(section)
4612 if '.' in section:
4613 subtopic, section = section.split('.', 1)
4653 name, remaining = name.split('.', 1)
4654 remaining = encoding.lower(remaining)
4655 if '.' in remaining:
4656 subtopic, section = remaining.split('.', 1)
4614 4657 else:
4615 subtopic = section
4658 if name in help.subtopics:
4659 subtopic = remaining
4660 else:
4661 section = remaining
4616 4662
4617 4663 text = help.help_(ui, name, subtopic=subtopic, **opts)
4618 4664
@@ -5437,7 +5483,9 @@ def merge(ui, repo, node=None, **opts):
5437 5483 # ui.forcemerge is an internal variable, do not document
5438 5484 repo.ui.setconfig('ui', 'forcemerge', opts.get('tool', ''), 'merge')
5439 5485 force = opts.get('force')
5440 return hg.merge(repo, node, force=force, mergeforce=force)
5486 labels = ['working copy', 'merge rev']
5487 return hg.merge(repo, node, force=force, mergeforce=force,
5488 labels=labels)
5441 5489 finally:
5442 5490 ui.setconfig('ui', 'forcemerge', '', 'merge')
5443 5491
@@ -5611,10 +5659,10 @@ def paths(ui, repo, search=None, **opts)
5611 5659 pathitems = sorted(ui.paths.iteritems())
5612 5660
5613 5661 fm = ui.formatter('paths', opts)
5614 if fm:
5662 if fm.isplain():
5663 hidepassword = util.hidepassword
5664 else:
5615 5665 hidepassword = str
5616 else:
5617 hidepassword = util.hidepassword
5618 5666 if ui.quiet:
5619 5667 namefmt = '%s\n'
5620 5668 else:
@@ -5936,7 +5984,7 @@ def push(ui, repo, dest=None, **opts):
5936 5984 path = ui.paths.getpath(dest, default=('default-push', 'default'))
5937 5985 if not path:
5938 5986 raise error.Abort(_('default repository not configured!'),
5939 hint=_('see the "path" section in "hg help config"'))
5987 hint=_("see 'hg help config.paths'"))
5940 5988 dest = path.pushloc or path.loc
5941 5989 branches = (path.branch, opts.get('branch') or [])
5942 5990 ui.status(_('pushing to %s\n') % util.hidepassword(dest))
@@ -6459,7 +6507,7 @@ def root(ui, repo):
6459 6507 ('n', 'name', '',
6460 6508 _('name to show in web pages (default: working directory)'), _('NAME')),
6461 6509 ('', 'web-conf', '',
6462 _('name of the hgweb config file (see "hg help hgweb")'), _('FILE')),
6510 _("name of the hgweb config file (see 'hg help hgweb')"), _('FILE')),
6463 6511 ('', 'webdir-conf', '', _('name of the hgweb config file (DEPRECATED)'),
6464 6512 _('FILE')),
6465 6513 ('', 'pid-file', '', _('name of file to write process ID to'), _('FILE')),
@@ -7247,37 +7295,48 @@ def verify(ui, repo):
7247 7295 """
7248 7296 return hg.verify(repo)
7249 7297
7250 @command('version', [], norepo=True)
7251 def version_(ui):
7298 @command('version', [] + formatteropts, norepo=True)
7299 def version_(ui, **opts):
7252 7300 """output version and copyright information"""
7253 ui.write(_("Mercurial Distributed SCM (version %s)\n")
7254 % util.version())
7255 ui.status(_(
7301 fm = ui.formatter("version", opts)
7302 fm.startitem()
7303 fm.write("ver", _("Mercurial Distributed SCM (version %s)\n"),
7304 util.version())
7305 license = _(
7256 7306 "(see https://mercurial-scm.org for more information)\n"
7257 7307 "\nCopyright (C) 2005-2016 Matt Mackall and others\n"
7258 7308 "This is free software; see the source for copying conditions. "
7259 7309 "There is NO\nwarranty; "
7260 7310 "not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.\n"
7261 ))
7262
7263 ui.note(_("\nEnabled extensions:\n\n"))
7311 )
7312 if not ui.quiet:
7313 fm.plain(license)
7314
7264 7315 if ui.verbose:
7265 # format names and versions into columns
7266 names = []
7267 vers = []
7268 place = []
7269 for name, module in extensions.extensions():
7270 names.append(name)
7271 vers.append(extensions.moduleversion(module))
7272 if extensions.ismoduleinternal(module):
7273 place.append(_("internal"))
7274 else:
7275 place.append(_("external"))
7276 if names:
7277 maxnamelen = max(len(n) for n in names)
7278 for i, name in enumerate(names):
7279 ui.write(" %-*s %s %s\n" %
7280 (maxnamelen, name, place[i], vers[i]))
7316 fm.plain(_("\nEnabled extensions:\n\n"))
7317 # format names and versions into columns
7318 names = []
7319 vers = []
7320 isinternals = []
7321 for name, module in extensions.extensions():
7322 names.append(name)
7323 vers.append(extensions.moduleversion(module) or None)
7324 isinternals.append(extensions.ismoduleinternal(module))
7325 fn = fm.nested("extensions")
7326 if names:
7327 namefmt = " %%-%ds " % max(len(n) for n in names)
7328 places = [_("external"), _("internal")]
7329 for n, v, p in zip(names, vers, isinternals):
7330 fn.startitem()
7331 fn.condwrite(ui.verbose, "name", namefmt, n)
7332 if ui.verbose:
7333 fn.plain("%s " % places[p])
7334 fn.data(bundled=p)
7335 fn.condwrite(ui.verbose and v, "ver", "%s", v)
7336 if ui.verbose:
7337 fn.plain("\n")
7338 fn.end()
7339 fm.end()
7281 7340
7282 7341 def loadcmdtable(ui, name, cmdtable):
7283 7342 """Load command functions from specified cmdtable
@@ -528,11 +528,12 @@ class changectx(basectx):
528 528
529 529 @propertycache
530 530 def _manifest(self):
531 return self._repo.manifest.read(self._changeset.manifest)
531 return self._repo.manifestlog[self._changeset.manifest].read()
532 532
533 533 @propertycache
534 534 def _manifestdelta(self):
535 return self._repo.manifest.readdelta(self._changeset.manifest)
535 mfnode = self._changeset.manifest
536 return self._repo.manifestlog[mfnode].readdelta()
536 537
537 538 @propertycache
538 539 def _parents(self):
@@ -823,7 +824,7 @@ class basefilectx(object):
823 824 """
824 825 repo = self._repo
825 826 cl = repo.unfiltered().changelog
826 ma = repo.manifest
827 mfl = repo.manifestlog
827 828 # fetch the linkrev
828 829 fr = filelog.rev(fnode)
829 830 lkr = filelog.linkrev(fr)
@@ -848,7 +849,7 @@ class basefilectx(object):
848 849 if path in ac[3]: # checking the 'files' field.
849 850 # The file has been touched, check if the content is
850 851 # similar to the one we search for.
851 if fnode == ma.readfast(ac[0]).get(path):
852 if fnode == mfl[ac[0]].readfast().get(path):
852 853 return a
853 854 # In theory, we should never get out of that loop without a result.
854 855 # But if manifest uses a buggy file revision (not children of the
@@ -929,7 +930,7 @@ class basefilectx(object):
929 930 def lines(text):
930 931 if text.endswith("\n"):
931 932 return text.count("\n")
932 return text.count("\n") + 1
933 return text.count("\n") + int(bool(text))
933 934
934 935 if linenumber:
935 936 def decorate(text, rev):
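
The line-counting helper in ``basefilectx`` now returns 0 for empty text via ``int(bool(text))`` while still counting an unterminated final line. Its behavior at the edges:

    def lines(text):
        if text.endswith("\n"):
            return text.count("\n")
        return text.count("\n") + int(bool(text))

    print(lines(""))        # 0 -- empty text has no lines
    print(lines("a"))       # 1 -- unterminated final line still counts
    print(lines("a\nb"))    # 2
    print(lines("a\nb\n"))  # 2
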
@@ -939,8 +940,7 @@ class basefilectx(object):
939 940 return ([(rev, False)] * lines(text), text)
940 941
941 942 def pair(parent, child):
942 blocks = mdiff.allblocks(parent[1], child[1], opts=diffopts,
943 refine=True)
943 blocks = mdiff.allblocks(parent[1], child[1], opts=diffopts)
944 944 for (a1, a2, b1, b2), t in blocks:
945 945 # Changed blocks ('!') or blocks made only of blank lines ('~')
946 946 # belong to the child.
@@ -1508,7 +1508,7 b' class workingctx(committablectx):'
1508 1508
1509 1509 # Only a case insensitive filesystem needs magic to translate user input
1510 1510 # to actual case in the filesystem.
1511 if not util.checkcase(r.root):
1511 if not util.fscasesensitive(r.root):
1512 1512 return matchmod.icasefsmatcher(r.root, r.getcwd(), pats, include,
1513 1513 exclude, default, r.auditor, self,
1514 1514 listsubrepos=listsubrepos,
@@ -231,30 +231,34 b' def pathcopies(x, y, match=None):'
231 231 return _chain(x, y, _backwardrenames(x, a),
232 232 _forwardcopies(a, y, match=match))
233 233
234 def _computenonoverlap(repo, c1, c2, addedinm1, addedinm2):
234 def _computenonoverlap(repo, c1, c2, addedinm1, addedinm2, baselabel=''):
235 235 """Computes, based on addedinm1 and addedinm2, the files exclusive to c1
236 236 and c2. This is its own function so extensions can easily wrap this call
237 237 to see what files mergecopies is about to process.
238 238
239 239 Even though c1 and c2 are not used in this function, they are useful in
240 240 other extensions, which can use them to read the file nodes of the changed files.
241
242 "baselabel" can be passed to help distinguish the multiple computations
243 done in the graft case.
241 244 """
242 245 u1 = sorted(addedinm1 - addedinm2)
243 246 u2 = sorted(addedinm2 - addedinm1)
244 247
248 header = " unmatched files in %s"
249 if baselabel:
250 header += ' (from %s)' % baselabel
245 251 if u1:
246 repo.ui.debug(" unmatched files in local:\n %s\n"
247 % "\n ".join(u1))
252 repo.ui.debug("%s:\n %s\n" % (header % 'local', "\n ".join(u1)))
248 253 if u2:
249 repo.ui.debug(" unmatched files in other:\n %s\n"
250 % "\n ".join(u2))
254 repo.ui.debug("%s:\n %s\n" % (header % 'other', "\n ".join(u2)))
251 255 return u1, u2
252 256
253 257 def _makegetfctx(ctx):
254 """return a 'getfctx' function suitable for checkcopies usage
258 """return a 'getfctx' function suitable for _checkcopies usage
255 259
256 260 We have to re-setup the function building 'filectx' for each
257 'checkcopies' to ensure the linkrev adjustment is properly setup for
261 '_checkcopies' to ensure the linkrev adjustment is properly setup for
258 262 each. Linkrev adjustment is important to avoid bugs in rename
259 263 detection. Moreover, having a proper '_ancestrycontext' setup ensures
260 264 the performance impact of this adjustment is kept limited. Without it,
@@ -285,10 +289,26 b' def _makegetfctx(ctx):'
285 289 return fctx
286 290 return util.lrucachefunc(makectx)
287 291
288 def mergecopies(repo, c1, c2, ca):
292 def _combinecopies(copyfrom, copyto, finalcopy, diverge, incompletediverge):
293 """combine partial copy paths"""
294 remainder = {}
295 for f in copyfrom:
296 if f in copyto:
297 finalcopy[copyto[f]] = copyfrom[f]
298 del copyto[f]
299 for f in incompletediverge:
300 assert f not in diverge
301 ic = incompletediverge[f]
302 if ic[0] in copyto:
303 diverge[f] = [copyto[ic[0]], ic[1]]
304 else:
305 remainder[f] = ic
306 return remainder
307
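A tiny worked example of ``_combinecopies`` joining two partial records (filenames are hypothetical):

    # one side recorded "a0 -> a" (copyfrom), the other "a -> b" (copyto)
    copyfrom = {'a': 'a0'}
    copyto = {'a': 'b'}
    finalcopy, diverge = {}, {}
    remainder = _combinecopies(copyfrom, copyto, finalcopy, diverge, {})
    assert finalcopy == {'b': 'a0'}  # the two halves chain into "a0 -> b"
    assert remainder == {}           # no incomplete divergence left over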
308 def mergecopies(repo, c1, c2, base):
289 309 """
290 310 Find moves and copies between context c1 and c2 that are relevant
291 for merging.
311 for merging. 'base' will be used as the merge base.
292 312
293 313 Returns four dicts: "copy", "movewithdir", "diverge", and
294 314 "renamedelete".
@@ -321,6 +341,27 b' def mergecopies(repo, c1, c2, ca):'
321 341 if repo.ui.configbool('experimental', 'disablecopytrace'):
322 342 return {}, {}, {}, {}
323 343
344 # In certain scenarios (e.g. graft, update or rebase), base can be
345 # overridden. We still need to know a real common ancestor in this case. We
346 # can't just compute _c1.ancestor(_c2) and compare it to ca, because there
347 # can be multiple common ancestors, e.g. in case of bidmerge. Because our
348 # caller may not know if the revision passed in lieu of the CA is a genuine
349 # common ancestor or not without explicitly checking it, it's better to
350 # determine that here.
351 #
352 # base.descendant(wc) and base.descendant(base) are False; work around that
353 _c1 = c1.p1() if c1.rev() is None else c1
354 _c2 = c2.p1() if c2.rev() is None else c2
355 # an endpoint is "dirty" if it isn't a descendant of the merge base
356 # if we have a dirty endpoint, we need to trigger graft logic, and also
357 # keep track of which endpoint is dirty
358 dirtyc1 = not (base == _c1 or base.descendant(_c1))
359 dirtyc2 = not (base == _c2 or base.descendant(_c2))
360 graft = dirtyc1 or dirtyc2
361 tca = base
362 if graft:
363 tca = _c1.ancestor(_c2)
364
324 365 limit = _findlimit(repo, c1.rev(), c2.rev())
325 366 if limit is None:
326 367 # no common ancestor, no copies
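A worked scenario for the dirty-endpoint test above (revisions are hypothetical): grafting ``E`` onto a working directory parented at ``B`` passes ``base = D``, ``E``'s parent, even though ``D`` is not an ancestor of ``B``:

    #  A -- B                 c1 = working directory at B
    #   \                     c2 = E (the grafted revision), base = D
    #    C -- D -- E
    # B is not a descendant of D, so dirtyc1 is True, graft logic engages,
    # and tca falls back to the genuine common ancestor A.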
@@ -329,28 +370,63 b' def mergecopies(repo, c1, c2, ca):'
329 370
330 371 m1 = c1.manifest()
331 372 m2 = c2.manifest()
332 ma = ca.manifest()
373 mb = base.manifest()
333 374
334 copy1, copy2, = {}, {}
335 movewithdir1, movewithdir2 = {}, {}
336 fullcopy1, fullcopy2 = {}, {}
337 diverge = {}
375 # gather data from _checkcopies:
376 # - diverge = record all diverges in this dict
377 # - copy = record all non-divergent copies in this dict
378 # - fullcopy = record all copies in this dict
379 # - incomplete = record non-divergent partial copies here
380 # - incompletediverge = record divergent partial copies here
381 diverge = {} # divergence data is shared
382 incompletediverge = {}
383 data1 = {'copy': {},
384 'fullcopy': {},
385 'incomplete': {},
386 'diverge': diverge,
387 'incompletediverge': incompletediverge,
388 }
389 data2 = {'copy': {},
390 'fullcopy': {},
391 'incomplete': {},
392 'diverge': diverge,
393 'incompletediverge': incompletediverge,
394 }
338 395
339 396 # find interesting file sets from manifests
340 addedinm1 = m1.filesnotin(ma)
341 addedinm2 = m2.filesnotin(ma)
342 u1, u2 = _computenonoverlap(repo, c1, c2, addedinm1, addedinm2)
397 addedinm1 = m1.filesnotin(mb)
398 addedinm2 = m2.filesnotin(mb)
343 399 bothnew = sorted(addedinm1 & addedinm2)
400 if tca == base:
401 # unmatched file from base
402 u1r, u2r = _computenonoverlap(repo, c1, c2, addedinm1, addedinm2)
403 u1u, u2u = u1r, u2r
404 else:
405 # unmatched file from base (DAG rotation in the graft case)
406 u1r, u2r = _computenonoverlap(repo, c1, c2, addedinm1, addedinm2,
407 baselabel='base')
408 # unmatched file from topological common ancestors (no DAG rotation)
409 # need to recompute this for directory move handling when grafting
410 mta = tca.manifest()
411 u1u, u2u = _computenonoverlap(repo, c1, c2, m1.filesnotin(mta),
412 m2.filesnotin(mta),
413 baselabel='topological common ancestor')
344 414
345 for f in u1:
346 checkcopies(c1, f, m1, m2, ca, limit, diverge, copy1, fullcopy1)
415 for f in u1u:
416 _checkcopies(c1, f, m1, m2, base, tca, dirtyc1, limit, data1)
417
418 for f in u2u:
419 _checkcopies(c2, f, m2, m1, base, tca, dirtyc2, limit, data2)
347 420
348 for f in u2:
349 checkcopies(c2, f, m2, m1, ca, limit, diverge, copy2, fullcopy2)
421 copy = dict(data1['copy'].items() + data2['copy'].items())
422 fullcopy = dict(data1['fullcopy'].items() + data2['fullcopy'].items())
350 423
351 copy = dict(copy1.items() + copy2.items())
352 movewithdir = dict(movewithdir1.items() + movewithdir2.items())
353 fullcopy = dict(fullcopy1.items() + fullcopy2.items())
424 if dirtyc1:
425 _combinecopies(data2['incomplete'], data1['incomplete'], copy, diverge,
426 incompletediverge)
427 else:
428 _combinecopies(data1['incomplete'], data2['incomplete'], copy, diverge,
429 incompletediverge)
354 430
355 431 renamedelete = {}
356 432 renamedeleteset = set()
@@ -369,10 +445,44 b' def mergecopies(repo, c1, c2, ca):'
369 445 if bothnew:
370 446 repo.ui.debug(" unmatched files new in both:\n %s\n"
371 447 % "\n ".join(bothnew))
372 bothdiverge, _copy, _fullcopy = {}, {}, {}
448 bothdiverge = {}
449 bothincompletediverge = {}
450 remainder = {}
451 both1 = {'copy': {},
452 'fullcopy': {},
453 'incomplete': {},
454 'diverge': bothdiverge,
455 'incompletediverge': bothincompletediverge
456 }
457 both2 = {'copy': {},
458 'fullcopy': {},
459 'incomplete': {},
460 'diverge': bothdiverge,
461 'incompletediverge': bothincompletediverge
462 }
373 463 for f in bothnew:
374 checkcopies(c1, f, m1, m2, ca, limit, bothdiverge, _copy, _fullcopy)
375 checkcopies(c2, f, m2, m1, ca, limit, bothdiverge, _copy, _fullcopy)
464 _checkcopies(c1, f, m1, m2, base, tca, dirtyc1, limit, both1)
465 _checkcopies(c2, f, m2, m1, base, tca, dirtyc2, limit, both2)
466 if dirtyc1:
467 # incomplete copies may only be found on the "dirty" side for bothnew
468 assert not both2['incomplete']
469 remainder = _combinecopies({}, both1['incomplete'], copy, bothdiverge,
470 bothincompletediverge)
471 elif dirtyc2:
472 assert not both1['incomplete']
473 remainder = _combinecopies({}, both2['incomplete'], copy, bothdiverge,
474 bothincompletediverge)
475 else:
476 # incomplete copies and divergences can't happen outside grafts
477 assert not both1['incomplete']
478 assert not both2['incomplete']
479 assert not bothincompletediverge
480 for f in remainder:
481 assert f not in bothdiverge
482 ic = remainder[f]
483 if ic[0] in (m1 if dirtyc1 else m2):
484 # backed-out rename on one side, but watch out for deleted files
485 bothdiverge[f] = ic
376 486 for of, fl in bothdiverge.items():
377 487 if len(fl) == 2 and fl[0] == fl[1]:
378 488 copy[fl[0]] = of # not actually divergent, just matching renames
@@ -393,7 +503,7 b' def mergecopies(repo, c1, c2, ca):'
393 503 del divergeset
394 504
395 505 if not fullcopy:
396 return copy, movewithdir, diverge, renamedelete
506 return copy, {}, diverge, renamedelete
397 507
398 508 repo.ui.debug(" checking for directory renames\n")
399 509
@@ -431,14 +541,15 b' def mergecopies(repo, c1, c2, ca):'
431 541 del d1, d2, invalid
432 542
433 543 if not dirmove:
434 return copy, movewithdir, diverge, renamedelete
544 return copy, {}, diverge, renamedelete
435 545
436 546 for d in dirmove:
437 547 repo.ui.debug(" discovered dir src: '%s' -> dst: '%s'\n" %
438 548 (d, dirmove[d]))
439 549
550 movewithdir = {}
440 551 # check unaccounted nonoverlapping files against directory moves
441 for f in u1 + u2:
552 for f in u1r + u2r:
442 553 if f not in fullcopy:
443 554 for d in dirmove:
444 555 if f.startswith(d):
@@ -452,55 +563,74 b' def mergecopies(repo, c1, c2, ca):'
452 563
453 564 return copy, movewithdir, diverge, renamedelete
454 565
455 def checkcopies(ctx, f, m1, m2, ca, limit, diverge, copy, fullcopy):
566 def _related(f1, f2, limit):
567 """return True if f1 and f2 filectx have a common ancestor
568
569 Walk back to common ancestor to see if the two files originate
570 from the same file. Since workingfilectx's rev() is None it messes
571 up the integer comparison logic, hence the pre-step check for
572 None (f1 and f2 can only be workingfilectx's initially).
573 """
574
575 if f1 == f2:
576 return f1 # a match
577
578 g1, g2 = f1.ancestors(), f2.ancestors()
579 try:
580 f1r, f2r = f1.linkrev(), f2.linkrev()
581
582 if f1r is None:
583 f1 = next(g1)
584 if f2r is None:
585 f2 = next(g2)
586
587 while True:
588 f1r, f2r = f1.linkrev(), f2.linkrev()
589 if f1r > f2r:
590 f1 = next(g1)
591 elif f2r > f1r:
592 f2 = next(g2)
593 elif f1 == f2:
594 return f1 # a match
595 elif f1r == f2r or f1r < limit or f2r < limit:
596 return False # copy no longer relevant
597 except StopIteration:
598 return False
599
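``_related``, now hoisted to module level, is a two-pointer walk: whichever side sits at the higher linkrev steps back to its next ancestor until the walks meet. The same control flow over plain integers, as a simplified sketch (ints stand in for filectxs; the real function also handles ``None`` linkrevs for the working copy and gives up on linkrev collisions and the ``limit`` cutoff):

    def _relatedrevs(revs1, revs2):
        # revs1/revs2: strictly decreasing revision numbers, newest first
        g1, g2 = iter(revs1), iter(revs2)
        try:
            r1, r2 = next(g1), next(g2)
            while r1 != r2:
                if r1 > r2:
                    r1 = next(g1)  # left walk is ahead, step it back
                else:
                    r2 = next(g2)
            return True   # the walks met on a common ancestor
        except StopIteration:
            return False  # one stream ran out without a meeting point

    assert _relatedrevs([9, 5, 3], [8, 5]) is True
    assert _relatedrevs([9, 4], [8, 3]) is False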
600 def _checkcopies(ctx, f, m1, m2, base, tca, remotebase, limit, data):
456 601 """
457 602 check possible copies of f from m1 to m2
458 603
459 604 ctx = starting context for f in m1
460 f = the filename to check
605 f = the filename to check (as in m1)
461 606 m1 = the source manifest
462 607 m2 = the destination manifest
463 ca = the changectx of the common ancestor
608 base = the changectx used as a merge base
609 tca = topological common ancestor for graft-like scenarios
610 remotebase = True if base is outside tca::ctx, False otherwise
464 611 limit = the rev number to not search beyond
465 diverge = record all diverges in this dict
466 copy = record all non-divergent copies in this dict
467 fullcopy = record all copies in this dict
612 data = dictionary of dictionaries to store copy data. (see mergecopies)
613
614 note: limit is only an optimization, and there is no guarantee that
615 irrelevant revisions will not be visited. There is no easy way to make
616 this algorithm stop in a guaranteed way once it "goes behind a certain
617 revision".
468 618 """
469 619
470 ma = ca.manifest()
620 mb = base.manifest()
621 mta = tca.manifest()
622 # Might be true if this call is about finding backward renames.
623 # This happens in the case of grafts because the DAG is then rotated.
624 # If the file exists in both the base and the source, we are not looking
625 # for a rename on the source side, but on the part of the DAG that is
626 # traversed backwards.
627 #
628 # In the case where there are both backward and forward renames (before and
629 # after the base), this is more complicated, as we must detect a divergence.
630 # We use 'backwards = False' in that case.
631 backwards = not remotebase and base != tca and f in mb
471 632 getfctx = _makegetfctx(ctx)
472 633
473 def _related(f1, f2, limit):
474 # Walk back to common ancestor to see if the two files originate
475 # from the same file. Since workingfilectx's rev() is None it messes
476 # up the integer comparison logic, hence the pre-step check for
477 # None (f1 and f2 can only be workingfilectx's initially).
478
479 if f1 == f2:
480 return f1 # a match
481
482 g1, g2 = f1.ancestors(), f2.ancestors()
483 try:
484 f1r, f2r = f1.linkrev(), f2.linkrev()
485
486 if f1r is None:
487 f1 = next(g1)
488 if f2r is None:
489 f2 = next(g2)
490
491 while True:
492 f1r, f2r = f1.linkrev(), f2.linkrev()
493 if f1r > f2r:
494 f1 = next(g1)
495 elif f2r > f1r:
496 f2 = next(g2)
497 elif f1 == f2:
498 return f1 # a match
499 elif f1r == f2r or f1r < limit or f2r < limit:
500 return False # copy no longer relevant
501 except StopIteration:
502 return False
503
504 634 of = None
505 635 seen = set([f])
506 636 for oc in getfctx(f, m1[f]).ancestors():
@@ -513,20 +643,47 b' def checkcopies(ctx, f, m1, m2, ca, limi'
513 643 continue
514 644 seen.add(of)
515 645
516 fullcopy[f] = of # remember for dir rename detection
646 # remember for dir rename detection
647 if backwards:
648 data['fullcopy'][of] = f # grafting backwards through renames
649 else:
650 data['fullcopy'][f] = of
517 651 if of not in m2:
518 652 continue # no match, keep looking
519 if m2[of] == ma.get(of):
520 break # no merge needed, quit early
653 if m2[of] == mb.get(of):
654 return # no merge needed, quit early
521 655 c2 = getfctx(of, m2[of])
522 cr = _related(oc, c2, ca.rev())
656 # c2 might be a plain new file added on the destination side that is
657 # unrelated to the droids we are looking for.
658 cr = _related(oc, c2, tca.rev())
523 659 if cr and (of == f or of == c2.path()): # non-divergent
524 copy[f] = of
525 of = None
526 break
660 if backwards:
661 data['copy'][of] = f
662 elif of in mb:
663 data['copy'][f] = of
664 elif remotebase: # special case: a <- b <- a -> b "ping-pong" rename
665 data['copy'][of] = f
666 del data['fullcopy'][f]
667 data['fullcopy'][of] = f
668 else: # divergence w.r.t. graft CA on one side of topological CA
669 for sf in seen:
670 if sf in mb:
671 assert sf not in data['diverge']
672 data['diverge'][sf] = [f, of]
673 break
674 return
527 675
528 if of in ma:
529 diverge.setdefault(of, []).append(f)
676 if of in mta:
677 if backwards or remotebase:
678 data['incomplete'][of] = f
679 else:
680 for sf in seen:
681 if sf in mb:
682 if tca == base:
683 data['diverge'].setdefault(sf, []).append(f)
684 else:
685 data['incompletediverge'][sf] = [of, f]
686 return
530 687
531 688 def duplicatecopies(repo, rev, fromrev, skiprev=None):
532 689 '''reproduce copies from fromrev to rev in the dirstate
@@ -28,7 +28,7 b' stringio = util.stringio'
28 28
29 29 # This is required for ncurses to display non-ASCII characters in default user
30 30 # locale encoding correctly. --immerrr
31 locale.setlocale(locale.LC_ALL, '')
31 locale.setlocale(locale.LC_ALL, u'')
32 32
33 33 # patch comments based on the git one
34 34 diffhelptext = _("""# To remove '-' lines, make them ' ' lines (context).
@@ -719,7 +719,7 b' class curseschunkselector(object):'
719 719 "scroll the screen to fully show the currently-selected"
720 720 selstart = self.selecteditemstartline
721 721 selend = self.selecteditemendline
722 #selnumlines = selend - selstart
722
723 723 padstart = self.firstlineofpadtoprint
724 724 padend = padstart + self.yscreensize - self.numstatuslines - 1
725 725 # 'buffered' pad start/end values which scroll with a certain
@@ -1263,7 +1263,6 b' class curseschunkselector(object):'
1263 1263 self.statuswin.resize(self.numstatuslines, self.xscreensize)
1264 1264 self.numpadlines = self.getnumlinesdisplayed(ignorefolding=True) + 1
1265 1265 self.chunkpad = curses.newpad(self.numpadlines, self.xscreensize)
1266 # todo: try to resize commit message window if possible
1267 1266 except curses.error:
1268 1267 pass
1269 1268
@@ -1339,6 +1338,7 b' the following are valid keystrokes:'
1339 1338 shift-left-arrow [H] : go to parent header / fold selected header
1340 1339 f : fold / unfold item, hiding/revealing its children
1341 1340 F : fold / unfold parent item and all of its ancestors
1341 ctrl-l : scroll the selected line to the top of the screen
1342 1342 m : edit / resume editing the commit message
1343 1343 e : edit the currently selected hunk
1344 1344 a : toggle amend mode, only with commit -i
@@ -1583,13 +1583,17 b' are you sure you want to review/edit and'
1583 1583 self.helpwindow()
1584 1584 self.stdscr.clear()
1585 1585 self.stdscr.refresh()
1586 elif curses.unctrl(keypressed) in ["^L"]:
1587 # scroll the current line to the top of the screen
1588 self.scrolllines(self.selecteditemstartline)
1586 1589
1587 1590 def main(self, stdscr):
1588 1591 """
1589 1592 method to be wrapped by curses.wrapper() for selecting chunks.
1590 1593 """
1591 1594
1592 signal.signal(signal.SIGWINCH, self.sigwinchhandler)
1595 origsigwinchhandler = signal.signal(signal.SIGWINCH,
1596 self.sigwinchhandler)
1593 1597 self.stdscr = stdscr
1594 1598 # error during initialization, cannot be printed in the curses
1595 1599 # interface, it should be printed by the calling code
@@ -1640,3 +1644,4 b' are you sure you want to review/edit and'
1640 1644 keypressed = "foobar"
1641 1645 if self.handlekeypressed(keypressed):
1642 1646 break
1647 signal.signal(signal.SIGWINCH, origsigwinchhandler)
@@ -64,8 +64,12 b' def _hgextimport(importfunc, name, globa'
64 64 return importfunc(hgextname, globals, *args, **kwargs)
65 65
66 66 class _demandmod(object):
67 """module demand-loader and proxy"""
68 def __init__(self, name, globals, locals, level=level):
67 """module demand-loader and proxy
68
69 Specify 1 as the 'level' argument at construction time to import
70 the module relatively.
71 """
72 def __init__(self, name, globals, locals, level):
69 73 if '.' in name:
70 74 head, rest = name.split('.', 1)
71 75 after = [rest]
@@ -117,7 +121,8 b' class _demandmod(object):'
117 121 if '.' in p:
118 122 h, t = p.split('.', 1)
119 123 if getattr(mod, h, nothing) is nothing:
120 setattr(mod, h, _demandmod(p, mod.__dict__, mod.__dict__))
124 setattr(mod, h, _demandmod(p, mod.__dict__, mod.__dict__,
125 level=1))
121 126 elif t:
122 127 subload(getattr(mod, h), t)
123 128
@@ -186,11 +191,16 b' def _demandimport(name, globals=None, lo'
186 191 def processfromitem(mod, attr):
187 192 """Process an imported symbol in the import statement.
188 193
189 If the symbol doesn't exist in the parent module, it must be a
190 module. We set missing modules up as _demandmod instances.
194 If the symbol doesn't exist in the parent module, and if the
195 parent module is a package, it must be a module. We set missing
196 modules up as _demandmod instances.
191 197 """
192 198 symbol = getattr(mod, attr, nothing)
199 nonpkg = getattr(mod, '__path__', nothing) is nothing
193 200 if symbol is nothing:
201 if nonpkg:
202 # do not try relative import, which would raise ValueError
203 raise ImportError('cannot import name %s' % attr)
194 204 mn = '%s.%s' % (mod.__name__, attr)
195 205 if mn in ignore:
196 206 importfunc = _origimport
@@ -210,8 +220,8 b' def _demandimport(name, globals=None, lo'
210 220 mod = rootmod
211 221 for comp in modname.split('.')[1:]:
212 222 if getattr(mod, comp, nothing) is nothing:
213 setattr(mod, comp,
214 _demandmod(comp, mod.__dict__, mod.__dict__))
223 setattr(mod, comp, _demandmod(comp, mod.__dict__,
224 mod.__dict__, level=1))
215 225 mod = getattr(mod, comp)
216 226 return mod
217 227
@@ -259,6 +269,7 b' ignore = ['
259 269 '_imp',
260 270 '_xmlplus',
261 271 'fcntl',
272 'nt', # pathlib2 tests the existence of built-in 'nt' module
262 273 'win32com.gen_py',
263 274 '_winreg', # 2.7 mimetypes needs immediate ImportError
264 275 'pythoncom',
@@ -279,9 +290,17 b' ignore = ['
279 290 'mimetools',
280 291 'sqlalchemy.events', # has import-time side effects (issue5085)
281 292 # setuptools 8 expects this module to explode early when not on windows
282 'distutils.msvc9compiler'
293 'distutils.msvc9compiler',
294 '__builtin__',
295 'builtins',
283 296 ]
284 297
298 if _pypy:
299 ignore.extend([
300 # _ctypes.pointer is shadowed by "from ... import pointer" (PyPy 5)
301 '_ctypes.pointer',
302 ])
303
285 304 def isenabled():
286 305 return builtins.__import__ == _demandimport
287 306
@@ -416,8 +416,8 b' def _statusotherbranchheads(ui, repo):'
416 416 'updating to a closed head\n') %
417 417 (currentbranch))
418 418 if otherheads:
419 ui.warn(_('(committing will reopen the head, '
420 'use `hg heads .` to see %i other heads)\n') %
419 ui.warn(_("(committing will reopen the head, "
420 "use 'hg heads .' to see %i other heads)\n") %
421 421 (len(otherheads)))
422 422 else:
423 423 ui.warn(_('(committing will reopen branch "%s")\n') %
@@ -11,6 +11,12 b''
11 11 #include <Python.h>
12 12 #include "util.h"
13 13
14 #ifdef IS_PY3K
15 #define PYLONG_VALUE(o) ((PyLongObject *)o)->ob_digit[1]
16 #else
17 #define PYLONG_VALUE(o) PyInt_AS_LONG(o)
18 #endif
19
14 20 /*
15 21 * This is a multiset of directory names, built from the files that
16 22 * appear in a dirstate or manifest.
@@ -41,11 +47,20 b' static inline Py_ssize_t _finddir(const '
41 47
42 48 static int _addpath(PyObject *dirs, PyObject *path)
43 49 {
44 const char *cpath = PyString_AS_STRING(path);
45 Py_ssize_t pos = PyString_GET_SIZE(path);
50 const char *cpath = PyBytes_AS_STRING(path);
51 Py_ssize_t pos = PyBytes_GET_SIZE(path);
46 52 PyObject *key = NULL;
47 53 int ret = -1;
48 54
55 /* This loop is super critical for performance. That's why we inline
56 * access to Python structs instead of going through a supported API.
57 * The implementation, therefore, is heavily dependent on CPython
58 * implementation details. We also commit violations of the Python
59 * "protocol" such as mutating immutable objects. But since we only
60 * mutate objects created in this function or in other well-defined
61 * locations, the references are known so these violations should go
62 * unnoticed. The code for adjusting the length of a PyBytesObject is
63 * essentially a minimal version of _PyBytes_Resize. */
49 64 while ((pos = _finddir(cpath, pos - 1)) != -1) {
50 65 PyObject *val;
51 66
@@ -53,30 +68,36 b' static int _addpath(PyObject *dirs, PyOb'
53 68 in our dict. Try to avoid allocating and
54 69 deallocating a string for each prefix we check. */
55 70 if (key != NULL)
56 ((PyStringObject *)key)->ob_shash = -1;
71 ((PyBytesObject *)key)->ob_shash = -1;
57 72 else {
58 73 /* Force Python to not reuse a small shared string. */
59 key = PyString_FromStringAndSize(cpath,
74 key = PyBytes_FromStringAndSize(cpath,
60 75 pos < 2 ? 2 : pos);
61 76 if (key == NULL)
62 77 goto bail;
63 78 }
64 PyString_GET_SIZE(key) = pos;
65 PyString_AS_STRING(key)[pos] = '\0';
79 /* Py_SIZE(o) refers to the ob_size member of the struct. Yes,
80 * assigning to what looks like a function seems wrong. */
81 Py_SIZE(key) = pos;
82 ((PyBytesObject *)key)->ob_sval[pos] = '\0';
66 83
67 84 val = PyDict_GetItem(dirs, key);
68 85 if (val != NULL) {
69 PyInt_AS_LONG(val) += 1;
86 PYLONG_VALUE(val) += 1;
70 87 break;
71 88 }
72 89
73 90 /* Force Python to not reuse a small shared int. */
91 #ifdef IS_PY3K
92 val = PyLong_FromLong(0x1eadbeef);
93 #else
74 94 val = PyInt_FromLong(0x1eadbeef);
95 #endif
75 96
76 97 if (val == NULL)
77 98 goto bail;
78 99
79 PyInt_AS_LONG(val) = 1;
100 PYLONG_VALUE(val) = 1;
80 101 ret = PyDict_SetItem(dirs, key, val);
81 102 Py_DECREF(val);
82 103 if (ret == -1)
@@ -93,15 +114,15 b' bail:'
93 114
94 115 static int _delpath(PyObject *dirs, PyObject *path)
95 116 {
96 char *cpath = PyString_AS_STRING(path);
97 Py_ssize_t pos = PyString_GET_SIZE(path);
117 char *cpath = PyBytes_AS_STRING(path);
118 Py_ssize_t pos = PyBytes_GET_SIZE(path);
98 119 PyObject *key = NULL;
99 120 int ret = -1;
100 121
101 122 while ((pos = _finddir(cpath, pos - 1)) != -1) {
102 123 PyObject *val;
103 124
104 key = PyString_FromStringAndSize(cpath, pos);
125 key = PyBytes_FromStringAndSize(cpath, pos);
105 126
106 127 if (key == NULL)
107 128 goto bail;
@@ -113,7 +134,7 b' static int _delpath(PyObject *dirs, PyOb'
113 134 goto bail;
114 135 }
115 136
116 if (--PyInt_AS_LONG(val) <= 0) {
137 if (--PYLONG_VALUE(val) <= 0) {
117 138 if (PyDict_DelItem(dirs, key) == -1)
118 139 goto bail;
119 140 } else
@@ -134,7 +155,7 b' static int dirs_fromdict(PyObject *dirs,'
134 155 Py_ssize_t pos = 0;
135 156
136 157 while (PyDict_Next(source, &pos, &key, &value)) {
137 if (!PyString_Check(key)) {
158 if (!PyBytes_Check(key)) {
138 159 PyErr_SetString(PyExc_TypeError, "expected string key");
139 160 return -1;
140 161 }
@@ -165,7 +186,7 b' static int dirs_fromiter(PyObject *dirs,'
165 186 return -1;
166 187
167 188 while ((item = PyIter_Next(iter)) != NULL) {
168 if (!PyString_Check(item)) {
189 if (!PyBytes_Check(item)) {
169 190 PyErr_SetString(PyExc_TypeError, "expected string");
170 191 break;
171 192 }
@@ -224,7 +245,7 b' PyObject *dirs_addpath(dirsObject *self,'
224 245 {
225 246 PyObject *path;
226 247
227 if (!PyArg_ParseTuple(args, "O!:addpath", &PyString_Type, &path))
248 if (!PyArg_ParseTuple(args, "O!:addpath", &PyBytes_Type, &path))
228 249 return NULL;
229 250
230 251 if (_addpath(self->dict, path) == -1)
@@ -237,7 +258,7 b' static PyObject *dirs_delpath(dirsObject'
237 258 {
238 259 PyObject *path;
239 260
240 if (!PyArg_ParseTuple(args, "O!:delpath", &PyString_Type, &path))
261 if (!PyArg_ParseTuple(args, "O!:delpath", &PyBytes_Type, &path))
241 262 return NULL;
242 263
243 264 if (_delpath(self->dict, path) == -1)
@@ -248,7 +269,7 b' static PyObject *dirs_delpath(dirsObject'
248 269
249 270 static int dirs_contains(dirsObject *self, PyObject *value)
250 271 {
251 return PyString_Check(value) ? PyDict_Contains(self->dict, value) : 0;
272 return PyBytes_Check(value) ? PyDict_Contains(self->dict, value) : 0;
252 273 }
253 274
254 275 static void dirs_dealloc(dirsObject *self)
@@ -270,7 +291,7 b' static PyMethodDef dirs_methods[] = {'
270 291 {NULL} /* Sentinel */
271 292 };
272 293
273 static PyTypeObject dirsType = { PyObject_HEAD_INIT(NULL) };
294 static PyTypeObject dirsType = { PyVarObject_HEAD_INIT(NULL, 0) };
274 295
275 296 void dirs_module_init(PyObject *mod)
276 297 {
@@ -74,8 +74,6 b' def _trypending(root, vfs, filename):'
74 74 raise
75 75 return (vfs(filename), False)
76 76
77 _token = object()
78
79 77 class dirstate(object):
80 78
81 79 def __init__(self, opener, ui, root, validate):
@@ -103,6 +101,8 b' class dirstate(object):'
103 101 self._parentwriters = 0
104 102 self._filename = 'dirstate'
105 103 self._pendingfilename = '%s.pending' % self._filename
104 self._plchangecallbacks = {}
105 self._origpl = None
106 106
107 107 # for consistent view between _pl() and _read() invocations
108 108 self._pendingmode = None
@@ -227,7 +227,7 b' class dirstate(object):'
227 227
228 228 @propertycache
229 229 def _checkcase(self):
230 return not util.checkcase(self._join('.hg'))
230 return not util.fscasesensitive(self._join('.hg'))
231 231
232 232 def _join(self, f):
233 233 # much faster than os.path.join()
@@ -349,6 +349,8 b' class dirstate(object):'
349 349
350 350 self._dirty = self._dirtypl = True
351 351 oldp2 = self._pl[1]
352 if self._origpl is None:
353 self._origpl = self._pl
352 354 self._pl = p1, p2
353 355 copies = {}
354 356 if oldp2 != nullid and p2 == nullid:
@@ -444,6 +446,7 b' class dirstate(object):'
444 446 self._lastnormaltime = 0
445 447 self._dirty = False
446 448 self._parentwriters = 0
449 self._origpl = None
447 450
448 451 def copy(self, source, dest):
449 452 """Mark dest as a copy of source. Unmark dest if source is None."""
@@ -677,37 +680,23 b' class dirstate(object):'
677 680 self.clear()
678 681 self._lastnormaltime = lastnormaltime
679 682
683 if self._origpl is None:
684 self._origpl = self._pl
685 self._pl = (parent, nullid)
680 686 for f in changedfiles:
681 mode = 0o666
682 if f in allfiles and 'x' in allfiles.flags(f):
683 mode = 0o777
684
685 687 if f in allfiles:
686 self._map[f] = dirstatetuple('n', mode, -1, 0)
688 self.normallookup(f)
687 689 else:
688 self._map.pop(f, None)
689 if f in self._nonnormalset:
690 self._nonnormalset.remove(f)
690 self.drop(f)
691 691
692 self._pl = (parent, nullid)
693 692 self._dirty = True
694 693
695 def write(self, tr=_token):
694 def write(self, tr):
696 695 if not self._dirty:
697 696 return
698 697
699 698 filename = self._filename
700 if tr is _token: # not explicitly specified
701 self._ui.deprecwarn('use dirstate.write with '
702 'repo.currenttransaction()',
703 '3.9')
704
705 if self._opener.lexists(self._pendingfilename):
706 # if pending file already exists, in-memory changes
707 # should be written into it, because it has priority
708 # to '.hg/dirstate' at reading under HG_PENDING mode
709 filename = self._pendingfilename
710 elif tr:
699 if tr:
711 700 # 'dirstate.write()' is not only for writing in-memory
712 701 # changes out, but also for dropping ambiguous timestamp.
713 702 # delayed writing re-raise "ambiguous timestamp issue".
@@ -733,7 +722,23 b' class dirstate(object):'
733 722 st = self._opener(filename, "w", atomictemp=True, checkambig=True)
734 723 self._writedirstate(st)
735 724
725 def addparentchangecallback(self, category, callback):
726 """add a callback to be called when the wd parents are changed
727
728 Callback will be called with the following arguments:
729 dirstate, (oldp1, oldp2), (newp1, newp2)
730
731 Category is a unique identifier to allow overwriting an old callback
732 with a newer callback.
733 """
734 self._plchangecallbacks[category] = callback
735
736 736 def _writedirstate(self, st):
737 # notify callbacks about parents change
738 if self._origpl is not None and self._origpl != self._pl:
739 for c, callback in sorted(self._plchangecallbacks.iteritems()):
740 callback(self, self._origpl, self._pl)
741 self._origpl = None
737 742 # use the modification time of the newly created temporary file as the
738 743 # filesystem's notion of 'now'
739 744 now = util.fstat(st).st_mtime & _rangemask
@@ -76,10 +76,29 b' class outgoing(object):'
76 76 The sets are computed on demand from the heads, unless provided upfront
77 77 by discovery.'''
78 78
79 def __init__(self, revlog, commonheads, missingheads):
79 def __init__(self, repo, commonheads=None, missingheads=None,
80 missingroots=None):
81 # at least one of them must not be set
82 assert None in (commonheads, missingroots)
83 cl = repo.changelog
84 if missingheads is None:
85 missingheads = cl.heads()
86 if missingroots:
87 discbases = []
88 for n in missingroots:
89 discbases.extend([p for p in cl.parents(n) if p != nullid])
90 # TODO remove call to nodesbetween.
91 # TODO populate attributes on outgoing instance instead of setting
92 # discbases.
93 csets, roots, heads = cl.nodesbetween(missingroots, missingheads)
94 included = set(csets)
95 missingheads = heads
96 commonheads = [n for n in discbases if n not in included]
97 elif not commonheads:
98 commonheads = [nullid]
80 99 self.commonheads = commonheads
81 100 self.missingheads = missingheads
82 self._revlog = revlog
101 self._revlog = cl
83 102 self._common = None
84 103 self._missing = None
85 104 self.excluded = []
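With the constructor reworked above, callers can describe an outgoing set by its roots instead of precomputed common heads. A sketch of the new usage (the revision is hypothetical):

    # sketch: "everything from this root up to the repository heads"
    node = repo['feature-start'].node()  # hypothetical starting revision
    out = discovery.outgoing(repo, missingroots=[node])
    # out.missing and out.common are then computed on demand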
@@ -116,7 +135,7 b' def findcommonoutgoing(repo, other, only'
116 135 If portable is given, compute more conservative common and missingheads,
117 136 to make bundles created from the instance more portable.'''
118 137 # declare an empty outgoing object to be filled later
119 og = outgoing(repo.changelog, None, None)
138 og = outgoing(repo, None, None)
120 139
121 140 # get common set if not provided
122 141 if commoninc is None:
@@ -382,7 +401,7 b' def checkheads(pushop):'
382 401 errormsg = (_("push creates new branch '%s' "
383 402 "with multiple heads") % (branch))
384 403 hint = _("merge or"
385 " see \"hg help push\" for details about"
404 " see 'hg help push' for details about"
386 405 " pushing new heads")
387 406 elif len(newhs) > len(oldhs):
388 407 # remove bookmarked or existing remote heads from the new heads list
@@ -401,11 +420,11 b' def checkheads(pushop):'
401 420 ) % short(dhs[0])
402 421 if unsyncedheads:
403 422 hint = _("pull and merge or"
404 " see \"hg help push\" for details about"
423 " see 'hg help push' for details about"
405 424 " pushing new heads")
406 425 else:
407 426 hint = _("merge or"
408 " see \"hg help push\" for details about"
427 " see 'hg help push' for details about"
409 428 " pushing new heads")
410 429 if branch is None:
411 430 repo.ui.note(_("new remote heads:\n"))
@@ -34,6 +34,7 b' from . import ('
34 34 fileset,
35 35 hg,
36 36 hook,
37 profiling,
37 38 revset,
38 39 templatefilters,
39 40 templatekw,
@@ -150,7 +151,7 b' def _runcatch(req):'
150 151 except ValueError:
151 152 pass # happens if called in a thread
152 153
153 try:
154 def _runcatchfunc():
154 155 try:
155 156 debugger = 'pdb'
156 157 debugtrace = {
@@ -212,6 +213,16 b' def _runcatch(req):'
212 213 ui.traceback()
213 214 raise
214 215
216 return callcatch(ui, _runcatchfunc)
217
218 def callcatch(ui, func):
219 """call func() with global exception handling
220
221 return func() if no exception happens. otherwise do some error handling
222 and return an exit code accordingly.
223 """
224 try:
225 return func()
215 226 # Global exception handling, alphabetically
216 227 # Mercurial-specific first, followed by built-in and library exceptions
217 228 except error.AmbiguousCommand as inst:
@@ -489,6 +500,8 b' class cmdalias(object):'
489 500 ui.debug("alias '%s' shadows command '%s'\n" %
490 501 (self.name, self.cmdname))
491 502
503 ui.log('commandalias', "alias '%s' expands to '%s'\n",
504 self.name, self.definition)
492 505 if util.safehasattr(self, 'shell'):
493 506 return self.fn(ui, *args, **opts)
494 507 else:
@@ -545,7 +558,7 b' def _parse(ui, args):'
545 558 c.append((o[0], o[1], options[o[1]], o[3]))
546 559
547 560 try:
548 args = fancyopts.fancyopts(args, c, cmdoptions, True)
561 args = fancyopts.fancyopts(args, c, cmdoptions, gnu=True)
549 562 except fancyopts.getopt.GetoptError as inst:
550 563 raise error.CommandError(cmd, inst)
551 564
@@ -761,7 +774,8 b' def _dispatch(req):'
761 774 # Check abbreviation/ambiguity of shell alias.
762 775 shellaliasfn = _checkshellalias(lui, ui, args)
763 776 if shellaliasfn:
764 return shellaliasfn()
777 with profiling.maybeprofile(lui):
778 return shellaliasfn()
765 779
766 780 # check for fallback encoding
767 781 fallback = lui.config('ui', 'fallbackencoding')
@@ -808,6 +822,10 b' def _dispatch(req):'
808 822 for ui_ in uis:
809 823 ui_.setconfig('ui', opt, val, '--' + opt)
810 824
825 if options['profile']:
826 for ui_ in uis:
827 ui_.setconfig('profiling', 'enabled', 'true', '--profile')
828
811 829 if options['traceback']:
812 830 for ui_ in uis:
813 831 ui_.setconfig('ui', 'traceback', 'on', '--traceback')
@@ -827,187 +845,70 b' def _dispatch(req):'
827 845 elif not cmd:
828 846 return commands.help_(ui, 'shortlist')
829 847
830 repo = None
831 cmdpats = args[:]
832 if not _cmdattr(ui, cmd, func, 'norepo'):
833 # use the repo from the request only if we don't have -R
834 if not rpath and not cwd:
835 repo = req.repo
836
837 if repo:
838 # set the descriptors of the repo ui to those of ui
839 repo.ui.fin = ui.fin
840 repo.ui.fout = ui.fout
841 repo.ui.ferr = ui.ferr
842 else:
843 try:
844 repo = hg.repository(ui, path=path)
845 if not repo.local():
846 raise error.Abort(_("repository '%s' is not local") % path)
847 repo.ui.setconfig("bundle", "mainreporoot", repo.root, 'repo')
848 except error.RequirementError:
849 raise
850 except error.RepoError:
851 if rpath and rpath[-1]: # invalid -R path
852 raise
853 if not _cmdattr(ui, cmd, func, 'optionalrepo'):
854 if (_cmdattr(ui, cmd, func, 'inferrepo') and
855 args and not path):
856 # try to infer -R from command args
857 repos = map(cmdutil.findrepo, args)
858 guess = repos[0]
859 if guess and repos.count(guess) == len(repos):
860 req.args = ['--repository', guess] + fullargs
861 return _dispatch(req)
862 if not path:
863 raise error.RepoError(_("no repository found in '%s'"
864 " (.hg not found)")
865 % os.getcwd())
866 raise
867 if repo:
868 ui = repo.ui
869 if options['hidden']:
870 repo = repo.unfiltered()
871 args.insert(0, repo)
872 elif rpath:
873 ui.warn(_("warning: --repository ignored\n"))
874
875 msg = ' '.join(' ' in a and repr(a) or a for a in fullargs)
876 ui.log("command", '%s\n', msg)
877 d = lambda: util.checksignature(func)(ui, *args, **cmdoptions)
878 try:
879 return runcommand(lui, repo, cmd, fullargs, ui, options, d,
880 cmdpats, cmdoptions)
881 finally:
882 if repo and repo != req.repo:
883 repo.close()
884
885 def lsprofile(ui, func, fp):
886 format = ui.config('profiling', 'format', default='text')
887 field = ui.config('profiling', 'sort', default='inlinetime')
888 limit = ui.configint('profiling', 'limit', default=30)
889 climit = ui.configint('profiling', 'nested', default=0)
890
891 if format not in ['text', 'kcachegrind']:
892 ui.warn(_("unrecognized profiling format '%s'"
893 " - Ignored\n") % format)
894 format = 'text'
848 with profiling.maybeprofile(lui):
849 repo = None
850 cmdpats = args[:]
851 if not _cmdattr(ui, cmd, func, 'norepo'):
852 # use the repo from the request only if we don't have -R
853 if not rpath and not cwd:
854 repo = req.repo
895 855
896 try:
897 from . import lsprof
898 except ImportError:
899 raise error.Abort(_(
900 'lsprof not available - install from '
901 'http://codespeak.net/svn/user/arigo/hack/misc/lsprof/'))
902 p = lsprof.Profiler()
903 p.enable(subcalls=True)
904 try:
905 return func()
906 finally:
907 p.disable()
908
909 if format == 'kcachegrind':
910 from . import lsprofcalltree
911 calltree = lsprofcalltree.KCacheGrind(p)
912 calltree.output(fp)
913 else:
914 # format == 'text'
915 stats = lsprof.Stats(p.getstats())
916 stats.sort(field)
917 stats.pprint(limit=limit, file=fp, climit=climit)
856 if repo:
857 # set the descriptors of the repo ui to those of ui
858 repo.ui.fin = ui.fin
859 repo.ui.fout = ui.fout
860 repo.ui.ferr = ui.ferr
861 else:
862 try:
863 repo = hg.repository(ui, path=path)
864 if not repo.local():
865 raise error.Abort(_("repository '%s' is not local")
866 % path)
867 repo.ui.setconfig("bundle", "mainreporoot", repo.root,
868 'repo')
869 except error.RequirementError:
870 raise
871 except error.RepoError:
872 if rpath and rpath[-1]: # invalid -R path
873 raise
874 if not _cmdattr(ui, cmd, func, 'optionalrepo'):
875 if (_cmdattr(ui, cmd, func, 'inferrepo') and
876 args and not path):
877 # try to infer -R from command args
878 repos = map(cmdutil.findrepo, args)
879 guess = repos[0]
880 if guess and repos.count(guess) == len(repos):
881 req.args = ['--repository', guess] + fullargs
882 return _dispatch(req)
883 if not path:
884 raise error.RepoError(_("no repository found in"
885 " '%s' (.hg not found)")
886 % os.getcwd())
887 raise
888 if repo:
889 ui = repo.ui
890 if options['hidden']:
891 repo = repo.unfiltered()
892 args.insert(0, repo)
893 elif rpath:
894 ui.warn(_("warning: --repository ignored\n"))
918 895
919 def flameprofile(ui, func, fp):
920 try:
921 from flamegraph import flamegraph
922 except ImportError:
923 raise error.Abort(_(
924 'flamegraph not available - install from '
925 'https://github.com/evanhempel/python-flamegraph'))
926 # developer config: profiling.freq
927 freq = ui.configint('profiling', 'freq', default=1000)
928 filter_ = None
929 collapse_recursion = True
930 thread = flamegraph.ProfileThread(fp, 1.0 / freq,
931 filter_, collapse_recursion)
932 start_time = time.clock()
933 try:
934 thread.start()
935 func()
936 finally:
937 thread.stop()
938 thread.join()
939 print('Collected %d stack frames (%d unique) in %2.2f seconds.' % (
940 time.clock() - start_time, thread.num_frames(),
941 thread.num_frames(unique=True)))
942
943
944 def statprofile(ui, func, fp):
945 try:
946 import statprof
947 except ImportError:
948 raise error.Abort(_(
949 'statprof not available - install using "easy_install statprof"'))
950
951 freq = ui.configint('profiling', 'freq', default=1000)
952 if freq > 0:
953 statprof.reset(freq)
954 else:
955 ui.warn(_("invalid sampling frequency '%s' - ignoring\n") % freq)
956
957 statprof.start()
958 try:
959 return func()
960 finally:
961 statprof.stop()
962 statprof.display(fp)
896 msg = ' '.join(' ' in a and repr(a) or a for a in fullargs)
897 ui.log("command", '%s\n', msg)
898 d = lambda: util.checksignature(func)(ui, *args, **cmdoptions)
899 try:
900 return runcommand(lui, repo, cmd, fullargs, ui, options, d,
901 cmdpats, cmdoptions)
902 finally:
903 if repo and repo != req.repo:
904 repo.close()
963 905
964 906 def _runcommand(ui, options, cmd, cmdfunc):
965 """Enables the profiler if applicable.
966
967 ``profiling.enabled`` - boolean config that enables or disables profiling
968 """
969 def checkargs():
970 try:
971 return cmdfunc()
972 except error.SignatureError:
973 raise error.CommandError(cmd, _("invalid arguments"))
974
975 if options['profile'] or ui.configbool('profiling', 'enabled'):
976 profiler = os.getenv('HGPROF')
977 if profiler is None:
978 profiler = ui.config('profiling', 'type', default='ls')
979 if profiler not in ('ls', 'stat', 'flame'):
980 ui.warn(_("unrecognized profiler '%s' - ignored\n") % profiler)
981 profiler = 'ls'
982
983 output = ui.config('profiling', 'output')
984
985 if output == 'blackbox':
986 fp = util.stringio()
987 elif output:
988 path = ui.expandpath(output)
989 fp = open(path, 'wb')
990 else:
991 fp = sys.stderr
992
993 try:
994 if profiler == 'ls':
995 return lsprofile(ui, checkargs, fp)
996 elif profiler == 'flame':
997 return flameprofile(ui, checkargs, fp)
998 else:
999 return statprofile(ui, checkargs, fp)
1000 finally:
1001 if output:
1002 if output == 'blackbox':
1003 val = "Profile:\n%s" % fp.getvalue()
1004 # ui.log treats the input as a format string,
1005 # so we need to escape any % signs.
1006 val = val.replace('%', '%%')
1007 ui.log('profile', val)
1008 fp.close()
1009 else:
1010 return checkargs()
907 """Run a command function, possibly with profiling enabled."""
908 try:
909 return cmdfunc()
910 except error.SignatureError:
911 raise error.CommandError(cmd, _('invalid arguments'))
1011 912
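All of the profiler plumbing deleted above now lives in the ``profiling`` module and is engaged once via ``profiling.maybeprofile`` (see the ``with`` statements added to ``_dispatch``). A sketch of how a caller opts in (``runwhatever`` is a hypothetical stand-in):

    # sketch: profiling is now a context manager around the dispatch path
    from mercurial import profiling
    with profiling.maybeprofile(ui):
        runwhatever()  # profiled only if --profile or profiling.enabled is set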
1012 913 def _exceptionwarning(ui):
1013 914 """Produce a warning message for the current active exception"""
@@ -1031,7 +932,7 b' def _exceptionwarning(ui):'
1031 932 break
1032 933
1033 934 # Never blame on extensions bundled with Mercurial.
1034 if testedwith == 'internal':
935 if extensions.ismoduleinternal(mod):
1035 936 continue
1036 937
1037 938 tested = [util.versiontuple(t, 2) for t in testedwith.split()]
@@ -10,14 +10,16 b' from __future__ import absolute_import'
10 10 import array
11 11 import locale
12 12 import os
13 import sys
14 13 import unicodedata
15 14
16 15 from . import (
17 16 error,
17 pycompat,
18 18 )
19 19
20 if sys.version_info[0] >= 3:
20 _sysstr = pycompat.sysstr
21
22 if pycompat.ispy3:
21 23 unichr = chr
22 24
23 25 # These unicode characters are ignored by HFS+ (Apple Technote 1150,
@@ -27,7 +29,7 b' if sys.version_info[0] >= 3:'
27 29 "200c 200d 200e 200f 202a 202b 202c 202d 202e "
28 30 "206a 206b 206c 206d 206e 206f feff".split()]
29 31 # verify the next function will work
30 if sys.version_info[0] >= 3:
32 if pycompat.ispy3:
31 33 assert set(i[0] for i in _ignore) == set([ord(b'\xe2'), ord(b'\xef')])
32 34 else:
33 35 assert set(i[0] for i in _ignore) == set(["\xe2", "\xef"])
@@ -45,6 +47,19 b' def hfsignoreclean(s):'
45 47 s = s.replace(c, '')
46 48 return s
47 49
50 # encoding.environ is provided read-only; it may not be used to modify
51 # the process environment
52 _nativeenviron = (not pycompat.ispy3 or os.supports_bytes_environ)
53 if not pycompat.ispy3:
54 environ = os.environ
55 elif _nativeenviron:
56 environ = os.environb
57 else:
58 # preferred encoding isn't known yet; use utf-8 to avoid unicode error
59 # and recreate it once encoding is settled
60 environ = dict((k.encode(u'utf-8'), v.encode(u'utf-8'))
61 for k, v in os.environ.items())
62
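``encoding.environ`` gives the rest of the codebase a bytes-keyed view of the environment on both Python 2 and 3. A sketch of the intended read-only usage, mirroring the ``HGENCODING`` lookup below:

    # sketch: read environment variables through encoding.environ
    from mercurial import encoding
    hgencoding = encoding.environ.get(b'HGENCODING')  # bytes in, bytes out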
48 63 def _getpreferredencoding():
49 64 '''
50 65 On darwin, getpreferredencoding ignores the locale environment and
@@ -76,13 +91,13 b' def _getpreferredencoding():'
76 91 }
77 92
78 93 try:
79 encoding = os.environ.get("HGENCODING")
94 encoding = environ.get("HGENCODING")
80 95 if not encoding:
81 96 encoding = locale.getpreferredencoding() or 'ascii'
82 97 encoding = _encodingfixers.get(encoding, lambda: encoding)()
83 98 except locale.Error:
84 99 encoding = 'ascii'
85 encodingmode = os.environ.get("HGENCODINGMODE", "strict")
100 encodingmode = environ.get("HGENCODINGMODE", "strict")
86 101 fallbackencoding = 'ISO-8859-1'
87 102
88 103 class localstr(str):
@@ -136,23 +151,24 b' def tolocal(s):'
136 151 if encoding == 'UTF-8':
137 152 # fast path
138 153 return s
139 r = u.encode(encoding, "replace")
140 if u == r.decode(encoding):
154 r = u.encode(_sysstr(encoding), u"replace")
155 if u == r.decode(_sysstr(encoding)):
141 156 # r is a safe, non-lossy encoding of s
142 157 return r
143 158 return localstr(s, r)
144 159 except UnicodeDecodeError:
145 160 # we should only get here if we're looking at an ancient changeset
146 161 try:
147 u = s.decode(fallbackencoding)
148 r = u.encode(encoding, "replace")
149 if u == r.decode(encoding):
162 u = s.decode(_sysstr(fallbackencoding))
163 r = u.encode(_sysstr(encoding), u"replace")
164 if u == r.decode(_sysstr(encoding)):
150 165 # r is a safe, non-lossy encoding of s
151 166 return r
152 167 return localstr(u.encode('UTF-8'), r)
153 168 except UnicodeDecodeError:
154 169 u = s.decode("utf-8", "replace") # last ditch
155 return u.encode(encoding, "replace") # can't round-trip
170 # can't round-trip
171 return u.encode(_sysstr(encoding), u"replace")
156 172 except LookupError as k:
157 173 raise error.Abort(k, hint="please check your locale settings")
158 174
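The ``_sysstr`` shim above lets ``tolocal`` pass codec names as native strings on Python 3 without changing the round-trip contract, sketched here:

    # sketch: tolocal/fromlocal round-trip UTF-8 bytes losslessly
    from mercurial import encoding
    local = encoding.tolocal(b'caf\xc3\xa9')  # UTF-8 input
    assert encoding.fromlocal(local) == b'caf\xc3\xa9'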
@@ -172,20 +188,27 b' def fromlocal(s):'
172 188 return s._utf8
173 189
174 190 try:
175 return s.decode(encoding, encodingmode).encode("utf-8")
191 u = s.decode(_sysstr(encoding), _sysstr(encodingmode))
192 return u.encode("utf-8")
176 193 except UnicodeDecodeError as inst:
177 194 sub = s[max(0, inst.start - 10):inst.start + 10]
178 195 raise error.Abort("decoding near '%s': %s!" % (sub, inst))
179 196 except LookupError as k:
180 197 raise error.Abort(k, hint="please check your locale settings")
181 198
199 if not _nativeenviron:
200 # now encoding and helper functions are available, recreate the environ
201 # dict to be exported to other modules
202 environ = dict((tolocal(k.encode(u'utf-8')), tolocal(v.encode(u'utf-8')))
203 for k, v in os.environ.items())
204
182 205 # How to treat ambiguous-width characters. Set to 'wide' to treat as wide.
183 wide = (os.environ.get("HGENCODINGAMBIGUOUS", "narrow") == "wide"
206 wide = (environ.get("HGENCODINGAMBIGUOUS", "narrow") == "wide"
184 207 and "WFA" or "WF")
185 208
186 209 def colwidth(s):
187 210 "Find the column width of a string for display in the local encoding"
188 return ucolwidth(s.decode(encoding, 'replace'))
211 return ucolwidth(s.decode(_sysstr(encoding), u'replace'))
189 212
190 213 def ucolwidth(d):
191 214 "Find the column width of a Unicode string for display"
@@ -265,7 +288,7 b" def trim(s, width, ellipsis='', leftside"
265 288 +
266 289 """
267 290 try:
268 u = s.decode(encoding)
291 u = s.decode(_sysstr(encoding))
269 292 except UnicodeDecodeError:
270 293 if len(s) <= width: # trimming is not needed
271 294 return s
@@ -292,7 +315,7 b" def trim(s, width, ellipsis='', leftside"
292 315 for i in xrange(1, len(u)):
293 316 usub = uslice(i)
294 317 if ucolwidth(usub) <= width:
295 return concat(usub.encode(encoding))
318 return concat(usub.encode(_sysstr(encoding)))
296 319 return ellipsis # not enough room for multi-column characters
297 320
298 321 def _asciilower(s):
@@ -337,12 +360,12 b' def lower(s):'
337 360 if isinstance(s, localstr):
338 361 u = s._utf8.decode("utf-8")
339 362 else:
340 u = s.decode(encoding, encodingmode)
363 u = s.decode(_sysstr(encoding), _sysstr(encodingmode))
341 364
342 365 lu = u.lower()
343 366 if u == lu:
344 367 return s # preserve localstring
345 return lu.encode(encoding)
368 return lu.encode(_sysstr(encoding))
346 369 except UnicodeError:
347 370 return s.lower() # we don't know how to fold this except in ASCII
348 371 except LookupError as k:
@@ -360,12 +383,12 b' def upperfallback(s):'
360 383 if isinstance(s, localstr):
361 384 u = s._utf8.decode("utf-8")
362 385 else:
363 u = s.decode(encoding, encodingmode)
386 u = s.decode(_sysstr(encoding), _sysstr(encodingmode))
364 387
365 388 uu = u.upper()
366 389 if u == uu:
367 390 return s # preserve localstring
368 return uu.encode(encoding)
391 return uu.encode(_sysstr(encoding))
369 392 except UnicodeError:
370 393 return s.upper() # we don't know how to fold this except in ASCII
371 394 except LookupError as k:
@@ -257,13 +257,40 b' def buildobsmarkerspart(bundler, markers'
257 257 return bundler.newpart('obsmarkers', data=stream)
258 258 return None
259 259
260 def _canusebundle2(op):
261 """return true if a pull/push can use bundle2
260 def _computeoutgoing(repo, heads, common):
261 """Computes which revs are outgoing given a set of common
262 nodes and a set of heads.
263
264 This is a separate function so extensions can have access to
265 the logic.
262 266
263 Feel free to nuke this function when we drop the experimental option"""
264 return (op.repo.ui.configbool('experimental', 'bundle2-exp', True)
265 and op.remote.capable('bundle2'))
267 Returns a discovery.outgoing object.
268 """
269 cl = repo.changelog
270 if common:
271 hasnode = cl.hasnode
272 common = [n for n in common if hasnode(n)]
273 else:
274 common = [nullid]
275 if not heads:
276 heads = cl.heads()
277 return discovery.outgoing(repo, common, heads)
266 278
279 def _forcebundle1(op):
280 """return true if a pull/push must use bundle1
281
282 This function is used to allow testing of the older bundle version"""
283 ui = op.repo.ui
284 forcebundle1 = False
285 # The goal of this config is to allow developers to choose the bundle
286 # version used during exchange. This is especially handy during tests.
287 # The value is a list of bundle versions to pick from; the highest
288 # version available should be used.
289 #
290 # developer config: devel.legacy.exchange
291 exchange = ui.configlist('devel', 'legacy.exchange')
292 forcebundle1 = 'bundle2' not in exchange and 'bundle1' in exchange
293 return forcebundle1 or not op.remote.capable('bundle2')
267 294
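As the comment above notes, ``devel.legacy.exchange`` replaces the old ``bundle2-exp`` switch as a developer knob for pinning the exchange format during tests, e.g. in an hgrc:

    [devel]
    legacy.exchange = bundle1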
268 295 class pushoperation(object):
269 296 """A object that represent a single push operation
@@ -417,7 +444,7 b' def push(repo, remote, force=False, revs'
417 444 # bundle2 push may receive a reply bundle touching bookmarks or other
418 445 # things requiring the wlock. Take it now to ensure proper ordering.
419 446 maypushback = pushop.ui.configbool('experimental', 'bundle2.pushback')
420 if _canusebundle2(pushop) and maypushback:
447 if (not _forcebundle1(pushop)) and maypushback:
421 448 localwlock = pushop.repo.wlock()
422 449 locallock = pushop.repo.lock()
423 450 pushop.locallocked = True
@@ -442,7 +469,7 b' def push(repo, remote, force=False, revs'
442 469 lock = pushop.remote.lock()
443 470 try:
444 471 _pushdiscovery(pushop)
445 if _canusebundle2(pushop):
472 if not _forcebundle1(pushop):
446 473 _pushbundle2(pushop)
447 474 _pushchangeset(pushop)
448 475 _pushsyncphase(pushop)
@@ -1100,7 +1127,7 b' class pulloperation(object):'
1100 1127
1101 1128 @util.propertycache
1102 1129 def canusebundle2(self):
1103 return _canusebundle2(self)
1130 return not _forcebundle1(self)
1104 1131
1105 1132 @util.propertycache
1106 1133 def remotebundle2caps(self):
@@ -1174,8 +1201,10 b' def pull(repo, remote, heads=None, force'
1174 1201 " %s") % (', '.join(sorted(missing)))
1175 1202 raise error.Abort(msg)
1176 1203
1177 lock = pullop.repo.lock()
1204 wlock = lock = None
1178 1205 try:
1206 wlock = pullop.repo.wlock()
1207 lock = pullop.repo.lock()
1179 1208 pullop.trmanager = transactionmanager(repo, 'pull', remote.url())
1180 1209 streamclone.maybeperformlegacystreamclone(pullop)
1181 1210 # This should ideally be in _pullbundle2(). However, it needs to run
@@ -1190,8 +1219,7 b' def pull(repo, remote, heads=None, force'
1190 1219 _pullobsolete(pullop)
1191 1220 pullop.trmanager.close()
1192 1221 finally:
1193 pullop.trmanager.release()
1194 lock.release()
1222 lockmod.release(pullop.trmanager, lock, wlock)
1195 1223
1196 1224 return pullop
1197 1225
@@ -1504,20 +1532,14 b' def bundle2requested(bundlecaps):'
1504 1532 return any(cap.startswith('HG2') for cap in bundlecaps)
1505 1533 return False
1506 1534
1507 def getbundle(repo, source, heads=None, common=None, bundlecaps=None,
1508 **kwargs):
1509 """return a full bundle (with potentially multiple kind of parts)
1535 def getbundlechunks(repo, source, heads=None, common=None, bundlecaps=None,
1536 **kwargs):
1537 """Return chunks constituting a bundle's raw data.
1510 1538
1511 1539 Could be a bundle HG10 or a bundle HG20 depending on bundlecaps
1512 passed. For now, the bundle can contain only changegroup, but this will
1513 changes when more part type will be available for bundle2.
1540 passed.
1514 1541
1515 This is different from changegroup.getchangegroup that only returns an HG10
1516 changegroup bundle. They may eventually get reunited in the future when we
1517 have a clearer idea of the API we what to query different data.
1518
1519 The implementation is at a very early stage and will get massive rework
1520 when the API of bundle is refined.
1542 Returns an iterator over raw chunks (of varying sizes).
1521 1543 """
1522 1544 usebundle2 = bundle2requested(bundlecaps)
1523 1545 # bundle10 case
@@ -1528,8 +1550,9 b' def getbundle(repo, source, heads=None, '
1528 1550 if kwargs:
1529 1551 raise ValueError(_('unsupported getbundle arguments: %s')
1530 1552 % ', '.join(sorted(kwargs.keys())))
1531 return changegroup.getchangegroup(repo, source, heads=heads,
1532 common=common, bundlecaps=bundlecaps)
1553 outgoing = _computeoutgoing(repo, heads, common)
1554 bundler = changegroup.getbundler('01', repo, bundlecaps)
1555 return changegroup.getsubsetraw(repo, outgoing, bundler, source)
1533 1556
1534 1557 # bundle20 case
1535 1558 b2caps = {}
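``getbundlechunks`` replaces ``getbundle``: rather than a file-like object it now yields raw chunks, so callers stream. A hedged sketch of draining it (``heads``/``common`` are hypothetical node lists and ``fh`` a hypothetical output stream; real callers such as the wire protocol add their own framing and compression):

    # sketch: consume the chunk generator returned by the new API
    for chunk in exchange.getbundlechunks(repo, 'serve',
                                          heads=heads, common=common):
        fh.write(chunk)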
@@ -1547,7 +1570,7 b' def getbundle(repo, source, heads=None, '
1547 1570 func(bundler, repo, source, bundlecaps=bundlecaps, b2caps=b2caps,
1548 1571 **kwargs)
1549 1572
1550 return util.chunkbuffer(bundler.getchunks())
1573 return bundler.getchunks()
1551 1574
1552 1575 @getbundle2partsgenerator('changegroup')
1553 1576 def _getbundlechangegrouppart(bundler, repo, source, bundlecaps=None,
@@ -1564,7 +1587,7 b' def _getbundlechangegrouppart(bundler, r'
1564 1587 if not cgversions:
1565 1588 raise ValueError(_('no common changegroup version'))
1566 1589 version = max(cgversions)
1567 outgoing = changegroup.computeoutgoing(repo, heads, common)
1590 outgoing = _computeoutgoing(repo, heads, common)
1568 1591 cg = changegroup.getlocalchangegroupraw(repo, source, outgoing,
1569 1592 bundlecaps=bundlecaps,
1570 1593 version=version)
@@ -1617,7 +1640,7 b' def _getbundletagsfnodes(bundler, repo, '
1617 1640 if not (kwargs.get('cg', True) and 'hgtagsfnodes' in b2caps):
1618 1641 return
1619 1642
1620 outgoing = changegroup.computeoutgoing(repo, heads, common)
1643 outgoing = _computeoutgoing(repo, heads, common)
1621 1644
1622 1645 if not outgoing.missingheads:
1623 1646 return
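
With getbundle() renamed to getbundlechunks(), buffering moves to the caller (localpeer.getbundle later in this series wraps the chunks in util.chunkbuffer itself). A sketch of the consuming side, assuming the caller wants the old file-like behaviour back::

    from mercurial import exchange, util

    def bundle_as_file(repo, source='pull', heads=None, common=None):
        chunks = exchange.getbundlechunks(repo, source, heads=heads,
                                          common=common)
        # chunkbuffer turns an iterator of byte chunks into an object
        # with a read() method, which the wire protocol layer expects
        return util.chunkbuffer(chunks)
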
@@ -22,6 +22,7 b' from . import ('
22 22 )
23 23
24 24 _extensions = {}
25 _disabledextensions = {}
25 26 _aftercallbacks = {}
26 27 _order = []
27 28 _builtin = set(['hbisect', 'bookmarks', 'parentrevspec', 'progress', 'interhg',
@@ -79,7 +80,29 b' def _importh(name):'
79 80 mod = getattr(mod, comp)
80 81 return mod
81 82
83 def _importext(name, path=None, reportfunc=None):
84 if path:
85 # the module will be loaded in sys.modules
86 # choose a unique name so that it doesn't
87 # conflict with other modules
88 mod = loadpath(path, 'hgext.%s' % name)
89 else:
90 try:
91 mod = _importh("hgext.%s" % name)
92 except ImportError as err:
93 if reportfunc:
94 reportfunc(err, "hgext.%s" % name, "hgext3rd.%s" % name)
95 try:
96 mod = _importh("hgext3rd.%s" % name)
97 except ImportError as err:
98 if reportfunc:
99 reportfunc(err, "hgext3rd.%s" % name, name)
100 mod = _importh(name)
101 return mod
102
82 103 def _reportimporterror(ui, err, failed, next):
104 # note: this ui.debug happens before --debug is processed;
105 # use --config ui.debug=1 to see these messages.
83 106 ui.debug('could not import %s (%s): trying %s\n'
84 107 % (failed, err, next))
85 108 if ui.debugflag:
@@ -95,21 +118,7 b' def load(ui, name, path):'
95 118 if shortname in _extensions:
96 119 return _extensions[shortname]
97 120 _extensions[shortname] = None
98 if path:
99 # the module will be loaded in sys.modules
100 # choose an unique name so that it doesn't
101 # conflicts with other modules
102 mod = loadpath(path, 'hgext.%s' % name)
103 else:
104 try:
105 mod = _importh("hgext.%s" % name)
106 except ImportError as err:
107 _reportimporterror(ui, err, "hgext.%s" % name, name)
108 try:
109 mod = _importh("hgext3rd.%s" % name)
110 except ImportError as err:
111 _reportimporterror(ui, err, "hgext3rd.%s" % name, name)
112 mod = _importh(name)
121 mod = _importext(name, path, bind(_reportimporterror, ui))
113 122
114 123 # Before we do anything with the extension, check against minimum stated
115 124 # compatibility. This gives extension authors a mechanism to have their
@@ -148,6 +157,7 b' def loadall(ui):'
148 157 for (name, path) in result:
149 158 if path:
150 159 if path[0] == '!':
160 _disabledextensions[name] = path[1:]
151 161 continue
152 162 try:
153 163 load(ui, name, path)
@@ -210,11 +220,13 b' def bind(func, *args):'
210 220 return func(*(args + a), **kw)
211 221 return closure
212 222
213 def _updatewrapper(wrap, origfn):
214 '''Copy attributes to wrapper function'''
223 def _updatewrapper(wrap, origfn, unboundwrapper):
224 '''Copy and add some useful attributes to wrapper'''
215 225 wrap.__module__ = getattr(origfn, '__module__')
216 226 wrap.__doc__ = getattr(origfn, '__doc__')
217 227 wrap.__dict__.update(getattr(origfn, '__dict__', {}))
228 wrap._origfunc = origfn
229 wrap._unboundwrapper = unboundwrapper
218 230
219 231 def wrapcommand(table, command, wrapper, synopsis=None, docstring=None):
220 232 '''Wrap the command named `command' in table
@@ -254,7 +266,7 b' def wrapcommand(table, command, wrapper,'
254 266
255 267 origfn = entry[0]
256 268 wrap = bind(util.checksignature(wrapper), util.checksignature(origfn))
257 _updatewrapper(wrap, origfn)
269 _updatewrapper(wrap, origfn, wrapper)
258 270 if docstring is not None:
259 271 wrap.__doc__ += docstring
260 272
@@ -303,10 +315,46 b' def wrapfunction(container, funcname, wr'
303 315 origfn = getattr(container, funcname)
304 316 assert callable(origfn)
305 317 wrap = bind(wrapper, origfn)
306 _updatewrapper(wrap, origfn)
318 _updatewrapper(wrap, origfn, wrapper)
307 319 setattr(container, funcname, wrap)
308 320 return origfn
309 321
322 def unwrapfunction(container, funcname, wrapper=None):
323 '''undo wrapfunction
324
325 If wrapper is None, undo the last wrap. Otherwise remove the wrapper
326 from the chain of wrappers.
327
328 Return the removed wrapper.
329 Raise IndexError if wrapper is None and nothing to unwrap; ValueError if
330 wrapper is not None but is not found in the wrapper chain.
331 '''
332 chain = getwrapperchain(container, funcname)
333 origfn = chain.pop()
334 if wrapper is None:
335 wrapper = chain[0]
336 chain.remove(wrapper)
337 setattr(container, funcname, origfn)
338 for w in reversed(chain):
339 wrapfunction(container, funcname, w)
340 return wrapper
341
342 def getwrapperchain(container, funcname):
343 '''get a chain of wrappers of a function
344
345 Return a list of functions: [newest wrapper, ..., oldest wrapper, origfunc]
346
347 The wrapper functions are the ones passed to wrapfunction, whose first
348 argument is origfunc.
349 '''
350 result = []
351 fn = getattr(container, funcname)
352 while fn:
353 assert callable(fn)
354 result.append(getattr(fn, '_unboundwrapper', fn))
355 fn = getattr(fn, '_origfunc', None)
356 return result
357
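
The new _origfunc/_unboundwrapper attributes are what make unwrapfunction() and getwrapperchain() possible. A sketch of an extension using them; the audit() wrapper is a hypothetical example::

    from mercurial import extensions, util

    def audit(orig, *args, **kwargs):
        # hypothetical wrapper: observe the call, then defer to orig
        print('calling %r' % orig)
        return orig(*args, **kwargs)

    extensions.wrapfunction(util, 'copyfile', audit)

    # chain is [newest wrapper, ..., oldest wrapper, original function]
    chain = extensions.getwrapperchain(util, 'copyfile')
    assert chain[0] is audit

    # passing the wrapper removes exactly that link from the chain
    extensions.unwrapfunction(util, 'copyfile', audit)
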
310 358 def _disabledpaths(strip_init=False):
311 359 '''find paths of disabled extensions. returns a dict of {name: path}
312 360 removes /__init__.py from packages if strip_init is True'''
@@ -332,6 +380,7 b' def _disabledpaths(strip_init=False):'
332 380 if name in exts or name in _order or name == '__init__':
333 381 continue
334 382 exts[name] = path
383 exts.update(_disabledextensions)
335 384 return exts
336 385
337 386 def _moduledoc(file):
@@ -494,4 +543,4 b' def moduleversion(module):'
494 543
495 544 def ismoduleinternal(module):
496 545 exttestedwith = getattr(module, 'testedwith', None)
497 return exttestedwith == "internal"
546 return exttestedwith == "ships-with-hg-core"
@@ -12,6 +12,17 b' import getopt'
12 12 from .i18n import _
13 13 from . import error
14 14
15 # Set of flags for which the boolean negation logic is never applied
16 nevernegate = set([
17 # avoid --no-noninteractive
18 'noninteractive',
19 # These two flags are special because they cause hg to do one
20 # thing and then exit, and so aren't suitable for use in things
21 # like aliases anyway.
22 'help',
23 'version',
24 ])
25
15 26 def gnugetopt(args, options, longoptions):
16 27 """Parse options mostly like getopt.gnu_getopt.
17 28
@@ -64,6 +75,8 b' def fancyopts(args, options, state, gnu='
64 75 shortlist = ''
65 76 argmap = {}
66 77 defmap = {}
78 negations = {}
79 alllong = set(o[1] for o in options)
67 80
68 81 for option in options:
69 82 if len(option) == 5:
@@ -91,6 +104,18 b' def fancyopts(args, options, state, gnu='
91 104 short += ':'
92 105 if oname:
93 106 oname += '='
107 elif oname not in nevernegate:
108 if oname.startswith('no-'):
109 insert = oname[3:]
110 else:
111 insert = 'no-' + oname
112 # backout (as a practical example) has both --commit and
113 # --no-commit options, so we don't want to allow the
114 # negations of those flags.
115 if insert not in alllong:
116 assert ('--' + oname) not in negations
117 negations['--' + insert] = '--' + oname
118 namelist.append(insert)
94 119 if short:
95 120 shortlist += short
96 121 if name:
@@ -105,6 +130,11 b' def fancyopts(args, options, state, gnu='
105 130
106 131 # transfer result to state
107 132 for opt, val in opts:
133 boolval = True
134 negation = negations.get(opt, False)
135 if negation:
136 opt = negation
137 boolval = False
108 138 name = argmap[opt]
109 139 obj = defmap[name]
110 140 t = type(obj)
@@ -121,7 +151,7 b' def fancyopts(args, options, state, gnu='
121 151 elif t is type([]):
122 152 state[name].append(val)
123 153 elif t is type(None) or t is type(False):
124 state[name] = True
154 state[name] = boolval
125 155
126 156 # return unparsed args
127 157 return args
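
The net effect: every long boolean flag implicitly accepts a --no- prefixed negation (and an existing no- flag gains its positive form), unless it is listed in nevernegate or the opposite name is already a real option. A sketch of the observable behaviour, assuming the signature shown above::

    from mercurial import fancyopts

    options = [('', 'commit', None, 'commit the result')]
    state = {}
    # --no-commit is synthesized from --commit and stores False
    args = fancyopts.fancyopts(['--no-commit', 'file.txt'], options, state)
    assert state['commit'] is False
    assert args == ['file.txt']
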
@@ -19,6 +19,7 b' from . import ('
19 19 error,
20 20 formatter,
21 21 match,
22 pycompat,
22 23 scmutil,
23 24 simplemerge,
24 25 tagmerge,
@@ -93,7 +94,8 b' def internaltool(name, mergetype, onfail'
93 94 '''return a decorator for populating internal merge tool table'''
94 95 def decorator(func):
95 96 fullname = ':' + name
96 func.__doc__ = "``%s``\n" % fullname + func.__doc__.strip()
97 func.__doc__ = (pycompat.sysstr("``%s``\n" % fullname)
98 + func.__doc__.strip())
97 99 internals[fullname] = func
98 100 internals['internal:' + name] = func
99 101 internalsdoc[fullname] = func
@@ -230,50 +232,56 b' def _matcheol(file, origfile):'
230 232 util.writefile(file, newdata)
231 233
232 234 @internaltool('prompt', nomerge)
233 def _iprompt(repo, mynode, orig, fcd, fco, fca, toolconf):
235 def _iprompt(repo, mynode, orig, fcd, fco, fca, toolconf, labels=None):
234 236 """Asks the user which of the local `p1()` or the other `p2()` version to
235 237 keep as the merged version."""
236 238 ui = repo.ui
237 239 fd = fcd.path()
238 240
241 prompts = partextras(labels)
242 prompts['fd'] = fd
239 243 try:
240 244 if fco.isabsent():
241 245 index = ui.promptchoice(
242 _("local changed %s which remote deleted\n"
246 _("local%(l)s changed %(fd)s which other%(o)s deleted\n"
243 247 "use (c)hanged version, (d)elete, or leave (u)nresolved?"
244 "$$ &Changed $$ &Delete $$ &Unresolved") % fd, 2)
248 "$$ &Changed $$ &Delete $$ &Unresolved") % prompts, 2)
245 249 choice = ['local', 'other', 'unresolved'][index]
246 250 elif fcd.isabsent():
247 251 index = ui.promptchoice(
248 _("remote changed %s which local deleted\n"
252 _("other%(o)s changed %(fd)s which local%(l)s deleted\n"
249 253 "use (c)hanged version, leave (d)eleted, or "
250 254 "leave (u)nresolved?"
251 "$$ &Changed $$ &Deleted $$ &Unresolved") % fd, 2)
255 "$$ &Changed $$ &Deleted $$ &Unresolved") % prompts, 2)
252 256 choice = ['other', 'local', 'unresolved'][index]
253 257 else:
254 258 index = ui.promptchoice(
255 _("no tool found to merge %s\n"
256 "keep (l)ocal, take (o)ther, or leave (u)nresolved?"
257 "$$ &Local $$ &Other $$ &Unresolved") % fd, 2)
259 _("no tool found to merge %(fd)s\n"
260 "keep (l)ocal%(l)s, take (o)ther%(o)s, or leave (u)nresolved?"
261 "$$ &Local $$ &Other $$ &Unresolved") % prompts, 2)
258 262 choice = ['local', 'other', 'unresolved'][index]
259 263
260 264 if choice == 'other':
261 return _iother(repo, mynode, orig, fcd, fco, fca, toolconf)
265 return _iother(repo, mynode, orig, fcd, fco, fca, toolconf,
266 labels)
262 267 elif choice == 'local':
263 return _ilocal(repo, mynode, orig, fcd, fco, fca, toolconf)
268 return _ilocal(repo, mynode, orig, fcd, fco, fca, toolconf,
269 labels)
264 270 elif choice == 'unresolved':
265 return _ifail(repo, mynode, orig, fcd, fco, fca, toolconf)
271 return _ifail(repo, mynode, orig, fcd, fco, fca, toolconf,
272 labels)
266 273 except error.ResponseExpected:
267 274 ui.write("\n")
268 return _ifail(repo, mynode, orig, fcd, fco, fca, toolconf)
275 return _ifail(repo, mynode, orig, fcd, fco, fca, toolconf,
276 labels)
269 277
270 278 @internaltool('local', nomerge)
271 def _ilocal(repo, mynode, orig, fcd, fco, fca, toolconf):
279 def _ilocal(repo, mynode, orig, fcd, fco, fca, toolconf, labels=None):
272 280 """Uses the local `p1()` version of files as the merged version."""
273 281 return 0, fcd.isabsent()
274 282
275 283 @internaltool('other', nomerge)
276 def _iother(repo, mynode, orig, fcd, fco, fca, toolconf):
284 def _iother(repo, mynode, orig, fcd, fco, fca, toolconf, labels=None):
277 285 """Uses the other `p2()` version of files as the merged version."""
278 286 if fco.isabsent():
279 287 # local changed, remote deleted -- 'deleted' picked
@@ -285,7 +293,7 b' def _iother(repo, mynode, orig, fcd, fco'
285 293 return 0, deleted
286 294
287 295 @internaltool('fail', nomerge)
288 def _ifail(repo, mynode, orig, fcd, fco, fca, toolconf):
296 def _ifail(repo, mynode, orig, fcd, fco, fca, toolconf, labels=None):
289 297 """
290 298 Rather than attempting to merge files that were modified on both
291 299 branches, it marks them as unresolved. The resolve command must be
@@ -508,11 +516,11 b' def _formatconflictmarker(repo, ctx, tem'
508 516 # 8 for the prefix of conflict marker lines (e.g. '<<<<<<< ')
509 517 return util.ellipsis(mark, 80 - 8)
510 518
511 _defaultconflictmarker = ('{node|short} ' +
512 '{ifeq(tags, "tip", "", "{tags} ")}' +
513 '{if(bookmarks, "{bookmarks} ")}' +
514 '{ifeq(branch, "default", "", "{branch} ")}' +
515 '- {author|user}: {desc|firstline}')
519 _defaultconflictmarker = ('{node|short} '
520 '{ifeq(tags, "tip", "", "{tags} ")}'
521 '{if(bookmarks, "{bookmarks} ")}'
522 '{ifeq(branch, "default", "", "{branch} ")}'
523 '- {author|user}: {desc|firstline}')
516 524
517 525 _defaultconflictlabels = ['local', 'other']
518 526
@@ -537,6 +545,22 b' def _formatlabels(repo, fcd, fco, fca, l'
537 545 newlabels.append(_formatconflictmarker(repo, ca, tmpl, labels[2], pad))
538 546 return newlabels
539 547
548 def partextras(labels):
549 """Return a dictionary of extra labels for use in prompts to the user
550
551 Intended use is in strings of the form "(l)ocal%(l)s".
552 """
553 if labels is None:
554 return {
555 "l": "",
556 "o": "",
557 }
558
559 return {
560 "l": " [%s]" % labels[0],
561 "o": " [%s]" % labels[1],
562 }
563
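
partextras() supplies the %(l)s and %(o)s placeholders used by the rewritten prompts above. A quick illustration, with a hypothetical conflicted path::

    from mercurial import filemerge

    prompts = filemerge.partextras(['working copy', 'merge rev'])
    prompts['fd'] = 'src/app.py'  # hypothetical conflicted file
    print("no tool found to merge %(fd)s\n"
          "keep (l)ocal%(l)s, take (o)ther%(o)s, or leave (u)nresolved?"
          % prompts)
    # -> keep (l)ocal [working copy], take (o)ther [merge rev], ...

    assert filemerge.partextras(None) == {'l': '', 'o': ''}
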
540 564 def _filemerge(premerge, repo, mynode, orig, fcd, fco, fca, labels=None):
541 565 """perform a 3-way merge in the working directory
542 566
@@ -588,7 +612,7 b' def _filemerge(premerge, repo, mynode, o'
588 612 toolconf = tool, toolpath, binary, symlink
589 613
590 614 if mergetype == nomerge:
591 r, deleted = func(repo, mynode, orig, fcd, fco, fca, toolconf)
615 r, deleted = func(repo, mynode, orig, fcd, fco, fca, toolconf, labels)
592 616 return True, r, deleted
593 617
594 618 if premerge:
@@ -345,10 +345,10 b' def _sizetomax(s):'
345 345 def size(mctx, x):
346 346 """File size matches the given expression. Examples:
347 347
348 - 1k (files from 1024 to 2047 bytes)
349 - < 20k (files less than 20480 bytes)
350 - >= .5MB (files at least 524288 bytes)
351 - 4k - 1MB (files from 4096 bytes to 1048576 bytes)
348 - size('1k') - files from 1024 to 2047 bytes
349 - size('< 20k') - files less than 20480 bytes
350 - size('>= .5MB') - files at least 524288 bytes
351 - size('4k - 1MB') - files from 4096 bytes to 1048576 bytes
352 352 """
353 353
354 354 # i18n: "size" is a keyword
@@ -18,25 +18,45 b' from .node import ('
18 18 from . import (
19 19 encoding,
20 20 error,
21 templatekw,
21 22 templater,
22 23 util,
23 24 )
24 25
25 26 pickle = util.pickle
26 27
28 class _nullconverter(object):
29 '''convert non-primitive data types to be processed by formatter'''
30 @staticmethod
31 def formatdate(date, fmt):
32 '''convert date tuple to appropriate format'''
33 return date
34 @staticmethod
35 def formatdict(data, key, value, fmt, sep):
36 '''convert dict or key-value pairs to appropriate dict format'''
37 # use plain dict instead of util.sortdict so that data can be
38 # serialized as a builtin dict in pickle output
39 return dict(data)
40 @staticmethod
41 def formatlist(data, name, fmt, sep):
42 '''convert iterable to appropriate list format'''
43 return list(data)
44
27 45 class baseformatter(object):
28 def __init__(self, ui, topic, opts):
46 def __init__(self, ui, topic, opts, converter):
29 47 self._ui = ui
30 48 self._topic = topic
31 49 self._style = opts.get("style")
32 50 self._template = opts.get("template")
51 self._converter = converter
33 52 self._item = None
34 53 # function to convert node to string suitable for this output
35 54 self.hexfunc = hex
36 def __nonzero__(self):
37 '''return False if we're not doing real templating so we can
38 skip extra work'''
39 return True
55 def __enter__(self):
56 return self
57 def __exit__(self, exctype, excvalue, traceback):
58 if exctype is None:
59 self.end()
40 60 def _showitem(self):
41 61 '''show a formatted item once all data is collected'''
42 62 pass
@@ -45,6 +65,17 b' class baseformatter(object):'
45 65 if self._item is not None:
46 66 self._showitem()
47 67 self._item = {}
68 def formatdate(self, date, fmt='%a %b %d %H:%M:%S %Y %1%2'):
69 '''convert date tuple to appropriate format'''
70 return self._converter.formatdate(date, fmt)
71 def formatdict(self, data, key='key', value='value', fmt='%s=%s', sep=' '):
72 '''convert dict or key-value pairs to appropriate dict format'''
73 return self._converter.formatdict(data, key, value, fmt, sep)
74 def formatlist(self, data, name, fmt='%s', sep=' '):
75 '''convert iterable to appropriate list format'''
76 # name is a mandatory argument for now, but it could be optional if
77 # we had a default template keyword, e.g. {item}
78 return self._converter.formatlist(data, name, fmt, sep)
48 79 def data(self, **data):
49 80 '''insert data into item that's not shown in default output'''
50 81 self._item.update(data)
@@ -61,21 +92,55 b' class baseformatter(object):'
61 92 def plain(self, text, **opts):
62 93 '''show raw text for non-templated mode'''
63 94 pass
95 def isplain(self):
96 '''check for plain formatter usage'''
97 return False
98 def nested(self, field):
99 '''sub formatter to store nested data in the specified field'''
100 self._item[field] = data = []
101 return _nestedformatter(self._ui, self._converter, data)
64 102 def end(self):
65 103 '''end output for the formatter'''
66 104 if self._item is not None:
67 105 self._showitem()
68 106
107 class _nestedformatter(baseformatter):
108 '''build sub items and store them in the parent formatter'''
109 def __init__(self, ui, converter, data):
110 baseformatter.__init__(self, ui, topic='', opts={}, converter=converter)
111 self._data = data
112 def _showitem(self):
113 self._data.append(self._item)
114
115 def _iteritems(data):
116 '''iterate key-value pairs in stable order'''
117 if isinstance(data, dict):
118 return sorted(data.iteritems())
119 return data
120
121 class _plainconverter(object):
122 '''convert non-primitive data types to text'''
123 @staticmethod
124 def formatdate(date, fmt):
125 '''stringify date tuple in the given format'''
126 return util.datestr(date, fmt)
127 @staticmethod
128 def formatdict(data, key, value, fmt, sep):
129 '''stringify key-value pairs separated by sep'''
130 return sep.join(fmt % (k, v) for k, v in _iteritems(data))
131 @staticmethod
132 def formatlist(data, name, fmt, sep):
133 '''stringify iterable separated by sep'''
134 return sep.join(fmt % e for e in data)
135
69 136 class plainformatter(baseformatter):
70 137 '''the default text output scheme'''
71 138 def __init__(self, ui, topic, opts):
72 baseformatter.__init__(self, ui, topic, opts)
139 baseformatter.__init__(self, ui, topic, opts, _plainconverter)
73 140 if ui.debugflag:
74 141 self.hexfunc = hex
75 142 else:
76 143 self.hexfunc = short
77 def __nonzero__(self):
78 return False
79 144 def startitem(self):
80 145 pass
81 146 def data(self, **data):
@@ -88,12 +153,17 b' class plainformatter(baseformatter):'
88 153 self._ui.write(deftext % fielddata, **opts)
89 154 def plain(self, text, **opts):
90 155 self._ui.write(text, **opts)
156 def isplain(self):
157 return True
158 def nested(self, field):
159 # nested data will be directly written to ui
160 return self
91 161 def end(self):
92 162 pass
93 163
94 164 class debugformatter(baseformatter):
95 165 def __init__(self, ui, topic, opts):
96 baseformatter.__init__(self, ui, topic, opts)
166 baseformatter.__init__(self, ui, topic, opts, _nullconverter)
97 167 self._ui.write("%s = [\n" % self._topic)
98 168 def _showitem(self):
99 169 self._ui.write(" " + repr(self._item) + ",\n")
@@ -103,7 +173,7 b' class debugformatter(baseformatter):'
103 173
104 174 class pickleformatter(baseformatter):
105 175 def __init__(self, ui, topic, opts):
106 baseformatter.__init__(self, ui, topic, opts)
176 baseformatter.__init__(self, ui, topic, opts, _nullconverter)
107 177 self._data = []
108 178 def _showitem(self):
109 179 self._data.append(self._item)
@@ -112,7 +182,11 b' class pickleformatter(baseformatter):'
112 182 self._ui.write(pickle.dumps(self._data))
113 183
114 184 def _jsonifyobj(v):
115 if isinstance(v, tuple):
185 if isinstance(v, dict):
186 xs = ['"%s": %s' % (encoding.jsonescape(k), _jsonifyobj(u))
187 for k, u in sorted(v.iteritems())]
188 return '{' + ', '.join(xs) + '}'
189 elif isinstance(v, (list, tuple)):
116 190 return '[' + ', '.join(_jsonifyobj(e) for e in v) + ']'
117 191 elif v is None:
118 192 return 'null'
@@ -127,7 +201,7 b' def _jsonifyobj(v):'
127 201
128 202 class jsonformatter(baseformatter):
129 203 def __init__(self, ui, topic, opts):
130 baseformatter.__init__(self, ui, topic, opts)
204 baseformatter.__init__(self, ui, topic, opts, _nullconverter)
131 205 self._ui.write("[")
132 206 self._ui._first = True
133 207 def _showitem(self):
@@ -149,9 +223,32 b' class jsonformatter(baseformatter):'
149 223 baseformatter.end(self)
150 224 self._ui.write("\n]\n")
151 225
226 class _templateconverter(object):
227 '''convert non-primitive data types to be processed by templater'''
228 @staticmethod
229 def formatdate(date, fmt):
230 '''return date tuple'''
231 return date
232 @staticmethod
233 def formatdict(data, key, value, fmt, sep):
234 '''build object that can be evaluated as either plain string or dict'''
235 data = util.sortdict(_iteritems(data))
236 def f():
237 yield _plainconverter.formatdict(data, key, value, fmt, sep)
238 return templatekw._hybrid(f(), data, lambda k: {key: k, value: data[k]},
239 lambda d: fmt % (d[key], d[value]))
240 @staticmethod
241 def formatlist(data, name, fmt, sep):
242 '''build object that can be evaluated as either plain string or list'''
243 data = list(data)
244 def f():
245 yield _plainconverter.formatlist(data, name, fmt, sep)
246 return templatekw._hybrid(f(), data, lambda x: {name: x},
247 lambda d: fmt % d[name])
248
152 249 class templateformatter(baseformatter):
153 250 def __init__(self, ui, topic, opts):
154 baseformatter.__init__(self, ui, topic, opts)
251 baseformatter.__init__(self, ui, topic, opts, _templateconverter)
155 252 self._topic = topic
156 253 self._t = gettemplater(ui, topic, opts.get('template', ''))
157 254 def _showitem(self):
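
The converter indirection lets one command body produce plain text, JSON, pickle, or template output. A hedged sketch of a caller, assuming ui.formatter() dispatches to one of the formatter classes above based on the requested template::

    def showfiles(ui, opts):
        fm = ui.formatter('files', opts)
        for name, copies in [('a.txt', ['b.txt']), ('c.txt', [])]:
            fm.startitem()
            fm.write('name', '%s', name)
            # plain mode stringifies the list; json/template keep structure
            fm.write('copies', ' (%s)',
                     fm.formatlist(copies, name='copy', sep=', '))
            fm.plain('\n')
        fm.end()
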
@@ -139,6 +139,19 b' def bisect(changelog, state):'
139 139
140 140 return ([best_node], tot, good)
141 141
142 def extendrange(repo, state, nodes, good):
143 # bisect is incomplete when it ends on a merge node and
144 # one of the parents was not checked.
145 parents = repo[nodes[0]].parents()
146 if len(parents) > 1:
147 if good:
148 side = state['bad']
149 else:
150 side = state['good']
151 num = len(set(i.node() for i in parents) & set(side))
152 if num == 1:
153 return parents[0].ancestor(parents[1])
154 return None
142 155
143 156 def load_state(repo):
144 157 state = {'current': [], 'good': [], 'bad': [], 'skip': []}
@@ -159,6 +172,22 b' def save_state(repo, state):'
159 172 f.write("%s %s\n" % (kind, hex(node)))
160 173 f.close()
161 174
175 def resetstate(repo):
176 """remove any bisect state from the repository"""
177 if repo.vfs.exists("bisect.state"):
178 repo.vfs.unlink("bisect.state")
179
180 def checkstate(state):
181 """check we have both 'good' and 'bad' to define a range
182
183 Raise Abort exception otherwise."""
184 if state['good'] and state['bad']:
185 return True
186 if not state['good']:
187 raise error.Abort(_('cannot bisect (no known good revisions)'))
188 else:
189 raise error.Abort(_('cannot bisect (no known bad revisions)'))
190
162 191 def get(repo, status):
163 192 """
164 193 Return a list of revision(s) that match the given status:
@@ -261,3 +290,29 b' def shortlabel(label):'
261 290 return label[0].upper()
262 291
263 292 return None
293
294 def printresult(ui, repo, state, displayer, nodes, good):
295 if len(nodes) == 1:
296 # narrowed it down to a single revision
297 if good:
298 ui.write(_("The first good revision is:\n"))
299 else:
300 ui.write(_("The first bad revision is:\n"))
301 displayer.show(repo[nodes[0]])
302 extendnode = extendrange(repo, state, nodes, good)
303 if extendnode is not None:
304 ui.write(_('Not all ancestors of this changeset have been'
305 ' checked.\nUse bisect --extend to continue the '
306 'bisection from\nthe common ancestor, %s.\n')
307 % extendnode)
308 else:
309 # multiple possible revisions
310 if good:
311 ui.write(_("Due to skipped revisions, the first "
312 "good revision could be any of:\n"))
313 else:
314 ui.write(_("Due to skipped revisions, the first "
315 "bad revision could be any of:\n"))
316 for n in nodes:
317 displayer.show(repo[n])
318 displayer.close()
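
Taken together, these helpers give the bisect command a small driver loop. A sketch of the calling shape, assuming a repo and a changeset displayer::

    from mercurial import hbisect

    def report(ui, repo, displayer):
        state = hbisect.load_state(repo)
        hbisect.checkstate(state)  # aborts unless both good and bad known
        nodes, changesets, good = hbisect.bisect(repo.changelog, state)
        if changesets == 0:
            # bisection finished; printresult also suggests --extend when
            # the search ended on a merge with an unchecked parent
            hbisect.printresult(ui, repo, state, displayer, nodes, good)
        return nodes
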
@@ -184,14 +184,16 b' def loaddoc(topic, subdir=None):'
184 184 return loader
185 185
186 186 internalstable = sorted([
187 (['bundles'], _('container for exchange of repository data'),
187 (['bundles'], _('Bundles'),
188 188 loaddoc('bundles', subdir='internals')),
189 (['changegroups'], _('representation of revlog data'),
189 (['changegroups'], _('Changegroups'),
190 190 loaddoc('changegroups', subdir='internals')),
191 (['requirements'], _('repository requirements'),
191 (['requirements'], _('Repository Requirements'),
192 192 loaddoc('requirements', subdir='internals')),
193 (['revlogs'], _('revision storage mechanism'),
193 (['revlogs'], _('Revision Logs'),
194 194 loaddoc('revlogs', subdir='internals')),
195 (['wireprotocol'], _('Wire Protocol'),
196 loaddoc('wireprotocol', subdir='internals')),
195 197 ])
196 198
197 199 def internalshelp(ui):
@@ -356,8 +358,8 b' def help_(ui, name, unknowncmd=False, fu'
356 358 mod = extensions.find(name)
357 359 doc = gettext(mod.__doc__) or ''
358 360 if '\n' in doc.strip():
359 msg = _('(use "hg help -e %s" to show help for '
360 'the %s extension)') % (name, name)
361 msg = _("(use 'hg help -e %s' to show help for "
362 "the %s extension)") % (name, name)
361 363 rst.append('\n%s\n' % msg)
362 364 except KeyError:
363 365 pass
@@ -372,7 +374,7 b' def help_(ui, name, unknowncmd=False, fu'
372 374
373 375 if not ui.verbose:
374 376 if not full:
375 rst.append(_('\n(use "hg %s -h" to show more help)\n')
377 rst.append(_("\n(use 'hg %s -h' to show more help)\n")
376 378 % name)
377 379 elif not ui.quiet:
378 380 rst.append(_('\n(some details hidden, use --verbose '
@@ -448,21 +450,21 b' def help_(ui, name, unknowncmd=False, fu'
448 450 rst.append('\n%s\n' % optrst(_("global options"),
449 451 commands.globalopts, ui.verbose))
450 452 if name == 'shortlist':
451 rst.append(_('\n(use "hg help" for the full list '
452 'of commands)\n'))
453 rst.append(_("\n(use 'hg help' for the full list "
454 "of commands)\n"))
453 455 else:
454 456 if name == 'shortlist':
455 rst.append(_('\n(use "hg help" for the full list of commands '
456 'or "hg -v" for details)\n'))
457 rst.append(_("\n(use 'hg help' for the full list of commands "
458 "or 'hg -v' for details)\n"))
457 459 elif name and not full:
458 rst.append(_('\n(use "hg help %s" to show the full help '
459 'text)\n') % name)
460 rst.append(_("\n(use 'hg help %s' to show the full help "
461 "text)\n") % name)
460 462 elif name and cmds and name in cmds.keys():
461 rst.append(_('\n(use "hg help -v -e %s" to show built-in '
462 'aliases and global options)\n') % name)
463 rst.append(_("\n(use 'hg help -v -e %s' to show built-in "
464 "aliases and global options)\n") % name)
463 465 else:
464 rst.append(_('\n(use "hg help -v%s" to show built-in aliases '
465 'and global options)\n')
466 rst.append(_("\n(use 'hg help -v%s' to show built-in aliases "
467 "and global options)\n")
466 468 % (name and " " + name or ""))
467 469 return rst
468 470
@@ -496,8 +498,8 b' def help_(ui, name, unknowncmd=False, fu'
496 498
497 499 try:
498 500 cmdutil.findcmd(name, commands.table)
499 rst.append(_('\nuse "hg help -c %s" to see help for '
500 'the %s command\n') % (name, name))
501 rst.append(_("\nuse 'hg help -c %s' to see help for "
502 "the %s command\n") % (name, name))
501 503 except error.UnknownCommand:
502 504 pass
503 505 return rst
@@ -534,8 +536,8 b' def help_(ui, name, unknowncmd=False, fu'
534 536 modcmds = set([c.partition('|')[0] for c in ct])
535 537 rst.extend(helplist(modcmds.__contains__))
536 538 else:
537 rst.append(_('(use "hg help extensions" for information on enabling'
538 ' extensions)\n'))
539 rst.append(_("(use 'hg help extensions' for information on enabling"
540 " extensions)\n"))
539 541 return rst
540 542
541 543 def helpextcmd(name, subtopic=None):
@@ -547,8 +549,8 b' def help_(ui, name, unknowncmd=False, fu'
547 549 "extension:") % cmd, {ext: doc}, indent=4,
548 550 showdeprecated=True)
549 551 rst.append('\n')
550 rst.append(_('(use "hg help extensions" for information on enabling '
551 'extensions)\n'))
552 rst.append(_("(use 'hg help extensions' for information on enabling "
553 "extensions)\n"))
552 554 return rst
553 555
554 556
@@ -573,7 +575,7 b' def help_(ui, name, unknowncmd=False, fu'
573 575 rst.append('\n')
574 576 if not rst:
575 577 msg = _('no matches')
576 hint = _('try "hg help" for a list of topics')
578 hint = _("try 'hg help' for a list of topics")
577 579 raise error.Abort(msg, hint=hint)
578 580 elif name and name != 'shortlist':
579 581 queries = []
@@ -596,7 +598,7 b' def help_(ui, name, unknowncmd=False, fu'
596 598 raise error.UnknownCommand(name)
597 599 else:
598 600 msg = _('no such help topic: %s') % name
599 hint = _('try "hg help --keyword %s"') % name
601 hint = _("try 'hg help --keyword %s'") % name
600 602 raise error.Abort(msg, hint=hint)
601 603 else:
602 604 # program name
@@ -1393,6 +1393,12 b" collected during profiling, while 'profi"
1393 1393 statistical text report generated from the profiling data. The
1394 1394 profiling is done using lsprof.
1395 1395
1396 ``enabled``
1397 Enable the profiler.
1398 (default: false)
1399
1400 This is equivalent to passing ``--profile`` on the command line.
1401
1396 1402 ``type``
1397 1403 The type of profiler to use.
1398 1404 (default: ls)
@@ -1557,6 +1563,21 b' Controls generic server settings.'
1557 1563 repositories to the exchange format required by the bundle1 data
1558 1564 format can consume a lot of CPU.
1559 1565
1566 ``zliblevel``
1567 Integer between ``-1`` and ``9`` that controls the zlib compression level
1568 for wire protocol commands that send zlib compressed output (notably the
1569 commands that send repository history data).
1570
1571 The default (``-1``) uses the default zlib compression level, which is
1572 likely equivalent to ``6``. ``0`` means no compression. ``9`` means
1573 maximum compression.
1574
1575 Setting this option allows server operators to make trade-offs between
1576 bandwidth and CPU used. Lowering the compression lowers CPU utilization
1577 but sends more bytes to clients.
1578
1579 This option only impacts the HTTP server.
1580
1560 1581 ``smtp``
1561 1582 --------
1562 1583
@@ -1,6 +1,3 b''
1 Bundles
2 =======
3
4 1 A bundle is a container for repository data.
5 2
6 3 Bundles are used as standalone files as well as the interchange format
@@ -8,7 +5,7 b' over the wire protocol used when two Mer'
8 5 each other.
9 6
10 7 Headers
11 -------
8 =======
12 9
13 10 Bundles produced since Mercurial 0.7 (September 2005) have a 4 byte
14 11 header identifying the major bundle type. The header always begins with
@@ -1,6 +1,3 b''
1 Changegroups
2 ============
3
4 1 Changegroups are representations of repository revlog data, specifically
5 2 the changelog, manifest, and filelogs.
6 3
@@ -35,7 +32,7 b' There is a special case chunk that has 0'
35 32 call this an *empty chunk*.
36 33
37 34 Delta Groups
38 ------------
35 ============
39 36
40 37 A *delta group* expresses the content of a revlog as a series of deltas,
41 38 or patches against previous revisions.
@@ -111,21 +108,21 b' changegroup. This allows the delta to be'
111 108 which can result in smaller deltas and more efficient encoding of data.
112 109
113 110 Changeset Segment
114 -----------------
111 =================
115 112
116 113 The *changeset segment* consists of a single *delta group* holding
117 114 changelog data. It is followed by an *empty chunk* to denote the
118 115 boundary to the *manifests segment*.
119 116
120 117 Manifest Segment
121 ----------------
118 ================
122 119
123 120 The *manifest segment* consists of a single *delta group* holding
124 121 manifest data. It is followed by an *empty chunk* to denote the boundary
125 122 to the *filelogs segment*.
126 123
127 124 Filelogs Segment
128 ----------------
125 ================
129 126
130 127 The *filelogs* segment consists of multiple sub-segments, each
131 128 corresponding to an individual file whose data is being described::
@@ -154,4 +151,3 b' Each filelog sub-segment consists of the'
154 151
155 152 That is, a *chunk* consisting of the filename (not terminated or padded)
156 153 followed by N chunks constituting the *delta group* for this file.
157
@@ -1,5 +1,3 b''
1 Requirements
2 ============
3 1
4 2 Repositories contain a file (``.hg/requires``) containing a list of
5 3 features/capabilities that are *required* for clients to interface
@@ -19,7 +17,7 b' The following sections describe the requ'
19 17 Mercurial core distribution.
20 18
21 19 revlogv1
22 --------
20 ========
23 21
24 22 When present, revlogs are version 1 (RevlogNG). RevlogNG was introduced
25 23 in 2006. The ``revlogv1`` requirement has been enabled by default
@@ -28,7 +26,7 b' since the ``requires`` file was introduc'
28 26 If this requirement is not present, version 0 revlogs are assumed.
29 27
30 28 store
31 -----
29 =====
32 30
33 31 The *store* repository layout should be used.
34 32
@@ -36,7 +34,7 b' This requirement has been enabled by def'
36 34 was introduced in Mercurial 0.9.2.
37 35
38 36 fncache
39 -------
37 =======
40 38
41 39 The *fncache* repository layout should be used.
42 40
@@ -48,7 +46,7 b' enabled (which is the default behavior).'
48 46 1.1 (released December 2008).
49 47
50 48 shared
51 ------
49 ======
52 50
53 51 Denotes that the store for a repository is shared from another location
54 52 (defined by the ``.hg/sharedpath`` file).
@@ -58,7 +56,7 b' This requirement is set when a repositor'
58 56 The requirement was added in Mercurial 1.3 (released July 2009).
59 57
60 58 dotencode
61 ---------
59 =========
62 60
63 61 The *dotencode* repository layout should be used.
64 62
@@ -70,7 +68,7 b' is enabled (which is the default behavio'
70 68 Mercurial 1.7 (released November 2010).
71 69
72 70 parentdelta
73 -----------
71 ===========
74 72
75 73 Denotes a revlog delta encoding format that was experimental and
76 74 replaced by *generaldelta*. It should not be seen in the wild because
@@ -80,7 +78,7 b' This requirement was added in Mercurial '
80 78 1.9.
81 79
82 80 generaldelta
83 ------------
81 ============
84 82
85 83 Revlogs should be created with the *generaldelta* flag enabled. The
86 84 generaldelta flag will cause deltas to be encoded against a parent
@@ -91,7 +89,7 b' July 2011). The requirement was disabled'
91 89 default until Mercurial 3.7 (released February 2016).
92 90
93 91 manifestv2
94 ----------
92 ==========
95 93
96 94 Denotes that version 2 of manifests are being used.
97 95
@@ -100,7 +98,7 b' May 2015). The requirement is currently '
100 98 by default.
101 99
102 100 treemanifest
103 ------------
101 ============
104 102
105 103 Denotes that tree manifests are being used. Tree manifests are
106 104 one manifest per directory (as opposed to a single flat manifest).
@@ -1,6 +1,3 b''
1 Revlogs
2 =======
3
4 1 Revision logs - or *revlogs* - are an append only data structure for
5 2 storing discrete entries, or *revisions*. They are the primary storage
6 3 mechanism of repository data.
@@ -28,7 +25,7 b' revision #0 and the second is revision #'
28 25 used to mean *does not exist* or *not defined*.
29 26
30 27 File Format
31 -----------
28 ===========
32 29
33 30 A revlog begins with a 32-bit big endian integer holding version info
34 31 and feature flags. This integer is shared with the first revision
@@ -77,7 +74,7 b' possibly located between index entries. '
77 74 below.
78 75
79 76 RevlogNG Format
80 ---------------
77 ===============
81 78
82 79 RevlogNG (version 1) begins with an index describing the revisions in
83 80 the revlog. If the ``inline`` flag is set, revision data is stored inline,
@@ -129,7 +126,7 b' The first 4 bytes of the revlog are shar'
129 126 and the 6 byte absolute offset field from the first revlog entry.
130 127
131 128 Delta Chains
132 ------------
129 ============
133 130
134 131 Revision data is encoded as a chain of *chunks*. Each chain begins with
135 132 the compressed original full text for that revision. Each subsequent
@@ -153,7 +150,7 b' by default in Mercurial 3.7) activates t'
153 150 computed against an arbitrary revision (almost certainly a parent revision).
154 151
155 152 File Storage
156 ------------
153 ============
157 154
158 155 Revlogs logically consist of an index (metadata of entries) and
159 156 revision data. This data may be stored together in a single file or in
@@ -172,7 +169,7 b' The actual layout of revlog files on dis'
172 169 (possibly containing inline data) and a ``.d`` file holds the revision data.
173 170
174 171 Revision Entries
175 ----------------
172 ================
176 173
177 174 Revision entries consist of an optional 1 byte header followed by an
178 175 encoding of the revision data. The headers are as follows:
@@ -187,7 +184,7 b' x (0x78)'
187 184 The 0x78 value is actually the first byte of the zlib header (CMF byte).
188 185
189 186 Hash Computation
190 ----------------
187 ================
191 188
192 189 The hash of the revision is stored in the index and is used both as a primary
193 190 key and for data integrity verification.
@@ -12,11 +12,17 b' Special characters can be used in quoted'
12 12 e.g., ``\n`` is interpreted as a newline. To prevent them from being
13 13 interpreted, strings can be prefixed with ``r``, e.g. ``r'...'``.
14 14
15 Prefix
16 ======
17
15 18 There is a single prefix operator:
16 19
17 20 ``not x``
18 21 Changesets not in x. Short form is ``! x``.
19 22
23 Infix
24 =====
25
20 26 These are the supported infix operators:
21 27
22 28 ``x::y``
@@ -55,16 +61,40 b' These are the supported infix operators:'
55 61 ``x~n``
56 62 The nth first ancestor of x; ``x~0`` is x; ``x~3`` is ``x^^^``.
57 63
64 ``x ## y``
65 Concatenate strings and identifiers into one string.
66
67 All other prefix, infix and postfix operators have lower priority than
68 ``##``. For example, ``a1 ## a2~2`` is equivalent to ``(a1 ## a2)~2``.
69
70 For example::
71
72 [revsetalias]
73 issue(a1) = grep(r'\bissue[ :]?' ## a1 ## r'\b|\bbug\(' ## a1 ## r'\)')
74
75 ``issue(1234)`` is equivalent to
76 ``grep(r'\bissue[ :]?1234\b|\bbug\(1234\)')``
77 in this case. This matches against all of "issue 1234", "issue:1234",
78 "issue1234" and "bug(1234)".
79
80 Postfix
81 =======
82
58 83 There is a single postfix operator:
59 84
60 85 ``x^``
61 86 Equivalent to ``x^1``, the first parent of each changeset in x.
62 87
88 Predicates
89 ==========
63 90
64 91 The following predicates are supported:
65 92
66 93 .. predicatesmarker
67 94
95 Aliases
96 =======
97
68 98 New predicates (known as "aliases") can be defined, using any combination of
69 99 existing predicates or other aliases. An alias definition looks like::
70 100
@@ -86,18 +116,8 b' For example,'
86 116 defines three aliases, ``h``, ``d``, and ``rs``. ``rs(0:tip, author)`` is
87 117 exactly equivalent to ``reverse(sort(0:tip, author))``.
88 118
89 An infix operator ``##`` can concatenate strings and identifiers into
90 one string. For example::
91
92 [revsetalias]
93 issue(a1) = grep(r'\bissue[ :]?' ## a1 ## r'\b|\bbug\(' ## a1 ## r'\)')
94
95 ``issue(1234)`` is equivalent to ``grep(r'\bissue[ :]?1234\b|\bbug\(1234\)')``
96 in this case. This matches against all of "issue 1234", "issue:1234",
97 "issue1234" and "bug(1234)".
98
99 All other prefix, infix and postfix operators have lower priority than
100 ``##``. For example, ``a1 ## a2~2`` is equivalent to ``(a1 ## a2)~2``.
119 Equivalents
120 ===========
101 121
102 122 Command line equivalents for :hg:`log`::
103 123
@@ -110,6 +130,9 b' Command line equivalents for :hg:`log`::'
110 130 -P x -> !::x
111 131 -l x -> limit(expr, x)
112 132
133 Examples
134 ========
135
113 136 Some sample queries:
114 137
115 138 - Changesets on the default branch::
@@ -43,6 +43,15 b' In addition to filters, there are some b'
43 43
44 44 .. functionsmarker
45 45
46 We provide a limited set of infix arithmetic operations on integers::
47
48 + for addition
49 - for subtraction
50 * for multiplication
51 / for floor division (division rounded down, towards negative infinity)
52
53 Division fulfils the law x = (x / y) * y + mod(x, y).
54
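For instance, with floor division -7 / 2 is -4 and mod(-7, 2) is 1, so the
law holds: (-4) * 2 + 1 = -7. A hypothetical invocation (for revision 0
this would print "1 1")::

    $ hg log -r 0 --template "{rev + 1} {mod(rev - 7, 2)}\n"
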
46 55 Also, for any expression that returns a list, there is a list operator::
47 56
48 57 expr % "{template}"
@@ -95,6 +104,10 b' Some sample command line templates:'
95 104
96 105 $ hg log -r 0 --template "files: {join(files, ', ')}\n"
97 106
107 - Join the list of files ending with ".py" with a ", "::
108
109 $ hg log -r 0 --template "pythonfiles: {join(files('**.py'), ', ')}\n"
110
98 111 - Separate non-empty arguments by a " "::
99 112
100 113 $ hg log -r 0 --template "{separate(' ', node, bookmarks, tags)}\n"
@@ -195,7 +195,7 b' def defaultdest(source):'
195 195 return ''
196 196 return os.path.basename(os.path.normpath(path))
197 197
198 def share(ui, source, dest=None, update=True, bookmarks=True):
198 def share(ui, source, dest=None, update=True, bookmarks=True, defaultpath=None):
199 199 '''create a shared repository'''
200 200
201 201 if not islocal(source):
@@ -240,10 +240,10 b' def share(ui, source, dest=None, update='
240 240 destvfs.write('sharedpath', sharedpath)
241 241
242 242 r = repository(ui, destwvfs.base)
243 postshare(srcrepo, r, bookmarks=bookmarks)
243 postshare(srcrepo, r, bookmarks=bookmarks, defaultpath=defaultpath)
244 244 _postshareupdate(r, update, checkout=checkout)
245 245
246 def postshare(sourcerepo, destrepo, bookmarks=True):
246 def postshare(sourcerepo, destrepo, bookmarks=True, defaultpath=None):
247 247 """Called after a new shared repo is created.
248 248
249 249 The new repo only has a requirements file and pointer to the source.
@@ -252,17 +252,18 b' def postshare(sourcerepo, destrepo, book'
252 252 Extensions can wrap this function and write additional entries to
253 253 destrepo/.hg/shared to indicate additional pieces of data to be shared.
254 254 """
255 default = sourcerepo.ui.config('paths', 'default')
255 default = defaultpath or sourcerepo.ui.config('paths', 'default')
256 256 if default:
257 257 fp = destrepo.vfs("hgrc", "w", text=True)
258 258 fp.write("[paths]\n")
259 259 fp.write("default = %s\n" % default)
260 260 fp.close()
261 261
262 if bookmarks:
263 fp = destrepo.vfs('shared', 'w')
264 fp.write(sharedbookmarks + '\n')
265 fp.close()
262 with destrepo.wlock():
263 if bookmarks:
264 fp = destrepo.vfs('shared', 'w')
265 fp.write(sharedbookmarks + '\n')
266 fp.close()
266 267
267 268 def _postshareupdate(repo, update, checkout=None):
268 269 """Maybe perform a working directory update after a shared repo is created.
@@ -373,8 +374,15 b' def clonewithshare(ui, peeropts, sharepa'
373 374 clone(ui, peeropts, source, dest=sharepath, pull=True,
374 375 rev=rev, update=False, stream=stream)
375 376
377 # Resolve the value to put in [paths] section for the source.
378 if islocal(source):
379 defaultpath = os.path.abspath(util.urllocalpath(source))
380 else:
381 defaultpath = source
382
376 383 sharerepo = repository(ui, path=sharepath)
377 share(ui, sharerepo, dest=dest, update=False, bookmarks=False)
384 share(ui, sharerepo, dest=dest, update=False, bookmarks=False,
385 defaultpath=defaultpath)
378 386
379 387 # We need to perform a pull against the dest repo to fetch bookmarks
380 388 # and other non-store data that isn't shared by default. In the case of
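
clonewithshare() now records where the share came from, so the new working copy's default path points at the original source rather than at the intermediate share pool. A sketch of calling share() with an explicit default; all argument values are assumed::

    from mercurial import hg

    def share_with_origin(ui, source, dest, origin_url):
        # defaultpath overrides the [paths] default copied from source
        hg.share(ui, source, dest=dest, update=True, bookmarks=True,
                 defaultpath=origin_url)
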
@@ -737,20 +745,22 b' def updatetotally(ui, repo, checkout, br'
737 745 if movemarkfrom == repo['.'].node():
738 746 pass # no-op update
739 747 elif bookmarks.update(repo, [movemarkfrom], repo['.'].node()):
740 ui.status(_("updating bookmark %s\n") % repo._activebookmark)
748 b = ui.label(repo._activebookmark, 'bookmarks.active')
749 ui.status(_("updating bookmark %s\n") % b)
741 750 else:
742 751 # this can happen with a non-linear update
743 ui.status(_("(leaving bookmark %s)\n") %
744 repo._activebookmark)
752 b = ui.label(repo._activebookmark, 'bookmarks')
753 ui.status(_("(leaving bookmark %s)\n") % b)
745 754 bookmarks.deactivate(repo)
746 755 elif brev in repo._bookmarks:
747 756 if brev != repo._activebookmark:
748 ui.status(_("(activating bookmark %s)\n") % brev)
757 b = ui.label(brev, 'bookmarks.active')
758 ui.status(_("(activating bookmark %s)\n") % b)
749 759 bookmarks.activate(repo, brev)
750 760 elif brev:
751 761 if repo._activebookmark:
752 ui.status(_("(leaving bookmark %s)\n") %
753 repo._activebookmark)
762 b = ui.label(repo._activebookmark, 'bookmarks')
763 ui.status(_("(leaving bookmark %s)\n") % b)
754 764 bookmarks.deactivate(repo)
755 765
756 766 if warndest:
@@ -758,10 +768,11 b' def updatetotally(ui, repo, checkout, br'
758 768
759 769 return ret
760 770
761 def merge(repo, node, force=None, remind=True, mergeforce=False):
771 def merge(repo, node, force=None, remind=True, mergeforce=False, labels=None):
762 772 """Branch merge with node, resolving changes. Return true if any
763 773 unresolved conflicts."""
764 stats = mergemod.update(repo, node, True, force, mergeforce=mergeforce)
774 stats = mergemod.update(repo, node, True, force, mergeforce=mergeforce,
775 labels=labels)
765 776 _showstats(repo, stats)
766 777 if stats[3]:
767 778 repo.ui.status(_("use 'hg resolve' to retry unresolved file merges "
@@ -28,6 +28,7 b' from .. import ('
28 28 error,
29 29 hg,
30 30 hook,
31 profiling,
31 32 repoview,
32 33 templatefilters,
33 34 templater,
@@ -305,8 +306,9 b' class hgweb(object):'
305 306 should be using instances of this class as the WSGI application.
306 307 """
307 308 with self._obtainrepo() as repo:
308 for r in self._runwsgi(req, repo):
309 yield r
309 with profiling.maybeprofile(repo.ui):
310 for r in self._runwsgi(req, repo):
311 yield r
310 312
311 313 def _runwsgi(self, req, repo):
312 314 rctx = requestcontext(self, repo)
@@ -31,6 +31,7 b' from .. import ('
31 31 encoding,
32 32 error,
33 33 hg,
34 profiling,
34 35 scmutil,
35 36 templater,
36 37 ui as uimod,
@@ -217,6 +218,11 b' class hgwebdir(object):'
217 218 return False
218 219
219 220 def run_wsgi(self, req):
221 with profiling.maybeprofile(self.ui):
222 for r in self._runwsgi(req):
223 yield r
224
225 def _runwsgi(self, req):
220 226 try:
221 227 self.refresh()
222 228
@@ -73,14 +73,30 b' class webproto(wireproto.abstractserverp'
73 73 val = self.ui.fout.getvalue()
74 74 self.ui.ferr, self.ui.fout = self.oldio
75 75 return val
76 def groupchunks(self, cg):
77 z = zlib.compressobj()
78 while True:
79 chunk = cg.read(4096)
80 if not chunk:
81 break
82 yield z.compress(chunk)
76
77 def groupchunks(self, fh):
78 def getchunks():
79 while True:
80 chunk = fh.read(32768)
81 if not chunk:
82 break
83 yield chunk
84
85 return self.compresschunks(getchunks())
86
87 def compresschunks(self, chunks):
88 # Don't allow untrusted settings because disabling compression or
89 # setting a very high compression level could lead to flooding
90 # the server's network or CPU.
91 z = zlib.compressobj(self.ui.configint('server', 'zliblevel', -1))
92 for chunk in chunks:
93 data = z.compress(chunk)
94 # Not all calls to compress() emit data. It is cheaper to inspect
95 # that here than to send it via the generator.
96 if data:
97 yield data
83 98 yield z.flush()
99
84 100 def _client(self):
85 101 return 'remote:%s:%s:%s' % (
86 102 self.req.env.get('wsgi.url_scheme') or 'http',
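
compresschunks() streams any chunk iterator through a single zlib stream and drops the empty strings compress() often returns. The same idea in isolation::

    import zlib

    def compresschunks(chunks, level=-1):
        z = zlib.compressobj(level)
        for chunk in chunks:
            data = z.compress(chunk)
            if data:  # compress() buffers; it may emit nothing yet
                yield data
        yield z.flush()  # flush() terminates the zlib stream

    payload = [b'a' * 32768, b'b' * 32768]
    compressed = b''.join(compresschunks(payload))
    assert zlib.decompress(compressed) == b''.join(payload)
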
@@ -263,7 +263,7 b' def openlog(opt, default):'
263 263 return open(opt, 'a')
264 264 return default
265 265
266 class MercurialHTTPServer(object, _mixin, httpservermod.httpserver):
266 class MercurialHTTPServer(_mixin, httpservermod.httpserver, object):
267 267
268 268 # SO_REUSEADDR has broken semantics on windows
269 269 if os.name == 'nt':
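
The base-class reorder is an MRO fix: with ``object`` listed first, Python cannot linearize the hierarchy because every other base also derives from ``object``, which must come last. A minimal reproduction with stand-in classes::

    class Mixin(object):
        pass

    class Base(object):
        pass

    try:
        Bad = type('Bad', (object, Mixin, Base), {})
    except TypeError as e:
        print(e)  # cannot create a consistent method resolution order

    Good = type('Good', (Mixin, Base, object), {})  # object last: fine
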
@@ -31,7 +31,6 b' from .. import ('
31 31 encoding,
32 32 error,
33 33 graphmod,
34 patch,
35 34 revset,
36 35 scmutil,
37 36 templatefilters,
@@ -861,8 +860,6 b' def annotate(web, req, tmpl):'
861 860 fctx = webutil.filectx(web.repo, req)
862 861 f = fctx.path()
863 862 parity = paritygen(web.stripecount)
864 diffopts = patch.difffeatureopts(web.repo.ui, untrusted=True,
865 section='annotate', whitespace=True)
866 863
867 864 def parents(f):
868 865 for p in f.parents():
@@ -877,8 +874,8 b' def annotate(web, req, tmpl):'
877 874 or 'application/octet-stream')
878 875 lines = [((fctx.filectx(fctx.filerev()), 1), '(binary:%s)' % mt)]
879 876 else:
880 lines = fctx.annotate(follow=True, linenumber=True,
881 diffopts=diffopts)
877 lines = webutil.annotate(fctx, web.repo.ui)
878
882 879 previousrev = None
883 880 blockparitygen = paritygen(1)
884 881 for lineno, ((f, targetline), l) in enumerate(lines):
@@ -164,6 +164,11 b' class _siblings(object):'
164 164 def __len__(self):
165 165 return len(self.siblings)
166 166
167 def annotate(fctx, ui):
168 diffopts = patch.difffeatureopts(ui, untrusted=True,
169 section='annotate', whitespace=True)
170 return fctx.annotate(follow=True, linenumber=True, diffopts=diffopts)
171
167 172 def parents(ctx, hide=None):
168 173 if isinstance(ctx, context.basefilectx):
169 174 introrev = ctx.introrev()
@@ -58,6 +58,12 b' class httpsendfile(object):'
58 58 unit=_('kb'), total=self._total)
59 59 return ret
60 60
61 def __enter__(self):
62 return self
63
64 def __exit__(self, exc_type, exc_val, exc_tb):
65 self.close()
66
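
With __enter__/__exit__ in place, callers can scope the upload file to a with block. A sketch, where send() is a hypothetical stand-in for whatever consumes the file object::

    from mercurial import httpconnection

    def upload(ui, filename, send):
        # close() now runs even if send() raises
        with httpconnection.httpsendfile(ui, filename, 'rb') as fp:
            return send(fp)
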
61 67 # moved here from url.py to avoid a cycle
62 68 def readauthforuri(ui, uri, user):
63 69 # Read configuration
@@ -12,7 +12,10 b' import locale'
12 12 import os
13 13 import sys
14 14
15 from . import encoding
15 from . import (
16 encoding,
17 pycompat,
18 )
16 19
17 20 # modelled after templater.templatepath:
18 21 if getattr(sys, 'frozen', None) is not None:
@@ -27,10 +30,10 b' except NameError:'
27 30
28 31 _languages = None
29 32 if (os.name == 'nt'
30 and 'LANGUAGE' not in os.environ
31 and 'LC_ALL' not in os.environ
32 and 'LC_MESSAGES' not in os.environ
33 and 'LANG' not in os.environ):
33 and 'LANGUAGE' not in encoding.environ
34 and 'LC_ALL' not in encoding.environ
35 and 'LC_MESSAGES' not in encoding.environ
36 and 'LANG' not in encoding.environ):
34 37 # Try to detect UI language by "User Interface Language Management" API
35 38 # if no locale variables are set. Note that locale.getdefaultlocale()
36 39 # uses GetLocaleInfo(), which may be different from UI language.
@@ -46,7 +49,7 b" if (os.name == 'nt'"
46 49 _ugettext = None
47 50
48 51 def setdatapath(datapath):
49 localedir = os.path.join(datapath, 'locale')
52 localedir = os.path.join(datapath, pycompat.sysstr('locale'))
50 53 t = gettextmod.translation('hg', localedir, _languages, fallback=True)
51 54 global _ugettext
52 55 try:
@@ -85,16 +88,18 b' def gettext(message):'
85 88 # means u.encode(sys.getdefaultencoding()).decode(enc). Since
86 89 # the Python encoding defaults to 'ascii', this fails if the
87 90 # translated string use non-ASCII characters.
88 _msgcache[message] = u.encode(encoding.encoding, "replace")
91 encodingstr = pycompat.sysstr(encoding.encoding)
92 _msgcache[message] = u.encode(encodingstr, "replace")
89 93 except LookupError:
90 94 # An unknown encoding results in a LookupError.
91 95 _msgcache[message] = message
92 96 return _msgcache[message]
93 97
94 98 def _plain():
95 if 'HGPLAIN' not in os.environ and 'HGPLAINEXCEPT' not in os.environ:
99 if ('HGPLAIN' not in encoding.environ
100 and 'HGPLAINEXCEPT' not in encoding.environ):
96 101 return False
97 exceptions = os.environ.get('HGPLAINEXCEPT', '').strip().split(',')
102 exceptions = encoding.environ.get('HGPLAINEXCEPT', '').strip().split(',')
98 103 return 'i18n' not in exceptions
99 104
100 105 if _plain():
@@ -149,14 +149,18 b' class localpeer(peer.peerrepository):'
149 149
150 150 def getbundle(self, source, heads=None, common=None, bundlecaps=None,
151 151 **kwargs):
152 cg = exchange.getbundle(self._repo, source, heads=heads,
153 common=common, bundlecaps=bundlecaps, **kwargs)
152 chunks = exchange.getbundlechunks(self._repo, source, heads=heads,
153 common=common, bundlecaps=bundlecaps,
154 **kwargs)
155 cb = util.chunkbuffer(chunks)
156
154 157 if bundlecaps is not None and 'HG20' in bundlecaps:
155 158 # When requesting a bundle2, getbundle returns a stream to make the
156 159 # wire level function happier. We need to build a proper object
157 160 # from it in local peer.
158 cg = bundle2.getunbundler(self.ui, cg)
159 return cg
161 return bundle2.getunbundler(self.ui, cb)
162 else:
163 return changegroup.getunbundler('01', cb, None)
160 164
161 165 # TODO We might want to move the next two calls into legacypeer and add
162 166 # unbundle instead.
@@ -504,8 +508,9 b' class localrepository(object):'
504 508 def manifest(self):
505 509 return manifest.manifest(self.svfs)
506 510
507 def dirlog(self, dir):
508 return self.manifest.dirlog(dir)
511 @property
512 def manifestlog(self):
513 return manifest.manifestlog(self.svfs, self)
509 514
510 515 @repofilecache('dirstate')
511 516 def dirstate(self):
@@ -1007,8 +1012,7 b' class localrepository(object):'
1007 1012 def transaction(self, desc, report=None):
1008 1013 if (self.ui.configbool('devel', 'all-warnings')
1009 1014 or self.ui.configbool('devel', 'check-locks')):
1010 l = self._lockref and self._lockref()
1011 if l is None or not l.held:
1015 if self._currentlock(self._lockref) is None:
1012 1016 raise RuntimeError('programming error: transaction requires '
1013 1017 'locking')
1014 1018 tr = self.currenttransaction()
@@ -1246,6 +1250,13 b' class localrepository(object):'
1246 1250 delattr(self.unfiltered(), 'dirstate')
1247 1251
1248 1252 def invalidate(self, clearfilecache=False):
1253 '''Invalidates both store and non-store parts other than dirstate
1254
1255 If a transaction is running, invalidation of the store is omitted,
1256 because discarding in-memory changes might cause inconsistency
1257 (e.g. an incomplete fncache causes unintentional failure, but
1258 a redundant one doesn't).
1259 '''
1249 1260 unfiltered = self.unfiltered() # all file caches are stored unfiltered
1250 1261 for k in self._filecache.keys():
1251 1262 # dirstate is invalidated separately in invalidatedirstate()
@@ -1259,7 +1270,11 b' class localrepository(object):'
1259 1270 except AttributeError:
1260 1271 pass
1261 1272 self.invalidatecaches()
1262 self.store.invalidatecaches()
1273 if not self.currenttransaction():
1274 # TODO: Changing contents of store outside transaction
1275 # causes inconsistency. We should make in-memory store
1276 # changes detectable, and abort if changed.
1277 self.store.invalidatecaches()
1263 1278
1264 1279 def invalidateall(self):
1265 1280 '''Fully invalidates both store and non-store parts, causing the
@@ -1268,6 +1283,7 b' class localrepository(object):'
1268 1283 self.invalidate()
1269 1284 self.invalidatedirstate()
1270 1285
1286 @unfilteredmethod
1271 1287 def _refreshfilecachestats(self, tr):
1272 1288 """Reload stats of cached files so that they are flagged as valid"""
1273 1289 for k, ce in self._filecache.items():
@@ -1290,8 +1306,15 b' class localrepository(object):'
1290 1306 except error.LockHeld as inst:
1291 1307 if not wait:
1292 1308 raise
1293 self.ui.warn(_("waiting for lock on %s held by %r\n") %
1294 (desc, inst.locker))
1309 # show more details for new-style locks
1310 if ':' in inst.locker:
1311 host, pid = inst.locker.split(":", 1)
1312 self.ui.warn(
1313 _("waiting for lock on %s held by process %r "
1314 "on host %r\n") % (desc, pid, host))
1315 else:
1316 self.ui.warn(_("waiting for lock on %s held by %r\n") %
1317 (desc, inst.locker))
1295 1318 # default to 600 seconds timeout
1296 1319 l = lockmod.lock(vfs, lockname,
1297 1320 int(self.ui.config("ui", "timeout", "600")),
@@ -1320,8 +1343,8 b' class localrepository(object):'
1320 1343
1321 1344 If both 'lock' and 'wlock' must be acquired, ensure you always acquires
1322 1345 'wlock' first to avoid a dead-lock hazard.'''
1323 l = self._lockref and self._lockref()
1324 if l is not None and l.held:
1346 l = self._currentlock(self._lockref)
1347 if l is not None:
1325 1348 l.lock()
1326 1349 return l
1327 1350
@@ -1352,8 +1375,7 b' class localrepository(object):'
1352 1375 # acquisition would not cause dead-lock as they would just fail.
1353 1376 if wait and (self.ui.configbool('devel', 'all-warnings')
1354 1377 or self.ui.configbool('devel', 'check-locks')):
1355 l = self._lockref and self._lockref()
1356 if l is not None and l.held:
1378 if self._currentlock(self._lockref) is not None:
1357 1379 self.ui.develwarn('"wlock" acquired after "lock"')
1358 1380
1359 1381 def unlock():
@@ -1607,8 +1629,8 b' class localrepository(object):'
1607 1629 ms = mergemod.mergestate.read(self)
1608 1630
1609 1631 if list(ms.unresolved()):
1610 raise error.Abort(_('unresolved merge conflicts '
1611 '(see "hg help resolve")'))
1632 raise error.Abort(_("unresolved merge conflicts "
1633 "(see 'hg help resolve')"))
1612 1634 if ms.mdstate() != 's' or list(ms.driverresolved()):
1613 1635 raise error.Abort(_('driver-resolved merge conflicts'),
1614 1636 hint=_('run "hg resolve --all" to resolve'))
@@ -1714,9 +1736,9 b' class localrepository(object):'
1714 1736 drop = [f for f in removed if f in m]
1715 1737 for f in drop:
1716 1738 del m[f]
1717 mn = self.manifest.add(m, trp, linkrev,
1718 p1.manifestnode(), p2.manifestnode(),
1719 added, drop)
1739 mn = self.manifestlog.add(m, trp, linkrev,
1740 p1.manifestnode(), p2.manifestnode(),
1741 added, drop)
1720 1742 files = changed + removed
1721 1743 else:
1722 1744 mn = p1.manifestnode()
@@ -8,6 +8,8 b''
8 8 from __future__ import absolute_import, print_function
9 9
10 10 import email
11 import email.charset
12 import email.header
11 13 import os
12 14 import quopri
13 15 import smtplib
@@ -23,7 +25,7 b' from . import ('
23 25 util,
24 26 )
25 27
26 _oldheaderinit = email.Header.Header.__init__
28 _oldheaderinit = email.header.Header.__init__
27 29 def _unifiedheaderinit(self, *args, **kw):
28 30 """
29 31 Python 2.7 introduces a backwards incompatible change
@@ -203,24 +205,33 b' def validateconfig(ui):'
203 205 raise error.Abort(_('%r specified as email transport, '
204 206 'but not in PATH') % method)
205 207
208 def codec2iana(cs):
209 '''Return the IANA charset name for the given Python codec name.'''
210 cs = email.charset.Charset(cs).input_charset.lower()
211
212 # "latin1" normalizes to "iso8859-1", standard calls for "iso-8859-1"
213 if cs.startswith("iso") and not cs.startswith("iso-"):
214 return "iso-" + cs[3:]
215 return cs
216
206 217 def mimetextpatch(s, subtype='plain', display=False):
207 218 '''Return MIME message suitable for a patch.
208 Charset will be detected as utf-8 or (possibly fake) us-ascii.
219 Charset will be detected by first trying to decode as us-ascii, then utf-8,
220 and finally the global encodings. If all those fail, fall back to
221 ISO-8859-1, an encoding that allows all byte sequences.
209 222 Transfer encodings will be used if necessary.'''
210 223
211 cs = 'us-ascii'
212 if not display:
224 cs = ['us-ascii', 'utf-8', encoding.encoding, encoding.fallbackencoding]
225 if display:
226 return mimetextqp(s, subtype, 'us-ascii')
227 for charset in cs:
213 228 try:
214 s.decode('us-ascii')
229 s.decode(charset)
230 return mimetextqp(s, subtype, codec2iana(charset))
215 231 except UnicodeDecodeError:
216 try:
217 s.decode('utf-8')
218 cs = 'utf-8'
219 except UnicodeDecodeError:
220 # We'll go with us-ascii as a fallback.
221 pass
232 pass
222 233
223 return mimetextqp(s, subtype, cs)
234 return mimetextqp(s, subtype, "iso-8859-1")
224 235
225 236 def mimetextqp(body, subtype, charset):
226 237 '''Return MIME message.
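
The new detection order in mimetextpatch() can be tried in isolation; this sketch hard-codes two stand-ins for the configured global encodings and shows the ISO-8859-1 fallback (which accepts any byte sequence). The real code then passes the winner through codec2iana(), which turns e.g. "iso8859-1" into the IANA-registered "iso-8859-1":

    candidates = ['us-ascii', 'utf-8', 'iso8859-15', 'cp1252']  # last two are
                                                                # stand-ins
    def pickcharset(s):
        for cs in candidates:
            try:
                s.decode(cs)
                return cs
            except UnicodeDecodeError:
                pass
        return 'iso-8859-1'   # decodes any byte sequence

    print(pickcharset(b'plain text'))                  # us-ascii
    print(pickcharset(u'caf\xe9'.encode('utf-8')))     # utf-8
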
@@ -279,7 +290,7 b' def headencode(ui, s, charsets=None, dis'
279 290 if not display:
280 291 # split into words?
281 292 s, cs = _encode(ui, s, charsets)
282 return str(email.Header.Header(s, cs))
293 return str(email.header.Header(s, cs))
283 294 return s
284 295
285 296 def _addressencode(ui, name, addr, charsets=None):
@@ -330,7 +341,7 b' def mimeencode(ui, s, charsets=None, dis'
330 341 def headdecode(s):
331 342 '''Decodes RFC-2047 header'''
332 343 uparts = []
333 for part, charset in email.Header.decode_header(s):
344 for part, charset in email.header.decode_header(s):
334 345 if charset is not None:
335 346 try:
336 347 uparts.append(part.decode(charset))
@@ -56,10 +56,10 b' static PyObject *nodeof(line *l) {'
56 56 }
57 57 if (l->hash_suffix != '\0') {
58 58 char newhash[21];
59 memcpy(newhash, PyString_AsString(hash), 20);
59 memcpy(newhash, PyBytes_AsString(hash), 20);
60 60 Py_DECREF(hash);
61 61 newhash[20] = l->hash_suffix;
62 hash = PyString_FromStringAndSize(newhash, 21);
62 hash = PyBytes_FromStringAndSize(newhash, 21);
63 63 }
64 64 return hash;
65 65 }
@@ -79,7 +79,7 b' static PyObject *hashflags(line *l)'
79 79
80 80 if (!hash)
81 81 return NULL;
82 flags = PyString_FromStringAndSize(s + hplen - 1, flen);
82 flags = PyBytes_FromStringAndSize(s + hplen - 1, flen);
83 83 if (!flags) {
84 84 Py_DECREF(hash);
85 85 return NULL;
@@ -144,7 +144,7 b' static int lazymanifest_init(lazymanifes'
144 144 if (!PyArg_ParseTuple(args, "S", &pydata)) {
145 145 return -1;
146 146 }
147 err = PyString_AsStringAndSize(pydata, &data, &len);
147 err = PyBytes_AsStringAndSize(pydata, &data, &len);
148 148
149 149 self->dirty = false;
150 150 if (err == -1)
@@ -238,10 +238,10 b' static PyObject *lmiter_iterentriesnext('
238 238 goto done;
239 239 }
240 240 pl = pathlen(l);
241 path = PyString_FromStringAndSize(l->start, pl);
241 path = PyBytes_FromStringAndSize(l->start, pl);
242 242 hash = nodeof(l);
243 243 consumed = pl + 41;
244 flags = PyString_FromStringAndSize(l->start + consumed,
244 flags = PyBytes_FromStringAndSize(l->start + consumed,
245 245 l->len - consumed - 1);
246 246 if (!path || !hash || !flags) {
247 247 goto done;
@@ -254,9 +254,15 b' done:'
254 254 return ret;
255 255 }
256 256
257 #ifdef IS_PY3K
258 #define LAZYMANIFESTENTRIESITERATOR_TPFLAGS Py_TPFLAGS_DEFAULT
259 #else
260 #define LAZYMANIFESTENTRIESITERATOR_TPFLAGS Py_TPFLAGS_DEFAULT \
261 | Py_TPFLAGS_HAVE_ITER
262 #endif
263
257 264 static PyTypeObject lazymanifestEntriesIterator = {
258 PyObject_HEAD_INIT(NULL)
259 0, /*ob_size */
265 PyVarObject_HEAD_INIT(NULL, 0)
260 266 "parsers.lazymanifest.entriesiterator", /*tp_name */
261 267 sizeof(lmIter), /*tp_basicsize */
262 268 0, /*tp_itemsize */
@@ -275,9 +281,7 b' static PyTypeObject lazymanifestEntriesI'
275 281 0, /*tp_getattro */
276 282 0, /*tp_setattro */
277 283 0, /*tp_as_buffer */
278 /* tp_flags: Py_TPFLAGS_HAVE_ITER tells python to
279 use tp_iter and tp_iternext fields. */
280 Py_TPFLAGS_DEFAULT | Py_TPFLAGS_HAVE_ITER,
284 LAZYMANIFESTENTRIESITERATOR_TPFLAGS, /* tp_flags */
281 285 "Iterator for 3-tuples in a lazymanifest.", /* tp_doc */
282 286 0, /* tp_traverse */
283 287 0, /* tp_clear */
@@ -295,12 +299,18 b' static PyObject *lmiter_iterkeysnext(PyO'
295 299 return NULL;
296 300 }
297 301 pl = pathlen(l);
298 return PyString_FromStringAndSize(l->start, pl);
302 return PyBytes_FromStringAndSize(l->start, pl);
299 303 }
300 304
305 #ifdef IS_PY3K
306 #define LAZYMANIFESTKEYSITERATOR_TPFLAGS Py_TPFLAGS_DEFAULT
307 #else
308 #define LAZYMANIFESTKEYSITERATOR_TPFLAGS Py_TPFLAGS_DEFAULT \
309 | Py_TPFLAGS_HAVE_ITER
310 #endif
311
301 312 static PyTypeObject lazymanifestKeysIterator = {
302 PyObject_HEAD_INIT(NULL)
303 0, /*ob_size */
313 PyVarObject_HEAD_INIT(NULL, 0)
304 314 "parsers.lazymanifest.keysiterator", /*tp_name */
305 315 sizeof(lmIter), /*tp_basicsize */
306 316 0, /*tp_itemsize */
@@ -319,9 +329,7 b' static PyTypeObject lazymanifestKeysIter'
319 329 0, /*tp_getattro */
320 330 0, /*tp_setattro */
321 331 0, /*tp_as_buffer */
322 /* tp_flags: Py_TPFLAGS_HAVE_ITER tells python to
323 use tp_iter and tp_iternext fields. */
324 Py_TPFLAGS_DEFAULT | Py_TPFLAGS_HAVE_ITER,
332 LAZYMANIFESTKEYSITERATOR_TPFLAGS, /* tp_flags */
325 333 "Keys iterator for a lazymanifest.", /* tp_doc */
326 334 0, /* tp_traverse */
327 335 0, /* tp_clear */
@@ -388,12 +396,12 b' static PyObject *lazymanifest_getitem(la'
388 396 {
389 397 line needle;
390 398 line *hit;
391 if (!PyString_Check(key)) {
399 if (!PyBytes_Check(key)) {
392 400 PyErr_Format(PyExc_TypeError,
393 401 "getitem: manifest keys must be a string.");
394 402 return NULL;
395 403 }
396 needle.start = PyString_AsString(key);
404 needle.start = PyBytes_AsString(key);
397 405 hit = bsearch(&needle, self->lines, self->numlines, sizeof(line),
398 406 &linecmp);
399 407 if (!hit || hit->deleted) {
@@ -407,12 +415,12 b' static int lazymanifest_delitem(lazymani'
407 415 {
408 416 line needle;
409 417 line *hit;
410 if (!PyString_Check(key)) {
418 if (!PyBytes_Check(key)) {
411 419 PyErr_Format(PyExc_TypeError,
412 420 "delitem: manifest keys must be a string.");
413 421 return -1;
414 422 }
415 needle.start = PyString_AsString(key);
423 needle.start = PyBytes_AsString(key);
416 424 hit = bsearch(&needle, self->lines, self->numlines, sizeof(line),
417 425 &linecmp);
418 426 if (!hit || hit->deleted) {
@@ -476,7 +484,7 b' static int lazymanifest_setitem('
476 484 char *dest;
477 485 int i;
478 486 line new;
479 if (!PyString_Check(key)) {
487 if (!PyBytes_Check(key)) {
480 488 PyErr_Format(PyExc_TypeError,
481 489 "setitem: manifest keys must be a string.");
482 490 return -1;
@@ -489,17 +497,17 b' static int lazymanifest_setitem('
489 497 "Manifest values must be a tuple of (node, flags).");
490 498 return -1;
491 499 }
492 if (PyString_AsStringAndSize(key, &path, &plen) == -1) {
500 if (PyBytes_AsStringAndSize(key, &path, &plen) == -1) {
493 501 return -1;
494 502 }
495 503
496 504 pyhash = PyTuple_GetItem(value, 0);
497 if (!PyString_Check(pyhash)) {
505 if (!PyBytes_Check(pyhash)) {
498 506 PyErr_Format(PyExc_TypeError,
499 507 "node must be a 20-byte string");
500 508 return -1;
501 509 }
502 hlen = PyString_Size(pyhash);
510 hlen = PyBytes_Size(pyhash);
503 511 /* Some parts of the codebase try and set 21 or 22
504 512 * byte "hash" values in order to perturb things for
505 513 * status. We have to preserve at least the 21st
@@ -511,15 +519,15 b' static int lazymanifest_setitem('
511 519 "node must be a 20-byte string");
512 520 return -1;
513 521 }
514 hash = PyString_AsString(pyhash);
522 hash = PyBytes_AsString(pyhash);
515 523
516 524 pyflags = PyTuple_GetItem(value, 1);
517 if (!PyString_Check(pyflags) || PyString_Size(pyflags) > 1) {
525 if (!PyBytes_Check(pyflags) || PyBytes_Size(pyflags) > 1) {
518 526 PyErr_Format(PyExc_TypeError,
519 527 "flags must a 0 or 1 byte string");
520 528 return -1;
521 529 }
522 if (PyString_AsStringAndSize(pyflags, &flags, &flen) == -1) {
530 if (PyBytes_AsStringAndSize(pyflags, &flags, &flen) == -1) {
523 531 return -1;
524 532 }
525 533 /* one null byte and one newline */
@@ -564,12 +572,12 b' static int lazymanifest_contains(lazyman'
564 572 {
565 573 line needle;
566 574 line *hit;
567 if (!PyString_Check(key)) {
575 if (!PyBytes_Check(key)) {
568 576 /* Our keys are always strings, so if the contains
569 577 * check is for a non-string, just return false. */
570 578 return 0;
571 579 }
572 needle.start = PyString_AsString(key);
580 needle.start = PyBytes_AsString(key);
573 581 hit = bsearch(&needle, self->lines, self->numlines, sizeof(line),
574 582 &linecmp);
575 583 if (!hit || hit->deleted) {
@@ -609,10 +617,10 b' static int compact(lazymanifest *self) {'
609 617 need += self->lines[i].len;
610 618 }
611 619 }
612 pydata = PyString_FromStringAndSize(NULL, need);
620 pydata = PyBytes_FromStringAndSize(NULL, need);
613 621 if (!pydata)
614 622 return -1;
615 data = PyString_AsString(pydata);
623 data = PyBytes_AsString(pydata);
616 624 if (!data) {
617 625 return -1;
618 626 }
@@ -747,7 +755,7 b' static PyObject *lazymanifest_diff(lazym'
747 755 return NULL;
748 756 }
749 757 listclean = (!pyclean) ? false : PyObject_IsTrue(pyclean);
750 es = PyString_FromString("");
758 es = PyBytes_FromString("");
751 759 if (!es) {
752 760 goto nomem;
753 761 }
@@ -787,8 +795,8 b' static PyObject *lazymanifest_diff(lazym'
787 795 result = linecmp(left, right);
788 796 }
789 797 key = result <= 0 ?
790 PyString_FromString(left->start) :
791 PyString_FromString(right->start);
798 PyBytes_FromString(left->start) :
799 PyBytes_FromString(right->start);
792 800 if (!key)
793 801 goto nomem;
794 802 if (result < 0) {
@@ -873,9 +881,14 b' static PyMethodDef lazymanifest_methods['
873 881 {NULL},
874 882 };
875 883
884 #ifdef IS_PY3K
885 #define LAZYMANIFEST_TPFLAGS Py_TPFLAGS_DEFAULT
886 #else
887 #define LAZYMANIFEST_TPFLAGS Py_TPFLAGS_DEFAULT | Py_TPFLAGS_HAVE_SEQUENCE_IN
888 #endif
889
876 890 static PyTypeObject lazymanifestType = {
877 PyObject_HEAD_INIT(NULL)
878 0, /* ob_size */
891 PyVarObject_HEAD_INIT(NULL, 0)
879 892 "parsers.lazymanifest", /* tp_name */
880 893 sizeof(lazymanifest), /* tp_basicsize */
881 894 0, /* tp_itemsize */
@@ -894,7 +907,7 b' static PyTypeObject lazymanifestType = {'
894 907 0, /* tp_getattro */
895 908 0, /* tp_setattro */
896 909 0, /* tp_as_buffer */
897 Py_TPFLAGS_DEFAULT | Py_TPFLAGS_HAVE_SEQUENCE_IN, /* tp_flags */
910 LAZYMANIFEST_TPFLAGS, /* tp_flags */
898 911 "TODO(augie)", /* tp_doc */
899 912 0, /* tp_traverse */
900 913 0, /* tp_clear */
@@ -104,69 +104,300 b' def _textv2(it):'
104 104 _checkforbidden(files)
105 105 return ''.join(lines)
106 106
107 class _lazymanifest(dict):
108 """This is the pure implementation of lazymanifest.
107 class lazymanifestiter(object):
108 def __init__(self, lm):
109 self.pos = 0
110 self.lm = lm
109 111
110 It has not been optimized *at all* and is not lazy.
111 """
112 def __iter__(self):
113 return self
112 114
113 def __init__(self, data):
114 dict.__init__(self)
115 for f, n, fl in _parse(data):
116 self[f] = n, fl
115 def next(self):
116 try:
117 data, pos = self.lm._get(self.pos)
118 except IndexError:
119 raise StopIteration
120 if pos == -1:
121 self.pos += 1
122 return data[0]
123 self.pos += 1
124 zeropos = data.find('\x00', pos)
125 return data[pos:zeropos]
117 126
118 def __setitem__(self, k, v):
119 node, flag = v
120 assert node is not None
121 if len(node) > 21:
122 node = node[:21] # match c implementation behavior
123 dict.__setitem__(self, k, (node, flag))
127 class lazymanifestiterentries(object):
128 def __init__(self, lm):
129 self.lm = lm
130 self.pos = 0
124 131
125 132 def __iter__(self):
126 return iter(sorted(dict.keys(self)))
133 return self
134
135 def next(self):
136 try:
137 data, pos = self.lm._get(self.pos)
138 except IndexError:
139 raise StopIteration
140 if pos == -1:
141 self.pos += 1
142 return data
143 zeropos = data.find('\x00', pos)
144 hashval = unhexlify(data, self.lm.extrainfo[self.pos],
145 zeropos + 1, 40)
146 flags = self.lm._getflags(data, self.pos, zeropos)
147 self.pos += 1
148 return (data[pos:zeropos], hashval, flags)
149
150 def unhexlify(data, extra, pos, length):
151 s = data[pos:pos + length].decode('hex')
152 if extra:
153 s += chr(extra & 0xff)
154 return s
155
156 def _cmp(a, b):
157 return (a > b) - (a < b)
158
159 class _lazymanifest(object):
160 def __init__(self, data, positions=None, extrainfo=None, extradata=None):
161 if positions is None:
162 self.positions = self.findlines(data)
163 self.extrainfo = [0] * len(self.positions)
164 self.data = data
165 self.extradata = []
166 else:
167 self.positions = positions[:]
168 self.extrainfo = extrainfo[:]
169 self.extradata = extradata[:]
170 self.data = data
171
172 def findlines(self, data):
173 if not data:
174 return []
175 pos = data.find("\n")
176 if pos == -1 or data[-1] != '\n':
177 raise ValueError("Manifest did not end in a newline.")
178 positions = [0]
179 prev = data[:data.find('\x00')]
180 while pos < len(data) - 1 and pos != -1:
181 positions.append(pos + 1)
182 nexts = data[pos + 1:data.find('\x00', pos + 1)]
183 if nexts < prev:
184 raise ValueError("Manifest lines not in sorted order.")
185 prev = nexts
186 pos = data.find("\n", pos + 1)
187 return positions
188
189 def _get(self, index):
190 # get the position encoded in pos:
191 # positive number is an index in 'data'
192 # negative number is in extrapieces
193 pos = self.positions[index]
194 if pos >= 0:
195 return self.data, pos
196 return self.extradata[-pos - 1], -1
197
198 def _getkey(self, pos):
199 if pos >= 0:
200 return self.data[pos:self.data.find('\x00', pos + 1)]
201 return self.extradata[-pos - 1][0]
202
203 def bsearch(self, key):
204 first = 0
205 last = len(self.positions) - 1
206
207 while first <= last:
208 midpoint = (first + last)//2
209 nextpos = self.positions[midpoint]
210 candidate = self._getkey(nextpos)
211 r = _cmp(key, candidate)
212 if r == 0:
213 return midpoint
214 else:
215 if r < 0:
216 last = midpoint - 1
217 else:
218 first = midpoint + 1
219 return -1
127 220
128 def iterkeys(self):
129 return iter(sorted(dict.keys(self)))
221 def bsearch2(self, key):
222 # same as bsearch, but also returns the insertion position when the
223 # key is not found (kept separate for performance reasons)
224 first = 0
225 last = len(self.positions) - 1
226
227 while first <= last:
228 midpoint = (first + last)//2
229 nextpos = self.positions[midpoint]
230 candidate = self._getkey(nextpos)
231 r = _cmp(key, candidate)
232 if r == 0:
233 return (midpoint, True)
234 else:
235 if r < 0:
236 last = midpoint - 1
237 else:
238 first = midpoint + 1
239 return (first, False)
240
241 def __contains__(self, key):
242 return self.bsearch(key) != -1
243
244 def _getflags(self, data, needle, pos):
245 start = pos + 41
246 end = data.find("\n", start)
247 if end == -1:
248 end = len(data) - 1
249 if start == end:
250 return ''
251 return self.data[start:end]
130 252
131 def iterentries(self):
132 return ((f, e[0], e[1]) for f, e in sorted(self.iteritems()))
253 def __getitem__(self, key):
254 if not isinstance(key, str):
255 raise TypeError("getitem: manifest keys must be a string.")
256 needle = self.bsearch(key)
257 if needle == -1:
258 raise KeyError
259 data, pos = self._get(needle)
260 if pos == -1:
261 return (data[1], data[2])
262 zeropos = data.find('\x00', pos)
263 assert 0 <= needle <= len(self.positions)
264 assert len(self.extrainfo) == len(self.positions)
265 hashval = unhexlify(data, self.extrainfo[needle], zeropos + 1, 40)
266 flags = self._getflags(data, needle, zeropos)
267 return (hashval, flags)
268
269 def __delitem__(self, key):
270 needle, found = self.bsearch2(key)
271 if not found:
272 raise KeyError
273 cur = self.positions[needle]
274 self.positions = self.positions[:needle] + self.positions[needle + 1:]
275 self.extrainfo = self.extrainfo[:needle] + self.extrainfo[needle + 1:]
276 if cur >= 0:
277 self.data = self.data[:cur] + '\x00' + self.data[cur + 1:]
278
279 def __setitem__(self, key, value):
280 if not isinstance(key, str):
281 raise TypeError("setitem: manifest keys must be a string.")
282 if not isinstance(value, tuple) or len(value) != 2:
283 raise TypeError("Manifest values must be a tuple of (node, flags).")
284 hashval = value[0]
285 if not isinstance(hashval, str) or not 20 <= len(hashval) <= 22:
286 raise TypeError("node must be a 20-byte string")
287 flags = value[1]
288 if len(hashval) == 22:
289 hashval = hashval[:-1]
290 if not isinstance(flags, str) or len(flags) > 1:
291 raise TypeError("flags must a 0 or 1 byte string, got %r", flags)
292 needle, found = self.bsearch2(key)
293 if found:
294 # put the item
295 pos = self.positions[needle]
296 if pos < 0:
297 self.extradata[-pos - 1] = (key, hashval, value[1])
298 else:
299 # entry lives in self.data; don't edit in place, stash in extradata
300 self.extradata.append((key, hashval, value[1]))
301 self.positions[needle] = -len(self.extradata)
302 else:
303 # not found, put it in with extra positions
304 self.extradata.append((key, hashval, value[1]))
305 self.positions = (self.positions[:needle] + [-len(self.extradata)]
306 + self.positions[needle:])
307 self.extrainfo = (self.extrainfo[:needle] + [0] +
308 self.extrainfo[needle:])
133 309
134 310 def copy(self):
135 c = _lazymanifest('')
136 c.update(self)
137 return c
311 # XXX call _compact like in C?
312 return _lazymanifest(self.data, self.positions, self.extrainfo,
313 self.extradata)
314
315 def _compact(self):
316 # hopefully not called TOO often
317 if len(self.extradata) == 0:
318 return
319 l = []
320 last_cut = 0
321 i = 0
322 offset = 0
323 self.extrainfo = [0] * len(self.positions)
324 while i < len(self.positions):
325 if self.positions[i] >= 0:
326 cur = self.positions[i]
327 last_cut = cur
328 while True:
329 self.positions[i] = offset
330 i += 1
331 if i == len(self.positions) or self.positions[i] < 0:
332 break
333 offset += self.positions[i] - cur
334 cur = self.positions[i]
335 end_cut = self.data.find('\n', cur)
336 if end_cut != -1:
337 end_cut += 1
338 offset += end_cut - cur
339 l.append(self.data[last_cut:end_cut])
340 else:
341 while i < len(self.positions) and self.positions[i] < 0:
342 cur = self.positions[i]
343 t = self.extradata[-cur - 1]
344 l.append(self._pack(t))
345 self.positions[i] = offset
346 if len(t[1]) > 20:
347 self.extrainfo[i] = ord(t[1][21])
348 offset += len(l[-1])
349 i += 1
350 self.data = ''.join(l)
351 self.extradata = []
352
353 def _pack(self, d):
354 return d[0] + '\x00' + d[1][:20].encode('hex') + d[2] + '\n'
355
356 def text(self):
357 self._compact()
358 return self.data
138 359
139 360 def diff(self, m2, clean=False):
140 361 '''Finds changes between the current manifest and m2.'''
362 # XXX think whether efficiency matters here
141 363 diff = {}
142 364
143 for fn, e1 in self.iteritems():
365 for fn, e1, flags in self.iterentries():
144 366 if fn not in m2:
145 diff[fn] = e1, (None, '')
367 diff[fn] = (e1, flags), (None, '')
146 368 else:
147 369 e2 = m2[fn]
148 if e1 != e2:
149 diff[fn] = e1, e2
370 if (e1, flags) != e2:
371 diff[fn] = (e1, flags), e2
150 372 elif clean:
151 373 diff[fn] = None
152 374
153 for fn, e2 in m2.iteritems():
375 for fn, e2, flags in m2.iterentries():
154 376 if fn not in self:
155 diff[fn] = (None, ''), e2
377 diff[fn] = (None, ''), (e2, flags)
156 378
157 379 return diff
158 380
381 def iterentries(self):
382 return lazymanifestiterentries(self)
383
384 def iterkeys(self):
385 return lazymanifestiter(self)
386
387 def __iter__(self):
388 return lazymanifestiter(self)
389
390 def __len__(self):
391 return len(self.positions)
392
159 393 def filtercopy(self, filterfn):
394 # XXX should be optimized
160 395 c = _lazymanifest('')
161 396 for f, n, fl in self.iterentries():
162 397 if filterfn(f):
163 398 c[f] = n, fl
164 399 return c
165 400
166 def text(self):
167 """Get the full data of this manifest as a bytestring."""
168 return _textv1(self.iterentries())
169
170 401 try:
171 402 _lazymanifest = parsers.lazymanifest
172 403 except AttributeError:
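
A usage sketch for the pure-Python _lazymanifest above (Python 2, matching the module; the entries are made up). Each manifest line is "path\0" followed by 40 hex digits, an optional flag character, and a newline:

    node = '1' * 40                       # 40 hex digits = one 20-byte node
    data = 'bar.txt\x00%s\nfoo.txt\x00%sx\n' % (node, node)
    lm = _lazymanifest(data)
    print(list(lm.iterkeys()))            # ['bar.txt', 'foo.txt']
    binnode, flags = lm['foo.txt']        # (20-byte node, 'x')
    lm['baz.txt'] = (binnode, '')         # parked in extradata until _compact
    print(lm.text())                      # compacted, still in sorted order
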
@@ -882,6 +1113,8 b' class treemanifest(object):'
882 1113
883 1114 def writesubtrees(self, m1, m2, writesubtree):
884 1115 self._load() # for consistency; should never have any effect here
1116 m1._load()
1117 m2._load()
885 1118 emptytree = treemanifest()
886 1119 for d, subm in self._dirs.iteritems():
887 1120 subp1 = m1._dirs.get(d, emptytree)._node
@@ -890,12 +1123,11 b' class treemanifest(object):'
890 1123 subp1, subp2 = subp2, subp1
891 1124 writesubtree(subm, subp1, subp2)
892 1125
893 class manifest(revlog.revlog):
1126 class manifestrevlog(revlog.revlog):
1127 '''A revlog that stores manifest texts. This is responsible for caching the
1128 full-text manifest contents.
1129 '''
894 1130 def __init__(self, opener, dir='', dirlogcache=None):
895 '''The 'dir' and 'dirlogcache' arguments are for internal use by
896 manifest.manifest only. External users should create a root manifest
897 log with manifest.manifest(opener) and call dirlog() on it.
898 '''
899 1131 # During normal operations, we expect to deal with not more than four
900 1132 # revs at a time (such as during commit --amend). When rebasing large
901 1133 # stacks of commits, the number can go up, hence the config knob below.
@@ -907,17 +1139,18 b' class manifest(revlog.revlog):'
907 1139 cachesize = opts.get('manifestcachesize', cachesize)
908 1140 usetreemanifest = opts.get('treemanifest', usetreemanifest)
909 1141 usemanifestv2 = opts.get('manifestv2', usemanifestv2)
910 self._mancache = util.lrucachedict(cachesize)
911 self._treeinmem = usetreemanifest
1142
912 1143 self._treeondisk = usetreemanifest
913 1144 self._usemanifestv2 = usemanifestv2
1145
1146 self._fulltextcache = util.lrucachedict(cachesize)
1147
914 1148 indexfile = "00manifest.i"
915 1149 if dir:
916 assert self._treeondisk
1150 assert self._treeondisk, 'opts is %r' % opts
917 1151 if not dir.endswith('/'):
918 1152 dir = dir + '/'
919 1153 indexfile = "meta/" + dir + "00manifest.i"
920 revlog.revlog.__init__(self, opener, indexfile)
921 1154 self._dir = dir
922 1155 # The dirlogcache is kept on the root manifest log
923 1156 if dir:
@@ -925,12 +1158,281 b' class manifest(revlog.revlog):'
925 1158 else:
926 1159 self._dirlogcache = {'': self}
927 1160
1161 super(manifestrevlog, self).__init__(opener, indexfile,
1162 checkambig=bool(dir))
1163
1164 @property
1165 def fulltextcache(self):
1166 return self._fulltextcache
1167
1168 def clearcaches(self):
1169 super(manifestrevlog, self).clearcaches()
1170 self._fulltextcache.clear()
1171 self._dirlogcache = {'': self}
1172
1173 def dirlog(self, dir):
1174 if dir:
1175 assert self._treeondisk
1176 if dir not in self._dirlogcache:
1177 self._dirlogcache[dir] = manifestrevlog(self.opener, dir,
1178 self._dirlogcache)
1179 return self._dirlogcache[dir]
1180
1181 def add(self, m, transaction, link, p1, p2, added, removed):
1182 if (p1 in self.fulltextcache and util.safehasattr(m, 'fastdelta')
1183 and not self._usemanifestv2):
1184 # If our first parent is in the manifest cache, we can
1185 # compute a delta here using properties we know about the
1186 # manifest up-front, which may save time later for the
1187 # revlog layer.
1188
1189 _checkforbidden(added)
1190 # combine the changed lists into one sorted iterator
1191 work = heapq.merge([(x, False) for x in added],
1192 [(x, True) for x in removed])
1193
1194 arraytext, deltatext = m.fastdelta(self.fulltextcache[p1], work)
1195 cachedelta = self.rev(p1), deltatext
1196 text = util.buffer(arraytext)
1197 n = self.addrevision(text, transaction, link, p1, p2, cachedelta)
1198 else:
1199 # The first parent manifest isn't already loaded, so we'll
1200 # just encode a fulltext of the manifest and pass that
1201 # through to the revlog layer, and let it handle the delta
1202 # process.
1203 if self._treeondisk:
1204 m1 = self.read(p1)
1205 m2 = self.read(p2)
1206 n = self._addtree(m, transaction, link, m1, m2)
1207 arraytext = None
1208 else:
1209 text = m.text(self._usemanifestv2)
1210 n = self.addrevision(text, transaction, link, p1, p2)
1211 arraytext = array.array('c', text)
1212
1213 if arraytext is not None:
1214 self.fulltextcache[n] = arraytext
1215
1216 return n
1217
1218 def _addtree(self, m, transaction, link, m1, m2):
1219 # If the manifest is unchanged compared to one parent,
1220 # don't write a new revision
1221 if m.unmodifiedsince(m1) or m.unmodifiedsince(m2):
1222 return m.node()
1223 def writesubtree(subm, subp1, subp2):
1224 sublog = self.dirlog(subm.dir())
1225 sublog.add(subm, transaction, link, subp1, subp2, None, None)
1226 m.writesubtrees(m1, m2, writesubtree)
1227 text = m.dirtext(self._usemanifestv2)
1228 # Double-check whether contents are unchanged to one parent
1229 if text == m1.dirtext(self._usemanifestv2):
1230 n = m1.node()
1231 elif text == m2.dirtext(self._usemanifestv2):
1232 n = m2.node()
1233 else:
1234 n = self.addrevision(text, transaction, link, m1.node(), m2.node())
1235 # Save nodeid so parent manifest can calculate its nodeid
1236 m.setnode(n)
1237 return n
1238
1239 class manifestlog(object):
1240 """A collection class representing the collection of manifest snapshots
1241 referenced by commits in the repository.
1242
1243 In this situation, 'manifest' refers to the abstract concept of a snapshot
1244 of the list of files in the given commit. Consumers of the output of this
1245 class do not care about the implementation details of the actual manifests
1246 they receive (i.e. tree or flat or lazily loaded, etc)."""
1247 def __init__(self, opener, repo):
1248 self._repo = repo
1249
1250 usetreemanifest = False
1251
1252 opts = getattr(opener, 'options', None)
1253 if opts is not None:
1254 usetreemanifest = opts.get('treemanifest', usetreemanifest)
1255 self._treeinmem = usetreemanifest
1256
1257 # We'll separate this into its own cache once oldmanifest is no longer
1258 # used
1259 self._mancache = repo.manifest._mancache
1260
1261 @property
1262 def _revlog(self):
1263 return self._repo.manifest
1264
1265 def __getitem__(self, node):
1266 """Retrieves the manifest instance for the given node. Throws a KeyError
1267 if not found.
1268 """
1269 if node in self._mancache:
1270 cachemf = self._mancache[node]
1271 # The old manifest may put non-ctx manifests in the cache, so skip
1272 # those since they don't implement the full api.
1273 if (isinstance(cachemf, manifestctx) or
1274 isinstance(cachemf, treemanifestctx)):
1275 return cachemf
1276
1277 if self._treeinmem:
1278 m = treemanifestctx(self._revlog, '', node)
1279 else:
1280 m = manifestctx(self._revlog, node)
1281 if node != revlog.nullid:
1282 self._mancache[node] = m
1283 return m
1284
1285 def add(self, m, transaction, link, p1, p2, added, removed):
1286 return self._revlog.add(m, transaction, link, p1, p2, added, removed)
1287
1288 class manifestctx(object):
1289 """A class representing a single revision of a manifest, including its
1290 contents, its parent revs, and its linkrev.
1291 """
1292 def __init__(self, revlog, node):
1293 self._revlog = revlog
1294 self._data = None
1295
1296 self._node = node
1297
1298 # TODO: We eventually want p1, p2, and linkrev exposed on this class,
1299 # but let's add it later when something needs it and we can load it
1300 # lazily.
1301 #self.p1, self.p2 = revlog.parents(node)
1302 #rev = revlog.rev(node)
1303 #self.linkrev = revlog.linkrev(rev)
1304
1305 def node(self):
1306 return self._node
1307
1308 def read(self):
1309 if not self._data:
1310 if self._node == revlog.nullid:
1311 self._data = manifestdict()
1312 else:
1313 text = self._revlog.revision(self._node)
1314 arraytext = array.array('c', text)
1315 self._revlog._fulltextcache[self._node] = arraytext
1316 self._data = manifestdict(text)
1317 return self._data
1318
1319 def readfast(self):
1320 rl = self._revlog
1321 r = rl.rev(self._node)
1322 deltaparent = rl.deltaparent(r)
1323 if deltaparent != revlog.nullrev and deltaparent in rl.parentrevs(r):
1324 return self.readdelta()
1325 return self.read()
1326
1327 def readdelta(self):
1328 revlog = self._revlog
1329 if revlog._usemanifestv2:
1330 # Need to perform a slow delta
1331 r0 = revlog.deltaparent(revlog.rev(self._node))
1332 m0 = manifestctx(revlog, revlog.node(r0)).read()
1333 m1 = self.read()
1334 md = manifestdict()
1335 for f, ((n0, fl0), (n1, fl1)) in m0.diff(m1).iteritems():
1336 if n1:
1337 md[f] = n1
1338 if fl1:
1339 md.setflag(f, fl1)
1340 return md
1341
1342 r = revlog.rev(self._node)
1343 d = mdiff.patchtext(revlog.revdiff(revlog.deltaparent(r), r))
1344 return manifestdict(d)
1345
1346 class treemanifestctx(object):
1347 def __init__(self, revlog, dir, node):
1348 revlog = revlog.dirlog(dir)
1349 self._revlog = revlog
1350 self._dir = dir
1351 self._data = None
1352
1353 self._node = node
1354
1355 # TODO: Load p1/p2/linkrev lazily. They need to be lazily loaded so that
1356 # we can instantiate treemanifestctx objects for directories we don't
1357 # have on disk.
1358 #self.p1, self.p2 = revlog.parents(node)
1359 #rev = revlog.rev(node)
1360 #self.linkrev = revlog.linkrev(rev)
1361
1362 def read(self):
1363 if not self._data:
1364 if self._node == revlog.nullid:
1365 self._data = treemanifest()
1366 elif self._revlog._treeondisk:
1367 m = treemanifest(dir=self._dir)
1368 def gettext():
1369 return self._revlog.revision(self._node)
1370 def readsubtree(dir, subm):
1371 return treemanifestctx(self._revlog, dir, subm).read()
1372 m.read(gettext, readsubtree)
1373 m.setnode(self._node)
1374 self._data = m
1375 else:
1376 text = self._revlog.revision(self._node)
1377 arraytext = array.array('c', text)
1378 self._revlog.fulltextcache[self._node] = arraytext
1379 self._data = treemanifest(dir=self._dir, text=text)
1380
1381 return self._data
1382
1383 def node(self):
1384 return self._node
1385
1386 def readdelta(self):
1387 # Need to perform a slow delta
1388 revlog = self._revlog
1389 r0 = revlog.deltaparent(revlog.rev(self._node))
1390 m0 = treemanifestctx(revlog, self._dir, revlog.node(r0)).read()
1391 m1 = self.read()
1392 md = treemanifest(dir=self._dir)
1393 for f, ((n0, fl0), (n1, fl1)) in m0.diff(m1).iteritems():
1394 if n1:
1395 md[f] = n1
1396 if fl1:
1397 md.setflag(f, fl1)
1398 return md
1399
1400 def readfast(self):
1401 rl = self._revlog
1402 r = rl.rev(self._node)
1403 deltaparent = rl.deltaparent(r)
1404 if deltaparent != revlog.nullrev and deltaparent in rl.parentrevs(r):
1405 return self.readdelta()
1406 return self.read()
1407
1408 class manifest(manifestrevlog):
1409 def __init__(self, opener, dir='', dirlogcache=None):
1410 '''The 'dir' and 'dirlogcache' arguments are for internal use by
1411 manifest.manifest only. External users should create a root manifest
1412 log with manifest.manifest(opener) and call dirlog() on it.
1413 '''
1414 # During normal operations, we expect to deal with not more than four
1415 # revs at a time (such as during commit --amend). When rebasing large
1416 # stacks of commits, the number can go up, hence the config knob below.
1417 cachesize = 4
1418 usetreemanifest = False
1419 opts = getattr(opener, 'options', None)
1420 if opts is not None:
1421 cachesize = opts.get('manifestcachesize', cachesize)
1422 usetreemanifest = opts.get('treemanifest', usetreemanifest)
1423 self._mancache = util.lrucachedict(cachesize)
1424 self._treeinmem = usetreemanifest
1425 super(manifest, self).__init__(opener, dir=dir, dirlogcache=dirlogcache)
1426
928 1427 def _newmanifest(self, data=''):
929 1428 if self._treeinmem:
930 1429 return treemanifest(self._dir, data)
931 1430 return manifestdict(data)
932 1431
933 1432 def dirlog(self, dir):
1433 """This overrides the base revlog implementation to allow construction
1434 'manifest' types instead of manifestrevlog types. This is only needed
1435 until we migrate off the 'manifest' type."""
934 1436 if dir:
935 1437 assert self._treeondisk
936 1438 if dir not in self._dirlogcache:
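
With the split above, consumers go through repo.manifestlog and get ctx objects back. A sketch (Python 2; assumes a localrepository carrying these changes):

    def printmanifest(repo, node):
        mctx = repo.manifestlog[node]     # manifestctx or treemanifestctx
        m = mctx.read()                   # full snapshot for that revision
        for f in sorted(m):
            print('%s %s %s' % (m[f].encode('hex'), m.flags(f) or '-', f))
        # mctx.readdelta() / mctx.readfast() give the cheaper partial reads
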
@@ -973,20 +1475,6 b' class manifest(revlog.revlog):'
973 1475 d = mdiff.patchtext(self.revdiff(self.deltaparent(r), r))
974 1476 return manifestdict(d)
975 1477
976 def readfast(self, node):
977 '''use the faster of readdelta or read
978
979 This will return a manifest which is either only the files
980 added/modified relative to p1, or all files in the
981 manifest. Which one is returned depends on the codepath used
982 to retrieve the data.
983 '''
984 r = self.rev(node)
985 deltaparent = self.deltaparent(r)
986 if deltaparent != revlog.nullrev and deltaparent in self.parentrevs(r):
987 return self.readdelta(node)
988 return self.read(node)
989
990 1478 def readshallowfast(self, node):
991 1479 '''like readfast(), but calls readshallowdelta() instead of readdelta()
992 1480 '''
@@ -1000,7 +1488,11 b' class manifest(revlog.revlog):'
1000 1488 if node == revlog.nullid:
1001 1489 return self._newmanifest() # don't upset local cache
1002 1490 if node in self._mancache:
1003 return self._mancache[node][0]
1491 cached = self._mancache[node]
1492 if (isinstance(cached, manifestctx) or
1493 isinstance(cached, treemanifestctx)):
1494 cached = cached.read()
1495 return cached
1004 1496 if self._treeondisk:
1005 1497 def gettext():
1006 1498 return self.revision(node)
@@ -1014,7 +1506,9 b' class manifest(revlog.revlog):'
1014 1506 text = self.revision(node)
1015 1507 m = self._newmanifest(text)
1016 1508 arraytext = array.array('c', text)
1017 self._mancache[node] = (m, arraytext)
1509 self._mancache[node] = m
1510 if arraytext is not None:
1511 self.fulltextcache[node] = arraytext
1018 1512 return m
1019 1513
1020 1514 def readshallow(self, node):
@@ -1033,64 +1527,6 b' class manifest(revlog.revlog):'
1033 1527 except KeyError:
1034 1528 return None, None
1035 1529
1036 def add(self, m, transaction, link, p1, p2, added, removed):
1037 if (p1 in self._mancache and not self._treeinmem
1038 and not self._usemanifestv2):
1039 # If our first parent is in the manifest cache, we can
1040 # compute a delta here using properties we know about the
1041 # manifest up-front, which may save time later for the
1042 # revlog layer.
1043
1044 _checkforbidden(added)
1045 # combine the changed lists into one sorted iterator
1046 work = heapq.merge([(x, False) for x in added],
1047 [(x, True) for x in removed])
1048
1049 arraytext, deltatext = m.fastdelta(self._mancache[p1][1], work)
1050 cachedelta = self.rev(p1), deltatext
1051 text = util.buffer(arraytext)
1052 n = self.addrevision(text, transaction, link, p1, p2, cachedelta)
1053 else:
1054 # The first parent manifest isn't already loaded, so we'll
1055 # just encode a fulltext of the manifest and pass that
1056 # through to the revlog layer, and let it handle the delta
1057 # process.
1058 if self._treeondisk:
1059 m1 = self.read(p1)
1060 m2 = self.read(p2)
1061 n = self._addtree(m, transaction, link, m1, m2)
1062 arraytext = None
1063 else:
1064 text = m.text(self._usemanifestv2)
1065 n = self.addrevision(text, transaction, link, p1, p2)
1066 arraytext = array.array('c', text)
1067
1068 self._mancache[n] = (m, arraytext)
1069
1070 return n
1071
1072 def _addtree(self, m, transaction, link, m1, m2):
1073 # If the manifest is unchanged compared to one parent,
1074 # don't write a new revision
1075 if m.unmodifiedsince(m1) or m.unmodifiedsince(m2):
1076 return m.node()
1077 def writesubtree(subm, subp1, subp2):
1078 sublog = self.dirlog(subm.dir())
1079 sublog.add(subm, transaction, link, subp1, subp2, None, None)
1080 m.writesubtrees(m1, m2, writesubtree)
1081 text = m.dirtext(self._usemanifestv2)
1082 # Double-check whether contents are unchanged to one parent
1083 if text == m1.dirtext(self._usemanifestv2):
1084 n = m1.node()
1085 elif text == m2.dirtext(self._usemanifestv2):
1086 n = m2.node()
1087 else:
1088 n = self.addrevision(text, transaction, link, m1.node(), m2.node())
1089 # Save nodeid so parent manifest can calculate its nodeid
1090 m.setnode(n)
1091 return n
1092
1093 1530 def clearcaches(self):
1094 1531 super(manifest, self).clearcaches()
1095 1532 self._mancache.clear()
1096 self._dirlogcache = {'': self}
@@ -113,12 +113,11 b' def splitblock(base1, lines1, base2, lin'
113 113 s1 = i1
114 114 s2 = i2
115 115
116 def allblocks(text1, text2, opts=None, lines1=None, lines2=None, refine=False):
116 def allblocks(text1, text2, opts=None, lines1=None, lines2=None):
117 117 """Return (block, type) tuples, where block is an mdiff.blocks
118 118 line entry. type is '=' for blocks matching exactly one another
119 119 (bdiff blocks), '!' for non-matching blocks and '~' for blocks
120 matching only after having filtered blank lines. If refine is True,
121 then '~' blocks are refined and are only made of blank lines.
120 matching only after having filtered blank lines.
122 121 line1 and line2 are text1 and text2 split with splitnewlines() if
123 122 they are already available.
124 123 """
@@ -475,10 +475,12 b' class mergestate(object):'
475 475 flo = fco.flags()
476 476 fla = fca.flags()
477 477 if 'x' in flags + flo + fla and 'l' not in flags + flo + fla:
478 if fca.node() == nullid:
478 if fca.node() == nullid and flags != flo:
479 479 if preresolve:
480 480 self._repo.ui.warn(
481 _('warning: cannot merge flags for %s\n') % afile)
481 _('warning: cannot merge flags for %s '
482 'without common ancestor - keeping local flags\n')
483 % afile)
482 484 elif flags == fla:
483 485 flags = flo
484 486 if preresolve:
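
Distilled to a pure function, the exec-bit decision above reads as follows (a sketch; 'x' marks an executable file, '' a plain one, and the ancestor flags fla come from fca):

    def mergexflag(flags, flo, fla, noancestor):
        # flags/flo/fla: local/other/ancestor flag strings, e.g. '' or 'x'
        if 'x' in flags + flo + fla and 'l' not in flags + flo + fla:
            if noancestor and flags != flo:
                return flags      # no common ancestor: keep local, warn
            elif flags == fla:
                return flo        # only the other side flipped the bit
        return flags

    print(mergexflag('x', '', '', True))    # 'x' -- local exec bit kept
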
@@ -781,7 +783,7 b' def driverconclude(repo, ms, wctx, label'
781 783 def manifestmerge(repo, wctx, p2, pa, branchmerge, force, matcher,
782 784 acceptremote, followcopies):
783 785 """
784 Merge p1 and p2 with ancestor pa and generate merge action list
786 Merge wctx and p2 with ancestor pa and generate merge action list
785 787
786 788 branchmerge and force are as passed in to update
787 789 matcher = matcher to filter file lists
@@ -1036,6 +1038,12 b' def batchremove(repo, actions):'
1036 1038 unlink = util.unlinkpath
1037 1039 wjoin = repo.wjoin
1038 1040 audit = repo.wvfs.audit
1041 try:
1042 cwd = os.getcwd()
1043 except OSError as err:
1044 if err.errno != errno.ENOENT:
1045 raise
1046 cwd = None
1039 1047 i = 0
1040 1048 for f, args, msg in actions:
1041 1049 repo.ui.debug(" %s: %s -> r\n" % (f, msg))
@@ -1053,6 +1061,18 b' def batchremove(repo, actions):'
1053 1061 i += 1
1054 1062 if i > 0:
1055 1063 yield i, f
1064 if cwd:
1065 # cwd was present before we started to remove files
1066 # let's check if it is present after we removed them
1067 try:
1068 os.getcwd()
1069 except OSError as err:
1070 if err.errno != errno.ENOENT:
1071 raise
1072 # Print a warning if cwd was deleted
1073 repo.ui.warn(_("current directory was removed\n"
1074 "(consider changing to repo root: %s)\n") %
1075 repo.root)
1056 1076
1057 1077 def batchget(repo, mctx, actions):
1058 1078 """apply gets to the working directory
@@ -1150,7 +1170,7 b' def applyupdates(repo, actions, wctx, mc'
1150 1170 numupdates = sum(len(l) for m, l in actions.items() if m != 'k')
1151 1171
1152 1172 if [a for a in actions['r'] if a[0] == '.hgsubstate']:
1153 subrepo.submerge(repo, wctx, mctx, wctx, overwrite)
1173 subrepo.submerge(repo, wctx, mctx, wctx, overwrite, labels)
1154 1174
1155 1175 # remove in parallel (must come first)
1156 1176 z = 0
@@ -1168,7 +1188,7 b' def applyupdates(repo, actions, wctx, mc'
1168 1188 updated = len(actions['g'])
1169 1189
1170 1190 if [a for a in actions['g'] if a[0] == '.hgsubstate']:
1171 subrepo.submerge(repo, wctx, mctx, wctx, overwrite)
1191 subrepo.submerge(repo, wctx, mctx, wctx, overwrite, labels)
1172 1192
1173 1193 # forget (manifest only, just log it) (must come first)
1174 1194 for f, args, msg in actions['f']:
@@ -1253,7 +1273,7 b' def applyupdates(repo, actions, wctx, mc'
1253 1273 progress(_updating, z, item=f, total=numupdates, unit=_files)
1254 1274 if f == '.hgsubstate': # subrepo states need updating
1255 1275 subrepo.submerge(repo, wctx, mctx, wctx.ancestor(mctx),
1256 overwrite)
1276 overwrite, labels)
1257 1277 continue
1258 1278 audit(f)
1259 1279 complete, r = ms.preresolve(f, wctx)
@@ -1286,8 +1306,29 b' def applyupdates(repo, actions, wctx, mc'
1286 1306 removed += msremoved
1287 1307
1288 1308 extraactions = ms.actions()
1289 for k, acts in extraactions.iteritems():
1290 actions[k].extend(acts)
1309 if extraactions:
1310 mfiles = set(a[0] for a in actions['m'])
1311 for k, acts in extraactions.iteritems():
1312 actions[k].extend(acts)
1313 # Remove these files from actions['m'] as well. This is important
1314 # because in recordupdates, files in actions['m'] are processed
1315 # after files in other actions, and the merge driver might add
1316 # files to those actions via extraactions above. This can lead to a
1317 # file being recorded twice, with poor results. This is especially
1318 # problematic for actions['r'] (currently only possible with the
1319 # merge driver in the initial merge process; interrupted merges
1320 # don't go through this flow).
1321 #
1322 # The real fix here is to have indexes by both file and action so
1323 # that when the action for a file is changed it is automatically
1324 # reflected in the other action lists. But that involves a more
1325 # complex data structure, so this will do for now.
1326 #
1327 # We don't need to do the same operation for 'dc' and 'cd' because
1328 # those lists aren't consulted again.
1329 mfiles.difference_update(a[0] for a in acts)
1330
1331 actions['m'] = [a for a in actions['m'] if a[0] in mfiles]
1291 1332
1292 1333 progress(_updating, None, total=numupdates, unit=_files)
1293 1334
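
The bookkeeping above can be exercised in miniature; here a made-up merge driver moves file 'b' from the merge list into the remove list:

    actions = {'m': [('a', (), 'merge'), ('b', (), 'merge')], 'r': []}
    extraactions = {'r': [('b', None, 'driver-resolved: remove')]}

    mfiles = set(a[0] for a in actions['m'])
    for k, acts in extraactions.items():
        actions[k].extend(acts)
        mfiles.difference_update(a[0] for a in acts)
    actions['m'] = [a for a in actions['m'] if a[0] in mfiles]
    print(actions['m'])   # [('a', (), 'merge')] -- 'b' no longer handled twice
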
@@ -1514,15 +1555,16 b' def update(repo, node, branchmerge, forc'
1514 1555 pas = [p1]
1515 1556
1516 1557 # deprecated config: merge.followcopies
1517 followcopies = False
1558 followcopies = repo.ui.configbool('merge', 'followcopies', True)
1518 1559 if overwrite:
1519 1560 pas = [wc]
1561 followcopies = False
1520 1562 elif pas == [p2]: # backwards
1521 pas = [wc.p1()]
1522 elif not branchmerge and not wc.dirty(missing=True):
1523 pass
1524 elif pas[0] and repo.ui.configbool('merge', 'followcopies', True):
1525 followcopies = True
1563 pas = [p1]
1564 elif not pas[0]:
1565 followcopies = False
1566 if not branchmerge and not wc.dirty(missing=True):
1567 followcopies = False
1526 1568
1527 1569 ### calculate phase
1528 1570 actionbyfile, diverge, renamedelete = calculateupdates(
@@ -1535,11 +1577,13 b' def update(repo, node, branchmerge, forc'
1535 1577 if '.hgsubstate' in actionbyfile:
1536 1578 f = '.hgsubstate'
1537 1579 m, args, msg = actionbyfile[f]
1580 prompts = filemerge.partextras(labels)
1581 prompts['f'] = f
1538 1582 if m == 'cd':
1539 1583 if repo.ui.promptchoice(
1540 _("local changed %s which remote deleted\n"
1584 _("local%(l)s changed %(f)s which other%(o)s deleted\n"
1541 1585 "use (c)hanged version or (d)elete?"
1542 "$$ &Changed $$ &Delete") % f, 0):
1586 "$$ &Changed $$ &Delete") % prompts, 0):
1543 1587 actionbyfile[f] = ('r', None, "prompt delete")
1544 1588 elif f in p1:
1545 1589 actionbyfile[f] = ('am', None, "prompt keep")
@@ -1549,9 +1593,9 b' def update(repo, node, branchmerge, forc'
1549 1593 f1, f2, fa, move, anc = args
1550 1594 flags = p2[f2].flags()
1551 1595 if repo.ui.promptchoice(
1552 _("remote changed %s which local deleted\n"
1596 _("other%(o)s changed %(f)s which local%(l)s deleted\n"
1553 1597 "use (c)hanged version or leave (d)eleted?"
1554 "$$ &Changed $$ &Deleted") % f, 0) == 0:
1598 "$$ &Changed $$ &Deleted") % prompts, 0) == 0:
1555 1599 actionbyfile[f] = ('g', (flags, False), "prompt recreating")
1556 1600 else:
1557 1601 del actionbyfile[f]
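
The %(l)s/%(o)s placeholders above are filled from filemerge.partextras(labels), plus the filename under 'f'; a sketch with made-up label fragments:

    prompts = {'l': ' [working copy]', 'o': ' [merge rev]'}  # hypothetical
    prompts['f'] = 'foo.txt'
    print("local%(l)s changed %(f)s which other%(o)s deleted\n"
          "use (c)hanged version or (d)elete?" % prompts)
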
@@ -1563,7 +1607,7 b' def update(repo, node, branchmerge, forc'
1563 1607 actions[m] = []
1564 1608 actions[m].append((f, args, msg))
1565 1609
1566 if not util.checkcase(repo.path):
1610 if not util.fscasesensitive(repo.path):
1567 1611 # check collision between files only in p2 for clean update
1568 1612 if (not branchmerge and
1569 1613 (force or not wc.dirty(missing=True, branch=False))):
@@ -20,49 +20,33 b''
20 20 of the GNU General Public License, incorporated herein by reference.
21 21 */
22 22
23 #define PY_SSIZE_T_CLEAN
24 #include <Python.h>
25 23 #include <stdlib.h>
26 24 #include <string.h>
27 25
28 #include "util.h"
29 26 #include "bitmanipulation.h"
30
31 static char mpatch_doc[] = "Efficient binary patching.";
32 static PyObject *mpatch_Error;
27 #include "compat.h"
28 #include "mpatch.h"
33 29
34 struct frag {
35 int start, end, len;
36 const char *data;
37 };
38
39 struct flist {
40 struct frag *base, *head, *tail;
41 };
42
43 static struct flist *lalloc(Py_ssize_t size)
30 static struct mpatch_flist *lalloc(ssize_t size)
44 31 {
45 struct flist *a = NULL;
32 struct mpatch_flist *a = NULL;
46 33
47 34 if (size < 1)
48 35 size = 1;
49 36
50 a = (struct flist *)malloc(sizeof(struct flist));
37 a = (struct mpatch_flist *)malloc(sizeof(struct mpatch_flist));
51 38 if (a) {
52 a->base = (struct frag *)malloc(sizeof(struct frag) * size);
39 a->base = (struct mpatch_frag *)malloc(sizeof(struct mpatch_frag) * size);
53 40 if (a->base) {
54 41 a->head = a->tail = a->base;
55 42 return a;
56 43 }
57 44 free(a);
58 a = NULL;
59 45 }
60 if (!PyErr_Occurred())
61 PyErr_NoMemory();
62 46 return NULL;
63 47 }
64 48
65 static void lfree(struct flist *a)
49 void mpatch_lfree(struct mpatch_flist *a)
66 50 {
67 51 if (a) {
68 52 free(a->base);
@@ -70,7 +54,7 b' static void lfree(struct flist *a)'
70 54 }
71 55 }
72 56
73 static Py_ssize_t lsize(struct flist *a)
57 static ssize_t lsize(struct mpatch_flist *a)
74 58 {
75 59 return a->tail - a->head;
76 60 }
@@ -78,9 +62,10 b' static Py_ssize_t lsize(struct flist *a)'
78 62 /* move hunks in source that are less cut to dest, compensating
79 63 for changes in offset. the last hunk may be split if necessary.
80 64 */
81 static int gather(struct flist *dest, struct flist *src, int cut, int offset)
65 static int gather(struct mpatch_flist *dest, struct mpatch_flist *src, int cut,
66 int offset)
82 67 {
83 struct frag *d = dest->tail, *s = src->head;
68 struct mpatch_frag *d = dest->tail, *s = src->head;
84 69 int postend, c, l;
85 70
86 71 while (s != src->tail) {
@@ -123,9 +108,9 b' static int gather(struct flist *dest, st'
123 108 }
124 109
125 110 /* like gather, but with no output list */
126 static int discard(struct flist *src, int cut, int offset)
111 static int discard(struct mpatch_flist *src, int cut, int offset)
127 112 {
128 struct frag *s = src->head;
113 struct mpatch_frag *s = src->head;
129 114 int postend, c, l;
130 115
131 116 while (s != src->tail) {
@@ -160,10 +145,11 b' static int discard(struct flist *src, in'
160 145
161 146 /* combine hunk lists a and b, while adjusting b for offset changes in a/
162 147 this deletes a and b and returns the resultant list. */
163 static struct flist *combine(struct flist *a, struct flist *b)
148 static struct mpatch_flist *combine(struct mpatch_flist *a,
149 struct mpatch_flist *b)
164 150 {
165 struct flist *c = NULL;
166 struct frag *bh, *ct;
151 struct mpatch_flist *c = NULL;
152 struct mpatch_frag *bh, *ct;
167 153 int offset = 0, post;
168 154
169 155 if (a && b)
@@ -189,26 +175,26 b' static struct flist *combine(struct flis'
189 175 }
190 176
191 177 /* hold on to tail from a */
192 memcpy(c->tail, a->head, sizeof(struct frag) * lsize(a));
178 memcpy(c->tail, a->head, sizeof(struct mpatch_frag) * lsize(a));
193 179 c->tail += lsize(a);
194 180 }
195 181
196 lfree(a);
197 lfree(b);
182 mpatch_lfree(a);
183 mpatch_lfree(b);
198 184 return c;
199 185 }
200 186
201 187 /* decode a binary patch into a hunk list */
202 static struct flist *decode(const char *bin, Py_ssize_t len)
188 int mpatch_decode(const char *bin, ssize_t len, struct mpatch_flist **res)
203 189 {
204 struct flist *l;
205 struct frag *lt;
190 struct mpatch_flist *l;
191 struct mpatch_frag *lt;
206 192 int pos = 0;
207 193
208 194 /* assume worst case size, we won't have many of these lists */
209 195 l = lalloc(len / 12 + 1);
210 196 if (!l)
211 return NULL;
197 return MPATCH_ERR_NO_MEM;
212 198
213 199 lt = l->tail;
214 200
@@ -224,28 +210,24 b' static struct flist *decode(const char *'
224 210 }
225 211
226 212 if (pos != len) {
227 if (!PyErr_Occurred())
228 PyErr_SetString(mpatch_Error, "patch cannot be decoded");
229 lfree(l);
230 return NULL;
213 mpatch_lfree(l);
214 return MPATCH_ERR_CANNOT_BE_DECODED;
231 215 }
232 216
233 217 l->tail = lt;
234 return l;
218 *res = l;
219 return 0;
235 220 }
236 221
237 222 /* calculate the size of resultant text */
238 static Py_ssize_t calcsize(Py_ssize_t len, struct flist *l)
223 ssize_t mpatch_calcsize(ssize_t len, struct mpatch_flist *l)
239 224 {
240 Py_ssize_t outlen = 0, last = 0;
241 struct frag *f = l->head;
225 ssize_t outlen = 0, last = 0;
226 struct mpatch_frag *f = l->head;
242 227
243 228 while (f != l->tail) {
244 229 if (f->start < last || f->end > len) {
245 if (!PyErr_Occurred())
246 PyErr_SetString(mpatch_Error,
247 "invalid patch");
248 return -1;
230 return MPATCH_ERR_INVALID_PATCH;
249 231 }
250 232 outlen += f->start - last;
251 233 last = f->end;
@@ -257,18 +239,16 b' static Py_ssize_t calcsize(Py_ssize_t le'
257 239 return outlen;
258 240 }
259 241
260 static int apply(char *buf, const char *orig, Py_ssize_t len, struct flist *l)
242 int mpatch_apply(char *buf, const char *orig, ssize_t len,
243 struct mpatch_flist *l)
261 244 {
262 struct frag *f = l->head;
245 struct mpatch_frag *f = l->head;
263 246 int last = 0;
264 247 char *p = buf;
265 248
266 249 while (f != l->tail) {
267 250 if (f->start < last || f->end > len) {
268 if (!PyErr_Occurred())
269 PyErr_SetString(mpatch_Error,
270 "invalid patch");
271 return 0;
251 return MPATCH_ERR_INVALID_PATCH;
272 252 }
273 253 memcpy(p, orig + last, f->start - last);
274 254 p += f->start - last;
@@ -278,146 +258,23 b' static int apply(char *buf, const char *'
278 258 f++;
279 259 }
280 260 memcpy(p, orig + last, len - last);
281 return 1;
261 return 0;
282 262 }
283 263
284 264 /* recursively generate a patch of all bins between start and end */
285 static struct flist *fold(PyObject *bins, Py_ssize_t start, Py_ssize_t end)
265 struct mpatch_flist *mpatch_fold(void *bins,
266 struct mpatch_flist* (*get_next_item)(void*, ssize_t),
267 ssize_t start, ssize_t end)
286 268 {
287 Py_ssize_t len, blen;
288 const char *buffer;
269 ssize_t len;
289 270
290 271 if (start + 1 == end) {
291 272 /* trivial case, output a decoded list */
292 PyObject *tmp = PyList_GetItem(bins, start);
293 if (!tmp)
294 return NULL;
295 if (PyObject_AsCharBuffer(tmp, &buffer, &blen))
296 return NULL;
297 return decode(buffer, blen);
273 return get_next_item(bins, start);
298 274 }
299 275
300 276 /* divide and conquer, memory management is elsewhere */
301 277 len = (end - start) / 2;
302 return combine(fold(bins, start, start + len),
303 fold(bins, start + len, end));
304 }
305
306 static PyObject *
307 patches(PyObject *self, PyObject *args)
308 {
309 PyObject *text, *bins, *result;
310 struct flist *patch;
311 const char *in;
312 char *out;
313 Py_ssize_t len, outlen, inlen;
314
315 if (!PyArg_ParseTuple(args, "OO:mpatch", &text, &bins))
316 return NULL;
317
318 len = PyList_Size(bins);
319 if (!len) {
320 /* nothing to do */
321 Py_INCREF(text);
322 return text;
323 }
324
325 if (PyObject_AsCharBuffer(text, &in, &inlen))
326 return NULL;
327
328 patch = fold(bins, 0, len);
329 if (!patch)
330 return NULL;
331
332 outlen = calcsize(inlen, patch);
333 if (outlen < 0) {
334 result = NULL;
335 goto cleanup;
336 }
337 result = PyBytes_FromStringAndSize(NULL, outlen);
338 if (!result) {
339 result = NULL;
340 goto cleanup;
341 }
342 out = PyBytes_AsString(result);
343 if (!apply(out, in, inlen, patch)) {
344 Py_DECREF(result);
345 result = NULL;
346 }
347 cleanup:
348 lfree(patch);
349 return result;
278 return combine(mpatch_fold(bins, get_next_item, start, start + len),
279 mpatch_fold(bins, get_next_item, start + len, end));
350 280 }
351
352 /* calculate size of a patched file directly */
353 static PyObject *
354 patchedsize(PyObject *self, PyObject *args)
355 {
356 long orig, start, end, len, outlen = 0, last = 0, pos = 0;
357 Py_ssize_t patchlen;
358 char *bin;
359
360 if (!PyArg_ParseTuple(args, "ls#", &orig, &bin, &patchlen))
361 return NULL;
362
363 while (pos >= 0 && pos < patchlen) {
364 start = getbe32(bin + pos);
365 end = getbe32(bin + pos + 4);
366 len = getbe32(bin + pos + 8);
367 if (start > end)
368 break; /* sanity check */
369 pos += 12 + len;
370 outlen += start - last;
371 last = end;
372 outlen += len;
373 }
374
375 if (pos != patchlen) {
376 if (!PyErr_Occurred())
377 PyErr_SetString(mpatch_Error, "patch cannot be decoded");
378 return NULL;
379 }
380
381 outlen += orig - last;
382 return Py_BuildValue("l", outlen);
383 }
384
385 static PyMethodDef methods[] = {
386 {"patches", patches, METH_VARARGS, "apply a series of patches\n"},
387 {"patchedsize", patchedsize, METH_VARARGS, "calculed patched size\n"},
388 {NULL, NULL}
389 };
390
391 #ifdef IS_PY3K
392 static struct PyModuleDef mpatch_module = {
393 PyModuleDef_HEAD_INIT,
394 "mpatch",
395 mpatch_doc,
396 -1,
397 methods
398 };
399
400 PyMODINIT_FUNC PyInit_mpatch(void)
401 {
402 PyObject *m;
403
404 m = PyModule_Create(&mpatch_module);
405 if (m == NULL)
406 return NULL;
407
408 mpatch_Error = PyErr_NewException("mercurial.mpatch.mpatchError",
409 NULL, NULL);
410 Py_INCREF(mpatch_Error);
411 PyModule_AddObject(m, "mpatchError", mpatch_Error);
412
413 return m;
414 }
415 #else
416 PyMODINIT_FUNC
417 initmpatch(void)
418 {
419 Py_InitModule3("mpatch", methods, mpatch_doc);
420 mpatch_Error = PyErr_NewException("mercurial.mpatch.mpatchError",
421 NULL, NULL);
422 }
423 #endif
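
The hunk format that decode()/mpatch_decode() walk is three big-endian 32-bit ints (start, end, data length) followed by the data bytes; a pure-Python sketch of applying one such patch (Python 2 strings, as in this codebase):

    import struct

    def iterhunks(binpatch):
        pos = 0
        while pos < len(binpatch):
            start, end, length = struct.unpack('>III', binpatch[pos:pos + 12])
            yield start, end, binpatch[pos + 12:pos + 12 + length]
            pos += 12 + length

    def applypatch(orig, binpatch):
        out, last = [], 0
        for start, end, data in iterhunks(binpatch):
            out.append(orig[last:start])   # copy the untouched prefix
            out.append(data)               # data replaces orig[start:end]
            last = end
        out.append(orig[last:])
        return ''.join(out)

    patch = struct.pack('>III', 0, 5, 3) + 'new'
    print(applypatch('hello world', patch))   # 'new world'
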
@@ -27,288 +27,54 b''
27 27
28 28 #include "util.h"
29 29 #include "bitmanipulation.h"
30 #include "compat.h"
31 #include "mpatch.h"
30 32
31 33 static char mpatch_doc[] = "Efficient binary patching.";
32 34 static PyObject *mpatch_Error;
33 35
34 struct frag {
35 int start, end, len;
36 const char *data;
37 };
38
39 struct flist {
40 struct frag *base, *head, *tail;
41 };
42
43 static struct flist *lalloc(Py_ssize_t size)
36 static void setpyerr(int r)
44 37 {
45 struct flist *a = NULL;
46
47 if (size < 1)
48 size = 1;
49
50 a = (struct flist *)malloc(sizeof(struct flist));
51 if (a) {
52 a->base = (struct frag *)malloc(sizeof(struct frag) * size);
53 if (a->base) {
54 a->head = a->tail = a->base;
55 return a;
56 }
57 free(a);
58 a = NULL;
59 }
60 if (!PyErr_Occurred())
38 switch (r) {
39 case MPATCH_ERR_NO_MEM:
61 40 PyErr_NoMemory();
62 return NULL;
63 }
64
65 static void lfree(struct flist *a)
66 {
67 if (a) {
68 free(a->base);
69 free(a);
41 break;
42 case MPATCH_ERR_CANNOT_BE_DECODED:
43 PyErr_SetString(mpatch_Error, "patch cannot be decoded");
44 break;
45 case MPATCH_ERR_INVALID_PATCH:
46 PyErr_SetString(mpatch_Error, "invalid patch");
47 break;
70 48 }
71 49 }
72 50
73 static Py_ssize_t lsize(struct flist *a)
74 {
75 return a->tail - a->head;
76 }
77
78 /* move hunks in source that are less cut to dest, compensating
79 for changes in offset. the last hunk may be split if necessary.
80 */
81 static int gather(struct flist *dest, struct flist *src, int cut, int offset)
51 struct mpatch_flist *cpygetitem(void *bins, ssize_t pos)
82 52 {
83 struct frag *d = dest->tail, *s = src->head;
84 int postend, c, l;
85
86 while (s != src->tail) {
87 if (s->start + offset >= cut)
88 break; /* we've gone far enough */
89
90 postend = offset + s->start + s->len;
91 if (postend <= cut) {
92 /* save this hunk */
93 offset += s->start + s->len - s->end;
94 *d++ = *s++;
95 }
96 else {
97 /* break up this hunk */
98 c = cut - offset;
99 if (s->end < c)
100 c = s->end;
101 l = cut - offset - s->start;
102 if (s->len < l)
103 l = s->len;
104
105 offset += s->start + l - c;
106
107 d->start = s->start;
108 d->end = c;
109 d->len = l;
110 d->data = s->data;
111 d++;
112 s->start = c;
113 s->len = s->len - l;
114 s->data = s->data + l;
115
116 break;
117 }
118 }
119
120 dest->tail = d;
121 src->head = s;
122 return offset;
123 }
124
125 /* like gather, but with no output list */
126 static int discard(struct flist *src, int cut, int offset)
127 {
128 struct frag *s = src->head;
129 int postend, c, l;
130
131 while (s != src->tail) {
132 if (s->start + offset >= cut)
133 break;
134
135 postend = offset + s->start + s->len;
136 if (postend <= cut) {
137 offset += s->start + s->len - s->end;
138 s++;
139 }
140 else {
141 c = cut - offset;
142 if (s->end < c)
143 c = s->end;
144 l = cut - offset - s->start;
145 if (s->len < l)
146 l = s->len;
53 const char *buffer;
54 struct mpatch_flist *res;
55 ssize_t blen;
56 int r;
147 57
148 offset += s->start + l - c;
149 s->start = c;
150 s->len = s->len - l;
151 s->data = s->data + l;
152
153 break;
154 }
155 }
156
157 src->head = s;
158 return offset;
159 }
160
161 /* combine hunk lists a and b, while adjusting b for offset changes in a/
162 this deletes a and b and returns the resultant list. */
163 static struct flist *combine(struct flist *a, struct flist *b)
164 {
165 struct flist *c = NULL;
166 struct frag *bh, *ct;
167 int offset = 0, post;
168
169 if (a && b)
170 c = lalloc((lsize(a) + lsize(b)) * 2);
171
172 if (c) {
173
174 for (bh = b->head; bh != b->tail; bh++) {
175 /* save old hunks */
176 offset = gather(c, a, bh->start, offset);
177
178 /* discard replaced hunks */
179 post = discard(a, bh->end, offset);
180
181 /* insert new hunk */
182 ct = c->tail;
183 ct->start = bh->start - offset;
184 ct->end = bh->end - post;
185 ct->len = bh->len;
186 ct->data = bh->data;
187 c->tail++;
188 offset = post;
189 }
190
191 /* hold on to tail from a */
192 memcpy(c->tail, a->head, sizeof(struct frag) * lsize(a));
193 c->tail += lsize(a);
194 }
195
196 lfree(a);
197 lfree(b);
198 return c;
199 }
200
201 /* decode a binary patch into a hunk list */
202 static struct flist *decode(const char *bin, Py_ssize_t len)
203 {
204 struct flist *l;
205 struct frag *lt;
206 int pos = 0;
207
208 /* assume worst case size, we won't have many of these lists */
209 l = lalloc(len / 12 + 1);
210 if (!l)
58 PyObject *tmp = PyList_GetItem((PyObject*)bins, pos);
59 if (!tmp)
211 60 return NULL;
212
213 lt = l->tail;
214
215 while (pos >= 0 && pos < len) {
216 lt->start = getbe32(bin + pos);
217 lt->end = getbe32(bin + pos + 4);
218 lt->len = getbe32(bin + pos + 8);
219 lt->data = bin + pos + 12;
220 pos += 12 + lt->len;
221 if (lt->start > lt->end || lt->len < 0)
222 break; /* sanity check */
223 lt++;
224 }
225
226 if (pos != len) {
61 if (PyObject_AsCharBuffer(tmp, &buffer, (Py_ssize_t*)&blen))
62 return NULL;
63 if ((r = mpatch_decode(buffer, blen, &res)) < 0) {
227 64 if (!PyErr_Occurred())
228 PyErr_SetString(mpatch_Error, "patch cannot be decoded");
229 lfree(l);
65 setpyerr(r);
230 66 return NULL;
231 67 }
232
233 l->tail = lt;
234 return l;
235 }
236
237 /* calculate the size of resultant text */
238 static Py_ssize_t calcsize(Py_ssize_t len, struct flist *l)
239 {
240 Py_ssize_t outlen = 0, last = 0;
241 struct frag *f = l->head;
242
243 while (f != l->tail) {
244 if (f->start < last || f->end > len) {
245 if (!PyErr_Occurred())
246 PyErr_SetString(mpatch_Error,
247 "invalid patch");
248 return -1;
249 }
250 outlen += f->start - last;
251 last = f->end;
252 outlen += f->len;
253 f++;
254 }
255
256 outlen += len - last;
257 return outlen;
258 }
259
260 static int apply(char *buf, const char *orig, Py_ssize_t len, struct flist *l)
261 {
262 struct frag *f = l->head;
263 int last = 0;
264 char *p = buf;
265
266 while (f != l->tail) {
267 if (f->start < last || f->end > len) {
268 if (!PyErr_Occurred())
269 PyErr_SetString(mpatch_Error,
270 "invalid patch");
271 return 0;
272 }
273 memcpy(p, orig + last, f->start - last);
274 p += f->start - last;
275 memcpy(p, f->data, f->len);
276 last = f->end;
277 p += f->len;
278 f++;
279 }
280 memcpy(p, orig + last, len - last);
281 return 1;
282 }
283
284 /* recursively generate a patch of all bins between start and end */
285 static struct flist *fold(PyObject *bins, Py_ssize_t start, Py_ssize_t end)
286 {
287 Py_ssize_t len, blen;
288 const char *buffer;
289
290 if (start + 1 == end) {
291 /* trivial case, output a decoded list */
292 PyObject *tmp = PyList_GetItem(bins, start);
293 if (!tmp)
294 return NULL;
295 if (PyObject_AsCharBuffer(tmp, &buffer, &blen))
296 return NULL;
297 return decode(buffer, blen);
298 }
299
300 /* divide and conquer, memory management is elsewhere */
301 len = (end - start) / 2;
302 return combine(fold(bins, start, start + len),
303 fold(bins, start + len, end));
68 return res;
304 69 }
305 70
306 71 static PyObject *
307 72 patches(PyObject *self, PyObject *args)
308 73 {
309 74 PyObject *text, *bins, *result;
310 struct flist *patch;
75 struct mpatch_flist *patch;
311 76 const char *in;
77 int r = 0;
312 78 char *out;
313 79 Py_ssize_t len, outlen, inlen;
314 80
@@ -325,12 +91,16 b' patches(PyObject *self, PyObject *args)'
325 91 if (PyObject_AsCharBuffer(text, &in, &inlen))
326 92 return NULL;
327 93
328 patch = fold(bins, 0, len);
329 if (!patch)
94 patch = mpatch_fold(bins, cpygetitem, 0, len);
95 if (!patch) { /* error already set or memory error */
96 if (!PyErr_Occurred())
97 PyErr_NoMemory();
330 98 return NULL;
99 }
331 100
332 outlen = calcsize(inlen, patch);
101 outlen = mpatch_calcsize(inlen, patch);
333 102 if (outlen < 0) {
103 r = (int)outlen;
334 104 result = NULL;
335 105 goto cleanup;
336 106 }
@@ -340,12 +110,14 b' patches(PyObject *self, PyObject *args)'
340 110 goto cleanup;
341 111 }
342 112 out = PyBytes_AsString(result);
343 if (!apply(out, in, inlen, patch)) {
113 if ((r = mpatch_apply(out, in, inlen, patch)) < 0) {
344 114 Py_DECREF(result);
345 115 result = NULL;
346 116 }
347 117 cleanup:
348 lfree(patch);
118 mpatch_lfree(patch);
119 if (!result && !PyErr_Occurred())
120 setpyerr(r);
349 121 return result;
350 122 }
351 123
@@ -42,9 +42,9 b' Examples:'
42 42
43 43 (A, ())
44 44
45 - When changeset A is split into B and C, a single marker are used:
45 - When changeset A is split into B and C, a single marker is used:
46 46
47 (A, (C, C))
47 (A, (B, C))
48 48
49 49 We use a single marker to distinguish the "split" case from the "divergence"
50 50 case. If two independent operations rewrite the same changeset A in to A' and
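An illustrative contrast between the two marker shapes described above, with
names standing in for binary node ids:

    # split: one marker carrying both successors
    markers = [(A, (B, C))]
    # divergence: two markers created independently for the same precursor
    markers = [(A, (B,)), (A, (C,))]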
@@ -1236,7 +1236,7 b' def createmarkers(repo, relations, flag='
1236 1236 if not prec.mutable():
1237 1237 raise error.Abort(_("cannot obsolete public changeset: %s")
1238 1238 % prec,
1239 hint='see "hg help phases" for details')
1239 hint="see 'hg help phases' for details")
1240 1240 nprec = prec.node()
1241 1241 nsucs = tuple(s.node() for s in sucs)
1242 1242 npare = None
@@ -63,11 +63,19 b' struct listdir_stat {'
63 63 };
64 64 #endif
65 65
66 #ifdef IS_PY3K
67 #define listdir_slot(name) \
68 static PyObject *listdir_stat_##name(PyObject *self, void *x) \
69 { \
70 return PyLong_FromLong(((struct listdir_stat *)self)->st.name); \
71 }
72 #else
66 73 #define listdir_slot(name) \
67 74 static PyObject *listdir_stat_##name(PyObject *self, void *x) \
68 75 { \
69 76 return PyInt_FromLong(((struct listdir_stat *)self)->st.name); \
70 77 }
78 #endif
71 79
72 80 listdir_slot(st_dev)
73 81 listdir_slot(st_mode)
@@ -624,7 +632,7 b' static PyObject *statfiles(PyObject *sel'
624 632 pypath = PySequence_GetItem(names, i);
625 633 if (!pypath)
626 634 goto bail;
627 path = PyString_AsString(pypath);
635 path = PyBytes_AsString(pypath);
628 636 if (path == NULL) {
629 637 Py_DECREF(pypath);
630 638 PyErr_SetString(PyExc_TypeError, "not a string");
@@ -706,7 +714,7 b' static PyObject *recvfds(PyObject *self,'
706 714 if (!rfdslist)
707 715 goto bail;
708 716 for (i = 0; i < rfdscount; i++) {
709 PyObject *obj = PyInt_FromLong(rfds[i]);
717 PyObject *obj = PyLong_FromLong(rfds[i]);
710 718 if (!obj)
711 719 goto bail;
712 720 PyList_SET_ITEM(rfdslist, i, obj);
@@ -65,7 +65,7 b' class parser(object):'
65 65 # handle infix rules, take as suffix if unambiguous
66 66 infix, suffix = self._elements[token][3:]
67 67 if suffix and not (infix and self._hasnewterm()):
68 expr = (suffix[0], expr)
68 expr = (suffix, expr)
69 69 elif infix:
70 70 expr = (infix[0], expr, self._parseoperand(*infix[1:]))
71 71 else:
@@ -15,6 +15,18 b''
15 15 #include "util.h"
16 16 #include "bitmanipulation.h"
17 17
18 #ifdef IS_PY3K
19 /* The mapping of Python types is meant to be temporary to get Python
20 * 3 to compile. We should remove this once Python 3 is fully
21 * supported and proper types are used in the extensions themselves. */
22 #define PyInt_Type PyLong_Type
23 #define PyInt_Check PyLong_Check
24 #define PyInt_FromLong PyLong_FromLong
25 #define PyInt_FromSsize_t PyLong_FromSsize_t
26 #define PyInt_AS_LONG PyLong_AS_LONG
27 #define PyInt_AsLong PyLong_AsLong
28 #endif
29
18 30 static char *versionerrortext = "Python minor version mismatch";
19 31
20 32 static int8_t hextable[256] = {
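The shim above lets Python 2-era extension code compile on Python 3, which
folded the int/long split into a single type, so PyInt_* simply aliases
PyLong_*. The same collapse is visible from Python (a small illustrative
check, not part of the change):

    import sys
    big = 2 ** 64
    # Python 2: long; Python 3: int (there is no separate long type)
    assert isinstance(big, int if sys.version_info[0] >= 3 else long)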
@@ -610,37 +622,37 b' static PyObject *pack_dirstate(PyObject '
610 622 /* Figure out how much we need to allocate. */
611 623 for (nbytes = 40, pos = 0; PyDict_Next(map, &pos, &k, &v);) {
612 624 PyObject *c;
613 if (!PyString_Check(k)) {
625 if (!PyBytes_Check(k)) {
614 626 PyErr_SetString(PyExc_TypeError, "expected string key");
615 627 goto bail;
616 628 }
617 nbytes += PyString_GET_SIZE(k) + 17;
629 nbytes += PyBytes_GET_SIZE(k) + 17;
618 630 c = PyDict_GetItem(copymap, k);
619 631 if (c) {
620 if (!PyString_Check(c)) {
632 if (!PyBytes_Check(c)) {
621 633 PyErr_SetString(PyExc_TypeError,
622 634 "expected string key");
623 635 goto bail;
624 636 }
625 nbytes += PyString_GET_SIZE(c) + 1;
637 nbytes += PyBytes_GET_SIZE(c) + 1;
626 638 }
627 639 }
628 640
629 packobj = PyString_FromStringAndSize(NULL, nbytes);
641 packobj = PyBytes_FromStringAndSize(NULL, nbytes);
630 642 if (packobj == NULL)
631 643 goto bail;
632 644
633 p = PyString_AS_STRING(packobj);
645 p = PyBytes_AS_STRING(packobj);
634 646
635 647 pn = PySequence_ITEM(pl, 0);
636 if (PyString_AsStringAndSize(pn, &s, &l) == -1 || l != 20) {
648 if (PyBytes_AsStringAndSize(pn, &s, &l) == -1 || l != 20) {
637 649 PyErr_SetString(PyExc_TypeError, "expected a 20-byte hash");
638 650 goto bail;
639 651 }
640 652 memcpy(p, s, l);
641 653 p += 20;
642 654 pn = PySequence_ITEM(pl, 1);
643 if (PyString_AsStringAndSize(pn, &s, &l) == -1 || l != 20) {
655 if (PyBytes_AsStringAndSize(pn, &s, &l) == -1 || l != 20) {
644 656 PyErr_SetString(PyExc_TypeError, "expected a 20-byte hash");
645 657 goto bail;
646 658 }
@@ -685,21 +697,21 b' static PyObject *pack_dirstate(PyObject '
685 697 putbe32((uint32_t)mtime, p + 8);
686 698 t = p + 12;
687 699 p += 16;
688 len = PyString_GET_SIZE(k);
689 memcpy(p, PyString_AS_STRING(k), len);
700 len = PyBytes_GET_SIZE(k);
701 memcpy(p, PyBytes_AS_STRING(k), len);
690 702 p += len;
691 703 o = PyDict_GetItem(copymap, k);
692 704 if (o) {
693 705 *p++ = '\0';
694 l = PyString_GET_SIZE(o);
695 memcpy(p, PyString_AS_STRING(o), l);
706 l = PyBytes_GET_SIZE(o);
707 memcpy(p, PyBytes_AS_STRING(o), l);
696 708 p += l;
697 709 len += l + 1;
698 710 }
699 711 putbe32((uint32_t)len, t);
700 712 }
701 713
702 pos = p - PyString_AS_STRING(packobj);
714 pos = p - PyBytes_AS_STRING(packobj);
703 715 if (pos != nbytes) {
704 716 PyErr_Format(PyExc_SystemError, "bad dirstate size: %ld != %ld",
705 717 (long)pos, (long)nbytes);
@@ -796,7 +808,7 b' static const char *index_deref(indexObje'
796 808 return self->offsets[pos];
797 809 }
798 810
799 return PyString_AS_STRING(self->data) + pos * v1_hdrsize;
811 return PyBytes_AS_STRING(self->data) + pos * v1_hdrsize;
800 812 }
801 813
802 814 static inline int index_get_parents(indexObject *self, Py_ssize_t rev,
@@ -926,7 +938,7 b' static const char *index_node(indexObjec'
926 938 PyObject *tuple, *str;
927 939 tuple = PyList_GET_ITEM(self->added, pos - self->length + 1);
928 940 str = PyTuple_GetItem(tuple, 7);
929 return str ? PyString_AS_STRING(str) : NULL;
941 return str ? PyBytes_AS_STRING(str) : NULL;
930 942 }
931 943
932 944 data = index_deref(self, pos);
@@ -937,7 +949,7 b' static int nt_insert(indexObject *self, '
937 949
938 950 static int node_check(PyObject *obj, char **node, Py_ssize_t *nodelen)
939 951 {
940 if (PyString_AsStringAndSize(obj, node, nodelen) == -1)
952 if (PyBytes_AsStringAndSize(obj, node, nodelen) == -1)
941 953 return -1;
942 954 if (*nodelen == 20)
943 955 return 0;
@@ -1825,7 +1837,7 b' static PyObject *index_partialmatch(inde'
1825 1837 case -2:
1826 1838 Py_RETURN_NONE;
1827 1839 case -1:
1828 return PyString_FromStringAndSize(nullid, 20);
1840 return PyBytes_FromStringAndSize(nullid, 20);
1829 1841 }
1830 1842
1831 1843 fullnode = index_node(self, rev);
@@ -1834,7 +1846,7 b' static PyObject *index_partialmatch(inde'
1834 1846 "could not access rev %d", rev);
1835 1847 return NULL;
1836 1848 }
1837 return PyString_FromStringAndSize(fullnode, 20);
1849 return PyBytes_FromStringAndSize(fullnode, 20);
1838 1850 }
1839 1851
1840 1852 static PyObject *index_m_get(indexObject *self, PyObject *args)
@@ -2247,7 +2259,7 b' static void nt_invalidate_added(indexObj'
2247 2259 PyObject *tuple = PyList_GET_ITEM(self->added, i);
2248 2260 PyObject *node = PyTuple_GET_ITEM(tuple, 7);
2249 2261
2250 nt_insert(self, PyString_AS_STRING(node), -1);
2262 nt_insert(self, PyBytes_AS_STRING(node), -1);
2251 2263 }
2252 2264
2253 2265 if (start == 0)
@@ -2264,7 +2276,12 b' static int index_slice_del(indexObject *'
2264 2276 Py_ssize_t length = index_length(self);
2265 2277 int ret = 0;
2266 2278
2279 /* Argument changed from PySliceObject* to PyObject* in Python 3. */
2280 #ifdef IS_PY3K
2281 if (PySlice_GetIndicesEx(item, length,
2282 #else
2267 2283 if (PySlice_GetIndicesEx((PySliceObject*)item, length,
2284 #endif
2268 2285 &start, &stop, &step, &slicelength) < 0)
2269 2286 return -1;
2270 2287
@@ -2372,9 +2389,9 b' static int index_assign_subscript(indexO'
2372 2389 */
2373 2390 static Py_ssize_t inline_scan(indexObject *self, const char **offsets)
2374 2391 {
2375 const char *data = PyString_AS_STRING(self->data);
2392 const char *data = PyBytes_AS_STRING(self->data);
2376 2393 Py_ssize_t pos = 0;
2377 Py_ssize_t end = PyString_GET_SIZE(self->data);
2394 Py_ssize_t end = PyBytes_GET_SIZE(self->data);
2378 2395 long incr = v1_hdrsize;
2379 2396 Py_ssize_t len = 0;
2380 2397
@@ -2416,11 +2433,11 b' static int index_init(indexObject *self,'
2416 2433
2417 2434 if (!PyArg_ParseTuple(args, "OO", &data_obj, &inlined_obj))
2418 2435 return -1;
2419 if (!PyString_Check(data_obj)) {
2436 if (!PyBytes_Check(data_obj)) {
2420 2437 PyErr_SetString(PyExc_TypeError, "data is not a string");
2421 2438 return -1;
2422 2439 }
2423 size = PyString_GET_SIZE(data_obj);
2440 size = PyBytes_GET_SIZE(data_obj);
2424 2441
2425 2442 self->inlined = inlined_obj && PyObject_IsTrue(inlined_obj);
2426 2443 self->data = data_obj;
@@ -2516,8 +2533,7 b' static PyGetSetDef index_getset[] = {'
2516 2533 };
2517 2534
2518 2535 static PyTypeObject indexType = {
2519 PyObject_HEAD_INIT(NULL)
2520 0, /* ob_size */
2536 PyVarObject_HEAD_INIT(NULL, 0)
2521 2537 "parsers.index", /* tp_name */
2522 2538 sizeof(indexObject), /* tp_basicsize */
2523 2539 0, /* tp_itemsize */
@@ -2613,7 +2629,7 b' static PyObject *readshas('
2613 2629 return NULL;
2614 2630 }
2615 2631 for (i = 0; i < num; i++) {
2616 PyObject *hash = PyString_FromStringAndSize(source, hashwidth);
2632 PyObject *hash = PyBytes_FromStringAndSize(source, hashwidth);
2617 2633 if (hash == NULL) {
2618 2634 Py_DECREF(list);
2619 2635 return NULL;
@@ -2669,7 +2685,7 b' static PyObject *fm1readmarker(const cha'
2669 2685 if (data + hashwidth > dataend) {
2670 2686 goto overflow;
2671 2687 }
2672 prec = PyString_FromStringAndSize(data, hashwidth);
2688 prec = PyBytes_FromStringAndSize(data, hashwidth);
2673 2689 data += hashwidth;
2674 2690 if (prec == NULL) {
2675 2691 goto bail;
@@ -2712,9 +2728,9 b' static PyObject *fm1readmarker(const cha'
2712 2728 if (meta + leftsize + rightsize > dataend) {
2713 2729 goto overflow;
2714 2730 }
2715 left = PyString_FromStringAndSize(meta, leftsize);
2731 left = PyBytes_FromStringAndSize(meta, leftsize);
2716 2732 meta += leftsize;
2717 right = PyString_FromStringAndSize(meta, rightsize);
2733 right = PyBytes_FromStringAndSize(meta, rightsize);
2718 2734 meta += rightsize;
2719 2735 tmp = PyTuple_New(2);
2720 2736 if (!left || !right || !tmp) {
@@ -2880,7 +2896,7 b' PyMODINIT_FUNC PyInit_parsers(void)'
2880 2896 PyObject *mod;
2881 2897
2882 2898 if (check_python_version() == -1)
2883 return;
2899 return NULL;
2884 2900 mod = PyModule_Create(&parsers_module);
2885 2901 module_init(mod);
2886 2902 return mod;
@@ -410,11 +410,7 b' class linereader(object):'
410 410 return self.fp.readline()
411 411
412 412 def __iter__(self):
413 while True:
414 l = self.readline()
415 if not l:
416 break
417 yield l
413 return iter(self.readline, '')
418 414
419 415 class abstractbackend(object):
420 416 def __init__(self, ui):
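The loop rewrites in this file all lean on the two-argument form of the
iter() built-in, which keeps calling a function until it returns the
sentinel. A minimal standalone sketch of the idiom (hypothetical file-like
object, Python 2 as in the surrounding code):

    from cStringIO import StringIO
    fp = StringIO('one\ntwo\n')
    # calls fp.readline() repeatedly, stopping at the first ''
    lines = list(iter(fp.readline, ''))   # ['one\n', 'two\n']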
@@ -673,6 +669,8 b' class patchfile(object):'
673 669 self.mode = (False, False)
674 670 if self.missing:
675 671 self.ui.warn(_("unable to find '%s' for patching\n") % self.fname)
672 self.ui.warn(_("(use '--prefix' to apply patch relative to the "
673 "current directory)\n"))
676 674
677 675 self.hash = {}
678 676 self.dirty = 0
@@ -1688,10 +1686,7 b' def scanpatch(fp):'
1688 1686 def scanwhile(first, p):
1689 1687 """scan lr while predicate holds"""
1690 1688 lines = [first]
1691 while True:
1692 line = lr.readline()
1693 if not line:
1694 break
1689 for line in iter(lr.readline, ''):
1695 1690 if p(line):
1696 1691 lines.append(line)
1697 1692 else:
@@ -1699,10 +1694,7 b' def scanpatch(fp):'
1699 1694 break
1700 1695 return lines
1701 1696
1702 while True:
1703 line = lr.readline()
1704 if not line:
1705 break
1697 for line in iter(lr.readline, ''):
1706 1698 if line.startswith('diff --git a/') or line.startswith('diff -r '):
1707 1699 def notheader(line):
1708 1700 s = line.split(None, 1)
@@ -1772,10 +1764,7 b' def iterhunks(fp):'
1772 1764 context = None
1773 1765 lr = linereader(fp)
1774 1766
1775 while True:
1776 x = lr.readline()
1777 if not x:
1778 break
1767 for x in iter(lr.readline, ''):
1779 1768 if state == BFILE and (
1780 1769 (not context and x[0] == '@')
1781 1770 or (context is not False and x.startswith('***************'))
@@ -1963,8 +1952,10 b' def _applydiff(ui, fp, patcher, backend,'
1963 1952 data, mode = None, None
1964 1953 if gp.op in ('RENAME', 'COPY'):
1965 1954 data, mode = store.getfile(gp.oldpath)[:2]
1966 # FIXME: failing getfile has never been handled here
1967 assert data is not None
1955 if data is None:
1956 # This means that the old path does not exist
1957 raise PatchError(_("source file '%s' does not exist")
1958 % gp.oldpath)
1968 1959 if gp.mode:
1969 1960 mode = gp.mode
1970 1961 if gp.op == 'ADD':
@@ -2155,7 +2146,14 b' def difffeatureopts(ui, opts=None, untru'
2155 2146 def get(key, name=None, getter=ui.configbool, forceplain=None):
2156 2147 if opts:
2157 2148 v = opts.get(key)
2158 if v:
2149 # diffopts flags are either None-default (which is passed
2150 # through unchanged, so we can identify unset values), or
2151 # some other falsey default (e.g. --unified, which defaults
2152 # to an empty string). We only want to override the config
2153 # entries from hgrc with command line values if they
2154 # appear to have been set, which means any truthy value,
2155 # True, or False.
2156 if v or isinstance(v, bool):
2159 2157 return v
2160 2158 if forceplain is not None and ui.plain():
2161 2159 return forceplain
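A toy model of the override rule spelled out in the new comment, with
illustrative names only: a command-line value wins when it is truthy or an
explicit bool; a falsy non-bool default falls back to the hgrc value:

    def pick(cmdline, config):
        if cmdline or isinstance(cmdline, bool):
            return cmdline
        return config

    pick('', 3)        # 3: --unified left at its '' default, use config
    pick(False, True)  # False: an explicit boolean flag beats config
    pick(None, 'x')    # 'x': a None default means the flag is unset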
@@ -156,7 +156,7 b' PyObject *encodedir(PyObject *self, PyOb'
156 156 if (!PyArg_ParseTuple(args, "O:encodedir", &pathobj))
157 157 return NULL;
158 158
159 if (PyString_AsStringAndSize(pathobj, &path, &len) == -1) {
159 if (PyBytes_AsStringAndSize(pathobj, &path, &len) == -1) {
160 160 PyErr_SetString(PyExc_TypeError, "expected a string");
161 161 return NULL;
162 162 }
@@ -168,11 +168,12 b' PyObject *encodedir(PyObject *self, PyOb'
168 168 return pathobj;
169 169 }
170 170
171 newobj = PyString_FromStringAndSize(NULL, newlen);
171 newobj = PyBytes_FromStringAndSize(NULL, newlen);
172 172
173 173 if (newobj) {
174 PyString_GET_SIZE(newobj)--;
175 _encodedir(PyString_AS_STRING(newobj), newlen, path,
174 assert(PyBytes_Check(newobj));
175 Py_SIZE(newobj)--;
176 _encodedir(PyBytes_AS_STRING(newobj), newlen, path,
176 177 len + 1);
177 178 }
178 179
@@ -515,9 +516,9 b' PyObject *lowerencode(PyObject *self, Py'
515 516 return NULL;
516 517
517 518 newlen = _lowerencode(NULL, 0, path, len);
518 ret = PyString_FromStringAndSize(NULL, newlen);
519 ret = PyBytes_FromStringAndSize(NULL, newlen);
519 520 if (ret)
520 _lowerencode(PyString_AS_STRING(ret), newlen, path, len);
521 _lowerencode(PyBytes_AS_STRING(ret), newlen, path, len);
521 522
522 523 return ret;
523 524 }
@@ -568,11 +569,11 b' static PyObject *hashmangle(const char *'
568 569 if (lastdot >= 0)
569 570 destsize += len - lastdot - 1;
570 571
571 ret = PyString_FromStringAndSize(NULL, destsize);
572 ret = PyBytes_FromStringAndSize(NULL, destsize);
572 573 if (ret == NULL)
573 574 return NULL;
574 575
575 dest = PyString_AS_STRING(ret);
576 dest = PyBytes_AS_STRING(ret);
576 577 memcopy(dest, &destlen, destsize, "dh/", 3);
577 578
578 579 /* Copy up to dirprefixlen bytes of each path component, up to
@@ -638,7 +639,8 b' static PyObject *hashmangle(const char *'
638 639 memcopy(dest, &destlen, destsize, &src[lastdot],
639 640 len - lastdot - 1);
640 641
641 PyString_GET_SIZE(ret) = destlen;
642 assert(PyBytes_Check(ret));
643 Py_SIZE(ret) = destlen;
642 644
643 645 return ret;
644 646 }
@@ -653,7 +655,7 b' static int sha1hash(char hash[20], const'
653 655 PyObject *shaobj, *hashobj;
654 656
655 657 if (shafunc == NULL) {
656 PyObject *hashlib, *name = PyString_FromString("hashlib");
658 PyObject *hashlib, *name = PyBytes_FromString("hashlib");
657 659
658 660 if (name == NULL)
659 661 return -1;
@@ -686,14 +688,14 b' static int sha1hash(char hash[20], const'
686 688 if (hashobj == NULL)
687 689 return -1;
688 690
689 if (!PyString_Check(hashobj) || PyString_GET_SIZE(hashobj) != 20) {
691 if (!PyBytes_Check(hashobj) || PyBytes_GET_SIZE(hashobj) != 20) {
690 692 PyErr_SetString(PyExc_TypeError,
691 693 "result of digest is not a 20-byte hash");
692 694 Py_DECREF(hashobj);
693 695 return -1;
694 696 }
695 697
696 memcpy(hash, PyString_AS_STRING(hashobj), 20);
698 memcpy(hash, PyBytes_AS_STRING(hashobj), 20);
697 699 Py_DECREF(hashobj);
698 700 return 0;
699 701 }
@@ -731,7 +733,7 b' PyObject *pathencode(PyObject *self, PyO'
731 733 if (!PyArg_ParseTuple(args, "O:pathencode", &pathobj))
732 734 return NULL;
733 735
734 if (PyString_AsStringAndSize(pathobj, &path, &len) == -1) {
736 if (PyBytes_AsStringAndSize(pathobj, &path, &len) == -1) {
735 737 PyErr_SetString(PyExc_TypeError, "expected a string");
736 738 return NULL;
737 739 }
@@ -747,11 +749,12 b' PyObject *pathencode(PyObject *self, PyO'
747 749 return pathobj;
748 750 }
749 751
750 newobj = PyString_FromStringAndSize(NULL, newlen);
752 newobj = PyBytes_FromStringAndSize(NULL, newlen);
751 753
752 754 if (newobj) {
753 PyString_GET_SIZE(newobj)--;
754 basicencode(PyString_AS_STRING(newobj), newlen, path,
755 assert(PyBytes_Check(newobj));
756 Py_SIZE(newobj)--;
757 basicencode(PyBytes_AS_STRING(newobj), newlen, path,
755 758 len + 1);
756 759 }
757 760 }
@@ -40,7 +40,7 b' class pathauditor(object):'
40 40 self.root = root
41 41 self._realfs = realfs
42 42 self.callback = callback
43 if os.path.lexists(root) and not util.checkcase(root):
43 if os.path.lexists(root) and not util.fscasesensitive(root):
44 44 self.normcase = util.normcase
45 45 else:
46 46 self.normcase = lambda x: x
@@ -12,6 +12,10 b' import difflib'
12 12 import re
13 13 import struct
14 14
15 from . import policy
16 policynocffi = policy.policynocffi
17 modulepolicy = policy.policy
18
15 19 def splitnewlines(text):
16 20 '''like str.splitlines, but only split on newlines.'''
17 21 lines = [l + '\n' for l in text.split('\n')]
@@ -96,3 +100,70 b' def fixws(text, allws):'
96 100 text = re.sub('[ \t\r]+', ' ', text)
97 101 text = text.replace(' \n', '\n')
98 102 return text
103
104 if modulepolicy not in policynocffi:
105 try:
106 from _bdiff_cffi import ffi, lib
107 except ImportError:
108 if modulepolicy == 'cffi': # strict cffi import
109 raise
110 else:
111 def blocks(sa, sb):
112 a = ffi.new("struct bdiff_line**")
113 b = ffi.new("struct bdiff_line**")
114 ac = ffi.new("char[]", str(sa))
115 bc = ffi.new("char[]", str(sb))
116 l = ffi.new("struct bdiff_hunk*")
117 try:
118 an = lib.bdiff_splitlines(ac, len(sa), a)
119 bn = lib.bdiff_splitlines(bc, len(sb), b)
120 if not a[0] or not b[0]:
121 raise MemoryError
122 count = lib.bdiff_diff(a[0], an, b[0], bn, l)
123 if count < 0:
124 raise MemoryError
125 rl = [None] * count
126 h = l.next
127 i = 0
128 while h:
129 rl[i] = (h.a1, h.a2, h.b1, h.b2)
130 h = h.next
131 i += 1
132 finally:
133 lib.free(a[0])
134 lib.free(b[0])
135 lib.bdiff_freehunks(l.next)
136 return rl
137
138 def bdiff(sa, sb):
139 a = ffi.new("struct bdiff_line**")
140 b = ffi.new("struct bdiff_line**")
141 ac = ffi.new("char[]", str(sa))
142 bc = ffi.new("char[]", str(sb))
143 l = ffi.new("struct bdiff_hunk*")
144 try:
145 an = lib.bdiff_splitlines(ac, len(sa), a)
146 bn = lib.bdiff_splitlines(bc, len(sb), b)
147 if not a[0] or not b[0]:
148 raise MemoryError
149 count = lib.bdiff_diff(a[0], an, b[0], bn, l)
150 if count < 0:
151 raise MemoryError
152 rl = []
153 h = l.next
154 la = lb = 0
155 while h:
156 if h.a1 != la or h.b1 != lb:
157 lgt = (b[0] + h.b1).l - (b[0] + lb).l
158 rl.append(struct.pack(">lll", (a[0] + la).l - a[0].l,
159 (a[0] + h.a1).l - a[0].l, lgt))
160 rl.append(str(ffi.buffer((b[0] + lb).l, lgt)))
161 la = h.a2
162 lb = h.b2
163 h = h.next
164
165 finally:
166 lib.free(a[0])
167 lib.free(b[0])
168 lib.bdiff_freehunks(l.next)
169 return "".join(rl)
@@ -9,8 +9,10 b' from __future__ import absolute_import'
9 9
10 10 import struct
11 11
12 from . import pycompat
12 from . import policy, pycompat
13 13 stringio = pycompat.stringio
14 modulepolicy = policy.policy
15 policynocffi = policy.policynocffi
14 16
15 17 class mpatchError(Exception):
16 18 """error raised when a delta cannot be decoded
@@ -125,3 +127,44 b' def patchedsize(orig, delta):'
125 127
126 128 outlen += orig - last
127 129 return outlen
130
131 if modulepolicy not in policynocffi:
132 try:
133 from _mpatch_cffi import ffi, lib
134 except ImportError:
135 if modulepolicy == 'cffi': # strict cffi import
136 raise
137 else:
138 @ffi.def_extern()
139 def cffi_get_next_item(arg, pos):
140 all, bins = ffi.from_handle(arg)
141 container = ffi.new("struct mpatch_flist*[1]")
142 to_pass = ffi.new("char[]", str(bins[pos]))
143 all.append(to_pass)
144 r = lib.mpatch_decode(to_pass, len(to_pass) - 1, container)
145 if r < 0:
146 return ffi.NULL
147 return container[0]
148
149 def patches(text, bins):
150 lgt = len(bins)
151 all = []
152 if not lgt:
153 return text
154 arg = (all, bins)
155 patch = lib.mpatch_fold(ffi.new_handle(arg),
156 lib.cffi_get_next_item, 0, lgt)
157 if not patch:
158 raise mpatchError("cannot decode chunk")
159 outlen = lib.mpatch_calcsize(len(text), patch)
160 if outlen < 0:
161 lib.mpatch_lfree(patch)
162 raise mpatchError("inconsistency detected")
163 buf = ffi.new("char[]", outlen)
164 if lib.mpatch_apply(buf, text, len(text), patch) < 0:
165 lib.mpatch_lfree(patch)
166 raise mpatchError("error applying patches")
167 res = ffi.buffer(buf, outlen)[:]
168 lib.mpatch_lfree(patch)
169 return res
170
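Given the 12-byte big-endian hunk header (start, end, length) used
throughout this change, a hypothetical round trip through the pure-Python
patches() in this module:

    import struct
    from mercurial.mpatch import patches
    # replace bytes [6, 11) of the original with 'there'
    delta = struct.pack('>lll', 6, 11, 5) + 'there'
    patches('hello world', [delta])   # -> 'hello there'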
@@ -120,13 +120,14 b" if sys.platform == 'darwin' and ffi is n"
120 120 if skip == name and tp == statmod.S_ISDIR:
121 121 return []
122 122 if stat:
123 mtime = cur.time.tv_sec
123 mtime = cur.mtime.tv_sec
124 124 mode = (cur.accessmask & ~lib.S_IFMT)| tp
125 125 ret.append((name, tp, stat_res(st_mode=mode, st_mtime=mtime,
126 126 st_size=cur.datalength)))
127 127 else:
128 128 ret.append((name, tp))
129 cur += lgt
129 cur = ffi.cast("val_attrs_t*", int(ffi.cast("intptr_t", cur))
130 + lgt)
130 131 return ret
131 132
132 133 def listdir(path, stat=False, skip=None):
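The pointer bump is the heart of the Darwin fix above: cffi pointer
arithmetic advances in units of the pointed-to struct, while attrlist
entries are packed at raw byte offsets, so the cursor is round-tripped
through an integer to move by lgt bytes. The pattern, isolated (names as in
the hunk):

    cur = ffi.cast("val_attrs_t*",
                   int(ffi.cast("intptr_t", cur)) + lgt)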
@@ -173,30 +174,30 b" if os.name != 'nt':"
173 174
174 175 class _iovec(ctypes.Structure):
175 176 _fields_ = [
176 ('iov_base', ctypes.c_void_p),
177 ('iov_len', ctypes.c_size_t),
177 (u'iov_base', ctypes.c_void_p),
178 (u'iov_len', ctypes.c_size_t),
178 179 ]
179 180
180 181 class _msghdr(ctypes.Structure):
181 182 _fields_ = [
182 ('msg_name', ctypes.c_void_p),
183 ('msg_namelen', _socklen_t),
184 ('msg_iov', ctypes.POINTER(_iovec)),
185 ('msg_iovlen', _msg_iovlen_t),
186 ('msg_control', ctypes.c_void_p),
187 ('msg_controllen', _msg_controllen_t),
188 ('msg_flags', ctypes.c_int),
183 (u'msg_name', ctypes.c_void_p),
184 (u'msg_namelen', _socklen_t),
185 (u'msg_iov', ctypes.POINTER(_iovec)),
186 (u'msg_iovlen', _msg_iovlen_t),
187 (u'msg_control', ctypes.c_void_p),
188 (u'msg_controllen', _msg_controllen_t),
189 (u'msg_flags', ctypes.c_int),
189 190 ]
190 191
191 192 class _cmsghdr(ctypes.Structure):
192 193 _fields_ = [
193 ('cmsg_len', _cmsg_len_t),
194 ('cmsg_level', ctypes.c_int),
195 ('cmsg_type', ctypes.c_int),
196 ('cmsg_data', ctypes.c_ubyte * 0),
194 (u'cmsg_len', _cmsg_len_t),
195 (u'cmsg_level', ctypes.c_int),
196 (u'cmsg_type', ctypes.c_int),
197 (u'cmsg_data', ctypes.c_ubyte * 0),
197 198 ]
198 199
199 _libc = ctypes.CDLL(ctypes.util.find_library('c'), use_errno=True)
200 _libc = ctypes.CDLL(ctypes.util.find_library(u'c'), use_errno=True)
200 201 _recvmsg = getattr(_libc, 'recvmsg', None)
201 202 if _recvmsg:
202 203 _recvmsg.restype = getattr(ctypes, 'c_ssize_t', ctypes.c_long)
@@ -12,7 +12,9 b' from __future__ import absolute_import'
12 12
13 13 import sys
14 14
15 if sys.version_info[0] < 3:
15 ispy3 = (sys.version_info[0] >= 3)
16
17 if not ispy3:
16 18 import cPickle as pickle
17 19 import cStringIO as io
18 20 import httplib
@@ -29,36 +31,84 b' else:'
29 31 import urllib.parse as urlparse
30 32 import xmlrpc.client as xmlrpclib
31 33
34 if ispy3:
35 import builtins
36 import functools
37 import os
38 fsencode = os.fsencode
39
40 def sysstr(s):
41 """Return a keyword str to be passed to Python functions such as
42 getattr() and str.encode()
43
44 This never raises UnicodeDecodeError. Non-ascii characters are
45 considered invalid and mapped to arbitrary but unique code points
46 such that 'sysstr(a) != sysstr(b)' for all 'a != b'.
47 """
48 if isinstance(s, builtins.str):
49 return s
50 return s.decode(u'latin-1')
51
52 def _wrapattrfunc(f):
53 @functools.wraps(f)
54 def w(object, name, *args):
55 return f(object, sysstr(name), *args)
56 return w
57
58 # these wrappers are automagically imported by hgloader
59 delattr = _wrapattrfunc(builtins.delattr)
60 getattr = _wrapattrfunc(builtins.getattr)
61 hasattr = _wrapattrfunc(builtins.hasattr)
62 setattr = _wrapattrfunc(builtins.setattr)
63 xrange = builtins.range
64
65 else:
66 def sysstr(s):
67 return s
68
69 # Partial backport from os.py in Python 3, which only accepts bytes.
70 # In Python 2, our paths should only ever be bytes, a unicode path
71 # indicates a bug.
72 def fsencode(filename):
73 if isinstance(filename, str):
74 return filename
75 else:
76 raise TypeError(
77 "expect str, not %s" % type(filename).__name__)
78
32 79 stringio = io.StringIO
33 80 empty = _queue.Empty
34 81 queue = _queue.Queue
35 82
36 83 class _pycompatstub(object):
37 pass
38
39 def _alias(alias, origin, items):
40 """ populate a _pycompatstub
84 def __init__(self):
85 self._aliases = {}
41 86
42 copies items from origin to alias
43 """
44 def hgcase(item):
45 return item.replace('_', '').lower()
46 for item in items:
87 def _registeraliases(self, origin, items):
88 """Add items that will be populated at the first access"""
89 items = map(sysstr, items)
90 self._aliases.update(
91 (item.replace(sysstr('_'), sysstr('')).lower(), (origin, item))
92 for item in items)
93
94 def __getattr__(self, name):
47 95 try:
48 setattr(alias, hgcase(item), getattr(origin, item))
49 except AttributeError:
50 pass
96 origin, item = self._aliases[name]
97 except KeyError:
98 raise AttributeError(name)
99 self.__dict__[name] = obj = getattr(origin, item)
100 return obj
51 101
52 102 httpserver = _pycompatstub()
53 103 urlreq = _pycompatstub()
54 104 urlerr = _pycompatstub()
55 try:
105 if not ispy3:
56 106 import BaseHTTPServer
57 107 import CGIHTTPServer
58 108 import SimpleHTTPServer
59 109 import urllib2
60 110 import urllib
61 _alias(urlreq, urllib, (
111 urlreq._registeraliases(urllib, (
62 112 "addclosehook",
63 113 "addinfourl",
64 114 "ftpwrapper",
@@ -71,9 +121,8 b' try:'
71 121 "unquote",
72 122 "url2pathname",
73 123 "urlencode",
74 "urlencode",
75 124 ))
76 _alias(urlreq, urllib2, (
125 urlreq._registeraliases(urllib2, (
77 126 "AbstractHTTPHandler",
78 127 "BaseHandler",
79 128 "build_opener",
@@ -89,24 +138,24 b' try:'
89 138 "Request",
90 139 "urlopen",
91 140 ))
92 _alias(urlerr, urllib2, (
141 urlerr._registeraliases(urllib2, (
93 142 "HTTPError",
94 143 "URLError",
95 144 ))
96 _alias(httpserver, BaseHTTPServer, (
145 httpserver._registeraliases(BaseHTTPServer, (
97 146 "HTTPServer",
98 147 "BaseHTTPRequestHandler",
99 148 ))
100 _alias(httpserver, SimpleHTTPServer, (
149 httpserver._registeraliases(SimpleHTTPServer, (
101 150 "SimpleHTTPRequestHandler",
102 151 ))
103 _alias(httpserver, CGIHTTPServer, (
152 httpserver._registeraliases(CGIHTTPServer, (
104 153 "CGIHTTPRequestHandler",
105 154 ))
106 155
107 except ImportError:
156 else:
108 157 import urllib.request
109 _alias(urlreq, urllib.request, (
158 urlreq._registeraliases(urllib.request, (
110 159 "AbstractHTTPHandler",
111 160 "addclosehook",
112 161 "addinfourl",
@@ -134,20 +183,14 b' except ImportError:'
134 183 "urlopen",
135 184 ))
136 185 import urllib.error
137 _alias(urlerr, urllib.error, (
186 urlerr._registeraliases(urllib.error, (
138 187 "HTTPError",
139 188 "URLError",
140 189 ))
141 190 import http.server
142 _alias(httpserver, http.server, (
191 httpserver._registeraliases(http.server, (
143 192 "HTTPServer",
144 193 "BaseHTTPRequestHandler",
145 194 "SimpleHTTPRequestHandler",
146 195 "CGIHTTPRequestHandler",
147 196 ))
148
149 try:
150 xrange
151 except NameError:
152 import builtins
153 builtins.xrange = range
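A standalone sketch of the lazy-alias pattern that _registeraliases and
__getattr__ implement above (illustrative names, Python 2 flavor to match
the surrounding code):

    class lazystub(object):
        def __init__(self):
            self._aliases = {}
        def registeraliases(self, origin, items):
            self._aliases.update(
                (item.replace('_', '').lower(), (origin, item))
                for item in items)
        def __getattr__(self, name):
            try:
                origin, item = self._aliases[name]
            except KeyError:
                raise AttributeError(name)
            # cache on the instance so __getattr__ runs at most once
            self.__dict__[name] = obj = getattr(origin, item)
            return obj

    import urllib
    req = lazystub()
    req.registeraliases(urllib, ['urlencode', 'url2pathname'])
    req.urlencode({'a': 1})   # resolved and cached on first access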
@@ -8,6 +8,7 b''
8 8 from __future__ import absolute_import
9 9
10 10 from . import (
11 pycompat,
11 12 util,
12 13 )
13 14
@@ -108,6 +109,9 b' class revsetpredicate(_funcregistrarbase'
108 109 Optional argument 'safe' indicates whether a predicate is safe for
109 110 DoS attack (False by default).
110 111
112 Optional argument 'takeorder' indicates whether a predicate function
113 takes the ordering policy as its last argument.
114
111 115 'revsetpredicate' instance in example above can be used to
112 116 decorate multiple functions.
113 117
@@ -118,10 +122,11 b' class revsetpredicate(_funcregistrarbase'
118 122 Otherwise, explicit 'revset.loadpredicate()' is needed.
119 123 """
120 124 _getname = _funcregistrarbase._parsefuncdecl
121 _docformat = "``%s``\n %s"
125 _docformat = pycompat.sysstr("``%s``\n %s")
122 126
123 def _extrasetup(self, name, func, safe=False):
127 def _extrasetup(self, name, func, safe=False, takeorder=False):
124 128 func._safe = safe
129 func._takeorder = takeorder
125 130
126 131 class filesetpredicate(_funcregistrarbase):
127 132 """Decorator to register fileset predicate
@@ -156,7 +161,7 b' class filesetpredicate(_funcregistrarbas'
156 161 Otherwise, explicit 'fileset.loadpredicate()' is needed.
157 162 """
158 163 _getname = _funcregistrarbase._parsefuncdecl
159 _docformat = "``%s``\n %s"
164 _docformat = pycompat.sysstr("``%s``\n %s")
160 165
161 166 def _extrasetup(self, name, func, callstatus=False, callexisting=False):
162 167 func._callstatus = callstatus
@@ -165,7 +170,7 b' class filesetpredicate(_funcregistrarbas'
165 170 class _templateregistrarbase(_funcregistrarbase):
166 171 """Base of decorator to register functions as template specific one
167 172 """
168 _docformat = ":%s: %s"
173 _docformat = pycompat.sysstr(":%s: %s")
169 174
170 175 class templatekeyword(_templateregistrarbase):
171 176 """Decorator to register template keyword
@@ -147,9 +147,10 b' def strip(ui, repo, nodelist, backup=Tru'
147 147 vfs.join(backupfile))
148 148 repo.ui.log("backupbundle", "saved backup bundle to %s\n",
149 149 vfs.join(backupfile))
150 if saveheads or savebases:
151 # do not compress partial bundle if we remove it from disk later
152 chgrpfile = _bundle(repo, savebases, saveheads, node, 'temp',
150 tmpbundlefile = None
151 if saveheads:
152 # do not compress temporary bundle if we remove it from disk later
153 tmpbundlefile = _bundle(repo, savebases, saveheads, node, 'temp',
153 154 compress=False)
154 155
155 156 mfst = repo.manifest
@@ -173,32 +174,34 b' def strip(ui, repo, nodelist, backup=Tru'
173 174 if (unencoded.startswith('meta/') and
174 175 unencoded.endswith('00manifest.i')):
175 176 dir = unencoded[5:-12]
176 repo.dirlog(dir).strip(striprev, tr)
177 repo.manifest.dirlog(dir).strip(striprev, tr)
177 178 for fn in files:
178 179 repo.file(fn).strip(striprev, tr)
179 180 tr.endgroup()
180 181
181 182 for i in xrange(offset, len(tr.entries)):
182 183 file, troffset, ignore = tr.entries[i]
183 repo.svfs(file, 'a').truncate(troffset)
184 with repo.svfs(file, 'a', checkambig=True) as fp:
185 fp.truncate(troffset)
184 186 if troffset == 0:
185 187 repo.store.markremoved(file)
186 188
187 if saveheads or savebases:
189 if tmpbundlefile:
188 190 ui.note(_("adding branch\n"))
189 f = vfs.open(chgrpfile, "rb")
190 gen = exchange.readbundle(ui, f, chgrpfile, vfs)
191 f = vfs.open(tmpbundlefile, "rb")
192 gen = exchange.readbundle(ui, f, tmpbundlefile, vfs)
191 193 if not repo.ui.verbose:
192 194 # silence internal shuffling chatter
193 195 repo.ui.pushbuffer()
194 196 if isinstance(gen, bundle2.unbundle20):
195 197 with repo.transaction('strip') as tr:
196 198 tr.hookargs = {'source': 'strip',
197 'url': 'bundle:' + vfs.join(chgrpfile)}
199 'url': 'bundle:' + vfs.join(tmpbundlefile)}
198 200 bundle2.applybundle(repo, gen, tr, source='strip',
199 url='bundle:' + vfs.join(chgrpfile))
201 url='bundle:' + vfs.join(tmpbundlefile))
200 202 else:
201 gen.apply(repo, 'strip', 'bundle:' + vfs.join(chgrpfile), True)
203 gen.apply(repo, 'strip', 'bundle:' + vfs.join(tmpbundlefile),
204 True)
202 205 if not repo.ui.verbose:
203 206 repo.ui.popbuffer()
204 207 f.close()
@@ -227,16 +230,18 b' def strip(ui, repo, nodelist, backup=Tru'
227 230
228 231 except: # re-raises
229 232 if backupfile:
230 ui.warn(_("strip failed, full bundle stored in '%s'\n")
233 ui.warn(_("strip failed, backup bundle stored in '%s'\n")
231 234 % vfs.join(backupfile))
232 elif saveheads:
233 ui.warn(_("strip failed, partial bundle stored in '%s'\n")
234 % vfs.join(chgrpfile))
235 if tmpbundlefile:
236 ui.warn(_("strip failed, unrecovered changes stored in '%s'\n")
237 % vfs.join(tmpbundlefile))
238 ui.warn(_("(fix the problem, then recover the changesets with "
239 "\"hg unbundle '%s'\")\n") % vfs.join(tmpbundlefile))
235 240 raise
236 241 else:
237 if saveheads or savebases:
238 # Remove partial backup only if there were no exceptions
239 vfs.unlink(chgrpfile)
242 if tmpbundlefile:
243 # Remove temporary bundle only if there were no exceptions
244 vfs.unlink(tmpbundlefile)
240 245
241 246 repo.destroyed()
242 247
@@ -212,8 +212,11 b' class revlog(object):'
212 212 fashion, which means we never need to rewrite a file to insert or
213 213 remove data, and can use some simple techniques to avoid the need
214 214 for locking while reading.
215
216 If checkambig, indexfile is opened with checkambig=True at
217 writing, to avoid file stat ambiguity.
215 218 """
216 def __init__(self, opener, indexfile):
219 def __init__(self, opener, indexfile, checkambig=False):
217 220 """
218 221 create a revlog object
219 222
@@ -223,11 +226,13 b' class revlog(object):'
223 226 self.indexfile = indexfile
224 227 self.datafile = indexfile[:-2] + ".d"
225 228 self.opener = opener
229 # When True, indexfile is opened with checkambig=True at writing, to
230 # avoid file stat ambiguity.
231 self._checkambig = checkambig
226 232 # 3-tuple of (node, rev, text) for a raw revision.
227 233 self._cache = None
228 # 2-tuple of (rev, baserev) defining the base revision the delta chain
229 # begins at for a revision.
230 self._basecache = None
234 # Maps rev to chain base rev.
235 self._chainbasecache = util.lrucachedict(100)
231 236 # 2-tuple of (offset, data) of raw data from the revlog at an offset.
232 237 self._chunkcache = (0, '')
233 238 # How much data to read and cache into the raw revlog data cache.
@@ -292,6 +297,8 b' class revlog(object):'
292 297 raise RevlogError(_("index %s unknown format %d")
293 298 % (self.indexfile, fmt))
294 299
300 self.storedeltachains = True
301
295 302 self._io = revlogio()
296 303 if self.version == REVLOGV0:
297 304 self._io = revlogoldio()
@@ -340,7 +347,7 b' class revlog(object):'
340 347
341 348 def clearcaches(self):
342 349 self._cache = None
343 self._basecache = None
350 self._chainbasecache.clear()
344 351 self._chunkcache = (0, '')
345 352 self._pcache = {}
346 353
@@ -390,11 +397,17 b' class revlog(object):'
390 397 def length(self, rev):
391 398 return self.index[rev][1]
392 399 def chainbase(self, rev):
400 base = self._chainbasecache.get(rev)
401 if base is not None:
402 return base
403
393 404 index = self.index
394 405 base = index[rev][3]
395 406 while base != rev:
396 407 rev = base
397 408 base = index[rev][3]
409
410 self._chainbasecache[rev] = base
398 411 return base
399 412 def chainlen(self, rev):
400 413 return self._chaininfo(rev)[0]
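The new LRU cache short-circuits the delta-base walk; a standalone sketch of
the memoized lookup, with a plain dict standing in for util.lrucachedict and
index entries modeled as tuples whose slot 3 holds the delta-base revision:

    def chainbase(index, cache, rev):
        base = cache.get(rev)
        if base is not None:
            return base
        base = index[rev][3]
        while base != rev:       # walk until an entry points at itself
            rev = base
            base = index[rev][3]
        cache[rev] = base
        return base

    index = {0: (0, 0, 0, 0), 1: (0, 0, 0, 0), 2: (0, 0, 0, 1)}
    chainbase(index, {}, 2)   # -> 0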
@@ -1271,7 +1284,8 b' class revlog(object):'
1271 1284 finally:
1272 1285 df.close()
1273 1286
1274 fp = self.opener(self.indexfile, 'w', atomictemp=True)
1287 fp = self.opener(self.indexfile, 'w', atomictemp=True,
1288 checkambig=self._checkambig)
1275 1289 self.version &= ~(REVLOGNGINLINEDATA)
1276 1290 self._inline = False
1277 1291 for i in self:
@@ -1314,7 +1328,7 b' class revlog(object):'
1314 1328 dfh = None
1315 1329 if not self._inline:
1316 1330 dfh = self.opener(self.datafile, "a+")
1317 ifh = self.opener(self.indexfile, "a+")
1331 ifh = self.opener(self.indexfile, "a+", checkambig=self._checkambig)
1318 1332 try:
1319 1333 return self._addrevision(node, text, transaction, link, p1, p2,
1320 1334 REVIDX_DEFAULT_FLAGS, cachedelta, ifh, dfh)
@@ -1428,30 +1442,24 b' class revlog(object):'
1428 1442 fh = dfh
1429 1443 ptext = self.revision(self.node(rev), _df=fh)
1430 1444 delta = mdiff.textdiff(ptext, t)
1431 data = self.compress(delta)
1432 l = len(data[1]) + len(data[0])
1433 if basecache[0] == rev:
1434 chainbase = basecache[1]
1435 else:
1436 chainbase = self.chainbase(rev)
1437 dist = l + offset - self.start(chainbase)
1445 header, data = self.compress(delta)
1446 deltalen = len(header) + len(data)
1447 chainbase = self.chainbase(rev)
1448 dist = deltalen + offset - self.start(chainbase)
1438 1449 if self._generaldelta:
1439 1450 base = rev
1440 1451 else:
1441 1452 base = chainbase
1442 1453 chainlen, compresseddeltalen = self._chaininfo(rev)
1443 1454 chainlen += 1
1444 compresseddeltalen += l
1445 return dist, l, data, base, chainbase, chainlen, compresseddeltalen
1455 compresseddeltalen += deltalen
1456 return (dist, deltalen, (header, data), base,
1457 chainbase, chainlen, compresseddeltalen)
1446 1458
1447 1459 curr = len(self)
1448 1460 prev = curr - 1
1449 base = chainbase = curr
1450 1461 offset = self.end(prev)
1451 1462 delta = None
1452 if self._basecache is None:
1453 self._basecache = (prev, self.chainbase(prev))
1454 basecache = self._basecache
1455 1463 p1r, p2r = self.rev(p1), self.rev(p2)
1456 1464
1457 1465 # full versions are inserted when the needed deltas
@@ -1463,8 +1471,12 b' class revlog(object):'
1463 1471 textlen = len(text)
1464 1472
1465 1473 # should we try to build a delta?
1466 if prev != nullrev:
1474 if prev != nullrev and self.storedeltachains:
1467 1475 tested = set()
1476 # This condition is true most of the time when processing
1477 # changegroup data into a generaldelta repo. The only time it
1478 # isn't true is if this is the first revision in a delta chain
1479 # or if ``format.generaldelta=true`` disabled ``lazydeltabase``.
1468 1480 if cachedelta and self._generaldelta and self._lazydeltabase:
1469 1481 # Assume what we received from the server is a good choice
1470 1482 # build delta will reuse the cache
@@ -1515,7 +1527,7 b' class revlog(object):'
1515 1527
1516 1528 if type(text) == str: # only accept immutable objects
1517 1529 self._cache = (node, curr, text)
1518 self._basecache = (curr, chainbase)
1530 self._chainbasecache[curr] = chainbase
1519 1531 return node
1520 1532
1521 1533 def _writeentry(self, transaction, ifh, dfh, entry, data, link, offset):
@@ -1569,7 +1581,7 b' class revlog(object):'
1569 1581 end = 0
1570 1582 if r:
1571 1583 end = self.end(r - 1)
1572 ifh = self.opener(self.indexfile, "a+")
1584 ifh = self.opener(self.indexfile, "a+", checkambig=self._checkambig)
1573 1585 isize = r * self._io.size
1574 1586 if self._inline:
1575 1587 transaction.add(self.indexfile, end + isize, r)
@@ -1585,10 +1597,7 b' class revlog(object):'
1585 1597 try:
1586 1598 # loop through our set of deltas
1587 1599 chain = None
1588 while True:
1589 chunkdata = cg.deltachunk(chain)
1590 if not chunkdata:
1591 break
1600 for chunkdata in iter(lambda: cg.deltachunk(chain), {}):
1592 1601 node = chunkdata['node']
1593 1602 p1 = chunkdata['p1']
1594 1603 p2 = chunkdata['p2']
@@ -1646,7 +1655,8 b' class revlog(object):'
1646 1655 # reopen the index
1647 1656 ifh.close()
1648 1657 dfh = self.opener(self.datafile, "a+")
1649 ifh = self.opener(self.indexfile, "a+")
1658 ifh = self.opener(self.indexfile, "a+",
1659 checkambig=self._checkambig)
1650 1660 finally:
1651 1661 if dfh:
1652 1662 dfh.close()
@@ -9,6 +9,7 b' from __future__ import absolute_import'
9 9
10 10 import heapq
11 11 import re
12 import string
12 13
13 14 from .i18n import _
14 15 from . import (
@@ -22,6 +23,7 b' from . import ('
22 23 parser,
23 24 pathutil,
24 25 phases,
26 pycompat,
25 27 registrar,
26 28 repoview,
27 29 util,
@@ -149,18 +151,16 b' elements = {'
149 151 "(": (21, None, ("group", 1, ")"), ("func", 1, ")"), None),
150 152 "##": (20, None, None, ("_concat", 20), None),
151 153 "~": (18, None, None, ("ancestor", 18), None),
152 "^": (18, None, None, ("parent", 18), ("parentpost", 18)),
154 "^": (18, None, None, ("parent", 18), "parentpost"),
153 155 "-": (5, None, ("negate", 19), ("minus", 5), None),
154 "::": (17, None, ("dagrangepre", 17), ("dagrange", 17),
155 ("dagrangepost", 17)),
156 "..": (17, None, ("dagrangepre", 17), ("dagrange", 17),
157 ("dagrangepost", 17)),
158 ":": (15, "rangeall", ("rangepre", 15), ("range", 15), ("rangepost", 15)),
156 "::": (17, None, ("dagrangepre", 17), ("dagrange", 17), "dagrangepost"),
157 "..": (17, None, ("dagrangepre", 17), ("dagrange", 17), "dagrangepost"),
158 ":": (15, "rangeall", ("rangepre", 15), ("range", 15), "rangepost"),
159 159 "not": (10, None, ("not", 10), None, None),
160 160 "!": (10, None, ("not", 10), None, None),
161 161 "and": (5, None, None, ("and", 5), None),
162 162 "&": (5, None, None, ("and", 5), None),
163 "%": (5, None, None, ("only", 5), ("onlypost", 5)),
163 "%": (5, None, None, ("only", 5), "onlypost"),
164 164 "or": (4, None, None, ("or", 4), None),
165 165 "|": (4, None, None, ("or", 4), None),
166 166 "+": (4, None, None, ("or", 4), None),
@@ -175,12 +175,12 b' elements = {'
175 175 keywords = set(['and', 'or', 'not'])
176 176
177 177 # default set of valid characters for the initial letter of symbols
178 _syminitletters = set(c for c in [chr(i) for i in xrange(256)]
179 if c.isalnum() or c in '._@' or ord(c) > 127)
178 _syminitletters = set(
179 string.ascii_letters +
180 string.digits + pycompat.sysstr('._@')) | set(map(chr, xrange(128, 256)))
180 181
181 182 # default set of valid characters for non-initial letters of symbols
182 _symletters = set(c for c in [chr(i) for i in xrange(256)]
183 if c.isalnum() or c in '-._/@' or ord(c) > 127)
183 _symletters = _syminitletters | set(pycompat.sysstr('-/'))
184 184
185 185 def tokenize(program, lookup=None, syminitletters=None, symletters=None):
186 186 '''
@@ -362,14 +362,22 b' def stringset(repo, subset, x):'
362 362 return baseset([x])
363 363 return baseset()
364 364
365 def rangeset(repo, subset, x, y):
365 def rangeset(repo, subset, x, y, order):
366 366 m = getset(repo, fullreposet(repo), x)
367 367 n = getset(repo, fullreposet(repo), y)
368 368
369 369 if not m or not n:
370 370 return baseset()
371 m, n = m.first(), n.last()
372
371 return _makerangeset(repo, subset, m.first(), n.last(), order)
372
373 def rangepre(repo, subset, y, order):
374 # ':y' can't be rewritten to '0:y' since '0' may be hidden
375 n = getset(repo, fullreposet(repo), y)
376 if not n:
377 return baseset()
378 return _makerangeset(repo, subset, 0, n.last(), order)
379
380 def _makerangeset(repo, subset, m, n, order):
373 381 if m == n:
374 382 r = baseset([m])
375 383 elif n == node.wdirrev:
@@ -380,35 +388,43 b' def rangeset(repo, subset, x, y):'
380 388 r = spanset(repo, m, n + 1)
381 389 else:
382 390 r = spanset(repo, m, n - 1)
383 # XXX We should combine with subset first: 'subset & baseset(...)'. This is
384 # necessary to ensure we preserve the order in subset.
385 #
386 # This has performance implication, carrying the sorting over when possible
387 # would be more efficient.
388 return r & subset
389
390 def dagrange(repo, subset, x, y):
391
392 if order == defineorder:
393 return r & subset
394 else:
395 # carrying the sorting over when possible would be more efficient
396 return subset & r
397
398 def dagrange(repo, subset, x, y, order):
391 399 r = fullreposet(repo)
392 400 xs = reachableroots(repo, getset(repo, r, x), getset(repo, r, y),
393 401 includepath=True)
394 402 return subset & xs
395 403
396 def andset(repo, subset, x, y):
404 def andset(repo, subset, x, y, order):
397 405 return getset(repo, getset(repo, subset, x), y)
398 406
399 def differenceset(repo, subset, x, y):
407 def differenceset(repo, subset, x, y, order):
400 408 return getset(repo, subset, x) - getset(repo, subset, y)
401 409
402 def orset(repo, subset, *xs):
410 def _orsetlist(repo, subset, xs):
403 411 assert xs
404 412 if len(xs) == 1:
405 413 return getset(repo, subset, xs[0])
406 414 p = len(xs) // 2
407 a = orset(repo, subset, *xs[:p])
408 b = orset(repo, subset, *xs[p:])
415 a = _orsetlist(repo, subset, xs[:p])
416 b = _orsetlist(repo, subset, xs[p:])
409 417 return a + b
410 418
411 def notset(repo, subset, x):
419 def orset(repo, subset, x, order):
420 xs = getlist(x)
421 if order == followorder:
422 # slow path to take the subset order
423 return subset & _orsetlist(repo, fullreposet(repo), xs)
424 else:
425 return _orsetlist(repo, subset, xs)
426
427 def notset(repo, subset, x, order):
412 428 return subset - getset(repo, subset, x)
413 429
414 430 def listset(repo, subset, *xs):
@@ -418,10 +434,13 b' def listset(repo, subset, *xs):'
418 434 def keyvaluepair(repo, subset, k, v):
419 435 raise error.ParseError(_("can't use a key-value pair in this context"))
420 436
421 def func(repo, subset, a, b):
437 def func(repo, subset, a, b, order):
422 438 f = getsymbol(a)
423 439 if f in symbols:
424 return symbols[f](repo, subset, b)
440 fn = symbols[f]
441 if getattr(fn, '_takeorder', False):
442 return fn(repo, subset, b, order)
443 return fn(repo, subset, b)
425 444
426 445 keep = lambda fn: getattr(fn, '__doc__', None) is not None
427 446
@@ -515,7 +534,7 b' def _firstancestors(repo, subset, x):'
515 534 # Like ``ancestors(set)`` but follows only the first parents.
516 535 return _ancestors(repo, subset, x, followfirst=True)
517 536
518 def ancestorspec(repo, subset, x, n):
537 def ancestorspec(repo, subset, x, n, order):
519 538 """``set~n``
520 539 Changesets that are the Nth ancestor (first parents only) of a changeset
521 540 in set.
@@ -1001,12 +1020,21 b' def first(repo, subset, x):'
1001 1020 return limit(repo, subset, x)
1002 1021
1003 1022 def _follow(repo, subset, x, name, followfirst=False):
1004 l = getargs(x, 0, 1, _("%s takes no arguments or a pattern") % name)
1023 l = getargs(x, 0, 2, _("%s takes no arguments or a pattern "
1024 "and an optional revset") % name)
1005 1025 c = repo['.']
1006 1026 if l:
1007 1027 x = getstring(l[0], _("%s expected a pattern") % name)
1028 rev = None
1029 if len(l) >= 2:
1030 revs = getset(repo, fullreposet(repo), l[1])
1031 if len(revs) != 1:
1032 raise error.RepoLookupError(
1033 _("%s expected one starting revision") % name)
1034 rev = revs.last()
1035 c = repo[rev]
1008 1036 matcher = matchmod.match(repo.root, repo.getcwd(), [x],
1009 ctx=repo[None], default='path')
1037 ctx=repo[rev], default='path')
1010 1038
1011 1039 files = c.manifest().walk(matcher)
1012 1040
@@ -1021,20 +1049,20 b' def _follow(repo, subset, x, name, follo'
1021 1049
1022 1050 return subset & s
1023 1051
1024 @predicate('follow([pattern])', safe=True)
1052 @predicate('follow([pattern[, startrev]])', safe=True)
1025 1053 def follow(repo, subset, x):
1026 1054 """
1027 1055 An alias for ``::.`` (ancestors of the working directory's first parent).
1028 1056 If pattern is specified, the histories of files matching given
1029 pattern is followed, including copies.
1057 pattern in the revision given by startrev are followed, including copies.
1030 1058 """
1031 1059 return _follow(repo, subset, x, 'follow')
1032 1060
1033 1061 @predicate('_followfirst', safe=True)
1034 1062 def _followfirst(repo, subset, x):
1035 # ``followfirst([pattern])``
1036 # Like ``follow([pattern])`` but follows only the first parent of
1037 # every revisions or files revisions.
1063 # ``followfirst([pattern[, startrev]])``
1064 # Like ``follow([pattern[, startrev]])`` but follows only the first parent
1065 # of every revision or file revision.
1038 1066 return _follow(repo, subset, x, '_followfirst', followfirst=True)
1039 1067
1040 1068 @predicate('all()', safe=True)
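A hypothetical use of the widened predicate, assuming repo is a
localrepository: follow a file's history, including renames, starting from
how it existed at revision 1000 rather than at the working directory's
parent:

    repo.revs("follow('path:mercurial/revlog.py', 1000)")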
@@ -1519,6 +1547,9 b' def p2(repo, subset, x):'
1519 1547 # some optimisations from the fact this is a baseset.
1520 1548 return subset & ps
1521 1549
1550 def parentpost(repo, subset, x, order):
1551 return p1(repo, subset, x)
1552
1522 1553 @predicate('parents([set])', safe=True)
1523 1554 def parents(repo, subset, x):
1524 1555 """
@@ -1569,7 +1600,7 b' def secret(repo, subset, x):'
1569 1600 target = phases.secret
1570 1601 return _phase(repo, subset, target)
1571 1602
1572 def parentspec(repo, subset, x, n):
1603 def parentspec(repo, subset, x, n, order):
1573 1604 """``set^0``
1574 1605 The set.
1575 1606 ``set^1`` (or ``set^``), ``set^2``
@@ -1590,7 +1621,7 b' def parentspec(repo, subset, x, n):'
1590 1621 ps.add(cl.parentrevs(r)[0])
1591 1622 elif n == 2:
1592 1623 parents = cl.parentrevs(r)
1593 if len(parents) > 1:
1624 if parents[1] != node.nullrev:
1594 1625 ps.add(parents[1])
1595 1626 return subset & ps
1596 1627
@@ -1813,12 +1844,13 b' def matching(repo, subset, x):'
1813 1844
1814 1845 return subset.filter(matches, condrepr=('<matching%r %r>', fields, revs))
1815 1846
1816 @predicate('reverse(set)', safe=True)
1817 def reverse(repo, subset, x):
1847 @predicate('reverse(set)', safe=True, takeorder=True)
1848 def reverse(repo, subset, x, order):
1818 1849 """Reverse order of set.
1819 1850 """
1820 1851 l = getset(repo, subset, x)
1821 l.reverse()
1852 if order == defineorder:
1853 l.reverse()
1822 1854 return l
1823 1855
1824 1856 @predicate('roots(set)', safe=True)
@@ -1880,8 +1912,8 b' def _getsortargs(x):'
1880 1912
1881 1913 return args['set'], keyflags, opts
1882 1914
1883 @predicate('sort(set[, [-]key... [, ...]])', safe=True)
1884 def sort(repo, subset, x):
1915 @predicate('sort(set[, [-]key... [, ...]])', safe=True, takeorder=True)
1916 def sort(repo, subset, x, order):
1885 1917 """Sort set by keys. The default sort order is ascending, specify a key
1886 1918 as ``-key`` to sort in descending order.
1887 1919
@@ -1902,7 +1934,7 b' def sort(repo, subset, x):'
1902 1934 s, keyflags, opts = _getsortargs(x)
1903 1935 revs = getset(repo, subset, s)
1904 1936
1905 if not keyflags:
1937 if not keyflags or order != defineorder:
1906 1938 return revs
1907 1939 if len(keyflags) == 1 and keyflags[0][0] == "rev":
1908 1940 revs.sort(reverse=keyflags[0][1])
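
Both ``reverse()`` and ``sort()`` now become ordering no-ops whenever the
surrounding expression already dictates the order (``order != defineorder``).
A standalone illustration of why that is correct rather than lossy (plain
lists standing in for smartsets; the outer ``&`` keeps the left operand's
order anyway):

    def reverse_pred(revs, order):
        """reverse(set): a no-op unless this node defines the ordering."""
        if order == 'define':
            revs = list(reversed(revs))
        return revs

    subset = [1, 2, 3]                 # ordering imposed from outside
    inner = [3, 2, 1]
    # standalone 'reverse(X)' defines the order itself:
    assert reverse_pred(inner, 'define') == [1, 2, 3]
    # inside 'Y & reverse(X)' the subset's order wins during the final
    # intersection, so reversing here would only be wasted work:
    wanted = set(reverse_pred(inner, 'follow'))
    assert [r for r in subset if r in wanted] == [1, 2, 3]
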
@@ -2233,9 +2265,7 b' def wdir(repo, subset, x):'
2233 2265 return baseset([node.wdirrev])
2234 2266 return baseset()
2235 2267
2236 # for internal use
2237 @predicate('_list', safe=True)
2238 def _list(repo, subset, x):
2268 def _orderedlist(repo, subset, x):
2239 2269 s = getstring(x, "internal error")
2240 2270 if not s:
2241 2271 return baseset()
@@ -2264,8 +2294,15 b' def _list(repo, subset, x):'
2264 2294 return baseset(ls)
2265 2295
2266 2296 # for internal use
2267 @predicate('_intlist', safe=True)
2268 def _intlist(repo, subset, x):
2297 @predicate('_list', safe=True, takeorder=True)
2298 def _list(repo, subset, x, order):
2299 if order == followorder:
2300 # slow path to take the subset order
2301 return subset & _orderedlist(repo, fullreposet(repo), x)
2302 else:
2303 return _orderedlist(repo, subset, x)
2304
2305 def _orderedintlist(repo, subset, x):
2269 2306 s = getstring(x, "internal error")
2270 2307 if not s:
2271 2308 return baseset()
@@ -2274,8 +2311,15 b' def _intlist(repo, subset, x):'
2274 2311 return baseset([r for r in ls if r in s])
2275 2312
2276 2313 # for internal use
2277 @predicate('_hexlist', safe=True)
2278 def _hexlist(repo, subset, x):
2314 @predicate('_intlist', safe=True, takeorder=True)
2315 def _intlist(repo, subset, x, order):
2316 if order == followorder:
2317 # slow path to take the subset order
2318 return subset & _orderedintlist(repo, fullreposet(repo), x)
2319 else:
2320 return _orderedintlist(repo, subset, x)
2321
2322 def _orderedhexlist(repo, subset, x):
2279 2323 s = getstring(x, "internal error")
2280 2324 if not s:
2281 2325 return baseset()
@@ -2284,8 +2328,18 b' def _hexlist(repo, subset, x):'
2284 2328 s = subset
2285 2329 return baseset([r for r in ls if r in s])
2286 2330
2331 # for internal use
2332 @predicate('_hexlist', safe=True, takeorder=True)
2333 def _hexlist(repo, subset, x, order):
2334 if order == followorder:
2335 # slow path to take the subset order
2336 return subset & _orderedhexlist(repo, fullreposet(repo), x)
2337 else:
2338 return _orderedhexlist(repo, subset, x)
2339
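
All three internal list predicates follow the same slow-path pattern: under
``followorder`` the list is evaluated against the full repo and then
intersected with the subset, whose ``&`` keeps the left operand's ordering.
A toy standalone illustration (plain lists standing in for smartsets):

    def ordered_list(universe, wanted):
        """Mimic _orderedlist: wanted revs in their own (defined) order."""
        members = set(universe)
        return [r for r in wanted if r in members]

    def follow_subset(subset, wanted):
        """Mimic 'subset & baseset(...)': the subset's ordering wins."""
        keep = set(wanted)
        return [r for r in subset if r in keep]

    subset = [9, 7, 5, 3]              # caller-defined order
    args = [3, 9]                      # _list() arguments, their own order
    assert ordered_list(subset, args) == [3, 9]    # defineorder
    assert follow_subset(subset, args) == [9, 3]   # followorder slow path
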
2287 2340 methods = {
2288 2341 "range": rangeset,
2342 "rangepre": rangepre,
2289 2343 "dagrange": dagrange,
2290 2344 "string": stringset,
2291 2345 "symbol": stringset,
@@ -2298,7 +2352,51 b' methods = {'
2298 2352 "func": func,
2299 2353 "ancestor": ancestorspec,
2300 2354 "parent": parentspec,
2301 "parentpost": p1,
2355 "parentpost": parentpost,
2356 }
2357
2358 # Constants for ordering requirement, used in _analyze():
2359 #
2360 # If 'define', any nested functions and operations can change the ordering of
2361 # the entries in the set. If 'follow', any nested functions and operations
2362 # should take the ordering specified by the first operand to the '&' operator.
2363 #
2364 # For instance,
2365 #
2366 # X & (Y | Z)
2367 # ^ ^^^^^^^
2368 # | follow
2369 # define
2370 #
2371 # will be evaluated as 'or(y(x()), z(x()))', where 'x()' can change the order
2372 # of the entries in the set, but 'y()', 'z()' and 'or()' shouldn't.
2373 #
2374 # 'any' means the order doesn't matter. For instance,
2375 #
2376 # X & !Y
2377 # ^
2378 # any
2379 #
2380 # 'y()' can either enforce its ordering requirement or take the ordering
2381 # specified by 'x()' because 'not()' doesn't care the order.
2382 #
2383 # Transition of ordering requirement:
2384 #
2385 # 1. starts with 'define'
2386 # 2. shifts to 'follow' by 'x & y'
2387 # 3. changes back to 'define' on function call 'f(x)' or function-like
2388 # operation 'x (f) y' because 'f' may have its own ordering requirement
2389 # for 'x' and 'y' (e.g. 'first(x)')
2390 #
2391 anyorder = 'any' # don't care about the order
2392 defineorder = 'define' # should define the order
2393 followorder = 'follow' # must follow the current order
2394
2395 # transition table for 'x & y', from the current expression 'x' to 'y'
2396 _tofolloworder = {
2397 anyorder: anyorder,
2398 defineorder: followorder,
2399 followorder: followorder,
2302 2400 }
2303 2401
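
To make the transition table concrete, here is a toy annotator in the spirit
of ``_analyze()`` applied to the ``X & (Y | Z)`` example from the comment
above (standalone tuples, not Mercurial's real parse tree):

    DEFINE, FOLLOW, ANY = 'define', 'follow', 'any'
    TOFOLLOW = {ANY: ANY, DEFINE: FOLLOW, FOLLOW: FOLLOW}

    def annotate(x, order=DEFINE):
        """Attach the effective ordering requirement to each node."""
        op = x[0]
        if op == 'symbol':
            return x + (order,)
        if op == 'and':
            # the left operand defines the order; the right must follow it
            return (op, annotate(x[1], order),
                    annotate(x[2], TOFOLLOW[order]), order)
        if op == 'or':
            return (op,) + tuple(annotate(y, order) for y in x[1:]) + (order,)
        if op == 'not':
            # negation doesn't care about order, per the table above
            return (op, annotate(x[1], ANY), order)
        raise ValueError(op)

    tree = ('and', ('symbol', 'X'),
            ('or', ('symbol', 'Y'), ('symbol', 'Z')))
    annotated = annotate(tree)
    assert annotated[1] == ('symbol', 'X', 'define')     # X defines
    assert annotated[2][1] == ('symbol', 'Y', 'follow')  # Y must follow X
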
2304 2402 def _matchonly(revs, bases):
@@ -2316,6 +2414,97 b' def _matchonly(revs, bases):'
2316 2414 and getsymbol(bases[1][1]) == 'ancestors'):
2317 2415 return ('list', revs[2], bases[1][2])
2318 2416
2417 def _fixops(x):
2418 """Rewrite raw parsed tree to resolve ambiguous syntax which cannot be
2419 handled well by our simple top-down parser"""
2420 if not isinstance(x, tuple):
2421 return x
2422
2423 op = x[0]
2424 if op == 'parent':
2425 # x^:y means (x^) : y, not x ^ (:y)
2426 # x^: means (x^) :, not x ^ (:)
2427 post = ('parentpost', x[1])
2428 if x[2][0] == 'dagrangepre':
2429 return _fixops(('dagrange', post, x[2][1]))
2430 elif x[2][0] == 'rangepre':
2431 return _fixops(('range', post, x[2][1]))
2432 elif x[2][0] == 'rangeall':
2433 return _fixops(('rangepost', post))
2434 elif op == 'or':
2435 # make number of arguments deterministic:
2436 # x + y + z -> (or x y z) -> (or (list x y z))
2437 return (op, _fixops(('list',) + x[1:]))
2438
2439 return (op,) + tuple(_fixops(y) for y in x[1:])
2440
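
A standalone miniature of the ``_fixops`` rewrite for the ``rangepre`` case
only (simplified tuples; the real function also handles ``dagrangepre``,
``rangeall``, and ``or``-flattening):

    def fixops(x):
        """Re-associate 'x^:y': the parser yields x ^ (:y), meaning (x^) : y."""
        if not isinstance(x, tuple):
            return x
        if x[0] == 'parent' and x[2][0] == 'rangepre':
            return fixops(('range', ('parentpost', x[1]), x[2][1]))
        return (x[0],) + tuple(fixops(y) for y in x[1:])

    # the top-down parser binds ':' tighter than intended ...
    raw = ('parent', ('symbol', 'x'), ('rangepre', ('symbol', 'y')))
    # ... so rewrite it before analysis:
    assert fixops(raw) == ('range', ('parentpost', ('symbol', 'x')),
                           ('symbol', 'y'))
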
2441 def _analyze(x, order):
2442 if x is None:
2443 return x
2444
2445 op = x[0]
2446 if op == 'minus':
2447 return _analyze(('and', x[1], ('not', x[2])), order)
2448 elif op == 'only':
2449 t = ('func', ('symbol', 'only'), ('list', x[1], x[2]))
2450 return _analyze(t, order)
2451 elif op == 'onlypost':
2452 return _analyze(('func', ('symbol', 'only'), x[1]), order)
2453 elif op == 'dagrangepre':
2454 return _analyze(('func', ('symbol', 'ancestors'), x[1]), order)
2455 elif op == 'dagrangepost':
2456 return _analyze(('func', ('symbol', 'descendants'), x[1]), order)
2457 elif op == 'rangeall':
2458 return _analyze(('rangepre', ('string', 'tip')), order)
2459 elif op == 'rangepost':
2460 return _analyze(('range', x[1], ('string', 'tip')), order)
2461 elif op == 'negate':
2462 s = getstring(x[1], _("can't negate that"))
2463 return _analyze(('string', '-' + s), order)
2464 elif op in ('string', 'symbol'):
2465 return x
2466 elif op == 'and':
2467 ta = _analyze(x[1], order)
2468 tb = _analyze(x[2], _tofolloworder[order])
2469 return (op, ta, tb, order)
2470 elif op == 'or':
2471 return (op, _analyze(x[1], order), order)
2472 elif op == 'not':
2473 return (op, _analyze(x[1], anyorder), order)
2474 elif op in ('rangepre', 'parentpost'):
2475 return (op, _analyze(x[1], defineorder), order)
2476 elif op == 'group':
2477 return _analyze(x[1], order)
2478 elif op in ('dagrange', 'range', 'parent', 'ancestor'):
2479 ta = _analyze(x[1], defineorder)
2480 tb = _analyze(x[2], defineorder)
2481 return (op, ta, tb, order)
2482 elif op == 'list':
2483 return (op,) + tuple(_analyze(y, order) for y in x[1:])
2484 elif op == 'keyvalue':
2485 return (op, x[1], _analyze(x[2], order))
2486 elif op == 'func':
2487 f = getsymbol(x[1])
2488 d = defineorder
2489 if f == 'present':
2490 # 'present(set)' is known to return the argument set with no
2491 # modification, so forward the current order to its argument
2492 d = order
2493 return (op, x[1], _analyze(x[2], d), order)
2494 raise ValueError('invalid operator %r' % op)
2495
2496 def analyze(x, order=defineorder):
2497 """Transform raw parsed tree to evaluatable tree which can be fed to
2498 optimize() or getset()
2499
2500 All pseudo operations should be mapped to real operations or functions
2501 defined in methods or symbols table respectively.
2502
2503 'order' specifies how the current expression 'x' is ordered (see the
2504 constants defined above.)
2505 """
2506 return _analyze(x, order)
2507
2319 2508 def _optimize(x, small):
2320 2509 if x is None:
2321 2510 return 0, x
@@ -2325,47 +2514,29 b' def _optimize(x, small):'
2325 2514 smallbonus = .5
2326 2515
2327 2516 op = x[0]
2328 if op == 'minus':
2329 return _optimize(('and', x[1], ('not', x[2])), small)
2330 elif op == 'only':
2331 t = ('func', ('symbol', 'only'), ('list', x[1], x[2]))
2332 return _optimize(t, small)
2333 elif op == 'onlypost':
2334 return _optimize(('func', ('symbol', 'only'), x[1]), small)
2335 elif op == 'dagrangepre':
2336 return _optimize(('func', ('symbol', 'ancestors'), x[1]), small)
2337 elif op == 'dagrangepost':
2338 return _optimize(('func', ('symbol', 'descendants'), x[1]), small)
2339 elif op == 'rangeall':
2340 return _optimize(('range', ('string', '0'), ('string', 'tip')), small)
2341 elif op == 'rangepre':
2342 return _optimize(('range', ('string', '0'), x[1]), small)
2343 elif op == 'rangepost':
2344 return _optimize(('range', x[1], ('string', 'tip')), small)
2345 elif op == 'negate':
2346 s = getstring(x[1], _("can't negate that"))
2347 return _optimize(('string', '-' + s), small)
2348 elif op in 'string symbol negate':
2517 if op in ('string', 'symbol'):
2349 2518 return smallbonus, x # single revisions are small
2350 2519 elif op == 'and':
2351 2520 wa, ta = _optimize(x[1], True)
2352 2521 wb, tb = _optimize(x[2], True)
2522 order = x[3]
2353 2523 w = min(wa, wb)
2354 2524
2355 2525 # (::x and not ::y)/(not ::y and ::x) have a fast path
2356 2526 tm = _matchonly(ta, tb) or _matchonly(tb, ta)
2357 2527 if tm:
2358 return w, ('func', ('symbol', 'only'), tm)
2528 return w, ('func', ('symbol', 'only'), tm, order)
2359 2529
2360 2530 if tb is not None and tb[0] == 'not':
2361 return wa, ('difference', ta, tb[1])
2531 return wa, ('difference', ta, tb[1], order)
2362 2532
2363 2533 if wa > wb:
2364 return w, (op, tb, ta)
2365 return w, (op, ta, tb)
2534 return w, (op, tb, ta, order)
2535 return w, (op, ta, tb, order)
2366 2536 elif op == 'or':
2367 2537 # fast path for machine-generated expressions that are likely to have
2368 2538 # lots of trivial revisions: 'a + b + c()' to '_list(a b) + c()'
2539 order = x[2]
2369 2540 ws, ts, ss = [], [], []
2370 2541 def flushss():
2371 2542 if not ss:
@@ -2374,12 +2545,12 b' def _optimize(x, small):'
2374 2545 w, t = ss[0]
2375 2546 else:
2376 2547 s = '\0'.join(t[1] for w, t in ss)
2377 y = ('func', ('symbol', '_list'), ('string', s))
2548 y = ('func', ('symbol', '_list'), ('string', s), order)
2378 2549 w, t = _optimize(y, False)
2379 2550 ws.append(w)
2380 2551 ts.append(t)
2381 2552 del ss[:]
2382 for y in x[1:]:
2553 for y in getlist(x[1]):
2383 2554 w, t = _optimize(y, False)
2384 2555 if t is not None and (t[0] == 'string' or t[0] == 'symbol'):
2385 2556 ss.append((w, t))
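
The fast path collapses runs of trivial ``or`` operands into a single
``_list()`` call, so 'a + b + c()' costs one batched lookup instead of two.
A toy standalone rendering of that batching (plain strings stand in for
parsed ``('symbol', ...)`` nodes):

    def batch_trivial(operands):
        """Batch adjacent trivial operands of 'or' into one _list() call."""
        out, run = [], []
        def flush():
            if len(run) == 1:
                out.append(run[0])
            elif run:
                out.append(('func', '_list', '\0'.join(run)))
            del run[:]
        for y in operands:
            if isinstance(y, str):      # trivial: a bare symbol/string
                run.append(y)
            else:
                flush()
                out.append(y)
        flush()
        return out

    # 'a + b + c()' becomes '_list(a b) + c()':
    ops = ['a', 'b', ('func', 'c', None)]
    assert batch_trivial(ops) == [('func', '_list', 'a\0b'),
                                  ('func', 'c', None)]
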
@@ -2393,33 +2564,27 b' def _optimize(x, small):'
2393 2564 # we can't reorder trees by weight because it would change the order.
2394 2565 # ("sort(a + b)" == "sort(b + a)", but "a + b" != "b + a")
2395 2566 # ts = tuple(t for w, t in sorted(zip(ws, ts), key=lambda wt: wt[0]))
2396 return max(ws), (op,) + tuple(ts)
2567 return max(ws), (op, ('list',) + tuple(ts), order)
2397 2568 elif op == 'not':
2398 2569 # Optimize not public() to _notpublic() because we have a fast version
2399 if x[1] == ('func', ('symbol', 'public'), None):
2400 newsym = ('func', ('symbol', '_notpublic'), None)
2570 if x[1][:3] == ('func', ('symbol', 'public'), None):
2571 order = x[1][3]
2572 newsym = ('func', ('symbol', '_notpublic'), None, order)
2401 2573 o = _optimize(newsym, not small)
2402 2574 return o[0], o[1]
2403 2575 else:
2404 2576 o = _optimize(x[1], not small)
2405 return o[0], (op, o[1])
2406 elif op == 'parentpost':
2577 order = x[2]
2578 return o[0], (op, o[1], order)
2579 elif op in ('rangepre', 'parentpost'):
2407 2580 o = _optimize(x[1], small)
2408 return o[0], (op, o[1])
2409 elif op == 'group':
2410 return _optimize(x[1], small)
2411 elif op in 'dagrange range parent ancestorspec':
2412 if op == 'parent':
2413 # x^:y means (x^) : y, not x ^ (:y)
2414 post = ('parentpost', x[1])
2415 if x[2][0] == 'dagrangepre':
2416 return _optimize(('dagrange', post, x[2][1]), small)
2417 elif x[2][0] == 'rangepre':
2418 return _optimize(('range', post, x[2][1]), small)
2419
2581 order = x[2]
2582 return o[0], (op, o[1], order)
2583 elif op in ('dagrange', 'range', 'parent', 'ancestor'):
2420 2584 wa, ta = _optimize(x[1], small)
2421 2585 wb, tb = _optimize(x[2], small)
2422 return wa + wb, (op, ta, tb)
2586 order = x[3]
2587 return wa + wb, (op, ta, tb, order)
2423 2588 elif op == 'list':
2424 2589 ws, ts = zip(*(_optimize(y, small) for y in x[1:]))
2425 2590 return sum(ws), (op,) + ts
@@ -2429,32 +2594,36 b' def _optimize(x, small):'
2429 2594 elif op == 'func':
2430 2595 f = getsymbol(x[1])
2431 2596 wa, ta = _optimize(x[2], small)
2432 if f in ("author branch closed date desc file grep keyword "
2433 "outgoing user"):
2597 if f in ('author', 'branch', 'closed', 'date', 'desc', 'file', 'grep',
2598 'keyword', 'outgoing', 'user', 'destination'):
2434 2599 w = 10 # slow
2435 elif f in "modifies adds removes":
2600 elif f in ('modifies', 'adds', 'removes'):
2436 2601 w = 30 # slower
2437 2602 elif f == "contains":
2438 2603 w = 100 # very slow
2439 2604 elif f == "ancestor":
2440 2605 w = 1 * smallbonus
2441 elif f in "reverse limit first _intlist":
2606 elif f in ('reverse', 'limit', 'first', '_intlist'):
2442 2607 w = 0
2443 elif f in "sort":
2608 elif f == "sort":
2444 2609 w = 10 # assume most sorts look at changelog
2445 2610 else:
2446 2611 w = 1
2447 return w + wa, (op, x[1], ta)
2448 return 1, x
2612 order = x[3]
2613 return w + wa, (op, x[1], ta, order)
2614 raise ValueError('invalid operator %r' % op)
2449 2615
2450 2616 def optimize(tree):
2617 """Optimize evaluatable tree
2618
2619 All pseudo operations should be transformed beforehand.
2620 """
2451 2621 _weight, newtree = _optimize(tree, small=True)
2452 2622 return newtree
2453 2623
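
For reference, the weights assigned above drive operand swapping in ``and``:
the cheaper side is evaluated first so the expensive predicate only filters
an already-reduced set. A toy version of that decision (weights copied from
this hunk's table; the tuple shapes are illustrative):

    # Rough weights from the table above (illustrative subset):
    WEIGHTS = {'contains': 100, 'modifies': 30, 'author': 10, 'symbol': 0.5}

    def order_and(a, b):
        """Put the cheaper operand of 'a and b' first, as _optimize does."""
        wa, wb = WEIGHTS[a[0]], WEIGHTS[b[0]]
        return (b, a) if wa > wb else (a, b)

    first, second = order_and(('contains', 'needle'), ('symbol', 'stable'))
    # the cheap symbol lookup runs first; contains() scans the remainder
    assert first == ('symbol', 'stable')
    assert second == ('contains', 'needle')
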
2454 2624 # the set of valid characters for the initial letter of symbols in
2455 2625 # alias declarations and definitions
2456 _aliassyminitletters = set(c for c in [chr(i) for i in xrange(256)]
2457 if c.isalnum() or c in '._@$' or ord(c) > 127)
2626 _aliassyminitletters = _syminitletters | set(pycompat.sysstr('$'))
2458 2627
2459 2628 def _parsewith(spec, lookup=None, syminitletters=None):
2460 2629 """Generate a parse tree of given spec with given tokenizing options
@@ -2475,7 +2644,7 b' def _parsewith(spec, lookup=None, symini'
2475 2644 syminitletters=syminitletters))
2476 2645 if pos != len(spec):
2477 2646 raise error.ParseError(_('invalid token'), pos)
2478 return parser.simplifyinfixops(tree, ('list', 'or'))
2647 return _fixops(parser.simplifyinfixops(tree, ('list', 'or')))
2479 2648
2480 2649 class _aliasrules(parser.basealiasrules):
2481 2650 """Parsing and expansion rule set of revset aliases"""
@@ -2496,15 +2665,14 b' class _aliasrules(parser.basealiasrules)'
2496 2665 if tree[0] == 'func' and tree[1][0] == 'symbol':
2497 2666 return tree[1][1], getlist(tree[2])
2498 2667
2499 def expandaliases(ui, tree, showwarning=None):
2668 def expandaliases(ui, tree):
2500 2669 aliases = _aliasrules.buildmap(ui.configitems('revsetalias'))
2501 2670 tree = _aliasrules.expand(aliases, tree)
2502 if showwarning:
2503 # warn about problematic (but not referred) aliases
2504 for name, alias in sorted(aliases.iteritems()):
2505 if alias.error and not alias.warned:
2506 showwarning(_('warning: %s\n') % (alias.error))
2507 alias.warned = True
2671 # warn about problematic (but not referred) aliases
2672 for name, alias in sorted(aliases.iteritems()):
2673 if alias.error and not alias.warned:
2674 ui.warn(_('warning: %s\n') % (alias.error))
2675 alias.warned = True
2508 2676 return tree
2509 2677
2510 2678 def foldconcat(tree):
@@ -2535,13 +2703,21 b' def posttreebuilthook(tree, repo):'
2535 2703 # hook for extensions to execute code on the optimized tree
2536 2704 pass
2537 2705
2538 def match(ui, spec, repo=None):
2539 """Create a matcher for a single revision spec."""
2540 return matchany(ui, [spec], repo=repo)
2541
2542 def matchany(ui, specs, repo=None):
2706 def match(ui, spec, repo=None, order=defineorder):
2707 """Create a matcher for a single revision spec
2708
2709 If order=followorder, a matcher takes the ordering specified by the input
2710 set.
2711 """
2712 return matchany(ui, [spec], repo=repo, order=order)
2713
2714 def matchany(ui, specs, repo=None, order=defineorder):
2543 2715 """Create a matcher that will include any revisions matching one of the
2544 given specs"""
2716 given specs
2717
2718 If order=followorder, a matcher takes the ordering specified by the input
2719 set.
2720 """
2545 2721 if not specs:
2546 2722 def mfunc(repo, subset=None):
2547 2723 return baseset()
@@ -2554,15 +2730,18 b' def matchany(ui, specs, repo=None):'
2554 2730 if len(specs) == 1:
2555 2731 tree = parse(specs[0], lookup)
2556 2732 else:
2557 tree = ('or',) + tuple(parse(s, lookup) for s in specs)
2558 return _makematcher(ui, tree, repo)
2559
2560 def _makematcher(ui, tree, repo):
2733 tree = ('or', ('list',) + tuple(parse(s, lookup) for s in specs))
2734
2561 2735 if ui:
2562 tree = expandaliases(ui, tree, showwarning=ui.warn)
2736 tree = expandaliases(ui, tree)
2563 2737 tree = foldconcat(tree)
2738 tree = analyze(tree, order)
2564 2739 tree = optimize(tree)
2565 2740 posttreebuilthook(tree, repo)
2741 return makematcher(tree)
2742
2743 def makematcher(tree):
2744 """Create a matcher from an evaluatable tree"""
2566 2745 def mfunc(repo, subset=None):
2567 2746 if subset is None:
2568 2747 subset = fullreposet(repo)
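
Taken together, these hunks split the old parse-then-optimize path into
parse, ``_fixops``, alias expansion, ``analyze``, ``optimize``, and
``makematcher``. A runnable schematic with stub stages (the stage names
mirror this diff; every body is a stand-in, not the real implementation):

    # Runnable schematic of the reshaped pipeline; each stage is a stub
    # standing in for the same-named function in this diff.
    def parse(spec):            return ('symbol', spec)
    def _fixops(tree):          return tree                 # fix 'x^:y' ambiguity
    def expandaliases(tree):    return tree                 # [revsetalias] expansion
    def foldconcat(tree):       return tree
    def analyze(tree, order):   return tree + (order,)      # map pseudo ops, annotate order
    def optimize(tree):         return tree                 # weight-based reordering
    def makematcher(tree):      return lambda: tree         # evaluatable closure

    def build_matcher(spec, order='define'):
        tree = parse(spec)
        tree = _fixops(tree)
        tree = expandaliases(tree)
        tree = foldconcat(tree)
        tree = analyze(tree, order)
        tree = optimize(tree)
        return makematcher(tree)

    assert build_matcher('tip')() == ('symbol', 'tip', 'define')
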