# bugzilla.py - bugzilla integration for mercurial
#
# Copyright 2006 Vadim Gelfer <vadim.gelfer@gmail.com>
# Copyright 2011-4 Jim Hague <jim.hague@acm.org>
#
# This software may be used and distributed according to the terms of the
# GNU General Public License version 2 or any later version.

'''hooks for integrating with the Bugzilla bug tracker

This hook extension adds comments on bugs in Bugzilla when changesets
that refer to bugs by Bugzilla ID are seen. The comment is formatted using
the Mercurial template mechanism.

The bug references can optionally include an update for Bugzilla of the
hours spent working on the bug. Bugs can also be marked fixed.

Four basic modes of access to Bugzilla are provided:

1. Access via the Bugzilla REST-API. Requires Bugzilla 5.0 or later.

2. Access via the Bugzilla XMLRPC interface. Requires Bugzilla 3.4 or later.

3. Check data via the Bugzilla XMLRPC interface and submit bug change
   via email to the Bugzilla email interface. Requires Bugzilla 3.4 or later.

4. Writing directly to the Bugzilla database. Only Bugzilla installations
   using MySQL are supported. Requires Python MySQLdb.

Writing directly to the database is susceptible to schema changes, and
relies on a Bugzilla contrib script to send out bug change
notification emails. This script runs as the user running Mercurial,
must be run on the host with the Bugzilla install, and requires
permission to read Bugzilla configuration details and the necessary
MySQL user and password to have full access rights to the Bugzilla
database. For these reasons this access mode is now considered
deprecated, and will not be updated for new Bugzilla versions going
forward. Only adding comments is supported in this access mode.

Access via XMLRPC needs a Bugzilla username and password to be specified
in the configuration. Comments are added under that username. Since the
configuration must be readable by all Mercurial users, it is recommended
that the rights of that user are restricted in Bugzilla to the minimum
necessary to add comments. Marking bugs fixed requires Bugzilla 4.0 or later.

Access via XMLRPC/email uses XMLRPC to query Bugzilla, but sends
email to the Bugzilla email interface to submit comments to bugs.
The From: address in the email is set to the email address of the Mercurial
user, so the comment appears to come from the Mercurial user. In the event
that the Mercurial user email is not recognized by Bugzilla as a Bugzilla
user, the email associated with the Bugzilla username used to log into
Bugzilla is used instead as the source of the comment. Marking bugs fixed
works on all supported Bugzilla versions.

Access via the REST-API needs either a Bugzilla username and password
or an apikey specified in the configuration. Comments are made under
the given username or the user associated with the apikey in Bugzilla.

Configuration items common to all access modes:

bugzilla.version
  The access type to use. Values recognized are:

  :``restapi``:      Bugzilla REST-API, Bugzilla 5.0 and later.
  :``xmlrpc``:       Bugzilla XMLRPC interface.
  :``xmlrpc+email``: Bugzilla XMLRPC and email interfaces.
  :``3.0``:          MySQL access, Bugzilla 3.0 and later.
  :``2.18``:         MySQL access, Bugzilla 2.18 and up to but not
                     including 3.0.
  :``2.16``:         MySQL access, Bugzilla 2.16 and up to but not
                     including 2.18.

bugzilla.regexp
  Regular expression to match bug IDs for update in changeset commit message.
  It must contain one "()" named group ``<ids>`` containing the bug
  IDs separated by non-digit characters. It may also contain
  a named group ``<hours>`` with a floating-point number giving the
  hours worked on the bug. If no named groups are present, the first
  "()" group is assumed to contain the bug IDs, and work time is not
  updated. The default expression matches ``Bug 1234``, ``Bug no. 1234``,
  ``Bug number 1234``, ``Bugs 1234,5678``, ``Bug 1234 and 5678`` and
  variations thereof, followed by an hours number prefixed by ``h`` or
  ``hours``, e.g. ``hours 1.5``. Matching is case insensitive.
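
  For illustration, the default pattern can be exercised directly with
  Python's ``re`` module (a sketch only: the surrounding code is not the
  hook's own, and the ID splitting shown is merely one way to consume
  the ``<ids>`` group)::

    import re

    bug_re = re.compile(
        br'bugs?\\s*,?\\s*(?:#|nos?\\.?|num(?:ber)?s?)?\\s*'
        br'(?P<ids>(?:\\d+\\s*(?:,?\\s*(?:and)?)?\\s*)+)'
        br'\\.?\\s*(?:h(?:ours?)?\\s*(?P<hours>\\d*(?:\\.\\d+)?))?',
        re.IGNORECASE,
    )
    m = bug_re.search(b'Bugs 1234,5678 hours 1.5')
    ids = [int(i) for i in re.split(br'\\D+', m.group('ids')) if i]
    # ids == [1234, 5678]; m.group('hours') == b'1.5'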

bugzilla.fixregexp
  Regular expression to match bug IDs for marking fixed in changeset
  commit message. This must contain a "()" named group ``<ids>`` containing
  the bug IDs separated by non-digit characters. It may also contain
  a named group ``<hours>`` with a floating-point number giving the
  hours worked on the bug. If no named groups are present, the first
  "()" group is assumed to contain the bug IDs, and work time is not
  updated. The default expression matches ``Fixes 1234``, ``Fixes bug 1234``,
  ``Fixes bugs 1234,5678``, ``Fixes 1234 and 5678`` and
  variations thereof, followed by an hours number prefixed by ``h`` or
  ``hours``, e.g. ``hours 1.5``. Matching is case insensitive.

bugzilla.fixstatus
  The status to set a bug to when marking fixed. Default ``RESOLVED``.

bugzilla.fixresolution
  The resolution to set a bug to when marking fixed. Default ``FIXED``.

bugzilla.style
  The style file to use when formatting comments.

bugzilla.template
  Template to use when formatting comments. Overrides style if
  specified. In addition to the usual Mercurial keywords, the
  extension specifies:

  :``{bug}``:     The Bugzilla bug ID.
  :``{root}``:    The full pathname of the Mercurial repository.
  :``{webroot}``: Stripped pathname of the Mercurial repository.
  :``{hgweb}``:   Base URL for browsing Mercurial repositories.

  Default ``changeset {node|short} in repo {root} refers to bug
  {bug}.\\ndetails:\\n\\t{desc|tabindent}``

bugzilla.strip
  The number of path separator characters to strip from the front of
  the Mercurial repository path (``{root}`` in templates) to produce
  ``{webroot}``. For example, a repository with ``{root}``
  ``/var/local/my-project`` with a strip of 2 gives a value for
  ``{webroot}`` of ``my-project``. Default 0.
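
  A sketch of that documented behaviour, assuming plain ``/`` splitting
  (illustrative only; the hook uses its own path helper)::

    root = b'/var/local/my-project'
    strip = 2
    webroot = b'/'.join(root.split(b'/')[strip + 1:])  # b'my-project'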

web.baseurl
  Base URL for browsing Mercurial repositories. Referenced from
  templates as ``{hgweb}``.

Configuration items common to XMLRPC+email and MySQL access modes:

bugzilla.usermap
  Path of file containing Mercurial committer email to Bugzilla user email
  mappings. If specified, the file should contain one mapping per
  line::

    committer = Bugzilla user

  See also the ``[usermap]`` section.

The ``[usermap]`` section is used to specify mappings of Mercurial
committer email to Bugzilla user email. See also ``bugzilla.usermap``.
Contains entries of the form ``committer = Bugzilla user``.

XMLRPC and REST-API access mode configuration:

bugzilla.bzurl
  The base URL for the Bugzilla installation.
  Default ``http://localhost/bugzilla``.

bugzilla.user
  The username to use to log into Bugzilla via XMLRPC. Default
  ``bugs``.

bugzilla.password
  The password for Bugzilla login.

REST-API access mode uses the options listed above as well as:

bugzilla.apikey
  An apikey generated on the Bugzilla instance for API access.
  Using an apikey removes the need to store the user and password
  options.

XMLRPC+email access mode uses the XMLRPC access mode configuration items,
and also:

bugzilla.bzemail
  The Bugzilla email address.

In addition, the Mercurial email settings must be configured. See the
documentation in hgrc(5), sections ``[email]`` and ``[smtp]``.
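
For instance, a minimal mail setup might look like this (an illustrative
sketch; the sender address and relay host are placeholders)::

    [email]
    from = bugmail@my-project.org

    [smtp]
    host = localhost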

MySQL access mode configuration:

bugzilla.host
  Hostname of the MySQL server holding the Bugzilla database.
  Default ``localhost``.

bugzilla.db
  Name of the Bugzilla database in MySQL. Default ``bugs``.

bugzilla.user
  Username to use to access MySQL server. Default ``bugs``.

bugzilla.password
  Password to use to access MySQL server.

bugzilla.timeout
  Database connection timeout (seconds). Default 5.

bugzilla.bzuser
  Fallback Bugzilla user name to record comments with, if changeset
  committer cannot be found as a Bugzilla user.

bugzilla.bzdir
  Bugzilla install directory. Used by default notify. Default
  ``/var/www/html/bugzilla``.

bugzilla.notify
  The command to run to get Bugzilla to send bug change notification
  emails. Substitutes from a map with 3 keys, ``bzdir``, ``id`` (bug
  id) and ``user`` (committer bugzilla email). Default depends on
  version; from 2.18 it is "cd %(bzdir)s && perl -T
  contrib/sendbugmail.pl %(id)s %(user)s".

Activating the extension::

    [extensions]
    bugzilla =

    [hooks]
    # run bugzilla hook on every change pulled or pushed in here
    incoming.bugzilla = python:hgext.bugzilla.hook

Example configurations:

XMLRPC example configuration. This uses the Bugzilla at
``http://my-project.org/bugzilla``, logging in as user
``bugmail@my-project.org`` with password ``plugh``. It is used with a
collection of Mercurial repositories in ``/var/local/hg/repos/``,
with a web interface at ``http://my-project.org/hg``. ::

    [bugzilla]
    bzurl=http://my-project.org/bugzilla
    user=bugmail@my-project.org
    password=plugh
    version=xmlrpc
    template=Changeset {node|short} in {root|basename}.
             {hgweb}/{webroot}/rev/{node|short}\\n
             {desc}\\n
    strip=5

    [web]
    baseurl=http://my-project.org/hg

XMLRPC+email example configuration. This uses the Bugzilla at
``http://my-project.org/bugzilla``, logging in as user
``bugmail@my-project.org`` with password ``plugh``. It is used with a
collection of Mercurial repositories in ``/var/local/hg/repos/``,
with a web interface at ``http://my-project.org/hg``. Bug comments
are sent to the Bugzilla email address
``bugzilla@my-project.org``. ::

    [bugzilla]
    bzurl=http://my-project.org/bugzilla
    user=bugmail@my-project.org
    password=plugh
    version=xmlrpc+email
    bzemail=bugzilla@my-project.org
    template=Changeset {node|short} in {root|basename}.
             {hgweb}/{webroot}/rev/{node|short}\\n
             {desc}\\n
    strip=5

    [web]
    baseurl=http://my-project.org/hg

    [usermap]
    user@emaildomain.com=user.name@bugzilladomain.com

MySQL example configuration. This has a local Bugzilla 3.2 installation
in ``/opt/bugzilla-3.2``. The MySQL database is on ``localhost``,
the Bugzilla database name is ``bugs`` and MySQL is
accessed with MySQL username ``bugs`` password ``XYZZY``. It is used
with a collection of Mercurial repositories in ``/var/local/hg/repos/``,
with a web interface at ``http://my-project.org/hg``. ::

    [bugzilla]
    host=localhost
    password=XYZZY
    version=3.0
    bzuser=unknown@domain.com
    bzdir=/opt/bugzilla-3.2
    template=Changeset {node|short} in {root|basename}.
             {hgweb}/{webroot}/rev/{node|short}\\n
             {desc}\\n
    strip=5

    [web]
    baseurl=http://my-project.org/hg

    [usermap]
    user@emaildomain.com=user.name@bugzilladomain.com

All the above add a comment to the Bugzilla bug record of the form::

    Changeset 3b16791d6642 in repository-name.
    http://my-project.org/hg/repository-name/rev/3b16791d6642

    Changeset commit comment. Bug 1234.
'''

from __future__ import absolute_import

import json
import re
import time

from mercurial.i18n import _
from mercurial.node import short
from mercurial import (
    error,
    logcmdutil,
    mail,
    pycompat,
    registrar,
    url,
    util,
)
from mercurial.utils import (
    procutil,
    stringutil,
)

xmlrpclib = util.xmlrpclib

# Note for extension authors: ONLY specify testedwith = 'ships-with-hg-core' for
# extensions which SHIP WITH MERCURIAL. Non-mainline extensions should
# be specifying the version(s) of Mercurial they are tested with, or
# leave the attribute unspecified.
testedwith = b'ships-with-hg-core'

configtable = {}
configitem = registrar.configitem(configtable)

configitem(
    b'bugzilla', b'apikey', default=b'',
)
configitem(
    b'bugzilla', b'bzdir', default=b'/var/www/html/bugzilla',
)
configitem(
    b'bugzilla', b'bzemail', default=None,
)
configitem(
    b'bugzilla', b'bzurl', default=b'http://localhost/bugzilla/',
)
configitem(
    b'bugzilla', b'bzuser', default=None,
)
configitem(
    b'bugzilla', b'db', default=b'bugs',
)
configitem(
    b'bugzilla',
    b'fixregexp',
    default=(
        br'fix(?:es)?\s*(?:bugs?\s*)?,?\s*'
        br'(?:nos?\.?|num(?:ber)?s?)?\s*'
        br'(?P<ids>(?:#?\d+\s*(?:,?\s*(?:and)?)?\s*)+)'
        br'\.?\s*(?:h(?:ours?)?\s*(?P<hours>\d*(?:\.\d+)?))?'
    ),
)
configitem(
    b'bugzilla', b'fixresolution', default=b'FIXED',
)
configitem(
    b'bugzilla', b'fixstatus', default=b'RESOLVED',
)
configitem(
    b'bugzilla', b'host', default=b'localhost',
)
configitem(
    b'bugzilla', b'notify', default=configitem.dynamicdefault,
)
configitem(
    b'bugzilla', b'password', default=None,
)
configitem(
    b'bugzilla',
    b'regexp',
    default=(
        br'bugs?\s*,?\s*(?:#|nos?\.?|num(?:ber)?s?)?\s*'
        br'(?P<ids>(?:\d+\s*(?:,?\s*(?:and)?)?\s*)+)'
        br'\.?\s*(?:h(?:ours?)?\s*(?P<hours>\d*(?:\.\d+)?))?'
    ),
)
configitem(
    b'bugzilla', b'strip', default=0,
)
configitem(
    b'bugzilla', b'style', default=None,
)
configitem(
    b'bugzilla', b'template', default=None,
)
configitem(
    b'bugzilla', b'timeout', default=5,
)
configitem(
    b'bugzilla', b'user', default=b'bugs',
)
configitem(
    b'bugzilla', b'usermap', default=None,
)
configitem(
    b'bugzilla', b'version', default=None,
)


class bzaccess(object):
    '''Base class for access to Bugzilla.'''

    def __init__(self, ui):
        self.ui = ui
        usermap = self.ui.config(b'bugzilla', b'usermap')
        if usermap:
            self.ui.readconfig(usermap, sections=[b'usermap'])

    def map_committer(self, user):
        '''map name of committer to Bugzilla user name.'''
        for committer, bzuser in self.ui.configitems(b'usermap'):
            if committer.lower() == user.lower():
                return bzuser
        return user

    # Methods to be implemented by access classes.
    #
    # 'bugs' is a dict keyed on bug id, where values are a dict holding
    # updates to bug state. Recognized dict keys are:
    #
    # 'hours': Value, float containing work hours to be updated.
    # 'fix': If key present, bug is to be marked fixed. Value ignored.

    def filter_real_bug_ids(self, bugs):
        '''remove bug IDs that do not exist in Bugzilla from bugs.'''

    def filter_cset_known_bug_ids(self, node, bugs):
        '''remove bug IDs where node occurs in comment text from bugs.'''

    def updatebug(self, bugid, newstate, text, committer):
        '''update the specified bug. Add comment text and set new states.

        If possible add the comment as being from the committer of
        the changeset. Otherwise use the default Bugzilla user.
        '''

    def notify(self, bugs, committer):
        '''Force sending of Bugzilla notification emails.

        Only required if the access method does not trigger notification
        emails automatically.
        '''

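# A minimal sketch of a custom access class (hypothetical, for
# illustration only -- not one of the shipped access modes). It shows
# the shape of the interface above: only the methods a mode cares
# about need real bodies.
class bzprint(bzaccess):
    '''Toy access class that merely reports requested updates.'''

    def updatebug(self, bugid, newstate, text, committer):
        # bugid is an int, text is bytes; just echo instead of updating.
        self.ui.write(b'bug %d: %s\n' % (bugid, text))

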
447 | # Bugzilla via direct access to MySQL database. |
|
447 | # Bugzilla via direct access to MySQL database. | |
448 | class bzmysql(bzaccess): |
|
448 | class bzmysql(bzaccess): | |
449 | '''Support for direct MySQL access to Bugzilla. |
|
449 | '''Support for direct MySQL access to Bugzilla. | |
450 |
|
450 | |||
451 | The earliest Bugzilla version this is tested with is version 2.16. |
|
451 | The earliest Bugzilla version this is tested with is version 2.16. | |
452 |
|
452 | |||
453 | If your Bugzilla is version 3.4 or above, you are strongly |
|
453 | If your Bugzilla is version 3.4 or above, you are strongly | |
454 | recommended to use the XMLRPC access method instead. |
|
454 | recommended to use the XMLRPC access method instead. | |
455 | ''' |
|
455 | ''' | |
456 |
|
456 | |||
457 | @staticmethod |
|
457 | @staticmethod | |
458 | def sql_buglist(ids): |
|
458 | def sql_buglist(ids): | |
459 | '''return SQL-friendly list of bug ids''' |
|
459 | '''return SQL-friendly list of bug ids''' | |
460 | return b'(' + b','.join(map(str, ids)) + b')' |
|
460 | return b'(' + b','.join(map(str, ids)) + b')' | |
461 |
|
461 | |||
462 | _MySQLdb = None |
|
462 | _MySQLdb = None | |
463 |
|
463 | |||
464 | def __init__(self, ui): |
|
464 | def __init__(self, ui): | |
465 | try: |
|
465 | try: | |
466 | import MySQLdb as mysql |
|
466 | import MySQLdb as mysql | |
467 |
|
467 | |||
468 | bzmysql._MySQLdb = mysql |
|
468 | bzmysql._MySQLdb = mysql | |
469 | except ImportError as err: |
|
469 | except ImportError as err: | |
470 | raise error.Abort( |
|
470 | raise error.Abort( | |
471 | _(b'python mysql support not available: %s') % err |
|
471 | _(b'python mysql support not available: %s') % err | |
472 | ) |
|
472 | ) | |
473 |
|
473 | |||
474 | bzaccess.__init__(self, ui) |
|
474 | bzaccess.__init__(self, ui) | |
475 |
|
475 | |||
476 | host = self.ui.config(b'bugzilla', b'host') |
|
476 | host = self.ui.config(b'bugzilla', b'host') | |
477 | user = self.ui.config(b'bugzilla', b'user') |
|
477 | user = self.ui.config(b'bugzilla', b'user') | |
478 | passwd = self.ui.config(b'bugzilla', b'password') |
|
478 | passwd = self.ui.config(b'bugzilla', b'password') | |
479 | db = self.ui.config(b'bugzilla', b'db') |
|
479 | db = self.ui.config(b'bugzilla', b'db') | |
480 | timeout = int(self.ui.config(b'bugzilla', b'timeout')) |
|
480 | timeout = int(self.ui.config(b'bugzilla', b'timeout')) | |
481 | self.ui.note( |
|
481 | self.ui.note( | |
482 | _(b'connecting to %s:%s as %s, password %s\n') |
|
482 | _(b'connecting to %s:%s as %s, password %s\n') | |
483 | % (host, db, user, b'*' * len(passwd)) |
|
483 | % (host, db, user, b'*' * len(passwd)) | |
484 | ) |
|
484 | ) | |
485 | self.conn = bzmysql._MySQLdb.connect( |
|
485 | self.conn = bzmysql._MySQLdb.connect( | |
486 | host=host, user=user, passwd=passwd, db=db, connect_timeout=timeout |
|
486 | host=host, user=user, passwd=passwd, db=db, connect_timeout=timeout | |
487 | ) |
|
487 | ) | |
488 | self.cursor = self.conn.cursor() |
|
488 | self.cursor = self.conn.cursor() | |
489 | self.longdesc_id = self.get_longdesc_id() |
|
489 | self.longdesc_id = self.get_longdesc_id() | |
490 | self.user_ids = {} |
|
490 | self.user_ids = {} | |
491 | self.default_notify = b"cd %(bzdir)s && ./processmail %(id)s %(user)s" |
|
491 | self.default_notify = b"cd %(bzdir)s && ./processmail %(id)s %(user)s" | |
492 |
|
492 | |||
493 | def run(self, *args, **kwargs): |
|
493 | def run(self, *args, **kwargs): | |
494 | '''run a query.''' |
|
494 | '''run a query.''' | |
495 | self.ui.note(_(b'query: %s %s\n') % (args, kwargs)) |
|
495 | self.ui.note(_(b'query: %s %s\n') % (args, kwargs)) | |
496 | try: |
|
496 | try: | |
497 | self.cursor.execute(*args, **kwargs) |
|
497 | self.cursor.execute(*args, **kwargs) | |
498 | except bzmysql._MySQLdb.MySQLError: |
|
498 | except bzmysql._MySQLdb.MySQLError: | |
499 | self.ui.note(_(b'failed query: %s %s\n') % (args, kwargs)) |
|
499 | self.ui.note(_(b'failed query: %s %s\n') % (args, kwargs)) | |
500 | raise |
|
500 | raise | |
501 |
|
501 | |||
502 | def get_longdesc_id(self): |
|
502 | def get_longdesc_id(self): | |
503 | '''get identity of longdesc field''' |
|
503 | '''get identity of longdesc field''' | |
504 | self.run(b'select fieldid from fielddefs where name = "longdesc"') |
|
504 | self.run(b'select fieldid from fielddefs where name = "longdesc"') | |
505 | ids = self.cursor.fetchall() |
|
505 | ids = self.cursor.fetchall() | |
506 | if len(ids) != 1: |
|
506 | if len(ids) != 1: | |
507 | raise error.Abort(_(b'unknown database schema')) |
|
507 | raise error.Abort(_(b'unknown database schema')) | |
508 | return ids[0][0] |
|
508 | return ids[0][0] | |
509 |
|
509 | |||
510 | def filter_real_bug_ids(self, bugs): |
|
510 | def filter_real_bug_ids(self, bugs): | |
511 | '''filter not-existing bugs from set.''' |
|
511 | '''filter not-existing bugs from set.''' | |
512 | self.run( |
|
512 | self.run( | |
513 | b'select bug_id from bugs where bug_id in %s' |
|
513 | b'select bug_id from bugs where bug_id in %s' | |
514 | % bzmysql.sql_buglist(bugs.keys()) |
|
514 | % bzmysql.sql_buglist(bugs.keys()) | |
515 | ) |
|
515 | ) | |
516 | existing = [id for (id,) in self.cursor.fetchall()] |
|
516 | existing = [id for (id,) in self.cursor.fetchall()] | |
517 | for id in bugs.keys(): |
|
517 | for id in bugs.keys(): | |
518 | if id not in existing: |
|
518 | if id not in existing: | |
519 | self.ui.status(_(b'bug %d does not exist\n') % id) |
|
519 | self.ui.status(_(b'bug %d does not exist\n') % id) | |
520 | del bugs[id] |
|
520 | del bugs[id] | |
521 |
|
521 | |||
522 | def filter_cset_known_bug_ids(self, node, bugs): |
|
522 | def filter_cset_known_bug_ids(self, node, bugs): | |
523 | '''filter bug ids that already refer to this changeset from set.''' |
|
523 | '''filter bug ids that already refer to this changeset from set.''' | |
524 | self.run( |
|
524 | self.run( | |
525 | '''select bug_id from longdescs where |
|
525 | '''select bug_id from longdescs where | |
526 | bug_id in %s and thetext like "%%%s%%"''' |
|
526 | bug_id in %s and thetext like "%%%s%%"''' | |
527 | % (bzmysql.sql_buglist(bugs.keys()), short(node)) |
|
527 | % (bzmysql.sql_buglist(bugs.keys()), short(node)) | |
528 | ) |
|
528 | ) | |
529 | for (id,) in self.cursor.fetchall(): |
|
529 | for (id,) in self.cursor.fetchall(): | |
530 | self.ui.status( |
|
530 | self.ui.status( | |
531 | _(b'bug %d already knows about changeset %s\n') |
|
531 | _(b'bug %d already knows about changeset %s\n') | |
532 | % (id, short(node)) |
|
532 | % (id, short(node)) | |
533 | ) |
|
533 | ) | |
534 | del bugs[id] |
|
534 | del bugs[id] | |
535 |
|
535 | |||
536 | def notify(self, bugs, committer): |
|
536 | def notify(self, bugs, committer): | |
537 | '''tell bugzilla to send mail.''' |
|
537 | '''tell bugzilla to send mail.''' | |
538 | self.ui.status(_(b'telling bugzilla to send mail:\n')) |
|
538 | self.ui.status(_(b'telling bugzilla to send mail:\n')) | |
539 | (user, userid) = self.get_bugzilla_user(committer) |
|
539 | (user, userid) = self.get_bugzilla_user(committer) | |
540 | for id in bugs.keys(): |
|
540 | for id in bugs.keys(): | |
541 | self.ui.status(_(b' bug %s\n') % id) |
|
541 | self.ui.status(_(b' bug %s\n') % id) | |
542 | cmdfmt = self.ui.config(b'bugzilla', b'notify', self.default_notify) |
|
542 | cmdfmt = self.ui.config(b'bugzilla', b'notify', self.default_notify) | |
543 | bzdir = self.ui.config(b'bugzilla', b'bzdir') |
|
543 | bzdir = self.ui.config(b'bugzilla', b'bzdir') | |
544 | try: |
|
544 | try: | |
545 | # Backwards-compatible with old notify string, which |
|
545 | # Backwards-compatible with old notify string, which | |
546 | # took one string. This will throw with a new format |
|
546 | # took one string. This will throw with a new format | |
547 | # string. |
|
547 | # string. | |
548 | cmd = cmdfmt % id |
|
548 | cmd = cmdfmt % id | |
549 | except TypeError: |
|
549 | except TypeError: | |
550 | cmd = cmdfmt % {b'bzdir': bzdir, b'id': id, b'user': user} |
|
550 | cmd = cmdfmt % {b'bzdir': bzdir, b'id': id, b'user': user} | |
551 | self.ui.note(_(b'running notify command %s\n') % cmd) |
|
551 | self.ui.note(_(b'running notify command %s\n') % cmd) | |
552 | fp = procutil.popen(b'(%s) 2>&1' % cmd, b'rb') |
|
552 | fp = procutil.popen(b'(%s) 2>&1' % cmd, b'rb') | |
553 | out = util.fromnativeeol(fp.read()) |
|
553 | out = util.fromnativeeol(fp.read()) | |
554 | ret = fp.close() |
|
554 | ret = fp.close() | |
555 | if ret: |
|
555 | if ret: | |
556 | self.ui.warn(out) |
|
556 | self.ui.warn(out) | |
557 | raise error.Abort( |
|
557 | raise error.Abort( | |
558 | _(b'bugzilla notify command %s') % procutil.explainexit(ret) |
|
558 | _(b'bugzilla notify command %s') % procutil.explainexit(ret) | |
559 | ) |
|
559 | ) | |
560 | self.ui.status(_(b'done\n')) |
|
560 | self.ui.status(_(b'done\n')) | |
561 |
|
561 | |||
562 | def get_user_id(self, user): |
|
562 | def get_user_id(self, user): | |
563 | '''look up numeric bugzilla user id.''' |
|
563 | '''look up numeric bugzilla user id.''' | |
564 | try: |
|
564 | try: | |
565 | return self.user_ids[user] |
|
565 | return self.user_ids[user] | |
566 | except KeyError: |
|
566 | except KeyError: | |
567 | try: |
|
567 | try: | |
568 | userid = int(user) |
|
568 | userid = int(user) | |
569 | except ValueError: |
|
569 | except ValueError: | |
570 | self.ui.note(_(b'looking up user %s\n') % user) |
|
570 | self.ui.note(_(b'looking up user %s\n') % user) | |
571 | self.run( |
|
571 | self.run( | |
572 | '''select userid from profiles |
|
572 | '''select userid from profiles | |
573 | where login_name like %s''', |
|
573 | where login_name like %s''', | |
574 | user, |
|
574 | user, | |
575 | ) |
|
575 | ) | |
576 | all = self.cursor.fetchall() |
|
576 | all = self.cursor.fetchall() | |
577 | if len(all) != 1: |
|
577 | if len(all) != 1: | |
578 | raise KeyError(user) |
|
578 | raise KeyError(user) | |
579 | userid = int(all[0][0]) |
|
579 | userid = int(all[0][0]) | |
580 | self.user_ids[user] = userid |
|
580 | self.user_ids[user] = userid | |
581 | return userid |
|
581 | return userid | |
582 |
|
582 | |||
583 | def get_bugzilla_user(self, committer): |
|
583 | def get_bugzilla_user(self, committer): | |
584 | '''See if committer is a registered bugzilla user. Return |
|
584 | '''See if committer is a registered bugzilla user. Return | |
585 | bugzilla username and userid if so. If not, return default |
|
585 | bugzilla username and userid if so. If not, return default | |
586 | bugzilla username and userid.''' |
|
586 | bugzilla username and userid.''' | |
587 | user = self.map_committer(committer) |
|
587 | user = self.map_committer(committer) | |
588 | try: |
|
588 | try: | |
589 | userid = self.get_user_id(user) |
|
589 | userid = self.get_user_id(user) | |
590 | except KeyError: |
|
590 | except KeyError: | |
591 | try: |
|
591 | try: | |
592 | defaultuser = self.ui.config(b'bugzilla', b'bzuser') |
|
592 | defaultuser = self.ui.config(b'bugzilla', b'bzuser') | |
593 | if not defaultuser: |
|
593 | if not defaultuser: | |
594 | raise error.Abort( |
|
594 | raise error.Abort( | |
595 | _(b'cannot find bugzilla user id for %s') % user |
|
595 | _(b'cannot find bugzilla user id for %s') % user | |
596 | ) |
|
596 | ) | |
597 | userid = self.get_user_id(defaultuser) |
|
597 | userid = self.get_user_id(defaultuser) | |
598 | user = defaultuser |
|
598 | user = defaultuser | |
599 | except KeyError: |
|
599 | except KeyError: | |
600 | raise error.Abort( |
|
600 | raise error.Abort( | |
601 | _(b'cannot find bugzilla user id for %s or %s') |
|
601 | _(b'cannot find bugzilla user id for %s or %s') | |
602 | % (user, defaultuser) |
|
602 | % (user, defaultuser) | |
603 | ) |
|
603 | ) | |
604 | return (user, userid) |
|
604 | return (user, userid) | |
605 |
|
605 | |||
606 | def updatebug(self, bugid, newstate, text, committer): |
|
606 | def updatebug(self, bugid, newstate, text, committer): | |
607 | '''update bug state with comment text. |
|
607 | '''update bug state with comment text. | |
608 |
|
608 | |||
609 | Try adding comment as committer of changeset, otherwise as |
|
609 | Try adding comment as committer of changeset, otherwise as | |
610 | default bugzilla user.''' |
|
610 | default bugzilla user.''' | |
611 | if len(newstate) > 0: |
|
611 | if len(newstate) > 0: | |
612 | self.ui.warn(_(b"Bugzilla/MySQL cannot update bug state\n")) |
|
612 | self.ui.warn(_(b"Bugzilla/MySQL cannot update bug state\n")) | |
613 |
|
613 | |||
614 | (user, userid) = self.get_bugzilla_user(committer) |
|
614 | (user, userid) = self.get_bugzilla_user(committer) | |
615 | now = time.strftime(r'%Y-%m-%d %H:%M:%S') |
|
615 | now = time.strftime(r'%Y-%m-%d %H:%M:%S') | |
616 | self.run( |
|
616 | self.run( | |
617 | '''insert into longdescs |
|
617 | '''insert into longdescs | |
618 | (bug_id, who, bug_when, thetext) |
|
618 | (bug_id, who, bug_when, thetext) | |
619 | values (%s, %s, %s, %s)''', |
|
619 | values (%s, %s, %s, %s)''', | |
620 | (bugid, userid, now, text), |
|
620 | (bugid, userid, now, text), | |
621 | ) |
|
621 | ) | |
622 | self.run( |
|
622 | self.run( | |
623 | '''insert into bugs_activity (bug_id, who, bug_when, fieldid) |
|
623 | '''insert into bugs_activity (bug_id, who, bug_when, fieldid) | |
624 | values (%s, %s, %s, %s)''', |
|
624 | values (%s, %s, %s, %s)''', | |
625 | (bugid, userid, now, self.longdesc_id), |
|
625 | (bugid, userid, now, self.longdesc_id), | |
626 | ) |
|
626 | ) | |
627 | self.conn.commit() |
|
627 | self.conn.commit() | |
628 |
|
628 | |||
629 |
|
629 | |||
630 | class bzmysql_2_18(bzmysql): |
|
630 | class bzmysql_2_18(bzmysql): | |
631 | '''support for bugzilla 2.18 series.''' |
|
631 | '''support for bugzilla 2.18 series.''' | |
632 |
|
632 | |||
633 | def __init__(self, ui): |
|
633 | def __init__(self, ui): | |
634 | bzmysql.__init__(self, ui) |
|
634 | bzmysql.__init__(self, ui) | |
635 | self.default_notify = ( |
|
635 | self.default_notify = ( | |
636 | b"cd %(bzdir)s && perl -T contrib/sendbugmail.pl %(id)s %(user)s" |
|
636 | b"cd %(bzdir)s && perl -T contrib/sendbugmail.pl %(id)s %(user)s" | |
637 | ) |
|
637 | ) | |
638 |
|
638 | |||
639 |
|
639 | |||
640 | class bzmysql_3_0(bzmysql_2_18): |
|
640 | class bzmysql_3_0(bzmysql_2_18): | |
641 | '''support for bugzilla 3.0 series.''' |
|
641 | '''support for bugzilla 3.0 series.''' | |
642 |
|
642 | |||
643 | def __init__(self, ui): |
|
643 | def __init__(self, ui): | |
644 | bzmysql_2_18.__init__(self, ui) |
|
644 | bzmysql_2_18.__init__(self, ui) | |
645 |
|
645 | |||
646 | def get_longdesc_id(self): |
|
646 | def get_longdesc_id(self): | |
647 | '''get identity of longdesc field''' |
|
647 | '''get identity of longdesc field''' | |
648 | self.run(b'select id from fielddefs where name = "longdesc"') |
|
648 | self.run(b'select id from fielddefs where name = "longdesc"') | |
649 | ids = self.cursor.fetchall() |
|
649 | ids = self.cursor.fetchall() | |
650 | if len(ids) != 1: |
|
650 | if len(ids) != 1: | |
651 | raise error.Abort(_(b'unknown database schema')) |
|
651 | raise error.Abort(_(b'unknown database schema')) | |
652 | return ids[0][0] |
|
652 | return ids[0][0] | |
653 |
|
653 | |||
654 |
|
654 | |||
655 | # Bugzilla via XMLRPC interface. |
|
655 | # Bugzilla via XMLRPC interface. | |
656 |
|
656 | |||
657 |
|
657 | |||
658 | class cookietransportrequest(object): |
|
658 | class cookietransportrequest(object): | |
659 | """A Transport request method that retains cookies over its lifetime. |
|
659 | """A Transport request method that retains cookies over its lifetime. | |
660 |
|
660 | |||
661 | The regular xmlrpclib transports ignore cookies. Which causes |
|
661 | The regular xmlrpclib transports ignore cookies. Which causes | |
662 | a bit of a problem when you need a cookie-based login, as with |
|
662 | a bit of a problem when you need a cookie-based login, as with | |
663 | the Bugzilla XMLRPC interface prior to 4.4.3. |
|
663 | the Bugzilla XMLRPC interface prior to 4.4.3. | |
664 |
|
664 | |||
665 | So this is a helper for defining a Transport which looks for |
|
665 | So this is a helper for defining a Transport which looks for | |
666 | cookies being set in responses and saves them to add to all future |
|
666 | cookies being set in responses and saves them to add to all future | |
667 | requests. |
|
667 | requests. | |
668 | """ |
|
668 | """ | |
669 |
|
669 | |||
670 | # Inspiration drawn from |
|
670 | # Inspiration drawn from | |
671 | # http://blog.godson.in/2010/09/how-to-make-python-xmlrpclib-client.html |
|
671 | # http://blog.godson.in/2010/09/how-to-make-python-xmlrpclib-client.html | |
672 | # http://www.itkovian.net/base/transport-class-for-pythons-xml-rpc-lib/ |
|
672 | # http://www.itkovian.net/base/transport-class-for-pythons-xml-rpc-lib/ | |
673 |
|
673 | |||
674 | cookies = [] |
|
674 | cookies = [] | |
675 |
|
675 | |||
676 | def send_cookies(self, connection): |
|
676 | def send_cookies(self, connection): | |
677 | if self.cookies: |
|
        if self.cookies:
            for cookie in self.cookies:
                connection.putheader(b"Cookie", cookie)

    def request(self, host, handler, request_body, verbose=0):
        self.verbose = verbose
        self.accept_gzip_encoding = False

        # issue XML-RPC request
        h = self.make_connection(host)
        if verbose:
            h.set_debuglevel(1)

        self.send_request(h, handler, request_body)
        self.send_host(h, host)
        self.send_cookies(h)
        self.send_user_agent(h)
        self.send_content(h, request_body)

        # Deal with differences between Python 2.6 and 2.7.
        # In the former h is a HTTP(S). In the latter it's a
        # HTTP(S)Connection. Luckily, the 2.6 implementation of
        # HTTP(S) has an underlying HTTP(S)Connection, so extract
        # that and use it.
        try:
            response = h.getresponse()
        except AttributeError:
            response = h._conn.getresponse()

        # Add any cookie definitions to our list.
        for header in response.msg.getallmatchingheaders(b"Set-Cookie"):
            val = header.split(b": ", 1)[1]
            cookie = val.split(b";", 1)[0]
            self.cookies.append(cookie)

        if response.status != 200:
            raise xmlrpclib.ProtocolError(
                host + handler,
                response.status,
                response.reason,
                response.msg.headers,
            )

        payload = response.read()
        parser, unmarshaller = self.getparser()
        parser.feed(payload)
        parser.close()

        return unmarshaller.close()


# The explicit calls to the underlying xmlrpclib __init__() methods are
# necessary. The xmlrpclib.Transport classes are old-style classes, and
# it turns out their __init__() doesn't get called when doing multiple
# inheritance with a new-style class.
class cookietransport(cookietransportrequest, xmlrpclib.Transport):
    def __init__(self, use_datetime=0):
        if util.safehasattr(xmlrpclib.Transport, "__init__"):
            xmlrpclib.Transport.__init__(self, use_datetime)


class cookiesafetransport(cookietransportrequest, xmlrpclib.SafeTransport):
    def __init__(self, use_datetime=0):
        if util.safehasattr(xmlrpclib.Transport, "__init__"):
            xmlrpclib.SafeTransport.__init__(self, use_datetime)

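# A minimal sketch of how these transports plug into xmlrpclib; the bzxmlrpc
# class below does this for real, and the URL here is only a placeholder:
#
#   proxy = xmlrpclib.ServerProxy(
#       b"https://bugzilla.example.org/xmlrpc.cgi", cookiesafetransport()
#   )
#   proxy.Bugzilla.version()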
class bzxmlrpc(bzaccess):
    """Support for access to Bugzilla via the Bugzilla XMLRPC API.

    Requires Bugzilla 3.4 or later.
    """

    def __init__(self, ui):
        bzaccess.__init__(self, ui)

        bzweb = self.ui.config(b'bugzilla', b'bzurl')
        bzweb = bzweb.rstrip(b"/") + b"/xmlrpc.cgi"

        user = self.ui.config(b'bugzilla', b'user')
        passwd = self.ui.config(b'bugzilla', b'password')

        self.fixstatus = self.ui.config(b'bugzilla', b'fixstatus')
        self.fixresolution = self.ui.config(b'bugzilla', b'fixresolution')

        self.bzproxy = xmlrpclib.ServerProxy(bzweb, self.transport(bzweb))
        ver = self.bzproxy.Bugzilla.version()[b'version'].split(b'.')
        self.bzvermajor = int(ver[0])
        self.bzverminor = int(ver[1])
        login = self.bzproxy.User.login(
            {b'login': user, b'password': passwd, b'restrict_login': True}
        )
        self.bztoken = login.get(b'token', b'')

    def transport(self, uri):
        if util.urlreq.urlparse(uri, b"http")[0] == b"https":
            return cookiesafetransport()
        else:
            return cookietransport()

    def get_bug_comments(self, id):
        """Return a string with all comment text for a bug."""
        c = self.bzproxy.Bug.comments(
            {b'ids': [id], b'include_fields': [b'text'], b'token': self.bztoken}
        )
        return b''.join(
            [t[b'text'] for t in c[b'bugs'][b'%d' % id][b'comments']]
        )

    def filter_real_bug_ids(self, bugs):
        probe = self.bzproxy.Bug.get(
            {
                b'ids': sorted(bugs.keys()),
                b'include_fields': [],
                b'permissive': True,
                b'token': self.bztoken,
            }
        )
        for badbug in probe[b'faults']:
            id = badbug[b'id']
            self.ui.status(_(b'bug %d does not exist\n') % id)
            del bugs[id]

    def filter_cset_known_bug_ids(self, node, bugs):
        for id in sorted(bugs.keys()):
            if self.get_bug_comments(id).find(short(node)) != -1:
                self.ui.status(
                    _(b'bug %d already knows about changeset %s\n')
                    % (id, short(node))
                )
                del bugs[id]

    def updatebug(self, bugid, newstate, text, committer):
        args = {}
        if b'hours' in newstate:
            args[b'work_time'] = newstate[b'hours']

        if self.bzvermajor >= 4:
            args[b'ids'] = [bugid]
            args[b'comment'] = {b'body': text}
            if b'fix' in newstate:
                args[b'status'] = self.fixstatus
                args[b'resolution'] = self.fixresolution
            args[b'token'] = self.bztoken
            self.bzproxy.Bug.update(args)
        else:
            if b'fix' in newstate:
                self.ui.warn(
                    _(
                        b"Bugzilla/XMLRPC needs Bugzilla 4.0 or later "
                        b"to mark bugs fixed\n"
                    )
                )
            args[b'id'] = bugid
            args[b'comment'] = text
            self.bzproxy.Bug.add_comment(args)


class bzxmlrpcemail(bzxmlrpc):
    """Read data from Bugzilla via XMLRPC, send updates via email.

    Advantages of sending updates via email:
    1. Comments can be added as any user, not just the logged-in user.
    2. Bug statuses or other fields not accessible via XMLRPC can
       potentially be updated.

    Before Bugzilla 4.0 there is no XMLRPC function to change bug status,
    so bugs cannot be marked fixed via XMLRPC; they can, however, be
    marked fixed via email from 3.4 onwards.
    """

    # The email interface changes subtly between 3.4 and 3.6. In 3.4,
    # in-email fields are specified as '@<fieldname> = <value>'. In
    # 3.6 this becomes '@<fieldname> <value>'. And fieldname @bug_id
    # in 3.4 becomes @id in 3.6. 3.6 and 4.0 both maintain backwards
    # compatibility, but rather than rely on this, use the new format
    # from 4.0 onwards.

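    # For example, marking a hypothetical bug 1234 fixed would emit
    # '@bug_id = 1234' for Bugzilla 3.4, but '@id 1234' for 4.0 or later
    # (see makecommandline() below).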
    def __init__(self, ui):
        bzxmlrpc.__init__(self, ui)

        self.bzemail = self.ui.config(b'bugzilla', b'bzemail')
        if not self.bzemail:
            raise error.Abort(_(b"configuration 'bzemail' missing"))
        mail.validateconfig(self.ui)

    def makecommandline(self, fieldname, value):
        if self.bzvermajor >= 4:
            return b"@%s %s" % (fieldname, pycompat.bytestr(value))
        else:
            if fieldname == b"id":
                fieldname = b"bug_id"
            return b"@%s = %s" % (fieldname, pycompat.bytestr(value))

    def send_bug_modify_email(self, bugid, commands, comment, committer):
        '''send modification message to Bugzilla bug via email.

        The message format is documented in the Bugzilla email_in.pl
        specification. commands is a list of command lines, comment is the
        comment text.

        To stop users from crafting commit comments with
        Bugzilla commands, specify the bug ID via the message body, rather
        than the subject line, and leave a blank line after it.
        '''
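        # A sketch of the message body this method assembles, with
        # hypothetical command values and comment text:
        #
        #   @work_time 2.5
        #   @id 1234
        #
        #   changeset deadbeef0123 in repo /repo/path refers to bug 1234.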
        user = self.map_committer(committer)
        matches = self.bzproxy.User.get(
            {b'match': [user], b'token': self.bztoken}
        )
        if not matches[b'users']:
            user = self.ui.config(b'bugzilla', b'user')
            matches = self.bzproxy.User.get(
                {b'match': [user], b'token': self.bztoken}
            )
            if not matches[b'users']:
                raise error.Abort(
                    _(b"default bugzilla user %s email not found") % user
                )
        user = matches[b'users'][0][b'email']
        commands.append(self.makecommandline(b"id", bugid))

        text = b"\n".join(commands) + b"\n\n" + comment

        _charsets = mail._charsets(self.ui)
        user = mail.addressencode(self.ui, user, _charsets)
        bzemail = mail.addressencode(self.ui, self.bzemail, _charsets)
        msg = mail.mimeencode(self.ui, text, _charsets)
        msg[b'From'] = user
        msg[b'To'] = bzemail
        msg[b'Subject'] = mail.headencode(
            self.ui, b"Bug modification", _charsets
        )
        sendmail = mail.connect(self.ui)
        sendmail(user, bzemail, msg.as_string())

    def updatebug(self, bugid, newstate, text, committer):
        cmds = []
        if b'hours' in newstate:
            cmds.append(self.makecommandline(b"work_time", newstate[b'hours']))
        if b'fix' in newstate:
            cmds.append(self.makecommandline(b"bug_status", self.fixstatus))
            cmds.append(self.makecommandline(b"resolution", self.fixresolution))
        self.send_bug_modify_email(bugid, cmds, text, committer)


class NotFound(LookupError):
    pass


class bzrestapi(bzaccess):
    """Read and write bugzilla data using the REST API available since
    Bugzilla 5.0.
    """

    def __init__(self, ui):
        bzaccess.__init__(self, ui)
        bz = self.ui.config(b'bugzilla', b'bzurl')
        self.bzroot = b'/'.join([bz, b'rest'])
        self.apikey = self.ui.config(b'bugzilla', b'apikey')
        self.user = self.ui.config(b'bugzilla', b'user')
        self.passwd = self.ui.config(b'bugzilla', b'password')
        self.fixstatus = self.ui.config(b'bugzilla', b'fixstatus')
        self.fixresolution = self.ui.config(b'bugzilla', b'fixresolution')

    def apiurl(self, targets, include_fields=None):
        url = b'/'.join([self.bzroot] + [pycompat.bytestr(t) for t in targets])
        qv = {}
        if self.apikey:
            qv[b'api_key'] = self.apikey
        elif self.user and self.passwd:
            qv[b'login'] = self.user
            qv[b'password'] = self.passwd
        if include_fields:
            qv[b'include_fields'] = include_fields
        if qv:
            url = b'%s?%s' % (url, util.urlreq.urlencode(qv))
        return url
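    # Illustration: apiurl((b'bug', 1234), include_fields=b'status') yields
    # something like the following, with a hypothetical host and key:
    #   https://bugzilla.example.org/rest/bug/1234?api_key=XXXX&include_fields=status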

    def _fetch(self, burl):
        try:
            resp = url.open(self.ui, burl)
            return pycompat.json_loads(resp.read())
        except util.urlerr.httperror as inst:
            if inst.code == 401:
                raise error.Abort(_(b'authorization failed'))
            if inst.code == 404:
                raise NotFound()
            else:
                raise

    def _submit(self, burl, data, method=b'POST'):
        data = json.dumps(data)
        if method == b'PUT':

            class putrequest(util.urlreq.request):
                def get_method(self):
                    return b'PUT'

            request_type = putrequest
        else:
            request_type = util.urlreq.request
        req = request_type(burl, data, {b'Content-Type': b'application/json'})
        try:
            resp = url.opener(self.ui).open(req)
            return pycompat.json_loads(resp.read())
        except util.urlerr.httperror as inst:
            if inst.code == 401:
                raise error.Abort(_(b'authorization failed'))
            if inst.code == 404:
                raise NotFound()
            else:
                raise
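    # For reference, a PUT issued via _submit() to .../rest/bug/<id> (see
    # updatebug() below) might carry a body like this (values hypothetical):
    #   {"status": "RESOLVED", "resolution": "FIXED",
    #    "comment": {"comment": "...", "is_private": false}}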

    def filter_real_bug_ids(self, bugs):
        '''remove bug IDs that do not exist in Bugzilla from bugs.'''
        badbugs = set()
        for bugid in bugs:
            burl = self.apiurl((b'bug', bugid), include_fields=b'status')
            try:
                self._fetch(burl)
            except NotFound:
                badbugs.add(bugid)
        for bugid in badbugs:
            del bugs[bugid]

    def filter_cset_known_bug_ids(self, node, bugs):
        '''remove bug IDs where node occurs in comment text from bugs.'''
        sn = short(node)
        for bugid in bugs.keys():
            burl = self.apiurl(
                (b'bug', bugid, b'comment'), include_fields=b'text'
            )
            result = self._fetch(burl)
            comments = result[b'bugs'][pycompat.bytestr(bugid)][b'comments']
            if any(sn in c[b'text'] for c in comments):
                self.ui.status(
                    _(b'bug %d already knows about changeset %s\n')
                    % (bugid, sn)
                )
                del bugs[bugid]

    def updatebug(self, bugid, newstate, text, committer):
        '''update the specified bug. Add comment text and set new states.

        If possible add the comment as being from the committer of
        the changeset. Otherwise use the default Bugzilla user.
        '''
        bugmod = {}
        if b'hours' in newstate:
            bugmod[b'work_time'] = newstate[b'hours']
        if b'fix' in newstate:
            bugmod[b'status'] = self.fixstatus
            bugmod[b'resolution'] = self.fixresolution
        if bugmod:
            # if we have to change the bug's state, do it here
            bugmod[b'comment'] = {
                b'comment': text,
                b'is_private': False,
                b'is_markdown': False,
            }
            burl = self.apiurl((b'bug', bugid))
            self._submit(burl, bugmod, method=b'PUT')
            self.ui.debug(b'updated bug %s\n' % bugid)
        else:
            burl = self.apiurl((b'bug', bugid, b'comment'))
            self._submit(
                burl,
                {
                    b'comment': text,
                    b'is_private': False,
                    b'is_markdown': False,
                },
            )
            self.ui.debug(b'added comment to bug %s\n' % bugid)

    def notify(self, bugs, committer):
        '''Force sending of Bugzilla notification emails.

        Only required if the access method does not trigger notification
        emails automatically.
        '''
        pass


class bugzilla(object):
    # supported versions of bugzilla. different versions have
    # different schemas.
    _versions = {
        b'2.16': bzmysql,
        b'2.18': bzmysql_2_18,
        b'3.0': bzmysql_3_0,
        b'xmlrpc': bzxmlrpc,
        b'xmlrpc+email': bzxmlrpcemail,
        b'restapi': bzrestapi,
    }

    def __init__(self, ui, repo):
        self.ui = ui
        self.repo = repo

        bzversion = self.ui.config(b'bugzilla', b'version')
        try:
            bzclass = bugzilla._versions[bzversion]
        except KeyError:
            raise error.Abort(
                _(b'bugzilla version %s not supported') % bzversion
            )
        self.bzdriver = bzclass(self.ui)

        self.bug_re = re.compile(
            self.ui.config(b'bugzilla', b'regexp'), re.IGNORECASE
        )
        self.fix_re = re.compile(
            self.ui.config(b'bugzilla', b'fixregexp'), re.IGNORECASE
        )
        self.split_re = re.compile(br'\D+')

    def find_bugs(self, ctx):
        '''return bugs dictionary created from commit comment.

        Extract bug info from changeset comments. Filter out any that are
        not known to Bugzilla, and any that already have a reference to
        the given changeset in their comments.
        '''
        start = 0
        hours = 0.0
        bugs = {}
        bugmatch = self.bug_re.search(ctx.description(), start)
        fixmatch = self.fix_re.search(ctx.description(), start)
        while True:
            bugattribs = {}
            if not bugmatch and not fixmatch:
                break
            if not bugmatch:
                m = fixmatch
            elif not fixmatch:
                m = bugmatch
            else:
                if bugmatch.start() < fixmatch.start():
                    m = bugmatch
                else:
                    m = fixmatch
            start = m.end()
            if m is bugmatch:
                bugmatch = self.bug_re.search(ctx.description(), start)
                if b'fix' in bugattribs:
                    del bugattribs[b'fix']
            else:
                fixmatch = self.fix_re.search(ctx.description(), start)
                bugattribs[b'fix'] = None

            try:
                ids = m.group(b'ids')
            except IndexError:
                ids = m.group(1)
            try:
                hours = float(m.group(b'hours'))
                bugattribs[b'hours'] = hours
            except IndexError:
                pass
            except TypeError:
                pass
            except ValueError:
                self.ui.status(_(b"%s: invalid hours\n") % m.group(b'hours'))

            for id in self.split_re.split(ids):
                if not id:
                    continue
                bugs[int(id)] = bugattribs
        if bugs:
            self.bzdriver.filter_real_bug_ids(bugs)
        if bugs:
            self.bzdriver.filter_cset_known_bug_ids(ctx.node(), bugs)
        return bugs

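    # Illustration for find_bugs() above: given regexps that match a
    # hypothetical description "Fixes bug 1234" with an hours group for "2h",
    # the result would be {1234: {b'fix': None, b'hours': 2.0}} before the
    # driver filters run.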
    def update(self, bugid, newstate, ctx):
        '''update bugzilla bug with reference to changeset.'''

        def webroot(root):
            '''strip leading prefix of repo root and turn into
            url-safe path.'''
            count = int(self.ui.config(b'bugzilla', b'strip'))
            root = util.pconvert(root)
            while count > 0:
                c = root.find(b'/')
                if c == -1:
                    break
                root = root[c + 1 :]
                count -= 1
            return root

        mapfile = None
        tmpl = self.ui.config(b'bugzilla', b'template')
        if not tmpl:
            mapfile = self.ui.config(b'bugzilla', b'style')
        if not mapfile and not tmpl:
            tmpl = _(
                b'changeset {node|short} in repo {root} refers '
                b'to bug {bug}.\ndetails:\n\t{desc|tabindent}'
            )
        spec = logcmdutil.templatespec(tmpl, mapfile)
        t = logcmdutil.changesettemplater(self.ui, self.repo, spec)
        self.ui.pushbuffer()
        t.show(
            ctx,
            changes=ctx.changeset(),
            bug=pycompat.bytestr(bugid),
            hgweb=self.ui.config(b'web', b'baseurl'),
            root=self.repo.root,
            webroot=webroot(self.repo.root),
        )
        data = self.ui.popbuffer()
        self.bzdriver.updatebug(
            bugid, newstate, data, stringutil.email(ctx.user())
        )

    def notify(self, bugs, committer):
        '''ensure Bugzilla users are notified of bug change.'''
        self.bzdriver.notify(bugs, committer)


def hook(ui, repo, hooktype, node=None, **kwargs):
    '''add comment to bugzilla for each changeset that refers to a
    bugzilla bug id. only add a comment once per bug, so the same change
    seen multiple times does not fill a bug with duplicate data.'''
    if node is None:
        raise error.Abort(
            _(b'hook type %s does not pass a changeset id') % hooktype
        )
    try:
        bz = bugzilla(ui, repo)
        ctx = repo[node]
        bugs = bz.find_bugs(ctx)
        if bugs:
            for bug in bugs:
                bz.update(bug, bugs[bug], ctx)
            bz.notify(bugs, stringutil.email(ctx.user()))
    except Exception as e:
        raise error.Abort(_(b'Bugzilla error: %s') % e)


# fix - rewrite file content in changesets and working copy
#
# Copyright 2018 Google LLC.
#
# This software may be used and distributed according to the terms of the
# GNU General Public License version 2 or any later version.
"""rewrite file content in changesets or working copy (EXPERIMENTAL)

Provides a command that runs configured tools on the contents of modified files,
writing back any fixes to the working copy or replacing changesets.

Here is an example configuration that causes :hg:`fix` to apply automatic
formatting fixes to modified lines in C++ code::

  [fix]
  clang-format:command=clang-format --assume-filename={rootpath}
  clang-format:linerange=--lines={first}:{last}
  clang-format:pattern=set:**.cpp or **.hpp

The :command suboption forms the first part of the shell command that will be
used to fix a file. The content of the file is passed on standard input, and the
fixed file content is expected on standard output. Any output on standard error
will be displayed as a warning. If the exit status is not zero, the file will
not be affected. A placeholder warning is displayed if there is a non-zero exit
status but no standard error output. Some values may be substituted into the
command::

  {rootpath} The path of the file being fixed, relative to the repo root
  {basename} The name of the file being fixed, without the directory path

If the :linerange suboption is set, the tool will only be run if there are
changed lines in a file. The value of this suboption is appended to the shell
command once for every range of changed lines in the file. Some values may be
substituted into the command::

  {first} The 1-based line number of the first line in the modified range
  {last} The 1-based line number of the last line in the modified range

Deleted sections of a file will be ignored by :linerange, because there is no
corresponding line range in the version being fixed.
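
For example, with the clang-format configuration above, fixing a file whose
changed ranges were (hypothetically) lines 3-5 and 10-12 would append::

  --lines=3:5 --lines=10:12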

By default, tools that set :linerange will only be executed if there is at least
one changed line range. This is meant to prevent accidents like running a code
formatter in such a way that it unexpectedly reformats the whole file. If such a
tool needs to operate on unchanged files, it should set the :skipclean suboption
to false.

The :pattern suboption determines which files will be passed through each
configured tool. See :hg:`help patterns` for possible values. However, all
patterns are relative to the repo root, even if that text says they are relative
to the current working directory. If there are file arguments to :hg:`fix`, the
intersection of these patterns is used.

There is also a configurable limit for the maximum size of file that will be
processed by :hg:`fix`::

  [fix]
  maxfilesize = 2MB

Normally, execution of configured tools will continue after a failure (indicated
by a non-zero exit status). It can also be configured to abort after the first
such failure, so that no files will be affected if any tool fails. This abort
will also cause :hg:`fix` to exit with a non-zero status::

  [fix]
  failure = abort

When multiple tools are configured to affect a file, they execute in an order
defined by the :priority suboption. The priority suboption has a default value
of zero for each tool. Tools are executed in order of descending priority. The
execution order of tools with equal priority is unspecified. For example, you
could use the 'sort' and 'head' utilities to keep only the 10 smallest numbers
in a text file by ensuring that 'sort' runs before 'head'::

  [fix]
  sort:command = sort -n
  head:command = head -n 10
  sort:pattern = numbers.txt
  head:pattern = numbers.txt
  sort:priority = 2
  head:priority = 1

To account for changes made by each tool, the line numbers used for incremental
formatting are recomputed before executing the next tool. So, each tool may see
different values for the arguments added by the :linerange suboption.

Each fixer tool is allowed to return some metadata in addition to the fixed file
content. The metadata must be placed before the file content on stdout,
separated from the file content by a zero byte. The metadata is parsed as a JSON
value (so, it should be UTF-8 encoded and contain no zero bytes). A fixer tool
is expected to produce this metadata encoding if and only if the :metadata
suboption is true::

  [fix]
  tool:command = tool --prepend-json-metadata
  tool:metadata = true

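A sketch of such a tool's stdout, with hypothetical metadata ("<NUL>" denotes
the zero byte)::

  {"fixed_hunks": 2}<NUL><fixed file content>
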
The metadata values are passed to hooks, which can be used to print summaries or
perform other post-fixing work. The supported hooks are::

  "postfixfile"
    Run once for each file in each revision where any fixer tools made changes
    to the file content. Provides "$HG_REV" and "$HG_PATH" to identify the file,
    and "$HG_METADATA" with a map of fixer names to metadata values from fixer
    tools that affected the file. Fixer tools that didn't affect the file have a
    value of None. Only fixer tools that executed are present in the metadata.

  "postfix"
    Run once after all files and revisions have been handled. Provides
    "$HG_REPLACEMENTS" with information about what revisions were created and
    made obsolete. Provides a boolean "$HG_WDIRWRITTEN" to indicate whether any
    files in the working copy were updated. Provides a list "$HG_METADATA"
    mapping fixer tool names to lists of metadata values returned from
    executions that modified a file. This aggregates the same metadata
    previously passed to the "postfixfile" hook.

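For example, a hook that processes these metadata values could be wired up as
follows; the module and function names are hypothetical::

  [hooks]
  postfixfile = python:myhooks.py:reportfixes
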
Fixer tools are run in the repository's root directory. This allows them to read
configuration files from the working copy, or even write to the working copy.
The working copy is not updated to match the revision being fixed. In fact,
several revisions may be fixed in parallel. Writes to the working copy are not
amended into the revision being fixed; fixer tools should always write fixed
file content back to stdout as documented above.
"""

from __future__ import absolute_import

import collections
import itertools
import os
import re
import subprocess

from mercurial.i18n import _
from mercurial.node import nullrev
from mercurial.node import wdirrev

from mercurial.utils import procutil

from mercurial import (
    cmdutil,
    context,
    copies,
    error,
    match as matchmod,
    mdiff,
    merge,
    obsolete,
    pycompat,
    registrar,
    scmutil,
    util,
    worker,
)

# Note for extension authors: ONLY specify testedwith = 'ships-with-hg-core' for
# extensions which SHIP WITH MERCURIAL. Non-mainline extensions should
# be specifying the version(s) of Mercurial they are tested with, or
# leave the attribute unspecified.
testedwith = b'ships-with-hg-core'

cmdtable = {}
command = registrar.command(cmdtable)

configtable = {}
configitem = registrar.configitem(configtable)

# Register the suboptions allowed for each configured fixer, and default values.
FIXER_ATTRS = {
    b'command': None,
    b'linerange': None,
    b'pattern': None,
    b'priority': 0,
    b'metadata': False,
    b'skipclean': True,
    b'enabled': True,
}

for key, default in FIXER_ATTRS.items():
    configitem(b'fix', b'.*:%s$' % key, default=default, generic=True)

# A good default size allows most source code files to be fixed, but avoids
# letting fixer tools choke on huge inputs, which could be surprising to the
# user.
configitem(b'fix', b'maxfilesize', default=b'2MB')

# Allow fix commands to exit non-zero if an executed fixer tool exits non-zero.
# This helps users write shell scripts that stop when a fixer tool signals a
# problem.
configitem(b'fix', b'failure', default=b'continue')


def checktoolfailureaction(ui, message, hint=None):
    """Abort with 'message' if fix.failure=abort"""
    action = ui.config(b'fix', b'failure')
    if action not in (b'continue', b'abort'):
        raise error.Abort(
            _(b'unknown fix.failure action: %s') % (action,),
            hint=_(b'use "continue" or "abort"'),
        )
    if action == b'abort':
        raise error.Abort(message, hint=hint)


allopt = (b'', b'all', False, _(b'fix all non-public non-obsolete revisions'))
baseopt = (
    b'',
    b'base',
    [],
    _(
        b'revisions to diff against (overrides automatic '
        b'selection, and applies to every revision being '
        b'fixed)'
    ),
    _(b'REV'),
)
revopt = (b'r', b'rev', [], _(b'revisions to fix'), _(b'REV'))
wdiropt = (b'w', b'working-dir', False, _(b'fix the working directory'))
wholeopt = (b'', b'whole', False, _(b'always fix every line of a file'))
usage = _(b'[OPTION]... [FILE]...')



@command(
    b'fix',
    [allopt, baseopt, revopt, wdiropt, wholeopt],
    usage,
    helpcategory=command.CATEGORY_FILE_CONTENTS,
)
def fix(ui, repo, *pats, **opts):
    """rewrite file content in changesets or working directory

    Runs any configured tools to fix the content of files. Only affects files
    with changes, unless file arguments are provided. Only affects changed lines
    of files, unless the --whole flag is used. Some tools may always affect the
    whole file regardless of --whole.

    If revisions are specified with --rev, those revisions will be checked, and
    they may be replaced with new revisions that have fixed file content. It is
    desirable to specify all descendants of each specified revision, so that the
    fixes propagate to the descendants. If all descendants are fixed at the same
    time, no merging, rebasing, or evolution will be required.

    If --working-dir is used, files with uncommitted changes in the working copy
    will be fixed. If the checked-out revision is also fixed, the working
    directory will update to the replacement revision.

    When determining what lines of each file to fix at each revision, the whole
    set of revisions being fixed is considered, so that fixes to earlier
    revisions are not forgotten in later ones. The --base flag can be used to
    override this default behavior, though it is not usually desirable to do so.
    """
    opts = pycompat.byteskwargs(opts)
    if opts[b'all']:
        if opts[b'rev']:
            raise error.Abort(_(b'cannot specify both "--rev" and "--all"'))
        opts[b'rev'] = [b'not public() and not obsolete()']
        opts[b'working_dir'] = True
    with repo.wlock(), repo.lock(), repo.transaction(b'fix'):
        revstofix = getrevstofix(ui, repo, opts)
        basectxs = getbasectxs(repo, opts, revstofix)
        workqueue, numitems = getworkqueue(
            ui, repo, pats, opts, revstofix, basectxs
        )
        fixers = getfixers(ui)

        # There are no data dependencies between the workers fixing each file
        # revision, so we can use all available parallelism.
        def getfixes(items):
            for rev, path in items:
                ctx = repo[rev]
                olddata = ctx[path].data()
                metadata, newdata = fixfile(
                    ui, repo, opts, fixers, ctx, path, basectxs[rev]
                )
                # Don't waste memory/time passing unchanged content back, but
                # produce one result per item either way.
                yield (
                    rev,
                    path,
                    metadata,
                    newdata if newdata != olddata else None,
                )

        results = worker.worker(
            ui, 1.0, getfixes, tuple(), workqueue, threadsafe=False
        )

        # We have to hold on to the data for each successor revision in memory
        # until all its parents are committed. We ensure this by committing and
        # freeing memory for the revisions in some topological order. This
        # leaves a little bit of memory efficiency on the table, but also makes
        # the tests deterministic. It might also be considered a feature since
        # it makes the results more easily reproducible.
        filedata = collections.defaultdict(dict)
        aggregatemetadata = collections.defaultdict(list)
        replacements = {}
        wdirwritten = False
        commitorder = sorted(revstofix, reverse=True)
        with ui.makeprogress(
            topic=_(b'fixing'), unit=_(b'files'), total=sum(numitems.values())
        ) as progress:
            for rev, path, filerevmetadata, newdata in results:
                progress.increment(item=path)
                for fixername, fixermetadata in filerevmetadata.items():
                    aggregatemetadata[fixername].append(fixermetadata)
                if newdata is not None:
                    filedata[rev][path] = newdata
                    hookargs = {
                        b'rev': rev,
                        b'path': path,
                        b'metadata': filerevmetadata,
                    }
                    repo.hook(
                        b'postfixfile',
                        throw=False,
                        **pycompat.strkwargs(hookargs)
                    )
                numitems[rev] -= 1
                # Apply the fixes for this and any other revisions that are
                # ready and sitting at the front of the queue. Using a loop here
                # prevents the queue from being blocked by the first revision to
                # be ready out of order.
                while commitorder and not numitems[commitorder[-1]]:
                    rev = commitorder.pop()
                    ctx = repo[rev]
                    if rev == wdirrev:
                        writeworkingdir(repo, ctx, filedata[rev], replacements)
                        wdirwritten = bool(filedata[rev])
                    else:
                        replacerev(ui, repo, ctx, filedata[rev], replacements)
                    del filedata[rev]

        cleanup(repo, replacements, wdirwritten)
        hookargs = {
            b'replacements': replacements,
            b'wdirwritten': wdirwritten,
            b'metadata': aggregatemetadata,
        }
        repo.hook(b'postfix', throw=True, **pycompat.strkwargs(hookargs))
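
# Illustrative invocations of the command defined above, assuming at least one
# fixer tool is configured:
#
#   $ hg fix --working-dir       # fix only uncommitted changes
#   $ hg fix -r . --working-dir  # fix the checked-out revision and the
#                                # working copy together
#   $ hg fix --all               # fix every non-public, non-obsolete revision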


def cleanup(repo, replacements, wdirwritten):
    """Calls scmutil.cleanupnodes() with the given replacements.

    "replacements" is a dict from nodeid to nodeid, with one key and one value
    for every revision that was affected by fixing. This is slightly different
    from cleanupnodes().

    "wdirwritten" is a bool which tells whether the working copy was affected by
    fixing, since it has no entry in "replacements".

    Useful as a hook point for extending "hg fix" with output summarizing the
    effects of the command, though we choose not to output anything here.
    """
    replacements = {
        prec: [succ] for prec, succ in pycompat.iteritems(replacements)
    }
    scmutil.cleanupnodes(repo, replacements, b'fix', fixphase=True)


def getworkqueue(ui, repo, pats, opts, revstofix, basectxs):
    """Constructs the list of files to be fixed at specific revisions

    It is up to the caller how to consume the work items, and the only
    dependence between them is that replacement revisions must be committed in
    topological order. Each work item represents a file in the working copy or
    in some revision that should be fixed and written back to the working copy
    or into a replacement revision.

    Work items for the same revision are grouped together, so that a worker
    pool starting with the first N items in parallel is likely to finish the
    first revision's work before other revisions. This can allow us to write
    the result to disk and reduce memory footprint. At time of writing, the
    partition strategy in worker.py seems favorable to this. We also sort the
    items by ascending revision number to match the order in which we commit
    the fixes later.
    """
    workqueue = []
    numitems = collections.defaultdict(int)
    maxfilesize = ui.configbytes(b'fix', b'maxfilesize')
    for rev in sorted(revstofix):
        fixctx = repo[rev]
        match = scmutil.match(fixctx, pats, opts)
        for path in sorted(
            pathstofix(ui, repo, pats, opts, match, basectxs[rev], fixctx)
        ):
            fctx = fixctx[path]
            if fctx.islink():
                continue
            if fctx.size() > maxfilesize:
                ui.warn(
                    _(b'ignoring file larger than %s: %s\n')
                    % (util.bytecount(maxfilesize), path)
                )
                continue
            workqueue.append((rev, path))
            numitems[rev] += 1
    return workqueue, numitems
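
# Shape of the return values, using hypothetical revision numbers and paths:
#
#   workqueue == [(1, b'foo/bar.py'), (1, b'baz.py'), (2, b'foo/bar.py')]
#   numitems  == {1: 2, 2: 1}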


def getrevstofix(ui, repo, opts):
    """Returns the set of revision numbers that should be fixed"""
    revs = set(scmutil.revrange(repo, opts[b'rev']))
    for rev in revs:
        checkfixablectx(ui, repo, repo[rev])
    if revs:
        cmdutil.checkunfinished(repo)
        checknodescendants(repo, revs)
    if opts.get(b'working_dir'):
        revs.add(wdirrev)
        if list(merge.mergestate.read(repo).unresolved()):
            raise error.Abort(b'unresolved conflicts', hint=b"use 'hg resolve'")
    if not revs:
        raise error.Abort(
            b'no changesets specified', hint=b'use --rev or --working-dir'
        )
    return revs


def checknodescendants(repo, revs):
    if not obsolete.isenabled(repo, obsolete.allowunstableopt) and repo.revs(
        b'(%ld::) - (%ld)', revs, revs
    ):
        raise error.Abort(
            _(b'can only fix a changeset together with all its descendants')
        )


def checkfixablectx(ui, repo, ctx):
    """Aborts if the revision shouldn't be replaced with a fixed one."""
    if not ctx.mutable():
        raise error.Abort(
            b'can\'t fix immutable changeset %s'
            % (scmutil.formatchangeid(ctx),)
        )
    if ctx.obsolete():
        # It would be better to actually check if the revision has a successor.
        allowdivergence = ui.configbool(
            b'experimental', b'evolution.allowdivergence'
        )
        if not allowdivergence:
            raise error.Abort(
                b'fixing obsolete revision could cause divergence'
            )


def pathstofix(ui, repo, pats, opts, match, basectxs, fixctx):
    """Returns the set of files that should be fixed in a context

    The result depends on the base contexts; we include any file that has
    changed relative to any of the base contexts. Base contexts should be
    ancestors of the context being fixed.
    """
    files = set()
    for basectx in basectxs:
        stat = basectx.status(
            fixctx, match=match, listclean=bool(pats), listunknown=bool(pats)
        )
        files.update(
            set(
                itertools.chain(
                    stat.added, stat.modified, stat.clean, stat.unknown
                )
            )
        )
    return files


def lineranges(opts, path, basectxs, fixctx, content2):
    """Returns the set of line ranges that should be fixed in a file

    Of the form [(10, 20), (30, 40)].

    This depends on the given base contexts; we must consider lines that have
    changed versus any of the base contexts, and whether the file has been
    renamed versus any of them.

    Another way to understand this is that we exclude line ranges that are
    common to the file in all base contexts.
    """
    if opts.get(b'whole'):
        # Return a range containing all lines. Rely on the diff implementation's
        # idea of how many lines are in the file, instead of reimplementing it.
        return difflineranges(b'', content2)

    rangeslist = []
    for basectx in basectxs:
        basepath = copies.pathcopies(basectx, fixctx).get(path, path)
        if basepath in basectx:
            content1 = basectx[basepath].data()
        else:
            content1 = b''
        rangeslist.extend(difflineranges(content1, content2))
    return unionranges(rangeslist)
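
# For example (hypothetical values): if the file differs from one base context
# on lines 3-5 and from another base context on lines 10-12, the result is the
# union of the two diffs:
#
#   lineranges(opts, path, basectxs, fixctx, content2) == [(3, 5), (10, 12)]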


def unionranges(rangeslist):
    """Return the union of some closed intervals

    >>> unionranges([])
    []
    >>> unionranges([(1, 100)])
    [(1, 100)]
    >>> unionranges([(1, 100), (1, 100)])
    [(1, 100)]
    >>> unionranges([(1, 100), (2, 100)])
    [(1, 100)]
    >>> unionranges([(1, 99), (1, 100)])
    [(1, 100)]
    >>> unionranges([(1, 100), (40, 60)])
    [(1, 100)]
    >>> unionranges([(1, 49), (50, 100)])
    [(1, 100)]
    >>> unionranges([(1, 48), (50, 100)])
    [(1, 48), (50, 100)]
    >>> unionranges([(1, 2), (3, 4), (5, 6)])
    [(1, 6)]
    """
    rangeslist = sorted(set(rangeslist))
    unioned = []
    if rangeslist:
        unioned, rangeslist = [rangeslist[0]], rangeslist[1:]
    for a, b in rangeslist:
        c, d = unioned[-1]
        if a > d + 1:
            unioned.append((a, b))
        else:
            unioned[-1] = (c, max(b, d))
    return unioned


def difflineranges(content1, content2):
    """Return list of line number ranges in content2 that differ from content1.

    Line numbers are 1-based. The numbers are the first and last line contained
    in the range. Single-line ranges have the same line number for the first and
    last line. Excludes any empty ranges that result from lines that are only
    present in content1. Relies on mdiff's idea of where the line endings are in
    the string.

    >>> from mercurial import pycompat
    >>> lines = lambda s: b'\\n'.join([c for c in pycompat.iterbytestr(s)])
    >>> difflineranges2 = lambda a, b: difflineranges(lines(a), lines(b))
    >>> difflineranges2(b'', b'')
    []
    >>> difflineranges2(b'a', b'')
    []
    >>> difflineranges2(b'', b'A')
    [(1, 1)]
    >>> difflineranges2(b'a', b'a')
    []
    >>> difflineranges2(b'a', b'A')
    [(1, 1)]
    >>> difflineranges2(b'ab', b'')
    []
    >>> difflineranges2(b'', b'AB')
    [(1, 2)]
    >>> difflineranges2(b'abc', b'ac')
    []
    >>> difflineranges2(b'ab', b'aCb')
    [(2, 2)]
    >>> difflineranges2(b'abc', b'aBc')
    [(2, 2)]
    >>> difflineranges2(b'ab', b'AB')
    [(1, 2)]
    >>> difflineranges2(b'abcde', b'aBcDe')
    [(2, 2), (4, 4)]
    >>> difflineranges2(b'abcde', b'aBCDe')
    [(2, 4)]
    """
    ranges = []
    for lines, kind in mdiff.allblocks(content1, content2):
        firstline, lastline = lines[2:4]
        if kind == b'!' and firstline != lastline:
            ranges.append((firstline + 1, lastline))
    return ranges


def getbasectxs(repo, opts, revstofix):
    """Returns a map of the base contexts for each revision

    The base contexts determine which lines are considered modified when we
    attempt to fix just the modified lines in a file. It also determines which
    files we attempt to fix, so it is important to compute this even when
    --whole is used.
    """
    # The --base flag overrides the usual logic, and we give every revision
    # exactly the set of baserevs that the user specified.
    if opts.get(b'base'):
        baserevs = set(scmutil.revrange(repo, opts.get(b'base')))
        if not baserevs:
            baserevs = {nullrev}
        basectxs = {repo[rev] for rev in baserevs}
        return {rev: basectxs for rev in revstofix}

    # Proceed in topological order so that we can easily determine each
    # revision's baserevs by looking at its parents and their baserevs.
    basectxs = collections.defaultdict(set)
    for rev in sorted(revstofix):
        ctx = repo[rev]
        for pctx in ctx.parents():
            if pctx.rev() in basectxs:
                basectxs[rev].update(basectxs[pctx.rev()])
            else:
                basectxs[rev].add(pctx)
    return basectxs
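
# A sketch of the propagation above, with hypothetical revision numbers: when
# fixing revisions 2 and 3, where 2's parent is the unfixed revision 1 and 3's
# parent is 2, the result is
#
#   basectxs == {2: {repo[1]}, 3: {repo[1]}}
#
# so revision 3 is diffed against revision 1 rather than against the unfixed
# version of 2, and fixes applied in 2 are not forgotten in 3.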


def fixfile(ui, repo, opts, fixers, fixctx, path, basectxs):
    """Run any configured fixers that should affect the file in this context

    Returns the file content that results from applying the fixers in some order
    starting with the file's content in the fixctx. Fixers that support line
    ranges will affect lines that have changed relative to any of the basectxs
    (i.e. they will only avoid lines that are common to all basectxs).

    A fixer tool's stdout will become the file's new content if and only if it
    exits with code zero. The fixer tool's working directory is the repository's
    root.
    """
    metadata = {}
    newdata = fixctx[path].data()
    for fixername, fixer in pycompat.iteritems(fixers):
        if fixer.affects(opts, fixctx, path):
            ranges = lineranges(opts, path, basectxs, fixctx, newdata)
            command = fixer.command(ui, path, ranges)
            if command is None:
                continue
            ui.debug(b'subprocess: %s\n' % (command,))
            proc = subprocess.Popen(
                procutil.tonativestr(command),
                shell=True,
                cwd=procutil.tonativestr(repo.root),
                stdin=subprocess.PIPE,
                stdout=subprocess.PIPE,
                stderr=subprocess.PIPE,
            )
            stdout, stderr = proc.communicate(newdata)
            if stderr:
                showstderr(ui, fixctx.rev(), fixername, stderr)
            newerdata = stdout
            if fixer.shouldoutputmetadata():
                try:
                    metadatajson, newerdata = stdout.split(b'\0', 1)
                    metadata[fixername] = pycompat.json_loads(metadatajson)
                except ValueError:
                    ui.warn(
                        _(b'ignored invalid output from fixer tool: %s\n')
                        % (fixername,)
                    )
                    continue
            else:
                metadata[fixername] = None
            if proc.returncode == 0:
                newdata = newerdata
            else:
                if not stderr:
                    message = _(b'exited with status %d\n') % (proc.returncode,)
                    showstderr(ui, fixctx.rev(), fixername, message)
                checktoolfailureaction(
                    ui,
                    _(b'no fixes will be applied'),
                    hint=_(
                        b'use --config fix.failure=continue to apply any '
                        b'successful fixes anyway'
                    ),
                )
    return metadata, newdata
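
# When a fixer sets :metadata=true, its stdout must be a JSON object, a NUL
# byte, then the fixed file content. An illustrative (made-up) tool output
# that the parsing above accepts:
#
#   b'{"changedlines": 3}\0<fixed file content>'
#
# which stores {'changedlines': 3} in metadata[fixername] and treats the
# remainder as the candidate new file content.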


def showstderr(ui, rev, fixername, stderr):
    """Writes the lines of the stderr string as warnings on the ui

    Uses the revision number and fixername to give more context to each line of
    the error message. Doesn't include file names, since those take up a lot of
    space and would tend to be included in the error message if they were
    relevant.
    """
    for line in re.split(b'[\r\n]+', stderr):
        if line:
            ui.warn(b'[')
            if rev is None:
                ui.warn(_(b'wdir'), label=b'evolve.rev')
            else:
                ui.warn((b'%d' % rev), label=b'evolve.rev')
            ui.warn(b'] %s: %s\n' % (fixername, line))
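
# Example of the resulting warning format (fixer name and message are
# hypothetical):
#
#   [wdir] sort-imports: import block was reordered
#   [10] sort-imports: import block was reordered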


def writeworkingdir(repo, ctx, filedata, replacements):
    """Write new content to the working copy and check out the new p1 if any

    We check out a new revision if and only if we fixed something in both the
    working directory and its parent revision. This avoids the need for a full
    update/merge, and means that the working directory simply isn't affected
    unless the --working-dir flag is given.

    Directly updates the dirstate for the affected files.
    """
    for path, data in pycompat.iteritems(filedata):
        fctx = ctx[path]
        fctx.write(data, fctx.flags())
        if repo.dirstate[path] == b'n':
            repo.dirstate.normallookup(path)

    oldparentnodes = repo.dirstate.parents()
    newparentnodes = [replacements.get(n, n) for n in oldparentnodes]
    if newparentnodes != oldparentnodes:
        repo.setparents(*newparentnodes)


def replacerev(ui, repo, ctx, filedata, replacements):
    """Commit a new revision like the given one, but with file content changes

    "ctx" is the original revision to be replaced by a modified one.

    "filedata" is a dict that maps paths to their new file content. All other
    paths will be recreated from the original revision without changes.
    "filedata" may contain paths that didn't exist in the original revision;
    they will be added.

    "replacements" is a dict that maps a single node to a single node, and it is
    updated to indicate the original revision is replaced by the newly created
    one. No entry is added if the replacement's node already exists.

    The new revision has the same parents as the old one, unless those parents
    have already been replaced, in which case those replacements are the parents
    of this new revision. Thus, if revisions are replaced in topological order,
    there is no need to rebase them into the original topology later.
    """

    p1rev, p2rev = repo.changelog.parentrevs(ctx.rev())
    p1ctx, p2ctx = repo[p1rev], repo[p2rev]
    newp1node = replacements.get(p1ctx.node(), p1ctx.node())
    newp2node = replacements.get(p2ctx.node(), p2ctx.node())

    # We don't want to create a revision that has no changes from the original,
    # but we should if the original revision's parent has been replaced.
    # Otherwise, we would produce an orphan that needs no actual human
    # intervention to evolve. We can't rely on commit() to avoid creating the
    # un-needed revision because the extra field added below produces a new hash
    # regardless of file content changes.
    if (
        not filedata
        and p1ctx.node() not in replacements
        and p2ctx.node() not in replacements
    ):
        return

    def filectxfn(repo, memctx, path):
        if path not in ctx:
            return None
        fctx = ctx[path]
        copysource = fctx.copysource()
        return context.memfilectx(
            repo,
            memctx,
            path=fctx.path(),
            data=filedata.get(path, fctx.data()),
            islink=fctx.islink(),
            isexec=fctx.isexec(),
            copysource=copysource,
        )

    extra = ctx.extra().copy()
    extra[b'fix_source'] = ctx.hex()

    memctx = context.memctx(
        repo,
        parents=(newp1node, newp2node),
        text=ctx.description(),
        files=set(ctx.files()) | set(filedata.keys()),
        filectxfn=filectxfn,
        user=ctx.user(),
        date=ctx.date(),
        extra=extra,
        branch=ctx.branch(),
        editor=None,
    )
    sucnode = memctx.commit()
    prenode = ctx.node()
    if prenode == sucnode:
        ui.debug(b'node %s already existed\n' % (ctx.hex()))
    else:
        replacements[ctx.node()] = sucnode
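
# A sketch of how replacements chain across a stack: if revision A and its
# child B are both fixed, A is committed first and records
# replacements[A] == A', so when B is replaced its parent is looked up as
# replacements.get(A, A) == A' and B' is created directly on top of A',
# leaving nothing to rebase afterwards.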


def getfixers(ui):
    """Returns a map of configured fixer tools indexed by their names

    Each value is a Fixer object with methods that implement the behavior of the
    fixer's config suboptions. Does not validate the config values.
    """
    fixers = {}
    for name in fixernames(ui):
        enabled = ui.configbool(b'fix', name + b':enabled')
        command = ui.config(b'fix', name + b':command')
        pattern = ui.config(b'fix', name + b':pattern')
        linerange = ui.config(b'fix', name + b':linerange')
        priority = ui.configint(b'fix', name + b':priority')
        metadata = ui.configbool(b'fix', name + b':metadata')
        skipclean = ui.configbool(b'fix', name + b':skipclean')
        # Don't use a fixer if it has no pattern configured. It would be
        # dangerous to let it affect all files. It would be pointless to let it
        # affect no files. There is no reasonable subset of files to use as the
        # default.
        if command is None:
            ui.warn(
                _(b'fixer tool has no command configuration: %s\n') % (name,)
            )
        elif pattern is None:
            ui.warn(
                _(b'fixer tool has no pattern configuration: %s\n') % (name,)
            )
        elif not enabled:
            ui.debug(b'ignoring disabled fixer tool: %s\n' % (name,))
        else:
            fixers[name] = Fixer(
                command, pattern, linerange, priority, metadata, skipclean
            )
    return collections.OrderedDict(
        sorted(fixers.items(), key=lambda item: item[1]._priority, reverse=True)
    )
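
# Ordering example with hypothetical tools: given fix.black:priority=10 and
# fix.isort:priority=5, getfixers() yields black before isort, so a
# lower-priority tool sees the output of higher-priority ones when both match
# the same file.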


def fixernames(ui):
    """Returns the names of [fix] config options that have suboptions"""
    names = set()
    for k, v in ui.configitems(b'fix'):
        if b':' in k:
            names.add(k.split(b':', 1)[0])
    return names


class Fixer(object):
    """Wraps the raw config values for a fixer with methods"""

    def __init__(
        self, command, pattern, linerange, priority, metadata, skipclean
    ):
        self._command = command
        self._pattern = pattern
        self._linerange = linerange
        self._priority = priority
        self._metadata = metadata
        self._skipclean = skipclean

    def affects(self, opts, fixctx, path):
        """Should this fixer run on the file at the given path and context?"""
        repo = fixctx.repo()
        matcher = matchmod.match(
            repo.root, repo.root, [self._pattern], ctx=fixctx
        )
        return matcher(path)

    def shouldoutputmetadata(self):
        """Should the stdout of this fixer start with JSON and a null byte?"""
        return self._metadata

    def command(self, ui, path, ranges):
        """A shell command to use to invoke this fixer on the given file/lines

        May return None if there is no appropriate command to run for the given
        parameters.
        """
        expand = cmdutil.rendercommandtemplate
        parts = [
            expand(
                ui,
                self._command,
                {b'rootpath': path, b'basename': os.path.basename(path)},
            )
        ]
        if self._linerange:
            if self._skipclean and not ranges:
                # No line ranges to fix, so don't run the fixer.
                return None
            for first, last in ranges:
                parts.append(
                    expand(
                        ui, self._linerange, {b'first': first, b'last': last}
                    )
                )
        return b' '.join(parts)
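
# Rendering example for command() above, assuming the hypothetical
# configuration command = 'myfixer {rootpath}' and
# linerange = '--lines={first}:{last}', with ranges == [(1, 10), (30, 40)]:
#
#   myfixer some/path.py --lines=1:10 --lines=30:40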


# blobstore.py - local and remote (speaking Git-LFS protocol) blob storages
#
# Copyright 2017 Facebook, Inc.
#
# This software may be used and distributed according to the terms of the
# GNU General Public License version 2 or any later version.

from __future__ import absolute_import

import contextlib
import errno
import hashlib
import json
import os
import re
import socket

from mercurial.i18n import _
from mercurial.pycompat import getattr

from mercurial import (
    encoding,
    error,
    node,
    pathutil,
    pycompat,
    url as urlmod,
    util,
    vfs as vfsmod,
    worker,
)

from mercurial.utils import stringutil

from ..largefiles import lfutil

# 64 bytes for SHA256
_lfsre = re.compile(br'\A[a-f0-9]{64}\Z')
class lfsvfs(vfsmod.vfs):
    def join(self, path):
        """split the path at first two characters, like: XX/XXXXX..."""
        if not _lfsre.match(path):
            raise error.ProgrammingError(b'unexpected lfs path: %s' % path)
        return super(lfsvfs, self).join(path[0:2], path[2:])

    def walk(self, path=None, onerror=None):
        """Yield (dirpath, [], oids) tuple for blobs under path

        Oids only exist in the root of this vfs, so dirpath is always ''.
        """
        root = os.path.normpath(self.base)
        # when dirpath == root, dirpath[prefixlen:] becomes empty
        # because len(dirpath) < prefixlen.
        prefixlen = len(pathutil.normasprefix(root))
        oids = []

        for dirpath, dirs, files in os.walk(
            self.reljoin(self.base, path or b''), onerror=onerror
        ):
            dirpath = dirpath[prefixlen:]

            # Silently skip unexpected files and directories
            if len(dirpath) == 2:
                oids.extend(
                    [dirpath + f for f in files if _lfsre.match(dirpath + f)]
                )

        yield (b'', [], oids)

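# How the two methods above fit together (path and oid invented): join()
# shards a 64-hex oid into a two-level XX/XXXX... layout, and walk() flattens
# that layout back into bare oids, which is why it always yields dirpath b''.
#
#   vfs = lfsvfs(b'/repo/.hg/store/lfs/objects')
#   vfs.join(b'd2a8' + b'0' * 60)
#   # -> b'/repo/.hg/store/lfs/objects/d2/a8' + b'0' * 60
#   next(vfs.walk())
#   # -> (b'', [], [b'd2a8' + b'0' * 60, ...])
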
class nullvfs(lfsvfs):
    def __init__(self):
        pass

    def exists(self, oid):
        return False

    def read(self, oid):
        # store.read() calls into here if the blob doesn't exist in its
        # self.vfs. Raise the same error as a normal vfs when asked to read a
        # file that doesn't exist. The only difference is the full file path
        # isn't available in the error.
        raise IOError(
            errno.ENOENT,
            pycompat.sysstr(b'%s: No such file or directory' % oid),
        )

    def walk(self, path=None, onerror=None):
        return (b'', [], [])

    def write(self, oid, data):
        pass

class filewithprogress(object):
    """a file-like object that supports __len__ and read.

    Useful to provide progress information for how many bytes are read.
    """

    def __init__(self, fp, callback):
        self._fp = fp
        self._callback = callback  # func(readsize)
        fp.seek(0, os.SEEK_END)
        self._len = fp.tell()
        fp.seek(0)

    def __len__(self):
        return self._len

    def read(self, size):
        if self._fp is None:
            return b''
        data = self._fp.read(size)
        if data:
            if self._callback:
                self._callback(len(data))
        else:
            self._fp.close()
            self._fp = None
        return data

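# Sketch of the intended use (the callback body is invented): wrap a blob
# file so a transfer loop can report bytes as they are consumed.
#
#   def onread(n):
#       ui.debug(b'read %d bytes\n' % n)
#
#   fp = filewithprogress(localstore.open(oid), onread)
#   total = len(fp)          # blob size, via the seek in __init__
#   while fp.read(1048576):  # onread fires once per non-empty chunk
#       pass                 # EOF closes the underlying file
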
class local(object):
    """Local blobstore for large file contents.

    This blobstore is used both as a cache and as a staging area for large blobs
    to be uploaded to the remote blobstore.
    """

    def __init__(self, repo):
        fullpath = repo.svfs.join(b'lfs/objects')
        self.vfs = lfsvfs(fullpath)

        if repo.ui.configbool(b'experimental', b'lfs.disableusercache'):
            self.cachevfs = nullvfs()
        else:
            usercache = lfutil._usercachedir(repo.ui, b'lfs')
            self.cachevfs = lfsvfs(usercache)
        self.ui = repo.ui

    def open(self, oid):
        """Open a read-only file descriptor to the named blob, in either the
        usercache or the local store."""
        # The usercache is the most likely place to hold the file. Commit will
        # write to both it and the local store, as will anything that downloads
        # the blobs. However, things like clone without an update won't
        # populate the local store. For an init + push of a local clone,
        # the usercache is the only place it _could_ be. If not present, the
        # missing file msg here will indicate the local repo, not the usercache.
        if self.cachevfs.exists(oid):
            return self.cachevfs(oid, b'rb')

        return self.vfs(oid, b'rb')

    def download(self, oid, src):
        """Read the blob from the remote source in chunks, verify the content,
        and write to this local blobstore."""
        sha256 = hashlib.sha256()

        with self.vfs(oid, b'wb', atomictemp=True) as fp:
            for chunk in util.filechunkiter(src, size=1048576):
                fp.write(chunk)
                sha256.update(chunk)

            realoid = node.hex(sha256.digest())
            if realoid != oid:
                raise LfsCorruptionError(
                    _(b'corrupt remote lfs object: %s') % oid
                )

        self._linktousercache(oid)

    def write(self, oid, data):
        """Write blob to local blobstore.

        This should only be called from the filelog during a commit or similar.
        As such, there is no need to verify the data. Imports from a remote
        store must use ``download()`` instead."""
        with self.vfs(oid, b'wb', atomictemp=True) as fp:
            fp.write(data)

        self._linktousercache(oid)

    def linkfromusercache(self, oid):
        """Link blobs found in the user cache into this store.

        The server module needs to do this when it lets the client know not to
        upload the blob, to ensure it is always available in this store.
        Normally this is done implicitly when the client reads or writes the
        blob, but that doesn't happen when the server tells the client that it
        already has the blob.
        """
        if not isinstance(self.cachevfs, nullvfs) and not self.vfs.exists(oid):
            self.ui.note(_(b'lfs: found %s in the usercache\n') % oid)
            lfutil.link(self.cachevfs.join(oid), self.vfs.join(oid))

    def _linktousercache(self, oid):
        # XXX: should we verify the content of the cache, and hardlink back to
        # the local store on success, but truncate, write and link on failure?
        if not self.cachevfs.exists(oid) and not isinstance(
            self.cachevfs, nullvfs
        ):
            self.ui.note(_(b'lfs: adding %s to the usercache\n') % oid)
            lfutil.link(self.vfs.join(oid), self.cachevfs.join(oid))

    def read(self, oid, verify=True):
        """Read blob from local blobstore."""
        if not self.vfs.exists(oid):
            blob = self._read(self.cachevfs, oid, verify)

            # Even if revlog will verify the content, it needs to be verified
            # now before making the hardlink to avoid propagating corrupt blobs.
            # Don't abort if corruption is detected, because `hg verify` will
            # give more useful info about the corruption- simply don't add the
            # hardlink.
            if verify or node.hex(hashlib.sha256(blob).digest()) == oid:
                self.ui.note(_(b'lfs: found %s in the usercache\n') % oid)
                lfutil.link(self.cachevfs.join(oid), self.vfs.join(oid))
        else:
            self.ui.note(_(b'lfs: found %s in the local lfs store\n') % oid)
            blob = self._read(self.vfs, oid, verify)
        return blob

    def _read(self, vfs, oid, verify):
        """Read blob (after verifying) from the given store"""
        blob = vfs.read(oid)
        if verify:
            _verify(oid, blob)
        return blob

    def verify(self, oid):
        """Indicate whether or not the hash of the underlying file matches its
        name."""
        sha256 = hashlib.sha256()

        with self.open(oid) as fp:
            for chunk in util.filechunkiter(fp, size=1048576):
                sha256.update(chunk)

        return oid == node.hex(sha256.digest())

    def has(self, oid):
        """Returns True if the local blobstore contains the requested blob,
        False otherwise."""
        return self.cachevfs.exists(oid) or self.vfs.exists(oid)

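# Sketch of the two-store flow implemented above (repo and oid are
# stand-ins): a read falls back from the repo-local store to the shared
# usercache, verifies the blob, then hardlinks it back so later reads stay
# local.
#
#   store = local(repo)
#   blob = store.read(oid)   # cache hit -> verify -> hardlink into .hg/store
#   store.has(oid)           # True if either store holds the blob
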
def _urlerrorreason(urlerror):
    '''Create a friendly message for the given URLError to be used in an
    LfsRemoteError message.
    '''
    inst = urlerror

    if isinstance(urlerror.reason, Exception):
        inst = urlerror.reason

    if util.safehasattr(inst, b'reason'):
        try:  # usually it is in the form (errno, strerror)
            reason = inst.reason.args[1]
        except (AttributeError, IndexError):
            # it might be anything, for example a string
            reason = inst.reason
        if isinstance(reason, pycompat.unicode):
            # SSLError of Python 2.7.9 contains a unicode
            reason = encoding.unitolocal(reason)
        return reason
    elif getattr(inst, "strerror", None):
        return encoding.strtolocal(inst.strerror)
    else:
        return stringutil.forcebytestr(urlerror)

class lfsauthhandler(util.urlreq.basehandler):
    handler_order = 480  # Before HTTPDigestAuthHandler (== 490)

    def http_error_401(self, req, fp, code, msg, headers):
        """Enforces that any authentication performed is HTTP Basic
        Authentication. No authentication is also acceptable.
        """
        authreq = headers.get(r'www-authenticate', None)
        if authreq:
            scheme = authreq.split()[0]

            if scheme.lower() != r'basic':
                msg = _(b'the server must support Basic Authentication')
                raise util.urlerr.httperror(
                    req.get_full_url(),
                    code,
                    encoding.strfromlocal(msg),
                    headers,
                    fp,
                )
        return None

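# Behaviour sketch for the handler above (header values invented): a 401
# response carrying 'WWW-Authenticate: Negotiate ...' is rejected with an
# httperror, while 'WWW-Authenticate: Basic realm="lfs"' falls through
# (return None) so the standard auth handlers ordered after 480 can retry
# the request.
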
class _gitlfsremote(object):
    def __init__(self, repo, url):
        ui = repo.ui
        self.ui = ui
        baseurl, authinfo = url.authinfo()
        self.baseurl = baseurl.rstrip(b'/')
        useragent = repo.ui.config(b'experimental', b'lfs.user-agent')
        if not useragent:
            useragent = b'git-lfs/2.3.4 (Mercurial %s)' % util.version()
        self.urlopener = urlmod.opener(ui, authinfo, useragent)
        self.urlopener.add_handler(lfsauthhandler())
        self.retry = ui.configint(b'lfs', b'retry')

    def writebatch(self, pointers, fromstore):
        """Batch upload from local to remote blobstore."""
        self._batch(_deduplicate(pointers), fromstore, b'upload')

    def readbatch(self, pointers, tostore):
        """Batch download from remote to local blobstore."""
        self._batch(_deduplicate(pointers), tostore, b'download')

    def _batchrequest(self, pointers, action):
        """Get metadata about objects pointed by pointers for given action

        Return decoded JSON object like {'objects': [{'oid': '', 'size': 1}]}
        See https://github.com/git-lfs/git-lfs/blob/master/docs/api/batch.md
        """
        objects = [
            {r'oid': pycompat.strurl(p.oid()), r'size': p.size()}
            for p in pointers
        ]
        requestdata = pycompat.bytesurl(
            json.dumps(
                {r'objects': objects, r'operation': pycompat.strurl(action),}
            )
        )
        url = b'%s/objects/batch' % self.baseurl
        batchreq = util.urlreq.request(pycompat.strurl(url), data=requestdata)
        batchreq.add_header(r'Accept', r'application/vnd.git-lfs+json')
        batchreq.add_header(r'Content-Type', r'application/vnd.git-lfs+json')
        try:
            with contextlib.closing(self.urlopener.open(batchreq)) as rsp:
                rawjson = rsp.read()
        except util.urlerr.httperror as ex:
            hints = {
                400: _(
                    b'check that lfs serving is enabled on %s and "%s" is '
                    b'supported'
                )
                % (self.baseurl, action),
                404: _(b'the "lfs.url" config may be used to override %s')
                % self.baseurl,
            }
            hint = hints.get(ex.code, _(b'api=%s, action=%s') % (url, action))
            raise LfsRemoteError(
                _(b'LFS HTTP error: %s') % stringutil.forcebytestr(ex),
                hint=hint,
            )
        except util.urlerr.urlerror as ex:
            hint = (
                _(b'the "lfs.url" config may be used to override %s')
                % self.baseurl
            )
            raise LfsRemoteError(
                _(b'LFS error: %s') % _urlerrorreason(ex), hint=hint
            )
        try:
            response = pycompat.json_loads(rawjson)
        except ValueError:
            raise LfsRemoteError(
                _(b'LFS server returns invalid JSON: %s')
                % rawjson.encode("utf-8")
            )

        if self.ui.debugflag:
            self.ui.debug(b'Status: %d\n' % rsp.status)
            # lfs-test-server and hg serve return headers in different order
            headers = pycompat.bytestr(rsp.info()).strip()
            self.ui.debug(b'%s\n' % b'\n'.join(sorted(headers.splitlines())))

            if r'objects' in response:
                response[r'objects'] = sorted(
                    response[r'objects'], key=lambda p: p[r'oid']
                )
            self.ui.debug(
                b'%s\n'
                % pycompat.bytesurl(
                    json.dumps(
                        response,
                        indent=2,
                        separators=(r'', r': '),
                        sort_keys=True,
                    )
                )
            )

        def encodestr(x):
            if isinstance(x, pycompat.unicode):
                return x.encode('utf-8')
            return x

        return pycompat.rapply(encodestr, response)

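    # Shape of a successful batch response decoded above, per the git-lfs
    # batch API document linked in the docstring (oid, href and header values
    # are invented):
    #
    #   {
    #     "transfer": "basic",
    #     "objects": [
    #       {
    #         "oid": "31cf...8e5b",
    #         "size": 12,
    #         "actions": {
    #           "download": {
    #             "href": "https://example.com/objects/31cf...8e5b",
    #             "header": {"Authorization": "Basic ..."}
    #           }
    #         }
    #       }
    #     ]
    #   }
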
    def _checkforservererror(self, pointers, responses, action):
        """Scans errors from objects

        Raises LfsRemoteError if any objects have an error"""
        for response in responses:
            # The server should return 404 when objects cannot be found. Some
            # server implementation (ex. lfs-test-server) does not set "error"
            # but just removes "download" from "actions". Treat that case
            # as the same as 404 error.
            if b'error' not in response:
                if action == b'download' and action not in response.get(
                    b'actions', []
                ):
                    code = 404
                else:
                    continue
            else:
                # An error dict without a code doesn't make much sense, so
                # treat as a server error.
                code = response.get(b'error').get(b'code', 500)

            ptrmap = {p.oid(): p for p in pointers}
            p = ptrmap.get(response[b'oid'], None)
            if p:
                filename = getattr(p, 'filename', b'unknown')
                errors = {
                    404: b'The object does not exist',
                    410: b'The object was removed by the owner',
                    422: b'Validation error',
                    500: b'Internal server error',
                }
                msg = errors.get(code, b'status code %d' % code)
                raise LfsRemoteError(
                    _(b'LFS server error for "%s": %s') % (filename, msg)
                )
            else:
                raise LfsRemoteError(
                    _(b'LFS server error. Unsolicited response for oid %s')
                    % response[b'oid']
                )

    def _extractobjects(self, response, pointers, action):
        """extract objects from response of the batch API

        response: parsed JSON object returned by batch API
        return response['objects'] filtered by action
        raise if any object has an error
        """
        # Scan errors from objects - fail early
        objects = response.get(b'objects', [])
        self._checkforservererror(pointers, objects, action)

        # Filter objects with given action. Practically, this skips uploading
        # objects which exist in the server.
        filteredobjects = [
            o for o in objects if action in o.get(b'actions', [])
        ]

        return filteredobjects

    def _basictransfer(self, obj, action, localstore):
        """Download or upload a single object using basic transfer protocol

        obj: dict, an object description returned by batch API
        action: string, one of ['upload', 'download']
        localstore: blobstore.local

        See https://github.com/git-lfs/git-lfs/blob/master/docs/api/\
        basic-transfers.md
        """
        oid = obj[b'oid']
        href = obj[b'actions'][action].get(b'href')
        headers = obj[b'actions'][action].get(b'header', {}).items()

        request = util.urlreq.request(pycompat.strurl(href))
        if action == b'upload':
            # If uploading blobs, read data from local blobstore.
            if not localstore.verify(oid):
                raise error.Abort(
                    _(b'detected corrupt lfs object: %s') % oid,
                    hint=_(b'run hg verify'),
                )
            request.data = filewithprogress(localstore.open(oid), None)
            request.get_method = lambda: r'PUT'
            request.add_header(r'Content-Type', r'application/octet-stream')
            request.add_header(r'Content-Length', len(request.data))

        for k, v in headers:
            request.add_header(pycompat.strurl(k), pycompat.strurl(v))

        response = b''
        try:
            with contextlib.closing(self.urlopener.open(request)) as req:
                ui = self.ui  # Shorten debug lines
                if self.ui.debugflag:
                    ui.debug(b'Status: %d\n' % req.status)
                    # lfs-test-server and hg serve return headers in different
                    # order
                    headers = pycompat.bytestr(req.info()).strip()
                    ui.debug(b'%s\n' % b'\n'.join(sorted(headers.splitlines())))

                if action == b'download':
                    # If downloading blobs, store downloaded data to local
                    # blobstore
                    localstore.download(oid, req)
                else:
                    while True:
                        data = req.read(1048576)
                        if not data:
                            break
                        response += data
                    if response:
                        ui.debug(b'lfs %s response: %s' % (action, response))
        except util.urlerr.httperror as ex:
            if self.ui.debugflag:
                self.ui.debug(
                    b'%s: %s\n' % (oid, ex.read())
                )  # XXX: also bytes?
            raise LfsRemoteError(
                _(b'LFS HTTP error: %s (oid=%s, action=%s)')
                % (stringutil.forcebytestr(ex), oid, action)
            )
        except util.urlerr.urlerror as ex:
            hint = _(b'attempted connection to %s') % pycompat.bytesurl(
                util.urllibcompat.getfullurl(request)
            )
            raise LfsRemoteError(
                _(b'LFS error: %s') % _urlerrorreason(ex), hint=hint
            )

    def _batch(self, pointers, localstore, action):
        if action not in [b'upload', b'download']:
            raise error.ProgrammingError(b'invalid Git-LFS action: %s' % action)

        response = self._batchrequest(pointers, action)
        objects = self._extractobjects(response, pointers, action)
        total = sum(x.get(b'size', 0) for x in objects)
        sizes = {}
        for obj in objects:
            sizes[obj.get(b'oid')] = obj.get(b'size', 0)
        topic = {
            b'upload': _(b'lfs uploading'),
            b'download': _(b'lfs downloading'),
        }[action]
        if len(objects) > 1:
            self.ui.note(
                _(b'lfs: need to transfer %d objects (%s)\n')
                % (len(objects), util.bytecount(total))
            )

        def transfer(chunk):
            for obj in chunk:
                objsize = obj.get(b'size', 0)
                if self.ui.verbose:
                    if action == b'download':
                        msg = _(b'lfs: downloading %s (%s)\n')
                    elif action == b'upload':
                        msg = _(b'lfs: uploading %s (%s)\n')
                    self.ui.note(
                        msg % (obj.get(b'oid'), util.bytecount(objsize))
                    )
                retry = self.retry
                while True:
                    try:
                        self._basictransfer(obj, action, localstore)
                        yield 1, obj.get(b'oid')
                        break
                    except socket.error as ex:
                        if retry > 0:
                            self.ui.note(
                                _(b'lfs: failed: %r (remaining retry %d)\n')
                                % (stringutil.forcebytestr(ex), retry)
                            )
                            retry -= 1
                            continue
                        raise

        # Until https multiplexing gets sorted out
        if self.ui.configbool(b'experimental', b'lfs.worker-enable'):
            oids = worker.worker(
                self.ui,
                0.1,
                transfer,
                (),
                sorted(objects, key=lambda o: o.get(b'oid')),
            )
        else:
            oids = transfer(sorted(objects, key=lambda o: o.get(b'oid')))

        with self.ui.makeprogress(topic, total=total) as progress:
            progress.update(0)
            processed = 0
            blobs = 0
            for _one, oid in oids:
                processed += sizes[oid]
                blobs += 1
                progress.update(processed)
                self.ui.note(_(b'lfs: processed: %s\n') % oid)

        if blobs > 0:
            if action == b'upload':
                self.ui.status(
                    _(b'lfs: uploaded %d files (%s)\n')
                    % (blobs, util.bytecount(processed))
                )
            elif action == b'download':
                self.ui.status(
                    _(b'lfs: downloaded %d files (%s)\n')
                    % (blobs, util.bytecount(processed))
                )

    def __del__(self):
        # copied from mercurial/httppeer.py
        urlopener = getattr(self, 'urlopener', None)
        if urlopener:
            for h in urlopener.handlers:
                h.close()
                getattr(h, "close_all", lambda: None)()

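# Config knobs exercised by _batch() above (the values shown are
# illustrative, not necessarily the defaults): lfs.retry bounds per-blob
# retries on socket errors, and the experimental worker switch opts in to
# parallel transfers.
#
#   [lfs]
#   retry = 5
#   [experimental]
#   lfs.worker-enable = True
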
class _dummyremote(object):
    """Dummy store storing blobs to temp directory."""

    def __init__(self, repo, url):
        fullpath = repo.vfs.join(b'lfs', url.path)
        self.vfs = lfsvfs(fullpath)

    def writebatch(self, pointers, fromstore):
        for p in _deduplicate(pointers):
            content = fromstore.read(p.oid(), verify=True)
            with self.vfs(p.oid(), b'wb', atomictemp=True) as fp:
                fp.write(content)

    def readbatch(self, pointers, tostore):
        for p in _deduplicate(pointers):
            with self.vfs(p.oid(), b'rb') as fp:
                tostore.download(p.oid(), fp)

class _nullremote(object):
    """Null store storing blobs to /dev/null."""

    def __init__(self, repo, url):
        pass

    def writebatch(self, pointers, fromstore):
        pass

    def readbatch(self, pointers, tostore):
        pass


class _promptremote(object):
    """Prompt user to set lfs.url when accessed."""

    def __init__(self, repo, url):
        pass

    def writebatch(self, pointers, fromstore, ui=None):
        self._prompt()

    def readbatch(self, pointers, tostore, ui=None):
        self._prompt()

    def _prompt(self):
        raise error.Abort(_(b'lfs.url needs to be configured'))

_storemap = {
    b'https': _gitlfsremote,
    b'http': _gitlfsremote,
    b'file': _dummyremote,
    b'null': _nullremote,
    None: _promptremote,
}

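# Example lfs.url values and the stores they select (URLs invented):
#
#   [lfs]
#   url = https://example.com/repo.git/info/lfs   # -> _gitlfsremote
#   url = file:/tmp/lfs-remote                    # -> _dummyremote
#   url = null://                                 # -> _nullremote
#   # unset, with no endpoint inferable           # -> _promptremote
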
def _deduplicate(pointers):
    """Remove any duplicate oids that exist in the list"""
    reduced = util.sortdict()
    for p in pointers:
        reduced[p.oid()] = p
    return reduced.values()

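# Illustrative behaviour (the pointers are stand-ins): for p1, p2, p3 where
# p1 and p3 share an oid, _deduplicate([p1, p2, p3]) keeps one pointer per
# oid; because util.sortdict keeps last-set order, p3 wins and the shared
# oid moves to the end of the result.
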
def _verify(oid, content):
    realoid = node.hex(hashlib.sha256(content).digest())
    if realoid != oid:
        raise LfsCorruptionError(
            _(b'detected corrupt lfs object: %s') % oid,
            hint=_(b'run hg verify'),
        )

def remote(repo, remote=None):
    """remotestore factory. return a store in _storemap depending on config

    If ``lfs.url`` is specified, use that remote endpoint. Otherwise, try to
    infer the endpoint, based on the remote repository using the same path
    adjustments as git. As an extension, 'http' is supported as well so that
    ``hg serve`` works out of the box.

    https://github.com/git-lfs/git-lfs/blob/master/docs/api/server-discovery.md
    """
    lfsurl = repo.ui.config(b'lfs', b'url')
    url = util.url(lfsurl or b'')
    if lfsurl is None:
        if remote:
            path = remote
        elif util.safehasattr(repo, b'_subtoppath'):
            # The pull command sets this during the optional update phase, which
            # tells exactly where the pull originated, whether 'paths.default'
            # or explicit.
            path = repo._subtoppath
        else:
            # TODO: investigate 'paths.remote:lfsurl' style path customization,
            # and fall back to inferring from 'paths.remote' if unspecified.
            path = repo.ui.config(b'paths', b'default') or b''

        defaulturl = util.url(path)

        # TODO: support local paths as well.
        # TODO: consider the ssh -> https transformation that git applies
        if defaulturl.scheme in (b'http', b'https'):
            if defaulturl.path and defaulturl.path[:-1] != b'/':
                defaulturl.path += b'/'
            defaulturl.path = (defaulturl.path or b'') + b'.git/info/lfs'

            url = util.url(bytes(defaulturl))
            repo.ui.note(_(b'lfs: assuming remote store: %s\n') % url)

    scheme = url.scheme
    if scheme not in _storemap:
        raise error.Abort(_(b'lfs: unknown url scheme: %s') % scheme)
    return _storemap[scheme](repo, url)

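# Worked example of the inference above (URL invented): with lfs.url unset
# and paths.default = https://example.com/repo, the assumed endpoint becomes
# https://example.com/repo/.git/info/lfs, mirroring git's server discovery.
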
class LfsRemoteError(error.StorageError):
    pass


class LfsCorruptionError(error.Abort):
    """Raised when a corrupt blob is detected, aborting an operation

    It exists to allow specialized handling on the server side."""


@@ -1,370 +1,370 b''
# wireprotolfsserver.py - lfs protocol server side implementation
#
# Copyright 2018 Matt Harbison <matt_harbison@yahoo.com>
#
# This software may be used and distributed according to the terms of the
# GNU General Public License version 2 or any later version.

from __future__ import absolute_import

import datetime
import errno
import json
import traceback

from mercurial.hgweb import common as hgwebcommon

from mercurial import (
    exthelper,
    pycompat,
    util,
    wireprotoserver,
)

from . import blobstore

HTTP_OK = hgwebcommon.HTTP_OK
HTTP_CREATED = hgwebcommon.HTTP_CREATED
HTTP_BAD_REQUEST = hgwebcommon.HTTP_BAD_REQUEST
HTTP_NOT_FOUND = hgwebcommon.HTTP_NOT_FOUND
HTTP_METHOD_NOT_ALLOWED = hgwebcommon.HTTP_METHOD_NOT_ALLOWED
HTTP_NOT_ACCEPTABLE = hgwebcommon.HTTP_NOT_ACCEPTABLE
HTTP_UNSUPPORTED_MEDIA_TYPE = hgwebcommon.HTTP_UNSUPPORTED_MEDIA_TYPE

eh = exthelper.exthelper()

@eh.wrapfunction(wireprotoserver, b'handlewsgirequest')
def handlewsgirequest(orig, rctx, req, res, checkperm):
    """Wrap wireprotoserver.handlewsgirequest() to possibly process an LFS
    request if it is left unprocessed by the wrapped method.
    """
    if orig(rctx, req, res, checkperm):
        return True

    if not rctx.repo.ui.configbool(b'experimental', b'lfs.serve'):
        return False

    if not util.safehasattr(rctx.repo.svfs, 'lfslocalblobstore'):
        return False

    if not req.dispatchpath:
        return False

    try:
        if req.dispatchpath == b'.git/info/lfs/objects/batch':
            checkperm(rctx, req, b'pull')
            return _processbatchrequest(rctx.repo, req, res)
        # TODO: reserve and use a path in the proposed http wireprotocol /api/
        # namespace?
        elif req.dispatchpath.startswith(b'.hg/lfs/objects'):
            return _processbasictransfer(
                rctx.repo, req, res, lambda perm: checkperm(rctx, req, perm)
            )
        return False
    except hgwebcommon.ErrorResponse as e:
        # XXX: copied from the handler surrounding wireprotoserver._callhttp()
        # in the wrapped function. Should this be moved back to hgweb to
        # be a common handler?
        for k, v in e.headers:
            res.headers[k] = v
        res.status = hgwebcommon.statusmessage(e.code, pycompat.bytestr(e))
        res.setbodybytes(b'0\n%s\n' % pycompat.bytestr(e))
        return True

76 | def _sethttperror(res, code, message=None): |
|
76 | def _sethttperror(res, code, message=None): | |
77 | res.status = hgwebcommon.statusmessage(code, message=message) |
|
77 | res.status = hgwebcommon.statusmessage(code, message=message) | |
78 | res.headers[b'Content-Type'] = b'text/plain; charset=utf-8' |
|
78 | res.headers[b'Content-Type'] = b'text/plain; charset=utf-8' | |
79 | res.setbodybytes(b'') |
|
79 | res.setbodybytes(b'') | |
80 |
|
80 | |||
81 |
|
81 | |||
82 | def _logexception(req): |
|
82 | def _logexception(req): | |
83 | """Write information about the current exception to wsgi.errors.""" |
|
83 | """Write information about the current exception to wsgi.errors.""" | |
84 | tb = pycompat.sysbytes(traceback.format_exc()) |
|
84 | tb = pycompat.sysbytes(traceback.format_exc()) | |
85 | errorlog = req.rawenv[b'wsgi.errors'] |
|
85 | errorlog = req.rawenv[b'wsgi.errors'] | |
86 |
|
86 | |||
87 | uri = b'' |
|
87 | uri = b'' | |
88 | if req.apppath: |
|
88 | if req.apppath: | |
89 | uri += req.apppath |
|
89 | uri += req.apppath | |
90 | uri += b'/' + req.dispatchpath |
|
90 | uri += b'/' + req.dispatchpath | |
91 |
|
91 | |||
92 | errorlog.write( |
|
92 | errorlog.write( | |
93 | b"Exception happened while processing request '%s':\n%s" % (uri, tb) |
|
93 | b"Exception happened while processing request '%s':\n%s" % (uri, tb) | |
94 | ) |
|
94 | ) | |
95 |
|
95 | |||
96 |
|
96 | |||
97 | def _processbatchrequest(repo, req, res): |
|
97 | def _processbatchrequest(repo, req, res): | |
98 | """Handle a request for the Batch API, which is the gateway to granting file |
|
98 | """Handle a request for the Batch API, which is the gateway to granting file | |
99 | access. |
|
99 | access. | |
100 |
|
100 | |||
101 | https://github.com/git-lfs/git-lfs/blob/master/docs/api/batch.md |
|
101 | https://github.com/git-lfs/git-lfs/blob/master/docs/api/batch.md | |
102 | """ |
|
102 | """ | |
103 |
|
103 | |||
104 | # Mercurial client request: |
|
104 | # Mercurial client request: | |
105 | # |
|
105 | # | |
106 | # HOST: localhost:$HGPORT |
|
106 | # HOST: localhost:$HGPORT | |
107 | # ACCEPT: application/vnd.git-lfs+json |
|
107 | # ACCEPT: application/vnd.git-lfs+json | |
108 | # ACCEPT-ENCODING: identity |
|
108 | # ACCEPT-ENCODING: identity | |
109 | # USER-AGENT: git-lfs/2.3.4 (Mercurial 4.5.2+1114-f48b9754f04c+20180316) |
|
109 | # USER-AGENT: git-lfs/2.3.4 (Mercurial 4.5.2+1114-f48b9754f04c+20180316) | |
110 | # Content-Length: 125 |
|
110 | # Content-Length: 125 | |
111 | # Content-Type: application/vnd.git-lfs+json |
|
111 | # Content-Type: application/vnd.git-lfs+json | |
112 | # |
|
112 | # | |
113 | # { |
|
113 | # { | |
114 | # "objects": [ |
|
114 | # "objects": [ | |
115 | # { |
|
115 | # { | |
116 | # "oid": "31cf...8e5b" |
|
116 | # "oid": "31cf...8e5b" | |
117 | # "size": 12 |
|
117 | # "size": 12 | |
118 | # } |
|
118 | # } | |
119 | # ] |
|
119 | # ] | |
120 | # "operation": "upload" |
|
120 | # "operation": "upload" | |
121 | # } |
|
121 | # } | |
122 |
|
122 | |||
123 | if req.method != b'POST': |
|
123 | if req.method != b'POST': | |
124 | _sethttperror(res, HTTP_METHOD_NOT_ALLOWED) |
|
124 | _sethttperror(res, HTTP_METHOD_NOT_ALLOWED) | |
125 | return True |
|
125 | return True | |
126 |
|
126 | |||
127 | if req.headers[b'Content-Type'] != b'application/vnd.git-lfs+json': |
|
127 | if req.headers[b'Content-Type'] != b'application/vnd.git-lfs+json': | |
128 | _sethttperror(res, HTTP_UNSUPPORTED_MEDIA_TYPE) |
|
128 | _sethttperror(res, HTTP_UNSUPPORTED_MEDIA_TYPE) | |
129 | return True |
|
129 | return True | |
130 |
|
130 | |||
131 | if req.headers[b'Accept'] != b'application/vnd.git-lfs+json': |
|
131 | if req.headers[b'Accept'] != b'application/vnd.git-lfs+json': | |
132 | _sethttperror(res, HTTP_NOT_ACCEPTABLE) |
|
132 | _sethttperror(res, HTTP_NOT_ACCEPTABLE) | |
133 | return True |
|
133 | return True | |
134 |
|
134 | |||
135 | # XXX: specify an encoding? |
|
135 | # XXX: specify an encoding? | |
136 | lfsreq = json.loads(req.bodyfh.read()) |
|
136 | lfsreq = pycompat.json_loads(req.bodyfh.read()) | |
137 |
|
137 | |||
138 | # If no transfer handlers are explicitly requested, 'basic' is assumed. |
|
138 | # If no transfer handlers are explicitly requested, 'basic' is assumed. | |
139 | if r'basic' not in lfsreq.get(r'transfers', [r'basic']): |
|
139 | if r'basic' not in lfsreq.get(r'transfers', [r'basic']): | |
140 | _sethttperror( |
|
140 | _sethttperror( | |
141 | res, |
|
141 | res, | |
142 | HTTP_BAD_REQUEST, |
|
142 | HTTP_BAD_REQUEST, | |
143 | b'Only the basic LFS transfer handler is supported', |
|
143 | b'Only the basic LFS transfer handler is supported', | |
144 | ) |
|
144 | ) | |
145 | return True |
|
145 | return True | |
146 |
|
146 | |||
147 | operation = lfsreq.get(r'operation') |
|
147 | operation = lfsreq.get(r'operation') | |
148 | operation = pycompat.bytestr(operation) |
|
148 | operation = pycompat.bytestr(operation) | |
149 |
|
149 | |||
150 | if operation not in (b'upload', b'download'): |
|
150 | if operation not in (b'upload', b'download'): | |
151 | _sethttperror( |
|
151 | _sethttperror( | |
152 | res, |
|
152 | res, | |
153 | HTTP_BAD_REQUEST, |
|
153 | HTTP_BAD_REQUEST, | |
154 | b'Unsupported LFS transfer operation: %s' % operation, |
|
154 | b'Unsupported LFS transfer operation: %s' % operation, | |
155 | ) |
|
155 | ) | |
156 | return True |
|
156 | return True | |
157 |
|
157 | |||
158 | localstore = repo.svfs.lfslocalblobstore |
|
158 | localstore = repo.svfs.lfslocalblobstore | |
159 |
|
159 | |||
160 | objects = [ |
|
160 | objects = [ | |
161 | p |
|
161 | p | |
162 | for p in _batchresponseobjects( |
|
162 | for p in _batchresponseobjects( | |
163 | req, lfsreq.get(r'objects', []), operation, localstore |
|
163 | req, lfsreq.get(r'objects', []), operation, localstore | |
164 | ) |
|
164 | ) | |
165 | ] |
|
165 | ] | |
166 |
|
166 | |||
167 | rsp = { |
|
167 | rsp = { | |
168 | r'transfer': r'basic', |
|
168 | r'transfer': r'basic', | |
169 | r'objects': objects, |
|
169 | r'objects': objects, | |
170 | } |
|
170 | } | |
171 |
|
171 | |||
172 | res.status = hgwebcommon.statusmessage(HTTP_OK) |
|
172 | res.status = hgwebcommon.statusmessage(HTTP_OK) | |
173 | res.headers[b'Content-Type'] = b'application/vnd.git-lfs+json' |
|
173 | res.headers[b'Content-Type'] = b'application/vnd.git-lfs+json' | |
174 | res.setbodybytes(pycompat.bytestr(json.dumps(rsp))) |
|
174 | res.setbodybytes(pycompat.bytestr(json.dumps(rsp))) | |
175 |
|
175 | |||
176 | return True |
|
176 | return True | |
177 |
|
177 | |||
178 |
|
178 | |||
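
For illustration, the JSON body sketched in the comment inside _processbatchrequest() above could be built with the stdlib ``json`` module; the oid below is a made-up placeholder (a real client sends the SHA-256 digest of the blob)::

    import json

    payload = {
        'objects': [{'oid': '31cf' + 'ab' * 30, 'size': 12}],  # hypothetical oid
        'operation': 'upload',
        'transfers': ['basic'],  # optional; 'basic' is assumed when absent
    }
    body = json.dumps(payload).encode('ascii')
    # POSTed to <repo>/.git/info/lfs/objects/batch with both Content-Type
    # and Accept set to 'application/vnd.git-lfs+json'.
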
179 | def _batchresponseobjects(req, objects, action, store): |
|
179 | def _batchresponseobjects(req, objects, action, store): | |
180 | """Yield one dictionary of attributes for the Batch API response for each |
|
180 | """Yield one dictionary of attributes for the Batch API response for each | |
181 | object in the list. |
|
181 | object in the list. | |
182 |
|
182 | |||
183 | req: The parsedrequest for the Batch API request |
|
183 | req: The parsedrequest for the Batch API request | |
184 | objects: The list of objects in the Batch API object request list |
|
184 | objects: The list of objects in the Batch API object request list | |
185 | action: 'upload' or 'download' |
|
185 | action: 'upload' or 'download' | |
186 | store: The local blob store for servicing requests""" |
|
186 | store: The local blob store for servicing requests""" | |
187 |
|
187 | |||
188 | # Successful lfs-test-server response to solicit an upload: |
|
188 | # Successful lfs-test-server response to solicit an upload: | |
189 | # { |
|
189 | # { | |
190 | # u'objects': [{ |
|
190 | # u'objects': [{ | |
191 | # u'size': 12, |
|
191 | # u'size': 12, | |
192 | # u'oid': u'31cf...8e5b', |
|
192 | # u'oid': u'31cf...8e5b', | |
193 | # u'actions': { |
|
193 | # u'actions': { | |
194 | # u'upload': { |
|
194 | # u'upload': { | |
195 | # u'href': u'http://localhost:$HGPORT/objects/31cf...8e5b', |
|
195 | # u'href': u'http://localhost:$HGPORT/objects/31cf...8e5b', | |
196 | # u'expires_at': u'0001-01-01T00:00:00Z', |
|
196 | # u'expires_at': u'0001-01-01T00:00:00Z', | |
197 | # u'header': { |
|
197 | # u'header': { | |
198 | # u'Accept': u'application/vnd.git-lfs' |
|
198 | # u'Accept': u'application/vnd.git-lfs' | |
199 | # } |
|
199 | # } | |
200 | # } |
|
200 | # } | |
201 | # } |
|
201 | # } | |
202 | # }] |
|
202 | # }] | |
203 | # } |
|
203 | # } | |
204 |
|
204 | |||
205 | # TODO: Sort out the expires_at/expires_in/authenticated keys. |
|
205 | # TODO: Sort out the expires_at/expires_in/authenticated keys. | |
206 |
|
206 | |||
207 | for obj in objects: |
|
207 | for obj in objects: | |
208 | # Convert unicode to ASCII to create a filesystem path |
|
208 | # Convert unicode to ASCII to create a filesystem path | |
209 | soid = obj.get(r'oid') |
|
209 | soid = obj.get(r'oid') | |
210 | oid = soid.encode(r'ascii') |
|
210 | oid = soid.encode(r'ascii') | |
211 | rsp = { |
|
211 | rsp = { | |
212 | r'oid': soid, |
|
212 | r'oid': soid, | |
213 | r'size': obj.get(r'size'), # XXX: should this check the local size? |
|
213 | r'size': obj.get(r'size'), # XXX: should this check the local size? | |
214 | # r'authenticated': True, |
|
214 | # r'authenticated': True, | |
215 | } |
|
215 | } | |
216 |
|
216 | |||
217 | exists = True |
|
217 | exists = True | |
218 | verifies = False |
|
218 | verifies = False | |
219 |
|
219 | |||
220 | # Verify an existing file on the upload request, so that the client is |
|
220 | # Verify an existing file on the upload request, so that the client is | |
221 | # solicited to re-upload if it is corrupt locally. Download requests are |
|
221 | # solicited to re-upload if it is corrupt locally. Download requests are | |
222 | # also verified, so the error can be flagged in the Batch API response. |
|
222 | # also verified, so the error can be flagged in the Batch API response. | |
223 | # (Maybe we can use this to short circuit the download for `hg verify`, |
|
223 | # (Maybe we can use this to short circuit the download for `hg verify`, | |
224 | # IFF the client can assert that the remote end is an hg server.) |
|
224 | # IFF the client can assert that the remote end is an hg server.) | |
225 | # Otherwise, it's potentially overkill on download, since it is also |
|
225 | # Otherwise, it's potentially overkill on download, since it is also | |
226 | # verified as the file is streamed to the caller. |
|
226 | # verified as the file is streamed to the caller. | |
227 | try: |
|
227 | try: | |
228 | verifies = store.verify(oid) |
|
228 | verifies = store.verify(oid) | |
229 | if verifies and action == b'upload': |
|
229 | if verifies and action == b'upload': | |
230 | # The client will skip this upload, but make sure it remains |
|
230 | # The client will skip this upload, but make sure it remains | |
231 | # available locally. |
|
231 | # available locally. | |
232 | store.linkfromusercache(oid) |
|
232 | store.linkfromusercache(oid) | |
233 | except IOError as inst: |
|
233 | except IOError as inst: | |
234 | if inst.errno != errno.ENOENT: |
|
234 | if inst.errno != errno.ENOENT: | |
235 | _logexception(req) |
|
235 | _logexception(req) | |
236 |
|
236 | |||
237 | rsp[r'error'] = { |
|
237 | rsp[r'error'] = { | |
238 | r'code': 500, |
|
238 | r'code': 500, | |
239 | r'message': inst.strerror or r'Internal Server Error', |
|
239 | r'message': inst.strerror or r'Internal Server Error', | |
240 | } |
|
240 | } | |
241 | yield rsp |
|
241 | yield rsp | |
242 | continue |
|
242 | continue | |
243 |
|
243 | |||
244 | exists = False |
|
244 | exists = False | |
245 |
|
245 | |||
246 | # Items are always listed for downloads. They are dropped for uploads |
|
246 | # Items are always listed for downloads. They are dropped for uploads | |
247 | # IFF they already exist locally. |
|
247 | # IFF they already exist locally. | |
248 | if action == b'download': |
|
248 | if action == b'download': | |
249 | if not exists: |
|
249 | if not exists: | |
250 | rsp[r'error'] = { |
|
250 | rsp[r'error'] = { | |
251 | r'code': 404, |
|
251 | r'code': 404, | |
252 | r'message': r"The object does not exist", |
|
252 | r'message': r"The object does not exist", | |
253 | } |
|
253 | } | |
254 | yield rsp |
|
254 | yield rsp | |
255 | continue |
|
255 | continue | |
256 |
|
256 | |||
257 | elif not verifies: |
|
257 | elif not verifies: | |
258 | rsp[r'error'] = { |
|
258 | rsp[r'error'] = { | |
259 | r'code': 422, # XXX: is this the right code? |
|
259 | r'code': 422, # XXX: is this the right code? | |
260 | r'message': r"The object is corrupt", |
|
260 | r'message': r"The object is corrupt", | |
261 | } |
|
261 | } | |
262 | yield rsp |
|
262 | yield rsp | |
263 | continue |
|
263 | continue | |
264 |
|
264 | |||
265 | elif verifies: |
|
265 | elif verifies: | |
266 | yield rsp # Skip 'actions': already uploaded |
|
266 | yield rsp # Skip 'actions': already uploaded | |
267 | continue |
|
267 | continue | |
268 |
|
268 | |||
269 | expiresat = datetime.datetime.now() + datetime.timedelta(minutes=10) |
|
269 | expiresat = datetime.datetime.now() + datetime.timedelta(minutes=10) | |
270 |
|
270 | |||
271 | def _buildheader(): |
|
271 | def _buildheader(): | |
272 | # The spec doesn't mention the Accept header here, but avoid |
|
272 | # The spec doesn't mention the Accept header here, but avoid | |
273 | # a gratuitous deviation from lfs-test-server in the test |
|
273 | # a gratuitous deviation from lfs-test-server in the test | |
274 | # output. |
|
274 | # output. | |
275 | hdr = {r'Accept': r'application/vnd.git-lfs'} |
|
275 | hdr = {r'Accept': r'application/vnd.git-lfs'} | |
276 |
|
276 | |||
277 | auth = req.headers.get(b'Authorization', b'') |
|
277 | auth = req.headers.get(b'Authorization', b'') | |
278 | if auth.startswith(b'Basic '): |
|
278 | if auth.startswith(b'Basic '): | |
279 | hdr[r'Authorization'] = pycompat.strurl(auth) |
|
279 | hdr[r'Authorization'] = pycompat.strurl(auth) | |
280 |
|
280 | |||
281 | return hdr |
|
281 | return hdr | |
282 |
|
282 | |||
283 | rsp[r'actions'] = { |
|
283 | rsp[r'actions'] = { | |
284 | r'%s' |
|
284 | r'%s' | |
285 | % pycompat.strurl(action): { |
|
285 | % pycompat.strurl(action): { | |
286 | r'href': pycompat.strurl( |
|
286 | r'href': pycompat.strurl( | |
287 | b'%s%s/.hg/lfs/objects/%s' % (req.baseurl, req.apppath, oid) |
|
287 | b'%s%s/.hg/lfs/objects/%s' % (req.baseurl, req.apppath, oid) | |
288 | ), |
|
288 | ), | |
289 | # datetime.isoformat() doesn't include the 'Z' suffix |
|
289 | # datetime.isoformat() doesn't include the 'Z' suffix | |
290 | r"expires_at": expiresat.strftime(r'%Y-%m-%dT%H:%M:%SZ'), |
|
290 | r"expires_at": expiresat.strftime(r'%Y-%m-%dT%H:%M:%SZ'), | |
291 | r'header': _buildheader(), |
|
291 | r'header': _buildheader(), | |
292 | } |
|
292 | } | |
293 | } |
|
293 | } | |
294 |
|
294 | |||
295 | yield rsp |
|
295 | yield rsp | |
296 |
|
296 | |||
297 |
|
297 | |||
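
As a rough sketch of the download-side object assembled by _batchresponseobjects() above (host, path, and oid are invented; the expiry mirrors the ten-minute window in the code)::

    import datetime

    oid = 'deadbeef' * 8  # hypothetical 64-character oid
    expiresat = datetime.datetime.now() + datetime.timedelta(minutes=10)
    obj = {
        'oid': oid,
        'size': 12,
        'actions': {
            'download': {
                'href': 'http://localhost:8000/repo/.hg/lfs/objects/%s' % oid,
                # strftime() supplies the 'Z' suffix that isoformat() omits
                'expires_at': expiresat.strftime('%Y-%m-%dT%H:%M:%SZ'),
                'header': {'Accept': 'application/vnd.git-lfs'},
            }
        },
    }
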
298 | def _processbasictransfer(repo, req, res, checkperm): |
|
298 | def _processbasictransfer(repo, req, res, checkperm): | |
299 | """Handle a single file upload (PUT) or download (GET) action for the Basic |
|
299 | """Handle a single file upload (PUT) or download (GET) action for the Basic | |
300 | Transfer Adapter. |
|
300 | Transfer Adapter. | |
301 |
|
301 | |||
302 | After determining if the request is for an upload or download, the access |
|
302 | After determining if the request is for an upload or download, the access | |
303 | must be checked by calling ``checkperm()`` with either 'pull' or 'upload' |
|
303 | must be checked by calling ``checkperm()`` with either 'pull' or 'upload' | |
304 | before accessing the files. |
|
304 | before accessing the files. | |
305 |
|
305 | |||
306 | https://github.com/git-lfs/git-lfs/blob/master/docs/api/basic-transfers.md |
|
306 | https://github.com/git-lfs/git-lfs/blob/master/docs/api/basic-transfers.md | |
307 | """ |
|
307 | """ | |
308 |
|
308 | |||
309 | method = req.method |
|
309 | method = req.method | |
310 | oid = req.dispatchparts[-1] |
|
310 | oid = req.dispatchparts[-1] | |
311 | localstore = repo.svfs.lfslocalblobstore |
|
311 | localstore = repo.svfs.lfslocalblobstore | |
312 |
|
312 | |||
313 | if len(req.dispatchparts) != 4: |
|
313 | if len(req.dispatchparts) != 4: | |
314 | _sethttperror(res, HTTP_NOT_FOUND) |
|
314 | _sethttperror(res, HTTP_NOT_FOUND) | |
315 | return True |
|
315 | return True | |
316 |
|
316 | |||
317 | if method == b'PUT': |
|
317 | if method == b'PUT': | |
318 | checkperm(b'upload') |
|
318 | checkperm(b'upload') | |
319 |
|
319 | |||
320 | # TODO: verify Content-Type? |
|
320 | # TODO: verify Content-Type? | |
321 |
|
321 | |||
322 | existed = localstore.has(oid) |
|
322 | existed = localstore.has(oid) | |
323 |
|
323 | |||
324 | # TODO: how to handle timeouts? The body proxy handles limiting to |
|
324 | # TODO: how to handle timeouts? The body proxy handles limiting to | |
325 | # Content-Length, but what happens if a client sends less than it |
|
325 | # Content-Length, but what happens if a client sends less than it | |
326 | # says it will? |
|
326 | # says it will? | |
327 |
|
327 | |||
328 | statusmessage = hgwebcommon.statusmessage |
|
328 | statusmessage = hgwebcommon.statusmessage | |
329 | try: |
|
329 | try: | |
330 | localstore.download(oid, req.bodyfh) |
|
330 | localstore.download(oid, req.bodyfh) | |
331 | res.status = statusmessage(HTTP_OK if existed else HTTP_CREATED) |
|
331 | res.status = statusmessage(HTTP_OK if existed else HTTP_CREATED) | |
332 | except blobstore.LfsCorruptionError: |
|
332 | except blobstore.LfsCorruptionError: | |
333 | _logexception(req) |
|
333 | _logexception(req) | |
334 |
|
334 | |||
335 | # XXX: Is this the right code? |
|
335 | # XXX: Is this the right code? | |
336 | res.status = statusmessage(422, b'corrupt blob') |
|
336 | res.status = statusmessage(422, b'corrupt blob') | |
337 |
|
337 | |||
338 | # There's no payload here, but this is the header that lfs-test-server |
|
338 | # There's no payload here, but this is the header that lfs-test-server | |
339 | # sends back. This eliminates some gratuitous test output conditionals. |
|
339 | # sends back. This eliminates some gratuitous test output conditionals. | |
340 | res.headers[b'Content-Type'] = b'text/plain; charset=utf-8' |
|
340 | res.headers[b'Content-Type'] = b'text/plain; charset=utf-8' | |
341 | res.setbodybytes(b'') |
|
341 | res.setbodybytes(b'') | |
342 |
|
342 | |||
343 | return True |
|
343 | return True | |
344 | elif method == b'GET': |
|
344 | elif method == b'GET': | |
345 | checkperm(b'pull') |
|
345 | checkperm(b'pull') | |
346 |
|
346 | |||
347 | res.status = hgwebcommon.statusmessage(HTTP_OK) |
|
347 | res.status = hgwebcommon.statusmessage(HTTP_OK) | |
348 | res.headers[b'Content-Type'] = b'application/octet-stream' |
|
348 | res.headers[b'Content-Type'] = b'application/octet-stream' | |
349 |
|
349 | |||
350 | try: |
|
350 | try: | |
351 | # TODO: figure out how to send back the file in chunks, instead of |
|
351 | # TODO: figure out how to send back the file in chunks, instead of | |
352 | # reading the whole thing. (Also figure out how to send back |
|
352 | # reading the whole thing. (Also figure out how to send back | |
353 | # an error status if an IOError occurs after a partial write |
|
353 | # an error status if an IOError occurs after a partial write | |
354 | # in that case. Here, everything is read before starting.) |
|
354 | # in that case. Here, everything is read before starting.) | |
355 | res.setbodybytes(localstore.read(oid)) |
|
355 | res.setbodybytes(localstore.read(oid)) | |
356 | except blobstore.LfsCorruptionError: |
|
356 | except blobstore.LfsCorruptionError: | |
357 | _logexception(req) |
|
357 | _logexception(req) | |
358 |
|
358 | |||
359 | # XXX: Is this the right code? |
|
359 | # XXX: Is this the right code? | |
360 | res.status = hgwebcommon.statusmessage(422, b'corrupt blob') |
|
360 | res.status = hgwebcommon.statusmessage(422, b'corrupt blob') | |
361 | res.setbodybytes(b'') |
|
361 | res.setbodybytes(b'') | |
362 |
|
362 | |||
363 | return True |
|
363 | return True | |
364 | else: |
|
364 | else: | |
365 | _sethttperror( |
|
365 | _sethttperror( | |
366 | res, |
|
366 | res, | |
367 | HTTP_METHOD_NOT_ALLOWED, |
|
367 | HTTP_METHOD_NOT_ALLOWED, | |
368 | message=b'Unsupported LFS transfer method: %s' % method, |
|
368 | message=b'Unsupported LFS transfer method: %s' % method, | |
369 | ) |
|
369 | ) | |
370 | return True |
|
370 | return True |
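
To exercise the basic transfer handler above from the client side, a blob download is a plain GET against the per-object URL; a minimal stdlib sketch, with an invented server address and oid::

    import urllib.request

    oid = 'deadbeef' * 8  # hypothetical oid
    url = 'http://localhost:8000/repo/.hg/lfs/objects/%s' % oid
    req = urllib.request.Request(
        url, headers={'Accept': 'application/vnd.git-lfs'}
    )
    with urllib.request.urlopen(req) as resp:
        blob = resp.read()  # served back as application/octet-stream
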
@@ -1,1651 +1,1651 b'' | |||||
1 | # phabricator.py - simple Phabricator integration |
|
1 | # phabricator.py - simple Phabricator integration | |
2 | # |
|
2 | # | |
3 | # Copyright 2017 Facebook, Inc. |
|
3 | # Copyright 2017 Facebook, Inc. | |
4 | # |
|
4 | # | |
5 | # This software may be used and distributed according to the terms of the |
|
5 | # This software may be used and distributed according to the terms of the | |
6 | # GNU General Public License version 2 or any later version. |
|
6 | # GNU General Public License version 2 or any later version. | |
7 | """simple Phabricator integration (EXPERIMENTAL) |
|
7 | """simple Phabricator integration (EXPERIMENTAL) | |
8 |
|
8 | |||
9 | This extension provides a ``phabsend`` command which sends a stack of |
|
9 | This extension provides a ``phabsend`` command which sends a stack of | |
10 | changesets to Phabricator, a ``phabread`` command which prints a stack of |
|
10 | changesets to Phabricator, a ``phabread`` command which prints a stack of | |
11 | revisions in a format suitable for :hg:`import`, and a ``phabupdate`` command |
|
11 | revisions in a format suitable for :hg:`import`, and a ``phabupdate`` command | |
12 | to update statuses in batch. |
|
12 | to update statuses in batch. | |
13 |
|
13 | |||
14 | By default, Phabricator requires ``Test Plan`` which might prevent some |
|
14 | By default, Phabricator requires ``Test Plan`` which might prevent some | |
15 | changeset from being sent. The requirement could be disabled by changing |
|
15 | changeset from being sent. The requirement could be disabled by changing | |
16 | ``differential.require-test-plan-field`` config server side. |
|
16 | ``differential.require-test-plan-field`` config server side. | |
17 |
|
17 | |||
18 | Config:: |
|
18 | Config:: | |
19 |
|
19 | |||
20 | [phabricator] |
|
20 | [phabricator] | |
21 | # Phabricator URL |
|
21 | # Phabricator URL | |
22 | url = https://phab.example.com/ |
|
22 | url = https://phab.example.com/ | |
23 |
|
23 | |||
24 | # Repo callsign. If a repo has a URL https://$HOST/diffusion/FOO, then its |
|
24 | # Repo callsign. If a repo has a URL https://$HOST/diffusion/FOO, then its | |
25 | # callsign is "FOO". |
|
25 | # callsign is "FOO". | |
26 | callsign = FOO |
|
26 | callsign = FOO | |
27 |
|
27 | |||
28 | # curl command to use. If not set (default), use builtin HTTP library to |
|
28 | # curl command to use. If not set (default), use builtin HTTP library to | |
29 | # communicate. If set, use the specified curl command. This could be useful |
|
29 | # communicate. If set, use the specified curl command. This could be useful | |
30 | # if you need to specify advanced options that are not easily supported by |
|
30 | # if you need to specify advanced options that are not easily supported by | |
31 | # the internal library. |
|
31 | # the internal library. | |
32 | curlcmd = curl --connect-timeout 2 --retry 3 --silent |
|
32 | curlcmd = curl --connect-timeout 2 --retry 3 --silent | |
33 |
|
33 | |||
34 | [auth] |
|
34 | [auth] | |
35 | example.schemes = https |
|
35 | example.schemes = https | |
36 | example.prefix = phab.example.com |
|
36 | example.prefix = phab.example.com | |
37 |
|
37 | |||
38 | # API token. Get it from https://$HOST/conduit/login/ |
|
38 | # API token. Get it from https://$HOST/conduit/login/ | |
39 | example.phabtoken = cli-xxxxxxxxxxxxxxxxxxxxxxxxxxxx |
|
39 | example.phabtoken = cli-xxxxxxxxxxxxxxxxxxxxxxxxxxxx | |
40 | """ |
|
40 | """ | |
41 |
|
41 | |||
42 | from __future__ import absolute_import |
|
42 | from __future__ import absolute_import | |
43 |
|
43 | |||
44 | import base64 |
|
44 | import base64 | |
45 | import contextlib |
|
45 | import contextlib | |
46 | import hashlib |
|
46 | import hashlib | |
47 | import itertools |
|
47 | import itertools | |
48 | import json |
|
48 | import json | |
49 | import mimetypes |
|
49 | import mimetypes | |
50 | import operator |
|
50 | import operator | |
51 | import re |
|
51 | import re | |
52 |
|
52 | |||
53 | from mercurial.node import bin, nullid |
|
53 | from mercurial.node import bin, nullid | |
54 | from mercurial.i18n import _ |
|
54 | from mercurial.i18n import _ | |
55 | from mercurial.pycompat import getattr |
|
55 | from mercurial.pycompat import getattr | |
56 | from mercurial.thirdparty import attr |
|
56 | from mercurial.thirdparty import attr | |
57 | from mercurial import ( |
|
57 | from mercurial import ( | |
58 | cmdutil, |
|
58 | cmdutil, | |
59 | context, |
|
59 | context, | |
60 | encoding, |
|
60 | encoding, | |
61 | error, |
|
61 | error, | |
62 | exthelper, |
|
62 | exthelper, | |
63 | httpconnection as httpconnectionmod, |
|
63 | httpconnection as httpconnectionmod, | |
64 | match, |
|
64 | match, | |
65 | mdiff, |
|
65 | mdiff, | |
66 | obsutil, |
|
66 | obsutil, | |
67 | parser, |
|
67 | parser, | |
68 | patch, |
|
68 | patch, | |
69 | phases, |
|
69 | phases, | |
70 | pycompat, |
|
70 | pycompat, | |
71 | scmutil, |
|
71 | scmutil, | |
72 | smartset, |
|
72 | smartset, | |
73 | tags, |
|
73 | tags, | |
74 | templatefilters, |
|
74 | templatefilters, | |
75 | templateutil, |
|
75 | templateutil, | |
76 | url as urlmod, |
|
76 | url as urlmod, | |
77 | util, |
|
77 | util, | |
78 | ) |
|
78 | ) | |
79 | from mercurial.utils import ( |
|
79 | from mercurial.utils import ( | |
80 | procutil, |
|
80 | procutil, | |
81 | stringutil, |
|
81 | stringutil, | |
82 | ) |
|
82 | ) | |
83 |
|
83 | |||
84 | # Note for extension authors: ONLY specify testedwith = 'ships-with-hg-core' for |
|
84 | # Note for extension authors: ONLY specify testedwith = 'ships-with-hg-core' for | |
85 | # extensions which SHIP WITH MERCURIAL. Non-mainline extensions should |
|
85 | # extensions which SHIP WITH MERCURIAL. Non-mainline extensions should | |
86 | # be specifying the version(s) of Mercurial they are tested with, or |
|
86 | # be specifying the version(s) of Mercurial they are tested with, or | |
87 | # leave the attribute unspecified. |
|
87 | # leave the attribute unspecified. | |
88 | testedwith = b'ships-with-hg-core' |
|
88 | testedwith = b'ships-with-hg-core' | |
89 |
|
89 | |||
90 | eh = exthelper.exthelper() |
|
90 | eh = exthelper.exthelper() | |
91 |
|
91 | |||
92 | cmdtable = eh.cmdtable |
|
92 | cmdtable = eh.cmdtable | |
93 | command = eh.command |
|
93 | command = eh.command | |
94 | configtable = eh.configtable |
|
94 | configtable = eh.configtable | |
95 | templatekeyword = eh.templatekeyword |
|
95 | templatekeyword = eh.templatekeyword | |
96 |
|
96 | |||
97 | # developer config: phabricator.batchsize |
|
97 | # developer config: phabricator.batchsize | |
98 | eh.configitem( |
|
98 | eh.configitem( | |
99 | b'phabricator', b'batchsize', default=12, |
|
99 | b'phabricator', b'batchsize', default=12, | |
100 | ) |
|
100 | ) | |
101 | eh.configitem( |
|
101 | eh.configitem( | |
102 | b'phabricator', b'callsign', default=None, |
|
102 | b'phabricator', b'callsign', default=None, | |
103 | ) |
|
103 | ) | |
104 | eh.configitem( |
|
104 | eh.configitem( | |
105 | b'phabricator', b'curlcmd', default=None, |
|
105 | b'phabricator', b'curlcmd', default=None, | |
106 | ) |
|
106 | ) | |
107 | # developer config: phabricator.repophid |
|
107 | # developer config: phabricator.repophid | |
108 | eh.configitem( |
|
108 | eh.configitem( | |
109 | b'phabricator', b'repophid', default=None, |
|
109 | b'phabricator', b'repophid', default=None, | |
110 | ) |
|
110 | ) | |
111 | eh.configitem( |
|
111 | eh.configitem( | |
112 | b'phabricator', b'url', default=None, |
|
112 | b'phabricator', b'url', default=None, | |
113 | ) |
|
113 | ) | |
114 | eh.configitem( |
|
114 | eh.configitem( | |
115 | b'phabsend', b'confirm', default=False, |
|
115 | b'phabsend', b'confirm', default=False, | |
116 | ) |
|
116 | ) | |
117 |
|
117 | |||
118 | colortable = { |
|
118 | colortable = { | |
119 | b'phabricator.action.created': b'green', |
|
119 | b'phabricator.action.created': b'green', | |
120 | b'phabricator.action.skipped': b'magenta', |
|
120 | b'phabricator.action.skipped': b'magenta', | |
121 | b'phabricator.action.updated': b'magenta', |
|
121 | b'phabricator.action.updated': b'magenta', | |
122 | b'phabricator.desc': b'', |
|
122 | b'phabricator.desc': b'', | |
123 | b'phabricator.drev': b'bold', |
|
123 | b'phabricator.drev': b'bold', | |
124 | b'phabricator.node': b'', |
|
124 | b'phabricator.node': b'', | |
125 | } |
|
125 | } | |
126 |
|
126 | |||
127 | _VCR_FLAGS = [ |
|
127 | _VCR_FLAGS = [ | |
128 | ( |
|
128 | ( | |
129 | b'', |
|
129 | b'', | |
130 | b'test-vcr', |
|
130 | b'test-vcr', | |
131 | b'', |
|
131 | b'', | |
132 | _( |
|
132 | _( | |
133 | b'Path to a vcr file. If nonexistent, will record a new vcr transcript' |
|
133 | b'Path to a vcr file. If nonexistent, will record a new vcr transcript' | |
134 | b', otherwise will mock all http requests using the specified vcr file.' |
|
134 | b', otherwise will mock all http requests using the specified vcr file.' | |
135 | b' (ADVANCED)' |
|
135 | b' (ADVANCED)' | |
136 | ), |
|
136 | ), | |
137 | ), |
|
137 | ), | |
138 | ] |
|
138 | ] | |
139 |
|
139 | |||
140 |
|
140 | |||
141 | def vcrcommand(name, flags, spec, helpcategory=None, optionalrepo=False): |
|
141 | def vcrcommand(name, flags, spec, helpcategory=None, optionalrepo=False): | |
142 | fullflags = flags + _VCR_FLAGS |
|
142 | fullflags = flags + _VCR_FLAGS | |
143 |
|
143 | |||
144 | def hgmatcher(r1, r2): |
|
144 | def hgmatcher(r1, r2): | |
145 | if r1.uri != r2.uri or r1.method != r2.method: |
|
145 | if r1.uri != r2.uri or r1.method != r2.method: | |
146 | return False |
|
146 | return False | |
147 | r1params = util.urlreq.parseqs(r1.body) |
|
147 | r1params = util.urlreq.parseqs(r1.body) | |
148 | r2params = util.urlreq.parseqs(r2.body) |
|
148 | r2params = util.urlreq.parseqs(r2.body) | |
149 | for key in r1params: |
|
149 | for key in r1params: | |
150 | if key not in r2params: |
|
150 | if key not in r2params: | |
151 | return False |
|
151 | return False | |
152 | value = r1params[key][0] |
|
152 | value = r1params[key][0] | |
153 | # we want to compare json payloads without worrying about ordering |
|
153 | # we want to compare json payloads without worrying about ordering | |
154 | if value.startswith(b'{') and value.endswith(b'}'): |
|
154 | if value.startswith(b'{') and value.endswith(b'}'): | |
155 | r1json = json.loads(value) |
|
155 | r1json = pycompat.json_loads(value) | |
156 | r2json = json.loads(r2params[key][0]) |
|
156 | r2json = pycompat.json_loads(r2params[key][0]) | |
157 | if r1json != r2json: |
|
157 | if r1json != r2json: | |
158 | return False |
|
158 | return False | |
159 | elif r2params[key][0] != value: |
|
159 | elif r2params[key][0] != value: | |
160 | return False |
|
160 | return False | |
161 | return True |
|
161 | return True | |
162 |
|
162 | |||
163 | def sanitiserequest(request): |
|
163 | def sanitiserequest(request): | |
164 | request.body = re.sub( |
|
164 | request.body = re.sub( | |
165 | br'cli-[a-z0-9]+', br'cli-hahayouwish', request.body |
|
165 | br'cli-[a-z0-9]+', br'cli-hahayouwish', request.body | |
166 | ) |
|
166 | ) | |
167 | return request |
|
167 | return request | |
168 |
|
168 | |||
169 | def sanitiseresponse(response): |
|
169 | def sanitiseresponse(response): | |
170 | if r'set-cookie' in response[r'headers']: |
|
170 | if r'set-cookie' in response[r'headers']: | |
171 | del response[r'headers'][r'set-cookie'] |
|
171 | del response[r'headers'][r'set-cookie'] | |
172 | return response |
|
172 | return response | |
173 |
|
173 | |||
174 | def decorate(fn): |
|
174 | def decorate(fn): | |
175 | def inner(*args, **kwargs): |
|
175 | def inner(*args, **kwargs): | |
176 | cassette = pycompat.fsdecode(kwargs.pop(r'test_vcr', None)) |
|
176 | cassette = pycompat.fsdecode(kwargs.pop(r'test_vcr', None)) | |
177 | if cassette: |
|
177 | if cassette: | |
178 | import hgdemandimport |
|
178 | import hgdemandimport | |
179 |
|
179 | |||
180 | with hgdemandimport.deactivated(): |
|
180 | with hgdemandimport.deactivated(): | |
181 | import vcr as vcrmod |
|
181 | import vcr as vcrmod | |
182 | import vcr.stubs as stubs |
|
182 | import vcr.stubs as stubs | |
183 |
|
183 | |||
184 | vcr = vcrmod.VCR( |
|
184 | vcr = vcrmod.VCR( | |
185 | serializer=r'json', |
|
185 | serializer=r'json', | |
186 | before_record_request=sanitiserequest, |
|
186 | before_record_request=sanitiserequest, | |
187 | before_record_response=sanitiseresponse, |
|
187 | before_record_response=sanitiseresponse, | |
188 | custom_patches=[ |
|
188 | custom_patches=[ | |
189 | ( |
|
189 | ( | |
190 | urlmod, |
|
190 | urlmod, | |
191 | r'httpconnection', |
|
191 | r'httpconnection', | |
192 | stubs.VCRHTTPConnection, |
|
192 | stubs.VCRHTTPConnection, | |
193 | ), |
|
193 | ), | |
194 | ( |
|
194 | ( | |
195 | urlmod, |
|
195 | urlmod, | |
196 | r'httpsconnection', |
|
196 | r'httpsconnection', | |
197 | stubs.VCRHTTPSConnection, |
|
197 | stubs.VCRHTTPSConnection, | |
198 | ), |
|
198 | ), | |
199 | ], |
|
199 | ], | |
200 | ) |
|
200 | ) | |
201 | vcr.register_matcher(r'hgmatcher', hgmatcher) |
|
201 | vcr.register_matcher(r'hgmatcher', hgmatcher) | |
202 | with vcr.use_cassette(cassette, match_on=[r'hgmatcher']): |
|
202 | with vcr.use_cassette(cassette, match_on=[r'hgmatcher']): | |
203 | return fn(*args, **kwargs) |
|
203 | return fn(*args, **kwargs) | |
204 | return fn(*args, **kwargs) |
|
204 | return fn(*args, **kwargs) | |
205 |
|
205 | |||
206 | inner.__name__ = fn.__name__ |
|
206 | inner.__name__ = fn.__name__ | |
207 | inner.__doc__ = fn.__doc__ |
|
207 | inner.__doc__ = fn.__doc__ | |
208 | return command( |
|
208 | return command( | |
209 | name, |
|
209 | name, | |
210 | fullflags, |
|
210 | fullflags, | |
211 | spec, |
|
211 | spec, | |
212 | helpcategory=helpcategory, |
|
212 | helpcategory=helpcategory, | |
213 | optionalrepo=optionalrepo, |
|
213 | optionalrepo=optionalrepo, | |
214 | )(inner) |
|
214 | )(inner) | |
215 |
|
215 | |||
216 | return decorate |
|
216 | return decorate | |
217 |
|
217 | |||
218 |
|
218 | |||
219 | def urlencodenested(params): |
|
219 | def urlencodenested(params): | |
220 | """like urlencode, but works with nested parameters. |
|
220 | """like urlencode, but works with nested parameters. | |
221 |
|
221 | |||
222 | For example, if params is {'a': ['b', 'c'], 'd': {'e': 'f'}}, it will be |
|
222 | For example, if params is {'a': ['b', 'c'], 'd': {'e': 'f'}}, it will be | |
223 | flattened to {'a[0]': 'b', 'a[1]': 'c', 'd[e]': 'f'} and then passed to |
|
223 | flattened to {'a[0]': 'b', 'a[1]': 'c', 'd[e]': 'f'} and then passed to | |
224 | urlencode. Note: the encoding is consistent with PHP's http_build_query. |
|
224 | urlencode. Note: the encoding is consistent with PHP's http_build_query. | |
225 | """ |
|
225 | """ | |
226 | flatparams = util.sortdict() |
|
226 | flatparams = util.sortdict() | |
227 |
|
227 | |||
228 | def process(prefix, obj): |
|
228 | def process(prefix, obj): | |
229 | if isinstance(obj, bool): |
|
229 | if isinstance(obj, bool): | |
230 | obj = {True: b'true', False: b'false'}[obj] # Python -> PHP form |
|
230 | obj = {True: b'true', False: b'false'}[obj] # Python -> PHP form | |
231 | lister = lambda l: [(b'%d' % k, v) for k, v in enumerate(l)] |
|
231 | lister = lambda l: [(b'%d' % k, v) for k, v in enumerate(l)] | |
232 | items = {list: lister, dict: lambda x: x.items()}.get(type(obj)) |
|
232 | items = {list: lister, dict: lambda x: x.items()}.get(type(obj)) | |
233 | if items is None: |
|
233 | if items is None: | |
234 | flatparams[prefix] = obj |
|
234 | flatparams[prefix] = obj | |
235 | else: |
|
235 | else: | |
236 | for k, v in items(obj): |
|
236 | for k, v in items(obj): | |
237 | if prefix: |
|
237 | if prefix: | |
238 | process(b'%s[%s]' % (prefix, k), v) |
|
238 | process(b'%s[%s]' % (prefix, k), v) | |
239 | else: |
|
239 | else: | |
240 | process(k, v) |
|
240 | process(k, v) | |
241 |
|
241 | |||
242 | process(b'', params) |
|
242 | process(b'', params) | |
243 | return util.urlreq.urlencode(flatparams) |
|
243 | return util.urlreq.urlencode(flatparams) | |
244 |
|
244 | |||
245 |
|
245 | |||
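
The flattening that urlencodenested() performs can be sketched in plain Python as follows (this standalone version uses a regular dict rather than util.sortdict, so it keeps only insertion order)::

    from urllib.parse import urlencode

    def flatten(prefix, obj, out):
        # Recursively rewrite nested lists/dicts into PHP-style bracketed keys.
        if isinstance(obj, list):
            items = [('%d' % i, v) for i, v in enumerate(obj)]
        elif isinstance(obj, dict):
            items = list(obj.items())
        else:
            out[prefix] = obj
            return
        for k, v in items:
            flatten('%s[%s]' % (prefix, k) if prefix else k, v, out)

    out = {}
    flatten('', {'a': ['b', 'c'], 'd': {'e': 'f'}}, out)
    # out == {'a[0]': 'b', 'a[1]': 'c', 'd[e]': 'f'}
    print(urlencode(out))  # -> a%5B0%5D=b&a%5B1%5D=c&d%5Be%5D=f
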
246 | def readurltoken(ui): |
|
246 | def readurltoken(ui): | |
247 | """return conduit url, token and make sure they exist |
|
247 | """return conduit url, token and make sure they exist | |
248 |
|
248 | |||
249 | Currently read from [auth] config section. In the future, it might |
|
249 | Currently read from [auth] config section. In the future, it might | |
250 | make sense to read from .arcconfig and .arcrc as well. |
|
250 | make sense to read from .arcconfig and .arcrc as well. | |
251 | """ |
|
251 | """ | |
252 | url = ui.config(b'phabricator', b'url') |
|
252 | url = ui.config(b'phabricator', b'url') | |
253 | if not url: |
|
253 | if not url: | |
254 | raise error.Abort( |
|
254 | raise error.Abort( | |
255 | _(b'config %s.%s is required') % (b'phabricator', b'url') |
|
255 | _(b'config %s.%s is required') % (b'phabricator', b'url') | |
256 | ) |
|
256 | ) | |
257 |
|
257 | |||
258 | res = httpconnectionmod.readauthforuri(ui, url, util.url(url).user) |
|
258 | res = httpconnectionmod.readauthforuri(ui, url, util.url(url).user) | |
259 | token = None |
|
259 | token = None | |
260 |
|
260 | |||
261 | if res: |
|
261 | if res: | |
262 | group, auth = res |
|
262 | group, auth = res | |
263 |
|
263 | |||
264 | ui.debug(b"using auth.%s.* for authentication\n" % group) |
|
264 | ui.debug(b"using auth.%s.* for authentication\n" % group) | |
265 |
|
265 | |||
266 | token = auth.get(b'phabtoken') |
|
266 | token = auth.get(b'phabtoken') | |
267 |
|
267 | |||
268 | if not token: |
|
268 | if not token: | |
269 | raise error.Abort( |
|
269 | raise error.Abort( | |
270 | _(b'Can\'t find conduit token associated with %s') % (url,) |
|
270 | _(b'Can\'t find conduit token associated with %s') % (url,) | |
271 | ) |
|
271 | ) | |
272 |
|
272 | |||
273 | return url, token |
|
273 | return url, token | |
274 |
|
274 | |||
275 |
|
275 | |||
276 | def callconduit(ui, name, params): |
|
276 | def callconduit(ui, name, params): | |
277 | """call Conduit API, params is a dict. return json.loads result, or None""" |
|
277 | """call Conduit API, params is a dict. return json.loads result, or None""" | |
278 | host, token = readurltoken(ui) |
|
278 | host, token = readurltoken(ui) | |
279 | url, authinfo = util.url(b'/'.join([host, b'api', name])).authinfo() |
|
279 | url, authinfo = util.url(b'/'.join([host, b'api', name])).authinfo() | |
280 | ui.debug(b'Conduit Call: %s %s\n' % (url, pycompat.byterepr(params))) |
|
280 | ui.debug(b'Conduit Call: %s %s\n' % (url, pycompat.byterepr(params))) | |
281 | params = params.copy() |
|
281 | params = params.copy() | |
282 | params[b'__conduit__'] = { |
|
282 | params[b'__conduit__'] = { | |
283 | b'token': token, |
|
283 | b'token': token, | |
284 | } |
|
284 | } | |
285 | rawdata = { |
|
285 | rawdata = { | |
286 | b'params': templatefilters.json(params), |
|
286 | b'params': templatefilters.json(params), | |
287 | b'output': b'json', |
|
287 | b'output': b'json', | |
288 | b'__conduit__': 1, |
|
288 | b'__conduit__': 1, | |
289 | } |
|
289 | } | |
290 | data = urlencodenested(rawdata) |
|
290 | data = urlencodenested(rawdata) | |
291 | curlcmd = ui.config(b'phabricator', b'curlcmd') |
|
291 | curlcmd = ui.config(b'phabricator', b'curlcmd') | |
292 | if curlcmd: |
|
292 | if curlcmd: | |
293 | sin, sout = procutil.popen2( |
|
293 | sin, sout = procutil.popen2( | |
294 | b'%s -d @- %s' % (curlcmd, procutil.shellquote(url)) |
|
294 | b'%s -d @- %s' % (curlcmd, procutil.shellquote(url)) | |
295 | ) |
|
295 | ) | |
296 | sin.write(data) |
|
296 | sin.write(data) | |
297 | sin.close() |
|
297 | sin.close() | |
298 | body = sout.read() |
|
298 | body = sout.read() | |
299 | else: |
|
299 | else: | |
300 | urlopener = urlmod.opener(ui, authinfo) |
|
300 | urlopener = urlmod.opener(ui, authinfo) | |
301 | request = util.urlreq.request(pycompat.strurl(url), data=data) |
|
301 | request = util.urlreq.request(pycompat.strurl(url), data=data) | |
302 | with contextlib.closing(urlopener.open(request)) as rsp: |
|
302 | with contextlib.closing(urlopener.open(request)) as rsp: | |
303 | body = rsp.read() |
|
303 | body = rsp.read() | |
304 | ui.debug(b'Conduit Response: %s\n' % body) |
|
304 | ui.debug(b'Conduit Response: %s\n' % body) | |
305 | parsed = pycompat.rapply( |
|
305 | parsed = pycompat.rapply( | |
306 | lambda x: encoding.unitolocal(x) |
|
306 | lambda x: encoding.unitolocal(x) | |
307 | if isinstance(x, pycompat.unicode) |
|
307 | if isinstance(x, pycompat.unicode) | |
308 | else x, |
|
308 | else x, | |
309 | # json.loads only accepts bytes from py3.6+ |
|
309 | # json.loads only accepts bytes from py3.6+ | |
310 | json.loads(encoding.unifromlocal(body)), |
|
310 | pycompat.json_loads(encoding.unifromlocal(body)), | |
311 | ) |
|
311 | ) | |
312 | if parsed.get(b'error_code'): |
|
312 | if parsed.get(b'error_code'): | |
313 | msg = _(b'Conduit Error (%s): %s') % ( |
|
313 | msg = _(b'Conduit Error (%s): %s') % ( | |
314 | parsed[b'error_code'], |
|
314 | parsed[b'error_code'], | |
315 | parsed[b'error_info'], |
|
315 | parsed[b'error_info'], | |
316 | ) |
|
316 | ) | |
317 | raise error.Abort(msg) |
|
317 | raise error.Abort(msg) | |
318 | return parsed[b'result'] |
|
318 | return parsed[b'result'] | |
319 |
|
319 | |||
320 |
|
320 | |||
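
A typical use of callconduit() looks like the sketch below; the revision ID is made up, and ``differential.querydiffs`` is the same Conduit method invoked later in this file::

    # Sketch: inside an extension, with a ui object in hand.
    result = callconduit(
        ui, b'differential.querydiffs', {b'revisionIDs': [1234]}
    )
    # callconduit() returns the parsed b'result' payload, or raises
    # error.Abort when the response carries an error_code.
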
321 | @vcrcommand(b'debugcallconduit', [], _(b'METHOD'), optionalrepo=True) |
|
321 | @vcrcommand(b'debugcallconduit', [], _(b'METHOD'), optionalrepo=True) | |
322 | def debugcallconduit(ui, repo, name): |
|
322 | def debugcallconduit(ui, repo, name): | |
323 | """call Conduit API |
|
323 | """call Conduit API | |
324 |
|
324 | |||
325 | Call parameters are read from stdin as a JSON blob. Result will be written |
|
325 | Call parameters are read from stdin as a JSON blob. Result will be written | |
326 | to stdout as a JSON blob. |
|
326 | to stdout as a JSON blob. | |
327 | """ |
|
327 | """ | |
328 | # json.loads only accepts bytes from 3.6+ |
|
328 | # json.loads only accepts bytes from 3.6+ | |
329 | rawparams = encoding.unifromlocal(ui.fin.read()) |
|
329 | rawparams = encoding.unifromlocal(ui.fin.read()) | |
330 | # json.loads only returns unicode strings |
|
330 | # json.loads only returns unicode strings | |
331 | params = pycompat.rapply( |
|
331 | params = pycompat.rapply( | |
332 | lambda x: encoding.unitolocal(x) |
|
332 | lambda x: encoding.unitolocal(x) | |
333 | if isinstance(x, pycompat.unicode) |
|
333 | if isinstance(x, pycompat.unicode) | |
334 | else x, |
|
334 | else x, | |
335 | json.loads(rawparams), |
|
335 | pycompat.json_loads(rawparams), | |
336 | ) |
|
336 | ) | |
337 | # json.dumps only accepts unicode strings |
|
337 | # json.dumps only accepts unicode strings | |
338 | result = pycompat.rapply( |
|
338 | result = pycompat.rapply( | |
339 | lambda x: encoding.unifromlocal(x) if isinstance(x, bytes) else x, |
|
339 | lambda x: encoding.unifromlocal(x) if isinstance(x, bytes) else x, | |
340 | callconduit(ui, name, params), |
|
340 | callconduit(ui, name, params), | |
341 | ) |
|
341 | ) | |
342 | s = json.dumps(result, sort_keys=True, indent=2, separators=(u',', u': ')) |
|
342 | s = json.dumps(result, sort_keys=True, indent=2, separators=(u',', u': ')) | |
343 | ui.write(b'%s\n' % encoding.unitolocal(s)) |
|
343 | ui.write(b'%s\n' % encoding.unitolocal(s)) | |
344 |
|
344 | |||
345 |
|
345 | |||
346 | def getrepophid(repo): |
|
346 | def getrepophid(repo): | |
347 | """given callsign, return repository PHID or None""" |
|
347 | """given callsign, return repository PHID or None""" | |
348 | # developer config: phabricator.repophid |
|
348 | # developer config: phabricator.repophid | |
349 | repophid = repo.ui.config(b'phabricator', b'repophid') |
|
349 | repophid = repo.ui.config(b'phabricator', b'repophid') | |
350 | if repophid: |
|
350 | if repophid: | |
351 | return repophid |
|
351 | return repophid | |
352 | callsign = repo.ui.config(b'phabricator', b'callsign') |
|
352 | callsign = repo.ui.config(b'phabricator', b'callsign') | |
353 | if not callsign: |
|
353 | if not callsign: | |
354 | return None |
|
354 | return None | |
355 | query = callconduit( |
|
355 | query = callconduit( | |
356 | repo.ui, |
|
356 | repo.ui, | |
357 | b'diffusion.repository.search', |
|
357 | b'diffusion.repository.search', | |
358 | {b'constraints': {b'callsigns': [callsign]}}, |
|
358 | {b'constraints': {b'callsigns': [callsign]}}, | |
359 | ) |
|
359 | ) | |
360 | if len(query[b'data']) == 0: |
|
360 | if len(query[b'data']) == 0: | |
361 | return None |
|
361 | return None | |
362 | repophid = query[b'data'][0][b'phid'] |
|
362 | repophid = query[b'data'][0][b'phid'] | |
363 | repo.ui.setconfig(b'phabricator', b'repophid', repophid) |
|
363 | repo.ui.setconfig(b'phabricator', b'repophid', repophid) | |
364 | return repophid |
|
364 | return repophid | |
365 |
|
365 | |||
366 |
|
366 | |||
367 | _differentialrevisiontagre = re.compile(br'\AD([1-9][0-9]*)\Z') |
|
367 | _differentialrevisiontagre = re.compile(br'\AD([1-9][0-9]*)\Z') | |
368 | _differentialrevisiondescre = re.compile( |
|
368 | _differentialrevisiondescre = re.compile( | |
369 | br'^Differential Revision:\s*(?P<url>(?:.*)D(?P<id>[1-9][0-9]*))$', re.M |
|
369 | br'^Differential Revision:\s*(?P<url>(?:.*)D(?P<id>[1-9][0-9]*))$', re.M | |
370 | ) |
|
370 | ) | |
371 |
|
371 | |||
372 |
|
372 | |||
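
To illustrate what the two patterns above accept (the example strings are invented, reusing the phab.example.com host from the module docstring)::

    m = _differentialrevisiontagre.match(b'D1234')
    # matches: m.group(1) == b'1234'; b'D0123' does not (no leading zeros)

    desc = b'Differential Revision: https://phab.example.com/D1234'
    m = _differentialrevisiondescre.search(desc)
    # m.group('url') == b'https://phab.example.com/D1234'
    # m.group('id') == b'1234'
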
373 | def getoldnodedrevmap(repo, nodelist): |
|
373 | def getoldnodedrevmap(repo, nodelist): | |
374 | """find previous nodes that has been sent to Phabricator |
|
374 | """find previous nodes that has been sent to Phabricator | |
375 |
|
375 | |||
376 | return {node: (oldnode, Differential diff, Differential Revision ID)} |
|
376 | return {node: (oldnode, Differential diff, Differential Revision ID)} | |
377 | for node in nodelist with known previous sent versions, or associated |
|
377 | for node in nodelist with known previous sent versions, or associated | |
378 | Differential Revision IDs. ``oldnode`` and ``Differential diff`` could |
|
378 | Differential Revision IDs. ``oldnode`` and ``Differential diff`` could | |
379 | be ``None``. |
|
379 | be ``None``. | |
380 |
|
380 | |||
381 | Examines commit messages like "Differential Revision:" to get the |
|
381 | Examines commit messages like "Differential Revision:" to get the | |
382 | association information. |
|
382 | association information. | |
383 |
|
383 | |||
384 | If such a commit message line is not found, examines all precursors and their |
|
384 | If such a commit message line is not found, examines all precursors and their | |
385 | tags. Tags with a format like "D1234" are considered a match and the node |
|
385 | tags. Tags with a format like "D1234" are considered a match and the node | |
386 | with that tag, and the number after "D" (ex. 1234) will be returned. |
|
386 | with that tag, and the number after "D" (ex. 1234) will be returned. | |
387 |
|
387 | |||
388 | The ``old node``, if not None, is guaranteed to be the last diff of the |
|
388 | The ``old node``, if not None, is guaranteed to be the last diff of the | |
389 | corresponding Differential Revision, and to exist in the repo. |
|
389 | corresponding Differential Revision, and to exist in the repo. | |
390 | """ |
|
390 | """ | |
391 | unfi = repo.unfiltered() |
|
391 | unfi = repo.unfiltered() | |
392 | nodemap = unfi.changelog.nodemap |
|
392 | nodemap = unfi.changelog.nodemap | |
393 |
|
393 | |||
394 | result = {} # {node: (oldnode?, lastdiff?, drev)} |
|
394 | result = {} # {node: (oldnode?, lastdiff?, drev)} | |
395 | toconfirm = {} # {node: (force, {precnode}, drev)} |
|
395 | toconfirm = {} # {node: (force, {precnode}, drev)} | |
396 | for node in nodelist: |
|
396 | for node in nodelist: | |
397 | ctx = unfi[node] |
|
397 | ctx = unfi[node] | |
398 | # For tags like "D123", put them into "toconfirm" to verify later |
|
398 | # For tags like "D123", put them into "toconfirm" to verify later | |
399 | precnodes = list(obsutil.allpredecessors(unfi.obsstore, [node])) |
|
399 | precnodes = list(obsutil.allpredecessors(unfi.obsstore, [node])) | |
400 | for n in precnodes: |
|
400 | for n in precnodes: | |
401 | if n in nodemap: |
|
401 | if n in nodemap: | |
402 | for tag in unfi.nodetags(n): |
|
402 | for tag in unfi.nodetags(n): | |
403 | m = _differentialrevisiontagre.match(tag) |
|
403 | m = _differentialrevisiontagre.match(tag) | |
404 | if m: |
|
404 | if m: | |
405 | toconfirm[node] = (0, set(precnodes), int(m.group(1))) |
|
405 | toconfirm[node] = (0, set(precnodes), int(m.group(1))) | |
406 | continue |
|
406 | continue | |
407 |
|
407 | |||
408 | # Check commit message |
|
408 | # Check commit message | |
409 | m = _differentialrevisiondescre.search(ctx.description()) |
|
409 | m = _differentialrevisiondescre.search(ctx.description()) | |
410 | if m: |
|
410 | if m: | |
411 | toconfirm[node] = (1, set(precnodes), int(m.group(r'id'))) |
|
411 | toconfirm[node] = (1, set(precnodes), int(m.group(r'id'))) | |
412 |
|
412 | |||
413 | # Double check if tags are genuine by collecting all old nodes from |
|
413 | # Double check if tags are genuine by collecting all old nodes from | |
414 | # Phabricator, and expect precursors to overlap with it. |
|
414 | # Phabricator, and expect precursors to overlap with it. | |
415 | if toconfirm: |
|
415 | if toconfirm: | |
416 | drevs = [drev for force, precs, drev in toconfirm.values()] |
|
416 | drevs = [drev for force, precs, drev in toconfirm.values()] | |
417 | alldiffs = callconduit( |
|
417 | alldiffs = callconduit( | |
418 | unfi.ui, b'differential.querydiffs', {b'revisionIDs': drevs} |
|
418 | unfi.ui, b'differential.querydiffs', {b'revisionIDs': drevs} | |
419 | ) |
|
419 | ) | |
420 | getnode = lambda d: bin(getdiffmeta(d).get(b'node', b'')) or None |
|
420 | getnode = lambda d: bin(getdiffmeta(d).get(b'node', b'')) or None | |
421 | for newnode, (force, precset, drev) in toconfirm.items(): |
|
421 | for newnode, (force, precset, drev) in toconfirm.items(): | |
422 | diffs = [ |
|
422 | diffs = [ | |
423 | d for d in alldiffs.values() if int(d[b'revisionID']) == drev |
|
423 | d for d in alldiffs.values() if int(d[b'revisionID']) == drev | |
424 | ] |
|
424 | ] | |
425 |
|
425 | |||
426 | # "precursors" as known by Phabricator |
|
426 | # "precursors" as known by Phabricator | |
427 | phprecset = set(getnode(d) for d in diffs) |
|
427 | phprecset = set(getnode(d) for d in diffs) | |
428 |
|
428 | |||
429 | # Ignore if precursors (Phabricator and local repo) do not overlap, |
|
429 | # Ignore if precursors (Phabricator and local repo) do not overlap, | |
430 | # and force is not set (when commit message says nothing) |
|
430 | # and force is not set (when commit message says nothing) | |
431 | if not force and not bool(phprecset & precset): |
|
431 | if not force and not bool(phprecset & precset): | |
432 | tagname = b'D%d' % drev |
|
432 | tagname = b'D%d' % drev | |
433 | tags.tag( |
|
433 | tags.tag( | |
434 | repo, |
|
434 | repo, | |
435 | tagname, |
|
435 | tagname, | |
436 | nullid, |
|
436 | nullid, | |
437 | message=None, |
|
437 | message=None, | |
438 | user=None, |
|
438 | user=None, | |
439 | date=None, |
|
439 | date=None, | |
440 | local=True, |
|
440 | local=True, | |
441 | ) |
|
441 | ) | |
442 | unfi.ui.warn( |
|
442 | unfi.ui.warn( | |
443 | _( |
|
443 | _( | |
444 | b'D%s: local tag removed - does not match ' |
|
444 | b'D%s: local tag removed - does not match ' | |
445 | b'Differential history\n' |
|
445 | b'Differential history\n' | |
446 | ) |
|
446 | ) | |
447 | % drev |
|
447 | % drev | |
448 | ) |
|
448 | ) | |
449 | continue |
|
449 | continue | |
450 |
|
450 | |||
451 | # Find the last node using Phabricator metadata, and make sure it |
|
451 | # Find the last node using Phabricator metadata, and make sure it | |
452 | # exists in the repo |
|
452 | # exists in the repo | |
453 | oldnode = lastdiff = None |
|
453 | oldnode = lastdiff = None | |
454 | if diffs: |
|
454 | if diffs: | |
455 | lastdiff = max(diffs, key=lambda d: int(d[b'id'])) |
|
455 | lastdiff = max(diffs, key=lambda d: int(d[b'id'])) | |
456 | oldnode = getnode(lastdiff) |
|
456 | oldnode = getnode(lastdiff) | |
457 | if oldnode and oldnode not in nodemap: |
|
457 | if oldnode and oldnode not in nodemap: | |
458 | oldnode = None |
|
458 | oldnode = None | |
459 |
|
459 | |||
460 | result[newnode] = (oldnode, lastdiff, drev) |
|
460 | result[newnode] = (oldnode, lastdiff, drev) | |
461 |
|
461 | |||
462 | return result |
|
462 | return result | |
463 |
|
463 | |||
464 |
|
464 | |||
465 | def getdiff(ctx, diffopts): |
|
465 | def getdiff(ctx, diffopts): | |
466 | """plain-text diff without header (user, commit message, etc)""" |
|
466 | """plain-text diff without header (user, commit message, etc)""" | |
467 | output = util.stringio() |
|
467 | output = util.stringio() | |
468 | for chunk, _label in patch.diffui( |
|
468 | for chunk, _label in patch.diffui( | |
469 | ctx.repo(), ctx.p1().node(), ctx.node(), None, opts=diffopts |
|
469 | ctx.repo(), ctx.p1().node(), ctx.node(), None, opts=diffopts | |
470 | ): |
|
470 | ): | |
471 | output.write(chunk) |
|
471 | output.write(chunk) | |
472 | return output.getvalue() |
|
472 | return output.getvalue() | |
473 |
|
473 | |||
474 |
|
474 | |||
475 | class DiffChangeType(object): |
|
475 | class DiffChangeType(object): | |
476 | ADD = 1 |
|
476 | ADD = 1 | |
477 | CHANGE = 2 |
|
477 | CHANGE = 2 | |
478 | DELETE = 3 |
|
478 | DELETE = 3 | |
479 | MOVE_AWAY = 4 |
|
479 | MOVE_AWAY = 4 | |
480 | COPY_AWAY = 5 |
|
480 | COPY_AWAY = 5 | |
481 | MOVE_HERE = 6 |
|
481 | MOVE_HERE = 6 | |
482 | COPY_HERE = 7 |
|
482 | COPY_HERE = 7 | |
483 | MULTICOPY = 8 |
|
483 | MULTICOPY = 8 | |
484 |
|
484 | |||
485 |
|
485 | |||
486 | class DiffFileType(object): |
|
486 | class DiffFileType(object): | |
487 | TEXT = 1 |
|
487 | TEXT = 1 | |
488 | IMAGE = 2 |
|
488 | IMAGE = 2 | |
489 | BINARY = 3 |
|
489 | BINARY = 3 | |
490 |
|
490 | |||
491 |
|
491 | |||
492 | @attr.s |
|
492 | @attr.s | |
493 | class phabhunk(dict): |
|
493 | class phabhunk(dict): | |
494 | """Represents a Differential hunk, which is owned by a Differential change |
|
494 | """Represents a Differential hunk, which is owned by a Differential change | |
495 | """ |
|
495 | """ | |
496 |
|
496 | |||
497 | oldOffset = attr.ib(default=0) # camelcase-required |
|
497 | oldOffset = attr.ib(default=0) # camelcase-required | |
498 | oldLength = attr.ib(default=0) # camelcase-required |
|
498 | oldLength = attr.ib(default=0) # camelcase-required | |
499 | newOffset = attr.ib(default=0) # camelcase-required |
|
499 | newOffset = attr.ib(default=0) # camelcase-required | |
500 | newLength = attr.ib(default=0) # camelcase-required |
|
500 | newLength = attr.ib(default=0) # camelcase-required | |
501 | corpus = attr.ib(default='') |
|
501 | corpus = attr.ib(default='') | |
502 | # These get added to the phabchange's equivalents |
|
502 | # These get added to the phabchange's equivalents | |
503 | addLines = attr.ib(default=0) # camelcase-required |
|
503 | addLines = attr.ib(default=0) # camelcase-required | |
504 | delLines = attr.ib(default=0) # camelcase-required |
|
504 | delLines = attr.ib(default=0) # camelcase-required | |
505 |
|
505 | |||
506 |
|
506 | |||
@attr.s
class phabchange(object):
    """Represents a Differential change, which owns Differential hunks and is
    owned by a Differential diff.  Each one represents one file in a diff.
    """

    currentPath = attr.ib(default=None)  # camelcase-required
    oldPath = attr.ib(default=None)  # camelcase-required
    awayPaths = attr.ib(default=attr.Factory(list))  # camelcase-required
    metadata = attr.ib(default=attr.Factory(dict))
    oldProperties = attr.ib(default=attr.Factory(dict))  # camelcase-required
    newProperties = attr.ib(default=attr.Factory(dict))  # camelcase-required
    type = attr.ib(default=DiffChangeType.CHANGE)
    fileType = attr.ib(default=DiffFileType.TEXT)  # camelcase-required
    commitHash = attr.ib(default=None)  # camelcase-required
    addLines = attr.ib(default=0)  # camelcase-required
    delLines = attr.ib(default=0)  # camelcase-required
    hunks = attr.ib(default=attr.Factory(list))

    def copynewmetadatatoold(self):
        for key in list(self.metadata.keys()):
            newkey = key.replace(b'new:', b'old:')
            self.metadata[newkey] = self.metadata[key]

    def addoldmode(self, value):
        self.oldProperties[b'unix:filemode'] = value

    def addnewmode(self, value):
        self.newProperties[b'unix:filemode'] = value

    def addhunk(self, hunk):
        if not isinstance(hunk, phabhunk):
            raise error.Abort(b'phabchange.addhunk only takes phabhunks')
        self.hunks.append(pycompat.byteskwargs(attr.asdict(hunk)))
        # It's useful to include these stats since the Phab web UI shows them,
        # and uses them to estimate how large a change a Revision is. Also used
        # in email subjects for the [+++--] bit.
        self.addLines += hunk.addLines
        self.delLines += hunk.delLines


@attr.s
class phabdiff(object):
    """Represents a Differential diff, owns Differential changes.  Corresponds
    to a commit.
    """

    # Doesn't seem to be any reason to send this (output of uname -n)
    sourceMachine = attr.ib(default=b'')  # camelcase-required
    sourcePath = attr.ib(default=b'/')  # camelcase-required
    sourceControlBaseRevision = attr.ib(default=b'0' * 40)  # camelcase-required
    sourceControlPath = attr.ib(default=b'/')  # camelcase-required
    sourceControlSystem = attr.ib(default=b'hg')  # camelcase-required
    branch = attr.ib(default=b'default')
    bookmark = attr.ib(default=None)
    creationMethod = attr.ib(default=b'phabsend')  # camelcase-required
    lintStatus = attr.ib(default=b'none')  # camelcase-required
    unitStatus = attr.ib(default=b'none')  # camelcase-required
    changes = attr.ib(default=attr.Factory(dict))
    repositoryPHID = attr.ib(default=None)  # camelcase-required

    def addchange(self, change):
        if not isinstance(change, phabchange):
            raise error.Abort(b'phabdiff.addchange only takes phabchanges')
        self.changes[change.currentPath] = pycompat.byteskwargs(
            attr.asdict(change)
        )


def maketext(pchange, ctx, fname):
    """populate the phabchange for a text file"""
    repo = ctx.repo()
    fmatcher = match.exact([fname])
    diffopts = mdiff.diffopts(git=True, context=32767)
    _pfctx, _fctx, header, fhunks = next(
        patch.diffhunks(repo, ctx.p1(), ctx, fmatcher, opts=diffopts)
    )

    for fhunk in fhunks:
        (oldOffset, oldLength, newOffset, newLength), lines = fhunk
        corpus = b''.join(lines[1:])
        shunk = list(header)
        shunk.extend(lines)
        _mf, _mt, addLines, delLines, _hb = patch.diffstatsum(
            patch.diffstatdata(util.iterlines(shunk))
        )
        pchange.addhunk(
            phabhunk(
                oldOffset,
                oldLength,
                newOffset,
                newLength,
                corpus,
                addLines,
                delLines,
            )
        )


def uploadchunks(fctx, fphid):
    """upload large binary files as separate chunks.
    Phab requests chunking over 8MiB, and splits into 4MiB chunks
    """
    ui = fctx.repo().ui
    chunks = callconduit(ui, b'file.querychunks', {b'filePHID': fphid})
    progress = ui.makeprogress(
        _(b'uploading file chunks'), unit=_(b'chunks'), total=len(chunks)
    )
    for chunk in chunks:
        progress.increment()
        if chunk[b'complete']:
            continue
        bstart = int(chunk[b'byteStart'])
        bend = int(chunk[b'byteEnd'])
        callconduit(
            ui,
            b'file.uploadchunk',
            {
                b'filePHID': fphid,
                b'byteStart': bstart,
                b'data': base64.b64encode(fctx.data()[bstart:bend]),
                b'dataEncoding': b'base64',
            },
        )
    progress.complete()


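# Illustrative note on uploadchunks() above, not executed by the extension:
# given the 4MiB chunk size mentioned in the docstring, the byte ranges that
# file.querychunks hands back line up like this for a hypothetical 10MiB file:
#
#     _CHUNKSIZE = 4 * 1024 * 1024
#     filesize = 10 * 1024 * 1024
#     ranges = [
#         (start, min(start + _CHUNKSIZE, filesize))
#         for start in range(0, filesize, _CHUNKSIZE)
#     ]
#     # -> [(0, 4194304), (4194304, 8388608), (8388608, 10485760)]

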
def uploadfile(fctx):
    """upload binary files to Phabricator"""
    repo = fctx.repo()
    ui = repo.ui
    fname = fctx.path()
    size = fctx.size()
    fhash = pycompat.bytestr(hashlib.sha256(fctx.data()).hexdigest())

    # an allocate call is required first to see if an upload is even required
    # (Phab might already have it) and to determine if chunking is needed
    allocateparams = {
        b'name': fname,
        b'contentLength': size,
        b'contentHash': fhash,
    }
    filealloc = callconduit(ui, b'file.allocate', allocateparams)
    fphid = filealloc[b'filePHID']

    if filealloc[b'upload']:
        ui.write(_(b'uploading %s\n') % bytes(fctx))
        if not fphid:
            uploadparams = {
                b'name': fname,
                b'data_base64': base64.b64encode(fctx.data()),
            }
            fphid = callconduit(ui, b'file.upload', uploadparams)
        else:
            uploadchunks(fctx, fphid)
    else:
        ui.debug(b'server already has %s\n' % bytes(fctx))

    if not fphid:
        raise error.Abort(b'Upload of %s failed.' % bytes(fctx))

    return fphid


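# For reference, the file.allocate exchange in uploadfile() above typically
# plays out as follows (values are illustrative, not taken from a real server):
#
#     params: {'name': 'img.png', 'contentLength': 9000000, 'contentHash': ...}
#     {'upload': True,  'filePHID': None}            -> single file.upload call
#     {'upload': True,  'filePHID': 'PHID-FILE-xx'}  -> uploadchunks()
#     {'upload': False, 'filePHID': 'PHID-FILE-xx'}  -> server already has it

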
def addoldbinary(pchange, fctx, originalfname):
    """add the metadata for the previous version of a binary file to the
    phabchange for the new version
    """
    oldfctx = fctx.p1()[originalfname]
    if fctx.cmp(oldfctx):
        # Files differ, add the old one
        pchange.metadata[b'old:file:size'] = oldfctx.size()
        mimeguess, _enc = mimetypes.guess_type(
            encoding.unifromlocal(oldfctx.path())
        )
        if mimeguess:
            pchange.metadata[b'old:file:mime-type'] = pycompat.bytestr(
                mimeguess
            )
        fphid = uploadfile(oldfctx)
        pchange.metadata[b'old:binary-phid'] = fphid
    else:
        # If it's left as IMAGE/BINARY web UI might try to display it
        pchange.fileType = DiffFileType.TEXT
        pchange.copynewmetadatatoold()


def makebinary(pchange, fctx):
    """populate the phabchange for a binary file"""
    pchange.fileType = DiffFileType.BINARY
    fphid = uploadfile(fctx)
    pchange.metadata[b'new:binary-phid'] = fphid
    pchange.metadata[b'new:file:size'] = fctx.size()
    mimeguess, _enc = mimetypes.guess_type(encoding.unifromlocal(fctx.path()))
    if mimeguess:
        mimeguess = pycompat.bytestr(mimeguess)
        pchange.metadata[b'new:file:mime-type'] = mimeguess
        if mimeguess.startswith(b'image/'):
            pchange.fileType = DiffFileType.IMAGE


# Copied from mercurial/patch.py
gitmode = {b'l': b'120000', b'x': b'100755', b'': b'100644'}


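# e.g. gitmode[b'x'] == b'100755' (executable), gitmode[b'l'] == b'120000'
# (symlink), and gitmode[b''] == b'100644' (regular file)

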
def notutf8(fctx):
    """detect non-UTF-8 text files since Phabricator requires them to be marked
    as binary
    """
    try:
        fctx.data().decode('utf-8')
        if fctx.parents():
            fctx.p1().data().decode('utf-8')
        return False
    except UnicodeDecodeError:
        fctx.repo().ui.write(
            _(b'file %s detected as non-UTF-8, marked as binary\n')
            % fctx.path()
        )
        return True


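# A minimal standalone illustration of the decode check in notutf8() above:
#
#     b'caf\xc3\xa9'.decode('utf-8')  # valid UTF-8 -> kept as text
#     b'caf\xe9'.decode('utf-8')      # raises UnicodeDecodeError -> binary

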
def addremoved(pdiff, ctx, removed):
    """add removed files to the phabdiff. Shouldn't include moves"""
    for fname in removed:
        pchange = phabchange(
            currentPath=fname, oldPath=fname, type=DiffChangeType.DELETE
        )
        pchange.addoldmode(gitmode[ctx.p1()[fname].flags()])
        fctx = ctx.p1()[fname]
        if not (fctx.isbinary() or notutf8(fctx)):
            maketext(pchange, ctx, fname)

        pdiff.addchange(pchange)


def addmodified(pdiff, ctx, modified):
    """add modified files to the phabdiff"""
    for fname in modified:
        fctx = ctx[fname]
        pchange = phabchange(currentPath=fname, oldPath=fname)
        filemode = gitmode[ctx[fname].flags()]
        originalmode = gitmode[ctx.p1()[fname].flags()]
        if filemode != originalmode:
            pchange.addoldmode(originalmode)
            pchange.addnewmode(filemode)

        if fctx.isbinary() or notutf8(fctx):
            makebinary(pchange, fctx)
            addoldbinary(pchange, fctx, fname)
        else:
            maketext(pchange, ctx, fname)

        pdiff.addchange(pchange)


def addadded(pdiff, ctx, added, removed):
    """add file adds to the phabdiff, both new files and copies/moves"""
    # Keep track of files that've been recorded as moved/copied, so if there are
    # additional copies we can mark them (moves get removed from removed)
    copiedchanges = {}
    movedchanges = {}
    for fname in added:
        fctx = ctx[fname]
        pchange = phabchange(currentPath=fname)

        filemode = gitmode[ctx[fname].flags()]
        renamed = fctx.renamed()

        if renamed:
            originalfname = renamed[0]
            originalmode = gitmode[ctx.p1()[originalfname].flags()]
            pchange.oldPath = originalfname

            if originalfname in removed:
                origpchange = phabchange(
                    currentPath=originalfname,
                    oldPath=originalfname,
                    type=DiffChangeType.MOVE_AWAY,
                    awayPaths=[fname],
                )
                movedchanges[originalfname] = origpchange
                removed.remove(originalfname)
                pchange.type = DiffChangeType.MOVE_HERE
            elif originalfname in movedchanges:
                movedchanges[originalfname].type = DiffChangeType.MULTICOPY
                movedchanges[originalfname].awayPaths.append(fname)
                pchange.type = DiffChangeType.COPY_HERE
            else:  # pure copy
                if originalfname not in copiedchanges:
                    origpchange = phabchange(
                        currentPath=originalfname, type=DiffChangeType.COPY_AWAY
                    )
                    copiedchanges[originalfname] = origpchange
                else:
                    origpchange = copiedchanges[originalfname]
                origpchange.awayPaths.append(fname)
                pchange.type = DiffChangeType.COPY_HERE

            if filemode != originalmode:
                pchange.addoldmode(originalmode)
                pchange.addnewmode(filemode)
        else:  # Brand-new file
            pchange.addnewmode(gitmode[fctx.flags()])
            pchange.type = DiffChangeType.ADD

        if fctx.isbinary() or notutf8(fctx):
            makebinary(pchange, fctx)
            if renamed:
                addoldbinary(pchange, fctx, originalfname)
        else:
            maketext(pchange, ctx, fname)

        pdiff.addchange(pchange)

    for _path, copiedchange in copiedchanges.items():
        pdiff.addchange(copiedchange)
    for _path, movedchange in movedchanges.items():
        pdiff.addchange(movedchange)


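# In summary, addadded() classifies an added file as follows:
#
#     rename source is in `removed`      -> MOVE_HERE (source: MOVE_AWAY)
#     rename source already moved away   -> COPY_HERE (source: MULTICOPY)
#     rename source still tracked        -> COPY_HERE (source: COPY_AWAY)
#     no rename information              -> ADD

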
def creatediff(ctx):
    """create a Differential Diff"""
    repo = ctx.repo()
    repophid = getrepophid(repo)
    # Create a "Differential Diff" via "differential.creatediff" API
    pdiff = phabdiff(
        sourceControlBaseRevision=b'%s' % ctx.p1().hex(),
        branch=b'%s' % ctx.branch(),
    )
    modified, added, removed, _d, _u, _i, _c = ctx.p1().status(ctx)
    # addadded will remove moved files from removed, so addremoved won't get
    # them
    addadded(pdiff, ctx, added, removed)
    addmodified(pdiff, ctx, modified)
    addremoved(pdiff, ctx, removed)
    if repophid:
        pdiff.repositoryPHID = repophid
    diff = callconduit(
        repo.ui,
        b'differential.creatediff',
        pycompat.byteskwargs(attr.asdict(pdiff)),
    )
    if not diff:
        raise error.Abort(_(b'cannot create diff for %s') % ctx)
    return diff


def writediffproperties(ctx, diff):
    """write metadata to diff so patches could be applied losslessly"""
    # creatediff returns with a diffid but query returns with an id
    diffid = diff.get(b'diffid', diff.get(b'id'))
    params = {
        b'diff_id': diffid,
        b'name': b'hg:meta',
        b'data': templatefilters.json(
            {
                b'user': ctx.user(),
                b'date': b'%d %d' % ctx.date(),
                b'branch': ctx.branch(),
                b'node': ctx.hex(),
                b'parent': ctx.p1().hex(),
            }
        ),
    }
    callconduit(ctx.repo().ui, b'differential.setdiffproperty', params)

    params = {
        b'diff_id': diffid,
        b'name': b'local:commits',
        b'data': templatefilters.json(
            {
                ctx.hex(): {
                    b'author': stringutil.person(ctx.user()),
                    b'authorEmail': stringutil.email(ctx.user()),
                    b'time': int(ctx.date()[0]),
                    b'commit': ctx.hex(),
                    b'parents': [ctx.p1().hex()],
                    b'branch': ctx.branch(),
                },
            }
        ),
    }
    callconduit(ctx.repo().ui, b'differential.setdiffproperty', params)


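# The "hg:meta" property written above serializes to a JSON object along these
# lines (hashes and identity invented for illustration):
#
#     {"branch": "default", "date": "1581234567 0", "node": "3e1f27...",
#      "parent": "9b2c41...", "user": "Jane Doe <jane@example.com>"}

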
def createdifferentialrevision(
    ctx,
    revid=None,
    parentrevphid=None,
    oldnode=None,
    olddiff=None,
    actions=None,
    comment=None,
):
    """create or update a Differential Revision

    If revid is None, create a new Differential Revision, otherwise update
    revid. If parentrevphid is not None, set it as a dependency.

    If oldnode is not None, check if the patch content (without commit message
    and metadata) has changed before creating another diff.

    If actions is not None, they will be appended to the transaction.
    """
    repo = ctx.repo()
    if oldnode:
        diffopts = mdiff.diffopts(git=True, context=32767)
        oldctx = repo.unfiltered()[oldnode]
        neednewdiff = getdiff(ctx, diffopts) != getdiff(oldctx, diffopts)
    else:
        neednewdiff = True

    transactions = []
    if neednewdiff:
        diff = creatediff(ctx)
        transactions.append({b'type': b'update', b'value': diff[b'phid']})
        if comment:
            transactions.append({b'type': b'comment', b'value': comment})
    else:
        # Even if we don't need to upload a new diff because the patch content
        # does not change, we might still need to update its metadata so
        # pushers know the correct node metadata.
        assert olddiff
        diff = olddiff
    writediffproperties(ctx, diff)

    # Set the parent Revision every time, so commit re-ordering is picked-up
    if parentrevphid:
        transactions.append(
            {b'type': b'parents.set', b'value': [parentrevphid]}
        )

    if actions:
        transactions += actions

    # Parse the commit message and update related fields.
    desc = ctx.description()
    info = callconduit(
        repo.ui, b'differential.parsecommitmessage', {b'corpus': desc}
    )
    for k, v in info[b'fields'].items():
        if k in [b'title', b'summary', b'testPlan']:
            transactions.append({b'type': k, b'value': v})

    params = {b'transactions': transactions}
    if revid is not None:
        # Update an existing Differential Revision
        params[b'objectIdentifier'] = revid

    revision = callconduit(repo.ui, b'differential.revision.edit', params)
    if not revision:
        raise error.Abort(_(b'cannot create revision for %s') % ctx)

    return revision, diff


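# A typical `transactions` payload assembled by createdifferentialrevision()
# and sent to differential.revision.edit (PHIDs invented for illustration):
#
#     [{'type': 'update', 'value': 'PHID-DIFF-xxxx'},
#      {'type': 'parents.set', 'value': ['PHID-DREV-yyyy']},
#      {'type': 'title', 'value': 'my change'}]

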
def userphids(repo, names):
    """convert user names to PHIDs"""
    names = [name.lower() for name in names]
    query = {b'constraints': {b'usernames': names}}
    result = callconduit(repo.ui, b'user.search', query)
    # A username that is not found is not an API error, so check whether we
    # have missed any names here.
    data = result[b'data']
    resolved = set(entry[b'fields'][b'username'].lower() for entry in data)
    unresolved = set(names) - resolved
    if unresolved:
        raise error.Abort(
            _(b'unknown username: %s') % b' '.join(sorted(unresolved))
        )
    return [entry[b'phid'] for entry in data]


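# Example user.search exchange for userphids() (payloads are illustrative):
#
#     query:  {'constraints': {'usernames': ['alice', 'bob']}}
#     result: {'data': [{'phid': 'PHID-USER-aa', 'fields': {'username': 'alice'}},
#                       {'phid': 'PHID-USER-bb', 'fields': {'username': 'bob'}}]}

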
@vcrcommand(
    b'phabsend',
    [
        (b'r', b'rev', [], _(b'revisions to send'), _(b'REV')),
        (b'', b'amend', True, _(b'update commit messages')),
        (b'', b'reviewer', [], _(b'specify reviewers')),
        (b'', b'blocker', [], _(b'specify blocking reviewers')),
        (
            b'm',
            b'comment',
            b'',
            _(b'add a comment to Revisions with new/updated Diffs'),
        ),
        (b'', b'confirm', None, _(b'ask for confirmation before sending')),
    ],
    _(b'REV [OPTIONS]'),
    helpcategory=command.CATEGORY_IMPORT_EXPORT,
)
def phabsend(ui, repo, *revs, **opts):
    """upload changesets to Phabricator

    If multiple revisions are specified, they will be sent as a stack with a
    linear dependency relationship, using the order specified by the revset.

    When uploading changesets for the first time, local tags will be created
    to maintain the association. After the first time, phabsend will check
    obsstore and tags information so it can figure out whether to update an
    existing Differential Revision, or create a new one.

    If --amend is set, update commit messages so they have the
    ``Differential Revision`` URL, and remove the related tags. This is
    similar to what arcanist does, and is preferred in author-push workflows.
    Otherwise, use local tags to record the ``Differential Revision``
    association.

    The --confirm option lets you confirm changesets before sending them. You
    can also add the following to your configuration file to make it the
    default behaviour::

        [phabsend]
        confirm = true

    phabsend will check obsstore and the above association to decide whether
    to update an existing Differential Revision, or create a new one.
    """
    opts = pycompat.byteskwargs(opts)
    revs = list(revs) + opts.get(b'rev', [])
    revs = scmutil.revrange(repo, revs)

    if not revs:
        raise error.Abort(_(b'phabsend requires at least one changeset'))
    if opts.get(b'amend'):
        cmdutil.checkunfinished(repo)

    # {newnode: (oldnode, olddiff, olddrev)}
    oldmap = getoldnodedrevmap(repo, [repo[r].node() for r in revs])

    confirm = ui.configbool(b'phabsend', b'confirm')
    confirm |= bool(opts.get(b'confirm'))
    if confirm:
        confirmed = _confirmbeforesend(repo, revs, oldmap)
        if not confirmed:
            raise error.Abort(_(b'phabsend cancelled'))

    actions = []
    reviewers = opts.get(b'reviewer', [])
    blockers = opts.get(b'blocker', [])
    phids = []
    if reviewers:
        phids.extend(userphids(repo, reviewers))
    if blockers:
        phids.extend(
            map(lambda phid: b'blocking(%s)' % phid, userphids(repo, blockers))
        )
    if phids:
        actions.append({b'type': b'reviewers.add', b'value': phids})

    drevids = []  # [int]
    diffmap = {}  # {newnode: diff}

    # Send patches one by one so we know their Differential Revision PHIDs and
    # can provide the dependency relationship
    lastrevphid = None
    for rev in revs:
        ui.debug(b'sending rev %d\n' % rev)
        ctx = repo[rev]

        # Get Differential Revision ID
        oldnode, olddiff, revid = oldmap.get(ctx.node(), (None, None, None))
        if oldnode != ctx.node() or opts.get(b'amend'):
            # Create or update Differential Revision
            revision, diff = createdifferentialrevision(
                ctx,
                revid,
                lastrevphid,
                oldnode,
                olddiff,
                actions,
                opts.get(b'comment'),
            )
            diffmap[ctx.node()] = diff
            newrevid = int(revision[b'object'][b'id'])
            newrevphid = revision[b'object'][b'phid']
            if revid:
                action = b'updated'
            else:
                action = b'created'

            # Create a local tag to note the association, if the commit
            # message does not have it already
            m = _differentialrevisiondescre.search(ctx.description())
            if not m or int(m.group(r'id')) != newrevid:
                tagname = b'D%d' % newrevid
                tags.tag(
                    repo,
                    tagname,
                    ctx.node(),
                    message=None,
                    user=None,
                    date=None,
                    local=True,
                )
        else:
            # Nothing changed. But still set "newrevphid" so the next revision
            # can depend on this one, and "newrevid" for the summary line.
            newrevphid = querydrev(repo, b'%d' % revid)[0][b'phid']
            newrevid = revid
            action = b'skipped'

        actiondesc = ui.label(
            {
                b'created': _(b'created'),
                b'skipped': _(b'skipped'),
                b'updated': _(b'updated'),
            }[action],
            b'phabricator.action.%s' % action,
        )
        drevdesc = ui.label(b'D%d' % newrevid, b'phabricator.drev')
        nodedesc = ui.label(bytes(ctx), b'phabricator.node')
        desc = ui.label(ctx.description().split(b'\n')[0], b'phabricator.desc')
        ui.write(
            _(b'%s - %s - %s: %s\n') % (drevdesc, actiondesc, nodedesc, desc)
        )
        drevids.append(newrevid)
        lastrevphid = newrevphid

    # Update commit messages and remove tags
    if opts.get(b'amend'):
        unfi = repo.unfiltered()
        drevs = callconduit(ui, b'differential.query', {b'ids': drevids})
        with repo.wlock(), repo.lock(), repo.transaction(b'phabsend'):
            wnode = unfi[b'.'].node()
            mapping = {}  # {oldnode: [newnode]}
            for i, rev in enumerate(revs):
                old = unfi[rev]
                drevid = drevids[i]
                drev = [d for d in drevs if int(d[b'id']) == drevid][0]
                newdesc = getdescfromdrev(drev)
                # Make sure the commit message contains "Differential Revision"
                if old.description() != newdesc:
                    if old.phase() == phases.public:
                        ui.warn(
                            _(b"warning: not updating public commit %s\n")
                            % scmutil.formatchangeid(old)
                        )
                        continue
                    parents = [
                        mapping.get(old.p1().node(), (old.p1(),))[0],
                        mapping.get(old.p2().node(), (old.p2(),))[0],
                    ]
                    new = context.metadataonlyctx(
                        repo,
                        old,
                        parents=parents,
                        text=newdesc,
                        user=old.user(),
                        date=old.date(),
                        extra=old.extra(),
                    )

                    newnode = new.commit()

                    mapping[old.node()] = [newnode]
                    # Update the diff property
                    # If it fails, just warn and keep going, otherwise the DREV
                    # associations will be lost
                    try:
                        writediffproperties(unfi[newnode], diffmap[old.node()])
                    except util.urlerr.urlerror:
                        ui.warnnoi18n(
                            b'Failed to update metadata for D%d\n' % drevid
                        )
                # Remove the local tag since it's no longer necessary
                tagname = b'D%d' % drevid
                if tagname in repo.tags():
                    tags.tag(
                        repo,
                        tagname,
                        nullid,
                        message=None,
                        user=None,
                        date=None,
                        local=True,
                    )
            scmutil.cleanupnodes(repo, mapping, b'phabsend', fixphase=True)
            if wnode in mapping:
                unfi.setparents(mapping[wnode][0])


# Map from "hg:meta" keys to the header names understood by "hg import". The
# order is consistent with "hg export" output.
_metanamemap = util.sortdict(
    [
        (b'user', b'User'),
        (b'date', b'Date'),
        (b'branch', b'Branch'),
        (b'node', b'Node ID'),
        (b'parent', b'Parent '),
    ]
)


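# With this mapping, an "hg:meta" property renders into "hg import"-style
# headers such as (values invented for illustration):
#
#     # User Jane Doe <jane@example.com>
#     # Date 1581234567 0
#     # Branch default
#     # Node ID 3e1f27...
#     # Parent  9b2c41...

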
def _confirmbeforesend(repo, revs, oldmap):
    url, token = readurltoken(repo.ui)
    ui = repo.ui
    for rev in revs:
        ctx = repo[rev]
        desc = ctx.description().splitlines()[0]
        oldnode, olddiff, drevid = oldmap.get(ctx.node(), (None, None, None))
        if drevid:
            drevdesc = ui.label(b'D%d' % drevid, b'phabricator.drev')
        else:
            drevdesc = ui.label(_(b'NEW'), b'phabricator.drev')

        ui.write(
            _(b'%s - %s: %s\n')
            % (
                drevdesc,
                ui.label(bytes(ctx), b'phabricator.node'),
                ui.label(desc, b'phabricator.desc'),
            )
        )

    if ui.promptchoice(
        _(b'Send the above changes to %s (yn)?$$ &Yes $$ &No') % url
    ):
        return False

    return True


_knownstatusnames = {
    b'accepted',
    b'needsreview',
    b'needsrevision',
    b'closed',
    b'abandoned',
}


def _getstatusname(drev):
    """get normalized status name from a Differential Revision"""
    return drev[b'statusName'].replace(b' ', b'').lower()


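# e.g. a Differential Revision whose statusName is b'Needs Review' normalizes
# to b'needsreview', one of the _knownstatusnames above

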
# Small language to specify differential revisions. Supported symbols: (), :X,
# +, and -.

_elements = {
    # token-type: binding-strength, primary, prefix, infix, suffix
    b'(': (12, None, (b'group', 1, b')'), None, None),
    b':': (8, None, (b'ancestors', 8), None, None),
    b'&': (5, None, None, (b'and_', 5), None),
    b'+': (4, None, None, (b'add', 4), None),
    b'-': (4, None, None, (b'sub', 4), None),
    b')': (0, None, None, None, None),
    b'symbol': (0, b'symbol', None, None, None),
    b'end': (0, None, None, None, None),
}


1262 | def _tokenize(text): |
|
1262 | def _tokenize(text): | |
1263 | view = memoryview(text) # zero-copy slice |
|
1263 | view = memoryview(text) # zero-copy slice | |
1264 | special = b'():+-& ' |
|
1264 | special = b'():+-& ' | |
1265 | pos = 0 |
|
1265 | pos = 0 | |
1266 | length = len(text) |
|
1266 | length = len(text) | |
1267 | while pos < length: |
|
1267 | while pos < length: | |
1268 | symbol = b''.join( |
|
1268 | symbol = b''.join( | |
1269 | itertools.takewhile( |
|
1269 | itertools.takewhile( | |
1270 | lambda ch: ch not in special, pycompat.iterbytestr(view[pos:]) |
|
1270 | lambda ch: ch not in special, pycompat.iterbytestr(view[pos:]) | |
1271 | ) |
|
1271 | ) | |
1272 | ) |
|
1272 | ) | |
1273 | if symbol: |
|
1273 | if symbol: | |
1274 | yield (b'symbol', symbol, pos) |
|
1274 | yield (b'symbol', symbol, pos) | |
1275 | pos += len(symbol) |
|
1275 | pos += len(symbol) | |
1276 | else: # special char, ignore space |
|
1276 | else: # special char, ignore space | |
1277 | if text[pos : pos + 1] != b' ': |
|
1277 | if text[pos : pos + 1] != b' ': | |
1278 | yield (text[pos : pos + 1], None, pos) |
|
1278 | yield (text[pos : pos + 1], None, pos) | |
1279 | pos += 1 |
|
1279 | pos += 1 | |
1280 | yield (b'end', None, pos) |
|
1280 | yield (b'end', None, pos) | |
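
# For example, tokenizing the spec b':D4+D5' yields (token-type, value, pos)
# triples; the space character is skipped and b'end' is always emitted last:
#
#   >>> list(_tokenize(b':D4+D5'))
#   [(b':', None, 0), (b'symbol', b'D4', 1), (b'+', None, 3),
#    (b'symbol', b'D5', 4), (b'end', None, 6)]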


def _parse(text):
    tree, pos = parser.parser(_elements).parse(_tokenize(text))
    if pos != len(text):
        raise error.ParseError(b'invalid token', pos)
    return tree
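
# Continuing the example above: the prefix b':' (binding strength 8) binds
# tighter than the infix b'+' (4), so b':D4+D5' means "ancestors of D4, plus
# D5":
#
#   >>> _parse(b':D4+D5')
#   (b'add', (b'ancestors', (b'symbol', b'D4')), (b'symbol', b'D5'))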


def _parsedrev(symbol):
    """str -> int or None, ex. 'D45' -> 45; '12' -> 12; 'x' -> None"""
    if symbol.startswith(b'D') and symbol[1:].isdigit():
        return int(symbol[1:])
    if symbol.isdigit():
        return int(symbol)


def _prefetchdrevs(tree):
    """return ({single-drev-id}, {ancestor-drev-id}) to prefetch"""
    drevs = set()
    ancestordrevs = set()
    op = tree[0]
    if op == b'symbol':
        r = _parsedrev(tree[1])
        if r:
            drevs.add(r)
    elif op == b'ancestors':
        r, a = _prefetchdrevs(tree[1])
        drevs.update(r)
        ancestordrevs.update(r)
        ancestordrevs.update(a)
    else:
        for t in tree[1:]:
            r, a = _prefetchdrevs(t)
            drevs.update(r)
            ancestordrevs.update(a)
    return drevs, ancestordrevs
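
# For example, in b':D6+8' both D6 and D8 are requested individually, and D6
# additionally needs its ancestors resolved:
#
#   >>> drevs, ancestors = _prefetchdrevs(_parse(b':D6+8'))
#   >>> sorted(drevs), sorted(ancestors)
#   ([6, 8], [6])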


def querydrev(repo, spec):
    """return a list of "Differential Revision" dicts

    spec is a string using a simple query language, see docstring in phabread
    for details.

    A "Differential Revision dict" looks like:

        {
            "id": "2",
            "phid": "PHID-DREV-672qvysjcczopag46qty",
            "title": "example",
            "uri": "https://phab.example.com/D2",
            "dateCreated": "1499181406",
            "dateModified": "1499182103",
            "authorPHID": "PHID-USER-tv3ohwc4v4jeu34otlye",
            "status": "0",
            "statusName": "Needs Review",
            "properties": [],
            "branch": null,
            "summary": "",
            "testPlan": "",
            "lineCount": "2",
            "activeDiffPHID": "PHID-DIFF-xoqnjkobbm6k4dk6hi72",
            "diffs": [
                "3",
                "4",
            ],
            "commits": [],
            "reviewers": [],
            "ccs": [],
            "hashes": [],
            "auxiliary": {
                "phabricator:projects": [],
                "phabricator:depends-on": [
                    "PHID-DREV-gbapp366kutjebt7agcd"
                ]
            },
            "repositoryPHID": "PHID-REPO-hub2hx62ieuqeheznasv",
            "sourcePath": null
        }
    """

    def fetch(params):
        """params -> single drev or None"""
        key = (params.get(b'ids') or params.get(b'phids') or [None])[0]
        if key in prefetched:
            return prefetched[key]
        drevs = callconduit(repo.ui, b'differential.query', params)
        # Fill prefetched with the result
        for drev in drevs:
            prefetched[drev[b'phid']] = drev
            prefetched[int(drev[b'id'])] = drev
        if key not in prefetched:
            raise error.Abort(
                _(b'cannot get Differential Revision %r') % params
            )
        return prefetched[key]

    def getstack(topdrevids):
        """given a top, get a stack from the bottom, [id] -> [id]"""
        visited = set()
        result = []
        queue = [{b'ids': [i]} for i in topdrevids]
        while queue:
            params = queue.pop()
            drev = fetch(params)
            if drev[b'id'] in visited:
                continue
            visited.add(drev[b'id'])
            result.append(int(drev[b'id']))
            auxiliary = drev.get(b'auxiliary', {})
            depends = auxiliary.get(b'phabricator:depends-on', [])
            for phid in depends:
                queue.append({b'phids': [phid]})
        result.reverse()
        return smartset.baseset(result)

    # Initialize prefetch cache
    prefetched = {}  # {id or phid: drev}

    tree = _parse(spec)
    drevs, ancestordrevs = _prefetchdrevs(tree)

    # developer config: phabricator.batchsize
    batchsize = repo.ui.configint(b'phabricator', b'batchsize')

    # Prefetch Differential Revisions in batch
    tofetch = set(drevs)
    for r in ancestordrevs:
        tofetch.update(range(max(1, r - batchsize), r + 1))
    if drevs:
        fetch({b'ids': list(tofetch)})
    validids = sorted(set(getstack(list(ancestordrevs))) | set(drevs))

    # Walk through the tree, return smartsets
    def walk(tree):
        op = tree[0]
        if op == b'symbol':
            drev = _parsedrev(tree[1])
            if drev:
                return smartset.baseset([drev])
            elif tree[1] in _knownstatusnames:
                drevs = [
                    r
                    for r in validids
                    if _getstatusname(prefetched[r]) == tree[1]
                ]
                return smartset.baseset(drevs)
            else:
                raise error.Abort(_(b'unknown symbol: %s') % tree[1])
        elif op in {b'and_', b'add', b'sub'}:
            assert len(tree) == 3
            return getattr(operator, op)(walk(tree[1]), walk(tree[2]))
        elif op == b'group':
            return walk(tree[1])
        elif op == b'ancestors':
            return getstack(walk(tree[1]))
        else:
            raise error.ProgrammingError(b'illegal tree: %r' % tree)

    return [prefetched[r] for r in walk(tree)]
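
# A minimal usage sketch (assuming a repo configured with phabricator.url and
# phabricator.callsign): fetch the stack ending at D2 and print each title,
# bottom first:
#
#   for drev in querydrev(repo, b':D2'):
#       repo.ui.write(b'D%s: %s\n' % (drev[b'id'], drev[b'title']))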


def getdescfromdrev(drev):
    """get description (commit message) from "Differential Revision"

    This is similar to the differential.getcommitmessage API, but we only
    care about a limited set of fields: title, summary, test plan, and URL.
    """
    title = drev[b'title']
    summary = drev[b'summary'].rstrip()
    testplan = drev[b'testPlan'].rstrip()
    if testplan:
        testplan = b'Test Plan:\n%s' % testplan
    uri = b'Differential Revision: %s' % drev[b'uri']
    return b'\n\n'.join(filter(None, [title, summary, testplan, uri]))
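
# Empty fields are dropped by the filter() above, so the sample drev from the
# querydrev docstring (empty summary and test plan) would produce just:
#
#   example
#
#   Differential Revision: https://phab.example.com/D2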


def getdiffmeta(diff):
    """get commit metadata (date, node, user, p1) from a diff object

    The metadata could be "hg:meta", sent by phabsend, like:

        "properties": {
          "hg:meta": {
            "date": "1499571514 25200",
            "node": "98c08acae292b2faf60a279b4189beb6cff1414d",
            "user": "Foo Bar <foo@example.com>",
            "parent": "6d0abad76b30e4724a37ab8721d630394070fe16"
          }
        }

    Or converted from "local:commits", sent by "arc", like:

        "properties": {
          "local:commits": {
            "98c08acae292b2faf60a279b4189beb6cff1414d": {
              "author": "Foo Bar",
              "time": 1499546314,
              "branch": "default",
              "tag": "",
              "commit": "98c08acae292b2faf60a279b4189beb6cff1414d",
              "rev": "98c08acae292b2faf60a279b4189beb6cff1414d",
              "local": "1000",
              "parents": ["6d0abad76b30e4724a37ab8721d630394070fe16"],
              "summary": "...",
              "message": "...",
              "authorEmail": "foo@example.com"
            }
          }
        }

    Note: metadata extracted from "local:commits" will lose time zone
    information.
    """
    props = diff.get(b'properties') or {}
    meta = props.get(b'hg:meta')
    if not meta:
        if props.get(b'local:commits'):
            commit = sorted(props[b'local:commits'].values())[0]
            meta = {}
            if b'author' in commit and b'authorEmail' in commit:
                meta[b'user'] = b'%s <%s>' % (
                    commit[b'author'],
                    commit[b'authorEmail'],
                )
            if b'time' in commit:
                meta[b'date'] = b'%d 0' % int(commit[b'time'])
            if b'branch' in commit:
                meta[b'branch'] = commit[b'branch']
            node = commit.get(b'commit', commit.get(b'rev'))
            if node:
                meta[b'node'] = node
            if len(commit.get(b'parents', ())) >= 1:
                meta[b'parent'] = commit[b'parents'][0]
        else:
            meta = {}
    if b'date' not in meta and b'dateCreated' in diff:
        meta[b'date'] = b'%s 0' % diff[b'dateCreated']
    if b'branch' not in meta and diff.get(b'branch'):
        meta[b'branch'] = diff[b'branch']
    if b'parent' not in meta and diff.get(b'sourceControlBaseRevision'):
        meta[b'parent'] = diff[b'sourceControlBaseRevision']
    return meta
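
# For instance, the "local:commits" sample from the docstring above would
# yield the following metadata (note the lost time zone):
#
#   {b'user': b'Foo Bar <foo@example.com>', b'date': b'1499546314 0',
#    b'branch': b'default',
#    b'node': b'98c08acae292b2faf60a279b4189beb6cff1414d',
#    b'parent': b'6d0abad76b30e4724a37ab8721d630394070fe16'}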


def readpatch(repo, drevs, write):
    """generate plain-text patch readable by 'hg import'

    write is usually ui.write. drevs is what "querydrev" returns, results of
    "differential.query".
    """
    # Prefetch hg:meta property for all diffs
    diffids = sorted(set(max(int(v) for v in drev[b'diffs']) for drev in drevs))
    diffs = callconduit(repo.ui, b'differential.querydiffs', {b'ids': diffids})

    # Generate patch for each drev
    for drev in drevs:
        repo.ui.note(_(b'reading D%s\n') % drev[b'id'])

        diffid = max(int(v) for v in drev[b'diffs'])
        body = callconduit(
            repo.ui, b'differential.getrawdiff', {b'diffID': diffid}
        )
        desc = getdescfromdrev(drev)
        header = b'# HG changeset patch\n'

        # Try to preserve metadata from hg:meta property. Write hg patch
        # headers that can be read by the "import" command. See patchheadermap
        # and extract in mercurial/patch.py for supported headers.
        meta = getdiffmeta(diffs[b'%d' % diffid])
        for k in _metanamemap.keys():
            if k in meta:
                header += b'# %s %s\n' % (_metanamemap[k], meta[k])

        content = b'%s%s\n%s' % (header, desc, body)
        write(content)
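
# The emitted header therefore looks roughly like the following for the
# metadata example above (the exact lines and their order are taken from
# _metanamemap):
#
#   # HG changeset patch
#   # User Foo Bar <foo@example.com>
#   # Date 1499546314 0
#   # Node ID 98c08acae292b2faf60a279b4189beb6cff1414d
#   # Parent 6d0abad76b30e4724a37ab8721d630394070fe16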


@vcrcommand(
    b'phabread',
    [(b'', b'stack', False, _(b'read dependencies'))],
    _(b'DREVSPEC [OPTIONS]'),
    helpcategory=command.CATEGORY_IMPORT_EXPORT,
)
def phabread(ui, repo, spec, **opts):
    """print patches from Phabricator suitable for importing

    DREVSPEC could be a Differential Revision identifier, like ``D123``, or
    just the number ``123``. It could also have common operators like ``+``,
    ``-``, ``&``, ``(``, ``)`` for complex queries. Prefix ``:`` could be
    used to select a stack.

    ``abandoned``, ``accepted``, ``closed``, ``needsreview``, ``needsrevision``
    could be used to filter patches by status. For performance reasons, they
    only represent a subset of non-status selections and cannot be used alone.

    For example, ``:D6+8-(2+D4)`` selects a stack up to D6, plus D8, and
    excludes D2 and D4. ``:D9 & needsreview`` selects "Needs Review"
    revisions in a stack up to D9.

    If --stack is given, follow dependency information and read all patches.
    It is equivalent to the ``:`` operator.
    """
    opts = pycompat.byteskwargs(opts)
    if opts.get(b'stack'):
        spec = b':(%s)' % spec
    drevs = querydrev(repo, spec)
    readpatch(repo, drevs, ui.write)
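
# Typical command-line usage is to pipe the printed patches straight into
# import, e.g. to apply D123 together with everything it depends on:
#
#   $ hg phabread --stack D123 | hg import -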


@vcrcommand(
    b'phabupdate',
    [
        (b'', b'accept', False, _(b'accept revisions')),
        (b'', b'reject', False, _(b'reject revisions')),
        (b'', b'abandon', False, _(b'abandon revisions')),
        (b'', b'reclaim', False, _(b'reclaim revisions')),
        (b'm', b'comment', b'', _(b'comment on the last revision')),
    ],
    _(b'DREVSPEC [OPTIONS]'),
    helpcategory=command.CATEGORY_IMPORT_EXPORT,
)
def phabupdate(ui, repo, spec, **opts):
    """update Differential Revisions in batch

    DREVSPEC selects revisions. See :hg:`help phabread` for its usage.
    """
    opts = pycompat.byteskwargs(opts)
    flags = [n for n in b'accept reject abandon reclaim'.split() if opts.get(n)]
    if len(flags) > 1:
        raise error.Abort(_(b'%s cannot be used together') % b', '.join(flags))

    actions = []
    for f in flags:
        actions.append({b'type': f, b'value': True})

    drevs = querydrev(repo, spec)
    for i, drev in enumerate(drevs):
        if i + 1 == len(drevs) and opts.get(b'comment'):
            actions.append({b'type': b'comment', b'value': opts[b'comment']})
        if actions:
            params = {
                b'objectIdentifier': drev[b'phid'],
                b'transactions': actions,
            }
            callconduit(ui, b'differential.revision.edit', params)
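
# For example, to accept every revision in the stack ending at D123 and leave
# a comment (attached to the last selected revision, per the loop above):
#
#   $ hg phabupdate --accept :D123 -m 'LGTM, landing'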


@eh.templatekeyword(b'phabreview', requires={b'ctx'})
def template_review(context, mapping):
    """:phabreview: Object describing the review for this changeset.
    Has attributes `url` and `id`.
    """
    ctx = context.resource(mapping, b'ctx')
    m = _differentialrevisiondescre.search(ctx.description())
    if m:
        return templateutil.hybriddict(
            {b'url': m.group(r'url'), b'id': b"D%s" % m.group(r'id'),}
        )
    else:
        tags = ctx.repo().nodetags(ctx.node())
        for t in tags:
            if _differentialrevisiontagre.match(t):
                url = ctx.repo().ui.config(b'phabricator', b'url')
                if not url.endswith(b'/'):
                    url += b'/'
                url += t

                return templateutil.hybriddict({b'url': url, b'id': t,})
    return None
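
# The keyword is then usable from log templates, e.g. (a sketch; requires the
# extension to be enabled and the changeset to carry a Differential URL):
#
#   $ hg log -r . -T '{phabreview.id} {phabreview.url}\n'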

@@ -1,454 +1,499 @@

# pycompat.py - portability shim for python 3
#
# This software may be used and distributed according to the terms of the
# GNU General Public License version 2 or any later version.

"""Mercurial portability shim for python 3.

This contains aliases to hide python version-specific details from the core.
"""

from __future__ import absolute_import

import getopt
import inspect
import json
import os
import shlex
import sys
import tempfile

ispy3 = sys.version_info[0] >= 3
ispypy = r'__pypy__' in sys.builtin_module_names

if not ispy3:
    import cookielib
    import cPickle as pickle
    import httplib
    import Queue as queue
    import SocketServer as socketserver
    import xmlrpclib

    from .thirdparty.concurrent import futures

    def future_set_exception_info(f, exc_info):
        f.set_exception_info(*exc_info)


else:
    import concurrent.futures as futures
    import http.cookiejar as cookielib
    import http.client as httplib
    import pickle
    import queue as queue
    import socketserver
    import xmlrpc.client as xmlrpclib

    def future_set_exception_info(f, exc_info):
        f.set_exception(exc_info[0])


def identity(a):
    return a


def _rapply(f, xs):
    if xs is None:
        # assume None means non-value of optional data
        return xs
    if isinstance(xs, (list, set, tuple)):
        return type(xs)(_rapply(f, x) for x in xs)
    if isinstance(xs, dict):
        return type(xs)((_rapply(f, k), _rapply(f, v)) for k, v in xs.items())
    return f(xs)


def rapply(f, xs):
    """Apply function recursively to every item preserving the data structure

    >>> def f(x):
    ...     return 'f(%s)' % x
    >>> rapply(f, None) is None
    True
    >>> rapply(f, 'a')
    'f(a)'
    >>> rapply(f, {'a'}) == {'f(a)'}
    True
    >>> rapply(f, ['a', 'b', None, {'c': 'd'}, []])
    ['f(a)', 'f(b)', None, {'f(c)': 'f(d)'}, []]

    >>> xs = [object()]
    >>> rapply(identity, xs) is xs
    True
    """
    if f is identity:
        # fast path mainly for py2
        return xs
    return _rapply(f, xs)


if ispy3:
    import builtins
    import codecs
    import functools
    import io
    import struct

    fsencode = os.fsencode
    fsdecode = os.fsdecode
    oscurdir = os.curdir.encode('ascii')
    oslinesep = os.linesep.encode('ascii')
    osname = os.name.encode('ascii')
    ospathsep = os.pathsep.encode('ascii')
    ospardir = os.pardir.encode('ascii')
    ossep = os.sep.encode('ascii')
    osaltsep = os.altsep
    if osaltsep:
        osaltsep = osaltsep.encode('ascii')

    sysplatform = sys.platform.encode('ascii')
    sysexecutable = sys.executable
    if sysexecutable:
        sysexecutable = os.fsencode(sysexecutable)
    bytesio = io.BytesIO
    # TODO deprecate stringio name, as it is a lie on Python 3.
    stringio = bytesio

    def maplist(*args):
        return list(map(*args))

    def rangelist(*args):
        return list(range(*args))

    def ziplist(*args):
        return list(zip(*args))

    rawinput = input
    getargspec = inspect.getfullargspec

    long = int

    # TODO: .buffer might not exist if std streams were replaced; we'll need
    # a silly wrapper to make a bytes stream backed by a unicode one.
    stdin = sys.stdin.buffer
    stdout = sys.stdout.buffer
    stderr = sys.stderr.buffer

    # Since Python 3 converts argv to wchar_t type by Py_DecodeLocale() on
    # Unix, we can use os.fsencode() to get back bytes argv.
    #
    # https://hg.python.org/cpython/file/v3.5.1/Programs/python.c#l55
    #
    # TODO: On Windows, the native argv is wchar_t, so we'll need a different
    # workaround to simulate the Python 2 (i.e. ANSI Win32 API) behavior.
    if getattr(sys, 'argv', None) is not None:
        sysargv = list(map(os.fsencode, sys.argv))

    bytechr = struct.Struct(r'>B').pack
    byterepr = b'%r'.__mod__

    class bytestr(bytes):
        """A bytes which mostly acts as a Python 2 str

        >>> bytestr(), bytestr(bytearray(b'foo')), bytestr(u'ascii'), bytestr(1)
        ('', 'foo', 'ascii', '1')
        >>> s = bytestr(b'foo')
        >>> assert s is bytestr(s)

        __bytes__() should be called if provided:

        >>> class bytesable(object):
        ...     def __bytes__(self):
        ...         return b'bytes'
        >>> bytestr(bytesable())
        'bytes'

        There's no implicit conversion from non-ascii str as its encoding is
        unknown:

        >>> bytestr(chr(0x80))  # doctest: +ELLIPSIS
        Traceback (most recent call last):
        ...
        UnicodeEncodeError: ...

        Comparison between bytestr and bytes should work:

        >>> assert bytestr(b'foo') == b'foo'
        >>> assert b'foo' == bytestr(b'foo')
        >>> assert b'f' in bytestr(b'foo')
        >>> assert bytestr(b'f') in b'foo'

        Sliced elements should be bytes, not integer:

        >>> s[1], s[:2]
        (b'o', b'fo')
        >>> list(s), list(reversed(s))
        ([b'f', b'o', b'o'], [b'o', b'o', b'f'])

        As bytestr type isn't propagated across operations, you need to cast
        bytes to bytestr explicitly:

        >>> s = bytestr(b'foo').upper()
        >>> t = bytestr(s)
        >>> s[0], t[0]
        (70, b'F')

        Be careful to not pass a bytestr object to a function which expects
        bytearray-like behavior.

        >>> t = bytes(t)  # cast to bytes
        >>> assert type(t) is bytes
        """

        def __new__(cls, s=b''):
            if isinstance(s, bytestr):
                return s
            if not isinstance(
                s, (bytes, bytearray)
            ) and not hasattr(  # hasattr-py3-only
                s, u'__bytes__'
            ):
                s = str(s).encode('ascii')
            return bytes.__new__(cls, s)

        def __getitem__(self, key):
            s = bytes.__getitem__(self, key)
            if not isinstance(s, bytes):
                s = bytechr(s)
            return s

        def __iter__(self):
            return iterbytestr(bytes.__iter__(self))

        def __repr__(self):
            return bytes.__repr__(self)[1:]  # drop b''

    def iterbytestr(s):
        """Iterate bytes as if it were a str object of Python 2"""
        return map(bytechr, s)

    def maybebytestr(s):
        """Promote bytes to bytestr"""
        if isinstance(s, bytes):
            return bytestr(s)
        return s

    def sysbytes(s):
        """Convert an internal str (e.g. keyword, __doc__) back to bytes

        This never raises UnicodeEncodeError, but only ASCII characters
        can be round-tripped by sysstr(sysbytes(s)).
        """
        return s.encode('utf-8')

    def sysstr(s):
        """Return a keyword str to be passed to Python functions such as
        getattr() and str.encode()

        This never raises UnicodeDecodeError. Non-ascii characters are
        considered invalid and mapped to arbitrary but unique code points
        such that 'sysstr(a) != sysstr(b)' for all 'a != b'.
        """
        if isinstance(s, builtins.str):
            return s
        return s.decode('latin-1')
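
    # As the docstrings above note, only ASCII survives a full
    # sysstr(sysbytes(...)) round trip; the latin-1 decode merely keeps
    # distinct byte strings distinct:
    #
    #   >>> sysstr(b'revision')
    #   'revision'
    #   >>> sysbytes(sysstr(b'caf\xc3\xa9'))  # non-ascii does not round-trip
    #   b'caf\xc3\x83\xc2\xa9'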

    def strurl(url):
        """Converts a bytes url back to str"""
        if isinstance(url, bytes):
            return url.decode('ascii')
        return url

    def bytesurl(url):
        """Converts a str url to bytes by encoding in ascii"""
        if isinstance(url, str):
            return url.encode('ascii')
        return url

    def raisewithtb(exc, tb):
        """Raise exception with the given traceback"""
        raise exc.with_traceback(tb)

    def getdoc(obj):
        """Get docstring as bytes; may be None so gettext() won't confuse it
        with _('')"""
        doc = getattr(obj, '__doc__', None)
        if doc is None:
            return doc
        return sysbytes(doc)

    def _wrapattrfunc(f):
        @functools.wraps(f)
        def w(object, name, *args):
            return f(object, sysstr(name), *args)

        return w

    # these wrappers are automagically imported by hgloader
    delattr = _wrapattrfunc(builtins.delattr)
    getattr = _wrapattrfunc(builtins.getattr)
    hasattr = _wrapattrfunc(builtins.hasattr)
    setattr = _wrapattrfunc(builtins.setattr)
    xrange = builtins.range
    unicode = str

    def open(name, mode=b'r', buffering=-1, encoding=None):
        return builtins.open(name, sysstr(mode), buffering, encoding)

    safehasattr = _wrapattrfunc(builtins.hasattr)

    def _getoptbwrapper(orig, args, shortlist, namelist):
        """
        Takes bytes arguments, converts them to unicode, passes them to
        getopt.getopt(), converts the returned values back to bytes and
        returns them, for Python 3 compatibility, as getopt.getopt() doesn't
        accept bytes on Python 3.
        """
        args = [a.decode('latin-1') for a in args]
        shortlist = shortlist.decode('latin-1')
        namelist = [a.decode('latin-1') for a in namelist]
        opts, args = orig(args, shortlist, namelist)
        opts = [(a[0].encode('latin-1'), a[1].encode('latin-1')) for a in opts]
        args = [a.encode('latin-1') for a in args]
        return opts, args

    def strkwargs(dic):
        """
        Converts the keys of a python dictionary to str i.e. unicode, so that
        they can be passed as keyword arguments, as dictionaries with bytes
        keys can't be passed as keyword arguments to functions on Python 3.
        """
        dic = dict((k.decode('latin-1'), v) for k, v in dic.items())
        return dic

    def byteskwargs(dic):
        """
        Converts keys of python dictionaries to bytes as they were converted
        to str to pass that dictionary as a keyword argument on Python 3.
        """
        dic = dict((k.encode('latin-1'), v) for k, v in dic.items())
        return dic
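
    # strkwargs() and byteskwargs() are inverses for latin-1-safe keys; the
    # typical pattern is the one phabread uses above, converting **opts right
    # after entry:
    #
    #   >>> byteskwargs({'stack': True})
    #   {b'stack': True}
    #   >>> strkwargs(byteskwargs({'stack': True}))
    #   {'stack': True}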

    # TODO: handle shlex.shlex().
    def shlexsplit(s, comments=False, posix=True):
        """
        Takes a bytes argument, converts it to str i.e. unicode, passes that
        into shlex.split(), converts the returned value to bytes and returns
        that, for Python 3 compatibility, as shlex.split() doesn't accept
        bytes on Python 3.
        """
        ret = shlex.split(s.decode('latin-1'), comments, posix)
        return [a.encode('latin-1') for a in ret]

    iteritems = lambda x: x.items()
    itervalues = lambda x: x.values()

    # Python 3.5's json.load and json.loads require str. We polyfill json's
    # code for detecting the encoding of bytes.
    if sys.version_info[0:2] < (3, 6):

        def _detect_encoding(b):
            bstartswith = b.startswith
            if bstartswith((codecs.BOM_UTF32_BE, codecs.BOM_UTF32_LE)):
                return 'utf-32'
            if bstartswith((codecs.BOM_UTF16_BE, codecs.BOM_UTF16_LE)):
                return 'utf-16'
            if bstartswith(codecs.BOM_UTF8):
                return 'utf-8-sig'

            if len(b) >= 4:
                if not b[0]:
                    # 00 00 -- -- - utf-32-be
                    # 00 XX -- -- - utf-16-be
                    return 'utf-16-be' if b[1] else 'utf-32-be'
                if not b[1]:
                    # XX 00 00 00 - utf-32-le
                    # XX 00 00 XX - utf-16-le
                    # XX 00 XX -- - utf-16-le
                    return 'utf-16-le' if b[2] or b[3] else 'utf-32-le'
            elif len(b) == 2:
                if not b[0]:
                    # 00 XX - utf-16-be
                    return 'utf-16-be'
                if not b[1]:
                    # XX 00 - utf-16-le
                    return 'utf-16-le'
            # default
            return 'utf-8'

        def json_loads(s, *args, **kwargs):
            if isinstance(s, (bytes, bytearray)):
                s = s.decode(_detect_encoding(s), 'surrogatepass')

            return json.loads(s, *args, **kwargs)

    else:
        json_loads = json.loads
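
    # Either way, json_loads() accepts bytes as well as str on Python 3. The
    # sniffing above mirrors json.detect_encoding() from Python >= 3.6; for
    # instance, a leading non-NUL byte followed by NUL is read as utf-16-le:
    #
    #   >>> json_loads(b'{"rev": 1}')
    #   {'rev': 1}
    #   >>> json_loads(b'1\x000\x00')
    #   10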
343 | else: |
|
387 | else: | |
344 | import cStringIO |
|
388 | import cStringIO | |
345 |
|
389 | |||
346 | xrange = xrange |
|
390 | xrange = xrange | |
347 | unicode = unicode |
|
391 | unicode = unicode | |
348 | bytechr = chr |
|
392 | bytechr = chr | |
349 | byterepr = repr |
|
393 | byterepr = repr | |
350 | bytestr = str |
|
394 | bytestr = str | |
351 | iterbytestr = iter |
|
395 | iterbytestr = iter | |
352 | maybebytestr = identity |
|
396 | maybebytestr = identity | |
353 | sysbytes = identity |
|
397 | sysbytes = identity | |
354 | sysstr = identity |
|
398 | sysstr = identity | |
355 | strurl = identity |
|
399 | strurl = identity | |
356 | bytesurl = identity |
|
400 | bytesurl = identity | |
357 | open = open |
|
401 | open = open | |
358 | delattr = delattr |
|
402 | delattr = delattr | |
359 | getattr = getattr |
|
403 | getattr = getattr | |
360 | hasattr = hasattr |
|
404 | hasattr = hasattr | |
361 | setattr = setattr |
|
405 | setattr = setattr | |
362 |
|
406 | |||
363 | # this can't be parsed on Python 3 |
|
407 | # this can't be parsed on Python 3 | |
364 | exec(b'def raisewithtb(exc, tb):\n raise exc, None, tb\n') |
|
408 | exec(b'def raisewithtb(exc, tb):\n raise exc, None, tb\n') | |
365 |
|
409 | |||
366 | def fsencode(filename): |
|
410 | def fsencode(filename): | |
367 | """ |
|
411 | """ | |
368 | Partial backport from os.py in Python 3, which only accepts bytes. |
|
412 | Partial backport from os.py in Python 3, which only accepts bytes. | |
369 | In Python 2, our paths should only ever be bytes, a unicode path |
|
413 | In Python 2, our paths should only ever be bytes, a unicode path | |
370 | indicates a bug. |
|
414 | indicates a bug. | |
371 | """ |
|
415 | """ | |
372 | if isinstance(filename, str): |
|
416 | if isinstance(filename, str): | |
373 | return filename |
|
417 | return filename | |
374 | else: |
|
418 | else: | |
375 | raise TypeError(r"expect str, not %s" % type(filename).__name__) |
|
419 | raise TypeError(r"expect str, not %s" % type(filename).__name__) | |
376 |
|
420 | |||
377 | # In Python 2, fsdecode() has a very chance to receive bytes. So it's |
|
421 | # In Python 2, fsdecode() has a very chance to receive bytes. So it's | |
378 | # better not to touch Python 2 part as it's already working fine. |
|
422 | # better not to touch Python 2 part as it's already working fine. | |
379 | fsdecode = identity |
|
423 | fsdecode = identity | |
380 |
|
424 | |||
381 | def getdoc(obj): |
|
425 | def getdoc(obj): | |
382 | return getattr(obj, '__doc__', None) |
|
426 | return getattr(obj, '__doc__', None) | |
383 |
|
427 | |||
384 | _notset = object() |
|
428 | _notset = object() | |
385 |
|
429 | |||
386 | def safehasattr(thing, attr): |
|
430 | def safehasattr(thing, attr): | |
387 | return getattr(thing, attr, _notset) is not _notset |
|
431 | return getattr(thing, attr, _notset) is not _notset | |

    def _getoptbwrapper(orig, args, shortlist, namelist):
        return orig(args, shortlist, namelist)

    strkwargs = identity
    byteskwargs = identity

    oscurdir = os.curdir
    oslinesep = os.linesep
    osname = os.name
    ospathsep = os.pathsep
    ospardir = os.pardir
    ossep = os.sep
    osaltsep = os.altsep
    long = long
    stdin = sys.stdin
    stdout = sys.stdout
    stderr = sys.stderr
    if getattr(sys, 'argv', None) is not None:
        sysargv = sys.argv
    sysplatform = sys.platform
    sysexecutable = sys.executable
    shlexsplit = shlex.split
    bytesio = cStringIO.StringIO
    stringio = bytesio
    maplist = map
    rangelist = range
    ziplist = zip
    rawinput = raw_input
    getargspec = inspect.getargspec
    iteritems = lambda x: x.iteritems()
    itervalues = lambda x: x.itervalues()
    json_loads = json.loads

isjython = sysplatform.startswith(b'java')

isdarwin = sysplatform.startswith(b'darwin')
islinux = sysplatform.startswith(b'linux')
isposix = osname == b'posix'
iswindows = osname == b'nt'


def getoptb(args, shortlist, namelist):
    return _getoptbwrapper(getopt.getopt, args, shortlist, namelist)


def gnugetoptb(args, shortlist, namelist):
    return _getoptbwrapper(getopt.gnu_getopt, args, shortlist, namelist)


def mkdtemp(suffix=b'', prefix=b'tmp', dir=None):
    return tempfile.mkdtemp(suffix, prefix, dir)


# text=True is not supported; use util.from/tonativeeol() instead
def mkstemp(suffix=b'', prefix=b'tmp', dir=None):
    return tempfile.mkstemp(suffix, prefix, dir)


# mode must include 'b'ytes as encoding= is not supported
def namedtempfile(
    mode=b'w+b', bufsize=-1, suffix=b'', prefix=b'tmp', dir=None, delete=True
):
    mode = sysstr(mode)
    assert r'b' in mode
    return tempfile.NamedTemporaryFile(
        mode, bufsize, suffix=suffix, prefix=prefix, dir=dir, delete=delete
    )
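A quick sketch of what the json_loads polyfill above buys on Python 3.5
(illustrative only: the inputs are invented, and the detection logic mirrors
CPython's json.detect_encoding()):

    from mercurial import pycompat

    pycompat.json_loads(b'{"a": 1}')                     # no BOM, no NULs -> utf-8
    pycompat.json_loads('{"a": 1}'.encode('utf-16-le'))  # XX 00 pattern -> utf-16-le

Both calls return {'a': 1}. On Python 3.6+ json.loads() performs the same
detection natively, and on Python 2 json.loads() already operates on (utf-8)
byte strings, so on those versions json_loads is a plain alias.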
@@ -1,124 +1,124 @@
#!/usr/bin/env python

"""This does HTTP GET requests given a host:port and path and returns
a subset of the headers plus the body of the result."""

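# Hypothetical invocation (the host, port and path below are made up for
# illustration; they are not taken from the test suite):
#
#   $ python get-with-headers.py --json localhost:20059 'api/status' content-type
#
# This prints the status line, the Content-Type header, and the
# pretty-printed JSON body of http://localhost:20059/api/status.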
from __future__ import absolute_import

import argparse
import json
import os
import sys

from mercurial import (
    pycompat,
    util,
)

httplib = util.httplib

try:
    import msvcrt

    msvcrt.setmode(sys.stdout.fileno(), os.O_BINARY)
    msvcrt.setmode(sys.stderr.fileno(), os.O_BINARY)
except ImportError:
    pass

stdout = getattr(sys.stdout, 'buffer', sys.stdout)
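# (added note) On Python 3, sys.stdout.buffer is the underlying binary
# stream; Python 2's sys.stdout has no 'buffer' attribute but already
# accepts bytes, so the getattr() above falls back to it unchanged.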

parser = argparse.ArgumentParser()
parser.add_argument('--twice', action='store_true')
parser.add_argument('--headeronly', action='store_true')
parser.add_argument('--json', action='store_true')
parser.add_argument('--hgproto')
parser.add_argument(
    '--requestheader',
    nargs='*',
    default=[],
    help='Send an additional HTTP request header. Argument '
    'value is <header>=<value>',
)
parser.add_argument('--bodyfile', help='Write HTTP response body to a file')
parser.add_argument('host')
parser.add_argument('path')
parser.add_argument('show', nargs='*')

args = parser.parse_args()

twice = args.twice
headeronly = args.headeronly
formatjson = args.json
hgproto = args.hgproto
requestheaders = args.requestheader

tag = None

def request(host, path, show):
    assert not path.startswith('/'), path
    global tag
    headers = {}
    if tag:
        headers['If-None-Match'] = tag
    if hgproto:
        headers['X-HgProto-1'] = hgproto

    for header in requestheaders:
        key, value = header.split('=', 1)
        headers[key] = value

    conn = httplib.HTTPConnection(host)
    conn.request("GET", '/' + path, None, headers)
    response = conn.getresponse()
    stdout.write(
        b'%d %s\n' % (response.status, response.reason.encode('ascii'))
    )
    if show[:1] == ['-']:
        show = sorted(
            h for h, v in response.getheaders() if h.lower() not in show
        )
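    # (added note) When the first positional `show` argument is '-', the
    # branch above inverts the selection: every response header is printed
    # except the ones named after the '-'.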
    for h in [h.lower() for h in show]:
        if response.getheader(h, None) is not None:
            stdout.write(
                b"%s: %s\n"
                % (h.encode('ascii'), response.getheader(h).encode('ascii'))
            )
    if not headeronly:
        stdout.write(b'\n')
        data = response.read()

        if args.bodyfile:
            bodyfh = open(args.bodyfile, 'wb')
        else:
            bodyfh = stdout

        # Pretty print JSON. This also has the beneficial side-effect
        # of verifying emitted JSON is well-formed.
        if formatjson:
            # json.dumps() will print trailing newlines. Eliminate them
            # to make tests easier to write.
            data = pycompat.json_loads(data)
            lines = json.dumps(data, sort_keys=True, indent=2).splitlines()
            for line in lines:
                bodyfh.write(pycompat.sysbytes(line.rstrip()))
                bodyfh.write(b'\n')
        else:
            bodyfh.write(data)

        if args.bodyfile:
            bodyfh.close()

    if twice and response.getheader('ETag', None):
        tag = response.getheader('ETag')

    return response.status


status = request(args.host, args.path, args.show)
if twice:
    status = request(args.host, args.path, args.show)

if 200 <= status <= 305:
    sys.exit(0)
sys.exit(1)
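For context, a sketch of the conditional-GET flow that --twice exercises
(illustrative; not output from any real run). When the first response carries
an ETag, the replayed request sends it back as If-None-Match, so a server that
honors validators can answer 304 Not Modified:

    GET /path   ->  200 OK, ETag: "abc"
    GET /path   ->  304 Not Modified   (request carried If-None-Match: "abc")

Because the script treats any status from 200 through 305 as success, the 304
on the second pass still exits 0.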