# bundle2.py - generic container format to transmit arbitrary data.
#
# Copyright 2013 Facebook, Inc.
#
# This software may be used and distributed according to the terms of the
# GNU General Public License version 2 or any later version.
"""Handling of the new bundle2 format

The goal of bundle2 is to act as an atomic packet to transmit a set of
payloads in an application agnostic way. It consists of a sequence of "parts"
that will be handed to and processed by the application layer.


General format architecture
===========================

The format is architectured as follows:

- magic string
- stream level parameters
- payload parts (any number)
- end of stream marker.

The binary format
============================

All numbers are unsigned and big-endian.

stream level parameters
------------------------

The binary format is as follows:

:params size: int32

  The total number of bytes used by the parameters

:params value: arbitrary number of bytes

  A blob of `params size` containing the serialized version of all stream
  level parameters.

  The blob contains a space separated list of parameters. Parameters with a
  value are stored in the form `<name>=<value>`. Both name and value are
  urlquoted.

  Empty names are forbidden.

  Names MUST start with a letter. If this first letter is lower case, the
  parameter is advisory and can be safely ignored. However, when the first
  letter is capital, the parameter is mandatory and the bundling process MUST
  stop if it is not able to process it.

  Stream parameters use a simple textual format for two main reasons:

  - Stream level parameters should remain simple and we want to discourage
    any crazy usage.
  - Textual data allows easy human inspection of a bundle2 header in case of
    trouble.

  Any applicative level options MUST go into a bundle2 part instead.
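
For illustration, here is a minimal sketch of decoding such a parameter block
(`readstreamparams` is a hypothetical, standalone helper, not part of this
module)::

    import struct
    from urllib.parse import unquote

    def readstreamparams(fp):
        # :params size: int32, total number of bytes used by the parameters
        size = struct.unpack('>i', fp.read(4))[0]
        params = {}
        if size:
            blob = fp.read(size).decode('ascii')
            # space separated list, each item being <name> or <name>=<value>
            for item in blob.split(' '):
                name, sep, value = item.partition('=')
                params[unquote(name)] = unquote(value) if sep else None
        return params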

Payload part
------------------------

The binary format is as follows:

:header size: int32

  The total number of bytes used by the part header. When the header is empty
  (size = 0) this is interpreted as the end of stream marker.

:header:

    The header defines how to interpret the part. It contains two pieces of
    data: the part type, and the part parameters.

    The part type is used to route to an application level handler that can
    interpret the payload.

    Part parameters are passed to the application level handler. They are
    meant to convey information that will help the application level object
    interpret the part payload.

    The binary format of the header is as follows:

    :typesize: (one byte)

    :parttype: alphanumerical part name (restricted to [a-zA-Z0-9_:-]*)

    :partid: A 32 bits integer (unique in the bundle) that can be used to
             refer to this part.

    :parameters:

        Part's parameters may have arbitrary content, the binary structure
        is::

            <mandatory-count><advisory-count><param-sizes><param-data>

        :mandatory-count: 1 byte, number of mandatory parameters

        :advisory-count: 1 byte, number of advisory parameters

        :param-sizes:

            N couples of bytes, where N is the total number of parameters.
            Each couple contains (<size-of-key>, <size-of-value>) for one
            parameter.

        :param-data:

            A blob of bytes from which each parameter key and value can be
            retrieved using the list of size couples stored in the previous
            field.

            Mandatory parameters come first, then the advisory ones.

            Each parameter's key MUST be unique within the part.

:payload:

    payload is a series of `<chunksize><chunkdata>`.

    `chunksize` is an int32, `chunkdata` are plain bytes (as much as
    `chunksize` says). The payload part is concluded by a zero size chunk.

    The current implementation always produces either zero or one chunk.
    This is an implementation limitation that will ultimately be lifted.

    `chunksize` can be negative to trigger special case processing. No such
    processing is in place yet.
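
For illustration, a minimal sketch of consuming such a payload (`readpayload`
is a hypothetical, standalone helper, not part of this module)::

    import struct

    def readpayload(fp):
        chunks = []
        while True:
            chunksize = struct.unpack('>i', fp.read(4))[0]
            if chunksize == 0:
                break  # a zero size chunk concludes the payload
            if chunksize < 0:
                raise ValueError('special chunk sizes are not supported')
            chunks.append(fp.read(chunksize))
        return b''.join(chunks)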

Bundle processing
============================

Each part is processed in order using a "part handler". Handlers are
registered for a certain part type.

The matching of a part to its handler is case insensitive. The case of the
part type is used to know if a part is mandatory or advisory. If the part
type contains any uppercase char it is considered mandatory (for example, a
part of type `MYPART` would be mandatory while `mypart` would be advisory).
When no handler is known for a mandatory part, the process is aborted and an
exception is raised. If the part is advisory and no handler is known, the
part is ignored. When the process is aborted, the full bundle is still read
from the stream to keep the channel usable. But none of the parts read after
an abort are processed. In the future, dropping the stream may become an
option for channels we do not care to preserve.
"""

from __future__ import absolute_import, division

import collections
import errno
import os
import re
import string
import struct
import sys

from .i18n import _
from . import (
    bookmarks,
    changegroup,
    encoding,
    error,
    node as nodemod,
    obsolete,
    phases,
    pushkey,
    pycompat,
    streamclone,
    tags,
    url,
    util,
)
from .utils import (
    stringutil,
)

urlerr = util.urlerr
urlreq = util.urlreq

_pack = struct.pack
_unpack = struct.unpack

_fstreamparamsize = '>i'
_fpartheadersize = '>i'
_fparttypesize = '>B'
_fpartid = '>I'
_fpayloadsize = '>i'
_fpartparamcount = '>BB'

preferedchunksize = 32768

_parttypeforbidden = re.compile('[^a-zA-Z0-9_:-]')

def outdebug(ui, message):
    """debug regarding output stream (bundling)"""
    if ui.configbool('devel', 'bundle2.debug'):
        ui.debug('bundle2-output: %s\n' % message)

def indebug(ui, message):
    """debug on input stream (unbundling)"""
    if ui.configbool('devel', 'bundle2.debug'):
        ui.debug('bundle2-input: %s\n' % message)

def validateparttype(parttype):
    """raise ValueError if a parttype contains an invalid character"""
    if _parttypeforbidden.search(parttype):
        raise ValueError(parttype)

def _makefpartparamsizes(nbparams):
    """return a struct format to read part parameter sizes

    The number of parameters is variable so we need to build that format
    dynamically.
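
    For instance, two parameters need four size bytes (an illustrative
    doctest; the exact repr may vary across Python versions)::

        >>> _makefpartparamsizes(2)
        '>BBBB'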
215 | """ |
|
215 | """ | |
216 | return '>'+('BB'*nbparams) |
|
216 | return '>'+('BB'*nbparams) | |
217 |
|
217 | |||
218 | parthandlermapping = {} |
|
218 | parthandlermapping = {} | |
219 |
|
219 | |||
220 | def parthandler(parttype, params=()): |
|
220 | def parthandler(parttype, params=()): | |
221 | """decorator that register a function as a bundle2 part handler |
|
221 | """decorator that register a function as a bundle2 part handler | |
222 |
|
222 | |||
223 | eg:: |
|
223 | eg:: | |
224 |
|
224 | |||
225 | @parthandler('myparttype', ('mandatory', 'param', 'handled')) |
|
225 | @parthandler('myparttype', ('mandatory', 'param', 'handled')) | |
226 | def myparttypehandler(...): |
|
226 | def myparttypehandler(...): | |
227 | '''process a part of type "my part".''' |
|
227 | '''process a part of type "my part".''' | |
228 | ... |
|
228 | ... | |
229 | """ |
|
229 | """ | |
230 | validateparttype(parttype) |
|
230 | validateparttype(parttype) | |
231 | def _decorator(func): |
|
231 | def _decorator(func): | |
232 | lparttype = parttype.lower() # enforce lower case matching. |
|
232 | lparttype = parttype.lower() # enforce lower case matching. | |
233 | assert lparttype not in parthandlermapping |
|
233 | assert lparttype not in parthandlermapping | |
234 | parthandlermapping[lparttype] = func |
|
234 | parthandlermapping[lparttype] = func | |
235 | func.params = frozenset(params) |
|
235 | func.params = frozenset(params) | |
236 | return func |
|
236 | return func | |
237 | return _decorator |
|
237 | return _decorator | |
238 |
|
238 | |||
239 | class unbundlerecords(object): |
|
239 | class unbundlerecords(object): | |
240 | """keep record of what happens during and unbundle |
|
240 | """keep record of what happens during and unbundle | |
241 |
|
241 | |||
242 | New records are added using `records.add('cat', obj)`. Where 'cat' is a |
|
242 | New records are added using `records.add('cat', obj)`. Where 'cat' is a | |
243 | category of record and obj is an arbitrary object. |
|
243 | category of record and obj is an arbitrary object. | |
244 |
|
244 | |||
245 | `records['cat']` will return all entries of this category 'cat'. |
|
245 | `records['cat']` will return all entries of this category 'cat'. | |
246 |
|
246 | |||
247 | Iterating on the object itself will yield `('category', obj)` tuples |
|
247 | Iterating on the object itself will yield `('category', obj)` tuples | |
248 | for all entries. |
|
248 | for all entries. | |
249 |
|
249 | |||
250 | All iterations happens in chronological order. |
|
250 | All iterations happens in chronological order. | |
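
    A short usage sketch (for illustration only)::

        records = unbundlerecords()
        records.add('changegroup', {'return': 1})
        records.add('output', 'some text')
        records['changegroup']  # == ({'return': 1},)
        list(records)           # == [('changegroup', {'return': 1}),
                                #     ('output', 'some text')]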
251 | """ |
|
251 | """ | |
252 |
|
252 | |||
253 | def __init__(self): |
|
253 | def __init__(self): | |
254 | self._categories = {} |
|
254 | self._categories = {} | |
255 | self._sequences = [] |
|
255 | self._sequences = [] | |
256 | self._replies = {} |
|
256 | self._replies = {} | |
257 |
|
257 | |||
258 | def add(self, category, entry, inreplyto=None): |
|
258 | def add(self, category, entry, inreplyto=None): | |
259 | """add a new record of a given category. |
|
259 | """add a new record of a given category. | |
260 |
|
260 | |||
261 | The entry can then be retrieved in the list returned by |
|
261 | The entry can then be retrieved in the list returned by | |
262 | self['category'].""" |
|
262 | self['category'].""" | |
263 | self._categories.setdefault(category, []).append(entry) |
|
263 | self._categories.setdefault(category, []).append(entry) | |
264 | self._sequences.append((category, entry)) |
|
264 | self._sequences.append((category, entry)) | |
265 | if inreplyto is not None: |
|
265 | if inreplyto is not None: | |
266 | self.getreplies(inreplyto).add(category, entry) |
|
266 | self.getreplies(inreplyto).add(category, entry) | |
267 |
|
267 | |||
268 | def getreplies(self, partid): |
|
268 | def getreplies(self, partid): | |
269 | """get the records that are replies to a specific part""" |
|
269 | """get the records that are replies to a specific part""" | |
270 | return self._replies.setdefault(partid, unbundlerecords()) |
|
270 | return self._replies.setdefault(partid, unbundlerecords()) | |
271 |
|
271 | |||
272 | def __getitem__(self, cat): |
|
272 | def __getitem__(self, cat): | |
273 | return tuple(self._categories.get(cat, ())) |
|
273 | return tuple(self._categories.get(cat, ())) | |
274 |
|
274 | |||
275 | def __iter__(self): |
|
275 | def __iter__(self): | |
276 | return iter(self._sequences) |
|
276 | return iter(self._sequences) | |
277 |
|
277 | |||
278 | def __len__(self): |
|
278 | def __len__(self): | |
279 | return len(self._sequences) |
|
279 | return len(self._sequences) | |
280 |
|
280 | |||
281 | def __nonzero__(self): |
|
281 | def __nonzero__(self): | |
282 | return bool(self._sequences) |
|
282 | return bool(self._sequences) | |
283 |
|
283 | |||
284 | __bool__ = __nonzero__ |
|
284 | __bool__ = __nonzero__ | |
285 |
|
285 | |||
286 | class bundleoperation(object): |
|
286 | class bundleoperation(object): | |
287 | """an object that represents a single bundling process |
|
287 | """an object that represents a single bundling process | |
288 |
|
288 | |||
289 | Its purpose is to carry unbundle-related objects and states. |
|
289 | Its purpose is to carry unbundle-related objects and states. | |
290 |
|
290 | |||
291 | A new object should be created at the beginning of each bundle processing. |
|
291 | A new object should be created at the beginning of each bundle processing. | |
292 | The object is to be returned by the processing function. |
|
292 | The object is to be returned by the processing function. | |
293 |
|
293 | |||
294 | The object has very little content now it will ultimately contain: |
|
294 | The object has very little content now it will ultimately contain: | |
295 | * an access to the repo the bundle is applied to, |
|
295 | * an access to the repo the bundle is applied to, | |
296 | * a ui object, |
|
296 | * a ui object, | |
297 | * a way to retrieve a transaction to add changes to the repo, |
|
297 | * a way to retrieve a transaction to add changes to the repo, | |
298 | * a way to record the result of processing each part, |
|
298 | * a way to record the result of processing each part, | |
299 | * a way to construct a bundle response when applicable. |
|
299 | * a way to construct a bundle response when applicable. | |
300 | """ |
|
300 | """ | |
301 |
|
301 | |||
302 | def __init__(self, repo, transactiongetter, captureoutput=True, source=''): |
|
302 | def __init__(self, repo, transactiongetter, captureoutput=True, source=''): | |
303 | self.repo = repo |
|
303 | self.repo = repo | |
304 | self.ui = repo.ui |
|
304 | self.ui = repo.ui | |
305 | self.records = unbundlerecords() |
|
305 | self.records = unbundlerecords() | |
306 | self.reply = None |
|
306 | self.reply = None | |
307 | self.captureoutput = captureoutput |
|
307 | self.captureoutput = captureoutput | |
308 | self.hookargs = {} |
|
308 | self.hookargs = {} | |
309 | self._gettransaction = transactiongetter |
|
309 | self._gettransaction = transactiongetter | |
310 | # carries value that can modify part behavior |
|
310 | # carries value that can modify part behavior | |
311 | self.modes = {} |
|
311 | self.modes = {} | |
312 | self.source = source |
|
312 | self.source = source | |
313 |
|
313 | |||
314 | def gettransaction(self): |
|
314 | def gettransaction(self): | |
315 | transaction = self._gettransaction() |
|
315 | transaction = self._gettransaction() | |
316 |
|
316 | |||
317 | if self.hookargs: |
|
317 | if self.hookargs: | |
318 | # the ones added to the transaction supercede those added |
|
318 | # the ones added to the transaction supercede those added | |
319 | # to the operation. |
|
319 | # to the operation. | |
320 | self.hookargs.update(transaction.hookargs) |
|
320 | self.hookargs.update(transaction.hookargs) | |
321 | transaction.hookargs = self.hookargs |
|
321 | transaction.hookargs = self.hookargs | |
322 |
|
322 | |||
323 | # mark the hookargs as flushed. further attempts to add to |
|
323 | # mark the hookargs as flushed. further attempts to add to | |
324 | # hookargs will result in an abort. |
|
324 | # hookargs will result in an abort. | |
325 | self.hookargs = None |
|
325 | self.hookargs = None | |
326 |
|
326 | |||
327 | return transaction |
|
327 | return transaction | |
328 |
|
328 | |||
329 | def addhookargs(self, hookargs): |
|
329 | def addhookargs(self, hookargs): | |
330 | if self.hookargs is None: |
|
330 | if self.hookargs is None: | |
331 | raise error.ProgrammingError('attempted to add hookargs to ' |
|
331 | raise error.ProgrammingError('attempted to add hookargs to ' | |
332 | 'operation after transaction started') |
|
332 | 'operation after transaction started') | |
333 | self.hookargs.update(hookargs) |
|
333 | self.hookargs.update(hookargs) | |
334 |
|
334 | |||
335 | class TransactionUnavailable(RuntimeError): |
|
335 | class TransactionUnavailable(RuntimeError): | |
336 | pass |
|
336 | pass | |
337 |
|
337 | |||
338 | def _notransaction(): |
|
338 | def _notransaction(): | |
339 | """default method to get a transaction while processing a bundle |
|
339 | """default method to get a transaction while processing a bundle | |
340 |
|
340 | |||
341 | Raise an exception to highlight the fact that no transaction was expected |
|
341 | Raise an exception to highlight the fact that no transaction was expected | |
342 | to be created""" |
|
342 | to be created""" | |
343 | raise TransactionUnavailable() |
|
343 | raise TransactionUnavailable() | |

def applybundle(repo, unbundler, tr, source, url=None, **kwargs):
    # transform me into unbundler.apply() as soon as the freeze is lifted
    if isinstance(unbundler, unbundle20):
        tr.hookargs['bundle2'] = '1'
        if source is not None and 'source' not in tr.hookargs:
            tr.hookargs['source'] = source
        if url is not None and 'url' not in tr.hookargs:
            tr.hookargs['url'] = url
        return processbundle(repo, unbundler, lambda: tr, source=source)
    else:
        # the transactiongetter won't be used, but we might as well set it
        op = bundleoperation(repo, lambda: tr, source=source)
        _processchangegroup(op, unbundler, tr, source, url, **kwargs)
        return op

class partiterator(object):
    def __init__(self, repo, op, unbundler):
        self.repo = repo
        self.op = op
        self.unbundler = unbundler
        self.iterator = None
        self.count = 0
        self.current = None

    def __enter__(self):
        def func():
            itr = enumerate(self.unbundler.iterparts())
            for count, p in itr:
                self.count = count
                self.current = p
                yield p
                p.consume()
                self.current = None
        self.iterator = func()
        return self.iterator

    def __exit__(self, type, exc, tb):
        if not self.iterator:
            return

        # Only gracefully abort in a normal exception situation. User aborts
        # like Ctrl+C throw a KeyboardInterrupt which does not derive from
        # Exception, and should not trigger a graceful cleanup.
        if isinstance(exc, Exception):
            # Any exceptions seeking to the end of the bundle at this point
            # are almost certainly related to the underlying stream being
            # bad. And, chances are that the exception we're handling is
            # related to getting in that bad state. So, we swallow the
            # seeking error and re-raise the original error.
            seekerror = False
            try:
                if self.current:
                    # consume the part content to not corrupt the stream.
                    self.current.consume()

                for part in self.iterator:
                    # consume the bundle content
                    part.consume()
            except Exception:
                seekerror = True

            # Small hack to let caller code distinguish exceptions from
            # bundle2 processing from processing the old format. This is
            # mostly needed to handle different return codes to unbundle
            # according to the type of bundle. We should probably clean up
            # or drop this return code craziness in a future version.
            exc.duringunbundle2 = True
            salvaged = []
            replycaps = None
            if self.op.reply is not None:
                salvaged = self.op.reply.salvageoutput()
                replycaps = self.op.reply.capabilities
            exc._replycaps = replycaps
            exc._bundle2salvagedoutput = salvaged

            # Re-raising from a variable loses the original stack. So only
            # use that form if we need to.
            if seekerror:
                raise exc

        self.repo.ui.debug('bundle2-input-bundle: %i parts total\n' %
                           self.count)

def processbundle(repo, unbundler, transactiongetter=None, op=None, source=''):
    """This function processes a bundle, applying its effects to/from a repo

    It iterates over each part then searches for and uses the proper handling
    code to process the part. Parts are processed in order.

    An unknown mandatory part will abort the process.

    It is temporarily possible to provide a prebuilt bundleoperation to the
    function. This is used to ensure output is properly propagated in case of
    an error during the unbundling. This output capturing part will likely be
    reworked and this ability will probably go away in the process.
    """
    if op is None:
        if transactiongetter is None:
            transactiongetter = _notransaction
        op = bundleoperation(repo, transactiongetter, source=source)
    # todo:
    # - replace this with an init function soon.
    # - exception catching
    unbundler.params
    if repo.ui.debugflag:
        msg = ['bundle2-input-bundle:']
        if unbundler.params:
            msg.append(' %i params' % len(unbundler.params))
        if op._gettransaction is None or op._gettransaction is _notransaction:
            msg.append(' no-transaction')
        else:
            msg.append(' with-transaction')
        msg.append('\n')
        repo.ui.debug(''.join(msg))

    processparts(repo, op, unbundler)

    return op

def processparts(repo, op, unbundler):
    with partiterator(repo, op, unbundler) as parts:
        for part in parts:
            _processpart(op, part)

def _processchangegroup(op, cg, tr, source, url, **kwargs):
    ret = cg.apply(op.repo, tr, source, url, **kwargs)
    op.records.add('changegroup', {
        'return': ret,
    })
    return ret

def _gethandler(op, part):
    status = 'unknown' # used by debug output
    try:
        handler = parthandlermapping.get(part.type)
        if handler is None:
            status = 'unsupported-type'
            raise error.BundleUnknownFeatureError(parttype=part.type)
        indebug(op.ui, 'found a handler for part %s' % part.type)
        unknownparams = part.mandatorykeys - handler.params
        if unknownparams:
            unknownparams = list(unknownparams)
            unknownparams.sort()
            status = 'unsupported-params (%s)' % ', '.join(unknownparams)
            raise error.BundleUnknownFeatureError(parttype=part.type,
                                                  params=unknownparams)
        status = 'supported'
    except error.BundleUnknownFeatureError as exc:
        if part.mandatory: # mandatory parts
            raise
        indebug(op.ui, 'ignoring unsupported advisory part %s' % exc)
        return # skip to part processing
    finally:
        if op.ui.debugflag:
            msg = ['bundle2-input-part: "%s"' % part.type]
            if not part.mandatory:
                msg.append(' (advisory)')
            nbmp = len(part.mandatorykeys)
            nbap = len(part.params) - nbmp
            if nbmp or nbap:
                msg.append(' (params:')
                if nbmp:
                    msg.append(' %i mandatory' % nbmp)
                if nbap:
                    msg.append(' %i advisory' % nbap)
                msg.append(')')
            msg.append(' %s\n' % status)
            op.ui.debug(''.join(msg))

    return handler

def _processpart(op, part):
    """process a single part from a bundle

    The part is guaranteed to have been fully consumed when the function
    exits (even if an exception is raised)."""
    handler = _gethandler(op, part)
    if handler is None:
        return

    # handler is called outside the above try block so that we don't
    # risk catching KeyErrors from anything other than the
    # parthandlermapping lookup (any KeyError raised by handler()
    # itself represents a defect of a different variety).
    output = None
    if op.captureoutput and op.reply is not None:
        op.ui.pushbuffer(error=True, subproc=True)
        output = ''
    try:
        handler(op, part)
    finally:
        if output is not None:
            output = op.ui.popbuffer()
        if output:
            outpart = op.reply.newpart('output', data=output,
                                       mandatory=False)
            outpart.addparam(
                'in-reply-to', pycompat.bytestr(part.id), mandatory=False)

def decodecaps(blob):
    """decode a bundle2 caps bytes blob into a dictionary

    The blob is a list of capabilities (one per line).
    Capabilities may have values using a line of the form::

        capability=value1,value2,value3

    The values are always a list."""
    caps = {}
    for line in blob.splitlines():
        if not line:
            continue
        if '=' not in line:
            key, vals = line, ()
        else:
            key, vals = line.split('=', 1)
            vals = vals.split(',')
        key = urlreq.unquote(key)
        vals = [urlreq.unquote(v) for v in vals]
        caps[key] = vals
    return caps

def encodecaps(caps):
    """encode a bundle2 caps dictionary into a bytes blob"""
    chunks = []
    for ca in sorted(caps):
        vals = caps[ca]
        ca = urlreq.quote(ca)
        vals = [urlreq.quote(v) for v in vals]
        if vals:
            ca = "%s=%s" % (ca, ','.join(vals))
        chunks.append(ca)
    return '\n'.join(chunks)
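
# A round-trip sketch of the two helpers above (for illustration only):
#
#   caps = {'HG20': [], 'changegroup': ['01', '02']}
#   blob = encodecaps(caps)   # -> 'HG20\nchangegroup=01,02'
#   decodecaps(blob) == caps  # -> True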

bundletypes = {
    "": ("", 'UN'),       # only when using unbundle on ssh and old http servers
                          # since the unification ssh accepts a header but there
                          # is no capability signaling it.
    "HG20": (),           # special-cased below
    "HG10UN": ("HG10UN", 'UN'),
    "HG10BZ": ("HG10", 'BZ'),
    "HG10GZ": ("HG10GZ", 'GZ'),
}

# hgweb uses this list to communicate its preferred type
bundlepriority = ['HG10GZ', 'HG10BZ', 'HG10UN']

class bundle20(object):
    """represent an outgoing bundle2 container

    Use the `addparam` method to add stream level parameters and the
    `newpart` method to populate it. Then call `getchunks` to retrieve all
    the binary chunks of data that compose the bundle2 container."""
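
    # A typical usage sketch (for illustration only; it assumes a `ui`
    # object is at hand and that an 'output' part fits the use case):
    #
    #   bundler = bundle20(ui)
    #   bundler.setcompression('GZ')    # adds the 'Compression' parameter
    #   bundler.newpart('output', data=b'some payload', mandatory=False)
    #   raw = b''.join(bundler.getchunks())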

    _magicstring = 'HG20'

    def __init__(self, ui, capabilities=()):
        self.ui = ui
        self._params = []
        self._parts = []
        self.capabilities = dict(capabilities)
        self._compengine = util.compengines.forbundletype('UN')
        self._compopts = None
        # If compression is being handled by a consumer of the raw
        # data (e.g. the wire protocol), unsetting this flag tells
        # consumers that the bundle is best left uncompressed.
        self.prefercompressed = True

    def setcompression(self, alg, compopts=None):
        """setup core part compression to <alg>"""
        if alg in (None, 'UN'):
            return
        assert not any(n.lower() == 'compression' for n, v in self._params)
        self.addparam('Compression', alg)
        self._compengine = util.compengines.forbundletype(alg)
        self._compopts = compopts

    @property
    def nbparts(self):
        """total number of parts added to the bundler"""
        return len(self._parts)

    # methods used to define the bundle2 content
    def addparam(self, name, value=None):
        """add a stream level parameter"""
        if not name:
            raise error.ProgrammingError(b'empty parameter name')
        if name[0:1] not in pycompat.bytestr(string.ascii_letters):
            raise error.ProgrammingError(b'non letter first character: %s'
                                         % name)
        self._params.append((name, value))

    def addpart(self, part):
        """add a new part to the bundle2 container

        Parts contain the actual applicative payload."""
        assert part.id is None
        part.id = len(self._parts) # very cheap counter
        self._parts.append(part)

    def newpart(self, typeid, *args, **kwargs):
        """create a new part and add it to the container

        The part is directly added to the container. For now, this means
        that any failure to properly initialize the part after calling
        ``newpart`` should result in a failure of the whole bundling process.

        You can still fall back to manually creating and adding it if you
        need better control."""
        part = bundlepart(typeid, *args, **kwargs)
        self.addpart(part)
        return part

    # methods used to generate the bundle2 stream
    def getchunks(self):
        if self.ui.debugflag:
            msg = ['bundle2-output-bundle: "%s",' % self._magicstring]
            if self._params:
                msg.append(' (%i params)' % len(self._params))
            msg.append(' %i parts total\n' % len(self._parts))
            self.ui.debug(''.join(msg))
        outdebug(self.ui, 'start emission of %s stream' % self._magicstring)
        yield self._magicstring
        param = self._paramchunk()
        outdebug(self.ui, 'bundle parameter: %s' % param)
        yield _pack(_fstreamparamsize, len(param))
        if param:
            yield param
        for chunk in self._compengine.compressstream(self._getcorechunk(),
                                                     self._compopts):
            yield chunk

    def _paramchunk(self):
        """return an encoded version of all stream parameters"""
        blocks = []
        for par, value in self._params:
            par = urlreq.quote(par)
            if value is not None:
                value = urlreq.quote(value)
                par = '%s=%s' % (par, value)
            blocks.append(par)
        return ' '.join(blocks)

    def _getcorechunk(self):
        """yield chunks for the core part of the bundle

        (all but headers and parameters)"""
        outdebug(self.ui, 'start of parts')
        for part in self._parts:
            outdebug(self.ui, 'bundle part: "%s"' % part.type)
            for chunk in part.getchunks(ui=self.ui):
                yield chunk
        outdebug(self.ui, 'end of bundle')
        yield _pack(_fpartheadersize, 0)


    def salvageoutput(self):
        """return a list with a copy of all output parts in the bundle

        This is meant to be used during error handling to make sure we
        preserve server output"""
        salvaged = []
        for part in self._parts:
            if part.type.startswith('output'):
                salvaged.append(part.copy())
        return salvaged


class unpackermixin(object):
    """A mixin to extract bytes and struct data from a stream"""

    def __init__(self, fp):
        self._fp = fp

    def _unpack(self, format):
        """unpack this struct format from the stream

        This method is meant for internal usage by the bundle2 protocol only.
        It directly manipulates the low level stream, including bundle2 level
        instructions.

        Do not use it to implement higher-level logic or methods."""
        data = self._readexact(struct.calcsize(format))
        return _unpack(format, data)

    def _readexact(self, size):
        """read exactly <size> bytes from the stream

        This method is meant for internal usage by the bundle2 protocol only.
        It directly manipulates the low level stream, including bundle2 level
        instructions.

        Do not use it to implement higher-level logic or methods."""
        return changegroup.readexactly(self._fp, size)

def getunbundler(ui, fp, magicstring=None):
    """return a valid unbundler object for a given magicstring"""
    if magicstring is None:
        magicstring = changegroup.readexactly(fp, 4)
    magic, version = magicstring[0:2], magicstring[2:4]
    if magic != 'HG':
        ui.debug(
            "error: invalid magic: %r (version %r), should be 'HG'\n"
            % (magic, version))
        raise error.Abort(_('not a Mercurial bundle'))
    unbundlerclass = formatmap.get(version)
    if unbundlerclass is None:
        raise error.Abort(_('unknown bundle version %s') % version)
    unbundler = unbundlerclass(ui, fp)
    indebug(ui, 'start processing of %s stream' % magicstring)
    return unbundler
756 |
|
756 | |||
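# Illustrative sketch (not part of bundle2 itself): how calling code can use
# getunbundler() to walk the parts of a bundle2 file on disk. The file path
# and the ui instance are assumptions made only for this example.
def _example_iterbundle(ui, path='bundle.hg2'):
    # open the raw bundle; getunbundler() sniffs the 4-byte magic string
    with open(path, 'rb') as fp:
        unbundler = getunbundler(ui, fp)
        for part in unbundler.iterparts():
            # each part exposes its routing type and its decoded parameters;
            # iterparts() consumes every part after it has been yielded
            ui.write('%s %r\n' % (part.type, part.params))
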
class unbundle20(unpackermixin):
    """interpret a bundle2 stream

    This class is fed with a binary stream and yields parts through its
    `iterparts` method."""

    _magicstring = 'HG20'

    def __init__(self, ui, fp):
        """If header is specified, we do not read it out of the stream."""
        self.ui = ui
        self._compengine = util.compengines.forbundletype('UN')
        self._compressed = None
        super(unbundle20, self).__init__(fp)

    @util.propertycache
    def params(self):
        """dictionary of stream level parameters"""
        indebug(self.ui, 'reading bundle2 stream parameters')
        params = {}
        paramssize = self._unpack(_fstreamparamsize)[0]
        if paramssize < 0:
            raise error.BundleValueError('negative bundle param size: %i'
                                         % paramssize)
        if paramssize:
            params = self._readexact(paramssize)
            params = self._processallparams(params)
        return params

    def _processallparams(self, paramsblock):
        """process the whole space-separated block of stream parameters

        Returns an ordered mapping of parameter names to values."""
        params = util.sortdict()
        for p in paramsblock.split(' '):
            p = p.split('=', 1)
            p = [urlreq.unquote(i) for i in p]
            if len(p) < 2:
                p.append(None)
            self._processparam(*p)
            params[p[0]] = p[1]
        return params


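    # Illustrative sketch (comment only, not executed): decoding of a stream
    # parameter blob by the method above. Names and values are urlquoted and
    # space separated; a name without '=' gets a None value. For instance:
    #
    #   self._processallparams('e%7C=spam%20ham eggs')
    #
    # returns an ordered dict {'e|': 'spam ham', 'eggs': None} and runs
    # _processparam() on each pair before storing it.
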
    def _processparam(self, name, value):
        """process a parameter, applying its effect if needed

        Parameters starting with a lower case letter are advisory and will be
        ignored when unknown. Those starting with an upper case letter are
        mandatory and this function will raise a
        ``BundleUnknownFeatureError`` when they are unknown.

        Note: no options are currently supported. Any input will either be
        ignored or fail.
        """
        if not name:
            raise ValueError(r'empty parameter name')
        if name[0:1] not in pycompat.bytestr(string.ascii_letters):
            raise ValueError(r'non letter first character: %s' % name)
        try:
            handler = b2streamparamsmap[name.lower()]
        except KeyError:
            if name[0:1].islower():
                indebug(self.ui, "ignoring unknown parameter %s" % name)
            else:
                raise error.BundleUnknownFeatureError(params=(name,))
        else:
            handler(self, name, value)

    def _forwardchunks(self):
        """utility to transfer a bundle2 as binary

        This is made necessary by the fact the 'getbundle' command over 'ssh'
        has no way to know when the reply ends, relying on the bundle to be
        interpreted to know its end. This is terrible and we are sorry, but we
        needed to move forward to get general delta enabled.
        """
        yield self._magicstring
        assert 'params' not in vars(self)
        paramssize = self._unpack(_fstreamparamsize)[0]
        if paramssize < 0:
            raise error.BundleValueError('negative bundle param size: %i'
                                         % paramssize)
        yield _pack(_fstreamparamsize, paramssize)
        if paramssize:
            params = self._readexact(paramssize)
            self._processallparams(params)
            yield params
        assert self._compengine.bundletype()[1] == 'UN'
        # From there, payload might need to be decompressed
        self._fp = self._compengine.decompressorreader(self._fp)
        emptycount = 0
        while emptycount < 2:
            # so we can brainlessly loop
            assert _fpartheadersize == _fpayloadsize
            size = self._unpack(_fpartheadersize)[0]
            yield _pack(_fpartheadersize, size)
            if size:
                emptycount = 0
            else:
                emptycount += 1
                continue
            if size == flaginterrupt:
                continue
            elif size < 0:
                raise error.BundleValueError('negative chunk size: %i' % size)
            yield self._readexact(size)


    def iterparts(self, seekable=False):
        """yield all parts contained in the stream"""
        cls = seekableunbundlepart if seekable else unbundlepart
        # make sure params have been loaded
        self.params
        # From there, payload needs to be decompressed
        self._fp = self._compengine.decompressorreader(self._fp)
        indebug(self.ui, 'start extraction of bundle2 parts')
        headerblock = self._readpartheader()
        while headerblock is not None:
            part = cls(self.ui, headerblock, self._fp)
            yield part
            # Ensure part is fully consumed so we can start reading the next
            # part.
            part.consume()

            headerblock = self._readpartheader()
        indebug(self.ui, 'end of bundle2 stream')

    def _readpartheader(self):
        """read a part header size and return the bytes blob

        returns None if empty"""
        headersize = self._unpack(_fpartheadersize)[0]
        if headersize < 0:
            raise error.BundleValueError('negative part header size: %i'
                                         % headersize)
        indebug(self.ui, 'part header size: %i' % headersize)
        if headersize:
            return self._readexact(headersize)
        return None

    def compressed(self):
        self.params # load params
        return self._compressed

    def close(self):
        """close underlying file"""
        if util.safehasattr(self._fp, 'close'):
            return self._fp.close()

formatmap = {'20': unbundle20}

b2streamparamsmap = {}

def b2streamparamhandler(name):
    """register a handler for a stream level parameter"""
    def decorator(func):
        # guard against double registration of the same parameter name
        assert name not in b2streamparamsmap
        b2streamparamsmap[name] = func
        return func
    return decorator

@b2streamparamhandler('compression')
def processcompression(unbundler, param, value):
    """read compression parameter and install payload decompression"""
    if value not in util.compengines.supportedbundletypes:
        raise error.BundleUnknownFeatureError(params=(param,),
                                              values=(value,))
    unbundler._compengine = util.compengines.forbundletype(value)
    if value is not None:
        unbundler._compressed = True

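# Illustrative sketch (assumption: a hypothetical advisory parameter named
# 'exampletag', not defined by the real protocol): registering an extra
# stream-level parameter handler. Lower-case names are advisory, so peers
# that do not know 'exampletag' will ignore it instead of aborting.
@b2streamparamhandler('exampletag')
def _example_processexampletag(unbundler, param, value):
    # stash the raw value on the unbundler for later inspection
    indebug(unbundler.ui, 'saw exampletag=%s' % value)
    unbundler._exampletag = value
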
class bundlepart(object):
    """A bundle2 part contains application level payload

    The part `type` is used to route the part to the application level
    handler.

    The part payload is contained in ``part.data``. It could be raw bytes or a
    generator of byte chunks.

    You can add parameters to the part using the ``addparam`` method.
    Parameters can be either mandatory (default) or advisory. Remote side
    should be able to safely ignore the advisory ones.

    Neither data nor parameters may be modified after generation has begun.
    """

    def __init__(self, parttype, mandatoryparams=(), advisoryparams=(),
                 data='', mandatory=True):
        validateparttype(parttype)
        self.id = None
        self.type = parttype
        self._data = data
        self._mandatoryparams = list(mandatoryparams)
        self._advisoryparams = list(advisoryparams)
        # checking for duplicated entries
        self._seenparams = set()
        for pname, __ in self._mandatoryparams + self._advisoryparams:
            if pname in self._seenparams:
                raise error.ProgrammingError('duplicated params: %s' % pname)
            self._seenparams.add(pname)
        # status of the part's generation:
        # - None: not started,
        # - False: currently generated,
        # - True: generation done.
        self._generated = None
        self.mandatory = mandatory

    def __repr__(self):
        cls = "%s.%s" % (self.__class__.__module__, self.__class__.__name__)
        return ('<%s object at %x; id: %s; type: %s; mandatory: %s>'
                % (cls, id(self), self.id, self.type, self.mandatory))

    def copy(self):
        """return a copy of the part

        The new part has the very same content but no partid assigned yet.
        Parts with generated data cannot be copied."""
        assert not util.safehasattr(self.data, 'next')
        return self.__class__(self.type, self._mandatoryparams,
                              self._advisoryparams, self._data, self.mandatory)

    # methods used to define the part content
    @property
    def data(self):
        return self._data

    @data.setter
    def data(self, data):
        if self._generated is not None:
            raise error.ReadOnlyPartError('part is being generated')
        self._data = data

    @property
    def mandatoryparams(self):
        # make it an immutable tuple to force people through ``addparam``
        return tuple(self._mandatoryparams)

    @property
    def advisoryparams(self):
        # make it an immutable tuple to force people through ``addparam``
        return tuple(self._advisoryparams)

    def addparam(self, name, value='', mandatory=True):
        """add a parameter to the part

        If 'mandatory' is set to True, the remote handler must claim support
        for this parameter or the unbundling will be aborted.

        The 'name' and 'value' cannot exceed 255 bytes each.
        """
        if self._generated is not None:
            raise error.ReadOnlyPartError('part is being generated')
        if name in self._seenparams:
            raise ValueError('duplicated params: %s' % name)
        self._seenparams.add(name)
        params = self._advisoryparams
        if mandatory:
            params = self._mandatoryparams
        params.append((name, value))

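    # Illustrative sketch (comment only, not executed): building a part by
    # hand. 'output' is a real part type used for server messages; the
    # payload text is an assumption made for this example:
    #
    #   part = bundlepart('output', data='hello\n', mandatory=False)
    #   part.addparam('origin', 'example', mandatory=False)
    #
    # Advisory parts and advisory parameters may be safely skipped by
    # receivers that do not understand them.
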
    # methods used to generate the bundle2 stream
    def getchunks(self, ui):
        if self._generated is not None:
            raise error.ProgrammingError('part can only be consumed once')
        self._generated = False

        if ui.debugflag:
            msg = ['bundle2-output-part: "%s"' % self.type]
            if not self.mandatory:
                msg.append(' (advisory)')
            nbmp = len(self.mandatoryparams)
            nbap = len(self.advisoryparams)
            if nbmp or nbap:
                msg.append(' (params:')
                if nbmp:
                    msg.append(' %i mandatory' % nbmp)
                if nbap:
                    msg.append(' %i advisory' % nbap)
                msg.append(')')
            if not self.data:
                msg.append(' empty payload')
            elif (util.safehasattr(self.data, 'next')
                  or util.safehasattr(self.data, '__next__')):
                msg.append(' streamed payload')
            else:
                msg.append(' %i bytes payload' % len(self.data))
            msg.append('\n')
            ui.debug(''.join(msg))

        #### header
        if self.mandatory:
            parttype = self.type.upper()
        else:
            parttype = self.type.lower()
        outdebug(ui, 'part %s: "%s"' % (pycompat.bytestr(self.id), parttype))
        ## parttype
        header = [_pack(_fparttypesize, len(parttype)),
                  parttype, _pack(_fpartid, self.id),
                  ]
        ## parameters
        # count
        manpar = self.mandatoryparams
        advpar = self.advisoryparams
        header.append(_pack(_fpartparamcount, len(manpar), len(advpar)))
        # size
        parsizes = []
        for key, value in manpar:
            parsizes.append(len(key))
            parsizes.append(len(value))
        for key, value in advpar:
            parsizes.append(len(key))
            parsizes.append(len(value))
        paramsizes = _pack(_makefpartparamsizes(len(parsizes) // 2), *parsizes)
        header.append(paramsizes)
        # key, value
        for key, value in manpar:
            header.append(key)
            header.append(value)
        for key, value in advpar:
            header.append(key)
            header.append(value)
        ## finalize header
        try:
            headerchunk = ''.join(header)
        except TypeError:
            raise TypeError(r'Found a non-bytes trying to '
                            r'build bundle part header: %r' % header)
        outdebug(ui, 'header chunk size: %i' % len(headerchunk))
        yield _pack(_fpartheadersize, len(headerchunk))
        yield headerchunk
        ## payload
        try:
            for chunk in self._payloadchunks():
                outdebug(ui, 'payload chunk size: %i' % len(chunk))
                yield _pack(_fpayloadsize, len(chunk))
                yield chunk
        except GeneratorExit:
            # GeneratorExit means that nobody is listening for our
            # results anyway, so just bail quickly rather than trying
            # to produce an error part.
            ui.debug('bundle2-generatorexit\n')
            raise
        except BaseException as exc:
            bexc = stringutil.forcebytestr(exc)
            # backup exception data for later
            ui.debug('bundle2-input-stream-interrupt: encoding exception %s'
                     % bexc)
            tb = sys.exc_info()[2]
            msg = 'unexpected error: %s' % bexc
            interpart = bundlepart('error:abort', [('message', msg)],
                                   mandatory=False)
            interpart.id = 0
            yield _pack(_fpayloadsize, -1)
            for chunk in interpart.getchunks(ui=ui):
                yield chunk
            outdebug(ui, 'closing payload chunk')
            # abort current part payload
            yield _pack(_fpayloadsize, 0)
            pycompat.raisewithtb(exc, tb)
        # end of payload
        outdebug(ui, 'closing payload chunk')
        yield _pack(_fpayloadsize, 0)
        self._generated = True

    def _payloadchunks(self):
        """yield chunks of the part payload

        Exists to handle the different methods to provide data to a part."""
        # we only support fixed size data now.
        # This will be improved in the future.
        if (util.safehasattr(self.data, 'next')
            or util.safehasattr(self.data, '__next__')):
            buff = util.chunkbuffer(self.data)
            chunk = buff.read(preferedchunksize)
            while chunk:
                yield chunk
                chunk = buff.read(preferedchunksize)
        elif len(self.data):
            yield self.data


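# Illustrative sketch (not part of bundle2 itself): serializing a single part
# to raw bytes with bundlepart.getchunks(). A complete bundle would also wrap
# the parts in the 'HG20' magic string, the stream parameter block and the
# end-of-stream marker.
def _example_partbytes(ui, part):
    # bundle20.addpart() normally assigns the numeric part id; fake one here
    part.id = 0
    # getchunks() yields the framed header and payload chunks in wire order
    return ''.join(part.getchunks(ui=ui))
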
flaginterrupt = -1

class interrupthandler(unpackermixin):
    """read one part and process it with restricted capability

    This makes it possible to transmit an exception raised on the producer
    side during part iteration while the consumer is reading a part.

    Parts processed in this manner only have access to a ui object."""

    def __init__(self, ui, fp):
        super(interrupthandler, self).__init__(fp)
        self.ui = ui

    def _readpartheader(self):
        """read a part header size and return the bytes blob

        returns None if empty"""
        headersize = self._unpack(_fpartheadersize)[0]
        if headersize < 0:
            raise error.BundleValueError('negative part header size: %i'
                                         % headersize)
        indebug(self.ui, 'part header size: %i\n' % headersize)
        if headersize:
            return self._readexact(headersize)
        return None

    def __call__(self):
        self.ui.debug('bundle2-input-stream-interrupt:'
                      ' opening out of band context\n')
        indebug(self.ui, 'bundle2 stream interruption, looking for a part.')
        headerblock = self._readpartheader()
        if headerblock is None:
            indebug(self.ui, 'no part found during interruption.')
            return
        part = unbundlepart(self.ui, headerblock, self._fp)
        op = interruptoperation(self.ui)
        hardabort = False
        try:
            _processpart(op, part)
        except (SystemExit, KeyboardInterrupt):
            hardabort = True
            raise
        finally:
            if not hardabort:
                part.consume()
        self.ui.debug('bundle2-input-stream-interrupt:'
                      ' closing out of band context\n')

class interruptoperation(object):
    """A limited operation to be used by part handlers during interruption

    It only has access to a ui object.
    """

    def __init__(self, ui):
        self.ui = ui
        self.reply = None
        self.captureoutput = False

    @property
    def repo(self):
        raise error.ProgrammingError('no repo access from stream interruption')

    def gettransaction(self):
        raise TransactionUnavailable('no repo access from stream interruption')

def decodepayloadchunks(ui, fh):
    """Reads bundle2 part payload data into chunks.

    Part payload data consists of framed chunks. This function takes
    a file handle and emits those chunks.
    """
    dolog = ui.configbool('devel', 'bundle2.debug')
    debug = ui.debug

    headerstruct = struct.Struct(_fpayloadsize)
    headersize = headerstruct.size
    unpack = headerstruct.unpack

    readexactly = changegroup.readexactly
    read = fh.read

    chunksize = unpack(readexactly(fh, headersize))[0]
    indebug(ui, 'payload chunk size: %i' % chunksize)

    # changegroup.readexactly() is inlined below for performance.
    while chunksize:
        if chunksize >= 0:
            s = read(chunksize)
            if len(s) < chunksize:
                raise error.Abort(_('stream ended unexpectedly '
                                    '(got %d bytes, expected %d)') %
                                  (len(s), chunksize))

            yield s
        elif chunksize == flaginterrupt:
            # Interrupt "signal" detected. The regular stream is interrupted
            # and a bundle2 part follows. Consume it.
            interrupthandler(ui, fh)()
        else:
            raise error.BundleValueError(
                'negative payload chunk size: %s' % chunksize)

        s = read(headersize)
        if len(s) < headersize:
            raise error.Abort(_('stream ended unexpectedly '
                                '(got %d bytes, expected %d)') %
                              (len(s), headersize))

        chunksize = unpack(s)[0]

        # indebug() inlined for performance.
        if dolog:
            debug('bundle2-input: payload chunk size: %i\n' % chunksize)

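# Illustrative sketch (assumption: a minimal hand-built payload framing, not
# bytes produced by a real peer): decodepayloadchunks() on an in-memory
# stream. Each chunk is an int32 big-endian length followed by that many
# bytes; a zero length terminates the payload.
def _example_decodeframing(ui):
    import io
    framed = io.BytesIO(_pack(_fpayloadsize, 3) + 'abc'   # one 3-byte chunk
                        + _pack(_fpayloadsize, 2) + 'de'  # one 2-byte chunk
                        + _pack(_fpayloadsize, 0))        # end-of-payload
    return list(decodepayloadchunks(ui, framed))          # ['abc', 'de']
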
class unbundlepart(unpackermixin):
    """a bundle part read from a bundle"""

    def __init__(self, ui, header, fp):
        super(unbundlepart, self).__init__(fp)
        self._seekable = (util.safehasattr(fp, 'seek') and
                          util.safehasattr(fp, 'tell'))
        self.ui = ui
        # unbundle state attr
        self._headerdata = header
        self._headeroffset = 0
        self._initialized = False
        self.consumed = False
        # part data
        self.id = None
        self.type = None
        self.mandatoryparams = None
        self.advisoryparams = None
        self.params = None
        self.mandatorykeys = ()
        self._readheader()
        self._mandatory = None
        self._pos = 0

    def _fromheader(self, size):
        """return the next <size> bytes from the header"""
        offset = self._headeroffset
        data = self._headerdata[offset:(offset + size)]
        self._headeroffset = offset + size
        return data

    def _unpackheader(self, format):
        """read given format from header

        This automatically computes the size of the format to read."""
        data = self._fromheader(struct.calcsize(format))
        return _unpack(format, data)

    def _initparams(self, mandatoryparams, advisoryparams):
        """internal function to set up all logic-related parameters"""
        # make it read only to prevent people touching it by mistake.
        self.mandatoryparams = tuple(mandatoryparams)
        self.advisoryparams = tuple(advisoryparams)
        # user friendly UI
        self.params = util.sortdict(self.mandatoryparams)
        self.params.update(self.advisoryparams)
        self.mandatorykeys = frozenset(p[0] for p in mandatoryparams)

    def _readheader(self):
        """read the header and set up the object"""
        typesize = self._unpackheader(_fparttypesize)[0]
        self.type = self._fromheader(typesize)
        indebug(self.ui, 'part type: "%s"' % self.type)
        self.id = self._unpackheader(_fpartid)[0]
        indebug(self.ui, 'part id: "%s"' % pycompat.bytestr(self.id))
        # extract mandatory bit from type
        self.mandatory = (self.type != self.type.lower())
        self.type = self.type.lower()
        ## reading parameters
        # param count
        mancount, advcount = self._unpackheader(_fpartparamcount)
        indebug(self.ui, 'part parameters: %i' % (mancount + advcount))
        # param size
        fparamsizes = _makefpartparamsizes(mancount + advcount)
        paramsizes = self._unpackheader(fparamsizes)
        # make it a list of couples again
        paramsizes = list(zip(paramsizes[::2], paramsizes[1::2]))
        # split mandatory from advisory
        mansizes = paramsizes[:mancount]
        advsizes = paramsizes[mancount:]
        # retrieve param values
        manparams = []
        for key, value in mansizes:
            manparams.append((self._fromheader(key), self._fromheader(value)))
        advparams = []
        for key, value in advsizes:
            advparams.append((self._fromheader(key), self._fromheader(value)))
        self._initparams(manparams, advparams)
        ## part payload
        self._payloadstream = util.chunkbuffer(self._payloadchunks())
        # we read the data, tell it
        self._initialized = True

    def _payloadchunks(self):
        """Generator of decoded chunks in the payload."""
        return decodepayloadchunks(self.ui, self._fp)

    def consume(self):
        """Read the part payload until completion.

        By consuming the part data, the underlying stream read offset will
        be advanced to the next part (or end of stream).
        """
        if self.consumed:
            return

        chunk = self.read(32768)
        while chunk:
            self._pos += len(chunk)
            chunk = self.read(32768)

    def read(self, size=None):
        """read payload data"""
        if not self._initialized:
            self._readheader()
        if size is None:
            data = self._payloadstream.read()
        else:
            data = self._payloadstream.read(size)
        self._pos += len(data)
        if size is None or len(data) < size:
            if not self.consumed and self._pos:
                self.ui.debug('bundle2-input-part: total payload size %i\n'
                              % self._pos)
            self.consumed = True
        return data

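# Illustrative sketch (not part of bundle2 itself): reading only the start of
# each payload. The 1024-byte peek size is an assumption for the example; a
# short read (or a read() with no size) marks the part as consumed.
def _example_peekparts(unbundler):
    peeks = []
    for part in unbundler.iterparts():
        head = part.read(1024)    # first bytes of the decoded payload
        peeks.append((part.type, head))
        part.consume()            # drain the rest so iteration can continue
    return peeks
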
class seekableunbundlepart(unbundlepart):
    """A bundle2 part in a bundle that is seekable.

    Regular ``unbundlepart`` instances can only be read once. This class
    extends ``unbundlepart`` to enable bi-directional seeking within the
    part.

    Bundle2 part data consists of framed chunks. Offsets when seeking
    refer to the decoded data, not the offsets in the underlying bundle2
    stream.

    To facilitate quickly seeking within the decoded data, instances of this
    class maintain a mapping between offsets in the underlying stream and
    the decoded payload. This mapping will consume memory in proportion
    to the number of chunks within the payload (which almost certainly
    increases in proportion with the size of the part).
    """
    def __init__(self, ui, header, fp):
        # (payload, file) offsets for chunk starts.
        self._chunkindex = []

        super(seekableunbundlepart, self).__init__(ui, header, fp)

    def _payloadchunks(self, chunknum=0):
        '''seek to specified chunk and start yielding data'''
        if len(self._chunkindex) == 0:
            assert chunknum == 0, 'Must start with chunk 0'
            self._chunkindex.append((0, self._tellfp()))
        else:
            assert chunknum < len(self._chunkindex), \
                   'Unknown chunk %d' % chunknum
            self._seekfp(self._chunkindex[chunknum][1])

        pos = self._chunkindex[chunknum][0]

        for chunk in decodepayloadchunks(self.ui, self._fp):
            chunknum += 1
            pos += len(chunk)
            if chunknum == len(self._chunkindex):
                self._chunkindex.append((pos, self._tellfp()))

            yield chunk

    def _findchunk(self, pos):
        '''for a given payload position, return a chunk number and offset'''
        for chunk, (ppos, fpos) in enumerate(self._chunkindex):
            if ppos == pos:
                return chunk, 0
            elif ppos > pos:
                return chunk - 1, pos - self._chunkindex[chunk - 1][0]
        raise ValueError('Unknown chunk')

    def tell(self):
        return self._pos

    def seek(self, offset, whence=os.SEEK_SET):
        if whence == os.SEEK_SET:
            newpos = offset
        elif whence == os.SEEK_CUR:
            newpos = self._pos + offset
        elif whence == os.SEEK_END:
            if not self.consumed:
                # Can't use self.consume() here because it advances self._pos.
                chunk = self.read(32768)
                while chunk:
                    chunk = self.read(32768)
            newpos = self._chunkindex[-1][0] - offset
        else:
            raise ValueError('Unknown whence value: %r' % (whence,))

        if newpos > self._chunkindex[-1][0] and not self.consumed:
            # Can't use self.consume() here because it advances self._pos.
            chunk = self.read(32768)
            while chunk:
                chunk = self.read(32768)

        if not 0 <= newpos <= self._chunkindex[-1][0]:
            raise ValueError('Offset out of range')

        if self._pos != newpos:
            chunk, internaloffset = self._findchunk(newpos)
            self._payloadstream = util.chunkbuffer(self._payloadchunks(chunk))
            adjust = self.read(internaloffset)
            if len(adjust) != internaloffset:
                raise error.Abort(_('Seek failed\n'))
            self._pos = newpos

1458 | def _seekfp(self, offset, whence=0): |
|
1458 | def _seekfp(self, offset, whence=0): | |
1459 | """move the underlying file pointer |
|
1459 | """move the underlying file pointer | |
1460 |
|
1460 | |||
1461 | This method is meant for internal usage by the bundle2 protocol only. |
|
1461 | This method is meant for internal usage by the bundle2 protocol only. | |
1462 | It directly manipulates the low-level stream, including bundle2-level |
|
1462 | It directly manipulates the low-level stream, including bundle2-level | |
1463 | instructions. |
|
1463 | instructions. | |
1464 |
|
1464 | |||
1465 | Do not use it to implement higher-level logic or methods.""" |
|
1465 | Do not use it to implement higher-level logic or methods.""" | |
1466 | if self._seekable: |
|
1466 | if self._seekable: | |
1467 | return self._fp.seek(offset, whence) |
|
1467 | return self._fp.seek(offset, whence) | |
1468 | else: |
|
1468 | else: | |
1469 | raise NotImplementedError(_('File pointer is not seekable')) |
|
1469 | raise NotImplementedError(_('File pointer is not seekable')) | |
1470 |
|
1470 | |||
1471 | def _tellfp(self): |
|
1471 | def _tellfp(self): | |
1472 | """return the file offset, or None if file is not seekable |
|
1472 | """return the file offset, or None if file is not seekable | |
1473 |
|
1473 | |||
1474 | This method is meant for internal usage by the bundle2 protocol only. |
|
1474 | This method is meant for internal usage by the bundle2 protocol only. | |
1475 | It directly manipulates the low-level stream, including bundle2-level |
|
1475 | It directly manipulates the low-level stream, including bundle2-level | |
1476 | instructions. |
|
1476 | instructions. | |
1477 |
|
1477 | |||
1478 | Do not use it to implement higher-level logic or methods.""" |
|
1478 | Do not use it to implement higher-level logic or methods.""" | |
1479 | if self._seekable: |
|
1479 | if self._seekable: | |
1480 | try: |
|
1480 | try: | |
1481 | return self._fp.tell() |
|
1481 | return self._fp.tell() | |
1482 | except IOError as e: |
|
1482 | except IOError as e: | |
1483 | if e.errno == errno.ESPIPE: |
|
1483 | if e.errno == errno.ESPIPE: | |
1484 | self._seekable = False |
|
1484 | self._seekable = False | |
1485 | else: |
|
1485 | else: | |
1486 | raise |
|
1486 | raise | |
1487 | return None |
|
1487 | return None | |
1488 |
|
1488 | |||
1489 | # These are only the static capabilities. |
|
1489 | # These are only the static capabilities. | |
1490 | # Check the 'getrepocaps' function for the rest. |
|
1490 | # Check the 'getrepocaps' function for the rest. | |
1491 | capabilities = {'HG20': (), |
|
1491 | capabilities = {'HG20': (), | |
1492 | 'bookmarks': (), |
|
1492 | 'bookmarks': (), | |
1493 | 'error': ('abort', 'unsupportedcontent', 'pushraced', |
|
1493 | 'error': ('abort', 'unsupportedcontent', 'pushraced', | |
1494 | 'pushkey'), |
|
1494 | 'pushkey'), | |
1495 | 'listkeys': (), |
|
1495 | 'listkeys': (), | |
1496 | 'pushkey': (), |
|
1496 | 'pushkey': (), | |
1497 | 'digests': tuple(sorted(util.DIGESTS.keys())), |
|
1497 | 'digests': tuple(sorted(util.DIGESTS.keys())), | |
1498 | 'remote-changegroup': ('http', 'https'), |
|
1498 | 'remote-changegroup': ('http', 'https'), | |
1499 | 'hgtagsfnodes': (), |
|
1499 | 'hgtagsfnodes': (), | |
1500 | 'rev-branch-cache': (), |
|
1500 | 'rev-branch-cache': (), | |
1501 | 'phases': ('heads',), |
|
1501 | 'phases': ('heads',), | |
1502 | 'stream': ('v2',), |
|
1502 | 'stream': ('v2',), | |
1503 | } |
|
1503 | } | |
1504 |
|
1504 | |||
1505 | def getrepocaps(repo, allowpushback=False, role=None): |
|
1505 | def getrepocaps(repo, allowpushback=False, role=None): | |
1506 | """return the bundle2 capabilities for a given repo |
|
1506 | """return the bundle2 capabilities for a given repo | |
1507 |
|
1507 | |||
1508 | Exists to allow extensions (like evolution) to mutate the capabilities. |
|
1508 | Exists to allow extensions (like evolution) to mutate the capabilities. | |
1509 |
|
1509 | |||
1510 | The returned value is used for servers advertising their capabilities as |
|
1510 | The returned value is used for servers advertising their capabilities as | |
1511 | well as clients advertising their capabilities to servers as part of |
|
1511 | well as clients advertising their capabilities to servers as part of | |
1512 | bundle2 requests. The ``role`` argument specifies which is which. |
|
1512 | bundle2 requests. The ``role`` argument specifies which is which. | |
1513 | """ |
|
1513 | """ | |
1514 | if role not in ('client', 'server'): |
|
1514 | if role not in ('client', 'server'): | |
1515 | raise error.ProgrammingError('role argument must be client or server') |
|
1515 | raise error.ProgrammingError('role argument must be client or server') | |
1516 |
|
1516 | |||
1517 | caps = capabilities.copy() |
|
1517 | caps = capabilities.copy() | |
1518 | caps['changegroup'] = tuple(sorted( |
|
1518 | caps['changegroup'] = tuple(sorted( | |
1519 | changegroup.supportedincomingversions(repo))) |
|
1519 | changegroup.supportedincomingversions(repo))) | |
1520 | if obsolete.isenabled(repo, obsolete.exchangeopt): |
|
1520 | if obsolete.isenabled(repo, obsolete.exchangeopt): | |
1521 | supportedformat = tuple('V%i' % v for v in obsolete.formats) |
|
1521 | supportedformat = tuple('V%i' % v for v in obsolete.formats) | |
1522 | caps['obsmarkers'] = supportedformat |
|
1522 | caps['obsmarkers'] = supportedformat | |
1523 | if allowpushback: |
|
1523 | if allowpushback: | |
1524 | caps['pushback'] = () |
|
1524 | caps['pushback'] = () | |
1525 | cpmode = repo.ui.config('server', 'concurrent-push-mode') |
|
1525 | cpmode = repo.ui.config('server', 'concurrent-push-mode') | |
1526 | if cpmode == 'check-related': |
|
1526 | if cpmode == 'check-related': | |
1527 | caps['checkheads'] = ('related',) |
|
1527 | caps['checkheads'] = ('related',) | |
1528 | if 'phases' in repo.ui.configlist('devel', 'legacy.exchange'): |
|
1528 | if 'phases' in repo.ui.configlist('devel', 'legacy.exchange'): | |
1529 | caps.pop('phases') |
|
1529 | caps.pop('phases') | |
1530 |
|
1530 | |||
1531 | # Don't advertise stream clone support in server mode if not configured. |
|
1531 | # Don't advertise stream clone support in server mode if not configured. | |
1532 | if role == 'server': |
|
1532 | if role == 'server': | |
1533 | streamsupported = repo.ui.configbool('server', 'uncompressed', |
|
1533 | streamsupported = repo.ui.configbool('server', 'uncompressed', | |
1534 | untrusted=True) |
|
1534 | untrusted=True) | |
1535 | featuresupported = repo.ui.configbool('server', 'bundle2.stream') |
|
1535 | featuresupported = repo.ui.configbool('server', 'bundle2.stream') | |
1536 |
|
1536 | |||
1537 | if not streamsupported or not featuresupported: |
|
1537 | if not streamsupported or not featuresupported: | |
1538 | caps.pop('stream') |
|
1538 | caps.pop('stream') | |
1539 | # Else always advertise support on client, because payload support |
|
1539 | # Else always advertise support on client, because payload support | |
1540 | # should always be advertised. |
|
1540 | # should always be advertised. | |
1541 |
|
1541 | |||
1542 | return caps |
|
1542 | return caps | |
1543 |
|
1543 | |||
1544 | def bundle2caps(remote): |
|
1544 | def bundle2caps(remote): | |
1545 | """return the bundle capabilities of a peer as dict""" |
|
1545 | """return the bundle capabilities of a peer as dict""" | |
1546 | raw = remote.capable('bundle2') |
|
1546 | raw = remote.capable('bundle2') | |
1547 | if not raw and raw != '': |
|
1547 | if not raw and raw != '': | |
1548 | return {} |
|
1548 | return {} | |
1549 | capsblob = urlreq.unquote(remote.capable('bundle2')) |
|
1549 | capsblob = urlreq.unquote(remote.capable('bundle2')) | |
1550 | return decodecaps(capsblob) |
|
1550 | return decodecaps(capsblob) | |
1551 |
|
1551 | |||
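'decodecaps' (defined earlier in this module) reverses the urlquoted, newline-separated 'name=value1,value2' blob used when capabilities are advertised. A rough standalone sketch of that round trip, with the stdlib quoting helpers standing in for 'urlreq':

    from urllib.parse import quote, unquote

    def encodecaps(caps):
        # one 'name=value1,value2' entry per line, everything urlquoted
        chunks = []
        for name in sorted(caps):
            entry = quote(name)
            vals = [quote(v) for v in caps[name]]
            if vals:
                entry = '%s=%s' % (entry, ','.join(vals))
            chunks.append(entry)
        return '\n'.join(chunks)

    def decodecaps(blob):
        caps = {}
        for line in blob.splitlines():
            if not line:
                continue
            if '=' in line:
                key, rest = line.split('=', 1)
                vals = tuple(unquote(v) for v in rest.split(','))
            else:
                key, vals = line, ()
            caps[unquote(key)] = vals
        return caps

    caps = {'error': ('abort', 'unsupportedcontent'), 'listkeys': ()}
    assert decodecaps(encodecaps(caps)) == caps
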
1552 | def obsmarkersversion(caps): |
|
1552 | def obsmarkersversion(caps): | |
1553 | """extract the list of supported obsmarkers versions from a bundle2caps dict |
|
1553 | """extract the list of supported obsmarkers versions from a bundle2caps dict | |
1554 | """ |
|
1554 | """ | |
1555 | obscaps = caps.get('obsmarkers', ()) |
|
1555 | obscaps = caps.get('obsmarkers', ()) | |
1556 | return [int(c[1:]) for c in obscaps if c.startswith('V')] |
|
1556 | return [int(c[1:]) for c in obscaps if c.startswith('V')] | |
1557 |
|
1557 | |||
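'buildobsmarkerspart' further down pairs this parsing with 'obsolete.commonversion' to settle on a marker format both sides speak. A hedged sketch of that negotiation, assuming the highest mutually supported version wins (the real selection order lives in 'obsolete' and may differ):

    def obsmarkersversion(caps):
        obscaps = caps.get('obsmarkers', ())
        return [int(c[1:]) for c in obscaps if c.startswith('V')]

    def commonversion(localformats, remoteversions):
        # pick the newest version supported on both sides, or None
        common = set(localformats) & set(remoteversions)
        return max(common) if common else None

    remote = obsmarkersversion({'obsmarkers': ('V0', 'V1')})
    assert commonversion([0, 1, 2], remote) == 1
    assert commonversion([2], remote) is None
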
1558 | def writenewbundle(ui, repo, source, filename, bundletype, outgoing, opts, |
|
1558 | def writenewbundle(ui, repo, source, filename, bundletype, outgoing, opts, | |
1559 | vfs=None, compression=None, compopts=None): |
|
1559 | vfs=None, compression=None, compopts=None): | |
1560 | if bundletype.startswith('HG10'): |
|
1560 | if bundletype.startswith('HG10'): | |
1561 | cg = changegroup.makechangegroup(repo, outgoing, '01', source) |
|
1561 | cg = changegroup.makechangegroup(repo, outgoing, '01', source) | |
1562 | return writebundle(ui, cg, filename, bundletype, vfs=vfs, |
|
1562 | return writebundle(ui, cg, filename, bundletype, vfs=vfs, | |
1563 | compression=compression, compopts=compopts) |
|
1563 | compression=compression, compopts=compopts) | |
1564 | elif not bundletype.startswith('HG20'): |
|
1564 | elif not bundletype.startswith('HG20'): | |
1565 | raise error.ProgrammingError('unknown bundle type: %s' % bundletype) |
|
1565 | raise error.ProgrammingError('unknown bundle type: %s' % bundletype) | |
1566 |
|
1566 | |||
1567 | caps = {} |
|
1567 | caps = {} | |
1568 | if 'obsolescence' in opts: |
|
1568 | if 'obsolescence' in opts: | |
1569 | caps['obsmarkers'] = ('V1',) |
|
1569 | caps['obsmarkers'] = ('V1',) | |
1570 | bundle = bundle20(ui, caps) |
|
1570 | bundle = bundle20(ui, caps) | |
1571 | bundle.setcompression(compression, compopts) |
|
1571 | bundle.setcompression(compression, compopts) | |
1572 | _addpartsfromopts(ui, repo, bundle, source, outgoing, opts) |
|
1572 | _addpartsfromopts(ui, repo, bundle, source, outgoing, opts) | |
1573 | chunkiter = bundle.getchunks() |
|
1573 | chunkiter = bundle.getchunks() | |
1574 |
|
1574 | |||
1575 | return changegroup.writechunks(ui, chunkiter, filename, vfs=vfs) |
|
1575 | return changegroup.writechunks(ui, chunkiter, filename, vfs=vfs) | |
1576 |
|
1576 | |||
1577 | def _addpartsfromopts(ui, repo, bundler, source, outgoing, opts): |
|
1577 | def _addpartsfromopts(ui, repo, bundler, source, outgoing, opts): | |
1578 | # We should eventually reconcile this logic with the one behind |
|
1578 | # We should eventually reconcile this logic with the one behind | |
1579 | # 'exchange.getbundle2partsgenerator'. |
|
1579 | # 'exchange.getbundle2partsgenerator'. | |
1580 | # |
|
1580 | # | |
1581 | # The types of input from 'getbundle' and 'writenewbundle' are a bit |
|
1581 | # The types of input from 'getbundle' and 'writenewbundle' are a bit | |
1582 | # different right now. So we keep them separated for now for the sake of |
|
1582 | # different right now. So we keep them separated for now for the sake of | |
1583 | # simplicity. |
|
1583 | # simplicity. | |
1584 |
|
1584 | |||
1585 | # we might not always want a changegroup in such a bundle, for example in |
|
1585 | # we might not always want a changegroup in such a bundle, for example in | |
1586 | # stream bundles |
|
1586 | # stream bundles | |
1587 | if opts.get('changegroup', True): |
|
1587 | if opts.get('changegroup', True): | |
1588 | cgversion = opts.get('cg.version') |
|
1588 | cgversion = opts.get('cg.version') | |
1589 | if cgversion is None: |
|
1589 | if cgversion is None: | |
1590 | cgversion = changegroup.safeversion(repo) |
|
1590 | cgversion = changegroup.safeversion(repo) | |
1591 | cg = changegroup.makechangegroup(repo, outgoing, cgversion, source) |
|
1591 | cg = changegroup.makechangegroup(repo, outgoing, cgversion, source) | |
1592 | part = bundler.newpart('changegroup', data=cg.getchunks()) |
|
1592 | part = bundler.newpart('changegroup', data=cg.getchunks()) | |
1593 | part.addparam('version', cg.version) |
|
1593 | part.addparam('version', cg.version) | |
1594 | if 'clcount' in cg.extras: |
|
1594 | if 'clcount' in cg.extras: | |
1595 | part.addparam('nbchanges', '%d' % cg.extras['clcount'], |
|
1595 | part.addparam('nbchanges', '%d' % cg.extras['clcount'], | |
1596 | mandatory=False) |
|
1596 | mandatory=False) | |
1597 | if opts.get('phases') and repo.revs('%ln and secret()', |
|
1597 | if opts.get('phases') and repo.revs('%ln and secret()', | |
1598 | outgoing.missingheads): |
|
1598 | outgoing.missingheads): | |
1599 | part.addparam('targetphase', '%d' % phases.secret, mandatory=False) |
|
1599 | part.addparam('targetphase', '%d' % phases.secret, mandatory=False) | |
1600 |
|
1600 | |||
1601 | if opts.get('streamv2', False): |
|
1601 | if opts.get('streamv2', False): | |
1602 | addpartbundlestream2(bundler, repo, stream=True) |
|
1602 | addpartbundlestream2(bundler, repo, stream=True) | |
1603 |
|
1603 | |||
1604 | if opts.get('tagsfnodescache', True): |
|
1604 | if opts.get('tagsfnodescache', True): | |
1605 | addparttagsfnodescache(repo, bundler, outgoing) |
|
1605 | addparttagsfnodescache(repo, bundler, outgoing) | |
1606 |
|
1606 | |||
1607 | if opts.get('revbranchcache', True): |
|
1607 | if opts.get('revbranchcache', True): | |
1608 | addpartrevbranchcache(repo, bundler, outgoing) |
|
1608 | addpartrevbranchcache(repo, bundler, outgoing) | |
1609 |
|
1609 | |||
1610 | if opts.get('obsolescence', False): |
|
1610 | if opts.get('obsolescence', False): | |
1611 | obsmarkers = repo.obsstore.relevantmarkers(outgoing.missing) |
|
1611 | obsmarkers = repo.obsstore.relevantmarkers(outgoing.missing) | |
1612 | buildobsmarkerspart(bundler, obsmarkers) |
|
1612 | buildobsmarkerspart(bundler, obsmarkers) | |
1613 |
|
1613 | |||
1614 | if opts.get('phases', False): |
|
1614 | if opts.get('phases', False): | |
1615 | headsbyphase = phases.subsetphaseheads(repo, outgoing.missing) |
|
1615 | headsbyphase = phases.subsetphaseheads(repo, outgoing.missing) | |
1616 | phasedata = phases.binaryencode(headsbyphase) |
|
1616 | phasedata = phases.binaryencode(headsbyphase) | |
1617 | bundler.newpart('phase-heads', data=phasedata) |
|
1617 | bundler.newpart('phase-heads', data=phasedata) | |
1618 |
|
1618 | |||
1619 | def addparttagsfnodescache(repo, bundler, outgoing): |
|
1619 | def addparttagsfnodescache(repo, bundler, outgoing): | |
1620 | # we include the tags fnode cache for the bundle changeset |
|
1620 | # we include the tags fnode cache for the bundle changeset | |
1621 | # (as an optional part) |
|
1621 | # (as an optional part) | |
1622 | cache = tags.hgtagsfnodescache(repo.unfiltered()) |
|
1622 | cache = tags.hgtagsfnodescache(repo.unfiltered()) | |
1623 | chunks = [] |
|
1623 | chunks = [] | |
1624 |
|
1624 | |||
1625 | # .hgtags fnodes are only relevant for head changesets. While we could |
|
1625 | # .hgtags fnodes are only relevant for head changesets. While we could | |
1626 | # transfer values for all known nodes, there will likely be little to |
|
1626 | # transfer values for all known nodes, there will likely be little to | |
1627 | # no benefit. |
|
1627 | # no benefit. | |
1628 | # |
|
1628 | # | |
1629 | # We don't bother using a generator to produce output data because |
|
1629 | # We don't bother using a generator to produce output data because | |
1630 | # a) we only have 40 bytes per head and even esoteric numbers of heads |
|
1630 | # a) we only have 40 bytes per head and even esoteric numbers of heads | |
1631 | # consume little memory (1M heads is 40MB) b) we don't want to send the |
|
1631 | # consume little memory (1M heads is 40MB) b) we don't want to send the | |
1632 | # part if we don't have entries and knowing if we have entries requires |
|
1632 | # part if we don't have entries and knowing if we have entries requires | |
1633 | # cache lookups. |
|
1633 | # cache lookups. | |
1634 | for node in outgoing.missingheads: |
|
1634 | for node in outgoing.missingheads: | |
1635 | # Don't compute missing, as this may slow down serving. |
|
1635 | # Don't compute missing, as this may slow down serving. | |
1636 | fnode = cache.getfnode(node, computemissing=False) |
|
1636 | fnode = cache.getfnode(node, computemissing=False) | |
1637 | if fnode is not None: |
|
1637 | if fnode is not None: | |
1638 | chunks.extend([node, fnode]) |
|
1638 | chunks.extend([node, fnode]) | |
1639 |
|
1639 | |||
1640 | if chunks: |
|
1640 | if chunks: | |
1641 | bundler.newpart('hgtagsfnodes', data=''.join(chunks)) |
|
1641 | bundler.newpart('hgtagsfnodes', data=''.join(chunks)) | |
1642 |
|
1642 | |||
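The resulting 'hgtagsfnodes' payload is nothing more than concatenated 40-byte records: a 20-byte head node followed by the 20-byte node of its '.hgtags' filelog entry. A sketch of a decoder for that layout ('parsefnodespayload' is a hypothetical helper, not part of this module):

    def parsefnodespayload(data):
        # yield (headnode, hgtagsfnode) pairs from 40-byte records
        assert len(data) % 40 == 0, 'truncated hgtagsfnodes payload'
        for i in range(0, len(data), 40):
            yield data[i:i + 20], data[i + 20:i + 40]

    payload = b'\x11' * 20 + b'\x22' * 20 + b'\x33' * 20 + b'\x44' * 20
    assert list(parsefnodespayload(payload)) == [
        (b'\x11' * 20, b'\x22' * 20),
        (b'\x33' * 20, b'\x44' * 20),
    ]
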
1643 | def addpartrevbranchcache(repo, bundler, outgoing): |
|
1643 | def addpartrevbranchcache(repo, bundler, outgoing): | |
1644 | # we include the rev branch cache for the bundle changeset |
|
1644 | # we include the rev branch cache for the bundle changeset | |
1645 | # (as an optional part) |
|
1645 | # (as an optional part) | |
1646 | cache = repo.revbranchcache() |
|
1646 | cache = repo.revbranchcache() | |
1647 | cl = repo.unfiltered().changelog |
|
1647 | cl = repo.unfiltered().changelog | |
1648 | branchesdata = collections.defaultdict(lambda: (set(), set())) |
|
1648 | branchesdata = collections.defaultdict(lambda: (set(), set())) | |
1649 | for node in outgoing.missing: |
|
1649 | for node in outgoing.missing: | |
1650 | branch, close = cache.branchinfo(cl.rev(node)) |
|
1650 | branch, close = cache.branchinfo(cl.rev(node)) | |
1651 | branchesdata[branch][close].add(node) |
|
1651 | branchesdata[branch][close].add(node) | |
1652 |
|
1652 | |||
1653 | def generate(): |
|
1653 | def generate(): | |
1654 | for branch, (nodes, closed) in sorted(branchesdata.items()): |
|
1654 | for branch, (nodes, closed) in sorted(branchesdata.items()): | |
1655 | utf8branch = encoding.fromlocal(branch) |
|
1655 | utf8branch = encoding.fromlocal(branch) | |
1656 | yield rbcstruct.pack(len(utf8branch), len(nodes), len(closed)) |
|
1656 | yield rbcstruct.pack(len(utf8branch), len(nodes), len(closed)) | |
1657 | yield utf8branch |
|
1657 | yield utf8branch | |
1658 | for n in sorted(nodes): |
|
1658 | for n in sorted(nodes): | |
1659 | yield n |
|
1659 | yield n | |
1660 | for n in sorted(closed): |
|
1660 | for n in sorted(closed): | |
1661 | yield n |
|
1661 | yield n | |
1662 |
|
1662 | |||
1663 | bundler.newpart('cache:rev-branch-cache', data=generate(), |
|
1663 | bundler.newpart('cache:rev-branch-cache', data=generate(), | |
1664 | mandatory=False) |
|
1664 | mandatory=False) | |
1665 |
|
1665 | |||
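Each record emitted by 'generate()' is a fixed header packed by 'rbcstruct' (three big-endian 32-bit counts, a '>III' layout — an assumption here, inferred from the three packed lengths) followed by the UTF-8 branch name, the open-head nodes, and then the closed-head nodes. A sketch of the matching decoder:

    import struct

    rbcstruct = struct.Struct('>III')  # name length, #open nodes, #closed nodes

    def parserbcrecords(data):
        pos = 0
        while pos < len(data):
            namelen, nopen, nclosed = rbcstruct.unpack_from(data, pos)
            pos += rbcstruct.size
            branch = data[pos:pos + namelen].decode('utf-8')
            pos += namelen
            nodes = [data[pos + 20 * i:pos + 20 * (i + 1)] for i in range(nopen)]
            pos += 20 * nopen
            closed = [data[pos + 20 * i:pos + 20 * (i + 1)]
                      for i in range(nclosed)]
            pos += 20 * nclosed
            yield branch, nodes, closed

    record = rbcstruct.pack(7, 1, 0) + b'default' + b'\x99' * 20
    assert list(parserbcrecords(record)) == [('default', [b'\x99' * 20], [])]
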
1666 | def _formatrequirementsspec(requirements): |
|
1666 | def _formatrequirementsspec(requirements): | |
1667 | return urlreq.quote(','.join(sorted(requirements))) |
|
1667 | return urlreq.quote(','.join(sorted(requirements))) | |
1668 |
|
1668 | |||
1669 | def _formatrequirementsparams(requirements): |
|
1669 | def _formatrequirementsparams(requirements): | |
1670 | requirements = _formatrequirementsspec(requirements) |
|
1670 | requirements = _formatrequirementsspec(requirements) | |
1671 | params = "%s%s" % (urlreq.quote("requirements="), requirements) |
|
1671 | params = "%s%s" % (urlreq.quote("requirements="), requirements) | |
1672 | return params |
|
1672 | return params | |
1673 |
|
1673 | |||
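Because 'urlreq.quote' escapes the separators, a whole requirements set collapses into a single URL-safe token. A quick illustration with the stdlib equivalent:

    from urllib.parse import quote

    def formatrequirementsspec(requirements):
        return quote(','.join(sorted(requirements)))

    spec = formatrequirementsspec({'revlogv1', 'store', 'fncache'})
    assert spec == 'fncache%2Crevlogv1%2Cstore'
    # and the params form prefixes a quoted 'requirements=' key
    assert '%s%s' % (quote('requirements='), spec) == 'requirements%3D' + spec
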
1674 | def addpartbundlestream2(bundler, repo, **kwargs): |
|
1674 | def addpartbundlestream2(bundler, repo, **kwargs): | |
1675 | if not kwargs.get(r'stream', False): |
|
1675 | if not kwargs.get(r'stream', False): | |
1676 | return |
|
1676 | return | |
1677 |
|
1677 | |||
1678 | if not streamclone.allowservergeneration(repo): |
|
1678 | if not streamclone.allowservergeneration(repo): | |
1679 | raise error.Abort(_('stream data requested but server does not allow ' |
|
1679 | raise error.Abort(_('stream data requested but server does not allow ' | |
1680 | 'this feature'), |
|
1680 | 'this feature'), | |
1681 | hint=_('well-behaved clients should not be ' |
|
1681 | hint=_('well-behaved clients should not be ' | |
1682 | 'requesting stream data from servers not ' |
|
1682 | 'requesting stream data from servers not ' | |
1683 | 'advertising it; the client may be buggy')) |
|
1683 | 'advertising it; the client may be buggy')) | |
1684 |
|
1684 | |||
1685 | # Stream clones don't compress well. And compression undermines a |
|
1685 | # Stream clones don't compress well. And compression undermines a | |
1686 | # goal of stream clones, which is to be fast. Communicate the desire |
|
1686 | # goal of stream clones, which is to be fast. Communicate the desire | |
1687 | # to avoid compression to consumers of the bundle. |
|
1687 | # to avoid compression to consumers of the bundle. | |
1688 | bundler.prefercompressed = False |
|
1688 | bundler.prefercompressed = False | |
1689 |
|
1689 | |||
1690 | # get the includes and excludes |
|
1690 | # get the includes and excludes | |
1691 | includepats = kwargs.get(r'includepats') |
|
1691 | includepats = kwargs.get(r'includepats') | |
1692 | excludepats = kwargs.get(r'excludepats') |
|
1692 | excludepats = kwargs.get(r'excludepats') | |
1693 |
|
1693 | |||
1694 | narrowstream = repo.ui.configbool('experimental.server', |
|
1694 | narrowstream = repo.ui.configbool('experimental.server', | |
1695 | 'stream-narrow-clones') |
|
1695 | 'stream-narrow-clones') | |
1696 |
|
1696 | |||
1697 | if (includepats or excludepats) and not narrowstream: |
|
1697 | if (includepats or excludepats) and not narrowstream: | |
1698 | raise error.Abort(_('server does not support narrow stream clones')) |
|
1698 | raise error.Abort(_('server does not support narrow stream clones')) | |
1699 |
|
1699 | |||
|
1700 | includeobsmarkers = False | |||
|
1701 | if repo.obsstore: | |||
|
1702 | remoteversions = obsmarkersversion(bundler.capabilities) | |||
|
1703 | if repo.obsstore._version in remoteversions: | |||
|
1704 | includeobsmarkers = True | |||
|
1705 | ||||
1700 | filecount, bytecount, it = streamclone.generatev2(repo, includepats, |
|
1706 | filecount, bytecount, it = streamclone.generatev2(repo, includepats, | |
1701 | excludepats) |
|
1707 | excludepats, | |
|
1708 | includeobsmarkers) | |||
1702 | requirements = _formatrequirementsspec(repo.requirements) |
|
1709 | requirements = _formatrequirementsspec(repo.requirements) | |
1703 | part = bundler.newpart('stream2', data=it) |
|
1710 | part = bundler.newpart('stream2', data=it) | |
1704 | part.addparam('bytecount', '%d' % bytecount, mandatory=True) |
|
1711 | part.addparam('bytecount', '%d' % bytecount, mandatory=True) | |
1705 | part.addparam('filecount', '%d' % filecount, mandatory=True) |
|
1712 | part.addparam('filecount', '%d' % filecount, mandatory=True) | |
1706 | part.addparam('requirements', requirements, mandatory=True) |
|
1713 | part.addparam('requirements', requirements, mandatory=True) | |
1707 |
|
1714 | |||
1708 | def buildobsmarkerspart(bundler, markers): |
|
1715 | def buildobsmarkerspart(bundler, markers): | |
1709 | """add an obsmarker part to the bundler with <markers> |
|
1716 | """add an obsmarker part to the bundler with <markers> | |
1710 |
|
1717 | |||
1711 | No part is created if markers is empty. |
|
1718 | No part is created if markers is empty. | |
1712 | Raises ValueError if the bundler doesn't support any known obsmarker format. |
|
1719 | Raises ValueError if the bundler doesn't support any known obsmarker format. | |
1713 | """ |
|
1720 | """ | |
1714 | if not markers: |
|
1721 | if not markers: | |
1715 | return None |
|
1722 | return None | |
1716 |
|
1723 | |||
1717 | remoteversions = obsmarkersversion(bundler.capabilities) |
|
1724 | remoteversions = obsmarkersversion(bundler.capabilities) | |
1718 | version = obsolete.commonversion(remoteversions) |
|
1725 | version = obsolete.commonversion(remoteversions) | |
1719 | if version is None: |
|
1726 | if version is None: | |
1720 | raise ValueError('bundler does not support common obsmarker format') |
|
1727 | raise ValueError('bundler does not support common obsmarker format') | |
1721 | stream = obsolete.encodemarkers(markers, True, version=version) |
|
1728 | stream = obsolete.encodemarkers(markers, True, version=version) | |
1722 | return bundler.newpart('obsmarkers', data=stream) |
|
1729 | return bundler.newpart('obsmarkers', data=stream) | |
1723 |
|
1730 | |||
1724 | def writebundle(ui, cg, filename, bundletype, vfs=None, compression=None, |
|
1731 | def writebundle(ui, cg, filename, bundletype, vfs=None, compression=None, | |
1725 | compopts=None): |
|
1732 | compopts=None): | |
1726 | """Write a bundle file and return its filename. |
|
1733 | """Write a bundle file and return its filename. | |
1727 |
|
1734 | |||
1728 | Existing files will not be overwritten. |
|
1735 | Existing files will not be overwritten. | |
1729 | If no filename is specified, a temporary file is created. |
|
1736 | If no filename is specified, a temporary file is created. | |
1730 | bz2 compression can be turned off. |
|
1737 | bz2 compression can be turned off. | |
1731 | The bundle file will be deleted in case of errors. |
|
1738 | The bundle file will be deleted in case of errors. | |
1732 | """ |
|
1739 | """ | |
1733 |
|
1740 | |||
1734 | if bundletype == "HG20": |
|
1741 | if bundletype == "HG20": | |
1735 | bundle = bundle20(ui) |
|
1742 | bundle = bundle20(ui) | |
1736 | bundle.setcompression(compression, compopts) |
|
1743 | bundle.setcompression(compression, compopts) | |
1737 | part = bundle.newpart('changegroup', data=cg.getchunks()) |
|
1744 | part = bundle.newpart('changegroup', data=cg.getchunks()) | |
1738 | part.addparam('version', cg.version) |
|
1745 | part.addparam('version', cg.version) | |
1739 | if 'clcount' in cg.extras: |
|
1746 | if 'clcount' in cg.extras: | |
1740 | part.addparam('nbchanges', '%d' % cg.extras['clcount'], |
|
1747 | part.addparam('nbchanges', '%d' % cg.extras['clcount'], | |
1741 | mandatory=False) |
|
1748 | mandatory=False) | |
1742 | chunkiter = bundle.getchunks() |
|
1749 | chunkiter = bundle.getchunks() | |
1743 | else: |
|
1750 | else: | |
1744 | # compression argument is only for the bundle2 case |
|
1751 | # compression argument is only for the bundle2 case | |
1745 | assert compression is None |
|
1752 | assert compression is None | |
1746 | if cg.version != '01': |
|
1753 | if cg.version != '01': | |
1747 | raise error.Abort(_('old bundle types only support v1 ' |
|
1754 | raise error.Abort(_('old bundle types only support v1 ' | |
1748 | 'changegroups')) |
|
1755 | 'changegroups')) | |
1749 | header, comp = bundletypes[bundletype] |
|
1756 | header, comp = bundletypes[bundletype] | |
1750 | if comp not in util.compengines.supportedbundletypes: |
|
1757 | if comp not in util.compengines.supportedbundletypes: | |
1751 | raise error.Abort(_('unknown stream compression type: %s') |
|
1758 | raise error.Abort(_('unknown stream compression type: %s') | |
1752 | % comp) |
|
1759 | % comp) | |
1753 | compengine = util.compengines.forbundletype(comp) |
|
1760 | compengine = util.compengines.forbundletype(comp) | |
1754 | def chunkiter(): |
|
1761 | def chunkiter(): | |
1755 | yield header |
|
1762 | yield header | |
1756 | for chunk in compengine.compressstream(cg.getchunks(), compopts): |
|
1763 | for chunk in compengine.compressstream(cg.getchunks(), compopts): | |
1757 | yield chunk |
|
1764 | yield chunk | |
1758 | chunkiter = chunkiter() |
|
1765 | chunkiter = chunkiter() | |
1759 |
|
1766 | |||
1760 | # parse the changegroup data, otherwise we will block |
|
1767 | # parse the changegroup data, otherwise we will block | |
1761 | # in case of sshrepo because we don't know the end of the stream |
|
1768 | # in case of sshrepo because we don't know the end of the stream | |
1762 | return changegroup.writechunks(ui, chunkiter, filename, vfs=vfs) |
|
1769 | return changegroup.writechunks(ui, chunkiter, filename, vfs=vfs) | |
1763 |
|
1770 | |||
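On this legacy path the on-disk layout is simply the 6-byte type header followed by the compressed changegroup stream. A sketch of that framing with stdlib 'zlib' standing in for the compression engine (the 'bundletypes' mapping below is an assumed two-entry subset):

    import zlib

    bundletypes = {'HG10UN': None, 'HG10GZ': zlib}  # assumed subset

    def legacychunks(header, chunks):
        # the header goes out first, then the (optionally compressed) chunks
        yield header.encode('ascii')
        comp = bundletypes[header]
        if comp is None:
            for chunk in chunks:
                yield chunk
            return
        z = comp.compressobj()
        for chunk in chunks:
            data = z.compress(chunk)
            if data:
                yield data
        yield z.flush()

    raw = b''.join(legacychunks('HG10GZ', [b'chunk1', b'chunk2']))
    assert raw.startswith(b'HG10GZ')
    assert zlib.decompress(raw[6:]) == b'chunk1chunk2'
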
1764 | def combinechangegroupresults(op): |
|
1771 | def combinechangegroupresults(op): | |
1765 | """logic to combine 0 or more addchangegroup results into one""" |
|
1772 | """logic to combine 0 or more addchangegroup results into one""" | |
1766 | results = [r.get('return', 0) |
|
1773 | results = [r.get('return', 0) | |
1767 | for r in op.records['changegroup']] |
|
1774 | for r in op.records['changegroup']] | |
1768 | changedheads = 0 |
|
1775 | changedheads = 0 | |
1769 | result = 1 |
|
1776 | result = 1 | |
1770 | for ret in results: |
|
1777 | for ret in results: | |
1771 | # If any changegroup result is 0, return 0 |
|
1778 | # If any changegroup result is 0, return 0 | |
1772 | if ret == 0: |
|
1779 | if ret == 0: | |
1773 | result = 0 |
|
1780 | result = 0 | |
1774 | break |
|
1781 | break | |
1775 | if ret < -1: |
|
1782 | if ret < -1: | |
1776 | changedheads += ret + 1 |
|
1783 | changedheads += ret + 1 | |
1777 | elif ret > 1: |
|
1784 | elif ret > 1: | |
1778 | changedheads += ret - 1 |
|
1785 | changedheads += ret - 1 | |
1779 | if changedheads > 0: |
|
1786 | if changedheads > 0: | |
1780 | result = 1 + changedheads |
|
1787 | result = 1 + changedheads | |
1781 | elif changedheads < 0: |
|
1788 | elif changedheads < 0: | |
1782 | result = -1 + changedheads |
|
1789 | result = -1 + changedheads | |
1783 | return result |
|
1790 | return result | |
1784 |
|
1791 | |||
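The return codes being folded together follow changegroup's convention: 0 when nothing was added, 1 when the head count is unchanged, '1 + n' when n heads appeared, and '-1 - n' when n heads vanished. A standalone restatement of the combination with a few worked cases:

    def combineresults(results):
        # mirrors combinechangegroupresults over a plain list (sketch)
        changedheads = 0
        for ret in results:
            if ret == 0:        # a zero result short-circuits everything
                return 0
            if ret < -1:
                changedheads += ret + 1
            elif ret > 1:
                changedheads += ret - 1
        if changedheads > 0:
            return 1 + changedheads
        if changedheads < 0:
            return -1 + changedheads
        return 1

    assert combineresults([1, 1]) == 1    # no head changes anywhere
    assert combineresults([3, 2]) == 4    # 2 + 1 heads added -> 1 + 3
    assert combineresults([-2, 1]) == -2  # one head removed -> -1 + -1
    assert combineresults([3, 0]) == 0    # a zero result wins outright
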
1785 | @parthandler('changegroup', ('version', 'nbchanges', 'treemanifest', |
|
1792 | @parthandler('changegroup', ('version', 'nbchanges', 'treemanifest', | |
1786 | 'targetphase')) |
|
1793 | 'targetphase')) | |
1787 | def handlechangegroup(op, inpart): |
|
1794 | def handlechangegroup(op, inpart): | |
1788 | """apply a changegroup part on the repo |
|
1795 | """apply a changegroup part on the repo | |
1789 |
|
1796 | |||
1790 | This is a very early implementation that will see massive rework before being |
|
1797 | This is a very early implementation that will see massive rework before being | |
1791 | inflicted on any end-user. |
|
1798 | inflicted on any end-user. | |
1792 | """ |
|
1799 | """ | |
1793 | from . import localrepo |
|
1800 | from . import localrepo | |
1794 |
|
1801 | |||
1795 | tr = op.gettransaction() |
|
1802 | tr = op.gettransaction() | |
1796 | unpackerversion = inpart.params.get('version', '01') |
|
1803 | unpackerversion = inpart.params.get('version', '01') | |
1797 | # We should raise an appropriate exception here |
|
1804 | # We should raise an appropriate exception here | |
1798 | cg = changegroup.getunbundler(unpackerversion, inpart, None) |
|
1805 | cg = changegroup.getunbundler(unpackerversion, inpart, None) | |
1799 | # the source and url passed here are overwritten by the one contained in |
|
1806 | # the source and url passed here are overwritten by the one contained in | |
1800 | # the transaction.hookargs argument. So 'bundle2' is a placeholder |
|
1807 | # the transaction.hookargs argument. So 'bundle2' is a placeholder | |
1801 | nbchangesets = None |
|
1808 | nbchangesets = None | |
1802 | if 'nbchanges' in inpart.params: |
|
1809 | if 'nbchanges' in inpart.params: | |
1803 | nbchangesets = int(inpart.params.get('nbchanges')) |
|
1810 | nbchangesets = int(inpart.params.get('nbchanges')) | |
1804 | if ('treemanifest' in inpart.params and |
|
1811 | if ('treemanifest' in inpart.params and | |
1805 | 'treemanifest' not in op.repo.requirements): |
|
1812 | 'treemanifest' not in op.repo.requirements): | |
1806 | if len(op.repo.changelog) != 0: |
|
1813 | if len(op.repo.changelog) != 0: | |
1807 | raise error.Abort(_( |
|
1814 | raise error.Abort(_( | |
1808 | "bundle contains tree manifests, but local repo is " |
|
1815 | "bundle contains tree manifests, but local repo is " | |
1809 | "non-empty and does not use tree manifests")) |
|
1816 | "non-empty and does not use tree manifests")) | |
1810 | op.repo.requirements.add('treemanifest') |
|
1817 | op.repo.requirements.add('treemanifest') | |
1811 | op.repo.svfs.options = localrepo.resolvestorevfsoptions( |
|
1818 | op.repo.svfs.options = localrepo.resolvestorevfsoptions( | |
1812 | op.repo.ui, op.repo.requirements, op.repo.features) |
|
1819 | op.repo.ui, op.repo.requirements, op.repo.features) | |
1813 | op.repo._writerequirements() |
|
1820 | op.repo._writerequirements() | |
1814 | extrakwargs = {} |
|
1821 | extrakwargs = {} | |
1815 | targetphase = inpart.params.get('targetphase') |
|
1822 | targetphase = inpart.params.get('targetphase') | |
1816 | if targetphase is not None: |
|
1823 | if targetphase is not None: | |
1817 | extrakwargs[r'targetphase'] = int(targetphase) |
|
1824 | extrakwargs[r'targetphase'] = int(targetphase) | |
1818 | ret = _processchangegroup(op, cg, tr, 'bundle2', 'bundle2', |
|
1825 | ret = _processchangegroup(op, cg, tr, 'bundle2', 'bundle2', | |
1819 | expectedtotal=nbchangesets, **extrakwargs) |
|
1826 | expectedtotal=nbchangesets, **extrakwargs) | |
1820 | if op.reply is not None: |
|
1827 | if op.reply is not None: | |
1821 | # This is definitely not the final form of this |
|
1828 | # This is definitely not the final form of this | |
1822 | # return. But one needs to start somewhere. |
|
1829 | # return. But one needs to start somewhere. | |
1823 | part = op.reply.newpart('reply:changegroup', mandatory=False) |
|
1830 | part = op.reply.newpart('reply:changegroup', mandatory=False) | |
1824 | part.addparam( |
|
1831 | part.addparam( | |
1825 | 'in-reply-to', pycompat.bytestr(inpart.id), mandatory=False) |
|
1832 | 'in-reply-to', pycompat.bytestr(inpart.id), mandatory=False) | |
1826 | part.addparam('return', '%i' % ret, mandatory=False) |
|
1833 | part.addparam('return', '%i' % ret, mandatory=False) | |
1827 | assert not inpart.read() |
|
1834 | assert not inpart.read() | |
1828 |
|
1835 | |||
1829 | _remotechangegroupparams = tuple(['url', 'size', 'digests'] + |
|
1836 | _remotechangegroupparams = tuple(['url', 'size', 'digests'] + | |
1830 | ['digest:%s' % k for k in util.DIGESTS.keys()]) |
|
1837 | ['digest:%s' % k for k in util.DIGESTS.keys()]) | |
1831 | @parthandler('remote-changegroup', _remotechangegroupparams) |
|
1838 | @parthandler('remote-changegroup', _remotechangegroupparams) | |
1832 | def handleremotechangegroup(op, inpart): |
|
1839 | def handleremotechangegroup(op, inpart): | |
1833 | """apply a bundle10 on the repo, given a url and validation information |
|
1840 | """apply a bundle10 on the repo, given a url and validation information | |
1834 |
|
1841 | |||
1835 | All the information about the remote bundle to import is given as |
|
1842 | All the information about the remote bundle to import is given as | |
1836 | parameters. The parameters include: |
|
1843 | parameters. The parameters include: | |
1837 | - url: the url to the bundle10. |
|
1844 | - url: the url to the bundle10. | |
1838 | - size: the bundle10 file size. It is used to validate that what was |
|
1845 | - size: the bundle10 file size. It is used to validate that what was | |
1839 | retrieved by the client matches the server's knowledge of the bundle. |
|
1846 | retrieved by the client matches the server's knowledge of the bundle. | |
1840 | - digests: a space separated list of the digest types provided as |
|
1847 | - digests: a space separated list of the digest types provided as | |
1841 | parameters. |
|
1848 | parameters. | |
1842 | - digest:<digest-type>: the hexadecimal representation of the digest with |
|
1849 | - digest:<digest-type>: the hexadecimal representation of the digest with | |
1843 | that name. Like the size, it is used to validate that what was retrieved |
|
1850 | that name. Like the size, it is used to validate that what was retrieved | |
1844 | by the client matches what the server knows about the bundle. |
|
1851 | by the client matches what the server knows about the bundle. | |
1845 |
|
1852 | |||
1846 | When multiple digest types are given, all of them are checked. |
|
1853 | When multiple digest types are given, all of them are checked. | |
1847 | """ |
|
1854 | """ | |
1848 | try: |
|
1855 | try: | |
1849 | raw_url = inpart.params['url'] |
|
1856 | raw_url = inpart.params['url'] | |
1850 | except KeyError: |
|
1857 | except KeyError: | |
1851 | raise error.Abort(_('remote-changegroup: missing "%s" param') % 'url') |
|
1858 | raise error.Abort(_('remote-changegroup: missing "%s" param') % 'url') | |
1852 | parsed_url = util.url(raw_url) |
|
1859 | parsed_url = util.url(raw_url) | |
1853 | if parsed_url.scheme not in capabilities['remote-changegroup']: |
|
1860 | if parsed_url.scheme not in capabilities['remote-changegroup']: | |
1854 | raise error.Abort(_('remote-changegroup does not support %s urls') % |
|
1861 | raise error.Abort(_('remote-changegroup does not support %s urls') % | |
1855 | parsed_url.scheme) |
|
1862 | parsed_url.scheme) | |
1856 |
|
1863 | |||
1857 | try: |
|
1864 | try: | |
1858 | size = int(inpart.params['size']) |
|
1865 | size = int(inpart.params['size']) | |
1859 | except ValueError: |
|
1866 | except ValueError: | |
1860 | raise error.Abort(_('remote-changegroup: invalid value for param "%s"') |
|
1867 | raise error.Abort(_('remote-changegroup: invalid value for param "%s"') | |
1861 | % 'size') |
|
1868 | % 'size') | |
1862 | except KeyError: |
|
1869 | except KeyError: | |
1863 | raise error.Abort(_('remote-changegroup: missing "%s" param') % 'size') |
|
1870 | raise error.Abort(_('remote-changegroup: missing "%s" param') % 'size') | |
1864 |
|
1871 | |||
1865 | digests = {} |
|
1872 | digests = {} | |
1866 | for typ in inpart.params.get('digests', '').split(): |
|
1873 | for typ in inpart.params.get('digests', '').split(): | |
1867 | param = 'digest:%s' % typ |
|
1874 | param = 'digest:%s' % typ | |
1868 | try: |
|
1875 | try: | |
1869 | value = inpart.params[param] |
|
1876 | value = inpart.params[param] | |
1870 | except KeyError: |
|
1877 | except KeyError: | |
1871 | raise error.Abort(_('remote-changegroup: missing "%s" param') % |
|
1878 | raise error.Abort(_('remote-changegroup: missing "%s" param') % | |
1872 | param) |
|
1879 | param) | |
1873 | digests[typ] = value |
|
1880 | digests[typ] = value | |
1874 |
|
1881 | |||
1875 | real_part = util.digestchecker(url.open(op.ui, raw_url), size, digests) |
|
1882 | real_part = util.digestchecker(url.open(op.ui, raw_url), size, digests) | |
1876 |
|
1883 | |||
1877 | tr = op.gettransaction() |
|
1884 | tr = op.gettransaction() | |
1878 | from . import exchange |
|
1885 | from . import exchange | |
1879 | cg = exchange.readbundle(op.repo.ui, real_part, raw_url) |
|
1886 | cg = exchange.readbundle(op.repo.ui, real_part, raw_url) | |
1880 | if not isinstance(cg, changegroup.cg1unpacker): |
|
1887 | if not isinstance(cg, changegroup.cg1unpacker): | |
1881 | raise error.Abort(_('%s: not a bundle version 1.0') % |
|
1888 | raise error.Abort(_('%s: not a bundle version 1.0') % | |
1882 | util.hidepassword(raw_url)) |
|
1889 | util.hidepassword(raw_url)) | |
1883 | ret = _processchangegroup(op, cg, tr, 'bundle2', 'bundle2') |
|
1890 | ret = _processchangegroup(op, cg, tr, 'bundle2', 'bundle2') | |
1884 | if op.reply is not None: |
|
1891 | if op.reply is not None: | |
1885 | # This is definitely not the final form of this |
|
1892 | # This is definitely not the final form of this | |
1886 | # return. But one needs to start somewhere. |
|
1893 | # return. But one needs to start somewhere. | |
1887 | part = op.reply.newpart('reply:changegroup') |
|
1894 | part = op.reply.newpart('reply:changegroup') | |
1888 | part.addparam( |
|
1895 | part.addparam( | |
1889 | 'in-reply-to', pycompat.bytestr(inpart.id), mandatory=False) |
|
1896 | 'in-reply-to', pycompat.bytestr(inpart.id), mandatory=False) | |
1890 | part.addparam('return', '%i' % ret, mandatory=False) |
|
1897 | part.addparam('return', '%i' % ret, mandatory=False) | |
1891 | try: |
|
1898 | try: | |
1892 | real_part.validate() |
|
1899 | real_part.validate() | |
1893 | except error.Abort as e: |
|
1900 | except error.Abort as e: | |
1894 | raise error.Abort(_('bundle at %s is corrupted:\n%s') % |
|
1901 | raise error.Abort(_('bundle at %s is corrupted:\n%s') % | |
1895 | (util.hidepassword(raw_url), bytes(e))) |
|
1902 | (util.hidepassword(raw_url), bytes(e))) | |
1896 | assert not inpart.read() |
|
1903 | assert not inpart.read() | |
1897 |
|
1904 | |||
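'util.digestchecker' wraps the HTTP response so the bundle is hashed while it streams through, and 'validate()' then compares the byte count and digests against the part parameters. A self-contained sketch of such a wrapper, assuming hashlib digest names (it raises a plain ValueError where the real helper aborts):

    import hashlib
    import io

    class digestchecker(object):
        """Read-through wrapper validating size and digests (sketch)."""
        def __init__(self, fh, size, digests):
            self._fh = fh
            self._size = size
            self._seen = 0
            self._hashers = {name: hashlib.new(name) for name in digests}
            self._expected = dict(digests)

        def read(self, length=-1):
            data = self._fh.read(length)
            self._seen += len(data)
            for hasher in self._hashers.values():
                hasher.update(data)
            return data

        def validate(self):
            if self._seen != self._size:
                raise ValueError('size mismatch: expected %d, read %d'
                                 % (self._size, self._seen))
            for name, hasher in self._hashers.items():
                if hasher.hexdigest() != self._expected[name]:
                    raise ValueError('%s mismatch' % name)

    payload = b'changegroup bytes'
    checker = digestchecker(io.BytesIO(payload), len(payload),
                            {'sha1': hashlib.sha1(payload).hexdigest()})
    while checker.read(4):
        pass
    checker.validate()
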
1898 | @parthandler('reply:changegroup', ('return', 'in-reply-to')) |
|
1905 | @parthandler('reply:changegroup', ('return', 'in-reply-to')) | |
1899 | def handlereplychangegroup(op, inpart): |
|
1906 | def handlereplychangegroup(op, inpart): | |
1900 | ret = int(inpart.params['return']) |
|
1907 | ret = int(inpart.params['return']) | |
1901 | replyto = int(inpart.params['in-reply-to']) |
|
1908 | replyto = int(inpart.params['in-reply-to']) | |
1902 | op.records.add('changegroup', {'return': ret}, replyto) |
|
1909 | op.records.add('changegroup', {'return': ret}, replyto) | |
1903 |
|
1910 | |||
1904 | @parthandler('check:bookmarks') |
|
1911 | @parthandler('check:bookmarks') | |
1905 | def handlecheckbookmarks(op, inpart): |
|
1912 | def handlecheckbookmarks(op, inpart): | |
1906 | """check location of bookmarks |
|
1913 | """check location of bookmarks | |
1907 |
|
1914 | |||
1908 | This part is used to detect push races on bookmarks. It |
|
1915 | This part is used to detect push races on bookmarks. It | |
1909 | contains binary encoded (bookmark, node) tuples. If the local state does |
|
1916 | contains binary encoded (bookmark, node) tuples. If the local state does | |
1910 | not match the one in the part, a PushRaced exception is raised |
|
1917 | not match the one in the part, a PushRaced exception is raised | |
1911 | """ |
|
1918 | """ | |
1912 | bookdata = bookmarks.binarydecode(inpart) |
|
1919 | bookdata = bookmarks.binarydecode(inpart) | |
1913 |
|
1920 | |||
1914 | msgstandard = ('remote repository changed while pushing - please try again ' |
|
1921 | msgstandard = ('remote repository changed while pushing - please try again ' | |
1915 | '(bookmark "%s" move from %s to %s)') |
|
1922 | '(bookmark "%s" move from %s to %s)') | |
1916 | msgmissing = ('remote repository changed while pushing - please try again ' |
|
1923 | msgmissing = ('remote repository changed while pushing - please try again ' | |
1917 | '(bookmark "%s" is missing, expected %s)') |
|
1924 | '(bookmark "%s" is missing, expected %s)') | |
1918 | msgexist = ('remote repository changed while pushing - please try again ' |
|
1925 | msgexist = ('remote repository changed while pushing - please try again ' | |
1919 | '(bookmark "%s" set on %s, expected missing)') |
|
1926 | '(bookmark "%s" set on %s, expected missing)') | |
1920 | for book, node in bookdata: |
|
1927 | for book, node in bookdata: | |
1921 | currentnode = op.repo._bookmarks.get(book) |
|
1928 | currentnode = op.repo._bookmarks.get(book) | |
1922 | if currentnode != node: |
|
1929 | if currentnode != node: | |
1923 | if node is None: |
|
1930 | if node is None: | |
1924 | finalmsg = msgexist % (book, nodemod.short(currentnode)) |
|
1931 | finalmsg = msgexist % (book, nodemod.short(currentnode)) | |
1925 | elif currentnode is None: |
|
1932 | elif currentnode is None: | |
1926 | finalmsg = msgmissing % (book, nodemod.short(node)) |
|
1933 | finalmsg = msgmissing % (book, nodemod.short(node)) | |
1927 | else: |
|
1934 | else: | |
1928 | finalmsg = msgstandard % (book, nodemod.short(node), |
|
1935 | finalmsg = msgstandard % (book, nodemod.short(node), | |
1929 | nodemod.short(currentnode)) |
|
1936 | nodemod.short(currentnode)) | |
1930 | raise error.PushRaced(finalmsg) |
|
1937 | raise error.PushRaced(finalmsg) | |
1931 |
|
1938 | |||
1932 | @parthandler('check:heads') |
|
1939 | @parthandler('check:heads') | |
1933 | def handlecheckheads(op, inpart): |
|
1940 | def handlecheckheads(op, inpart): | |
1934 | """check that the heads of the repo did not change |
|
1941 | """check that the heads of the repo did not change | |
1935 |
|
1942 | |||
1936 | This is used to detect a push race when using unbundle. |
|
1943 | This is used to detect a push race when using unbundle. | |
1937 | This replaces the "heads" argument of unbundle.""" |
|
1944 | This replaces the "heads" argument of unbundle.""" | |
1938 | h = inpart.read(20) |
|
1945 | h = inpart.read(20) | |
1939 | heads = [] |
|
1946 | heads = [] | |
1940 | while len(h) == 20: |
|
1947 | while len(h) == 20: | |
1941 | heads.append(h) |
|
1948 | heads.append(h) | |
1942 | h = inpart.read(20) |
|
1949 | h = inpart.read(20) | |
1943 | assert not h |
|
1950 | assert not h | |
1944 | # Trigger a transaction so that we are guaranteed to have the lock now. |
|
1951 | # Trigger a transaction so that we are guaranteed to have the lock now. | |
1945 | if op.ui.configbool('experimental', 'bundle2lazylocking'): |
|
1952 | if op.ui.configbool('experimental', 'bundle2lazylocking'): | |
1946 | op.gettransaction() |
|
1953 | op.gettransaction() | |
1947 | if sorted(heads) != sorted(op.repo.heads()): |
|
1954 | if sorted(heads) != sorted(op.repo.heads()): | |
1948 | raise error.PushRaced('remote repository changed while pushing - ' |
|
1955 | raise error.PushRaced('remote repository changed while pushing - ' | |
1949 | 'please try again') |
|
1956 | 'please try again') | |
1950 |
|
1957 | |||
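The payload framing here is a bare concatenation of 20-byte binary nodes, and 'check:updated-heads' below reads it back the same way. A sketch of a symmetric encoder/decoder pair (hypothetical helpers):

    def encodeheads(heads):
        assert all(len(h) == 20 for h in heads)
        return b''.join(heads)

    def decodeheads(payload):
        assert len(payload) % 20 == 0, 'truncated heads payload'
        return [payload[i:i + 20] for i in range(0, len(payload), 20)]

    heads = [b'\xaa' * 20, b'\xbb' * 20]
    assert decodeheads(encodeheads(heads)) == heads
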
1951 | @parthandler('check:updated-heads') |
|
1958 | @parthandler('check:updated-heads') | |
1952 | def handlecheckupdatedheads(op, inpart): |
|
1959 | def handlecheckupdatedheads(op, inpart): | |
1953 | """check for race on the heads touched by a push |
|
1960 | """check for race on the heads touched by a push | |
1954 |
|
1961 | |||
1955 | This is similar to 'check:heads' but focuses on the heads actually updated |
|
1962 | This is similar to 'check:heads' but focuses on the heads actually updated | |
1956 | during the push. If other activities happen on unrelated heads, they are |
|
1963 | during the push. If other activities happen on unrelated heads, they are | |
1957 | ignored. |
|
1964 | ignored. | |
1958 |
|
1965 | |||
1959 | This allows servers with high traffic to avoid push contention as long as |
|
1966 | This allows servers with high traffic to avoid push contention as long as | |
1960 | only unrelated parts of the graph are involved.""" |
|
1967 | only unrelated parts of the graph are involved.""" | |
1961 | h = inpart.read(20) |
|
1968 | h = inpart.read(20) | |
1962 | heads = [] |
|
1969 | heads = [] | |
1963 | while len(h) == 20: |
|
1970 | while len(h) == 20: | |
1964 | heads.append(h) |
|
1971 | heads.append(h) | |
1965 | h = inpart.read(20) |
|
1972 | h = inpart.read(20) | |
1966 | assert not h |
|
1973 | assert not h | |
1967 | # trigger a transaction so that we are guaranteed to have the lock now. |
|
1974 | # trigger a transaction so that we are guaranteed to have the lock now. | |
1968 | if op.ui.configbool('experimental', 'bundle2lazylocking'): |
|
1975 | if op.ui.configbool('experimental', 'bundle2lazylocking'): | |
1969 | op.gettransaction() |
|
1976 | op.gettransaction() | |
1970 |
|
1977 | |||
1971 | currentheads = set() |
|
1978 | currentheads = set() | |
1972 | for ls in op.repo.branchmap().itervalues(): |
|
1979 | for ls in op.repo.branchmap().itervalues(): | |
1973 | currentheads.update(ls) |
|
1980 | currentheads.update(ls) | |
1974 |
|
1981 | |||
1975 | for h in heads: |
|
1982 | for h in heads: | |
1976 | if h not in currentheads: |
|
1983 | if h not in currentheads: | |
1977 | raise error.PushRaced('remote repository changed while pushing - ' |
|
1984 | raise error.PushRaced('remote repository changed while pushing - ' | |
1978 | 'please try again') |
|
1985 | 'please try again') | |
1979 |
|
1986 | |||
1980 | @parthandler('check:phases') |
|
1987 | @parthandler('check:phases') | |
1981 | def handlecheckphases(op, inpart): |
|
1988 | def handlecheckphases(op, inpart): | |
1982 | """check that phase boundaries of the repository did not change |
|
1989 | """check that phase boundaries of the repository did not change | |
1983 |
|
1990 | |||
1984 | This is used to detect a push race. |
|
1991 | This is used to detect a push race. | |
1985 | """ |
|
1992 | """ | |
1986 | phasetonodes = phases.binarydecode(inpart) |
|
1993 | phasetonodes = phases.binarydecode(inpart) | |
1987 | unfi = op.repo.unfiltered() |
|
1994 | unfi = op.repo.unfiltered() | |
1988 | cl = unfi.changelog |
|
1995 | cl = unfi.changelog | |
1989 | phasecache = unfi._phasecache |
|
1996 | phasecache = unfi._phasecache | |
1990 | msg = ('remote repository changed while pushing - please try again ' |
|
1997 | msg = ('remote repository changed while pushing - please try again ' | |
1991 | '(%s is %s expected %s)') |
|
1998 | '(%s is %s expected %s)') | |
1992 | for expectedphase, nodes in enumerate(phasetonodes): |
|
1999 | for expectedphase, nodes in enumerate(phasetonodes): | |
1993 | for n in nodes: |
|
2000 | for n in nodes: | |
1994 | actualphase = phasecache.phase(unfi, cl.rev(n)) |
|
2001 | actualphase = phasecache.phase(unfi, cl.rev(n)) | |
1995 | if actualphase != expectedphase: |
|
2002 | if actualphase != expectedphase: | |
1996 | finalmsg = msg % (nodemod.short(n), |
|
2003 | finalmsg = msg % (nodemod.short(n), | |
1997 | phases.phasenames[actualphase], |
|
2004 | phases.phasenames[actualphase], | |
1998 | phases.phasenames[expectedphase]) |
|
2005 | phases.phasenames[expectedphase]) | |
1999 | raise error.PushRaced(finalmsg) |
|
2006 | raise error.PushRaced(finalmsg) | |
2000 |
|
2007 | |||
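The 'phase-heads' payload parsed by 'phases.binarydecode' is assumed here to be a sequence of fixed records, each a big-endian int32 phase number followed by a 20-byte node (a '>i20s' struct); treat that layout as an assumption of this sketch. A decoder grouping nodes by phase:

    import struct

    entry = struct.Struct('>i20s')  # assumed on-wire record: phase, node

    def decodephaseheads(data, maxphase=2):
        byphase = [[] for _ in range(maxphase + 1)]
        for pos in range(0, len(data), entry.size):
            phase, node = entry.unpack_from(data, pos)
            byphase[phase].append(node)
        return byphase

    blob = entry.pack(0, b'\x01' * 20) + entry.pack(2, b'\x02' * 20)
    public, draft, secret = decodephaseheads(blob)
    assert public == [b'\x01' * 20]
    assert draft == []
    assert secret == [b'\x02' * 20]
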
2001 | @parthandler('output') |
|
2008 | @parthandler('output') | |
2002 | def handleoutput(op, inpart): |
|
2009 | def handleoutput(op, inpart): | |
2003 | """forward output captured on the server to the client""" |
|
2010 | """forward output captured on the server to the client""" | |
2004 | for line in inpart.read().splitlines(): |
|
2011 | for line in inpart.read().splitlines(): | |
2005 | op.ui.status(_('remote: %s\n') % line) |
|
2012 | op.ui.status(_('remote: %s\n') % line) | |
2006 |
|
2013 | |||
2007 | @parthandler('replycaps') |
|
2014 | @parthandler('replycaps') | |
2008 | def handlereplycaps(op, inpart): |
|
2015 | def handlereplycaps(op, inpart): | |
2009 | """Notify that a reply bundle should be created |
|
2016 | """Notify that a reply bundle should be created | |
2010 |
|
2017 | |||
2011 | The payload contains the capabilities information for the reply""" |
|
2018 | The payload contains the capabilities information for the reply""" | |
2012 | caps = decodecaps(inpart.read()) |
|
2019 | caps = decodecaps(inpart.read()) | |
2013 | if op.reply is None: |
|
2020 | if op.reply is None: | |
2014 | op.reply = bundle20(op.ui, caps) |
|
2021 | op.reply = bundle20(op.ui, caps) | |
2015 |
|
2022 | |||
2016 | class AbortFromPart(error.Abort): |
|
2023 | class AbortFromPart(error.Abort): | |
2017 | """Sub-class of Abort that denotes an error from a bundle2 part.""" |
|
2024 | """Sub-class of Abort that denotes an error from a bundle2 part.""" | |
2018 |
|
2025 | |||
2019 | @parthandler('error:abort', ('message', 'hint')) |
|
2026 | @parthandler('error:abort', ('message', 'hint')) | |
2020 | def handleerrorabort(op, inpart): |
|
2027 | def handleerrorabort(op, inpart): | |
2021 | """Used to transmit abort error over the wire""" |
|
2028 | """Used to transmit abort error over the wire""" | |
2022 | raise AbortFromPart(inpart.params['message'], |
|
2029 | raise AbortFromPart(inpart.params['message'], | |
2023 | hint=inpart.params.get('hint')) |
|
2030 | hint=inpart.params.get('hint')) | |
2024 |
|
2031 | |||
2025 | @parthandler('error:pushkey', ('namespace', 'key', 'new', 'old', 'ret', |
|
2032 | @parthandler('error:pushkey', ('namespace', 'key', 'new', 'old', 'ret', | |
2026 | 'in-reply-to')) |
|
2033 | 'in-reply-to')) | |
2027 | def handleerrorpushkey(op, inpart): |
|
2034 | def handleerrorpushkey(op, inpart): | |
2028 | """Used to transmit failure of a mandatory pushkey over the wire""" |
|
2035 | """Used to transmit failure of a mandatory pushkey over the wire""" | |
2029 | kwargs = {} |
|
2036 | kwargs = {} | |
2030 | for name in ('namespace', 'key', 'new', 'old', 'ret'): |
|
2037 | for name in ('namespace', 'key', 'new', 'old', 'ret'): | |
2031 | value = inpart.params.get(name) |
|
2038 | value = inpart.params.get(name) | |
2032 | if value is not None: |
|
2039 | if value is not None: | |
2033 | kwargs[name] = value |
|
2040 | kwargs[name] = value | |
2034 | raise error.PushkeyFailed(inpart.params['in-reply-to'], |
|
2041 | raise error.PushkeyFailed(inpart.params['in-reply-to'], | |
2035 | **pycompat.strkwargs(kwargs)) |
|
2042 | **pycompat.strkwargs(kwargs)) | |
2036 |
|
2043 | |||
2037 | @parthandler('error:unsupportedcontent', ('parttype', 'params')) |
|
2044 | @parthandler('error:unsupportedcontent', ('parttype', 'params')) | |
2038 | def handleerrorunsupportedcontent(op, inpart): |
|
2045 | def handleerrorunsupportedcontent(op, inpart): | |
2039 | """Used to transmit unknown content error over the wire""" |
|
2046 | """Used to transmit unknown content error over the wire""" | |
2040 | kwargs = {} |
|
2047 | kwargs = {} | |
2041 | parttype = inpart.params.get('parttype') |
|
2048 | parttype = inpart.params.get('parttype') | |
2042 | if parttype is not None: |
|
2049 | if parttype is not None: | |
2043 | kwargs['parttype'] = parttype |
|
2050 | kwargs['parttype'] = parttype | |
2044 | params = inpart.params.get('params') |
|
2051 | params = inpart.params.get('params') | |
2045 | if params is not None: |
|
2052 | if params is not None: | |
2046 | kwargs['params'] = params.split('\0') |
|
2053 | kwargs['params'] = params.split('\0') | |
2047 |
|
2054 | |||
2048 | raise error.BundleUnknownFeatureError(**pycompat.strkwargs(kwargs)) |
|
2055 | raise error.BundleUnknownFeatureError(**pycompat.strkwargs(kwargs)) | |
2049 |
|
2056 | |||
2050 | @parthandler('error:pushraced', ('message',)) |
|
2057 | @parthandler('error:pushraced', ('message',)) | |
2051 | def handleerrorpushraced(op, inpart): |
|
2058 | def handleerrorpushraced(op, inpart): | |
2052 | """Used to transmit push race error over the wire""" |
|
2059 | """Used to transmit push race error over the wire""" | |
2053 | raise error.ResponseError(_('push failed:'), inpart.params['message']) |
|
2060 | raise error.ResponseError(_('push failed:'), inpart.params['message']) | |
2054 |
|
2061 | |||
2055 | @parthandler('listkeys', ('namespace',)) |
|
2062 | @parthandler('listkeys', ('namespace',)) | |
2056 | def handlelistkeys(op, inpart): |
|
2063 | def handlelistkeys(op, inpart): | |
2057 | """retrieve pushkey namespace content stored in a bundle2""" |
|
2064 | """retrieve pushkey namespace content stored in a bundle2""" | |
2058 | namespace = inpart.params['namespace'] |
|
2065 | namespace = inpart.params['namespace'] | |
2059 | r = pushkey.decodekeys(inpart.read()) |
|
2066 | r = pushkey.decodekeys(inpart.read()) | |
2060 | op.records.add('listkeys', (namespace, r)) |
|
2067 | op.records.add('listkeys', (namespace, r)) | |
2061 |
|
2068 | |||
@parthandler('pushkey', ('namespace', 'key', 'old', 'new'))
def handlepushkey(op, inpart):
    """process a pushkey request"""
    dec = pushkey.decode
    namespace = dec(inpart.params['namespace'])
    key = dec(inpart.params['key'])
    old = dec(inpart.params['old'])
    new = dec(inpart.params['new'])
    # Grab the transaction to ensure that we have the lock before performing
    # the pushkey.
    if op.ui.configbool('experimental', 'bundle2lazylocking'):
        op.gettransaction()
    ret = op.repo.pushkey(namespace, key, old, new)
    record = {'namespace': namespace,
              'key': key,
              'old': old,
              'new': new}
    op.records.add('pushkey', record)
    if op.reply is not None:
        rpart = op.reply.newpart('reply:pushkey')
        rpart.addparam(
            'in-reply-to', pycompat.bytestr(inpart.id), mandatory=False)
        rpart.addparam('return', '%i' % ret, mandatory=False)
    if inpart.mandatory and not ret:
        kwargs = {}
        for key in ('namespace', 'key', 'new', 'old', 'ret'):
            if key in inpart.params:
                kwargs[key] = inpart.params[key]
        raise error.PushkeyFailed(partid='%d' % inpart.id,
                                  **pycompat.strkwargs(kwargs))

@parthandler('bookmarks')
def handlebookmark(op, inpart):
    """transmit bookmark information

    The part contains binary encoded bookmark information.

    The exact behavior of this part can be controlled by the 'bookmarks' mode
    on the bundle operation.

    When mode is 'apply' (the default) the bookmark information is applied as
    is to the unbundling repository. Make sure a 'check:bookmarks' part is
    issued earlier to check for push races in such an update. This behavior
    is suitable for pushing.

    When mode is 'records', the information is recorded into the 'bookmarks'
    records of the bundle operation. This behavior is suitable for pulling.
    """
    changes = bookmarks.binarydecode(inpart)

    pushkeycompat = op.repo.ui.configbool('server', 'bookmarks-pushkey-compat')
    bookmarksmode = op.modes.get('bookmarks', 'apply')

    if bookmarksmode == 'apply':
        tr = op.gettransaction()
        bookstore = op.repo._bookmarks
        if pushkeycompat:
            allhooks = []
            for book, node in changes:
                hookargs = tr.hookargs.copy()
                hookargs['pushkeycompat'] = '1'
                hookargs['namespace'] = 'bookmarks'
                hookargs['key'] = book
                hookargs['old'] = nodemod.hex(bookstore.get(book, ''))
                hookargs['new'] = nodemod.hex(node if node is not None else '')
                allhooks.append(hookargs)

            for hookargs in allhooks:
                op.repo.hook('prepushkey', throw=True,
                             **pycompat.strkwargs(hookargs))

        bookstore.applychanges(op.repo, op.gettransaction(), changes)

        if pushkeycompat:
            def runhook():
                for hookargs in allhooks:
                    op.repo.hook('pushkey', **pycompat.strkwargs(hookargs))
            op.repo._afterlock(runhook)

    elif bookmarksmode == 'records':
        for book, node in changes:
            record = {'bookmark': book, 'node': node}
            op.records.add('bookmarks', record)
    else:
        raise error.ProgrammingError('unknown bookmark mode: %s' % bookmarksmode)

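# Illustration only (not part of the original module): when
# 'server.bookmarks-pushkey-compat' is enabled, each bookmark change is
# replayed through the legacy prepushkey/pushkey hooks with arguments
# shaped like the dictionary below. All values are hypothetical; an empty
# 'old' means the bookmark did not previously exist.
_examplebookmarkhookargs = {
    'pushkeycompat': '1',
    'namespace': 'bookmarks',
    'key': 'feature-branch',                          # the bookmark name
    'old': '',                                        # bookmark is being created
    'new': 'cafebabecafebabecafebabecafebabecafebabe',  # dummy 40-char hex node
}
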
@parthandler('phase-heads')
def handlephases(op, inpart):
    """apply phases from bundle part to repo"""
    headsbyphase = phases.binarydecode(inpart)
    phases.updatephases(op.repo.unfiltered(), op.gettransaction, headsbyphase)

@parthandler('reply:pushkey', ('return', 'in-reply-to'))
def handlepushkeyreply(op, inpart):
    """retrieve the result of a pushkey request"""
    ret = int(inpart.params['return'])
    partid = int(inpart.params['in-reply-to'])
    op.records.add('pushkey', {'return': ret}, partid)

@parthandler('obsmarkers')
def handleobsmarker(op, inpart):
    """add a stream of obsmarkers to the repo"""
    tr = op.gettransaction()
    markerdata = inpart.read()
    if op.ui.config('experimental', 'obsmarkers-exchange-debug'):
        op.ui.write(('obsmarker-exchange: %i bytes received\n')
                    % len(markerdata))
    # The mergemarkers call will crash if marker creation is not enabled.
    # We want to avoid this if the part is advisory.
    if not inpart.mandatory and op.repo.obsstore.readonly:
        op.repo.ui.debug('ignoring obsolescence markers, feature not enabled\n')
        return
    new = op.repo.obsstore.mergemarkers(tr, markerdata)
    op.repo.invalidatevolatilesets()
    if new:
        op.repo.ui.status(_('%i new obsolescence markers\n') % new)
    op.records.add('obsmarkers', {'new': new})
    if op.reply is not None:
        rpart = op.reply.newpart('reply:obsmarkers')
        rpart.addparam(
            'in-reply-to', pycompat.bytestr(inpart.id), mandatory=False)
        rpart.addparam('new', '%i' % new, mandatory=False)


@parthandler('reply:obsmarkers', ('new', 'in-reply-to'))
def handleobsmarkerreply(op, inpart):
    """retrieve the result of an obsmarkers request"""
    ret = int(inpart.params['new'])
    partid = int(inpart.params['in-reply-to'])
    op.records.add('obsmarkers', {'new': ret}, partid)

@parthandler('hgtagsfnodes')
def handlehgtagsfnodes(op, inpart):
    """Applies .hgtags fnodes cache entries to the local repo.

    Payload is pairs of 20 byte changeset nodes and filenodes.
    """
    # Grab the transaction so we ensure that we have the lock at this point.
    if op.ui.configbool('experimental', 'bundle2lazylocking'):
        op.gettransaction()
    cache = tags.hgtagsfnodescache(op.repo.unfiltered())

    count = 0
    while True:
        node = inpart.read(20)
        fnode = inpart.read(20)
        if len(node) < 20 or len(fnode) < 20:
            op.ui.debug('ignoring incomplete received .hgtags fnodes data\n')
            break
        cache.setfnode(node, fnode)
        count += 1

    cache.write()
    op.ui.debug('applied %i hgtags fnodes cache entries\n' % count)

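# Illustration only (not part of the original module): a minimal sketch of
# how an 'hgtagsfnodes' payload is laid out for the handler above. Entries
# are simply concatenated (changeset node, filenode) pairs, 20 bytes each,
# with no framing between them. 'buildfnodespayload' is a hypothetical name.
def buildfnodespayload(pairs):
    """pairs is an iterable of (node, fnode) 20-byte binary strings"""
    chunks = []
    for node, fnode in pairs:
        assert len(node) == 20 and len(fnode) == 20
        chunks.append(node + fnode)
    return ''.join(chunks)
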
rbcstruct = struct.Struct('>III')

@parthandler('cache:rev-branch-cache')
def handlerbc(op, inpart):
    """receive a rev-branch-cache payload and update the local cache

    The payload is a series of data related to each branch

    1) branch name length
    2) number of open heads
    3) number of closed heads
    4) open heads nodes
    5) closed heads nodes
    """
    total = 0
    rawheader = inpart.read(rbcstruct.size)
    cache = op.repo.revbranchcache()
    cl = op.repo.unfiltered().changelog
    while rawheader:
        header = rbcstruct.unpack(rawheader)
        total += header[1] + header[2]
        utf8branch = inpart.read(header[0])
        branch = encoding.tolocal(utf8branch)
        for x in pycompat.xrange(header[1]):
            node = inpart.read(20)
            rev = cl.rev(node)
            cache.setdata(branch, rev, node, False)
        for x in pycompat.xrange(header[2]):
            node = inpart.read(20)
            rev = cl.rev(node)
            cache.setdata(branch, rev, node, True)
        rawheader = inpart.read(rbcstruct.size)
    cache.write()

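# Illustration only (not part of the original module): a hypothetical
# helper that packs one entry of the rev-branch-cache payload consumed by
# handlerbc() above. Fields 1-3 travel in a single big-endian '>III'
# header, followed by the UTF-8 branch name and the raw 20-byte nodes.
def packrbcentry(utf8branch, opennodes, closednodes):
    header = rbcstruct.pack(len(utf8branch), len(opennodes), len(closednodes))
    return header + utf8branch + ''.join(opennodes) + ''.join(closednodes)
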
@parthandler('pushvars')
def bundle2getvars(op, part):
    '''unbundle a bundle2 containing shellvars on the server'''
    # An option to disable unbundling on server-side for security reasons
    if op.ui.configbool('push', 'pushvars.server'):
        hookargs = {}
        for key, value in part.advisoryparams:
            key = key.upper()
            # We want pushed variables to have USERVAR_ prepended so we know
            # they came from the --pushvar flag.
            key = "USERVAR_" + key
            hookargs[key] = value
        op.addhookargs(hookargs)

@parthandler('stream2', ('requirements', 'filecount', 'bytecount'))
def handlestreamv2bundle(op, part):
    """apply a version 2 stream clone payload to an empty repository"""
    requirements = urlreq.unquote(part.params['requirements']).split(',')
    filecount = int(part.params['filecount'])
    bytecount = int(part.params['bytecount'])

    repo = op.repo
    if len(repo):
        msg = _('cannot apply stream clone to non-empty repository')
        raise error.Abort(msg)

    repo.ui.debug('applying stream bundle\n')
    streamclone.applybundlev2(repo, part, filecount, bytecount,
                              requirements)

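# Illustration only (not part of the original module): the 'stream2' part
# parameters are plain strings; 'requirements' is URL-quoted and
# comma-delimited, as decoded by the handler above. Example values are
# hypothetical.
_examplestream2params = {'requirements': 'generaldelta%2Crevlogv1',
                         'filecount': '2',
                         'bytecount': '12345'}
# urlreq.unquote(_examplestream2params['requirements']).split(',')
# -> ['generaldelta', 'revlogv1']
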
def widen_bundle(repo, oldmatcher, newmatcher, common, known, cgversion,
                 ellipses):
    """generates bundle2 for widening a narrow clone

    repo is the localrepository instance
    oldmatcher matches what the client already has
    newmatcher matches what the client needs (including what it already has)
    common is the set of common heads between server and client
    known is a set of revs known on the client side (used in ellipses)
    cgversion is the changegroup version to send
    ellipses is a boolean value telling whether to send ellipses data or not

    returns a bundle2 of the data required for extending
    """
    bundler = bundle20(repo.ui)
    commonnodes = set()
    cl = repo.changelog
    for r in repo.revs("::%ln", common):
        commonnodes.add(cl.node(r))
    if commonnodes:
        # XXX: we should only send the filelogs (and treemanifest). user
        # already has the changelog and manifest
        packer = changegroup.getbundler(cgversion, repo,
                                        oldmatcher=oldmatcher,
                                        matcher=newmatcher,
                                        fullnodes=commonnodes)
        cgdata = packer.generate(set([nodemod.nullid]), list(commonnodes),
                                 False, 'narrow_widen', changelog=False)

        part = bundler.newpart('changegroup', data=cgdata)
        part.addparam('version', cgversion)
        if 'treemanifest' in repo.requirements:
            part.addparam('treemanifest', '1')

    return bundler
@@ -1,660 +1,663 @@
# streamclone.py - producing and consuming streaming repository data
#
# Copyright 2015 Gregory Szorc <gregory.szorc@gmail.com>
#
# This software may be used and distributed according to the terms of the
# GNU General Public License version 2 or any later version.

from __future__ import absolute_import

import contextlib
import os
import struct

from .i18n import _
from . import (
    branchmap,
    cacheutil,
    error,
    narrowspec,
    phases,
    pycompat,
    repository,
    store,
    util,
)

def canperformstreamclone(pullop, bundle2=False):
    """Whether it is possible to perform a streaming clone as part of pull.

    ``bundle2`` will cause the function to consider stream clone through
    bundle2 and only through bundle2.

    Returns a tuple of (supported, requirements). ``supported`` is True if
    streaming clone is supported and False otherwise. ``requirements`` is
    a set of repo requirements from the remote, or ``None`` if stream clone
    isn't supported.
    """
    repo = pullop.repo
    remote = pullop.remote

    bundle2supported = False
    if pullop.canusebundle2:
        if 'v2' in pullop.remotebundle2caps.get('stream', []):
            bundle2supported = True
        # else
        # Server doesn't support bundle2 stream clone or doesn't support
        # the versions we support. Fall back and possibly allow legacy.

    # Ensures legacy code path uses available bundle2.
    if bundle2supported and not bundle2:
        return False, None
    # Ensures bundle2 doesn't try to do a stream clone if it isn't supported.
    elif bundle2 and not bundle2supported:
        return False, None

    # Streaming clone only works on empty repositories.
    if len(repo):
        return False, None

    # Streaming clone only works if all data is being requested.
    if pullop.heads:
        return False, None

    streamrequested = pullop.streamclonerequested

    # If we don't have a preference, let the server decide for us. This
    # likely only comes into play in LANs.
    if streamrequested is None:
        # The server can advertise whether to prefer streaming clone.
        streamrequested = remote.capable('stream-preferred')

    if not streamrequested:
        return False, None

    # In order for stream clone to work, the client has to support all the
    # requirements advertised by the server.
    #
    # The server advertises its requirements via the "stream" and "streamreqs"
    # capability. "stream" (a value-less capability) is advertised if and only
    # if the only requirement is "revlogv1." Else, the "streamreqs" capability
    # is advertised and contains a comma-delimited list of requirements.
    requirements = set()
    if remote.capable('stream'):
        requirements.add('revlogv1')
    else:
        streamreqs = remote.capable('streamreqs')
        # This is weird and shouldn't happen with modern servers.
        if not streamreqs:
            pullop.repo.ui.warn(_(
                'warning: stream clone requested but server has them '
                'disabled\n'))
            return False, None

        streamreqs = set(streamreqs.split(','))
        # Server requires something we don't support. Bail.
        missingreqs = streamreqs - repo.supportedformats
        if missingreqs:
            pullop.repo.ui.warn(_(
                'warning: stream clone requested but client is missing '
                'requirements: %s\n') % ', '.join(sorted(missingreqs)))
            pullop.repo.ui.warn(
                _('(see https://www.mercurial-scm.org/wiki/MissingRequirement '
                  'for more information)\n'))
            return False, None
        requirements = streamreqs

    return True, requirements

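# Illustration only (not part of the original module): the capability
# negotiation above reduces to set arithmetic over requirement names.
# 'negotiatestreamreqs' and its arguments are hypothetical stand-ins for
# the remote capability values and repo.supportedformats.
def negotiatestreamreqs(hasstreamcap, streamreqscap, supportedformats):
    if hasstreamcap:                     # value-less 'stream' capability
        reqs = set(['revlogv1'])
    elif streamreqscap:
        reqs = set(streamreqscap.split(','))
    else:
        return False, None
    if reqs - supportedformats:          # server needs something we lack
        return False, None
    return True, reqs
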
def maybeperformlegacystreamclone(pullop):
    """Possibly perform a legacy stream clone operation.

    Legacy stream clones are performed as part of pull but before all other
    operations.

    A legacy stream clone will not be performed if a bundle2 stream clone is
    supported.
    """
    from . import localrepo

    supported, requirements = canperformstreamclone(pullop)

    if not supported:
        return

    repo = pullop.repo
    remote = pullop.remote

    # Save remote branchmap. We will use it later to speed up branchcache
    # creation.
    rbranchmap = None
    if remote.capable('branchmap'):
        with remote.commandexecutor() as e:
            rbranchmap = e.callcommand('branchmap', {}).result()

    repo.ui.status(_('streaming all changes\n'))

    with remote.commandexecutor() as e:
        fp = e.callcommand('stream_out', {}).result()

    # TODO strictly speaking, this code should all be inside the context
    # manager because the context manager is supposed to ensure all wire state
    # is flushed when exiting. But the legacy peers don't do this, so it
    # doesn't matter.
    l = fp.readline()
    try:
        resp = int(l)
    except ValueError:
        raise error.ResponseError(
            _('unexpected response from remote server:'), l)
    if resp == 1:
        raise error.Abort(_('operation forbidden by server'))
    elif resp == 2:
        raise error.Abort(_('locking the remote repository failed'))
    elif resp != 0:
        raise error.Abort(_('the server sent an unknown error code'))

    l = fp.readline()
    try:
        filecount, bytecount = map(int, l.split(' ', 1))
    except (ValueError, TypeError):
        raise error.ResponseError(
            _('unexpected response from remote server:'), l)

    with repo.lock():
        consumev1(repo, fp, filecount, bytecount)

        # new requirements = old non-format requirements +
        #                    new format-related remote requirements
        # requirements from the streamed-in repository
        repo.requirements = requirements | (
            repo.requirements - repo.supportedformats)
        repo.svfs.options = localrepo.resolvestorevfsoptions(
            repo.ui, repo.requirements, repo.features)
        repo._writerequirements()

        if rbranchmap:
            branchmap.replacecache(repo, rbranchmap)

        repo.invalidate()

def allowservergeneration(repo):
    """Whether streaming clones are allowed from the server."""
    if repository.REPO_FEATURE_STREAM_CLONE not in repo.features:
        return False

    if not repo.ui.configbool('server', 'uncompressed', untrusted=True):
        return False

    # The way stream clone works makes it impossible to hide secret changesets.
    # So don't allow this by default.
    secret = phases.hassecret(repo)
    if secret:
        return repo.ui.configbool('server', 'uncompressedallowsecret')

    return True

# This is its own function so extensions can override it.
def _walkstreamfiles(repo, matcher=None):
    return repo.store.walk(matcher)

def generatev1(repo):
    """Emit content for version 1 of a streaming clone.

    This returns a 3-tuple of (file count, byte size, data iterator).

    The data iterator consists of one entry per file being transferred.
    Each file entry starts as a line with the file name and integer size
    delimited by a null byte.

    The raw file data follows. Following the raw file data is the next file
    entry, or EOF.

    When used on the wire protocol, an additional line indicating protocol
    success will be prepended to the stream. This function is not responsible
    for adding it.

    This function will obtain a repository lock to ensure a consistent view of
    the store is captured. It therefore may raise LockError.
    """
    entries = []
    total_bytes = 0
    # Get consistent snapshot of repo, lock during scan.
    with repo.lock():
        repo.ui.debug('scanning\n')
        for name, ename, size in _walkstreamfiles(repo):
            if size:
                entries.append((name, size))
                total_bytes += size

    repo.ui.debug('%d files, %d bytes to transfer\n' %
                  (len(entries), total_bytes))

    svfs = repo.svfs
    debugflag = repo.ui.debugflag

    def emitrevlogdata():
        for name, size in entries:
            if debugflag:
                repo.ui.debug('sending %s (%d bytes)\n' % (name, size))
            # partially encode name over the wire for backwards compat
            yield '%s\0%d\n' % (store.encodedir(name), size)
            # auditing at this stage is both pointless (paths are already
            # trusted by the local repo) and expensive
            with svfs(name, 'rb', auditpath=False) as fp:
                if size <= 65536:
                    yield fp.read(size)
                else:
                    for chunk in util.filechunkiter(fp, limit=size):
                        yield chunk

    return len(entries), total_bytes, emitrevlogdata()

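# Illustration only (not part of the original module): a minimal consumer
# for one generatev1() entry as framed above -- a '<name>\0<size>\n' header
# line followed by exactly <size> raw bytes. 'readv1entry' is hypothetical.
def readv1entry(fp):
    header = fp.readline()
    name, size = header.rstrip('\n').split('\0', 1)
    return name, fp.read(int(size))
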
def generatev1wireproto(repo):
    """Emit content for version 1 of streaming clone suitable for the wire.

    This is the data output from ``generatev1()`` with 2 header lines. The
    first line indicates overall success. The 2nd contains the file count and
    byte size of payload.

    The success line contains "0" for success, "1" for stream generation not
    allowed, and "2" for error locking the repository (possibly indicating
    a permissions error for the server process).
    """
    if not allowservergeneration(repo):
        yield '1\n'
        return

    try:
        filecount, bytecount, it = generatev1(repo)
    except error.LockError:
        yield '2\n'
        return

    # Indicates successful response.
    yield '0\n'
    yield '%d %d\n' % (filecount, bytecount)
    for chunk in it:
        yield chunk

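# Illustration only (not part of the original module): a client-side sketch
# of the two header lines emitted by generatev1wireproto() -- a status line
# ('0' ok, '1' generation not allowed, '2' lock error) followed by
# '<filecount> <bytecount>\n'. 'readv1wireheader' is a hypothetical name;
# the real client-side handling lives in maybeperformlegacystreamclone().
def readv1wireheader(fp):
    status = int(fp.readline())
    if status != 0:
        raise error.Abort(_('stream clone failed with code %d') % status)
    filecount, bytecount = map(int, fp.readline().split(' ', 1))
    return filecount, bytecount
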
def generatebundlev1(repo, compression='UN'):
    """Emit content for version 1 of a stream clone bundle.

    The first 4 bytes of the output ("HGS1") denote this as stream clone
    bundle version 1.

    The next 2 bytes indicate the compression type. Only "UN" is currently
    supported.

    The next 16 bytes are two 64-bit big endian unsigned integers indicating
    file count and byte count, respectively.

    The next 2 bytes is a 16-bit big endian unsigned short declaring the length
    of the requirements string, including a trailing \0. The following N bytes
    are the requirements string, which is ASCII containing a comma-delimited
    list of repo requirements that are needed to support the data.

    The remaining content is the output of ``generatev1()`` (which may be
    compressed in the future).

    Returns a tuple of (requirements, data generator).
    """
    if compression != 'UN':
        raise ValueError('we do not support the compression argument yet')

    requirements = repo.requirements & repo.supportedformats
    requires = ','.join(sorted(requirements))

    def gen():
        yield 'HGS1'
        yield compression

        filecount, bytecount, it = generatev1(repo)
        repo.ui.status(_('writing %d bytes for %d files\n') %
                       (bytecount, filecount))

        yield struct.pack('>QQ', filecount, bytecount)
        yield struct.pack('>H', len(requires) + 1)
        yield requires + '\0'

        # This is where we'll add compression in the future.
        assert compression == 'UN'

        progress = repo.ui.makeprogress(_('bundle'), total=bytecount,
                                        unit=_('bytes'))
        progress.update(0)

        for chunk in it:
            progress.increment(step=len(chunk))
            yield chunk

        progress.complete()

    return requirements, gen()

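# Illustration only (not part of the original module): a hypothetical
# writer for the fixed-size bundle header documented in generatebundlev1()
# above. It mirrors readbundle1header() below, which parses the same layout.
def writebundlev1header(fh, filecount, bytecount, requirements):
    requires = ','.join(sorted(requirements))
    fh.write('HGS1')                                 # magic
    fh.write('UN')                                   # compression type
    fh.write(struct.pack('>QQ', filecount, bytecount))
    fh.write(struct.pack('>H', len(requires) + 1))   # length includes '\0'
    fh.write(requires + '\0')
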
def consumev1(repo, fp, filecount, bytecount):
    """Apply the contents from version 1 of a streaming clone file handle.

    This takes the output from "stream_out" and applies it to the specified
    repository.

    Like "stream_out," the status line added by the wire protocol is not
    handled by this function.
    """
    with repo.lock():
        repo.ui.status(_('%d files to transfer, %s of data\n') %
                       (filecount, util.bytecount(bytecount)))
        progress = repo.ui.makeprogress(_('clone'), total=bytecount,
                                        unit=_('bytes'))
        progress.update(0)
        start = util.timer()

        # TODO: get rid of (potential) inconsistency
        #
        # If transaction is started and any @filecache property is
        # changed at this point, it causes inconsistency between
        # in-memory cached property and streamclone-ed file on the
        # disk. Nested transaction prevents transaction scope "clone"
        # below from writing in-memory changes out at the end of it,
        # even though in-memory changes are discarded at the end of it
        # regardless of transaction nesting.
        #
        # But transaction nesting can't be simply prohibited, because
        # nesting occurs also in ordinary case (e.g. enabling
        # clonebundles).

        with repo.transaction('clone'):
            with repo.svfs.backgroundclosing(repo.ui, expectedcount=filecount):
                for i in pycompat.xrange(filecount):
                    # XXX doesn't support '\n' or '\r' in filenames
                    l = fp.readline()
                    try:
                        name, size = l.split('\0', 1)
                        size = int(size)
                    except (ValueError, TypeError):
                        raise error.ResponseError(
                            _('unexpected response from remote server:'), l)
                    if repo.ui.debugflag:
                        repo.ui.debug('adding %s (%s)\n' %
                                      (name, util.bytecount(size)))
                    # for backwards compat, name was partially encoded
                    path = store.decodedir(name)
                    with repo.svfs(path, 'w', backgroundclose=True) as ofp:
                        for chunk in util.filechunkiter(fp, limit=size):
                            progress.increment(step=len(chunk))
                            ofp.write(chunk)

            # force @filecache properties to be reloaded from
            # streamclone-ed file at next access
            repo.invalidate(clearfilecache=True)

        elapsed = util.timer() - start
        if elapsed <= 0:
            elapsed = 0.001
        progress.complete()
        repo.ui.status(_('transferred %s in %.1f seconds (%s/sec)\n') %
                       (util.bytecount(bytecount), elapsed,
                        util.bytecount(bytecount / elapsed)))

def readbundle1header(fp):
    compression = fp.read(2)
    if compression != 'UN':
        raise error.Abort(_('only uncompressed stream clone bundles are '
                            'supported; got %s') % compression)

    filecount, bytecount = struct.unpack('>QQ', fp.read(16))
    requireslen = struct.unpack('>H', fp.read(2))[0]
    requires = fp.read(requireslen)

    if not requires.endswith('\0'):
        raise error.Abort(_('malformed stream clone bundle: '
                            'requirements not properly encoded'))

    requirements = set(requires.rstrip('\0').split(','))

    return filecount, bytecount, requirements

def applybundlev1(repo, fp):
    """Apply the content from a stream clone bundle version 1.

    We assume the 4 byte header has been read and validated and the file handle
    is at the 2 byte compression identifier.
    """
    if len(repo):
        raise error.Abort(_('cannot apply stream clone bundle on non-empty '
                            'repo'))

    filecount, bytecount, requirements = readbundle1header(fp)
    missingreqs = requirements - repo.supportedformats
    if missingreqs:
        raise error.Abort(_('unable to apply stream clone: '
                            'unsupported format: %s') %
                          ', '.join(sorted(missingreqs)))

    consumev1(repo, fp, filecount, bytecount)

class streamcloneapplier(object):
    """Class to manage applying streaming clone bundles.

    We need to wrap ``applybundlev1()`` in a dedicated type to enable bundle
    readers to perform bundle type-specific functionality.
    """
    def __init__(self, fh):
        self._fh = fh

    def apply(self, repo):
        return applybundlev1(repo, self._fh)

# type of file to stream
_fileappend = 0 # append only file
_filefull = 1 # full snapshot file

# Source of the file
_srcstore = 's' # store (svfs)
_srccache = 'c' # cache (cache)

# This is its own function so extensions can override it.
def _walkstreamfullstorefiles(repo):
    """list snapshot files from the store"""
    fnames = []
    if not repo.publishing():
        fnames.append('phaseroots')
    return fnames

def _filterfull(entry, copy, vfsmap):
    """actually copy the snapshot files"""
    src, name, ftype, data = entry
    if ftype != _filefull:
        return entry
    return (src, name, ftype, copy(vfsmap[src].join(name)))

@contextlib.contextmanager
def maketempcopies():
    """return a function to temporarily copy files"""
    files = []
    try:
        def copy(src):
            fd, dst = pycompat.mkstemp()
            os.close(fd)
            files.append(dst)
            util.copyfiles(src, dst, hardlink=True)
            return dst
        yield copy
    finally:
        for tmp in files:
            util.tryunlink(tmp)

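The same pattern can be sketched with only the standard library (an
illustrative stand-alone equivalent, not the code above: shutil.copy2 stands
in for util.copyfiles, and a guarded unlink for util.tryunlink):

    import contextlib
    import os
    import shutil
    import tempfile

    @contextlib.contextmanager
    def tempcopies():
        """Yield a copy() helper; delete every copy on exit, even on error."""
        files = []
        try:
            def copy(src):
                fd, dst = tempfile.mkstemp()
                os.close(fd)
                files.append(dst)
                shutil.copy2(src, dst)  # util.copyfiles would try a hardlink first
                return dst
            yield copy
        finally:
            for tmp in files:
                try:
                    os.unlink(tmp)
                except OSError:
                    pass
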
def _makemap(repo):
    """make a (src -> vfs) map for the repo"""
    vfsmap = {
        _srcstore: repo.svfs,
        _srccache: repo.cachevfs,
    }
    # we keep repo.vfs out of the map on purpose: there are too many dangers
    # there (eg: .hg/hgrc)
    assert repo.vfs not in vfsmap.values()

    return vfsmap

def _emit2(repo, entries, totalfilesize):
    """actually emit the stream bundle"""
    vfsmap = _makemap(repo)
    progress = repo.ui.makeprogress(_('bundle'), total=totalfilesize,
                                    unit=_('bytes'))
    progress.update(0)
    with maketempcopies() as copy, progress:
        # copying is delayed until we are inside the try block
        entries = [_filterfull(e, copy, vfsmap) for e in entries]
        yield None  # this releases the lock on the repository
        seen = 0

        for src, name, ftype, data in entries:
            vfs = vfsmap[src]
            yield src
            yield util.uvarintencode(len(name))
            if ftype == _fileappend:
                fp = vfs(name)
                size = data
            elif ftype == _filefull:
                fp = open(data, 'rb')
                size = util.fstat(fp).st_size
            try:
                yield util.uvarintencode(size)
                yield name
                if size <= 65536:
                    chunks = (fp.read(size),)
                else:
                    chunks = util.filechunkiter(fp, limit=size)
                for chunk in chunks:
                    seen += len(chunk)
                    progress.update(seen)
                    yield chunk
            finally:
                fp.close()

-def generatev2(repo, includes, excludes):
+def generatev2(repo, includes, excludes, includeobsmarkers):
    """Emit content for version 2 of a streaming clone.

    The data stream consists of the following entries:
    1) A char representing the file destination (eg: store or cache)
    2) A varint containing the length of the filename
    3) A varint containing the length of file data
    4) N bytes containing the filename (the internal, store-agnostic form)
    5) N bytes containing the file data

    Returns a 3-tuple of (file count, file size, data iterator).
    """

    # temporarily raise error until we add storage level logic
    if includes or excludes:
        raise error.Abort(_("server does not support narrow stream clones"))

    with repo.lock():

        entries = []
        totalfilesize = 0

        matcher = None
        if includes or excludes:
            matcher = narrowspec.match(repo.root, includes, excludes)

        repo.ui.debug('scanning\n')
        for name, ename, size in _walkstreamfiles(repo, matcher):
            if size:
                entries.append((_srcstore, name, _fileappend, size))
                totalfilesize += size
        for name in _walkstreamfullstorefiles(repo):
            if repo.svfs.exists(name):
                totalfilesize += repo.svfs.lstat(name).st_size
                entries.append((_srcstore, name, _filefull, None))
+        if includeobsmarkers and repo.svfs.exists('obsstore'):
+            totalfilesize += repo.svfs.lstat('obsstore').st_size
+            entries.append((_srcstore, 'obsstore', _filefull, None))
        for name in cacheutil.cachetocopy(repo):
            if repo.cachevfs.exists(name):
                totalfilesize += repo.cachevfs.lstat(name).st_size
                entries.append((_srccache, name, _filefull, None))

        chunks = _emit2(repo, entries, totalfilesize)
        first = next(chunks)
        assert first is None

        return len(entries), totalfilesize, chunks

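Reading the five fields back is mechanical. A stdlib-only sketch of decoding
one entry, assuming util.uvarintencode emits the standard little-endian
base-128 (LEB128 / Protocol Buffers-style) varint, with seven data bits per
byte and the high bit set on continuation bytes:

    def read_uvarint(fp):
        """Decode one unsigned LEB128 varint from a binary stream."""
        result = shift = 0
        while True:
            byte = fp.read(1)[0]
            result |= (byte & 0x7f) << shift
            if not byte & 0x80:
                return result
            shift += 7

    def read_entry(fp):
        """Read one stream-v2 entry, in the order listed in the docstring."""
        src = fp.read(1)            # 1) destination char, e.g. b's' or b'c'
        namelen = read_uvarint(fp)  # 2) filename length
        datalen = read_uvarint(fp)  # 3) data length
        name = fp.read(namelen)     # 4) filename
        data = fp.read(datalen)     # 5) file data
        return src, name, data
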
@contextlib.contextmanager
def nested(*ctxs):
    this = ctxs[0]
    rest = ctxs[1:]
    with this:
        if rest:
            with nested(*rest):
                yield
        else:
            yield

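nested() exists because the number of vfs contexts is only known at runtime;
it enters contexts left to right and exits them in reverse, like a literal
``with a, b:`` statement. A small illustration using the nested() helper
defined above:

    import contextlib

    @contextlib.contextmanager
    def tag(name):
        print('enter', name)
        try:
            yield
        finally:
            print('exit', name)

    with nested(tag('a'), tag('b')):
        pass
    # prints: enter a / enter b / exit b / exit a
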
def consumev2(repo, fp, filecount, filesize):
    """Apply the contents from a version 2 streaming clone.

    Data is read from an object that only needs to provide a ``read(size)``
    method.
    """
    with repo.lock():
        repo.ui.status(_('%d files to transfer, %s of data\n') %
                       (filecount, util.bytecount(filesize)))

        start = util.timer()
        progress = repo.ui.makeprogress(_('clone'), total=filesize,
                                        unit=_('bytes'))
        progress.update(0)

        vfsmap = _makemap(repo)

        with repo.transaction('clone'):
            ctxs = (vfs.backgroundclosing(repo.ui)
                    for vfs in vfsmap.values())
            with nested(*ctxs):
                for i in range(filecount):
                    src = util.readexactly(fp, 1)
                    vfs = vfsmap[src]
                    namelen = util.uvarintdecodestream(fp)
                    datalen = util.uvarintdecodestream(fp)

                    name = util.readexactly(fp, namelen)

                    if repo.ui.debugflag:
                        repo.ui.debug('adding [%s] %s (%s)\n' %
                                      (src, name, util.bytecount(datalen)))

                    with vfs(name, 'w') as ofp:
                        for chunk in util.filechunkiter(fp, limit=datalen):
                            progress.increment(step=len(chunk))
                            ofp.write(chunk)

        # force @filecache properties to be reloaded from
        # streamclone-ed file at next access
        repo.invalidate(clearfilecache=True)

        elapsed = util.timer() - start
        if elapsed <= 0:
            elapsed = 0.001
        repo.ui.status(_('transferred %s in %.1f seconds (%s/sec)\n') %
                       (util.bytecount(progress.pos), elapsed,
                        util.bytecount(progress.pos / elapsed)))
        progress.complete()

def applybundlev2(repo, fp, filecount, filesize, requirements):
    from . import localrepo

    missingreqs = [r for r in requirements if r not in repo.supported]
    if missingreqs:
        raise error.Abort(_('unable to apply stream clone: '
                            'unsupported format: %s') %
                          ', '.join(sorted(missingreqs)))

    consumev2(repo, fp, filecount, filesize)

    # new requirements = old non-format requirements +
    #                    new format-related remote requirements
    # requirements from the streamed-in repository
    repo.requirements = set(requirements) | (
        repo.requirements - repo.supportedformats)
    repo.svfs.options = localrepo.resolvestorevfsoptions(
        repo.ui, repo.requirements, repo.features)
    repo._writerequirements()
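
The requirements update keeps the local repository's non-format requirements
and adopts the streamed-in repository's format requirements. A worked example
of the set arithmetic, with purely hypothetical requirement names:

    supportedformats = {'fmt-a', 'fmt-b'}  # format requirements
    local = {'fmt-a', 'feature-x'}         # local requirements before the clone
    streamed = {'fmt-b'}                   # requirements sent by the server

    new = set(streamed) | (local - supportedformats)
    assert new == {'fmt-b', 'feature-x'}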
@@ -1,516 +1,561 @@
#require serve no-reposimplestore no-chg

#testcases stream-legacy stream-bundle2

#if stream-legacy
  $ cat << EOF >> $HGRCPATH
  > [server]
  > bundle2.stream = no
  > EOF
#endif

Initialize repository
the status call is to check for issue5130

  $ hg init server
  $ cd server
  $ touch foo
  $ hg -q commit -A -m initial
  >>> for i in range(1024):
  ...     with open(str(i), 'wb') as fh:
  ...         fh.write(b"%d" % i) and None
  $ hg -q commit -A -m 'add a lot of files'
  $ hg st
  $ hg --config server.uncompressed=false serve -p $HGPORT -d --pid-file=hg.pid
  $ cat hg.pid > $DAEMON_PIDS
  $ cd ..

Cannot stream clone when server.uncompressed is set

  $ get-with-headers.py $LOCALIP:$HGPORT '?cmd=stream_out'
  200 Script output follows

  1

#if stream-legacy
  $ hg debugcapabilities http://localhost:$HGPORT
  Main capabilities:
    batch
    branchmap
    $USUAL_BUNDLE2_CAPS_SERVER$
    changegroupsubset
    compression=$BUNDLE2_COMPRESSIONS$
    getbundle
    httpheader=1024
    httpmediatype=0.1rx,0.1tx,0.2tx
    known
    lookup
    pushkey
    unbundle=HG10GZ,HG10BZ,HG10UN
    unbundlehash
  Bundle2 capabilities:
    HG20
    bookmarks
    changegroup
      01
      02
    digests
      md5
      sha1
      sha512
    error
      abort
      unsupportedcontent
      pushraced
      pushkey
    hgtagsfnodes
    listkeys
    phases
      heads
    pushkey
    remote-changegroup
      http
      https
    rev-branch-cache

  $ hg clone --stream -U http://localhost:$HGPORT server-disabled
  warning: stream clone requested but server has them disabled
  requesting all changes
  adding changesets
  adding manifests
  adding file changes
  added 2 changesets with 1025 changes to 1025 files
  new changesets 96ee1d7354c4:c17445101a72

  $ get-with-headers.py $LOCALIP:$HGPORT '?cmd=getbundle' content-type --bodyfile body --hgproto 0.2 --requestheader "x-hgarg-1=bundlecaps=HG20%2Cbundle2%3DHG20%250Abookmarks%250Achangegroup%253D01%252C02%250Adigests%253Dmd5%252Csha1%252Csha512%250Aerror%253Dabort%252Cunsupportedcontent%252Cpushraced%252Cpushkey%250Ahgtagsfnodes%250Alistkeys%250Aphases%253Dheads%250Apushkey%250Aremote-changegroup%253Dhttp%252Chttps&cg=0&common=0000000000000000000000000000000000000000&heads=c17445101a72edac06facd130d14808dfbd5c7c2&stream=1"
  200 Script output follows
  content-type: application/mercurial-0.2


  $ f --size body --hexdump --bytes 100
  body: size=232
  0000: 04 6e 6f 6e 65 48 47 32 30 00 00 00 00 00 00 00 |.noneHG20.......|
  0010: cf 0b 45 52 52 4f 52 3a 41 42 4f 52 54 00 00 00 |..ERROR:ABORT...|
  0020: 00 01 01 07 3c 04 72 6d 65 73 73 61 67 65 73 74 |....<.rmessagest|
  0030: 72 65 61 6d 20 64 61 74 61 20 72 65 71 75 65 73 |ream data reques|
  0040: 74 65 64 20 62 75 74 20 73 65 72 76 65 72 20 64 |ted but server d|
  0050: 6f 65 73 20 6e 6f 74 20 61 6c 6c 6f 77 20 74 68 |oes not allow th|
  0060: 69 73 20 66                                     |is f|

#endif
#if stream-bundle2
  $ hg debugcapabilities http://localhost:$HGPORT
  Main capabilities:
    batch
    branchmap
    $USUAL_BUNDLE2_CAPS_SERVER$
    changegroupsubset
    compression=$BUNDLE2_COMPRESSIONS$
    getbundle
    httpheader=1024
    httpmediatype=0.1rx,0.1tx,0.2tx
    known
    lookup
    pushkey
    unbundle=HG10GZ,HG10BZ,HG10UN
    unbundlehash
  Bundle2 capabilities:
    HG20
    bookmarks
    changegroup
      01
      02
    digests
      md5
      sha1
      sha512
    error
      abort
      unsupportedcontent
      pushraced
      pushkey
    hgtagsfnodes
    listkeys
    phases
      heads
    pushkey
    remote-changegroup
      http
      https
    rev-branch-cache

  $ hg clone --stream -U http://localhost:$HGPORT server-disabled
  warning: stream clone requested but server has them disabled
  requesting all changes
  adding changesets
  adding manifests
  adding file changes
  added 2 changesets with 1025 changes to 1025 files
  new changesets 96ee1d7354c4:c17445101a72

  $ get-with-headers.py $LOCALIP:$HGPORT '?cmd=getbundle' content-type --bodyfile body --hgproto 0.2 --requestheader "x-hgarg-1=bundlecaps=HG20%2Cbundle2%3DHG20%250Abookmarks%250Achangegroup%253D01%252C02%250Adigests%253Dmd5%252Csha1%252Csha512%250Aerror%253Dabort%252Cunsupportedcontent%252Cpushraced%252Cpushkey%250Ahgtagsfnodes%250Alistkeys%250Aphases%253Dheads%250Apushkey%250Aremote-changegroup%253Dhttp%252Chttps&cg=0&common=0000000000000000000000000000000000000000&heads=c17445101a72edac06facd130d14808dfbd5c7c2&stream=1"
  200 Script output follows
  content-type: application/mercurial-0.2


  $ f --size body --hexdump --bytes 100
  body: size=232
  0000: 04 6e 6f 6e 65 48 47 32 30 00 00 00 00 00 00 00 |.noneHG20.......|
  0010: cf 0b 45 52 52 4f 52 3a 41 42 4f 52 54 00 00 00 |..ERROR:ABORT...|
  0020: 00 01 01 07 3c 04 72 6d 65 73 73 61 67 65 73 74 |....<.rmessagest|
  0030: 72 65 61 6d 20 64 61 74 61 20 72 65 71 75 65 73 |ream data reques|
  0040: 74 65 64 20 62 75 74 20 73 65 72 76 65 72 20 64 |ted but server d|
  0050: 6f 65 73 20 6e 6f 74 20 61 6c 6c 6f 77 20 74 68 |oes not allow th|
  0060: 69 73 20 66                                     |is f|

#endif

  $ killdaemons.py
  $ cd server
  $ hg serve -p $HGPORT -d --pid-file=hg.pid
  $ cat hg.pid > $DAEMON_PIDS
  $ cd ..

Basic clone

#if stream-legacy
  $ hg clone --stream -U http://localhost:$HGPORT clone1
  streaming all changes
  1027 files to transfer, 96.3 KB of data
  transferred 96.3 KB in * seconds (*/sec) (glob)
  searching for changes
  no changes found
#endif
#if stream-bundle2
  $ hg clone --stream -U http://localhost:$HGPORT clone1
  streaming all changes
  1030 files to transfer, 96.4 KB of data
  transferred 96.4 KB in * seconds (* */sec) (glob)

  $ ls -1 clone1/.hg/cache
  branch2-served
  rbc-names-v1
  rbc-revs-v1
#endif

getbundle requests with stream=1 are uncompressed

  $ get-with-headers.py $LOCALIP:$HGPORT '?cmd=getbundle' content-type --bodyfile body --hgproto '0.1 0.2 comp=zlib,none' --requestheader "x-hgarg-1=bundlecaps=HG20%2Cbundle2%3DHG20%250Abookmarks%250Achangegroup%253D01%252C02%250Adigests%253Dmd5%252Csha1%252Csha512%250Aerror%253Dabort%252Cunsupportedcontent%252Cpushraced%252Cpushkey%250Ahgtagsfnodes%250Alistkeys%250Aphases%253Dheads%250Apushkey%250Aremote-changegroup%253Dhttp%252Chttps&cg=0&common=0000000000000000000000000000000000000000&heads=c17445101a72edac06facd130d14808dfbd5c7c2&stream=1"
  200 Script output follows
  content-type: application/mercurial-0.2


  $ f --size --hex --bytes 256 body
  body: size=112230
  0000: 04 6e 6f 6e 65 48 47 32 30 00 00 00 00 00 00 00 |.noneHG20.......|
  0010: 70 07 53 54 52 45 41 4d 32 00 00 00 00 03 00 09 |p.STREAM2.......|
  0020: 05 09 04 0c 35 62 79 74 65 63 6f 75 6e 74 39 38 |....5bytecount98|
  0030: 37 35 38 66 69 6c 65 63 6f 75 6e 74 31 30 33 30 |758filecount1030|
  0040: 72 65 71 75 69 72 65 6d 65 6e 74 73 64 6f 74 65 |requirementsdote|
  0050: 6e 63 6f 64 65 25 32 43 66 6e 63 61 63 68 65 25 |ncode%2Cfncache%|
  0060: 32 43 67 65 6e 65 72 61 6c 64 65 6c 74 61 25 32 |2Cgeneraldelta%2|
  0070: 43 72 65 76 6c 6f 67 76 31 25 32 43 73 74 6f 72 |Crevlogv1%2Cstor|
  0080: 65 00 00 80 00 73 08 42 64 61 74 61 2f 30 2e 69 |e....s.Bdata/0.i|
  0090: 00 03 00 01 00 00 00 00 00 00 00 02 00 00 00 01 |................|
  00a0: 00 00 00 00 00 00 00 01 ff ff ff ff ff ff ff ff |................|
  00b0: 80 29 63 a0 49 d3 23 87 bf ce fe 56 67 92 67 2c |.)c.I.#....Vg.g,|
  00c0: 69 d1 ec 39 00 00 00 00 00 00 00 00 00 00 00 00 |i..9............|
  00d0: 75 30 73 08 42 64 61 74 61 2f 31 2e 69 00 03 00 |u0s.Bdata/1.i...|
  00e0: 01 00 00 00 00 00 00 00 02 00 00 00 01 00 00 00 |................|
  00f0: 00 00 00 00 01 ff ff ff ff ff ff ff ff f9 76 da |..............v.|

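The STREAM2 part header visible in the hexdump above carries three mandatory
parameters: bytecount (98758), filecount (1030), and a urlquoted requirements
list. Unquoting that list is a one-liner; a Python 3 sketch for illustration:

    import urllib.parse

    reqs = 'dotencode%2Cfncache%2Cgeneraldelta%2Crevlogv1%2Cstore'
    print(urllib.parse.unquote(reqs).split(','))
    # ['dotencode', 'fncache', 'generaldelta', 'revlogv1', 'store']
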
--uncompressed is an alias to --stream

#if stream-legacy
  $ hg clone --uncompressed -U http://localhost:$HGPORT clone1-uncompressed
  streaming all changes
  1027 files to transfer, 96.3 KB of data
  transferred 96.3 KB in * seconds (*/sec) (glob)
  searching for changes
  no changes found
#endif
#if stream-bundle2
  $ hg clone --uncompressed -U http://localhost:$HGPORT clone1-uncompressed
  streaming all changes
  1030 files to transfer, 96.4 KB of data
  transferred 96.4 KB in * seconds (* */sec) (glob)
#endif

Clone with background file closing enabled

#if stream-legacy
  $ hg --debug --config worker.backgroundclose=true --config worker.backgroundcloseminfilecount=1 clone --stream -U http://localhost:$HGPORT clone-background | grep -v adding
  using http://localhost:$HGPORT/
  sending capabilities command
  sending branchmap command
  streaming all changes
  sending stream_out command
  1027 files to transfer, 96.3 KB of data
  starting 4 threads for background file closing
  updating the branch cache
  transferred 96.3 KB in * seconds (*/sec) (glob)
  query 1; heads
  sending batch command
  searching for changes
  all remote heads known locally
  no changes found
  sending getbundle command
  bundle2-input-bundle: with-transaction
  bundle2-input-part: "listkeys" (params: 1 mandatory) supported
  bundle2-input-part: "phase-heads" supported
  bundle2-input-part: total payload size 24
  bundle2-input-bundle: 1 parts total
  checking for updated bookmarks
  (sent 5 HTTP requests and * bytes; received * bytes in responses) (glob)
#endif
#if stream-bundle2
  $ hg --debug --config worker.backgroundclose=true --config worker.backgroundcloseminfilecount=1 clone --stream -U http://localhost:$HGPORT clone-background | grep -v adding
  using http://localhost:$HGPORT/
  sending capabilities command
  query 1; heads
  sending batch command
  streaming all changes
  sending getbundle command
  bundle2-input-bundle: with-transaction
  bundle2-input-part: "stream2" (params: 3 mandatory) supported
  applying stream bundle
  1030 files to transfer, 96.4 KB of data
  starting 4 threads for background file closing
  starting 4 threads for background file closing
  updating the branch cache
  transferred 96.4 KB in * seconds (* */sec) (glob)
  bundle2-input-part: total payload size 112077
  bundle2-input-part: "listkeys" (params: 1 mandatory) supported
  bundle2-input-bundle: 1 parts total
  checking for updated bookmarks
  (sent 3 HTTP requests and * bytes; received * bytes in responses) (glob)
#endif

Cannot stream clone when there are secret changesets

  $ hg -R server phase --force --secret -r tip
  $ hg clone --stream -U http://localhost:$HGPORT secret-denied
  warning: stream clone requested but server has them disabled
  requesting all changes
  adding changesets
  adding manifests
  adding file changes
  added 1 changesets with 1 changes to 1 files
  new changesets 96ee1d7354c4

  $ killdaemons.py

Streaming of secrets can be overridden by server config

  $ cd server
  $ hg serve --config server.uncompressedallowsecret=true -p $HGPORT -d --pid-file=hg.pid
  $ cat hg.pid > $DAEMON_PIDS
  $ cd ..

#if stream-legacy
  $ hg clone --stream -U http://localhost:$HGPORT secret-allowed
  streaming all changes
  1027 files to transfer, 96.3 KB of data
  transferred 96.3 KB in * seconds (*/sec) (glob)
  searching for changes
  no changes found
#endif
#if stream-bundle2
  $ hg clone --stream -U http://localhost:$HGPORT secret-allowed
  streaming all changes
  1030 files to transfer, 96.4 KB of data
  transferred 96.4 KB in * seconds (* */sec) (glob)
#endif

  $ killdaemons.py

Verify interaction between preferuncompressed and secret presence

  $ cd server
  $ hg serve --config server.preferuncompressed=true -p $HGPORT -d --pid-file=hg.pid
  $ cat hg.pid > $DAEMON_PIDS
  $ cd ..

  $ hg clone -U http://localhost:$HGPORT preferuncompressed-secret
  requesting all changes
  adding changesets
  adding manifests
  adding file changes
  added 1 changesets with 1 changes to 1 files
  new changesets 96ee1d7354c4

  $ killdaemons.py

Clone not allowed when full bundles disabled and can't serve secrets

  $ cd server
  $ hg serve --config server.disablefullbundle=true -p $HGPORT -d --pid-file=hg.pid
  $ cat hg.pid > $DAEMON_PIDS
  $ cd ..

  $ hg clone --stream http://localhost:$HGPORT secret-full-disabled
  warning: stream clone requested but server has them disabled
  requesting all changes
  remote: abort: server has pull-based clones disabled
  abort: pull failed on remote
  (remove --pull if specified or upgrade Mercurial)
  [255]

Local stream clone with secrets involved
(This is just a test over behavior: if you have access to the repo's files,
there is no security so it isn't important to prevent a clone here.)

  $ hg clone -U --stream server local-secret
  warning: stream clone requested but server has them disabled
  requesting all changes
  adding changesets
  adding manifests
  adding file changes
  added 1 changesets with 1 changes to 1 files
  new changesets 96ee1d7354c4

Stream clone while repo is changing:

  $ mkdir changing
  $ cd changing

extension for delaying the server process so we reliably can modify the repo
while cloning

  $ cat > delayer.py <<EOF
  > import time
  > from mercurial import extensions, vfs
  > def __call__(orig, self, path, *args, **kwargs):
  >     if path == 'data/f1.i':
  >         time.sleep(2)
  >     return orig(self, path, *args, **kwargs)
  > extensions.wrapfunction(vfs.vfs, '__call__', __call__)
  > EOF

prepare repo with small and big file to cover both code paths in emitrevlogdata

  $ hg init repo
  $ touch repo/f1
  $ $TESTDIR/seq.py 50000 > repo/f2
  $ hg -R repo ci -Aqm "0"
  $ hg serve -R repo -p $HGPORT1 -d --pid-file=hg.pid --config extensions.delayer=delayer.py
  $ cat hg.pid >> $DAEMON_PIDS

clone while modifying the repo between stating file with write lock and
actually serving file content

  $ hg clone -q --stream -U http://localhost:$HGPORT1 clone &
  $ sleep 1
  $ echo >> repo/f1
  $ echo >> repo/f2
  $ hg -R repo ci -m "1"
  $ wait
  $ hg -R clone id
  000000000000
  $ cd ..

Stream repository with bookmarks
--------------------------------

(revert introduction of secret changeset)

  $ hg -R server phase --draft 'secret()'

add a bookmark

  $ hg -R server bookmark -r tip some-bookmark

clone it

#if stream-legacy
  $ hg clone --stream http://localhost:$HGPORT with-bookmarks
  streaming all changes
  1027 files to transfer, 96.3 KB of data
  transferred 96.3 KB in * seconds (*) (glob)
  searching for changes
  no changes found
  updating to branch default
  1025 files updated, 0 files merged, 0 files removed, 0 files unresolved
#endif
#if stream-bundle2
  $ hg clone --stream http://localhost:$HGPORT with-bookmarks
  streaming all changes
  1033 files to transfer, 96.6 KB of data
  transferred 96.6 KB in * seconds (* */sec) (glob)
  updating to branch default
  1025 files updated, 0 files merged, 0 files removed, 0 files unresolved
#endif
  $ hg -R with-bookmarks bookmarks
     some-bookmark             1:c17445101a72

Stream repository with phases
-----------------------------

Clone as publishing

  $ hg -R server phase -r 'all()'
  0: draft
  1: draft

#if stream-legacy
  $ hg clone --stream http://localhost:$HGPORT phase-publish
  streaming all changes
  1027 files to transfer, 96.3 KB of data
  transferred 96.3 KB in * seconds (*) (glob)
  searching for changes
  no changes found
  updating to branch default
  1025 files updated, 0 files merged, 0 files removed, 0 files unresolved
#endif
#if stream-bundle2
  $ hg clone --stream http://localhost:$HGPORT phase-publish
  streaming all changes
  1033 files to transfer, 96.6 KB of data
  transferred 96.6 KB in * seconds (* */sec) (glob)
  updating to branch default
  1025 files updated, 0 files merged, 0 files removed, 0 files unresolved
#endif
  $ hg -R phase-publish phase -r 'all()'
  0: public
  1: public

Clone as non publishing

  $ cat << EOF >> server/.hg/hgrc
  > [phases]
  > publish = False
  > EOF
  $ killdaemons.py
  $ hg -R server serve -p $HGPORT -d --pid-file=hg.pid
  $ cat hg.pid > $DAEMON_PIDS

#if stream-legacy

With v1 of the stream protocol, changesets are always cloned as public. This
makes stream v1 unsuitable for non-publishing repositories.

  $ hg clone --stream http://localhost:$HGPORT phase-no-publish
  streaming all changes
  1027 files to transfer, 96.3 KB of data
  transferred 96.3 KB in * seconds (*) (glob)
  searching for changes
  no changes found
  updating to branch default
  1025 files updated, 0 files merged, 0 files removed, 0 files unresolved
  $ hg -R phase-no-publish phase -r 'all()'
  0: public
  1: public
#endif
#if stream-bundle2
  $ hg clone --stream http://localhost:$HGPORT phase-no-publish
  streaming all changes
  1034 files to transfer, 96.7 KB of data
  transferred 96.7 KB in * seconds (* */sec) (glob)
  updating to branch default
  1025 files updated, 0 files merged, 0 files removed, 0 files unresolved
  $ hg -R phase-no-publish phase -r 'all()'
  0: draft
  1: draft
#endif

  $ killdaemons.py
+
+#if stream-legacy
+
+With v1 of the stream protocol, changesets are always cloned as public. There
+is no obsolescence marker exchange in stream v1.
+
+#endif
+#if stream-bundle2
+
+Stream repository with obsolescence
+-----------------------------------
+
+Clone non-publishing with obsolescence
+
+  $ cat >> $HGRCPATH << EOF
+  > [experimental]
+  > evolution=all
+  > EOF
+
+  $ cd server
+  $ echo foo > foo
+  $ hg -q commit -m 'about to be pruned'
+  $ hg debugobsolete `hg log -r . -T '{node}'` -d '0 0' -u test --record-parents
+  obsoleted 1 changesets
+  $ hg up null -q
+  $ hg log -T '{rev}: {phase}\n'
+  1: draft
+  0: draft
+  $ hg serve -p $HGPORT -d --pid-file=hg.pid
+  $ cat hg.pid > $DAEMON_PIDS
+  $ cd ..
+
+  $ hg clone -U --stream http://localhost:$HGPORT with-obsolescence
+  streaming all changes
+  1035 files to transfer, 97.1 KB of data
+  transferred 97.1 KB in * seconds (* */sec) (glob)
+  $ hg -R with-obsolescence log -T '{rev}: {phase}\n'
+  1: draft
+  0: draft
+  $ hg debugobsolete -R with-obsolescence
+  50382b884f66690b7045cac93a540cba4d4c906f 0 {c17445101a72edac06facd130d14808dfbd5c7c2} (Thu Jan 01 00:00:00 1970 +0000) {'user': 'test'}
+
+  $ killdaemons.py
+
+#endif